Vanguard Group Inc. Has $2.30 Billion Stake in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

DNB Asset Management AS grew its holdings in MongoDB, Inc. (NASDAQ:MDB) by 26.2% in the 4th quarter, according to its most recent disclosure with the Securities & Exchange Commission. The institutional investor owned 19,503 shares of the company’s stock after acquiring an additional 4,050 shares during the period. DNB Asset Management AS’s holdings in MongoDB were worth $7,974,000 as of its most recent SEC filing.

A number of other large investors have also recently bought and sold shares of MDB. Wesbanco Bank Inc. bought a new position in MongoDB during the third quarter worth $202,000. Raleigh Capital Management Inc. increased its holdings in shares of MongoDB by 156.1% during the third quarter. Raleigh Capital Management Inc. now owns 146 shares of the company’s stock worth $50,000 after buying an additional 89 shares in the last quarter. Alamar Capital Management LLC increased its holdings in MongoDB by 9.4% during the 3rd quarter. Alamar Capital Management LLC now owns 9,903 shares of the company’s stock valued at $3,425,000 after purchasing an additional 847 shares in the last quarter. Polar Capital Holdings Plc raised its position in MongoDB by 18.5% in the 3rd quarter. Polar Capital Holdings Plc now owns 248,259 shares of the company’s stock valued at $85,863,000 after purchasing an additional 38,747 shares during the last quarter. Finally, Jacobs Levy Equity Management Inc. acquired a new position in MongoDB in the third quarter worth $2,453,000. 88.89% of the stock is currently owned by institutional investors.

Insiders Place Their Bets

In related news, CAO Thomas Bull sold 359 shares of MongoDB stock in a transaction on Tuesday, January 2nd. The stock was sold at an average price of $404.38, for a total transaction of $145,172.42. Following the sale, the chief accounting officer now directly owns 16,313 shares in the company, valued at $6,596,650.94. The sale was disclosed in a filing with the Securities & Exchange Commission, which is accessible through this hyperlink. Also, CRO Cedric Pech sold 1,248 shares of the business’s stock in a transaction that occurred on Tuesday, January 16th. The stock was sold at an average price of $400.00, for a total transaction of $499,200.00. Following the completion of the sale, the executive now owns 25,425 shares of the company’s stock, valued at $10,170,000. The disclosure for this sale can be found here. Over the last quarter, insiders have sold 54,607 shares of company stock worth $23,116,062. Corporate insiders own 4.80% of the company’s stock.

Analyst Ratings Changes

Several research analysts have issued reports on MDB shares. KeyCorp upped their price objective on MongoDB from $500.00 to $543.00 and gave the stock an “overweight” rating in a report on Wednesday, February 14th. Royal Bank of Canada lifted their price objective on MongoDB from $445.00 to $475.00 and gave the company an “outperform” rating in a report on Wednesday, December 6th. Guggenheim upped their target price on shares of MongoDB from $250.00 to $272.00 and gave the stock a “sell” rating in a research note on Monday, March 4th. TheStreet raised shares of MongoDB from a “d+” rating to a “c-” rating in a research report on Friday, December 1st. Finally, UBS Group reaffirmed a “neutral” rating and set a $410.00 price target (down from $475.00) on shares of MongoDB in a research report on Thursday, January 4th. One investment analyst has rated the stock with a sell rating, three have issued a hold rating and nineteen have issued a buy rating to the stock. Based on data from MarketBeat, MongoDB presently has an average rating of “Moderate Buy” and an average target price of $456.19.

Read Our Latest Analysis on MDB

MongoDB Trading Down 3.7%

Shares of MDB stock traded down $13.50 during trading hours on Friday, hitting $355.44. 1,823,545 shares of the company were exchanged, compared to its average volume of 1,490,097. MongoDB, Inc. has a fifty-two week low of $198.72 and a fifty-two week high of $509.62. The company has a debt-to-equity ratio of 1.18, a quick ratio of 4.74 and a current ratio of 4.74. The firm has a market cap of $25.65 billion, a P/E ratio of -143.32 and a beta of 1.24. The company has a 50 day simple moving average of $421.71 and a two-hundred day simple moving average of $392.02.

About MongoDB

(Free Report)

MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)


Article originally posted on mongodb google news. Visit mongodb google news



Graham Capital Management L.P. Sells 441 Shares of MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Graham Capital Management L.P. lowered its position in shares of MongoDB, Inc. (NASDAQ:MDB) by 23.2% in the 3rd quarter, HoldingsChannel reports. The firm owned 1,458 shares of the company’s stock after selling 441 shares during the quarter. Graham Capital Management L.P.’s holdings in MongoDB were worth $504,000 at the end of the most recent reporting period.

Other hedge funds and other institutional investors have also recently modified their holdings of the company. Vanguard Group Inc. lifted its position in MongoDB by 2.1% in the 1st quarter. Vanguard Group Inc. now owns 5,970,224 shares of the company’s stock worth $2,648,332,000 after buying an additional 121,201 shares during the last quarter. Jennison Associates LLC lifted its holdings in shares of MongoDB by 87.8% in the 3rd quarter. Jennison Associates LLC now owns 3,733,964 shares of the company’s stock worth $1,291,429,000 after acquiring an additional 1,745,231 shares during the last quarter. State Street Corp boosted its stake in shares of MongoDB by 1.8% in the 1st quarter. State Street Corp now owns 1,386,773 shares of the company’s stock valued at $323,280,000 after purchasing an additional 24,595 shares in the last quarter. 1832 Asset Management L.P. grew its holdings in shares of MongoDB by 3,283,771.0% during the 4th quarter. 1832 Asset Management L.P. now owns 1,018,000 shares of the company’s stock valued at $200,383,000 after purchasing an additional 1,017,969 shares during the last quarter. Finally, Geode Capital Management LLC raised its position in MongoDB by 3.5% in the 2nd quarter. Geode Capital Management LLC now owns 1,000,665 shares of the company’s stock worth $410,567,000 after purchasing an additional 33,376 shares during the period. 88.89% of the stock is currently owned by hedge funds and other institutional investors.

MongoDB Stock Performance

MongoDB stock opened at $355.44 on Friday. The stock’s 50 day moving average is $421.71 and its 200 day moving average is $391.90. The company has a market capitalization of $25.65 billion, a P/E ratio of -143.32 and a beta of 1.24. MongoDB, Inc. has a twelve month low of $198.72 and a twelve month high of $509.62. The company has a quick ratio of 4.74, a current ratio of 4.74 and a debt-to-equity ratio of 1.18.

Wall Street Analysts Weigh In

A number of brokerages have recently issued reports on MDB. Truist Financial raised their price objective on MongoDB from $440.00 to $500.00 and gave the company a “buy” rating in a research note on Tuesday, February 20th. TheStreet raised MongoDB from a “d+” rating to a “c-” rating in a research note on Friday, December 1st. Citigroup raised their target price on shares of MongoDB from $515.00 to $550.00 and gave the stock a “buy” rating in a report on Wednesday, March 6th. Stifel Nicolaus reissued a “buy” rating and set a $450.00 price target on shares of MongoDB in a research report on Monday, December 4th. Finally, KeyCorp boosted their price objective on shares of MongoDB from $500.00 to $543.00 and gave the company an “overweight” rating in a research report on Wednesday, February 14th. One research analyst has rated the stock with a sell rating, three have assigned a hold rating and nineteen have assigned a buy rating to the company’s stock. According to data from MarketBeat, the company currently has an average rating of “Moderate Buy” and an average target price of $456.19.

Read Our Latest Research Report on MDB

Insider Transactions at MongoDB

In other news, CFO Michael Lawrence Gordon sold 10,000 shares of MongoDB stock in a transaction dated Thursday, February 8th. The stock was sold at an average price of $469.84, for a total value of $4,698,400.00. Following the completion of the transaction, the chief financial officer now owns 70,985 shares of the company’s stock, valued at approximately $33,351,592.40. The transaction was disclosed in a legal filing with the SEC, which is available through the SEC website. Also, Director Dwight A. Merriman sold 2,000 shares of the business’s stock in a transaction on Thursday, February 8th. The stock was sold at an average price of $465.37, for a total value of $930,740.00. Following the completion of the transaction, the director now owns 1,166,784 shares in the company, valued at $542,986,270.08. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which can be accessed through the SEC website. Over the last three months, insiders have sold 54,607 shares of company stock valued at $23,116,062. Insiders own 4.80% of the company’s stock.

About MongoDB

(Free Report)

MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Articles

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



Erlang-Runtime Statically-Typed Functional Language Gleam Reaches 1.0

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Gleam, an actor-based, highly concurrent functional language running on the Erlang virtual machine (BEAM), has reached version 1.0, which means it is now ready to be used in production systems, with a guarantee of backward compatibility based on semantic versioning.

Gleam aims to be a language with a small surface area, easy to read and understand, and expressive.

Gleam runs on the Erlang virtual machine, a mature and battle-tested platform that powers many of the world’s most reliable and scalable systems, such as WhatsApp. Gleam can also run on JavaScript runtimes, making it possible to run Gleam code in the browser, on mobile devices, or anywhere else.

Gleam follows in the line of strongly, statically typed languages like Elm, OCaml, and Rust, providing robust static analysis and compile-time guarantees. It also adopts immutable data structures implemented with structural sharing, as in Clojure, to ensure efficient operation. Concurrent access to mutable state is achieved through actors or Erlang’s in-memory key-value database, ETS.

According to the language core team, Gleam’s concurrency system can run millions of tasks concurrently and scale easily, thanks to immutable data and a garbage collector that never stops the world.

Gleam programs can use packages created for the BEAM independently of the language used to write them. Additionally, it is possible to mix Erlang and Elixir code in a Gleam program. This is possible thanks to the Gleam build tool being able to compile Elixir dependencies as well as Elixir source files, and to a dedicated syntax for importing external functions so they can be called from Gleam code. Gleam also supports seamless integration with JavaScript code when compiling for the JavaScript runtime:

pub fn register_event_handler() {
  let el = document.query_selector("a")
  element.add_event_listener(el, fn() {
    io.println("Clicked!")
  })
}

Gleam can also use Erlang hot code reloading, but without any guarantees beyond those provided by Erlang itself. Specifically, upgraded code cannot be type-checked, since it is not possible to know which types are used in the running code.

The arena of languages for the Erlang virtual machine sports several Gleam competitors, including Alpaca, Caramel, and Elixir. Both Alpaca and Caramel are statically-typed languages that differ from Gleam in a number of ways. In particular, Caramel is based on OCaml and even forks the OCaml compiler to generate code, while Gleam and Alpaca are original languages. Gleam is the only one of the set that also targets JavaScript. On the other hand, Elixir is, without doubt, the most mature and popular alternative to Erlang on the BEAM, providing a Ruby-like syntax and a dynamic type system.

About the Author



Microsoft Introduces Static Web Apps’ New Feature: Distributed Functions for Enhanced Performance

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Microsoft has announced a new Azure Static Web Apps feature called distributed functions. It automatically distributes managed functions to high-demand regions of Static Web Apps.

Azure Static Web Apps is a platform that allows developers to deploy their static site to a globally distributed host and add backend functionality with integrated managed functions. However, managing the network latency of these managed functions can be challenging, especially when serving audiences far from the functions’ region. To address this challenge, Azure Static Web Apps now offers distributed functions.

Distributed functions is a feature that automatically distributes Static Web Apps’ managed functions to regions of high demand based on user traffic load. When developers create an Azure Static Web App, they choose a home region for their functions, where the functions are initially deployed. If a Static Web Apps host then receives significant traffic to the managed functions from a different region, it deploys a copy of the managed functions to that region and routes traffic to this new managed-function backend.

Jan-Henrik Damaschke, an Azure MVP, tweeted about the announcement of distributed functions for Azure Static Web Apps:

This will help a lot with improving the latency in your SWA apps.

Distributed functions offer various benefits. By distributing functions across different regions, developers can reduce network latency for requests to their backend managed functions. This is especially helpful for request pre-processing tasks such as authorization, personalization, or routing, where minimizing network latency can lead to a better user experience. Additionally, for fully globally distributed web applications, combining distributed functions with a global database like Azure Cosmos DB ensures a highly performant web application.

Thomas Gauvin, a product manager of Azure Static Web Apps at Microsoft, writes:

This distribution of your backend functions can reduce the network latency for your managed functions calls by up to 70%, depending on the distance between the user and the function region. This can be especially useful in the context of request pre-processing, where network latency is critical to providing a good user experience.

Response Time Graph (Source: Tech Community blog post)

Developers can enable distributed functions in the Standard SKU of Azure Static Web Apps through the APIs blade of their Azure Static Web Apps resource, by toggling on the distributed functions option. During the preview phase, Microsoft will continue to refine the trigger conditions for distributed functions based on user feedback.

About the Author



Presentation: The Journey to a Million Ops / Sec / Node in Venice

MMS Founder
MMS Alex Dubrouski Gaojie Liu

Article originally posted on InfoQ. Visit InfoQ

Transcript

Liu: I’m Gaojie Liu. I’m a software engineer at LinkedIn. Alex and I are going to describe the journey of how we achieved 1 million operations per second per node in Venice. Today’s talk is going to focus on the high-level optimization strategy. We’re not going to show any code in the presentation. This picture, we borrowed from Wikipedia. It is a picture of a naval battle in the Mediterranean, which happened a long time ago. Contrary to popular belief, Henry Ford did not invent mass production. The city state of Venice was doing that centuries earlier: by continuously improving their battleship manufacturing process, the city of Venice was able to fight large scale naval battles, which made it a superpower in the Mediterranean for many centuries. Fast forward to today, my team and I developed a database called Venice. We are constantly improving Venice’s performance to serve large scale workloads. I will introduce Venice briefly, and then talk about the Venice architecture and its evolution in the past several years. Then I will hand it over to Alex to talk about the lower-level optimizations in Venice, and the conclusion.

The project is named after the city of Venice in Italy. The city of Venice is built on top of 118 islands, which are connected by canals. In the Venice project, we also have canals, which we call Venice data streams, and the data items carried by a Venice data stream will be persisted in Venice islands, which we call Venice stores. Here is a brief history of Venice’s development. In 2014, to fulfill the ever-increasing requirement from AI customers to keep data fresher, the Venice project got started. At the end of 2016, we [inaudible 00:02:20] the production use case. In 2018, to fulfill the GDPR requirements, we migrated over 500 stores from Voldemort to Venice. Voldemort was another open source key-value store by LinkedIn. In 2020, we open sourced Venice on GitHub. As of now, the Venice setup at LinkedIn is serving over 2000 stores in production.

As we mentioned, Venice is open source. Venice supports very high throughput: the Venice setup at LinkedIn is serving over 100 million key lookups per second, and in the meantime it is ingesting over 1.2 petabytes of data daily. Venice offers multiple low-latency clients. Venice is also highly available. By default, it uses a replication factor of 3, which can tolerate any random hardware failure; in some high read throughput clusters, it uses an even higher replication factor, such as 6, 8, or 12, to scale the read capacity. Venice is an eventually consistent system. That means it does not offer the traditional read-after-write semantics. It does offer partial updates as an alternative. Venice supports large datasets, and it uses sharding to distribute the load onto a group of nodes. As a key-value storage system, besides the single get and Batch Get APIs, Venice supports some more advanced APIs, which we will describe in a later slide.

From our view, there are two kinds of data: primary data and derived data. Primary data is usually the outcome of users’ activity. The LinkedIn profile is a good example: a LinkedIn user can edit their profile directly. Primary data is mainly served out of a SQL database or document store. Derived data is computed out of primary data. People You May Know is a notable example: a LinkedIn user can use People You May Know to expand their social graph, but a LinkedIn user cannot edit those recommendations directly. Venice, as a derived data platform, is mainly used to store machine learning training output. As I mentioned, PYMK is heavily using Venice, and LinkedIn feed and LinkedIn Ads are also using Venice to store training output, such as embeddings.

Venice Architecture

Now I’d like to describe the high-level architecture. As you can see, this is a pretty complex diagram, and we are going to break it down into five sections to describe each component in more detail. First, cluster management. Behind the scenes, Venice is using Apache Helix for cluster management. Apache Helix is an open source cluster management framework; it automates the partition assignment for partitioned, replicated, and distributed resources on a group of nodes. It also automates partition reassignment in the presence of node failure, recovery, cluster expansion, and reconfiguration. Reconfiguration typically refers to, let’s say, changing the replication factor. In the Helix framework, there are three major roles: the controller manages the lifecycle of resources; the participant handles the partition assignment requests; the spectator actively monitors the partition assignment mapping and keeps it in memory for routing purposes. In the Venice project, there is a corresponding component for each role. Next is the Venice ingestion architecture. As we mentioned, Venice guarantees eventual consistency, so all the writes will be produced to a pub-sub messaging system; inside LinkedIn we are using Kafka. The available writers are Grid and Samza: Grid is used to bulk load a new dataset, and Samza is used by real-time jobs to produce the latest updates. The Venice server is constantly polling the Kafka topic and processing the latest messages locally. The Venice server is using RocksDB, which is one of the most popular embedded key-value stores. Venice offers multiple read clients; they are suitable for different use cases and offer different latency guarantees. We will dig deeper in a later section.

For the admin operations, Venice is using a two-layer architecture. All admin operations, such as store creation, deletion, or schema evolution, will be forwarded to the parent controller first, and the parent controller will asynchronously replicate that metadata to the child controllers in each region. With this two-layer architecture, we can achieve eventual consistency; it also offers better resilience, because it can tolerate a short period of region failure, and the failed region can recover on its own automatically. Lastly, Venice supports multiple regions. As you can see in the diagram, the Venice server in each region is talking to both the local Kafka cluster and the remote Kafka cluster. It is using active-active replication to achieve eventual consistency. Even though Venice servers in different regions could consume those Kafka topics at different speeds, they are still able to achieve data convergence, since the Venice server is using a timestamp-based conflict resolution strategy.
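
To make the idea concrete, here is a minimal Java sketch of a timestamp-based, last-writer-wins merge of the kind described above; the class and method names are hypothetical, not Venice’s actual code.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative last-writer-wins merge keyed on a logical timestamp.
final class TimestampedValue {
    final byte[] value;
    final long timestamp; // logical timestamp attached by the writer

    TimestampedValue(byte[] value, long timestamp) {
        this.value = value;
        this.timestamp = timestamp;
    }
}

final class LwwStore {
    private final Map<String, TimestampedValue> store = new ConcurrentHashMap<>();

    // Apply an update only if it is not older than what we already have.
    // Replicas consuming the same updates in any order converge to the
    // same final state; a real system would also need a deterministic
    // tiebreaker for equal timestamps.
    void apply(String key, TimestampedValue incoming) {
        store.merge(key, incoming,
            (current, update) -> update.timestamp >= current.timestamp ? update : current);
    }
}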

Venice Evolution

Now it’s time to walk through the architecture evolution of the Venice write path. Firstly, we want to share the Venice data model. Venice supports a lambda architecture, which mainly supports both batch processing from Grid and real-time processing from Samza. Samza is the framework we are using inside LinkedIn. Typically, there are two ways to work with two different datasets: merge in the read path, or merge in the write path. Venice chose to merge the data in the write path for the following reasons. Write-path merge offers better read latency: with read-path merge, the application needs to talk to two different systems, so the latency will be bounded by the slower path. Also, write-path merge offers better read resilience, since the application only needs to talk to one single system. The application logic also gets greatly simplified, because the merge is handled by the platform.

Let’s do a quick recap. As we mentioned, all the data updates will be put into Kafka first, and the Venice server will consume the Kafka topics and process the messages into a local replicated database. In the initial release of Venice, each Venice server would allocate a dedicated pipeline for each hosted store. Dedicated pipeline means the Venice server allocates a dedicated consumer, which constantly polls the corresponding Kafka topic, and in the same pipeline it constantly processes the consumed messages locally. This simple and straightforward strategy allowed faster development, which helped us roll Venice out to production quickly. But there is an obvious inefficiency here: imagine a Venice server that only hosts a small number of stores; the allocated resources will not be fully utilized. To overcome this inefficiency, we later introduced the Venice drainer pool strategy. Each drainer maintains an in-memory queue. The ingestion path constantly pushes consumed messages to the drainer queue, and each drainer thread constantly pulls messages from its queue and processes them into the local RocksDB database. With this strategy, even with a small number of stores, the resources can be fully utilized. It also gives us control over the total amount of resources we allocate for data processing.

The Venice server uses a hashing-based partition assignment to guarantee an even distribution of load, to make sure each drainer thread is processing enough messages all the time. But we observed that for each hosted store, the Venice server still allocates a dedicated consumer. Each consumer carries a fixed amount of overhead, and this overhead is amplified if the total number of stores hosted by each node increases significantly. To address this problem, we introduced the shared consumer service. With the shared consumer service, the total number of consumers is fixed, and each ingestion task is assigned to a shared consumer. With this strategy, we were able to achieve better GC behavior, and in return it also offers better ingestion throughput. Recently, for some high write throughput use cases, we observed that the Kafka consumer was the actual bottleneck, because one ingestion task could only leverage one consumer. To improve that situation, we introduced the partition-wise shared consumer service. With that strategy, one ingestion task can leverage multiple consumers, and we observed a huge ingestion throughput increase for those high write throughput use cases. For the shared consumer service, the Venice server uses a least-loaded strategy to achieve an even distribution across all the shared consumers. Right now, we are running this mode in production.
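
The least-loaded assignment can be sketched in a few lines of Java; the SharedConsumer and assigner types below are illustrative assumptions, not Venice’s API.

import java.util.Comparator;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a fixed pool of shared consumers, with new
// ingestion work always assigned to the least-loaded member.
final class SharedConsumer {
    final AtomicInteger assignedPartitions = new AtomicInteger();
}

final class LeastLoadedAssigner {
    private final List<SharedConsumer> pool;

    LeastLoadedAssigner(List<SharedConsumer> pool) {
        this.pool = pool;
    }

    // Pick the consumer currently serving the fewest partitions, so load
    // stays evenly spread as ingestion tasks come and go.
    SharedConsumer assign() {
        SharedConsumer leastLoaded = pool.stream()
            .min(Comparator.comparingInt(c -> c.assignedPartitions.get()))
            .orElseThrow();
        leastLoaded.assignedPartitions.incrementAndGet();
        return leastLoaded;
    }
}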

Now it’s time to talk about the Venice read path. Let’s do a quick recap. In the following several slides, we are going to focus on the component in the gray rectangle. You can see we have three read client offerings, Thin Client, Fast Client, and Da Vinci, and they are suitable for different use cases. The first offering is the Venice Thin Client. It offers single-digit milliseconds for a single key lookup at p99. It uses a three-layer architecture. The transmission protocol is HTTP, and the wire protocol is Avro binary. The Thin Client is in charge of serialization/deserialization: it serializes the incoming request from the application and forwards the binary request to the router; in the meantime, it also deserializes the binary response into Java objects for the application to consume. The Venice router monitors the partition assignment from ZooKeeper and caches it locally for routing purposes. The Venice router uses a least-loaded routing strategy to cool down unhealthy or busy replicas; in the meantime, it also uses a long-tail retry strategy to achieve good long-tail latency in the presence of a bad node or busy node. The Venice server part is quite straightforward: it parses the incoming request, does the local RocksDB key lookup, and returns the binary response back. With this three-layer architecture, the complex components, the Venice server and the Venice router, are still under the control of the Venice platform, which makes it easy to roll out any routing optimization.

In 2020, to match the performance of Couchbase, Venice introduced the Venice Fast Client. It uses a two-layer architecture. It can achieve below 2 milliseconds for a single key lookup at p99. Essentially, the Fast Client talks to a metadata endpoint exposed by the Venice server to fetch the routing information, and it leverages routing and retry strategies similar to the Venice router’s to deliver similar resilience guarantees. By removing the Venice router from the hot path, we achieve much better latency, and it also saves a lot of hardware. Lastly, there’s another offering called Da Vinci. It is suitable for small use cases whose dataset can be fully accommodated in a single node. It offers two modes. If all your dataset can be fully kept in memory, Da Vinci typically uses the memory mode, which is super-fast: it can deliver below 10 microseconds at p99 for a single key lookup. Even with the memory mode, there’s still an on-disk snapshot for faster bootstrap purposes. If your dataset cannot be fully accommodated in memory, you can choose to use the disk mode, and there will be an LRU cache in front of the on-disk data, used to cache the index and the frequent entries. Da Vinci is not a read-through cache: it constantly polls the Kafka topic and applies the latest updates locally.

Besides the architecture evolution, we also made some innovations in the API layer. Many AI customers would like to score and rank a large number of candidates to improve the relevance of their recommendations. With the naïve Batch Get, Venice was not able to meet their latency requirements. With Venice read compute, we were able to push some of the compute operations down to the Venice server layer. One compute request will be executed in parallel because of scatter-gather, and the response size also gets reduced significantly, because the Venice server only needs to return the compute results. Currently, Venice read compute uses a declarative DSL. It supports projection, and it supports several vector arithmetic operations, such as dot product, cosine similarity, and Hadamard product.
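
For reference, the vector operations the DSL exposes boil down to simple float-array arithmetic. This plain-Java sketch shows the math only; it is illustrative, not Venice’s server-side implementation.

// Float-vector arithmetic corresponding to the read-compute operators.
final class VectorOps {
    // dot(a, b) = sum of element-wise products
    static float dotProduct(float[] a, float[] b) {
        float sum = 0f;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    // cos(a, b) = (a . b) / (|a| * |b|)
    static float cosineSimilarity(float[] a, float[] b) {
        return dotProduct(a, b)
            / (float) (Math.sqrt(dotProduct(a, a)) * Math.sqrt(dotProduct(b, b)));
    }

    // Hadamard product: element-wise multiplication
    static float[] hadamardProduct(float[] a, float[] b) {
        float[] out = new float[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = a[i] * b[i];
        }
        return out;
    }
}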

While improving the performance of these high-read AI use cases, we hit a scalability limit with the previous partition assignment strategy. In the past, when we doubled the size of the cluster to scale out and fulfill more read QPS, we observed that the read capacity stayed the same, even with double the hardware. We dug deeper, and we realized that the root cause was that the fan-out size for each request increases proportionally to the cluster size, which is a really bad behavior. To mitigate that problem, we introduced a logical-group-based partition assignment strategy. We divide the whole cluster into several equal-sized logical groups. Each logical group keeps the full replication, which means each logical group can serve any request. When we try to scale out the cluster, it becomes much simpler: we just add more logical groups, but keep the fan-out size the same as before. We have been using this strategy in production for several years, and it has proven to be a horizontally scalable solution for the large AI use cases.
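
A rough Java sketch of the idea, with hypothetical names: each logical group holds a full set of replicas, so a request fans out only within one group, and adding groups grows read capacity without growing per-request fan-out.

import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative logical-group router, not Venice's actual code.
final class LogicalGroupRouter {
    // Each inner list holds the nodes of one logical group; every group
    // stores a full replica of the dataset and can serve any request.
    private final List<List<String>> groups;

    LogicalGroupRouter(List<List<String>> groups) {
        this.groups = groups;
    }

    // Scatter-gather happens only inside the chosen group, so fan-out is
    // bounded by the group size, not the cluster size.
    List<String> pickGroupForRequest() {
        return groups.get(ThreadLocalRandom.current().nextInt(groups.size()));
    }
}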

Lastly, Venice also adopted several mature transport layer optimizations. First, we switched from the JDK SSL to OpenSSL. Because OpenSSL is a native implementation, the GC overhead got reduced quite a bit, and because of the more efficient implementation in OpenSSL, we also observed a good end-to-end latency reduction. Venice supports streaming, and with the streaming API, the application is able to process the read responses as early as possible; we were able to reduce the end-to-end latency by 15%. LinkedIn constantly does failover testing, and we always observed a connection storm issue at the start of the load test; we adopted HTTP/2 to address that problem. With all the optimizations we presented in this talk, we achieved 1 million operations per second per node in Venice, with an acceptable latency. These 1 million operations are made up of 620 key lookups and 680 key message writes. We executed this workload using the Venice read compute traffic, which is a heavy workload we have running in production. The benchmark was executed on a node with 32 CPU cores and 256 gigabytes of RAM.

Common Optimizations and How They Affect Performance

Dubrouski: My name is Alex. I’m one of the technical leads of the Server Performance Group at LinkedIn. We did a lot of different optimizations to the code and to the configuration of Venice, but I would like to share the set which might be applicable to pretty much any application you have. I would like to start with misconceptions. You probably all know this famous statement that premature optimization is the root of all evil. Who actually knows the authorship of this sentence? It was published in Donald Knuth’s book, but when he was asked about this specific quote, he said he probably heard it from Sir Tony Hoare. Sir Tony was not able to recall that; he said, I probably heard it from Edsger Dijkstra. Initially, this statement was about counting CPU cycles in your assembly application before even finishing the algorithm. Now, it’s quite often used to push back against any possible performance optimization until it’s too late. I was lucky to be tasked to help the Venice team literally in my first week after joining LinkedIn, and since then we’ve been working together for many years. We utilize an approach which I call continuous optimization. We continuously monitor our applications, and we have automated alerts. We continuously profile the applications, and we do regular assessments of the profiling data. If we find any bottlenecks, we document them and we suggest solutions. Usually, we benchmark those solutions using synthetic benchmarks, mostly in the form of JMH. We inspect the results, down to the assembly code. If we see and can explain the improvement, we test it in staging environments. Then we roll it out to production and start over from step one.

To continue the topic of misconceptions, quite often JDK version upgrades are considered a liability, a burden. Quite often, large organizations just postpone JDK version upgrades until it’s way too late. At LinkedIn we realized that a JDK upgrade is one of the cheapest ways to improve application performance. Venice was always on the leading edge of JDK version upgrades. It was one of the first applications migrated to Java 11, and it’s pretty much the first application fully migrated to Java 17. With the Java 11 upgrade, the latency improvement was in the single-digit percentage points, and the stop-the-world time improvement was in the double-digit percentage points. With the Java 17 upgrade, we experienced pretty much the same improvement. I think that all this misconception comes from the point that regular Java developers see and are aware of the changes in the JDK API, new features coming out in the new version of the JDK, but they are less frequently aware of the changes to the JVM itself, the just-in-time compiler, the garbage collection logging, and so on. To give you a few examples: thread-local handshakes, which were introduced in Java 10, or concurrent stack walking, which was implemented by Erik Österlund in Java 16 and is now used by ZGC to significantly improve the pauses.

The reality is that JDK version migrations can be easy. If you keep all of your open source libraries upgraded, especially if you have any automation around it, the migration can come down to just changing the configuration file, swapping the runtime, and it just works. Another confusion: quite often, engineers think that libraries or applications implemented in statically compiled languages are not tunable. Yes, of course, you cannot dynamically change the code; you don’t have a just-in-time compiler. Still, the default is not always the best. In one of the regular rounds of performance profiling assessment, we found that seek and get operations in our JNI storage library, RocksDB, actually used up to 30% of active CPU cycles. Further research showed that we were using the default configuration of the storage, the block-based table with block cache. Those seek operations, which try to find the block where the data is and then find the data in that block, are just plain overhead. Switching to the plain table format allowed us to reduce the CPU usage and reduce the latency by 25%; this is server-side compute latency. The interesting part is that a RocksDB developer actually called the plain table “not so plain,” because there are a lot of interesting optimizations behind the scenes in how it works. The only caveat in this case: you need to make sure that the whole dataset fits into memory. In that case, you can get quite a lot of improvement in terms of performance.
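
For readers who want to try this, here is a minimal sketch using the RocksDB Java API; the prefix length and database path are assumptions. Plain table also requires mmap reads and a prefix extractor, which the sketch sets explicitly.

import org.rocksdb.Options;
import org.rocksdb.PlainTableConfig;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class PlainTableExample {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options()) {
            options.setCreateIfMissing(true);
            // Plain table needs mmap reads and a prefix extractor.
            options.setAllowMmapReads(true);
            options.useFixedLengthPrefixExtractor(8); // assumed key-prefix length
            // Swap the default block-based format for plain table.
            options.setTableFormatConfig(new PlainTableConfig());
            try (RocksDB db = RocksDB.open(options, "/tmp/plain-table-demo")) {
                db.put("key-0001".getBytes(), "value".getBytes());
                System.out.println(new String(db.get("key-0001".getBytes())));
            }
        }
    }
}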

This optimization actually started as a suspected native memory leak. Quite often, developers think about the JVM this way: we know that the JVM allocates the heap during bootstrap using malloc, some portions of the JVM, let’s say direct byte buffers, which use native memory, are actually memory mapped, and Java can increase and sometimes decrease the size of the heap, but there are no obvious sources of memory fragmentation, right? No. I spent quite a lot of time trying to filter out possible root causes of the native memory leak, and the only possible remaining source was the RocksDB library. Our SREs spent weeks building very interesting scripts, and they actually published an amazing blog post about this: how to parse the pmap output and attribute the chunks of memory in the pmap output to specific business code in your application. At the end of this research, we found that we don’t have any native memory leaks; we just have terrible memory fragmentation, mostly related to RocksDB. Switching from the default Linux implementation of malloc, which is glibc, to BSD’s version, jemalloc, helped us significantly reduce the resident set size of the JVM process, by up to 30%.

On top of that, Venice is one of the very few applications at LinkedIn which uses explicit huge pages. The TLB is the Translation Lookaside Buffer, a special portion of the CPU which stores the results of the translation of virtual memory into physical memory, and it has limited capacity. If you experience a TLB cache miss, it significantly increases the cost of accessing the memory. Huge pages significantly improve the TLB cache hit rate, allowing you to access memory faster, and they increase the addressable space cached in the TLB. In high throughput applications, this is a very important point. What’s interesting, we also noticed that after migrating to Java 11, we started seeing a lot of different applications having memory fragmentation issues. I discussed this with the JVM developers, with Erik Österlund in particular, and he provided quite an interesting explanation. He said that one of the major sources of memory fragmentation in Java 11 applications is actually the JIT compiler thread pool. It used to be static in Java 8, but in recent versions of the JVM it’s dynamic, and it does have recycling. If you have a lot of JIT compilation happening in your application, the thread stacks for the JIT compiler threads are allocated in native memory using malloc, and this causes quite a lot of memory fragmentation. Switching from glibc to jemalloc might help quite a lot.

In terms of the code optimizations, I think the biggest part is fastavro. Venice by default uses the Avro serialization format. It’s a pretty popular format; Kafka uses Avro by default. Especially in the early versions of Avro, serialization and deserialization were not very optimized, because the code is generic and has to work in all possible cases. As you might expect, generic solutions are not always the best. Originally, the fastavro library was developed by a company called RTBHouse. They open sourced it, we forked it, and at some point we completely took over the development of this library. This library allows us to generate, at runtime, very custom serializers and deserializers specific to each schema. Deserialization speed, for example, can improve by 90%; yes, it can be 10 times faster compared to vanilla Avro. One very interesting optimization started with a complaint from one of the client teams. They came to us saying, we have quite a significant regression in latency accessing Venice. When we checked the continuous profiling data, we found this. This is a portion of a Flame Graph. A Flame Graph is a way to visualize performance profiling data; if you want to know a little bit more, you can visit Brendan Gregg’s website, he’s the author. The highlighted element is an I2C/C2I adapter. This is a very interesting thing: the JIT compiler inserts those adapters when it needs to bridge interpreted and compiled code. What it actually means is that on the very hot path, we had code executing in interpreted mode. That was quite a surprise.

When we started investigating: the JIT compilation unit is actually the method, and the JIT compiler has a hard threshold on the size of the bytecode of a method, which is 8000 bytes. If the bytecode is more than 8000 bytes, even if it’s on a very hot path, the JIT compiler will never compile it; it will continue running in interpreted mode. We had to optimize the code generators to make sure that a generated method never goes above this threshold. Of course, you might say that this might introduce quite a lot of regression. No, we did very thorough benchmarking in production to make sure that this change doesn’t actually affect performance; we were not able to detect any regressions. There is way more: primitive collections. This is one of those topics where premature optimization could actually be the root of all evil. There is no unison among performance experts on this topic. There are two points. First, you need to prove that object collections are actually the bottleneck of your performance: that your application spends a lot of CPU cycles there and the allocation of the object collections is a memory allocation hotspot. Second, if you’re trying to swap object collections for primitive collections, you need to make sure that you do it across the entire business logic, so that you don’t have those transformations, object to primitive, primitive to object, and so on. We did extensive benchmarking, and we found that, yes indeed, in our case, it is a bottleneck. In most of the cases, we use the [inaudible 00:31:16] framework. For some of the APIs, like read compute, we had to develop very custom, specific primitive collections, which allow us to do partial deserialization.

In most of the cases, those read compute APIs are designed for AI workloads. In AI workloads, the data we store is basically some metadata plus the AI vectors. We might not need to deserialize the additional fields; we might just need the AI vector, so partial deserialization can improve the deserialization speed even more. On top of that, read compute might use the result multiple times, and caching allows us to avoid additional deserializations. We can also reuse those collections to reduce the overall memory allocation rate and CPU overhead. Last but not least, recently, during one of the rounds of benchmarking, we found that in the low-level transformation from the serialized format, which is a byte array, into the real format, which is a float array for the AI vectors, we spent a lot of active CPU cycles and a lot of wall-clock time. The optimization was to swap out the initial implementation, which used byte buffers to do that low-level conversion, and replace it with VarHandles, which are much more effective and much more JIT friendly. To show you the actual result: this is the reduction in memory allocation. This is a real production graph from a real production system, no benchmarking, no custom cases, just plain production. There’s more than 50% reduction in memory allocation rate, and more than 30% reduction in latency. The same JVM, just different modes: the old library, which is strictly based on byte buffers, versus the new library, which allows us to use VarHandles. We had to implement a multi-release JAR, because we still have to support Java 8 runtimes.
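
A minimal sketch of that conversion using a byte-array view VarHandle is shown below; the class name is hypothetical and the little-endian byte order is an assumption, not the actual library code.

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;
import java.nio.ByteOrder;

public final class FloatVectorDecoder {
    // A view VarHandle reads floats directly out of a byte[], with no
    // per-call ByteBuffer wrapper object.
    private static final VarHandle FLOAT_VIEW =
        MethodHandles.byteArrayViewVarHandle(float[].class, ByteOrder.LITTLE_ENDIAN);

    static float[] decode(byte[] serialized, int offset, int count) {
        float[] vector = new float[count];
        for (int i = 0; i < count; i++) {
            // Plain get: reads 4 bytes at the given byte offset.
            vector[i] = (float) FLOAT_VIEW.get(serialized, offset + i * Float.BYTES);
        }
        return vector;
    }
}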

Observability

In the end, I would like to talk about observability. Quite often, especially developers working on large enterprise applications take observability for granted. Yes, we have to log a lot of different lines; yes, we have to collect those thousands of metrics. It could be fine. But as soon as you try to develop very high throughput and very low latency applications, you might easily realize that the observability pillars become the bottlenecks of your application. During a regular review, we found that simply starting and stopping a stopwatch took 5% of the active CPU cycles. Yes, simply getting the current time took 5% of the active CPU cycles. There is more to that; this is just the tip of the iceberg. You need to store the data somewhere. You collect those thousands of metrics, and you need to store them. Yes, you can try to reduce the memory overhead by using primitive types, but if you have high throughput, you have multiple threads, and now you need to guard the changes to those metrics. Now you have synchronization, and synchronization becomes lock contention, quite a lot of it. To reduce the lock contention, for example, we swapped object synchronization with ReentrantLock. I also benchmarked StampedLock; in our case, in our workloads, ReentrantLock works much better. It allowed us to reduce the lock contention by 30%.
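
A minimal sketch of that swap, with an illustrative metric class rather than Venice’s actual metrics code:

import java.util.concurrent.locks.ReentrantLock;

final class LatencyMetric {
    private final ReentrantLock lock = new ReentrantLock();
    private long count;
    private long sumNanos;
    private long maxNanos;

    // Guard the multi-field update with a ReentrantLock instead of
    // object synchronization, the swap described above.
    void record(long nanos) {
        lock.lock();
        try {
            count++;
            sumNanos += nanos;
            if (nanos > maxNanos) {
                maxNanos = nanos;
            }
        } finally {
            lock.unlock();
        }
    }

    double averageMillis() {
        lock.lock();
        try {
            return count == 0 ? 0.0 : (sumNanos / (double) count) / 1_000_000.0;
        } finally {
            lock.unlock();
        }
    }
}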

We can reduce the lock contention even further, but in this case, we have to rely on objects like LongAdder. This is part of the java.util.concurrent.atomic package. The problem with it: yes, it allows us to significantly reduce the lock contention, but it’s heavily padded to avoid false sharing. False sharing is a very interesting phenomenon. Nowadays, CPUs load data in small chunks called cache lines. Imagine that inside the same chunk, inside the same cache line, you have two different variables. Two different threads, running on two different CPU cores, load the same cache line. One of the threads changes one value; at this point the cache coherence protocol, MESI, has to invalidate the copy of the same cache line on the other core, even though the other thread didn’t need the change to that field and was working with a completely different field. In this case, you have this invalidation of the cache, and you have to go back to memory and read it again. To avoid this, LongAdder is heavily padded. It uses striping, and between the elements of the stripe it puts at least a cache line, to make sure that it avoids false sharing. This creates a gigantic memory overhead. You might ask, ok, what is the solution in this case? The solution is to treat logging and metrics as possible technical debt. If you have any suspicion that a logline is not needed, or maybe not very relevant, or if you don’t use a metric, just delete it. We started treating this as technical debt: all the metrics which are not in use are immediately deleted, and all the unused loglines are immediately deleted. This is the only way to reduce the overhead of the observability pillars in high throughput, low latency applications.
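
To illustrate the trade-off, a small comparison sketch (illustrative, not Venice’s code):

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

final class CounterComparison {
    // One shared cache line: every increment from every thread contends
    // on the same value.
    final AtomicLong contended = new AtomicLong();

    // Striped and padded cells: threads mostly hit their own cell, so
    // contention nearly disappears, at the cost of a much larger footprint
    // per counter, which adds up across thousands of metrics.
    final LongAdder striped = new LongAdder();

    void onRequest() {
        contended.incrementAndGet();
        striped.increment();
    }

    // Reads fold the cells together; the sum is not a point-in-time snapshot.
    long total() {
        return striped.sum();
    }
}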

Conclusion

In conclusion, I would like to get back to this topic of misconceptions. Quite often, especially in the enterprise world, there’s a bias that during the lifetime of an application, it becomes bigger: it gets new and more features, it requires more memory, it becomes slower, and so on. Utilizing this continuous optimization approach over the course of the last 4 years, we were able to improve the throughput 10 times; we went up from 100,000 operations per second to 1 million. The latency improved five to seven times, from low double-digit milliseconds to low single-digit milliseconds. By continuously improving performance, by continuously caring about performance, we are trying to negate this bias and this trend. Our application gets more features, but it gets better and faster.

Questions and Answers

Participant: You cited the overhead of serialization quite a bit. I’m wondering if you actually checked Apache Arrow as a mechanism by which you avoid serialization altogether, particularly with the integration [inaudible 00:38:07]?

Dubrouski: Quite often, we’re locked to the LinkedIn infrastructure in all the protocols we use. Especially because this system mostly works with Kafka, and Kafka by default uses Avro, so we have to rely on that to avoid all the overhead of maintaining different formats and transformations and so on. I’m afraid that we didn’t check that.

Participant: How many engineers are on this project?

Liu: We have 18 engineers on the team.

Participant: Do you do all this low-level JVM tuning and optimization in-house? How do you train those engineers?

Dubrouski: In terms of design, in terms of architecture, it’s mostly the Venice team. We have separate performance teams at LinkedIn; I’m one of the technical leads. Usually, we work in collaboration: they’re responsible for the architecture, and I’m responsible for helping them with performance optimization suggestions, benchmarking, verifying the results, and things like this. We have distinct responsibilities.

Participant: How many customers are there of Venice?

Liu: Inside LinkedIn, we have over 2000 stores in production, and outside of LinkedIn we have one known customer, DataStax. We also contributed the Pulsar support: besides Kafka, Venice also supports Pulsar.

Participant: What you saw was [inaudible 00:40:09]. You have the design awareness to meet the derived data, the People You May Know use case. It was a product statement that we were trying to solve. That’s why they could achieve the eventual consistency and all those things that are already performant by design. Then there is the dependency on the JVM, which is what Alex touched on, with all the JVM tuning, and then the upgrade path to Java 17.

Participant: Assuming that most of these applications work with a lot of numbers, how do we deal with immutable primitives? Can we use immutable primitives? With message passing, how do we share this data without having to even worry about serialization?

Dubrouski: The data is technically pretty much immutable, because the clients only read it. Ingestion is pretty much offline or batch jobs, which work independently, and they ingest new versions of the data. As soon as a version is ingested, basically until the next ingestion, it’s immutable. For serialization, we use just the compacted data, because it’s easy to transfer over the wire and it’s easy to store, since it just takes less space; that’s why the serialization.

Participant: You pass the data between nodes. When you deserialize, do you create objects or do you create primitives? When you create these objects or primitives, how do you make sure that when you’re passing those numbers around within that same node, you’re not recreating these objects [inaudible 00:42:06]?

Dubrouski: This is what my part was about. We’re trying to make sure that we’re constantly profiling, and when we see those types of hotspots, let’s say memory allocation hotspots, we try to reduce them. There is still some object creation happening; you cannot just deserialize into just a set of primitives. There is a schema, because fastavro does have schemas, so you will still have some kind of a wrapper here and there; you still have some overhead. We’re trying to minimize this overhead as much as possible. One of the constant fights is to reduce the memory allocation rate. Just try to imagine: during the benchmark we showed, with 1 million QPS, at the network interface card level, we had a transmission speed of 1.2 gigabytes per second, 10 gigabits of traffic to a single node. This is at the wire level. Now imagine what is happening in the JVM, the allocation rate. One of the ways to improve the throughput and reduce the latency is actually to fight as much as we can to reduce the allocation rates in the JVM: reuse the objects, cache the results, use primitives where possible, but sometimes it just could be impossible.

Liu: For most of the Venice use cases, the data is already compact; binary Avro is mostly the most compact among all the formats. As for deserialization, normally, most of the users only deserialize in the client, which means the Venice backend only returns the binary back; we don’t deserialize in the backend, we just return the binary, and the application handles the deserialization using the Venice client.

Dubrouski: It’s mostly about read compute. Because, to optimize the performance, we move the computation to the server side. When you have to do computation, server side has to do deserialization. Even there, we constantly fight for reducing memory allocation rates.

Participant: Did you say you use JMH, the Java Microbenchmark Harness, for synthetic benchmarking? You also do benchmarking in production when you’re profiling. For all the gains that you get from these incremental changes, is the ratio the same between the synthetic and the real benchmarks? How much value did you see in the JMH benchmarks?

Dubrouski: Funnily enough, at least for the VarHandles benchmark, it was precise: 30% improvement. Thirty percent improvement in the benchmark, 30% improvement in production latency. It was really funny, but it worked. Yet again, why we use JMH benchmarks: because it’s a mission critical system, we cannot just invest in some development because we want to. We use JMH benchmarks because you can quickly design a micro-harness, a micro-benchmark, to test the optimization. You copy a piece of code as-is, then you modify it and compare the results. And as I’ve said, for some of the JMH benchmarks, I go down to the assembly code. With the VarHandle optimization, I was checking how it’s actually folding, how it’s basically inlining the code to achieve better performance than the byte buffer.
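
A skeleton of such a micro-harness might look like this; the payload setup is illustrative, and the second variant reuses the FloatVectorDecoder sketched earlier.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class DecodeBenchmark {
    private byte[] payload;

    @Setup
    public void setup() {
        payload = new byte[4096]; // stand-in for a serialized record
    }

    // Old variant: wrap the array in a ByteBuffer on every decode.
    @Benchmark
    public float sumViaByteBuffer() {
        ByteBuffer buf = ByteBuffer.wrap(payload).order(ByteOrder.LITTLE_ENDIAN);
        float sum = 0f;
        while (buf.remaining() >= Float.BYTES) {
            sum += buf.getFloat();
        }
        return sum;
    }

    // New variant: the VarHandle-based decoder from the earlier sketch.
    @Benchmark
    public float sumViaVarHandle() {
        float[] decoded = FloatVectorDecoder.decode(payload, 0, payload.length / Float.BYTES);
        float sum = 0f;
        for (float f : decoded) {
            sum += f;
        }
        return sum;
    }
}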

Participant: [inaudible 00:45:37]

Dubrouski: Yet again, it's a very complicated topic. If you just come out of nowhere and try to develop a benchmark, you might have problems with it; it might not actually replicate reality. You need to hone this skill. You need to work on it again and again, and do a lot of verification. That's why I dive into the assembly code: I'm trying to verify that it actually executes what it's supposed to execute, to prove that literally at the assembly level. When I see that, yes, it is happening, then we can try it. Yet again, it doesn't always carry over. Sometimes the improvement a benchmark shows doesn't appear in production; that's why we have this intermediate step of building a synthetic benchmark before we invest in the actual solution.

Participant: I totally agree. Benchmarking is an iterative process, but you also have to know what you are benchmarking. If you measure that aspect of it as well, as Alex mentioned, when you have this understanding of what you want to see at the assembly level, what kind of [inaudible 00:46:50], then you can understand whether the impact is actually happening, or whether you had a lot of issues with that. It makes sense.

Participant: I was curious about the constraint you put on the method size. It was something like 8,000 bytes.

Dubrouski: This is the JVM's constraint, a JIT compiler threshold. It's not our constraint.

Participant: I was wondering whether it's related to the type of compiler in the JVM itself, because introspection, inlining, and so forth are dependent on the type of JVM you're using.

Dubrouski: We’re relying on OpenJDK.

Participant: OpenJDK.

Dubrouski: We're relying on Microsoft's build of OpenJDK, so you can assume the defaults from that fact. The threshold is there. Technically, it can be disabled. We knew that it might not be the best idea to disable it, because at some point the code generator can produce something gigantic, and the compiled version will not work properly. We try to adhere to the rules of the JVM and, yet again, benchmark to make sure it doesn't cause any regressions.
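
If the threshold in question is HotSpot's refusal to JIT-compile very large methods (an assumption on my part; the flag names below are real HotSpot options, but the talk does not name them), it can be inspected at runtime and, as noted above, disabled at your own risk with -XX:-DontCompileHugeMethods:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Hedged sketch: DontCompileHugeMethods is the HotSpot product flag that
// skips JIT compilation of methods above the huge-method bytecode limit.
public class CompileThresholds {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hotspot =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        System.out.println(hotspot.getVMOption("DontCompileHugeMethods"));
        // Inlining-related thresholds can be listed from the command line:
        //   java -XX:+PrintFlagsFinal -version | grep Inline
        // and individual inlining decisions observed with:
        //   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining ...
    }
}
```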

Participant: All those things are defaults. The Microsoft build of OpenJDK also adheres to the defaults of OpenJDK: what you see in the OpenJDK source is what you'll probably find in most of these builds. Some JDKs out there do change defaults when something has been observed and has received feedback. For example, we do some intrinsics; we have added some to our build because it helps with the Azure stack. Things like that, people do modify and change the defaults, maybe because they have seen something in production, or maybe because their customers are asking for it.

See more presentations with transcripts



AWS Waives Egress Fees for Customers Exiting the Cloud

MMS Founder
MMS Renato Losio

Article originally posted on InfoQ. Visit InfoQ

AWS has recently announced free egress traffic for customers leaving the cloud and withdrawing their data from the AWS infrastructure. This initiative follows the guidelines of the European Data Act and is designed to help customers switch to alternative cloud providers or on-premises data centers.

Cloud providers usually offer free data ingress into the cloud, but costs associated with data transfer out to the internet (DTO) can be substantial: previously, AWS provided 100 GB per month of free egress from AWS regions to the internet as part of the Free Tier, but any data beyond this allocation incurred charges. With the recent announcement, customers now have the option to request free DTO for additional data by contacting support. Upon approval of the request, AWS will issue temporary credits based on the total volume of data stored across AWS services. Sébastien Stormacq, principal developer advocate at AWS, explains:

It’s necessary to go through support because you make hundreds of millions of data transfers each day, and we generally do not know if the data transferred out to the internet is a normal part of your business or a one-time transfer as part of a switch to another cloud provider or on-premises.

Under mounting pressure from European regulators, AWS is not the only provider introducing a waiver on DTO charges. Earlier this year, Google introduced a comparable option, while Microsoft announced this week free DTO when leaving Azure. Stormacq adds:

The waiver on data transfer out to the internet charges also follows the direction set by the European Data Act and it is available to all AWS customers around the world and from any AWS region.

Credits apply exclusively to DTO charges associated with moving away from the cloud, but there is no requirement to close an account or migrate all hosted workloads to qualify. Furthermore, the waiver does not apply to data out from services like CloudFront, Direct Connect, Snow Family, or Global Accelerator. Gergely Orosz, author of The Pragmatic Engineer newsletter, comments:

Europe gets a lot of flak for how they regulate more and more of tech (in Europe, of course). Regulation sets new rules, often ones that companies wouldn’t follow if not required. AWS just dropped their atrocious egress fees, globally, because of an EU regulation.

Corey Quinn, chief cloud economist at The Duckbill Group, disagrees:

This is almost entirely done for optics; if someone’s debating moving a workload off of AWS (this happens less frequently than the tech press would have you believe) they aren’t stopping because the egress fee is expensive; it’s ~3 months of storing the data in S3. Instead, the fee hurts when customers are doing their usual business and sending traffic to customers, not “when they’re leaving the cloud.” This change does nothing to fix that core pain, but it may look like it does to regulators if they’re not on the ball.

AWS has published a FAQ page with further details on how customers can apply for the waiver. In a separate announcement, AWS explains how it supports the Fair Software Licensing Principles for customers changing provider.




Meta Platforms and MongoDB have been highlighted as Zacks Bull and Bear of the Day

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

For Immediate Release

Chicago, IL – March 15, 2024 – Zacks Equity Research shares Meta Platforms META as the Bull of the Day and MongoDB MDB as the Bear of the Day. In addition, Zacks Equity Research provides analysis on NIKE Inc. NKE, PVH Corp. PVH and Carnival Corp. CCL.

Here is a synopsis of all five stocks:

Bull of the Day:

Meta Platforms continues to see estimates rise after their last quarterly report and so it remains a Zacks #1 Rank Strong Buy based on its earnings momentum.

My colleague Derek Lewis wrote about the resurgence of META in early February after their 10% EPS beat and raised guidance.

Q4 EPS of $5.33 bested the year-ago number by a whopping 200%, while revenues of $40.11 billion beat the Zacks Consensus Estimate by 2.87% and jumped 24.7% year over year. At constant currency (cc), the top line improved 22%.

When Derek wrote about upward estimate revisions on February 7, analyst consensus for this year and next stood at EPS of $19.41 and $22.42 respectively.

Now those projections have risen to $19.94 and $23.09, representing 34% and 16% annual growth.

What About Reality Labs?

After Apple made a big splash with their new mixed reality device Vision Pro, I decided to check in to see how Zuck & Co. are faring with their line of VR/AR headsets known as Quest.

The division that makes and sells Quest gear is still a money-losing operation, but that’s by design for their long-term development plan.

I recently wrote about META’s Reality Labs for my forthcoming special report on the Metaverse because I wanted to investigate how and when this king-pin of social media advertising, commerce and communication would realize profits from this new technology realm.

You can get on the waiting list for that report by emailing Ultimate@Zacks.com and tell ’em Cooker sent you.

Core Growth = Cash to Invest

Besides the stock ramping 400% since its nadir in Q4’22, META revenues troughed at $117 billion in that year and tagged $135B in 2023. Even better, topline growth this year is projected to approach $160B for an 18% advance.

That resurgence in META’s core business growth gives them room to experiment with the new technology platforms.

Here’s what I wrote recently in my Metaverse report…

The launch of Meta Quest Pro was a small success, but it won’t move the needle on META earnings as word is that this is still a money-losing business for them. Meta’s Reality Labs unit brought in more than $1 billion in fourth-quarter sales (the holiday quarter) but recorded an operating loss of $4.65 billion. Clearly, this is a division that Zuck & Co. can afford to invest in for a while.

According to AR Insider, whose team does a breakdown of quarterly revenue for the Meta Reality Labs unit, MRL’s topline was only $210 million in Q3 and they extrapolated that into $161.7 million for hardware sales. And these figures were down 24% sequentially and 27% year-over-year.

Plus, the cheaper Quest 2 still way out-sells the Quest Pro, with estimated revenue of $140 million. They concluded “Considering an average unit price of $315, this means that Meta sold approximately 443,931 Quest 2s in Q3.” With Quest Pro’s estimated revenue at barely $9 million and a Q3 ASP of $999 (the price dropped from $1,499 in Q1 the AR Insider team noted) this means that Meta sold approximately 8,934 Quest Pros in Q3.

So for now, the META story is still about its consumer engagement and the ability of the platform to sell more advertising and automate the shopping experience. If you’ve scrolled Instagram or any Reels lately, you know how “sticky” these apps are.

And again, this is the base of 1-2 billion daily active users that will be able to leverage the next Metaverse experiences in social and ecommerce. “I think the software and social platform might be the most critical part of what we’re doing, but software is just a lot less capital intensive to build than the hardware,” Zuckerberg said in February of 2023.

Meta Reality Labs reported $13.7 billion in operating losses for 2022 and lost $16.1 billion in 2023. They expect its operating losses to increase year over year in 2024, but I expect Zuck & Co. to keep investing to eventually leverage Metaverse tools and apps for profits — just like Apple is experimenting with what comes after the iPhone.

Bottom line on META: The Metaverse is a foot race of titans since the hardware development is so expensive. While AAPL growth stagnates at 6.5 times sales, I think META is a buy here trading 7X sales with 15% topline growth.

Be sure to grab my Metaverse report to find out which other stocks I’m picking in that race.

Bear of the Day:

MongoDB is a $27 billion provider of database solutions and is a key alternative to AWS and Azure because developers seem to love the platform.

Mongo has established itself as the clear next-generation, NoSQL, general purpose database leader with over $2B of annualized revenue growing at 20%+ and its core Atlas product growing high double-digits. Based on its large total addressable market, their growing platform capabilities position MongoDB for strong growth for many years.

But despite a big earnings beat on March 7, MongoDB slipped into the cellar of the Zacks Rank this week on weaker guidance that forced analysts to lower their growth projections.

MongoDB forecast revenue growth in a range of +13% to +15% for its current FY’25 that began Feb. 1. The guidance lagged consensus views of a 22% increase.

For fiscal 2025, MongoDB expects revenues between $1.9 billion and $1.93 billion. And non-GAAP net income per share is anticipated between $2.27 and $2.49.

This news compelled analysts to lower this year’s EPS consensus 19% from $3.08 to $2.49, representing a 25% drop in annual profits.

Quarter Details

MongoDB reported Q4 FY24 (ended January) adjusted earnings of 86 cents per share, which beat the Zacks Consensus Estimate by 86.96% and increased 50.9% year over year.

Revenues of $457.5 million jumped 26.6% year over year and surpassed the consensus mark by 6.02%.

MongoDB’s subscription revenues accounted for 97.1% of revenues and totaled $444.3 million, up 27.6% year over year. Services revenues declined 0.5% year over year to $13.1 million, contributing 2.9% to revenues.

Increased User Base

MongoDB added 1,400 customers sequentially to reach 47,800 at the end of the quarter under review. Of this, more than 7,000 were direct-sales customers.

The company’s Atlas revenues soared 34% year over year, contributing 68% to total revenues. Atlas had more than 46,300 customers at the end of the reported quarter, adding 1,400 customers sequentially.

MongoDB ended the quarter with 2,052 customers (with at least $100K in ARR) compared with 1,651 in the year-ago quarter.

Bottom line: Most analysts remain bullish on MongoDB and see the lowered guidance as a temporary blip that the company will quickly overcome in the next few quarters.

Additional content:

Here’s How NIKE (NKE) Looks a Week Ahead of Earnings

NIKE Inc. is slated to release third-quarter fiscal 2024 results on Mar 21. The leading sports apparel retailer is likely to have witnessed year-over-year declines in the top and bottom lines in the fiscal third quarter.

The company has been gaining from its Consumer Direct Acceleration strategy, along with strong demand, compelling products and robust performance in its digital and DTC businesses. Supply-chain constraints, continued weakness in Greater China and higher costs have been weighing on its bottom-line performance.

The Zacks Consensus Estimate for fiscal third-quarter revenues is pegged at $12.3 billion, suggesting 0.8% decline from the year-ago quarter’s reported figure. The Zacks Consensus Estimate for the company’s fiscal third-quarter earnings is pegged at 70 cents per share, indicating a decline of 11.4% from the year-ago reported number. Earnings estimates for the fiscal third quarter have declined 2.8% in the last 30 days.

In the last reported quarter, the company delivered an earnings surprise of 22.6%. Its bottom line beat the consensus estimate by 25%, on average, over the trailing four quarters.

Key Factors to Note

NIKE is expected to have witnessed continued gains from brand strength, robust consumer demand and an innovative product pipeline in the fiscal third quarter. Gains from its Consumer Direct Acceleration strategy, and robust digital and DTC performances are expected to have been other tailwinds.

Continued strength in retail traffic trends within NIKE Direct has been boosting conversion rates. The strong member buying trends are likely to have led to a record digital performance in the to-be-reported quarter. Strength in the North America, EMEA and APLA regions, fueled by increasing traffic, higher conversion and growth in average order value, is likely to have aided sales in the to-be-reported quarter.

The NIKE Direct business has been benefiting from robust growth across regions and an efficient digital ecosystem, which comprises its online site, and commercial and activity apps. Revenues at NIKE-owned stores are expected to have gained from improved traffic, higher conversion rates and growth in average order value. The NIKE Direct business is likely to have benefited from growth in North America, EMEA and APLA, offset by continued weakness in Greater China in the to-be-reported quarter.

We expect total NIKE Brand revenues to increase 1.5% year over year to $11,748.2 million in the fiscal third quarter, driven by a 0.8% rise in the Wholesale business.

On the last reported quarter’s earnings call, management stated that it expects strong gross margin execution and disciplined cost management to offset soft second-half revenues and drive earnings growth.

However, NKE has been witnessing a decline in the gross and operating margins due to rising costs, higher markdowns, increased freight and logistic costs, elevated input costs, and currency headwinds. Also, elevated SG&A expenses are concerning.

On the last reported quarter’s earnings call, the company predicted 160-180 bps improvement in the gross margin for the third quarter of fiscal 2024, driven by gains in strategic pricing, improved markdowns and lower ocean freight rates.

We expect gross profit to increase 3.2% year over year in third-quarter fiscal 2024, with a 160-bps expansion in the gross margin to 44.9%.

NIKE has been witnessing elevated SG&A expenses, driven by increased demand-creation expenses due to the normalization of sporting activities and overhead costs related to higher wages. Demand-creation expenses are likely to have increased in the fiscal third quarter, owing to elevated marketing and advertising investments. These investments are likely to have supported significant global sports moments and product launches, and investment in capabilities to transform NIKE’s operating model for greater speed and effectiveness.

Operating overhead expenses are expected to have resulted from higher wage-related expenses and NIKE Direct costs, as well as increased technology investments to support digital transformation in the to-be-reported quarter.

We expect demand-creation expenses to increase 9.6% year over year and operating overheads to rise 2.1% year over year in the fiscal third quarter.

Driven by the gross margin growth offset by higher SG&A expenses, our model suggests a 20-bps expansion in the operating margin to 11.6% in the fiscal third quarter.

Zacks Model

Our proven model conclusively predicts an earnings beat for NIKE this time around. The combination of a positive Earnings ESP and a Zacks Rank #1 (Strong Buy), 2 (Buy) or 3 (Hold) increases the odds of an earnings beat. You can uncover the best stocks to buy or sell before they’re reported with our Earnings ESP Filter.

NIKE has an Earnings ESP of +5.21% and a Zacks Rank of 3.

Other Stocks Poised to Beat Earnings Estimates

Here are some other companies that you may want to consider, as our model shows that these also have the right combination of elements to post an earnings beat.

PVH Corp. currently has an Earnings ESP of +1.23% and a Zacks Rank of 2. The company is expected to register bottom-line growth when it reports fourth-quarter fiscal 2024 results. The Zacks Consensus Estimate for PVH’s quarterly revenues is pegged at $2.4 billion, which suggests a decline of 3.3% from the figure reported in the year-ago quarter. You can see the complete list of today’s Zacks #1 Rank stocks here.

The consensus estimate for PVH’s bottom line has moved up 0.6% in the last 30 days to $3.49 per share, which suggests growth of 46.6% from the figure reported in the year-ago quarter. PVH has delivered an earnings surprise of 18.9%, on average, in the trailing four quarters.

Carnival Corp. currently has an Earnings ESP of +6.26% and a Zacks Rank of 2. The company is anticipated to register top and bottom-line growth in fourth-quarter fiscal 2023. The Zacks Consensus Estimate for CCL’s quarterly revenues is pegged at $5.4 billion, suggesting growth of 22% from the figure reported in the year-ago quarter.

The Zacks Consensus Estimate for Carnival’s quarterly loss per share has narrowed by a penny in the last seven days to 17 cents per share. The consensus mark suggests 69.1% growth from the year-ago quarter’s reported number. CCL delivered an earnings surprise of 19.2%, on average, in the trailing four quarters.

Stay on top of upcoming earnings announcements with the Zacks Earnings Calendar.

Why Haven’t You Looked at Zacks’ Top Stocks?

Since 2000, our top stock-picking strategies have blown away the S&P’s +7.0% average gain per year. Amazingly, they soared with average gains of +44.9%, +48.4% and +55.2% per year.

Today you can access their live picks without cost or obligation.

See Stocks Free >>

Media Contact

Zacks Investment Research

800-767-3771 ext. 9339

https://www.zacks.com

Zacks.com provides investment resources and informs you of these resources, which you may choose to use in making your own investment decisions. Zacks is providing information on this resource to you subject to the Zacks “Terms and Conditions of Service” disclaimer. www.zacks.com/disclaimer.

Past performance is no guarantee of future results. Inherent in any investment is the potential for loss. This material is being provided for informational purposes only and nothing herein constitutes investment, legal, accounting or tax advice, or a recommendation to buy, sell or hold a security. No recommendation or advice is being given as to whether any investment is suitable for a particular investor. It should not be assumed that any investments in securities, companies, sectors or markets identified and described were or will be profitable. All information is current as of the date hereof and is subject to change without notice. Any views or opinions expressed may not reflect those of the firm as a whole. Zacks Investment Research does not engage in investment banking, market making or asset management activities of any securities. These returns are from hypothetical portfolios consisting of stocks with Zacks Rank = 1 that were rebalanced monthly with zero transaction costs. These are not the returns of actual portfolios of stocks. The S&P 500 is an unmanaged index. Visit https://www.zacks.com/performance for information about the performance numbers displayed in this press release.

Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report

Carnival Corporation (CCL) : Free Stock Analysis Report

NIKE, Inc. (NKE) : Free Stock Analysis Report

PVH Corp. (PVH) : Free Stock Analysis Report

MongoDB, Inc. (MDB) : Free Stock Analysis Report

Meta Platforms, Inc. (META) : Free Stock Analysis Report

To read this article on Zacks.com click here.

Zacks Investment Research

Article originally posted on mongodb google news. Visit mongodb google news



Presentation: Socially Responsible Companies – Do We All HAVE to Change the World?

MMS Founder
MMS German Bencci

Article originally posted on InfoQ. Visit InfoQ

Transcript

Bencci: My name is Germán Bencci. I'm the founder and CEO of CodeYourFuture. We're going to cover socially responsible companies: do we all have to change the world? The reason for this presentation is to explore the relationship, if any, between for-profit companies and social impact, and the experience of being a social entrepreneur; one way or the other, it aims to help you define your role in social impact and create an action plan, if this is an area where you would like to see your company, or your own set of responsibilities, creating a bigger impact. First, I'm going to cover social responsibility from our experience running the organization.

Part I: Social Responsibility and Our Experience

My background covers a few areas. First, being Latin American, which I think is a unique experience. Then volunteering and spending a lot of years in student organizations with different missions, and working a little bit in the arts and the humanities on the side. Above all, professionally, working in the innovation management and intellectual property management space, traveling around Europe, meeting lots of startups, companies, and research centers, understanding what companies were developing, trying to get synergies created within the organizations I was working with, and the rest. That gave me a lot of exposure and an understanding of the opportunities and struggles of entrepreneurship, innovation, and competition. These are some of the areas that later on became crucial for me to be able to get into the social responsibility space. Whenever I talk about social responsibility or social impact, I'm thinking of positive change. I'm thinking that whatever effort we're making is creating a difference in the area we're working on. It's not enough to create something, organize an event, organize a session, and then say, we're working with these people, or we're working in this area. The question is: what is the effect of that action? In everything I have done since the beginning of CodeYourFuture, I have always been trying to answer that question. For me, social responsibility is specifically this: something needs to happen as a result of our actions, of our intervention, and it has to be positive. That's my mantra: tangible impact.

CodeYourFuture, just to give a little bit of context and explain how we have accumulated our experience in this area of social responsibility. We offer inclusive training, where a community of professionals share their skills with people from disadvantaged and low-income backgrounds, to help them start their careers, to start a new life, mainly in the tech sector. We work with a huge range of ethnicities, ages, and people from neurodiverse and varied socioeconomic backgrounds. We work across the range. We believe that CodeYourFuture is among the most diverse and inclusive organizations out there. These are some of the key areas, the key USPs, of our work. Our training is part-time and completely free. We also offer a lot of support to the participants. We will give them laptops if they cannot afford them. We will cover their childcare if they cannot pay for it. We will give them internet access if theirs is not good enough, or they don't have any. For us, it's about lowering the barriers as much as possible, to be able to give people the tools, the resources, and the training they need to change their lives. We cover a lot of employment paths nowadays: software and web development, cloud engineering, QA testing, product management. It's all about finding the right path for people.

All this experience makes sense only when we look at the cases we have supported, like the case of Nawal. She is a refugee who came to the UK as a single mum, had never written a line of code, had never been into programming, but really had the desire for change, for a new opportunity, for a career, for a new beginning. She came to the organization and started learning programming. After she graduated (it's a nearly 12-month training), she did an extra module on cloud engineering, and that was her dream area, her dream career. It clicked. It made her feel super excited. From there, she got a job as a cloud engineer for a tech consultancy. The second person is Mona. Mona is a case I have mentioned a few times because it's very inspiring. She's also a mum. When she came to CodeYourFuture, she was working in a laundry, ironing hundreds of shirts every day. One day she looked at her child and thought, I want to give them an inspiration to be better in life, to achieve something. I want to become a role model for my child. She joined the program and went through it. She also had no experience in programming, just a little bit of experience in graphic design. She struggled, but kept going, kept working through, kept making progress, never giving up. She got a lot of mentorship support from professionals. After she graduated, she got a role as a frontend developer, using a lot of her design experience. These are the examples that, for us, are the guiding light of the work we do. Some of the groups we work with: lots of ethnic minorities, the majority living below the poverty line, nearly gender equality. Most of them are not young people; they're already mature people moving into this new field.

I give all this context to explain our experience. As part of the work we do, we have talked to lots of tech companies, because our end goal is for our graduates to get employed. Employment is the aspect that really helps in the area we work in; it is the change driver. We have talked to lots of people in the tech industry working across lots of different roles: heads of CSR, CSI, diversity, inclusion and equity, innovation, engineering and tech leads, tech managers, senior managers, talent acquisition, HR managers, VPs, SVPs, all the C level. We have talked to lots of them and built a picture of what people are looking for, what they are interested in doing. Probably with some self-selection, our observation is that people want to make a difference. Obviously, they come to us with some ideas, with some incentives, some interest in making a difference. You might be one of them. You may have some interest, some desire, and that's great. If you don't, you're more than welcome to listen and see what it is, but you might get bored, because this is the clear focus here. People do want to make a difference, but there are always lots of reasons: companies don't know exactly what to do, or they don't have a project set up for it, or a budget, or they don't have time or resources assigned to it. There are lots of reasons given for that. Sometimes they talk about company size. We're too small, therefore we don't have enough resources, enough people. Or, if they're larger and doing really well, they say, we're too fast-paced, so it's hard, for example, to onboard junior talent. Or, we're a little bit scared because we don't have enough seniors. Or, when it comes to diverse hiring, it's, we don't have enough mentors, or we would need to assign a budget we don't have. Reasons like these are given to us, but at the same time there is interest in some change. Over time, we have helped companies create some impact and some change. These are the things we're going to explore.

Overall, across a lot of these conversations, we can see that there is a subtext, at least from our point of view (maybe biased, but that's what we believe): people might be saying, our industry, or our product, or our company, or our teams, or our skills are not fit for social impact. This is an interesting point to stop and think about for a second, because for-profit organizations have a lot of talent, a lot of knowledge, but at the same time, if those companies do not clearly understand how to work around social impact, it's very difficult. The question is: do they have to do it? Do you have to work on social impact? For those that want to, lots of people might say, we don't know how to be socially responsible, how to create social impact. At that point, we can think: if a person working in the for-profit tech sector says, we really want to make a difference, but we don't know how, one of the solutions could be, let's just quit our jobs, or quit the products we're working on, and work on social impact. I can already tell you, if you really want to do this, get ready for a pay cut. Is this really the solution? Then, when you explore what the challenge of working in the social sector would be, we'll be talking about: we have to create a great product. We have to find ways to market it. We have to find channels to bring clients and generate revenue, to measure that product, and then to improve it. Does that sound familiar? Yes, of course it does.

This is social entrepreneurship according to an AI that built these beautiful pictures of social entrepreneurship, which, if we forget about the faces, look like people working, getting together, trying to make a difference, trying to develop something. It's not that different. The reality is, that's not your job. You're working to develop a product, to serve a series of clients or offer a series of services. You are working in a competitive market. You're trying to generate revenue. You're trying to constantly improve and gather feedback, and fight a tough financial climate. All of this is happening at the same time, while in the background you have that thought about social impact. If we stop here for a second and listen to the people we're trying to support in the social responsibility area, at least the ones we work with, from disadvantaged, low-income backgrounds, they will tell you that what they're looking for is health, and safety, and housing; they're looking for jobs, for new friends. They're looking to build a career, to work for a cool company. The two lists are not that different.

Part II: What’s My Role?

If that's the case, then, are there ways we can find some alignment between one kind of work and the other? In this part two, we're going to cover what your potential role could be here. To me, one of the most important things to aim for is to do what you love, whatever that is. If you are very happy and excited working in engineering, in management or leadership, or in product development and innovation, keep doing that. Keep doing it. What we're going to explore here is whether that could be done a little bit better, so that it starts driving some other type of change, creating a byproduct in a way. Your product and your company, per se, will not change the world; that is not their goal. What you can explore is how you're building that product, how you're building that service. That's the area that really can make a difference. We're going to explore a few areas: business needs, humans, assets, and products and services within your organization, and understand what within those could make a difference. Let's explain what these four areas actually mean.

Number one, business needs. Business needs means anything that relates to things like marketing, promotion, or PR. It's about bringing in clients or partners. It's about generating revenue. It's about hiring the people you need to keep developing your product. It's about the skills you want to put together. All of those are business needs. Every single company has them: first, to create something, and then to keep the company alive and growing. The second, let's just call it humans. What is humans? It's about the people you're working with. There we explore things like the diversity within the team: the gender, the ethnicity, the backgrounds, the experience people bring, the culture among those individuals working together. That whole area is another one to explore specifically. The third one is the assets of your organization: the hardware, the software, the servers, the space. All of those are things you need, that you're buying, renting, leasing, or utilizing one way or the other. That's another area to explore. The final one is the products and the services, the whole reason your company exists. Within that, you can start looking a little more closely into the pricing, the tiers, the user types you have, and the purpose of that product.

Part III: Action Plan

With all of this in mind, which is just the setting, we're going to work on an action plan. We're going to look into how working with what you already do can still make a difference in social responsibility. The most important thing to understand is that, yes, you can make an impact. But from our point of view, from our experience, it is not how most people think it is; it is not necessarily through a direct path. The most important thing that needs to happen for a company to really create social impact is to find business alignment between the purpose of the company and the purpose around the social impact it wants to create. I will present what I call CSIF, the Corporate Social Impact Framework, something I just invented, to give a better idea of how we can create an action plan within your organization. First, you have to define a purpose. Then you have to make a list of the four areas I mentioned: the needs, the human aspect, the assets, and the services. Then we're going to invite the employees.

The first part is the purpose. It's very important that the company knows and decides what it is that you want to do. There are 17 United Nations Sustainable Development Goals. I have clustered them into very high-level areas, because if organizations can identify at least one they really want to work on, then from there they can do further work. First is poverty, which covers the areas of hunger, water, and sanitation. Second, health and well-being. Third, inequality, including gender and ethnicity. Fourth, education, decent work, and infrastructure. Then clean energy, climate action, and sustainability; this cluster covers a number of goals, including the work being done on the land and on the sea, and it all revolves around this space of climate action and sustainability. Finally, there is peace and justice. These, to me, are the seven areas to explore, and you define which one has the largest resonance, the largest interest, that drives the organization. It might be related to the product you're working on. It might be related to the backgrounds of the people, of the employees; whatever the reason is, there has to be a defined purpose. It doesn't have to be one area. It could be three. For example, at CodeYourFuture we work on three of those: inequality, education, and decent work. Those are our three areas. Obviously, for companies, covering one of them is enough. They want to do more? Sure. Try to keep focus and emphasis to make sure you're making a difference.

The second step is to explore the opportunities around the business needs, the humans, the assets, and the services, for the purpose of that business alignment. For example, in business needs, what opportunities are there around marketing your product and your service through the social impact? If you're supporting a specific cause and you're really aligned to it, really interested, you talk about that, and you're marketing your product indirectly. You don't talk about your product, you don't talk about your service; you talk about that social impact. You create awareness. You invite people to participate. Within that, you're using your branding, your logo, for it. Instead of direct marketing, it's a sort of indirect marketing. There's nothing wrong with doing that, because if you're aligning that marketing need with that impact, you're making a difference. That can also help you bring in new clients, by creating a discussion in society, in the countries you're targeting, and making clients feel why they should be working with you. Another really important one is recruitment and talent retention: when companies work in these areas, people feel much more fulfilled, much more connected. It can be the difference that makes them work for one company versus another. Then, finally, there are the opportunities of diversity, of diverse hiring, in terms of abilities, cultural language, and life experiences.

The second area is humans. It's all about diversifying the team you have built, that you work with, that you're a part of. It's about creating a company culture that is open and adaptable, so that when you set a challenge of, we want to bring more diversity, different shapes, different backgrounds to this team, your culture changes. It's affected. You may use this as a driver to move that culture in the direction you want. A really important aspect here is that lots of companies are not clearly defining the entry requirements to the organization. In many cases they're saying, yes, years of experience, but they're not defining instead what skills, what abilities, what level of those abilities they're really interested in working with. Finally, we have heard a lot about the mentorship part. It's about realizing that if an entry requirement for you is that this person needs to be mentored, that this person needs support in a certain area, a lot of that can be outsourced. Don't think that companies have to do all of this themselves.

The third one is assets. Here we're talking about things like, what kind of energy are you consuming? Is it clean? Could it be cleaner? What type of servers? What are their green credentials? The office space: how much energy is it using? How much of it uses sustainable materials? What kind of equipment are you acquiring? Could it be more sustainable? Could more of it come from recycled or secondhand sources? Are you ensuring that waste disposal is done the right way, or that products you're not using anymore get a second life? There's a lot to explore there, a lot on the sustainability side of things.

Then, finally, there is the product and the service itself. You'd be surprised how few companies are thinking about how the products they're marketing and selling could, in themselves, create an impact in society. You could offer a discount or a free tier to organizations that work in the social impact space. Of course, you can then go back to your business needs and use those discounts and free tiers in your marketing. Or you could say, yes, but we don't have any budget for any of this. Earmark it. Take a small percentage of your profit, and as you align those opportunities, ensure that that percentage is used on social impact activities, knowing that you're going to have not only a social return, but also a return on what the company needs to grow and be sustainable. Within that, one of the areas where there are definitely a lot of opportunities is to think less about traditional marketing and more about how you can create a message where you link your own products and services to a message around social impact. These are the four areas we believe companies can explore and utilize to align the business, to keep doing the work you're good at, what you're expert at, while at the same time making a difference.

It takes a village: that is my third mantra, which means you cannot do anything alone, even if you're the CEO of a company. You need to bring people along if you really want to bring change, if you want to change the culture, if you want to ensure that the values are going in the direction you want. That includes the discussion in this area. That's the third point of the framework: invite employees on the journey. Ask for their opinion, create discussion groups, and be open with them. Discuss the budgets and the initiatives, and the things you want to work on. Obviously, the leadership team has to be aligned if you want to make this company-wide. If it's team-wide, the leaders of that team have to agree on the previous points of the purpose and the opportunities. The more you can discuss it openly and listen to different voices, the more people will come along, and that will make it a success. Not everyone has to participate; that's ok. Those that want to should be given a voice. This creates a great sense of belonging and fulfillment, which will have an impact. It's harder to measure, but it's not impossible to measure the impact it may have on talent retention, talent acquisition, and market differentiation. Those are things you really want to have clear, and to explore to see how much you can grow in those areas. This is one of our internal meetings with the community; here there are volunteers, professionals, graduates, and part of the CodeYourFuture team that works on fulfilling our mission.

Corporate Social Impact Framework (Identify Partners to Help You)

The last part of the framework is about creating SMART goals for a pilot, then defining partners and a budget. SMART goals may seem obvious, but they are set less frequently than you would imagine. This is no different from any other work: you want to define specifically what success means for something like this. You need to set achievable goals that are measurable, and track them; if you don't track them, it won't happen. I will dig a little deeper on the partners part, because the same way you are likely working on a product or service that helps other companies or people make their lives easier or better, to bring some differentiation to whatever work they do, it is the same thing in this area. If you try to do everything, it won't be optimized. You have to focus on your work, on your product, and identify partners out there who are going to help you. It's really important that you redefine your requirements. This connects to the humans area in your opportunities within the organization. You really want to tell partners exactly what you're looking for. If it's something around skills and mentorship, put those down in measurable terms, not simply in terms of experience. Most things can be outsourced. Yes, there are questions about what can be handled within the organization and what can be handled outside it. For example, we're working with a Silicon Valley company that told us, you need to offer them mentorship. We do that; we work with experts that have been working specifically in this area, guiding people that have joined a specific company, helping them maneuver through a specific field. Organizations can do this, but it has to be communicated. You have to really sit down with the team that wants to work in that specific area. That applies to anything. If you're talking about climate action, it is: what are the requirements for change? What are the things where you want to make a difference? If you want to bring in certain types of equipment, under certain conditions, of a certain quality, all of that can be achieved. Yes, it may take some time to find the right partner, but there is a big market out there. Just work on it. Spend the time identifying partners and working through the options, but be clear.

Part IV: Annex

In the corporate social responsibility space, business alignment, from our experience, is everything for real, tangible, sustainable change, where you as a company feel you can make it work for a long time. Long-lasting, continuous intervention really makes a difference. It will change your company. It could change your culture, your purpose, the way you see yourselves. It's really important that, one, you don't think you have to do it alone. I'm talking both internally and externally. You have to find the right partners: first inside, the people who want to help move this along, and then outside, the people who are going to help you achieve the goal you define in a pilot. You really want to think about the purpose: what are the areas you want to explore? Lots of organizations jump from one area to another, and it feels like, we have to do something. If you bring constancy to your purpose, and you work year after year on a specific area, and the whole organization is connected to that, together, it will make a difference. Then, finally, think in SMART goals. Make sure you start with a pilot, and you define it, communicate it, work through it, measure it, and constantly get feedback on it. From there, you can grow it. Think like a business. Think about what you're doing and, within that, the difference it is going to make. Do we all have to save the world? No, we don't. You can keep working and doing what you're doing. Can we make a difference through our work? Yes, we can. If we decide to, if we want to, then we can do it. It is not easy, but if you work every single day, 1% of change will accumulate, it will compound, and over a year it will make a big difference. Those are the things and the areas that I wanted to share with you.

See more presentations with transcripts



Mercedes Mone closes Dynamite, comes out to help Willow Nightingale – Gerweck.net

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Mercedes Mone made her presence felt at the end of Dynamite last night after she came out to the rescue of a familiar face in the form of Willow Nightingale.

After successfully defeating Riho in the main event of the show, Julia Hart and Skye Blue attacked Nightingale as the crowd booed and chanted “CEO.”

It didn’t take long for Mone to come out, in a different outfit, to level the playing field. Blue rushed to take out Mone but was herself taken out in the aisle, and then Hart was on the receiving end of the Mone Maker finisher.

Willow, who defeated Mone to become the inaugural NJPW Strong Women’s champion, then raised Mone’s arm in the middle of the ring and left the ring to her. Mone celebrated with members of her family at ringside and with the fans before the show went off the air.

Colin Vassallo has been editor of Wrestling-Online since 1996

Article originally posted on mongodb google news. Visit mongodb google news



Public Employees Retirement System of Ohio Lowers Position in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Public Employees Retirement System of Ohio reduced its position in MongoDB, Inc. (NASDAQ:MDB) by 4.7% during the 3rd quarter, according to its most recent filing with the Securities and Exchange Commission (SEC). The fund owned 60,767 shares of the company’s stock after selling 2,965 shares during the period. Public Employees Retirement System of Ohio owned about 0.08% of MongoDB, worth $21,017,000 at the end of the most recent quarter.

Several other hedge funds also recently bought and sold shares of MDB. Wesbanco Bank Inc. acquired a new position in shares of MongoDB during the 3rd quarter valued at about $202,000. Raleigh Capital Management Inc. increased its holdings in shares of MongoDB by 156.1% during the 3rd quarter. Raleigh Capital Management Inc. now owns 146 shares of the company’s stock valued at $50,000 after acquiring an additional 89 shares during the last quarter. Alamar Capital Management LLC increased its holdings in shares of MongoDB by 9.4% during the 3rd quarter. Alamar Capital Management LLC now owns 9,903 shares of the company’s stock valued at $3,425,000 after acquiring an additional 847 shares during the last quarter. Polar Capital Holdings Plc increased its holdings in shares of MongoDB by 18.5% during the 3rd quarter. Polar Capital Holdings Plc now owns 248,259 shares of the company’s stock valued at $85,863,000 after acquiring an additional 38,747 shares during the last quarter. Finally, Jacobs Levy Equity Management Inc. acquired a new position in shares of MongoDB during the 3rd quarter valued at about $2,453,000. 88.89% of the stock is owned by institutional investors and hedge funds.

MongoDB Price Performance

MDB traded down $13.50 during trading on Friday, hitting $355.44. The company’s stock had a trading volume of 1,822,320 shares, compared to its average volume of 1,490,097. The firm’s 50 day moving average price is $421.71 and its two-hundred day moving average price is $391.73. The stock has a market capitalization of $25.66 billion, a price-to-earnings ratio of -143.32 and a beta of 1.24. MongoDB, Inc. has a 12 month low of $198.72 and a 12 month high of $509.62. The company has a quick ratio of 4.74, a current ratio of 4.74 and a debt-to-equity ratio of 1.18.

Analysts Set New Price Targets

MDB has been the topic of a number of research analyst reports. Truist Financial raised their target price on MongoDB from $440.00 to $500.00 and gave the company a “buy” rating in a report on Tuesday, February 20th. Needham & Company LLC reaffirmed a “buy” rating and issued a $495.00 target price on shares of MongoDB in a report on Wednesday, January 17th. DA Davidson raised MongoDB from a “neutral” rating to a “buy” rating and raised their target price for the company from $405.00 to $430.00 in a report on Friday, March 8th. JMP Securities reaffirmed a “market outperform” rating and issued a $440.00 target price on shares of MongoDB in a report on Monday, January 22nd. Finally, Piper Sandler raised their price objective on MongoDB from $425.00 to $500.00 and gave the stock an “overweight” rating in a research report on Wednesday, December 6th. One research analyst has rated the stock with a sell rating, three have assigned a hold rating and nineteen have given a buy rating to the company’s stock. Based on data from MarketBeat, MongoDB currently has a consensus rating of “Moderate Buy” and a consensus target price of $456.19.

Check Out Our Latest Stock Report on MongoDB

Insider Activity

In other news, CFO Michael Lawrence Gordon sold 10,000 shares of MongoDB stock in a transaction on Thursday, February 8th. The stock was sold at an average price of $469.84, for a total value of $4,698,400.00. Following the completion of the sale, the chief financial officer now directly owns 70,985 shares of the company’s stock, valued at approximately $33,351,592.40. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which can be accessed through this link. Also, CAO Thomas Bull sold 359 shares of MongoDB stock in a transaction on Tuesday, January 2nd. The stock was sold at an average price of $404.38, for a total value of $145,172.42. Following the sale, the chief accounting officer now directly owns 16,313 shares of the company’s stock, valued at $6,596,650.94. The disclosure for this sale can be found here. In the last ninety days, insiders sold 54,607 shares of company stock valued at $23,116,062. Company insiders own 4.80% of the company’s stock.

MongoDB Profile


MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

Before you consider MongoDB, you’ll want to hear this.

MarketBeat keeps track of Wall Street’s top-rated and best performing research analysts and the stocks they recommend to their clients on a daily basis. MarketBeat has identified the five stocks that top analysts are quietly whispering to their clients to buy now before the broader market catches on… and MongoDB wasn’t on the list.

While MongoDB currently has a “Moderate Buy” rating among analysts, top-rated analysts believe these five stocks are better buys.

View The Five Stocks Here

10 Best Cheap Stocks to Buy Now

MarketBeat just released its list of 10 cheap stocks that have been overlooked by the market and may be seriously undervalued. Click the link below to see which companies made the list.

Get This Free Report

Article originally posted on mongodb google news. Visit mongodb google news
