MongoDB, Inc. (NASDAQ:MDB) Stock Position Lessened by Voya Investment Management LLC

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Voya Investment Management LLC reduced its position in shares of MongoDB, Inc. (NASDAQ:MDB) by 96.4% in the fourth quarter, according to its most recent disclosure with the Securities and Exchange Commission. The firm owned 474,019 shares of the company’s stock after selling 12,794,369 shares during the period. Voya Investment Management LLC owned 0.64% of MongoDB worth $110,356,000 as of its most recent SEC filing.

Several other large investors also recently modified their holdings of MDB. B.O.S.S. Retirement Advisors LLC bought a new position in MongoDB in the fourth quarter worth about $606,000. Union Bancaire Privee UBP SA bought a new position in shares of MongoDB in the 4th quarter worth approximately $3,515,000. HighTower Advisors LLC raised its position in shares of MongoDB by 2.0% in the 4th quarter. HighTower Advisors LLC now owns 18,773 shares of the company’s stock worth $4,371,000 after acquiring an additional 372 shares in the last quarter. Nisa Investment Advisors LLC lifted its stake in shares of MongoDB by 428.0% in the 4th quarter. Nisa Investment Advisors LLC now owns 5,755 shares of the company’s stock valued at $1,340,000 after purchasing an additional 4,665 shares during the period. Finally, Covea Finance bought a new stake in shares of MongoDB during the fourth quarter valued at approximately $3,841,000. 89.29% of the stock is owned by institutional investors.

Insider Activity at MongoDB

In other MongoDB news, CEO Dev Ittycheria sold 8,335 shares of the business’s stock in a transaction on Tuesday, January 28th. The shares were sold at an average price of $279.99, for a total value of $2,333,716.65. Following the sale, the chief executive officer now owns 217,294 shares in the company, valued at $60,840,147.06. The trade was a 3.69% decrease in their position. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is available through the SEC website. Also, Director Dwight A. Merriman sold 3,000 shares of MongoDB stock in a transaction dated Monday, February 3rd. The stock was sold at an average price of $266.00, for a total value of $798,000.00. Following the completion of the sale, the director now directly owns 1,113,006 shares in the company, valued at $296,059,596. This trade represents a 0.27% decrease in their ownership of the stock. The disclosure for this sale can be found here. In the last 90 days, insiders have sold 47,680 shares of company stock valued at $10,819,027. 3.60% of the stock is owned by company insiders.

Wall Street Analysts Weigh In

Several equities analysts have commented on the company. Robert W. Baird lowered their price objective on MongoDB from $390.00 to $300.00 and set an “outperform” rating for the company in a research report on Thursday, March 6th. The Goldman Sachs Group cut their price objective on shares of MongoDB from $390.00 to $335.00 and set a “buy” rating on the stock in a report on Thursday, March 6th. Piper Sandler lowered their target price on shares of MongoDB from $280.00 to $200.00 and set an “overweight” rating for the company in a report on Wednesday. Needham & Company LLC reduced their price objective on MongoDB from $415.00 to $270.00 and set a “buy” rating for the company in a research report on Thursday, March 6th. Finally, China Renaissance initiated coverage on MongoDB in a report on Tuesday, January 21st. They issued a “buy” rating and a $351.00 target price on the stock. Eight analysts have rated the stock with a hold rating, twenty-four have assigned a buy rating and one has assigned a strong buy rating to the stock. Based on data from MarketBeat.com, MongoDB presently has a consensus rating of “Moderate Buy” and an average target price of $294.78.

Check Out Our Latest Stock Analysis on MDB

MongoDB Stock Performance

Shares of MDB traded up $0.29 during mid-day trading on Friday, reaching $173.50. The stock had a trading volume of 2,105,392 shares, compared to its average volume of 1,841,392. The firm has a market cap of $14.09 billion, a price-to-earnings ratio of -63.32 and a beta of 1.49. MongoDB, Inc. has a 12 month low of $140.78 and a 12 month high of $387.19. The business has a fifty day moving average of $195.15 and a 200 day moving average of $248.95.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). The business had revenue of $548.40 million during the quarter, compared to analysts’ expectations of $519.65 million. MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. During the same quarter in the previous year, the company posted $0.86 EPS. On average, research analysts forecast that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Further Reading

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)



Article originally posted on mongodb google news. Visit mongodb google news



Stifel Financial Corp Increases Position in MongoDB, Inc. (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Stifel Financial Corp grew its position in shares of MongoDB, Inc. (NASDAQ:MDB) by 6.4% in the 4th quarter, according to its most recent filing with the Securities and Exchange Commission (SEC). The institutional investor owned 114,216 shares of the company’s stock after purchasing an additional 6,894 shares during the quarter. Stifel Financial Corp owned approximately 0.15% of MongoDB worth $26,590,000 as of its most recent SEC filing.

A number of other large investors also recently bought and sold shares of the business. TD Waterhouse Canada Inc. raised its position in shares of MongoDB by 7.2% during the 4th quarter. TD Waterhouse Canada Inc. now owns 1,557 shares of the company’s stock valued at $362,000 after buying an additional 105 shares during the period. Tower Research Capital LLC TRC boosted its stake in shares of MongoDB by 554.0% in the 4th quarter. Tower Research Capital LLC TRC now owns 8,077 shares of the company’s stock valued at $1,880,000 after buying an additional 6,842 shares during the period. Teachers Retirement System of The State of Kentucky grew its holdings in MongoDB by 138.1% during the 4th quarter. Teachers Retirement System of The State of Kentucky now owns 51,432 shares of the company’s stock worth $11,974,000 after acquiring an additional 29,832 shares in the last quarter. Transatlantique Private Wealth LLC increased its holdings in MongoDB by 15.5% in the fourth quarter. Transatlantique Private Wealth LLC now owns 3,273 shares of the company’s stock valued at $762,000 after buying an additional 440 shares during the last quarter. Finally, Thematics Asset Management boosted its holdings in MongoDB by 49.9% in the fourth quarter. Thematics Asset Management now owns 132,313 shares of the company’s stock valued at $30,804,000 after purchasing an additional 44,061 shares in the last quarter. Institutional investors and hedge funds own 89.29% of the company’s stock.

Wall Street Analysts Forecast Growth

MDB has been the topic of several recent research reports. Daiwa America upgraded MongoDB to a “strong-buy” rating in a research report on Tuesday, April 1st. Barclays cut their price objective on shares of MongoDB from $330.00 to $280.00 and set an “overweight” rating for the company in a research report on Thursday, March 6th. Daiwa Capital Markets began coverage on shares of MongoDB in a report on Tuesday, April 1st. They issued an “outperform” rating and a $202.00 target price on the stock. Stifel Nicolaus dropped their price target on shares of MongoDB from $340.00 to $275.00 and set a “buy” rating on the stock in a research note on Friday, April 11th. Finally, Scotiabank restated a “sector perform” rating and issued a $160.00 price objective (down previously from $240.00) on shares of MongoDB in a research report on Friday. Eight investment analysts have rated the stock with a hold rating, twenty-four have given a buy rating and one has assigned a strong buy rating to the company’s stock. According to data from MarketBeat, the stock currently has an average rating of “Moderate Buy” and an average price target of $294.78.

Read Our Latest Research Report on MongoDB

MongoDB Stock Performance

MongoDB stock traded up $0.29 during midday trading on Friday, reaching $173.50. 2,105,392 shares of the company were exchanged, compared to its average volume of 1,841,392. The company has a market capitalization of $14.09 billion, a price-to-earnings ratio of -63.32 and a beta of 1.49. The firm’s 50-day simple moving average is $195.15 and its 200-day simple moving average is $248.95. MongoDB, Inc. has a 1 year low of $140.78 and a 1 year high of $387.19.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). The firm had revenue of $548.40 million for the quarter, compared to analysts’ expectations of $519.65 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. During the same period in the previous year, the company posted $0.86 EPS. On average, analysts forecast that MongoDB, Inc. will post -1.78 EPS for the current fiscal year.

Insider Activity

In other MongoDB news, CAO Thomas Bull sold 301 shares of MongoDB stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total value of $52,148.25. Following the transaction, the chief accounting officer now owns 14,598 shares in the company, valued at $2,529,103.50. This trade represents a 2.02% decrease in their position. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is available through this link. Also, CFO Srdjan Tanjga sold 525 shares of the business’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total transaction of $90,961.50. Following the sale, the chief financial officer now directly owns 6,406 shares of the company’s stock, valued at $1,109,903.56. This represents a 7.57% decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold a total of 47,680 shares of company stock valued at $10,819,027 over the last quarter. Company insiders own 3.60% of the company’s stock.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Read More

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)



Article originally posted on mongodb google news. Visit mongodb google news



Scotiabank Reduces MongoDB (MDB) Price Target Amid Market Caution – GuruFocus

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Scotiabank has adjusted its outlook for MongoDB (MDB, Financial), decreasing the price target from $240 to $160. Despite recognizing several strengths within MongoDB’s business model, analyst Patrick Colville maintains a Sector Perform rating for the company. This cautious stance suggests that the firm advises a prudent approach and does not recommend that investors quickly increase their positions in MongoDB at this time.

Wall Street Analysts Forecast


Based on the one-year price targets offered by 34 analysts, the average target price for MongoDB Inc (MDB, Financial) is $278.18, with a high estimate of $520.00 and a low estimate of $160.00. The average target implies an upside of 64.47% from the current price of $169.14. More detailed estimate data can be found on the MongoDB Inc (MDB) Forecast page.

Based on the consensus recommendation from 38 brokerage firms, MongoDB Inc’s (MDB, Financial) average brokerage recommendation is currently 2.0, indicating “Outperform” status. The rating scale ranges from 1 to 5, where 1 signifies Strong Buy, and 5 denotes Sell.

Based on GuruFocus estimates, the estimated GF Value for MongoDB Inc (MDB, Financial) in one year is $432.68, suggesting an upside of 155.81% from the current price of $169.14. GF Value is GuruFocus’ estimate of the fair value that the stock should be traded at. It is calculated based on the historical multiples the stock has traded at previously, as well as past business growth and the future estimates of the business’ performance. More detailed data can be found on the MongoDB Inc (MDB) Summary page.

MDB Key Business Developments

Release Date: March 05, 2025

  • Total Revenue: $548.4 million, a 20% year-over-year increase.
  • Atlas Revenue: Grew 24% year-over-year, representing 71% of total revenue.
  • Non-GAAP Operating Income: $112.5 million, with a 21% operating margin.
  • Net Income: $108.4 million or $1.28 per share.
  • Customer Count: Over 54,500 customers, with over 7,500 direct sales customers.
  • Gross Margin: 75%, down from 77% in the previous year.
  • Free Cash Flow: $22.9 million for the quarter.
  • Cash and Cash Equivalents: $2.3 billion, with a debt-free balance sheet.
  • Fiscal Year 2026 Revenue Guidance: $2.24 billion to $2.28 billion.
  • Fiscal Year 2026 Non-GAAP Operating Income Guidance: $210 million to $230 million.
  • Fiscal Year 2026 Non-GAAP Net Income Per Share Guidance: $2.44 to $2.62.

For the complete transcript of the earnings call, please refer to the full earnings call transcript.

Positive Points

  • MongoDB Inc (MDB, Financial) reported a 20% year-over-year revenue increase, surpassing the high end of their guidance.
  • Atlas revenue grew 24% year over year, now representing 71% of total revenue.
  • The company achieved a non-GAAP operating income of $112.5 million, resulting in a 21% non-GAAP operating margin.
  • MongoDB Inc (MDB) ended the quarter with over 54,500 customers, indicating strong customer growth.
  • The company is optimistic about the long-term opportunity in AI, particularly with the acquisition of Voyage AI to enhance AI application trustworthiness.

Negative Points

  • Non-Atlas business is expected to be a headwind in fiscal ’26 due to fewer multi-year deals and a shift of workloads to Atlas.
  • Operating margin guidance for fiscal ’26 is lower at 10%, down from 15% in fiscal ’25, due to reduced multi-year license revenue and increased R&D investments.
  • The company anticipates a high-single-digit decline in non-Atlas subscription revenue for the year.
  • MongoDB Inc (MDB) expects only modest incremental revenue growth from AI in fiscal ’26 as enterprises are still developing AI skills.
  • The company faces challenges in modernizing legacy applications, which is a complex and resource-intensive process.

Article originally posted on mongodb google news. Visit mongodb google news



Activision Reduces Build Time of Call of Duty by 50% with MSVC Build Insights

MMS Founder
MMS Matt Foster

Article originally posted on InfoQ. Visit InfoQ

Activision has cut build times for Call of Duty: Modern Warfare II (COD) in half by profiling and optimizing their C++ build system with MSVC Build Insights to uncover bottlenecks in their compilation pipeline.

The effort unblocked developers, accelerated delivery, and reduced idle time. Their success reflects a broader trend across the industry, with teams at Netflix, Canva, and Honeycomb investing in CI performance engineering as a way to improve both productivity and developer experience.

Activision observed that persistent build delays were eroding developer flow and limiting delivery velocity. In response, the Activision team collaborated with Microsoft’s Xbox Advanced Technology Group to instrument and streamline their compilation pipeline. By using MSVC (Microsoft Visual C++) Build Insights, a profiling tool for C++ builds, engineers identified a number of key inefficiencies in their build process. While these specific issues are rooted in C++, they reflect familiar challenges faced when working with large codebases and compute heavy builds.

Among the core inefficiencies, excessive inlining was inflating compile units, link-time optimizations were dragging due to complex initializations, and inefficient symbol resolution was creating CPU stalls during the final linking stage. Each issue contributed to delay in a different part of the process, and together they highlighted how localized inefficiencies – when multiplied across a large codebase – significantly extended build time.

These targeted optimizations led to a substantial reduction in build times – from approximately 28 minutes to 14 minutes. This improvement had significant implications for Activision’s development workflow. Faster builds meant more pull requests merged, more builds, less idle time and ultimately more frequent feature delivery.

But reducing build time isn’t just a technical improvement – it has measurable effects on the developer experience. Michael Vance, SVP and software engineer at Activision, noted that “slow builds create bottlenecks in our continuous integration pipelines, delaying the verification of every piece of code and content that goes into our games.” The team’s build time improvements were not just a performance win, but a way to unblock developers and maintain velocity in a tightly integrated workflow.

This aligns with broader industry findings that highlight developer experience as a key contributor to engineering throughput. Research from GitHub and Microsoft suggests that satisfaction with internal tooling, including CI/CD pipelines, correlates strongly with productivity metrics such as PR cycle time, deployment frequency, and time to resolve issues.

Activision’s experience is indicative of a broader shift in how organizations approach CI performance. As build and test pipelines grow in complexity, teams are applying the same discipline to profiling and instrumenting their pipelines as they do to the build artifacts themselves. Netflix reported faster iteration cycles and improved efficiency for Android developers after tuning their Gradle builds. Canva reduced CI durations from over 80 minutes to under 30, improving release velocity and reducing developer frustration. Honeycomb set internal objectives to keep build times under 15 minutes, framing CI speed as a first-class developer productivity metric. In each case, pipeline performance improvements were directly tied to happier, more effective engineering teams.

About the Author



Microsoft Extends SLNX Solution File Support in .NET CLI

MMS Founder
MMS Edin Kapic

Article originally posted on InfoQ. Visit InfoQ

Microsoft has announced experimental support for .slnx files in the .NET CLI v9.0.200, unifying the developer experience among the .NET tooling. This new feature aims to remove clutter in the solution file and to reduce friction when working with large solutions.

Traditionally, .sln files have been the standard for Visual Studio solutions, but they come with limitations such as manual maintenance of project references and path dependencies, difficult merging of source code changes, and overall verbosity, as can be seen in this example:

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 17
VisualStudioVersion = 17.9.34511.98
MinimumVisualStudioVersion = 10.0.40219.1
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "DotNetMonitorWebApp", "DotNetMonitorWebApp\DotNetMonitorWebApp.csproj", "{1385B389-B20C-4D19-8FE0-85629BC41343}"
EndProject
Global
    GlobalSection(SolutionConfigurationPlatforms) = preSolution
        Debug|Any CPU = Debug|Any CPU
        Release|Any CPU = Release|Any CPU
    EndGlobalSection
    GlobalSection(ProjectConfigurationPlatforms) = postSolution
        {1385B389-B20C-4D19-8FE0-85629BC41343}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
        {1385B389-B20C-4D19-8FE0-85629BC41343}.Debug|Any CPU.Build.0 = Debug|Any CPU
        {1385B389-B20C-4D19-8FE0-85629BC41343}.Release|Any CPU.ActiveCfg = Release|Any CPU
        {1385B389-B20C-4D19-8FE0-85629BC41343}.Release|Any CPU.Build.0 = Release|Any CPU
    EndGlobalSection
    GlobalSection(SolutionProperties) = preSolution
        HideSolutionNode = FALSE
    EndGlobalSection
    GlobalSection(ExtensibilityGlobals) = postSolution
        SolutionGuid = {C12E911E-FAA3-4ACE-B6BF-C3605E866483}
    EndGlobalSection
EndGlobal

The new .slnx format, based on XML and introduced in 2024, provides a more robust and flexible alternative, similar to project files in Visual Studio. It has a minimal footprint, removing the duplication of information already present in the project files. It uses a human-readable format that reduces the chances of accidental errors when manually editing the solution file.
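
For comparison, a minimal .slnx file for the same single-project solution looks roughly like the following (illustrative; the exact contents depend on the solution):

<Solution>
  <Project Path="DotNetMonitorWebApp\DotNetMonitorWebApp.csproj" />
</Solution>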




MSBuild has fully supported the .slnx format since version 17.13. The experimental support in .NET CLI version 9.0.200 allows developers to use .slnx files directly with dotnet commands (e.g., dotnet build, dotnet test). Visual Studio support for the new format may require developers to enable the 'Use Solution File Persistence Model' option listed under Environment / Preview Features.

A new dotnet command, migrate, helps developers convert their .sln files into .slnx files. Alternatively, developers can right-click the solution node in Visual Studio's Solution Explorer and save the solution in the new format (if enabled in the preview options). The official recommendation is to keep either the .sln or the .slnx file in the solution folder, but not both.

The .slnx file format is still officially in preview, and Microsoft encourages developers to try the new format in their workflows and share their feedback with the appropriate tooling team owners. The stated goal is to make the new format the default in both Visual Studio and the .NET CLI. According to Microsoft, the new format will also work with legacy .NET Framework solutions.

Microsoft has also released a library called Microsoft.VisualStudio.SolutionPersistence that allows programmatic access to both .sln and .slnx file operations. This allows third-party tools to leverage the new solution format without having to create a parser for it.

The comments by the developer community are mixed. Some praise the new format for simplicity and straightforward migration. Other developers think that new features such as globbing (dynamic project discovery inside a folder tree) should be added to the new format.

About the Author



Azure MCP Server Enters Public Preview: Expanding AI Agent Capabilities

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Microsoft has announced the Public Preview of the open-source Azure MCP Server, a new tool designed to enhance the capabilities of AI agents by providing access to Azure resources. The Azure MCP Server allows AI agents to interact with Azure services such as file storage, databases, and logs, and execute CLI commands.

Model Context Protocol (MCP) is an open protocol that standardizes the interaction between AI agents and external resources. The Azure MCP Server implements this protocol, exposing Azure services to AI systems. According to Microsoft, this enables developers to build context-aware agents for their Azure resources. For instance, agents can now query Azure Cosmos DB using natural language, access Azure Storage files, and analyze Azure Log Analytics logs.

The Public Preview of the Azure MCP Server includes support for the following Azure services and tools:

  • Azure Cosmos DB (NoSQL): List accounts, databases, containers, and items; execute SQL queries.
  • Azure Storage: List accounts and blob containers/blobs; manage blob containers and blobs; list and query tables; get container properties and metadata.
  • Azure Monitor (Log Analytics): List workspaces and tables; query logs using Kusto Query Language (KQL); configure monitoring.
  • Azure App Configuration: List stores; manage key-value pairs and labeled configurations; lock/unlock settings.
  • Azure Resource Groups: List and manage resource groups.
  • Azure CLI: Execute commands directly, with full functionality and JSON output.
  • Azure Developer CLI (azd): Execute commands directly, supporting template discovery, initialization, provisioning, and deployment.

(Source: Medium blog post)

Brian Veldman concluded in a Medium blog post on the Azure MCP Server:

From now on, I can use the Azure MCP Server to interact with the Azure services within my subscription. This is especially helpful in troubleshooting scenarios, such as analyzing logs.

Microsoft states that this functionality allows agents to operate on Azure services, manage cloud resources, and deploy applications. Other open-source projects, such as the Azure CLI MCP Server, are also available on GitHub and leverage MCP for Azure resources. Julien Dubois, a principal manager for Java Developer Relations, mentions the Azure CLI MCP Server in a post on X:

It’s an MCP server that wraps the Azure CLI, so your LLM can directly send commands to Azure.

In addition, Madni Aghadi states in a post on X:

“MCP is just hype” That’s what I thought until I saw 1000+ MCP servers built since its launch.

Any agent that supports the MCP client pattern, including GitHub Copilot Agent Mode and custom MCP clients, can use the Azure MCP Server.

  • GitHub Copilot Agent Mode: The Azure MCP Server can be installed with GitHub Copilot in VS Code. Microsoft recommends combining the Azure MCP Server with the GitHub Copilot for Azure extension for an enhanced development experience.
  • Custom MCP Clients/Agents: Agents must adopt the MCP client pattern to interact with the Azure MCP Server. Frameworks like Semantic Kernel can be used to build such agents. Microsoft provides a command (npx -y @azure/mcp@latest server start) to install and execute the server, and notes that the Azure MCP Server should work with any MCP client.

The Azure MCP Server follows similar moves by other cloud providers to enhance AI agent capabilities within their ecosystems.

Lastly, Microsoft plans to enhance the Azure MCP Server with more agent samples, documentation, Microsoft products, Azure service integrations, and additional features.

About the Author



Presentation: Optimizing Search at Uber Eats

MMS Founder
MMS Janani Narayanan Karthik Ramasamy

Article originally posted on InfoQ. Visit InfoQ

Transcript

Narayanan: Myself and my colleague Karthik here, we are going to be walking you through, as part of this session, on how we solve for scaling the Uber Eats backend and infra architecture, so that we can infinitely scale the number of merchants that are deliverable to any particular eater. I’m Janani Narayanan. I’m a senior staff engineer working on search ranking recommendations for the past few years.

Ramasamy: I’m Karthik Ramasamy. I’m a senior staff engineer. I work in the search platform team. Our team builds the search solutions that powers various search use cases at Uber, including Uber Eats, location search at rides, and other things.

Narayanan: A fun Easter egg here. If you have your Uber Eats app open, you could go there into the search bar and then look up who built this, and you can see the list of all of the engineers who are part of the team.

nX Selection

Which of the following is not among the top 10 searches in Uber Eats? It is unexpected for me when I first saw this. It was actually Mexican. Apparently, not many people care about eating healthy when they order from outside.

The problem that we want to talk about as part of this is, how did we go about expanding selection by nX? Before we go into what does nX mean, I want to spend a little bit of time talking about what selection means. It means different things for different people. If I were to talk to the merchant onboarding team, the operations, they would say that onboarding as many merchants as possible, that is considered as a success metric. That is considered as good selection. If I were to speak to eaters, different kind of eaters will have a different answer to this. Someone could say that I care about getting my food within a particular ETA, so that is good selection. My favorite restaurant, if it is on this platform, then this platform has good selection.

Some other folks can also say that if I get to find new stores, discover more stores, which I wouldn’t have normally found it, or instead of going out of the app and then finding somewhere else, or word of mouth recommendation and then coming back to the app, if the app in itself can curate my preferences based on my order history, based on my search history, you know everything about me, so why don’t you give me something that I haven’t tried before, surprise me. That is considered as good selection. What we are here to talk about is, given all of the real-world aspect of Uber Eats out of the picture, what is a technical challenge that we could solve where restaurants or stores are getting onboarded onto this platform, and the eaters want to have access to more of selection, more of restaurants available to them when they are trying to look it up in any of the discovery surfaces.

To get a sense of how the business is growing, how the market is growing, as we can see, just before the pandemic and through the course of the pandemic, the business has expanded and branched out into multiple different lines of business. Why this is important is because that is all part of the scale we were trying to solve for. It is not just about restaurants. Starting from the pandemic, we have grocery stores, then retail stores, people who are trying to deliver packages; all of these things are part of the same infra, the same ranking and recommendation tech stack, which powers it under the hood. Why this matters is that, up until now, we have been talking about it in terms of restaurants and stores, but the indexing complexity comes in here: in the case of a restaurant, a single document would probably have 20 or 30 items, and that's about it.

If we think about grocery as a line of business, there are going to be 100,000 SKUs for each and every store. All of those items also need to be indexed. Onboarding a single grocery store is very different in terms of scale in comparison with onboarding a restaurant. Another example: before we started working on this, people were able to order from restaurants which are 10 to 15 minutes away from them. Now, you could order from a restaurant sitting in San Francisco and have it delivered all the way to Berkeley.

Let’s say if you want to order something from Home Depot, and the item that you’re looking for is not here but it is somewhere in Sacramento, you should be able to order it from Uber Eats and then get it delivered to you. That is the breadth of the line of businesses that we wanted to unlock, and also the different challenges in terms of scale that different line of business offers for us. With that in place, in terms of selection, we are specifically focusing on the quantity of selection that is exposed to the eater when they are going to any of these discovery surfaces. The personalization aspect of it, that’s a completely different topic altogether.

Discovery Surfaces (Home Feed, Search, Suggestions, Ads)

What we mean by discovery surfaces, let’s start with terminologies. There are four different discovery surfaces. Which of the surfaces do you guys think most of the orders come from? Home feed. Depending on whether it is e-commerce or online streaming services, the surface area is going to change. For specifically delivery business, it is the home feed. In terms of the tech stack that we are looking at, there are different entry points to this, different use cases that we serve as part of the work that we did. If we take search, for example, there are different kinds of search, restaurant name search, dish search, cuisine search. Home feed has multiple different compartments to this. There is the carousels, which are all thematic based on the user’s order history.

Then we have storefronts, which is a list of stores based on the location. At the top of your home feed, if you look at your Uber Eats app, there would be shortcuts, which will be either cuisine based or promotion based and whatnot. All of these entry points need to work in cohesion. In other words, regardless of whether someone goes through suggestions, where someone is searching for pasta and you are trying to also show pastrami based on the lookahead search. We are looking at McD as a search, and we also want to show Burger King as part of the suggestions that come up. All of these different use cases need to be addressed as part of this. Because if I’m able to find a store or a restaurant through my home feed, but I’m not able to locate it through my suggestions at the same time, that is considered as a poor customer experience. We needed to address all parts of the tech stack in one go, in one XP.

Overall Architecture – Infra Side to Application Layer

Given this, let’s take a look at the overall architecture from the infra side to the application layer. The leftmost thing, that is all the infra layer of it. It is the corpus of all of our stores and indexes, how we index, how do we ingest, all of that goes into that. Then we have the retrieval layer, that is where the application layer and my team and my org starts, where the retrieval layer focuses on optimizing for recall. The more stores that we could fetch, the more stores that we could send it to the next set of rankers so they can figure out what is an appropriate restaurant to show at that time.

The first pass ranker is looking for precision and efficiency. What this means is that as the restaurants or stores are fetched, we want to start looking at, how do we do a lexical match so the user’s query and the document are matched as much as possible in terms of relevance? Then we have the hydration layer, where a lot of the business logic comes into picture in terms of, does it have a promotion, does it have membership benefits involved? Is there any other BOGO, buy one, get one order that we could present and whatnot? ETD information, store images, all of those things come into picture there. Then we have the second pass ranker, which optimizes for the precision. This is where a lot of the business metrics get addressed. We look at conversion rate. We also look at, given the previous order history, all of the other things that I know from different surfaces of interaction from this eater, how do I build the personalization aspect of it so we will be able to give more relevant results to the eater.

Given this overall tech stack, I want to give a sense of scale in terms of the tech stack. Particularly, I would like to draw your attention to two of the line items here. One is the number of stores which are part of the corpus, millions of them.

The other one is the number of matched documents that we used to fetch out of the retrieval layer. When you look at it, the scale sounds like there are only thousands of documents which are matched. What matched means is: when I look at the eater's location, how many stores can deliver to that eater's location? When we had these tens of thousands of them when we started, we said, if you wanted to make it nX or increase it more, all that we needed to do is fetch more. Let's just increase our early termination count. Let's fetch more candidates and then go from there. We did a very basic approach of fetching 200 more stores, a two-week XP, and it blew up in our face: the P50 latency increased by 4X. In this example, we could see that the red dot is where the eater is, that is the eater's location, and as it expands, which is the new red dots that we started adding, that is where the latency started increasing. This is a serious problem. Then we needed to look into where exactly the latency is coming from.

The root cause, as we started diving into it, had multiple aspects to it where we needed to look at different parts of the tech stack to make sure that some design decisions made in, let’s say, ingestion, how does it impact the query layer? How do some of the mechanisms that we have in the query layer, don’t really gel well with our sharding strategy? It was a whole new can of worms that we opened as we started looking into it. First, we started with benchmarking. If there’s a latency increase, especially the retrieval layer, let’s just figure out where exactly it is coming from.

In the search infra layer, we added a bunch of different metrics. Depending on whether we are looking at grocery or eats, there is one step in particular which stood out: we were trying to match a particular document to the query, and once the document matched, put it into a bucket, then move on to the next document. Iterating over to the next matching document took anywhere between 2.5 milliseconds for grocery and 0.5 milliseconds for eats, and that was unexplainable to us at the time. It is supposed to take nanoseconds, especially with an optimized index. That told us this was a problem area we needed to start looking into.

The other area that I want to talk about is how we are ingesting the data, and the pieces will fall in place in the next few slides. For those of you who are following Uber’s engineering blogs, you would now be familiar that Uber does most of its geo-representation using H3. H3 library is what we use to figure out how we tessellate the world and how we make sense out of the different levels that we have in place. Depending on the resolution, the different levels optimize for different behaviors that we want for the eaters and the merchants.

Given this, we represent any merchant and the delivery using the hexagons to say that merchant A can deliver to A, B, C locations by using hexagons in the resolutions. How this gets indexed is if we take this example where we have restaurants A, B, and C, and hexagons are delivery areas which are numbered, the index will be a reverse lookup, where, going by the hexagons, we would say that in this hexagon, two or three different restaurants can deliver to me. Fairly straightforward so far.

From here, what we did is, now that we understand how the index layout looks, this is the second problem that we identified as part of selection expansion. At the time of ingestion, we had this concept of close by and far away, and that is the column that we use to ingest the data. At a store level, the upstream service had the decision to say, I'm going to give you a list of stores and the deliverable hexagons, and I'm going to categorize them as close by and far away. When we did that, if we look at this example, hexagon 7 considers both A and B as far away. If we look at the real data, B is actually much closer than A, but the ingestion doesn't have that information.

Naturally, the query stage also doesn't have this information. Only the upstream service had this information, which we lost as part of this. Without this ETD information, we were treating A and B the same at ranking time, and that was another problem. In terms of search area, even though we say that we've only increased by 200 restaurants, going from, let's say, 5 kilometers to 10 kilometers means the area grows with the square of the radius: doubling the radius quadruples the area. The search space grows very quickly, even though we say we're only trying to deliver from 10 miles to 12 miles or 15 miles. This meant that we were processing a much larger number of candidates, which ties into why going from one document to the other was taking such a long time.

The next thing is the store distribution. If we were to make it as a simple concentric circle around where the eater’s location is and the Lat-Long is, what we could see is, as we start expanding further into more geos, the number of stores or the distribution of stores in the innermost circle versus the outer circle and whatnot is going to be anywhere between 1:9 ratio. We will get more of faraway stores than the close-by stores, and ranking them becomes harder.

Another thing to note is, if we are going to find a restaurant which has much higher conversion rate because that restaurant is more popular and whatnot, but that is in the second circle or the third-most circle, then it is highly likely that in the second pass ranker, that store will get a higher precedence because it has higher conversion rate. In reality, people would want to see more of their close-by stores because a burger is a burger at some point in time. That was one of the problems that we saw as we started fetching more stores where good stores were trumping the close-by stores, and the ranking also needed to account for that.

Search Platform

Ramasamy: Next we’ll go share some insights about the search platform that powers the Uber Eats search. Then we will talk about some optimizations that we did to improve the retrieval limit and also the latency. How much traffic does Uber Eats get per day? It’s tens of millions of requests per day. Uber has an in-house search platform that is built on Apache Lucene. We use a typical Lambda architecture for ingestion. We have batch ingestion through Spark, and then we have real-time ingestion through the streaming path.

One of the notable features that we support in the real-time ingestion is the priority-aware ingestion. The callers can prioritize requests and the system will give precedence to the higher-priority request to ensure a high degree of data freshness for the high-priority request. At Uber, we use geosharding quite heavily. This is because most of our use cases are geospatial in nature. I’ll share some insights on some of the geosharding techniques that we use at Uber. Then, finally, we build custom index layouts and query operators that are tuned for Uber Eats cases that take advantage of the offline document ranking and early termination to speed up the queries.

Here's the architecture overview of the search platform. There are three key components here. The first one is the batch indexing pipeline. The second component is the streaming or real-time updates path. The third component is the serving stack. We start with the batch ingestion. Usually, these are Spark jobs that take the data from the source of truth, convert it into search documents, partition them into shards, and then build Lucene indexes in Spark. The output of the Spark jobs is a set of Lucene indexes, which then get stored in the object store.

Updates are then constantly consumed through the streaming path. There is an ingestion service that consumes the updates from the upstream, again converts them into search documents, and then finds the shard the document maps to and writes to the respective shard. One thing to note here is that we use Kafka as the write-ahead log, which provides several benefits. One of them, which we talked about earlier, is implementing priority-aware ingestion. Because we use Kafka as the write-ahead log, it enables us to implement such features. It also allows us to implement replication and other features using Kafka.

The searcher node, when it comes up, it takes the index from the remote store, and then it also catches up the updates from the streaming path to the write-ahead log, and then it exposes query operators to run the search queries. There is another component here, it’s called the aggregator service. It is actually a stateless service. Its main responsibility is to take the request from upstream, find the shard the request maps to, then send it to the respective searcher node, and execute the queries. Also, aggregate the results and send it back to the caller if there are query fanouts and things like that. That’s the high-level overview of the search platform that powers Uber Eats search.

Sharding Techniques

Next, I will talk about sharding techniques that we use. As we have been talking earlier, that most of our queries are geospatial in nature. We are looking for find me restaurants for given coordinates, or find me grocery stores for given coordinates. We use geosharding to make these queries more efficient. The main advantage of geosharding is that we can locate all the data for a given location in a single shard, so that the queries are executed in a single shard.

At scale, this is quite important, because if you fan out the request to multiple shards, then there is an overhead of overfetching and aggregating the results, which can be avoided by using geosharding. The other benefit is first pass ranking can be executed on the data nodes. The reason being that the data node has the full view of the results for a given query, and then you can push the first pass ranker down to the data node to make it efficient. The two geosharding techniques that we use are latitude sharding and hex sharding. I’ll talk about both of them in detail.

Latitude sharding works this way: you imagine the world as slices of latitude bands, and each band maps to a shard. The latitude ranges are computed offline. We use a Spark job to compute it. The way we compute is a two-step process. First, we divide the map into several narrow stripes. You can imagine this on the order of thousands of stripes. Then we group the adjacent stripes to get roughly equal-sized shards. In the first step, we also get the count of documents that map to each narrow stripe.

Then we group the adjacent stripes such that you get roughly equal-sized shards, N being the number of shards here. There's a special thing to note here: how we handle the documents that fall on the boundary of the shards, that is the green zone in this picture. Those are documents that fall on the boundary of two shards. What we do is index those documents in both of the neighboring shards. That way, the queries can go to a single shard and get all the documents relevant for the given query. The boundary or the buffer degree is calculated based on the search radius. We know that the queries are at the max going to go for a 50-mile or 100-mile radius.

Then we find the latitude degree that maps to that radius, and that's the buffer zone. Any document that falls in the buffer zone is indexed in both shards. With latitude sharding, we get this benefit of cities from different time zones getting co-located in the same shard. In this example, you can see European cities and cities from America mixed in the same shard. Why is this important? This is because the traffic in Uber follows the sun pattern, where activity is higher during the day and slows down during the night. This sharding naturally avoids clustering cities with the same busy hours in the same shard.
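
A rough sketch of that offline grouping step, assuming the per-stripe document counts have already been computed; the greedy grouping below is illustrative and not the actual Spark job:

def group_stripes_into_shards(stripe_doc_counts, num_shards):
    # stripe_doc_counts: document count per narrow latitude stripe, ordered south to north.
    # Returns (start_stripe, end_stripe) ranges, one per shard, roughly equal in size.
    total = sum(stripe_doc_counts)
    target = total / num_shards            # ideal number of docs per shard
    shards, start, running = [], 0, 0
    for i, count in enumerate(stripe_doc_counts):
        running += count
        # Close a shard once it reaches the target, keeping the last shard open
        # so every stripe gets assigned.
        if running >= target and len(shards) < num_shards - 1:
            shards.append((start, i))
            start, running = i + 1, 0
    shards.append((start, len(stripe_doc_counts) - 1))
    return shards

# Example: eight narrow stripes grouped into three shards.
print(group_stripes_into_shards([5, 20, 40, 35, 10, 60, 15, 5], 3))  # [(0, 2), (3, 5), (6, 7)]

Documents that fall within the buffer degree of a shard boundary would then be written into both neighboring shards, as described above.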

That helps us a lot in managing the capacity and stuff. We also see some problems or challenges with the latitude sharding. One of them is the bands are too narrow at the center. That’s because the cities are denser in this space, and then you reach a point in some use cases where it’s difficult to divide further, especially considering you have a buffer zone. Over time, the shards become uneven, and some shards, especially towards the center, are larger when compared to the rest of the shards. This creates problems, like your index builds take longer time because you’re bound by the larger shard. Also, those shards experience larger latencies and stuff.

The optimization for this problem is hex sharding. In hex sharding, we imagine the world as tiles of hexagons. As Janani said, at Uber we use the H3 library very extensively. The H3 library provides different resolutions of hexagons. The lowest resolution, which means larger hexagons, results in about 100 tiles for the whole world. The highest resolution results in trillions of tiles. Selecting the right resolution is key for using hex sharding. We use some observations and empirical data to decide the hex sizes. At Uber, we generally use hex resolution 2 or 3 for hex sharding.

Again, we use the same approach of offline jobs to compute the shard boundaries. Basically, we pick a resolution, we compute the number of docs that map to each hexagon at that resolution, and then group them into N roughly equal shards using bin-packing. We also handle the buffer zones similar to latitude sharding. In hex sharding, you have to imagine the buffer zones also in terms of hexagons. The key here is to choose a finer resolution (smaller hexagons) than the main shard hexagons for the buffer zones. Then you index the documents that fall in the buffer zone in both the hexes. In this case, the right-side shard shows that the main blue area is the main hexagon and outside are the buffer zone hexagons that get indexed into it as well to avoid cross-shard queries. That's the details on sharding and the architecture of the search platform.
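
A simplified sketch of the hex-sharding assignment, assuming the h3-py v4 API (latlng_to_cell) and a basic greedy bin-packing; the finer-resolution buffer-zone handling is left out:

import h3  # assumes the h3-py v4 API
from collections import Counter

def assign_hexes_to_shards(doc_locations, resolution, num_shards):
    # doc_locations: iterable of (lat, lng) pairs for all documents.
    # Returns a mapping of H3 cell -> shard id.
    counts = Counter(h3.latlng_to_cell(lat, lng, resolution) for lat, lng in doc_locations)
    # Greedy bin-packing: place the largest cells first onto the currently lightest shard.
    shard_load = [0] * num_shards
    assignment = {}
    for cell, count in counts.most_common():
        shard = shard_load.index(min(shard_load))
        assignment[cell] = shard
        shard_load[shard] += count
    return assignment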

Solution

Next, we will talk about some specific optimizations we did for the Uber Eats use case, taking advantage of the query patterns and other data from the use case to improve the recall and also reduce the latency. The first thing that we did is building a data layout that can take advantage of the query patterns. I will share a couple of data layouts that we used, one for the Eats use case, another for the grocery use case. I'll also walk through how those layouts helped us to improve the latency. The second technique we'll talk about is how we use the ETD information that Janani was talking about earlier and how we index that into the search index. Then, how we divide the search space into non-overlapping ranges and then execute them in parallel to improve the latency. Then, finally, we'll talk about how moving some of the computations that were happening at query time, such as the far-away versus nearby computation that Janani mentioned earlier, into the ingestion path helped to improve the recall and the latency.

Data Layout

I will go through the data layout. This is a data layout that we use for the Eats index. If you look at the Eats query pattern to begin with, you are basically looking for restaurants or items within the restaurants for a given store. We use this insight to also organize the documents in the index.

In this case, we take the city and we co-locate all the restaurants for a given city first. You can see, McDonald’s and Subway, they’re all the restaurants under the city, SF. We then order the items or the menus under those restaurants in the same order. You go with this order, city followed by all the restaurants in that city, and then items for each of the restaurants in the same order as the store. The benefit we get is the faster iteration. A query comes from SF, you can skip over all the documents of other cities that may be in the shard and just move the pointer right to the SF and then find out all the stores. That makes the query faster. The other nice benefit that we get is that if your data is denormalized, in our case, sometimes we denormalize all the store fields into the items as well.

In that case, you have a lot of common attributes for the docs. The item docs for the store will have all similar store level attributes adjacent to each other. This provides better compression ratio. That’s because Lucene uses delta encoding and if you have very sequential doc IDs, then your compression is better. Then, finally, we also order the documents by static rank, which helps us to early terminate the queries once we reach the budget.
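
A toy illustration of that ordering: within a city, stores come first in static-rank order, followed by their items grouped in the same store order (field names are made up for the example):

# Each search document is either a store or an item belonging to a store.
docs = [
    {"type": "item",  "city": "SF", "store": "Subway",     "store_rank": 2, "title": "Turkey Sub"},
    {"type": "store", "city": "SF", "store": "McDonald's", "store_rank": 1},
    {"type": "item",  "city": "SF", "store": "McDonald's", "store_rank": 1, "title": "Big Mac"},
    {"type": "store", "city": "SF", "store": "Subway",     "store_rank": 2},
]

# Cluster by city; within a city, stores (by static rank) precede the items,
# and items follow in the same store order, so queries can early-terminate.
ordered = sorted(docs, key=lambda d: (d["city"], d["type"] == "item", d["store_rank"]))
for d in ordered:
    print(d["city"], d["type"], d["store"])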

Next, I will share a slightly modified version of the index layout that we use for grocery. That's because of the nature of the grocery data. It's pretty similar. Again, we first sort by city, then we take the stores, ranked by the offline conversion rate order. Here, the difference is that we place the items of a store right next to that store. I will share why that is important. This is how the layout looks: city, then the first store followed by its items, then the second store followed by its items, then the third store followed by its items.

One benefit: let's say you look up a specific posting list with the title "chicken"; then you get store 1 and all the items with the title chicken for that store, then store 2 and all of its matching items, then store 3, and so on. As Janani was saying earlier, grocery scale is very high compared to Eats. You have hundreds or thousands of items in a single store that can match the given title. When you're executing a query, you don't want to be looking at all the items from the same store. You can give a budget for each store, and then once you reach that limit, you can skip over to the next store. This layout allows us to skip over the stores pretty quickly while still collecting enough items from a given store. The other benefit that we get, from the business point of view, is that it helps us get diverse results. Your results are not coming from a single store; you also cover all the stores in the search space. That's the main advantage of this layout.
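
A sketch of how a query-time collector could exploit this layout, taking at most a fixed budget of matching items per store before moving on; this is a toy stand-in for the real Lucene query operator:

def collect_with_per_store_budget(matching_items, per_store_budget):
    # matching_items: hits in index order, i.e. already grouped by store and
    # with stores ranked by their offline conversion rate.
    taken_per_store = {}
    for item in matching_items:
        store = item["store"]
        taken = taken_per_store.get(store, 0)
        if taken < per_store_budget:
            taken_per_store[store] = taken + 1
            yield item
        # else: the real operator would skip ahead to the next store's block
        # instead of scanning every remaining item of this store.

hits = [
    {"store": "store_1", "title": "chicken breast"},
    {"store": "store_1", "title": "chicken thighs"},
    {"store": "store_1", "title": "chicken wings"},
    {"store": "store_2", "title": "chicken broth"},
]
print([h["title"] for h in collect_with_per_store_budget(hits, per_store_budget=2)])
# ['chicken breast', 'chicken thighs', 'chicken broth']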

Next, here are some interesting numbers that we observed when we moved from an unsorted, unclustered layout to the layouts clustered by location and store. Here's the latency of a single query executed before and after clustering. This query returns about 4K docs. As you can see, the retrieval time before clustering is around 145 milliseconds, and the retrieval time after clustering is 60 milliseconds. It's about 60% better after clustering the docs based on the query pattern. Down below, the graph shows, per doc ID, the time taken to get each hit in the retrieval loop.

As you can see, before sorting, the hits can take anywhere from 10 to 60 microseconds for a single doc. The latency here is in microseconds. After sorting, the latency is a lot better; each hit takes less than 5 microseconds. Here's the overall improvement in latency that we observed when we rolled out this clustered index layout. You can see more than 50% improvement in P95. We see a similar improvement in P50 latencies as well. The other benefit is that the index size reduced by 20%.

ETA Indexing

Narayanan: One of the aspects that we talked about as part of ingestion is that the metadata we get from the upstream services was not passed on to the rest of the stack, so we could not do meaningful optimizations on top of it. What this means is that if we take restaurants 1 and 2 as part of this example, as we index that restaurant 1 can deliver to hexagons 1, 2, 3, 4, we do not know, relative to H1, how far away H2 is, how far away H3 is, and so on. This is important metadata that we needed to pass to the rankers, so the rankers can penalize the faraway stores in conjunction with the conversion rate they have available. This information needed to be passed on from the upstream team altogether. We started off with this. Now that we had one more dimension on which we needed to index data, we benchmarked a couple of different approaches for how we could have both the hexagon and the ETD indexed and used in the retrieval layer.

What we finally ended up doing is that, after discussions with the product and science teams, we aligned on what ranges make sense in terms of our query pattern, and we said, let's break them down into a few ranges that overall reflect how people are querying the Eats ecosystem. We dissected it into multiple time ranges: 0 to 10 minutes, 10 to 20 minutes, 20 to 30, and so on. After we dissected it, we also said, from this eater's location, let's say hexagon 1, what are the restaurants which are available in range 1, range 2, range 3, and so on. We did that for every single hexagon available. For those of you who are following along and thinking, I smell something here, aren't there other ways of doing this?

For example, in this case, there is a tradeoff that we make in terms of where we want the complexity to be. Should it be in the storage layer or should it be in the CPU? In this case specifically, if we take a particular restaurant A, that restaurant can be in multiple different hexagon-ETD ranges. Restaurant A could be 10 minutes from hexagon 1 and 30 minutes from hexagon 2, which means that we store it a couple of times, or multiple times in this case. That is a case where, at the cost of storage and offline work at ingestion time, we get the benefit of making the query faster.
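
As an illustration of that tradeoff, here is a small, hypothetical sketch of how the denormalization could look at ingestion time: a store that delivers to several hexagons is expanded into one entry per (hexagon, ETD range). The bucket boundaries follow the 0 to 10, 10 to 20, and 20 to 30 minute ranges mentioned in the talk; the names and types are made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class EtdRangeDenormalizer {
    /** ETD buckets aligned with the query pattern discussed in the talk. */
    enum EtdRange { MIN_0_10, MIN_10_20, MIN_20_30, MIN_30_PLUS }

    static EtdRange bucketFor(int etdMinutes) {
        if (etdMinutes <= 10) return EtdRange.MIN_0_10;
        if (etdMinutes <= 20) return EtdRange.MIN_10_20;
        if (etdMinutes <= 30) return EtdRange.MIN_20_30;
        return EtdRange.MIN_30_PLUS;
    }

    /** One indexable entry per (hexagon, ETD range) the store can deliver to. */
    record StoreRangeEntry(String storeId, String hexagon, EtdRange range) {}

    /**
     * A store that delivers to hexagons H1..Hn with different ETDs is written
     * multiple times, once per (hexagon, range). Storage and ingestion pay the
     * cost so that query time becomes a cheap range lookup.
     */
    static List<StoreRangeEntry> denormalize(String storeId, Map<String, Integer> etdByHexagon) {
        List<StoreRangeEntry> entries = new ArrayList<>();
        etdByHexagon.forEach((hexagon, etdMinutes) ->
                entries.add(new StoreRangeEntry(storeId, hexagon, bucketFor(etdMinutes))));
        return entries;
    }
}
```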

Even for this, we tried a couple of different approaches, and we will soon have a blog post that goes more in-depth into the multiple alternative benchmarks we did and into some of the other approaches we tried. We tried a BKD-tree approach to see, can we do this as a log(n) operation? We also tried a couple of other approaches, for example, only maintaining the hexagons as part of the KD-tree, and then, in the retrieval layer, making a gRPC call to get the ETD information and sorting it in memory. Will that work? We did a bunch of different things to get there.

Finally, this is what our query layer looks like. Like Karthik mentioned, this is a gRPC layer between delivery and the search platform. We added a new paradigm of these range queries, and we started having multiple ranges that we can operate with. This enabled us to leverage the power of parallelization. To visualize this: if a request comes in, let's say somewhere in the yellow circle, for that request there will be multiple queries sent from the application layer all the way to the storage layer.

One query would be for the yellow layer, which is the closest bucket, another query for the light green, another for the dark green, and so on. This is how we were able to get this nX selection expansion at constant latency, regardless of which line of business we care about. It involved changes in every single part of the search and delivery ecosystem, multiple services, and multiple engineers to get it to the finish line. After we made all of these changes, we put it into production and we saw that the latency decreased by 50%, which we originally thought was not possible. The cherry on top is that we were also able to increase the recall. Before this, we had a different algorithm that queried concentric circles of expanding radius, and in that algorithm we made a tradeoff between recall and latency. In this case, we were able to get more stores, which means the rankers see more candidates to optimize over.
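
A rough sketch of what that fan-out could look like at the application layer is shown below, assuming a stand-in SearchClient interface for the per-range gRPC call. The real Uber interfaces and query shapes are internal; this only illustrates issuing the non-overlapping range queries in parallel and merging the results.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RangeQueryFanOut {
    record StoreHit(String storeId, int etdRangeStartMin) {}

    /** Stand-in for the gRPC call into the search platform for one ETD range. */
    interface SearchClient {
        List<StoreHit> search(String eaterHexagon, int rangeStartMin, int rangeEndMin);
    }

    /**
     * Fire one query per non-overlapping ETD range (the yellow, light green, and
     * dark green circles) and merge the results. Because the ranges don't overlap,
     * the overall latency stays close to that of the slowest single range.
     */
    static List<StoreHit> expandSelection(SearchClient client, String eaterHexagon) {
        int[][] ranges = {{0, 10}, {10, 20}, {20, 30}};
        ExecutorService pool = Executors.newFixedThreadPool(ranges.length);
        try {
            List<CompletableFuture<List<StoreHit>>> futures = new ArrayList<>();
            for (int[] r : ranges) {
                futures.add(CompletableFuture.supplyAsync(
                        () -> client.search(eaterHexagon, r[0], r[1]), pool));
            }
            List<StoreHit> merged = new ArrayList<>();
            for (CompletableFuture<List<StoreHit>> f : futures) {
                merged.addAll(f.join());
            }
            return merged;
        } finally {
            pool.shutdown();
        }
    }
}
```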

One of the other use cases that we haven't spoken about so far, but which is important in terms of customer experience, is non-deliverable stores. In Uber, at least on the restaurant side, there can be many cases where you would look for a store, and it is not deliverable yet, but it is available for pickup. The reason this exists is marketplace conditions: where the merchants are located, where we could send couriers, the time of the day, and whatnot.

At some times of the day, we won't be able to deliver from a restaurant, and this deliverability of a particular restaurant is also dynamic. Given this, we still want the eater to know that we do have this restaurant, but for some other reason we won't be able to deliver at this particular point in time. We wanted to support this. Even in this use case, we moved a bunch of complexity from the query layer into the ingestion layer. At the ingestion layer, we worked out which of these hexagons are deliverable from the store and which of them are only discoverable. We computed that discoverable-minus-deliverable difference and stored it in the index, so at query time we can quite simply say, it's either in the deliverable set or in the discoverable set, and get it from there.
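
A minimal sketch of that ingestion-time computation, assuming each store carries a set of discoverable hexagons and a set of deliverable hexagons (the field names are hypothetical):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class DeliverabilitySets {
    /**
     * At ingestion time, split a store's hexagons into "deliverable" and
     * "discoverable only" (pickup only) sets, so the query layer just checks
     * which bucket a hexagon falls into instead of recomputing it per request.
     */
    static Map<String, Set<String>> splitHexagons(Set<String> discoverableHexagons,
                                                  Set<String> deliverableHexagons) {
        Set<String> discoverableOnly = new HashSet<>(discoverableHexagons);
        discoverableOnly.removeAll(deliverableHexagons);   // discoverable minus deliverable
        return Map.of(
                "deliverable", deliverableHexagons,
                "discoverable_only", discoverableOnly);
    }
}
```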

Key Takeaways

Overall, what we wanted to get across is that we started from first principles. When the latency shot up, we did a benchmark to understand where it was coming from, and narrowed it down to the sharding strategy: I have a San Francisco document and I have a bunch of Japan documents, because Japan has the most concentrated restaurants possible, so if I take a San Francisco request and iterate through a bunch of Japan restaurants, that is obviously going to increase the latency. That is where the millisecond latency in get-next-doc came in. Index layout is one of the most overlooked pieces of the software, something we don't even look at; we needed to spend two to three years understanding the query pattern and then figuring out what to do in our index layout so that it is optimized for the request pattern we care about.

Then, the sharding strategy needed to be aligned with what we are trying to achieve. We even saw test stores that were part of the production index, which was adding to this latency. We had three times as many test stores as real stores, and we were processing all of those while trying to serve a real-time request, so we needed to create a separate cluster for the test stores.

Apart from this, there were a few other expensive operations which used to happen in the query layer. We had some fallbacks available at the query layer. In any distributed system, there are always going to be timeouts. There is always going to be some data which is not available from the upstream. When those things happen, we used to have a query layer fallback to say, try to get it from this place, or if you don't get it from this service, get it from this other service, or get it from a bunch of other places. We moved all of this fallback logic to the ingestion layer, so at the query layer we just know that I'm going to query and get the data that I need, and all of the corner cases are already handled.

Apart from the parallelization based on ETD, we also had a bunch of other parallelizations in terms of: this is a strong match for the query, this is a fuzzy match, and this is an either/or match. Let's say Burger and King would mean that I'm going to look for stores which have Burger, also look for stores which have King, and then combine the matches. We did all of these different things to leverage non-overlapping subqueries and get the power of parallelization.
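
As a hedged illustration only, here is what those query variants could look like if expressed directly as Lucene queries; the field name and exact query shapes are assumptions, not Uber's implementation.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class QueryVariants {
    /** Strong match: every token must appear, e.g. "burger" AND "king". */
    static Query strongMatch(String field, String... tokens) {
        BooleanQuery.Builder b = new BooleanQuery.Builder();
        for (String t : tokens) {
            b.add(new TermQuery(new Term(field, t)), BooleanClause.Occur.MUST);
        }
        return b.build();
    }

    /** Either/or match: any token may appear, e.g. "burger" OR "king". */
    static Query eitherOrMatch(String field, String... tokens) {
        BooleanQuery.Builder b = new BooleanQuery.Builder();
        for (String t : tokens) {
            b.add(new TermQuery(new Term(field, t)), BooleanClause.Occur.SHOULD);
        }
        return b.build();
    }

    /** Fuzzy match: tolerate small typos in each token. */
    static Query fuzzyMatch(String field, String... tokens) {
        BooleanQuery.Builder b = new BooleanQuery.Builder();
        for (String t : tokens) {
            b.add(new FuzzyQuery(new Term(field, t)), BooleanClause.Occur.SHOULD);
        }
        return b.build();
    }
}
```

Each variant can then be submitted as its own subquery and executed in parallel, with the results merged afterwards, in the same spirit as the ETD range fan-out above.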

Time to Resolution

How much time do you think was expected to be spent to solve this problem? We spent about two to three months just to identify where the problem was, because there are multiple different teams involved: feed is a separate team, ads is a separate team, suggestions is a separate team, 1,000 engineers together. We needed to add instrumentation in multiple different parts to even identify the problem, and that is what took those two to three months. It took us four to six months to get to the first round of XP. Right now, I think this Q1 or so, we are available in the rest of the world too.

Questions and Answers

Participant 1: You did all this moving of stuff to the ingestion side, is there anything you put in there that you don’t need anymore, that you’d want to go back and take out?

Narayanan: This architecture that we have is also something which is evolving. From that perspective, there are some changes that we made in terms of live ingestion. I'll give an example. We went in with the idea that many use cases need to be live ingested, and then we realized that there are some cases which don't even need to be ingested at that time, which also helps us build the indexes faster and improves the SLAs. One thing that we decided to take out later: when a store moves location, that location update used to be a live ingestion, which would go through Kafka and then get into the index.

Operations said, we need to get it in right after the store moves, and it has to be ingested with milliseconds of latency. When we started understanding the use case better, we saw there is a time period involved between when the store decides to move location, when ops gets to know that, and when tech starts moving it. They usually have made this decision two or three months in advance, and we have about a week or two to actually make that transition. So we decided, with the priority queue approach that he talked about as part of ingestion, that we don't need this as a priority, because it can go as part of the base index build, and that is not going to use my compute resources.

Participant 2: You mentioned the two months to identify the problem, and it takes two weeks to solve it. What kind of observability platform do you use to measure these metrics? Do you use any OpenTelemetry to identify where those queries are slowing down, and which queries are performing?

Narayanan: The expectation when we started the problem was that we would land it in two weeks, not that it took us two weeks to solve the problem.

On OpenTelemetry: we have in-house telemetry services in place. In fact, there is a company that branched out from some of the engineers who worked on the team. We use M3. That is our metric system, and that is what we integrated with. Jaeger for tracing. When we started instrumenting, our search infrastructure wasn't yet integrated with M3, so that was also something we needed to do along the way to get it instrumented and out the door. One reason we hadn't done that earlier was the in-memory usage of the sidecar agent. Because of that, we didn't want that memory overhead in production. We spun off a separate cluster, which was identical in terms of hardware configuration and capacity, and that is where we did all of our benchmarks so that it doesn't impact production.

Participant 3: You said you use Lucene for indexing. Does that mean that for searching specifically, you have a separate cluster that is specifically used just for searching versus indexing, or is it the same cluster that serves both reads and writes?

Ramasamy: We use the same cluster for search and indexing at this time. If you look at the architecture, we have two components of ingestion: the batch ingestion and the real-time ingestion. What we do is move all of the heavy lifting of indexing to the batch side, and the live or real-time ingestion is kept lightweight. The searcher node is utilized mostly for queries; very little is used for indexing. That's the current state. We are also working on the next generation system where we are going to fully separate the searcher and the indexer.

Participant 4: I would think that most people are querying for food when they're at home or at work, so subsequent queries are going to be pretty common. Do you do anything to reduce the search space, for example effectively caching the hex cells on subsequent calls? If I'm at my house, on the first query you have to go out and do the work to determine what the boundaries are, but on subsequent queries the geography isn't changing; the only thing that's changing is the restaurants. Have you looked at that type of caching for subsequent queries?

Narayanan: We do have a cache in place, but not for the purpose that you're describing. We don't cache some of these requests. If we look at the session, throughout the session we do have some information that we maintain in memory and can serve from there, but we haven't done a distributed cache there. Many times, we also want to be able to dynamically look at store availability and item availability, which change very often, especially during peak times. Restaurants run out of things. Because of that, we intentionally don't do caching for that particular purpose.

Also, the delivery radius, or the deliverability, expands and shrinks based on marketplace conditions, whether there are accidents in that area, whether rerouting happens, and whatnot. There is a lot of that in place. If there is an incident, someone could go and change, rejigger those delivery zones too. We want that to be reflected in real time, because the last thing someone wants is to add everything to their cart and then see that the store is no longer available. That is the reason we don't invest heavily in caching at the time of retrieval in that part, but we do use it for a different part in the query layer.



Podcast: Building Empathy and Accessibility: Fostering Better Engineering Cultures and Developer Experiences

MMS Founder
MMS Erin Doyle

Article originally posted on InfoQ. Visit InfoQ

Transcript

Shane Hastie: Good day folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today I’m sitting down with Erin Doyle. Erin, welcome. Thanks for taking the time to talk to us.

Erin Doyle: Thank you. Really excited to be here.

Shane Hastie: My normal starting point on these conversations is, who’s Erin?

Introductions [01:06]

Erin Doyle: I would describe myself as a generalist, a jack of all trades. I’ve done a lot over the course of my career. Started out at the Kennedy Space Center doing full stack development for the workers who would work on the space shuttle. After that, I did sort of product support, customer support, solutions engineering, which I wouldn’t say was my favorite job, but I learned a lot about troubleshooting, debugging, jumping into code that I didn’t write, and that was pretty valuable. After that, I did mobile development, which I was brand new to, and it was new to the company, so that was an exciting journey. And then I moved on to a full stack e-commerce team, and that’s when I really dove deep into web accessibility because that was a huge priority for the product, and had to learn all about that. And so then I got into talking and teaching workshops on web accessibility.

And then after that I did a little consultancy, which was another completely different experience, had to do everything, all the front, all the back, all the DevOps. And then eventually I moved over to a platform team. So, I’d done full stack development forever and then I got into platform development because I love the idea of supporting developers as my customers. I love doing the work to help enable my teammates, fellow developers to work more efficiently, effectively. And so I got to do a lot more of that over on the platform team. I got to wear a lot of hats, SRE, DevEx, DevProd, Ops, everything. So, now my final step in my journey, I just started a new job as a founding engineer at Quotient where we do development productivity software. So, that’s been a dream of mine. And of course as a founding engineer, I get to wear all the hats. The build-up of my career has led to this and I’m really excited about it.

Shane Hastie: Thank you. Let’s dig in first in one of those early passions, accessibility, web accessibility. Why does it matter so much?

Why accessibility matters [03:22]

Erin Doyle: Yes. It's huge. And primarily it was huge to our customers at that company because we had a white label application for businesses, and these are businesses selling a product, and so, A, there were a lot of legal ramifications for websites that are trying to sell a product. If they're not accessible, people could sue for that. And we did have customers who had been sued over their own websites in the past, so it was a huge priority for them to be safe from legal action. But also, if a customer can't check out, if a customer can't buy your product, you're not making as much money.

Unfortunately, the bottom line for the business is maybe not altruistic, but that’s where it started. And once it started, and we could get into that, I could start talking to people about, we need to do this because we care, because accessibility matters to all of us. We should all have an equal experience using the web and mobile. It’s become our world. Everything we do is online. And so if we’re blocking people from being able to do the things they need to be able to do, the things they want to be able to do, that’s just not right. And so to care about our fellow human, we want to make these experiences equal and accessible. So, those are the big reasons why, legal, to make money, and because we care about our fellow humans.

Shane Hastie: What does good accessibility come down to?

Accessibility practices [04:57]

Erin Doyle: I think it's really understanding your users, your audience and how they're approaching your product. So, there's a whole range of limitations. I'm not even going to just call them disabilities because sometimes it's limitations. Sometimes it's a temporary limitation that any of us could run into at any point in time. It could be environmental, it could be health related, whatever. It's that many of us may approach using the web with some sort of limitation, whether it's manual, meaning, we can't use our hands in the same ways, maybe we can't use a mouse, maybe we have dexterity issues, maybe it's visual. We're either blind or colorblind or just have an impairment in our vision. Maybe it's auditory, we can't hear if there's something that's playing. So, there's a lot of ways that people might be limited. We need to understand, based on those limitations, how are they using the web? How are they using computers? How are they using mobile devices? And then actually experiencing that.

I think really we have to try to put ourselves in their shoes as best we can and test out our applications using those same approaches that they might be using. Maybe it’s keyboard only. Maybe you can up the contrast. There’s lots of color contrast tools that maybe people who are colorblind might use. I’ve found issues where suddenly buttons on the page disappeared when the contrast was turned up on the site, and so people wouldn’t even know that those were there. Screen readers, every device has options of screen readers, and so actually using your site with a screen reader and/or keyboard only is quite a different experience. So, really approaching your site with all those and seeing, what is that user experience like and how do we need to modify it?

What are the things that we need to have the screen reader announce to people? Are there things that as the state of the page changes, we need to let them know that this thing is visible now? Or there’s an error, what is it and where is it? So, we need to just approach the user experience differently. And that’s why it really helps if you can get designers involved in this, so it’s not landing at the development stage. It’s sort of like, what is the user experience for someone using a screen reader? What is it for someone using a keyboard, etc. So, then we can develop matching those experiences.

Shane Hastie: There are guidelines. Are they useful?

Making sense of accessibility guidelines and tools [07:56]

Erin Doyle: It can be overwhelming. If you go to the various guidelines that are out there, or if you look at the legal requirements, well, they vary by country. So, what the laws are in the US for accessibility are going to be different from the EU and so forth. So, approaching it from that direction can be really overwhelming. What are all the guidelines? How do I go through a checklist and make sure my site is doing all of these things? There are a lot of tools that help us, that can audit for the low hanging fruit, the things that are just easier to catch, not just statically, but even at runtime. We've got tools that you can run in the browser that, as the state of the site changes, can detect those things. But again, those are the easier things to catch, so you can at least start there.

I’ve always recommended taking a test-driven approach. Instead of starting with the guidelines and trying to make sure you’re matching them, start with a test-driven approach. Add the tools that are going to make your job a little easier, catch the low hanging fruit and have it tell you, this is inaccessible. And usually they’re really good at telling you why. You can get to more documentation that tells you why is this important? Why is this a problem? Here’s some recommendations on how to fix it. And so you can build up your knowledge set, bit by bit of, okay, these are the things we typically need to look for and these are the ways we typically fix them. And then you can create shared component libraries and things like that. So, you really only need to fix it once and it’s going to roll out to multiple places.

So, there are ways to help you be more efficient and get your arms around all these things to learn and make sure you’re doing. But after that, you really have to, like I said, you have to test the app. You have to find the rough edges of like, oh, this experience is terrible, or maybe technically this is accessible, but it’s really hard to do what I need to do, and then adjust that because we care about the user experience or we should. But you can go so far as auditors, there are many auditors out there that will test your site and will really go through the checklist and make sure you’re fully compliant. There are certifications out there you can get. But what’s tough is it’s a moving target. Every time you add a new feature, every time you make a change, you might inadvertently have just broken something about the accessibility of your site. So, it needs to be baked into the QA process as well to make sure you’re constantly testing and auditing for these things.

Shane Hastie: And as our styles of interaction change, I'm thinking that for many people today the AI chatbot is becoming the core UI. How do we, as developers, adapt to the new demands?

Adapting to new UI paradigms and accessibility challenges [11:15]

Erin Doyle: Yes. Again, we just have to be constantly testing, constantly learning and looking into those things. That's a great point: this has become sort of a new UI paradigm that we're running into a lot. With many of these chatbots, it's one thing just to figure out, just to learn, how do I even add this AI chatbot, but then you have to dig into making it accessible. We can't just tack on these things and not continue to do our due diligence to make sure the experience is still accessible. Because if this is going to become the main way you interface with these products, you've just taken a site that might have been accessible for a lot of people and now added this huge barrier. It goes along with these overlays. I'm sure you've seen various websites that have this little widget somewhere at the bottom of the page with some sort of accessibility icon on it.

And these overlay products really sell themselves as, we’re going to do all this for you, all you have to do is add this little thing to your code and we’re going to magically make it accessible. You don’t have to be an expert on accessibility, you just throw this in. And that’s really false. There are some ways that these overlays actually make things less accessible. Many of them add in their own screen reader, but it’s not as capable as the screen readers that are either built into the OS or that are natively available. So, they really don’t fix or make anything better. They actually hamper the accessibility. So, there’s no shortcuts. Just like everything else in software development, we have to learn our craft. We have to learn languages and frameworks and tools, and this should just really be one of the things that everybody, well, not everybody, front end developers, that just becomes part of your tool set, becomes part of your skill set.

Shane Hastie: Switching topics slightly, you made the point of your very wide experiences brought you to a state where you are comfortable now building products for engineers in that platform enablement, DevEx type space. What’s different when you are building products for engineers?

Building products for engineers [13:58]

Erin Doyle: Yes. First of all, what drew me there was empathy. I, firsthand, understand that experience when you have a workflow that’s painful, when you have friction in your day-to-day work, your feedback loops are slow or there’s just something awkward or annoying or painful or whatever, that you’re constantly being hampered and impeded and just doing the work that you need to do. I know how frustrating that is. I also know how hard it is to be on a product development team where you’re constantly under schedule pressure. We’ve got this feature we’ve got to get out, we’ve got this looming deadline, we don’t have time to go make this thing better or make this easier for us. So, we’re just going to put up with the pain, put up with these things that slow us down. We’re used to it, whatever. Maybe after the next feature launches, we can come back to it and make it better and that maybe in the future never happens.

And so I’ve experienced that over the years and I was always drawn to taking on those tasks of trying to make those things better for my team, so that my team members can stay heads down, focused on getting those features out, and I can try to help enable them to be more efficient. So, I’ve always felt passionate about that. So, when I had a chance to join a platform team, that was sort of my argument of, I know I don’t have a lot of experience in the Ops space or the DevOps space, I’ve got some, but I think I bring a perspective of what it’s like to be a product development engineer and what it’s like to deal with these things that are constantly getting in my way and what it’s like to work with a platform or DevOps team. What it’s like to be on the other side of the wall.

Understanding developer and platform team dynamics [15:50]

And so I was hoping to bring with that, this empathy, this understanding of the developer perspective. And there are all these stereotypes of when you’re on the dev side, you might be thinking, I really need help with this thing, but if I reach out to platform or DevOps, they’re going to think I’m stupid or they don’t have time. They’re so busy with all this stuff that they have to do, they don’t have time to help me with this thing. Or we speak different languages. They might be saying something that’s totally over my head and that might be a little intimidating. So, I have that. I understand what that’s like, and I have felt hampered in the past from collaborating or asking for help from the DevOps team or the platform team. So, jumping to the other side was really fascinating to see the perspective of the platform team working with developers, developers who are constantly being pushed and rushed and forced to cut corners and thus create problems that the platform team has to solve or SREs have to solve.

And so then there’s this stereotype of devs are lazy, devs are always cutting corners, devs don’t care about quality, or they don’t do their due diligence before they do a thing and they don’t want to collaborate with us. When we ask them to work with us to do something like database upgrades for instance, we need to work with them because they need to test their code to make sure it still works with the thing we’ve upgraded, but they don’t have time for this. They don’t make time for these kinds of things because it’s not on their product roadmap. So, I saw both sides of how we see each other over the wall, but having empathy for what it’s like to actually be in the shoes of either side was super powerful for me. And I was able to explain to my teammates, I know we think that maybe the devs are being lazy or maybe they should know more about this thing that we’re having to help them with, but they don’t have time.

Cognitive load and shifting responsibilities in development [18:06]

They’re taking on such a high cognitive load, especially these days as we’ve added so much in the cloud, as we’ve added so much tooling. We’ve got so many products, we’ve got so many options now and we’ve shifted left so much. The DevOps movement shifted a lot left to developers. The, you build it, you run it. Developers have to be responsible for a lot, a lot of infrastructure, a lot of architectural decisions, security, even as we just spoke about web accessibility. We’ve put a lot on their plates. So, in order to continue to meet their deadlines, they are forced to cut corners. They aren’t able to learn how to use every tool and how to really be knowledgeable about everything in the landscape. So, to explain that to my platform team members of like, we’ve got to keep in mind the context here of what they’re dealing with.

And then on the other end, how can we as a platform team change how we are seeing or interpreted by the developers, so that when they do need help, they’ll ask for it. So, they’re comfortable asking for the help they need. When they have to reach into this space of creating new infrastructure for the feature they need or getting the right amount of observability on some new feature, when they need that help, we need to be approachable, we need to be available or else they’re going to do it on their own. Again, going back to the DevOps movement, we have this concept of devs should just be able to do it all on their own in order to move quickly, but that’s asking too much these days. And so I really wanted to promote this idea of, we can meet you where you’re at. The platform team can stretch over the aisle and meet you where you’re at.

And so if you don’t know how to use these tools, if you’re not knowledgeable about the various infrastructure options available these days, let’s collaborate on it. Let’s work on the architectural design together. Bring us in as a partner, so that we can help you do these things to get that feature ready for production. Instead of the developer shouldering it all and not doing as good a job of it because they’re in an area they’re not experienced in. They’re doing their best, but if they don’t have that deep experience, they’re going to make mistakes, they’re going to miss things. And when they miss, it becomes the platform team or the SRE team’s problem later.

We’re going to have to fix that problem that blows up in production way down the road, or we’re going to find that we’re totally missing observability about something and maybe we have errors or we have problems that we don’t even know are happening or we’re not scaling appropriately. There’s so many outcomes based on just that interaction of devs trying to do more than they really are experienced or set up for success to do, and not feeling comfortable asking for help or collaborating with the platform team. So, building psychological safety, where they feel comfortable, they feel safe, they don’t feel judged, being able to reach out and say, “I really need help with this aspect of this feature I’m working on”.

Shane Hastie: Psychological safety, an incredibly important concept that’s become a buzzword. What does it really mean to create a safe environment where people can ask those slightly embarrassing questions or difficult questions?

Fostering psychological safety and collaboration [22:04]

Erin Doyle: Yes. I found that the more I learn about psychological safety, the more I start to see it everywhere, and I start to see that sort of chain reaction or root cause analysis on negative outcomes being the result of a lack of psychological safety. And it can be complex. It can be subtle because we’re humans. We’re humans interacting with other humans, and we’re all approaching those interactions with preconceived notions and whatever our histories are, whatever our psychological issues are. So, it’s complicated. But I’ve really found, and it’s been hard for me, that the more that I can model vulnerability myself, and especially as I’ve become more senior over the years, the more that I can show that there are things I don’t know or there are mistakes that I make, there’s questions I have, the more I hope I’m creating this environment for other people to feel like they can do the same.

As I was earlier on in my career, as I was just starting to become more senior, I felt a ton of pressure to prove myself, to prove that I deserved the role, and that created a lot of bad behavior. I was a perfectionist. I was a workaholic. If there was something I didn’t know, I couldn’t admit it, and I couldn’t ask for help because then people would know that I didn’t know everything and I wasn’t the best. So, it was really hard to take on all that pressure of like, “Oh, jeez, I don’t know this thing, but I don’t want anybody to know that I don’t know”. So, now I’m going to work extra hours. I’m going to work all weekend to learn this thing and try to present myself as if I knew it all along, and that’s really unhealthy.

And as sort of a negative side effect, I created this model, or I set this bar to my teammates that they had to be perfect too. When I never made any mistakes, I always knew everything, I had all the answers, then they felt, especially the more junior people, they thought like, “Oh, this is the standard, and so I too can’t ask questions. I too can’t ask for help because I guess we don’t do that here”. And so I created a really toxic environment around myself without realizing it. And so as I got a little more senior, a little more experienced, and I finally started hearing people talking about this, I think that’s another thing that’s changed a lot over the years, as more people talk about culture, as more people talk about psychology and working together, the more I was hearing things about psychological safety and how we can impact others with our behavior when we’re not cognizant about the model we’re setting for others.

So, it was really hard for me to go from that perfectionist, I have to be perfect, I have to prove myself to what I have to make mistakes. I have to show people that I don’t know everything, if I want them to feel comfortable working with me, if I want us to be able to collaborate. And I did see examples of where maybe I didn’t make the environment comfortable for others, and I knew that they had questions or I knew that they had things they wanted to say that I’d find out later, that they didn’t say to me or they didn’t ask me. And so I realized, oh, I’m not making people feel comfortable approaching me or being open with me with their thoughts or disagreeing with me. That’s a big thing.

If I don’t make the environment comfortable for people to feel like they can offer an alternate view of something, if I make a statement, if I pose a thought, if I do that in a way that’s too assertive, it could cause other people to feel like, “Oh, well, she must know better than I do”. Or “If I offer this contrary viewpoint, I’m going to sound stupid”. Whatever it may be, whatever that fear is that they’re having in their mind, I’m now not going to hear that alternate viewpoint. I might be wrong. I’m wrong all the time. Maybe there’s just some aspect of this that I’m missing or something that I don’t have personal experience with. And so if I’m shutting off that opportunity to hear that, maybe that was a better idea than what I had, maybe they thought of something that I didn’t and I missed. And so we’re so much better off when we can make people feel comfortable saying, “Oh, well what about this?” Or “Have you thought about that?” Without fear of, I don’t know, it being taken personally, it feeling like conflict.

So, somehow we have to create that environment where it’s not personal, it’s just normal. We can have discourse and it’s comfortable and it’s normal, and it’s how we do our work. But again, you have to sort of plant the seeds. You have to lay the foundation, and that comes from senior people modeling that behavior. Really, I noticed it as I was gaining experience when I would see those people that I looked up to, that I thought were really smart, talented, when I saw that they weren’t perfect, that there was things that they didn’t know and they felt comfortable, they weren’t ashamed, they weren’t apologetic, they were just like, “Hey, I’ve got this question”, or, “Oh, I just broke this thing in prod and just so you know, I’m fixing it”. But it’s not apologetic. It’s not scraping. It’s not, “Oh, I’m so sorry I broke prod. I’m so embarrassed”. We don’t have to be embarrassed. We make mistakes.

And so when you’ve got that attitude of like, “Yes. I made this mistake just FYI. I’m working on it”. It really lowers that barrier for us all to be open about like, “Yep. I made this mistake. Here’s what we’re going to do about it”. Or “I could use some help. I missed this thing”, or “I’m not really knowledgeable about this area that I’m working in, maybe someone else is, maybe they can help me”. And so the more we can just make that normal part of just how we work, the more we can allow space for all those things that if we didn’t have, we’re not doing our best work.

Shane Hastie: Being vulnerable, leading by example. It’s hard.

Modeling vulnerability and leading by example [28:36]

Erin Doyle: Yes. It’s really hard. I almost equate it to, I don’t know if you’ve ever jumped off something high up, jumped off into the water or, well, I guess you’d usually be jumping into water, but if you’ve ever taken a leap, I have a fear of heights, so that’s really scary for me. I’m never going to go skydiving or bungee jumping, but those concepts of, I’m going to take a leap and I have to believe that I’m going to be fine. It’s just this scary little thing that I have to get over, but the belief that I’m going to be fine is what pushes me to take that step. I feel that way all the time. I still to this day feel that way all the time. I’ll have a question or a problem or whatever, and I’ll pause and I’ll think, if I ask this, maybe I’ll sound stupid, this could be embarrassing, whatever.

I still have those thoughts, but then I just remind myself of that’s okay, you know that this is okay, and you know that showing this little bit of vulnerability on a regular basis is going to help someone else. So, that’s what kind of helps me take that leap.

Shane Hastie: I’ve learnt a lot of really good insights and good advice in there. If people want to continue the conversation, where can they find you?

Erin Doyle: Yes. The easiest place is probably LinkedIn. I’m on there. And I do have my own website. It’s got a few blog pages on it, unfortunately not as many as I’d like. And it’s also got whatever talks I’ve done or articles I’ve been featured, whatever I’m doing out in the community is listed there. And that’s just erindoyle.dev.

Shane Hastie: Thank you so much for taking the time to talk to us.

Erin Doyle: Thanks. It’s been a lot of fun.




Article: Distributed Cloud Computing: Enhancing Privacy with AI-Driven Solutions

MMS Founder
MMS Rohit Garg Ankit Awasthi

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • Distributed cloud computing enables efficient data processing across multiple nodes.
  • Privacy-enhanced technologies (PETs) ensure secure data analysis with compliance and protection.
  • AI-powered tools streamline data processing workflows and identify potential security threats.
  • Secure and private cloud computing technologies foster trust among organizations, enabling seamless collaboration.
  • The integration of AI, PETs, and distributed cloud computing revolutionizes data processing and analysis.

As the world becomes increasingly digital, the need for secure and private data processing has never been more pressing. Distributed cloud computing offers a promising solution to this challenge by allowing data to be processed in a decentralized manner, reducing reliance on centralized servers and minimizing the risk of data breaches.

In this article, we’ll explore how distributed cloud computing can be combined with Privacy Enhanced Technologies (PETs) and Artificial Intelligence (AI) to create a robust and secure data processing framework.

What is Distributed Cloud Computing?

Distributed cloud computing is a paradigm that enables data processing to be distributed across multiple nodes or devices, rather than relying on a centralized server. This approach allows for greater scalability, flexibility, and fault tolerance, as well as improved security and reduced latency. Here is a more detailed look at three of these architectures: hybrid cloud, multi-cloud, and edge computing.

  • Hybrid cloud combines on-premises data centers (private clouds) with public cloud services, allowing data and applications to be shared between them. Hybrid cloud offers greater flexibility and more deployment options. It allows businesses to scale their on-premises infrastructure up to the public cloud to handle any overflow, without giving third-party data centers access to the entirety of their data. Hybrid cloud architecture is ideal for businesses that need to keep certain data private but want to leverage the power of public cloud services for other operations. In a hybrid cloud environment, sensitive data may be stored on-premises, while less critical data is processed in the public cloud.
  • Multi-cloud refers to the use of multiple cloud computing services from different providers in a single architecture. This approach avoids vendor lock-in, increases redundancy, and allows businesses to choose the best services from each provider. Companies that want to optimize their cloud environment by selecting specific services from different providers to meet their unique needs can benefit from this tool. However, using multi-cloud can result in data fragmentation, where sensitive information is scattered across different cloud environments, increasing the risk of data breaches and unauthorized access. To mitigate these risks, organizations must implement robust data governance policies, including data classification, access controls, and encryption mechanisms, to protect sensitive data regardless of the cloud provider used.
  • Edge computing brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth. This tool reduces latency, improves performance, and allows for real-time data processing. It is particularly useful for IoT devices and applications that require immediate data processing, such as autonomous vehicles, smart cities, and industrial IoT applications. Edge computing faces a significant security challenge in the form of physical security risks due to remote or public locations of edge devices, which can be mitigated by implementing tamper-evident or tamper-resistant enclosures, and using secure boot mechanisms to prevent unauthorized access, ultimately reducing the risk of physical tampering or theft and ensuring the integrity of edge devices and data.

Distributed cloud computing is enhanced when leveraging PETs, which are designed to protect sensitive information from unauthorized access, while still allowing for secure data processing across distributed systems.

PETs

PETs offer powerful tools for preserving individual privacy while still allowing for data analysis and processing. From homomorphic encryption to secure multi-party computation, these technologies have the potential to transform the way we process data. 

To illustrate the practical application of these powerful privacy-preserving tools, let’s examine some notable examples of PETs in action, such as Amazon Clean Rooms, Microsoft Azure Purview, and Meta’s Conversions API Gateway.

Amazon Clean Rooms

Amazon Clean Rooms is a secure environment within AWS that enables multiple parties to collaborate on data projects without compromising data ownership or confidentiality. Amazon  provides a virtual “clean room” where data from different sources can be combined, analyzed, and processed without exposing sensitive information. Their framework leverages differential privacy features, which add noise to data queries to prevent the identification of individual data points and maintain privacy even when data is aggregated. Additionally, secure aggregation techniques are employed involving combining data in a way that individual data points cannot be discerned, often through methods like homomorphic encryption or secure multi-party computation (MPC) that allow computations on encrypted data without revealing it.

The core idea behind Amazon Clean Rooms is to create a trusted environment by leveraging AWS Nitro Enclaves, which are a form of Trusted Execution Environment (TEE). Clean rooms provide a secure area within a processor to execute code and process data, protecting sensitive data from unauthorized access. Data providers can share their data with other parties, such as researchers, analysts, or developers, without risking data breaches or non-compliance with regulations.

In a healthcare scenario, Amazon Clean Rooms can facilitate collaboration among different healthcare providers by allowing them to share and analyze anonymized patient data to identify trends in a specific disease without compromising patient privacy. For instance, multiple hospitals could contribute anonymized datasets containing patient demographics, symptoms, treatment outcomes, and other relevant information into a clean room. Using differential privacy, noise is added to the data queries, ensuring that individual patient identities remain protected even as aggregate trends are analyzed. 
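
To give a flavor of the noise addition that differential privacy relies on, here is a generic sketch of the Laplace mechanism applied to a count query. This is a textbook illustration, not Amazon Clean Rooms' actual implementation, and the epsilon and sensitivity values are placeholders.

```java
import java.util.Random;

public class LaplaceMechanism {
    private final Random rng = new Random();

    /** Draw Laplace(0, scale) noise via inverse transform sampling. */
    private double laplaceNoise(double scale) {
        double u = rng.nextDouble() - 0.5;            // uniform in (-0.5, 0.5)
        return -scale * Math.signum(u) * Math.log(1 - 2 * Math.abs(u));
    }

    /**
     * Return a differentially private count. For a counting query the
     * sensitivity is 1 (adding or removing one record changes the count by
     * at most 1), so the noise scale is sensitivity / epsilon.
     */
    public double privateCount(long trueCount, double epsilon) {
        double sensitivity = 1.0;
        return trueCount + laplaceNoise(sensitivity / epsilon);
    }

    public static void main(String[] args) {
        LaplaceMechanism dp = new LaplaceMechanism();
        // Example: number of patients matching a query, released with epsilon = 0.5.
        System.out.println(dp.privateCount(1_042, 0.5));
    }
}
```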

Secure aggregation techniques, such as homomorphic encryption and secure multi-party computation, enable computations on this encrypted data, allowing researchers to identify patterns or correlations in disease progression or treatment efficacy without accessing raw patient data. This collaborative analysis can lead to valuable insights into disease trends, helping healthcare providers improve treatment strategies and patient outcomes while maintaining strict compliance with privacy regulations.

This kind of secure, collaborative analysis is made possible by a combination of advanced security features, including:

  • Data encryption both in transit and at rest, ensuring that only authorized parties can gain access
  • Fine-grained access controls, ensuring that each party can only use the data for which they are authorized
  • Auditing and logging of all activities within the clean room for a clear trail of data access and use

Microsoft Azure Purview

Microsoft Azure Purview is a cloud-native data governance and compliance solution that helps organizations manage and protect their data across multiple sources, including on-premises, cloud, and hybrid environments. It provides a unified platform for data governance, discovery, classification, and compliance, enabling organizations to monitor and report on regulatory requirements such as General Data Protection Regulation (GDPR), Health Insurance Portability and Accountability Act (HIPAA), and the California Consumer Privacy Act (CCPA). With features including automated data discovery and classification, data lineage and visualization, and risk management, Azure Purview improves data governance and compliance, enhances data security and protection, increases transparency and visibility into data usage, and simplifies data management and reporting.

  • Data classification. Azure Purview data classification employs a hybrid approach, combining Microsoft Information Protection (MIP) SDK and Azure Machine Learning (AML) to identify sensitive data. It leverages content inspection APIs to extract features from data stores, which are then matched against predefined classification rules or machine learning models (e.g., Support Vector Machines (SVMs) and Random Forests) to assign classification labels (e.g., “Confidential” and “Sensitive”) and corresponding sensitivity levels (low to high). This enables targeted security controls and compliance with regulatory requirements.
  • Data lineage. Azure Purview’s data lineage tracks the origin, processing, and movement of data across Azure resources. It constructs a graph from metadata sources like Azure Data Factory and Azure Databricks, illustrating relationships between data assets. This relationship illustration helps users to identify potential privacy risks, ensure compliance, and detect sensitive data misuse by traversing the graph and visualizing data flows.
  • Integration with PETs. While Azure Purview itself is not a PET, it can integrate with other tools and technologies that enhance data privacy. For example, it can work alongside encryption tools like Azure Key Vault, access control mechanisms like Azure Active Directory (AAD), and anonymization techniques like k-anonymity and differential privacy. By providing a unified view of data governance and compliance, Azure Purview makes it easier to implement and manage these PETs, ensuring that data privacy is maintained throughout its lifecycle.

Meta’s Conversions API Gateway

Meta’s Conversions API Gateway is a distributed cloud computation framework that focuses on user data privacy and security. It is designed to comply with regulations, helping advertisers and app developers establish trust with their users. By installing it in their managed cloud environments, users maintain control over not just their data but the underlying infrastructure as well.

The platform integrates security and data management by utilizing role-based access control (RBAC) to create a policy workflow. This workflow enables users and advertisers to effectively manage the information they share with third-party entities. By implementing access controls and data retention policies, the platform ensures that sensitive data is safeguarded against unauthorized access, thereby complying with regulatory standards like the General Data Protection Regulation.

Having explored some key examples of PETs, it’s insightful to consider their current level of real-world application. Based on industry research, the following data provides an overview of the adoption rates of various PETs.

Adoption rates of PETs

| Technology | Description | Adoption Rate | What is the adoption about? |
| --- | --- | --- | --- |
| Homomorphic Encryption (HE) | Enables computations on encrypted data without decryption | 22% | Companies adopting HE to protect sensitive data in cloud storage and analytics |
| Zero-Knowledge Proofs (ZKP) | Verifies authenticity without revealing sensitive information | 18% | Organizations using ZKP for secure authentication and identity verification |
| Differential Privacy (DP) | Protects individual data by adding noise to query results | 25% | Data-driven companies adopting DP to ensure anonymized data analysis and insights |
| Secure Multi-Party Computation (SMPC) | Enables secure collaboration on private data | 12% | Businesses using SMPC for secure data sharing and collaborative research |
| Federated Learning (FL) | Trains AI models on decentralized, private data | 30% | Companies adopting FL to develop AI models while preserving data ownership and control |
| Trusted Execution Environments (TEE) | Provides secure, isolated environments for sensitive computations | 20% | Organizations using TEE to protect sensitive data processing and analytics |
| Anonymization Techniques (e.g., k-anonymity) | Masks personal data to prevent reidentification | 40% | Companies adopting anonymization techniques to comply with data protection regulations |
| Pseudonymization Techniques (e.g., tokenization) | Replaces sensitive data with pseudonyms or tokens | 35% | Businesses using pseudonymization techniques to reduce data breach risks and protect customer data |
| Amazon Clean Rooms | Enables secure, collaborative analysis of sensitive data in a controlled environment | 28% | Companies using Amazon Clean Rooms for secure data collaboration and analysis in regulated industries |
| Microsoft Azure Purview | Provides unified data governance and compliance management across multiple sources | 32% | Organizations adopting Azure Purview to streamline data governance, compliance, and risk management |


The adoption rates illustrate the growing importance of privacy-preserving techniques in distributed environments. Now, let’s explore how AI can be integrated into this landscape to enable more intelligent decision-making, automation, and enhanced security within distributed cloud computing and PET frameworks.

AI in Distributed Cloud Computing

AI has the potential to play a game-changing role in distributed cloud computing and PETs. By enabling intelligent decision-making and automation, AI algorithms can help us optimize data processing workflows, detect anomalies, and predict potential security threats. AI has been instrumental in helping us identify patterns and trends in complex data sets. We’re excited to see how it will continue to evolve in the context of distributed cloud computing. For instance, homomorphic encryption allows computations to be performed on encrypted data without decrypting it first. This means that AI models can process and analyze encrypted data without accessing the underlying sensitive information. 

Similarly, AI can be used to implement differential privacy, a technique that adds noise to the data to protect individual records while still allowing for aggregate analysis. In anomaly detection, AI can identify unusual patterns or outliers in data without requiring direct access to individual records, ensuring that sensitive information remains protected.

While AI offers powerful capabilities within distributed cloud environments, the core value proposition of integrating PETs remains in the direct advantages they provide for data collaboration, security, and compliance. Let’s delve deeper into these key benefits, challenges and limitations of PETs in distributed cloud computing.

Benefits of PETs in Distributed Cloud Computing

PETs, like Amazon Clean Rooms, offer numerous benefits for organizations looking to collaborate on data projects while maintaining regulatory compliance. Some of the key advantages include:

  • Improved data collaboration. Multiple parties work together on data projects, fostering innovation and driving business growth.
  • Enhanced data security. The secure environment ensures that sensitive data is protected from unauthorized access or breaches.
  • Regulatory compliance. Organizations can ensure compliance with various regulations and laws governing data sharing and usage.
  • Increased data value. By combining data from different sources, organizations can gain new insights and unlock new business opportunities.

The numerous benefits of integrating PETs within distributed cloud environments pave the way for a wide range of practical applications. Before exploring those use cases, it is worth understanding the limitations and challenges that come with implementing PETs.

Limitations and Challenges

Despite their benefits, implementing PETs can be complex and challenging. Here are some of the key limitations and challenges:

  • Scalability and performance. PETs often require significant computational resources, which can impact performance and scalability; as data volumes grow, PETs may struggle to maintain efficiency. For example, homomorphic encryption, which allows computations on encrypted data, can be computationally intensive, a major limitation for real-time applications and large datasets.
  • Interoperability and standardization. Different PETs may have varying levels of compatibility, making it difficult to integrate them into existing systems. Lack of standardization can hinder widespread adoption and limit the effectiveness of PETs.
  • Balancing privacy and utility. PETs often involve trade-offs between privacy and utility; finding the right balance is crucial. Organizations must carefully consider the implications of PETs on business operations and decision-making.
  • Data quality and accuracy. PETs rely on high-quality data to function effectively; poor data quality can compromise their accuracy. Ensuring data accuracy and integrity is critical to maintaining trust in PETs.
  • Regulatory compliance and governance. PETs must comply with various regulations, such as GDPR and CCPA, which can be time-consuming and costly. Ensuring governance and accountability in PET implementation is essential to maintain trust and credibility.

Use Cases

Distributed cloud computing PET frameworks can be applied to a wide range of use cases, including:

  • Marketing analytics. Marketers can use PETs to analyze customer data from different sources, such as social media, website interactions, or purchase history, to gain a deeper understanding of customer behavior and preferences. Combining demographic, behavioral, and preference data supports targeted marketing campaigns and better customer engagement. Rather than centralizing these data sets, teams can use federated learning to train models where the data already resides.
  • Financial analysis. Financial institutions can use AI in distributed cloud computing to analyze financial data from different sources, such as transaction records, credit reports, or market data, to identify trends and opportunities. To preserve customer privacy, the institution uses differential privacy to add noise to the data before feeding it into the AI model.
  • Healthcare analytics. Healthcare organizations can use Amazon Clean Rooms and AI to analyze patient data from different sources, such as electronic health records, medical imaging, or claims data, to improve patient outcomes and reduce costs. Instead of centralizing patient records, hospitals can use federated learning to train models on the decentralized data stored at each site (see the sketch after this list).
  • Media and streaming. Major video streaming platforms demonstrate practical applications of privacy-enhanced distributed computing. Netflix and Disney+ use edge computing for localized content delivery and regional data compliance. YouTube applies differential privacy for secure viewer analytics and recommendations. Hulu implements federated learning across devices to improve streaming quality without centralizing user data.
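As referenced in the healthcare item above, here is a minimal federated averaging (FedAvg) sketch across hypothetical sites. The toy linear model, the synthetic data, and the training constants are assumptions for illustration; the point of the pattern is that only model weights leave each site, never the raw records.

# Minimal sketch: a federated-averaging loop over hypothetical sites (e.g., hospitals).
# Assumptions: toy linear model and synthetic data; only weights are shared, never raw data.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    # A few local gradient-descent steps on one site's private data.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Three sites hold private (features, labels) data that is never centralized.
true_w = np.array([0.5, -1.0, 2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(40, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=40)
    sites.append((X, y))

global_w = np.zeros(3)
for _ in range(10):
    local_weights = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_weights, axis=0)  # the server averages weights, not data

print("Recovered weights:", np.round(global_w, 2), "target:", true_w)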

Summary

Distributed cloud computing, combined with PETs and AI, offers a robust framework for secure and private data processing. By decentralizing data processing across multiple nodes, this approach reduces reliance on centralized servers, enhancing scalability, flexibility, fault tolerance, and security while minimizing latency and the risk of data breaches. PETs, such as homomorphic encryption and secure multi-party computation, enable secure data analysis without compromising individual privacy, transforming how data is handled. 

Looking ahead, future developments may include integrating edge computing to enhance real-time data processing, exploring quantum computing applications for complex problem-solving and cryptography, developing autonomous data management systems that utilize AI and machine learning, creating decentralized data marketplaces that leverage blockchain technology, and incorporating human-centered design principles to prioritize data privacy and security.

As Satya Nadella, CEO of Microsoft, has put it: “The future of cloud computing is not just about technology; it’s about trust.”



Analysts’ Recent Ratings Changes for MongoDB (MDB) – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Several brokerages have updated their recommendations and price targets on shares of MongoDB (NASDAQ: MDB) in the last few weeks:

  • 4/23/2025 – MongoDB had its price target lowered by analysts at Piper Sandler from $280.00 to $200.00. They now have an “overweight” rating on the stock.
  • 4/17/2025 – MongoDB was upgraded by analysts at Redburn Atlantic from a “sell” rating to a “neutral” rating. They now have a $170.00 price target on the stock.
  • 4/16/2025 – MongoDB had its price target lowered by analysts at Morgan Stanley from $315.00 to $235.00. They now have an “overweight” rating on the stock.
  • 4/15/2025 – MongoDB had its price target lowered by analysts at Mizuho from $250.00 to $190.00. They now have a “neutral” rating on the stock.
  • 4/11/2025 – MongoDB had its price target lowered by analysts at Stifel Nicolaus from $340.00 to $275.00. They now have a “buy” rating on the stock.
  • 4/1/2025 – MongoDB was upgraded by analysts at Daiwa America to a “strong-buy” rating.
  • 4/1/2025 – MongoDB is now covered by analysts at Daiwa Capital Markets. They set an “outperform” rating and a $202.00 price target on the stock.
  • 4/1/2025 – MongoDB had its price target lowered by analysts at Citigroup Inc. from $430.00 to $330.00. They now have a “buy” rating on the stock.
  • 3/31/2025 – MongoDB had its price target lowered by analysts at Truist Financial Co. from $300.00 to $275.00. They now have a “buy” rating on the stock.
  • 3/7/2025 – MongoDB had its price target lowered by analysts at Macquarie from $300.00 to $215.00. They now have a “neutral” rating on the stock.
  • 3/6/2025 – MongoDB had its “buy” rating reaffirmed by analysts at Citigroup Inc.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at Barclays PLC from $330.00 to $280.00. They now have an “overweight” rating on the stock.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at Piper Sandler from $425.00 to $280.00. They now have an “overweight” rating on the stock.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at Morgan Stanley from $350.00 to $315.00. They now have an “overweight” rating on the stock.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at Oppenheimer Holdings Inc. from $400.00 to $330.00. They now have an “outperform” rating on the stock.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at Royal Bank of Canada from $400.00 to $320.00. They now have an “outperform” rating on the stock.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at Wedbush from $360.00 to $300.00. They now have an “outperform” rating on the stock.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at Robert W. Baird from $390.00 to $300.00. They now have an “outperform” rating on the stock.
  • 3/6/2025 – MongoDB was downgraded by analysts at Wells Fargo & Company from an “overweight” rating to an “equal weight” rating. They now have a $225.00 price target on the stock, down previously from $365.00.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at The Goldman Sachs Group, Inc. from $390.00 to $335.00. They now have a “buy” rating on the stock.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at Stifel Nicolaus from $425.00 to $340.00. They now have a “buy” rating on the stock.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at Truist Financial Co. from $400.00 to $300.00. They now have a “buy” rating on the stock.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at Bank of America Co. from $420.00 to $286.00. They now have a “buy” rating on the stock.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at Canaccord Genuity Group Inc. from $385.00 to $320.00. They now have a “buy” rating on the stock.
  • 3/6/2025 – MongoDB had its price target lowered by analysts at Needham & Company LLC from $415.00 to $270.00. They now have a “buy” rating on the stock.
  • 3/5/2025 – MongoDB had its “sector perform” rating reaffirmed by analysts at Scotiabank. They now have a $240.00 price target on the stock, down previously from $275.00.
  • 3/5/2025 – MongoDB is now covered by analysts at Cantor Fitzgerald. They set an “overweight” rating and a $344.00 price target on the stock.
  • 3/5/2025 – MongoDB was downgraded by analysts at KeyCorp from a “strong-buy” rating to a “hold” rating.
  • 3/4/2025 – MongoDB was given a new $350.00 price target by analysts at UBS Group AG.
  • 3/4/2025 – MongoDB had its “buy” rating reaffirmed by analysts at Rosenblatt Securities. They now have a $350.00 price target on the stock.
  • 3/3/2025 – MongoDB was upgraded by analysts at Monness Crespi & Hardt from a “sell” rating to a “neutral” rating.
  • 3/3/2025 – MongoDB had its price target lowered by analysts at Loop Capital from $400.00 to $350.00. They now have a “buy” rating on the stock.

MongoDB Stock Up 6.5%

NASDAQ MDB opened at $173.21 on Friday. MongoDB, Inc. has a 12-month low of $140.78 and a 12-month high of $387.19. The firm’s 50-day moving average is $197.65 and its 200-day moving average is $249.73. The company has a market capitalization of $14.06 billion, a PE ratio of -63.22 and a beta of 1.49.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The company had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. During the same quarter in the prior year, the business earned $0.86 earnings per share. As a group, sell-side analysts expect that MongoDB, Inc. will post -1.78 EPS for the current year.

Insider Activity


In other MongoDB news, CAO Thomas Bull sold 301 shares of the stock in a transaction dated Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total value of $52,148.25. Following the transaction, the chief accounting officer now directly owns 14,598 shares of the company’s stock, valued at $2,529,103.50. This represents a 2.02% decrease in their position. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which is available through the SEC website. Also, CEO Dev Ittycheria sold 8,335 shares of the firm’s stock in a transaction that occurred on Tuesday, January 28th. The stock was sold at an average price of $279.99, for a total transaction of $2,333,716.65. Following the sale, the chief executive officer now owns 217,294 shares of the company’s stock, valued at $60,840,147.06. This trade represents a 3.69% decrease in their ownership of the stock. The disclosure for this sale can be found here. In the last ninety days, insiders have sold 47,680 shares of company stock valued at $10,819,027. 3.60% of the stock is owned by company insiders.

Institutional Inflows and Outflows

A number of institutional investors and hedge funds have recently added to or reduced their stakes in MDB. Vanguard Group Inc. boosted its stake in shares of MongoDB by 0.3% in the fourth quarter. Vanguard Group Inc. now owns 7,328,745 shares of the company’s stock valued at $1,706,205,000 after buying an additional 23,942 shares during the period. Franklin Resources Inc. boosted its stake in shares of MongoDB by 9.7% in the 4th quarter. Franklin Resources Inc. now owns 2,054,888 shares of the company’s stock valued at $478,398,000 after purchasing an additional 181,962 shares in the last quarter. Geode Capital Management LLC boosted its position in MongoDB by 1.8% in the fourth quarter. Geode Capital Management LLC now owns 1,252,142 shares of the company’s stock valued at $290,987,000 after buying an additional 22,106 shares in the last quarter. First Trust Advisors LP boosted its holdings in shares of MongoDB by 12.6% during the 4th quarter. First Trust Advisors LP now owns 854,906 shares of the company’s stock valued at $199,031,000 after acquiring an additional 95,893 shares in the last quarter. Finally, Norges Bank bought a new stake in MongoDB during the 4th quarter worth $189,584,000. 89.29% of the stock is owned by institutional investors and hedge funds.

MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news
