MongoDB, Inc. (NASDAQ:MDB) Director Sells $272,970.00 in Stock – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) Director Dwight A. Merriman sold 1,000 shares of the stock in a transaction dated Thursday, October 10th. The stock was sold at an average price of $272.97, for a total transaction of $272,970.00. Following the completion of the sale, the director now directly owns 1,130,006 shares in the company, valued at $308,457,737.82. This trade represents a 0.09% decrease in their ownership of the stock. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which can be accessed through the SEC website.

MongoDB Stock Down 1.5%

MDB traded down $4.48 during trading on Tuesday, hitting $284.66. The stock had a trading volume of 679,636 shares, compared to its average volume of 1,448,085. The company has a market capitalization of $20.88 billion, a PE ratio of -102.90 and a beta of 1.15. MongoDB, Inc. has a twelve month low of $212.74 and a twelve month high of $509.62. The company has a debt-to-equity ratio of 0.84, a quick ratio of 5.03 and a current ratio of 5.03. The stock has a 50-day simple moving average of $266.49 and a 200-day simple moving average of $286.66.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings data on Thursday, August 29th. The company reported $0.70 EPS for the quarter, topping the consensus estimate of $0.49 by $0.21. MongoDB had a negative net margin of 12.08% and a negative return on equity of 15.06%. The business had revenue of $478.11 million during the quarter, compared to the consensus estimate of $465.03 million. During the same period in the previous year, the firm posted ($0.63) earnings per share. The business’s revenue for the quarter was up 12.8% compared to the same quarter last year. On average, research analysts predict that MongoDB, Inc. will post ($2.44) earnings per share for the current year.

Institutional Investors Weigh In On MongoDB

Several institutional investors and hedge funds have recently made changes to their positions in MDB. Vanguard Group Inc. raised its holdings in shares of MongoDB by 1.0% during the first quarter. Vanguard Group Inc. now owns 6,910,761 shares of the company’s stock worth $2,478,475,000 after purchasing an additional 68,348 shares during the last quarter. Jennison Associates LLC raised its holdings in shares of MongoDB by 14.3% during the first quarter. Jennison Associates LLC now owns 4,408,424 shares of the company’s stock worth $1,581,037,000 after purchasing an additional 551,567 shares during the last quarter. Swedbank AB raised its holdings in shares of MongoDB by 156.3% during the second quarter. Swedbank AB now owns 656,993 shares of the company’s stock worth $164,222,000 after purchasing an additional 400,705 shares during the last quarter. Champlain Investment Partners LLC raised its holdings in shares of MongoDB by 22.4% during the first quarter. Champlain Investment Partners LLC now owns 550,684 shares of the company’s stock worth $197,497,000 after purchasing an additional 100,725 shares during the last quarter. Finally, Clearbridge Investments LLC raised its holdings in shares of MongoDB by 109.0% during the first quarter. Clearbridge Investments LLC now owns 445,084 shares of the company’s stock worth $159,625,000 after purchasing an additional 232,101 shares during the last quarter. 89.29% of the stock is currently owned by institutional investors and hedge funds.

Wall Street Analysts Weigh In

A number of equities analysts have recently commented on MDB shares. Sanford C. Bernstein raised their target price on shares of MongoDB from $358.00 to $360.00 and gave the stock an “outperform” rating in a research report on Friday, August 30th. Mizuho raised their target price on shares of MongoDB from $250.00 to $275.00 and gave the stock a “neutral” rating in a research report on Friday, August 30th. Truist Financial raised their target price on shares of MongoDB from $300.00 to $320.00 and gave the stock a “buy” rating in a research report on Friday, August 30th. Stifel Nicolaus raised their target price on shares of MongoDB from $300.00 to $325.00 and gave the stock a “buy” rating in a research report on Friday, August 30th. Finally, Wells Fargo & Company raised their target price on shares of MongoDB from $300.00 to $350.00 and gave the stock an “overweight” rating in a research report on Friday, August 30th. One equities research analyst has rated the stock with a sell rating, five have issued a hold rating and twenty have assigned a buy rating to the company. According to MarketBeat, the stock presently has an average rating of “Moderate Buy” and an average target price of $337.96.


About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.




MongoDB Director Dwight A. Merriman Sells Shares – TradingView


  • Reporter: Merriman Dwight A
  • Relationship: Director
  • Type: Sell
  • Amount: $671,600
  • SEC Filing: Form 4

MongoDB Director Dwight A. Merriman sold a total of 2,385 shares of Class A Common Stock on October 10 and October 15, 2024, for a total sale amount of $671,600. The sales were conducted at prices of $272.97 and $287.82 per share, respectively. Following these transactions, Merriman directly owns 1,130,006 shares and indirectly owns 611,959 shares through the Dwight A. Merriman Charitable Foundation and by trust for his children’s benefit.

SEC Filing: MongoDB, Inc. [ MDB ] – Form 4 – Oct. 15, 2024



Presentation: Mind Your Language Models: An Approach to Architecting Intelligent Systems

MMS Nischal HP

Article originally posted on InfoQ.

Transcript

HP: I think I’m going to try and elaborate on our journey of how to architect systems with large language models. Let’s just examine the timeline a bit. We’ve seen a whole lot of discussion around large language models in the past year, and a little bit before that. It’s not a miracle that large language models happened; there’s been a lot of work going on that they build on top of. The thing that you see with ChatGPT, Gemini, and the likes is that the models are just way bigger, and they have a lot of data that’s been fed to them.

Hence, they do what they do, which is amazing. November 2022: DALL·E had come out a few months before that, and then ChatGPT got released. There was a lot of buzz. We thought it was hype. It was also a time when we saw the tech industry going through ups and downs, and we just said, “This is hype. It’s not going to last. This too shall pass, and we go back to our normal lives.” In February 2023 there was a lot of buzz in the market because people were wondering if the tech giants were going to run away again with monopolizing AI, only to realize Meta had different plans. Ines’s talk was about how big tech will not monopolize AI right now.

I think it’s a wave that we are all witnessing. Open source took over, and we started seeing some really interesting adaptations of large language models and fine-tuning on Llama. We sat there as an organization, and we said, it’s probably the era of POCs, and we might have to do something here ourselves. In December 2023, we asked ourselves, is this hype? Is this going to continue? We saw that the revenues of companies were starting to go up.

OpenAI grew from $200 million to $1.6 billion in revenue. NVIDIA’s quarterly revenue reached $18.1 billion, and I think their market cap is bigger than ever. Everyone’s using ChatGPT and Gemini for their work. Everybody in the organization, be it marketing or sales, is making use of these tools. Deepfakes started to become a thing, to a point where a finance worker lost his company $25 million because he thought he got a message from his boss, and he just signed off on a $25 million transfer.

We’re all sitting here, and we’re being tasked to bring this AI into our organizations, and we’re wondering, what do we do? 2024 and beyond, AI occupies the spotlight. As I can see from everyone, all of you are interested in understanding how to enable this. The investments continue. You can see that Amazon starts to invest in Anthropic. Inflection AI is now part of Microsoft. We had Jay from Cohere, who was doing some incredible stuff in that space. We have to ask ourselves, how can we bring something that is this powerful in a safe and a reliable way to our customers, and also without losing speed to actually innovate?

Global AI market is going to grow. The research says that it’s going to grow to $1.8 trillion. Currently it’s close to $200 billion. Nine out of 10 organizations think that AI is going to give them competitive advantage. Around 4 in 5 companies think AI is their top priority. This is what the landscape looks like. There’s, of course, a little bit of hype, but everybody is investing across the entire landscape of machine learning, AI, and data. There are risks that come along with it, and organizations are thinking about it. There are a lot of organizations that understand the risk and are working towards it.

There are organizations that have absolutely no idea how to go about working with this risk, so they’re just sitting there waiting for other organizations to implement and move forward. People are rebuilding their engine rooms. You can see that more than 60% of people are running either POCs or they’re already bringing this to widespread adoption to their customers. We need to strike a balance, and that’s what the talk’s going to be about.

There’s a lot of enthusiasm around AI. The AI revolution is here to stay. It will change everything that we do. Deploying AI in production is quite complex, and it requires more than just thinking about it as a POC. How do we strike a balance? I’m sure a lot of you who started thinking about LLMs came across the image of Transformers and the “Attention Is All You Need” paper. The AI landscape is changing every day, so probably the things I’m telling you right now will be outdated the moment I finish the talk. Not everything requires your attention, and you don’t have to change your entire stack every single day. The goal is: how can you build a stack that gives you the flexibility to innovate and at the same time drives value for your customers?

Background, and Outline

The purpose of my talk. Everybody is talking about LLMs being awesome. I’m going to talk about everything that can possibly go wrong with LLMs, and our journey through the last 16 to 18 months, and what it took for us to bring this into production, and about the effort, people, time, money, and also a few meltdowns that we had. I’m going to be a bit opinionated, because this is some of the work that we’ve done. I want you all to take this with a grain of salt, because there’s probably other versions out there that you might have to take a look at in terms of how to do this.

I’m Nischal. I’m the VP of Data Science and ML Engineering at Scoutbee. I’m based out of Berlin. I’ve been in the ML space for a little bit over 13 years; for the last 7 years, it’s mostly been in the insurance and supply chain space. This is the overview that we’ll look at: enabling LLMs in our product; improving the quality of conversation; improving the quality of and trust in results; improving data coverage and quality with ground truth data; then, summary and takeaways. I’ll go step by step and peel the onion one layer at a time.

Case Study: Supplier Discovery (Semantic Search)

Case study. I spent a lot of time thinking about what I’m going to talk about. I thought I’d talk about some use case that every one of you can relate to. Then I didn’t find enough conviction, because it’s not a use case that I worked on. So I thought, let’s talk about what we actually did as a company. We work in the supply chain space, and we help organizations such as Unilever, Walmart, and organizations of that size to do supplier discovery. Essentially, it is a Google Search for suppliers, but Google does not work for them, because Google is not adapted to helping them in the supply chain space. There are a lot more nuances to understanding and thinking about supply chains. I’m sure all of you have been facing some disruptions, all the way from getting GPUs in your AWS data centers, to not finding toilet paper or pasta in your supermarkets when COVID happened.

Supply chain has an impact on all of us, and every manufacturer is dependent on other manufacturers. They want to find manufacturers they can work with, not just for handling disruptions, but also to mitigate different kinds of risks and work with innovative manufacturers. The challenge was that this is not a product we just brought to market. We’ve been a market leader for the past 6, 7 years, so we had different generations of this product, and we’ve been using ML for quite some time. We thought this would be a good way for us to bring large language models in as a new generation of our product.

Efficiency and effectiveness, so why are LLMs being sought after? Why are enterprises rebuilding their engine room? Efficiency is something that all of us have been working towards, irrespective of which domain we are in. We want to make things faster. We want to make things more economical. We want to get people to be able to solve their tasks. The thing that we see with LLMs is, it’s a part of that, which is efficiency.

It’s also part of effectiveness, because it’s now going to enable organizations or people working in the organizations to do things they could not do before. That would mean you ask a question or you come with a problem statement, rather than looking at static dashboards to tell you what you’re supposed to do. Then, based on the question that you ask, the LLM along with your product, figures out what data to bring in and helps you solve that problem. We’re going to try and see how we can augment organizations to do this.

Stage 1 – Enabling LLMs in Our Product

Stage 1, enabling LLMs in our product. We did what every organization probably did or starts to do right now: enable your application, connect it to ChatGPT or one of these providers through an API, and essentially, you’re good to go. We did the API. We did some prompt engineering with LangChain. We connected it to ChatGPT’s API. This was, I think, January 2023. We put some credits in there, and we started using it. The stack on the right is something that we’ve had for a while: we were working largely with knowledge graphs. We had smaller machine learning models.

At the time, they were considered big, but now they’re comparatively way smaller. We did some distributed inferencing with Spark to populate these knowledge graphs. When we did this, we asked ourselves, what could go wrong? Because it was very fast for us to do this, and it didn’t cost us a lot of time and money, we said, let’s see what our customers think. The one thing that stood out immediately was the lack of domain knowledge. The foundational models did not know what supplier discovery was about, and the conversations went haywire.
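Mechanically, that stage-1 setup was very thin, which is why it was so fast to build. A minimal sketch, where the prompt text and the stubbed API call stand in for the actual LangChain/ChatGPT wiring (all names here are illustrative, not the production code):

```python
# Minimal sketch of a stage-1 setup: a prompt template wrapped around a
# hosted chat-completion API. The template text and the stub are illustrative.

PROMPT_TEMPLATE = (
    "You are a supplier-discovery assistant.\n"
    "User request: {request}\n"
    "Answer with relevant suppliers only."
)

def call_llm(prompt: str) -> str:
    # Stand-in for the real API call (e.g. an OpenAI chat completion).
    # In stage 1 this hosted endpoint was the only LLM piece we owned.
    return f"[LLM reply to: {prompt.splitlines()[1]}]"

def answer(request: str) -> str:
    prompt = PROMPT_TEMPLATE.format(request=request)
    return call_llm(prompt)
```

Everything else (domain knowledge, grounding, guardrails) is absent, which is exactly what the customer feedback surfaced.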

People started asking questions related to their domain, and the foundational models started taking them off on a wild quest, answering general questions about life. It became very chatty as well, and people were tired. People used our application and went, “Can we just bring the old form back? This is just so much. I don’t want to have this conversation.” We did see that the results coming up were hallucinations. They felt so real that for a second, when we were testing the system, we looked at them and went, is this really a supplier or a manufacturer we don’t know about? They were results fabricated by the large language model.

The other part, and I’m not sure how many of you are dealing with enterprise security and privacy, is that a lot of the customers that we worked with were a little bit on edge. They said, a POC is fine to use with somebody like ChatGPT or some of these providers, but we don’t want to do production workloads or use this product in production if it’s integrated there, because we’re concerned about what they’re going to use the data for.

First, we thought, is there really a market for us to bring LLMs to our product? Does our product really need LLMs? What we got as feedback from the users was that they really enjoyed the experience. They were excited to use more of the product, and wanted more of it. We realized, there’s a big market we’ve identified for a new generation of the product. There were lots of things that we had to solve before we wanted to get there. First, we had to focus on efficiency even before effectiveness. We absolutely needed domain adaptation. We had to remove hallucinations. We had to build trust and reliability. We needed guardrails. We needed it to be a little less chatty. This was the outcome of stage 1.

One of our users said, we have an issue with importing coffee due to the conflict in the Suez Canal; we need to find sustainable, fair market certified coffee beans from South America. The foundational model replied, Company Abc from Brazil has the best coffee ever. Coffee is very good for health. Coffee beans can be roasted. We said, “Yes, this is awesome. This sounds nice, but I don’t think our customers will pay us money to use this product.”

Stage 2 – Bringing Open-Source LLMs, Domain Adaptation, and Guardrails

We said, stage 2, the first thing we wanted to tackle was, we didn’t want to go ahead without knowing if we can actually host a large language model ourselves. Because if we did all the development and then realized that data privacy and security was a big concern, all of our work would just go down the drain. The first thing that we did was we brought in an open-source LLM. As you can see, the stack just got a little bit bigger. We brought in LLaMA-13B, which was first dropped on a Torrent somewhere, and then finally made its way to Hugging Face. We put an API on top of it, called FastChat API. We had an LLM API that we were working with.

One thing that is also very common: even though there are plenty of large language models right now, you have to pick and choose your large language model carefully, because the prompt engineering work that you do for language model A will not fit language model B. All of the effort that you put into prompt engineering will have to change the moment you change your large language model.

We had to make some adaptations from ChatGPT’s prompts to work with Llama. The thing that we realized was, our cost and complexity just went up. We were now responsible for a new piece of software in our stack that we really didn’t want to be, but we had to because of the domain that we operate in. It was way more expensive than using an API from any of these providers.
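To make the portability problem concrete: the same instruction has to be rendered very differently for OpenAI-style role messages versus, say, the Llama 2 chat template. (A sketch; the Llama 13B model mentioned above actually predates the Llama 2 template, so this is just an illustration of the general point.)

```python
# The same system + user content rendered for two model families. The
# Llama 2 chat format ([INST], <<SYS>>) differs structurally from
# OpenAI-style role messages, which is why prompts rarely port unchanged.

def to_openai_messages(system: str, user: str) -> list[dict]:
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

def to_llama2_prompt(system: str, user: str) -> str:
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"
```

Few-shot examples, stop sequences, and tone instructions typically need the same kind of per-model rework.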

Domain adaptation. This is one of the challenges that any of you will probably face enabling this in your organization, be it through internal tooling or through your products: how do you bring domain knowledge to large language models? The first thinking that goes in is, it can’t be that crazy to retrain or fine-tune a large language model. Why can’t we just build our own large language model? Just as a ballpark figure, with some of the secret statistics that were released around GPT-4, it took OpenAI about $63 million to ship GPT-4. That’s not including the cost of people and infrastructure and everything else. The API is about $30 for a million tokens.

You can see the big difference between using an API from these big houses and actually retraining a foundational model. You need a lot of data to train a good foundational model. The good news is that you can do domain adaptation without having to retrain an entire model. There are different ways you can do this: zero-shot learning, in-context learning, and few-shot learning. You can also build something called agents. What an agent essentially does is: you give it a set of instructions, you give it some examples of how to deal with these instructions, and you give it the capability to make requests to different systems.

Imagine, if it were a human, and you give a human a task, the human would essentially try to understand the task, pick the relevant data, make queries to different systems, summarize the answer, and provide it for you. Agents typically do that. What we tried to do was feed all of our domain knowledge into an agent. We did some really heavy prompt engineering to enable this, at a point where documentation around prompt engineering was also a bit poor. We had quite a few meltdowns on building this, but we thought this was a good first step in the right direction.

The third part that we introduced were guardrails. When I’m telling you this, I’m sure all of you are sitting there looking at the presentation and going, I have to go verify what he’s saying is right or wrong. Essentially, that’s what guardrails is. You can’t trust an LLM entirely because you don’t know if it’s taking the right decision, if it’s looking at the right data points, if it’s on the right path. Guardrails is essentially a way for you to validate if an LLM is doing the right thing.

There are different ways you can implement a guardrail. We started implementing a version of our own, because at that time, when we started this journey, there weren’t a lot of open-source libraries or companies that we could work with. Right now, you have NeMo Guardrails coming from NVIDIA. There’s guardrails.ai, a company that’s being built out in the Bay Area, which is focusing entirely on guardrails. We implemented our guardrails with a little bit of a twist, which was Graphs of Thoughts approach.
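At its simplest, a guardrail is just a validation step between the LLM and the user. A minimal sketch of one output guardrail, with an illustrative rule (only surface suppliers that actually came back from the data APIs):

```python
# A minimal output guardrail: the reply may only mention suppliers that
# actually came back from our data systems. The rule is illustrative; a
# production system layers many such checks on inputs and outputs.

def guard_output(reply: str, known_suppliers: list[str]) -> str:
    mentioned = [s for s in known_suppliers if s in reply]
    if not mentioned and known_suppliers:
        return "I could not verify that answer against our data. Please rephrase."
    return reply
```

Libraries like NeMo Guardrails generalize this idea into configurable rails, but the core is the same: never hand the raw model output to the user unchecked.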

Talking about a business process: supplier discovery is not, I type in something and you get a result. A lot of times in an enterprise landscape, you’re essentially augmenting AI to support a business process. These business processes are not linear. We needed something where we can understand the dynamic nature of the business process, depending on where the user is, and then invoke the different kinds of guardrails required to support that user. Thankfully, we saw a paper around Graphs of Thought that, I think, came out from ETH Zurich, and essentially, we thought of our entire business process as a graph.

At any given point in time, we knew where the user was, and we invoked different sorts of guardrails to make sure the LLM was not misleading the user. That was a lot for stage 2. What can happen if you don’t have guardrails? Air Canada enabled a chatbot with an LLM for its users, and the chatbot agent went ahead and told a customer, we owe you some money. Now the airline is liable for what the chatbot did. If you enable agents or LLMs without putting guardrails in place and without doing domain adaptation, they can start to take actions that are probably not in the best interest of the organization.
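The business-process-as-a-graph idea can be sketched like this, with hypothetical stage names and guardrail labels: each node is a stage of supplier discovery, carrying its own guardrails and its allowed next stages.

```python
# Sketch of a business process as a graph: nodes are stages, each with
# the guardrails to invoke at that stage and the legal transitions.
# Stage names and guard labels are illustrative.

PROCESS = {
    "clarify_need":   {"next": ["search"],         "guards": ["no_chitchat"]},
    "search":         {"next": ["review_results"], "guards": ["grounded_results"]},
    "review_results": {"next": ["search", "done"], "guards": ["grounded_results"]},
    "done":           {"next": [],                 "guards": []},
}

def guards_for(stage: str) -> list[str]:
    # Which checks to run on the LLM's output at this point in the process.
    return PROCESS[stage]["guards"]

def can_move(stage: str, nxt: str) -> bool:
    # Reject any transition the business process does not allow.
    return nxt in PROCESS[stage]["next"]
```

Knowing the user's current node lets the system pick the right checks instead of running one generic guardrail everywhere.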

Just taking a step back: what did we identify as issues in stage 1? We said we needed guardrails. We needed domain adaptation. We needed to build trust and reliability. We needed it to be a little less chatty and to reduce hallucinations. When we brought in some changes as part of stage 2, we couldn’t hit all of them, but domain adaptation and guardrails did increase trust and reliability. When users started to work with our product, they gave us feedback that it now made sense, that the system they were working with understood the process, and that they were quite happy not having to worry about their data being shipped to another company.

The next biggest thing we had to solve, which remained a very big challenge for us, was hallucinations. This played a huge role in the trust and reliability of the system, because every time the user came in and used the system, it gave them different answers, which essentially means they could never come back and reuse the system for the same problem. That’s something we wanted to make sure was not going to happen. The weird aspect of using open-source models instead of the big foundational models is that our users were happier with the quality of conversation with ChatGPT, and they started asking us, can we have the same quality, but without it being on ChatGPT? We had a bit of a situation there.

We were constantly thinking about how we could do this. One big challenge we had as we implemented stage 2 was that testing agents was a nightmare. We had absolutely no idea what the agents were trying to do. Sometimes they just went completely off key. You couldn’t put breakpoints in how they were thinking, because you can’t really know what they want to do. Sometimes they invoked the data APIs; sometimes they didn’t, and decided to make up their own answer. There was a bit of a challenge in debugging agents, and we were not really comfortable thinking about bringing agents into production.

With the changes that we brought in, this is what the conversation started to look like. We said, have an issue with conflict in Suez Canal, we need to get sustainable, fair market coffee beans from South America. The agent took this input from the user and said, let me understand this. You have issues with shipping due to a conflict. Your focus is looking for coffee suppliers in South America. You want to look at suppliers who have sustainable and fair market certifications. It asked the user if this understanding is correct. The user said, yes, that’s correct. The agent went on and augmented the conversation.

This is where LLMs start to enhance what users can do, leading them down the path from efficiency to effectiveness. That is, the agent can say: given that Fairtrade is a sustainability certificate, can I also include the other ones? Previously, the users had to go figure out what sustainability certificates were themselves. We didn’t have to train the foundational model for this. Given the amount of data it had seen, it was already aware of what sustainability certificates were, and essentially, the users said, “It’s ok. We’re good to go. Let’s move ahead.”

Then the agent, instead of invoking our data APIs to pick up data, just randomly decided to start creating its own suppliers. It said, ABC from Brazil, Qwerty from Chile. They all have sustainable certificates for their coffee growing. The user asked us, but they don’t look like real suppliers. Can you tell me more? The agent said, sorry, I’m a supplier discovery agent, and I cannot answer that question for you. Now suddenly you’re back to square one, where you said, what’s the point of doing all this?

Stage 3 – Reducing Hallucinations with RAGs

We had to reduce the hallucinations. We had to bring more trust and reliability into the system. We came across the idea of RAG, which stands for Retrieval-Augmented Generation. Jay from Cohere was talking about RAGs, and a bunch of other speakers touched on this as well. I won’t get into the nitty-gritty of what a RAG is, but essentially, what it meant for us is that our engineering stack and system grew much bigger. What we are trying to do with RAG is, essentially, instead of getting the foundational model to answer the question from its own knowledge, we give it data, and we give it the context of the conversation, and we force it to use that context to answer the question.

There are different ways you can do RAG, and we used the Chain of Thought framework. Our planner and execution layer for LLMs went from having an agent and a guardrail to having Chain of Thought prompting, query rewriting, splitting into multiple query generation, custom guardrails based on Graphs of Thought, query-based data retrieval, and then summarizing all of this. This is one of the biggest things for me with LLMs right now: every time you want to make your system a little bit more robust, a little bit more reliable, there’s probably a new service or a new piece of technology that you need to add to enable it. We went from that into thinking about, how do we do RAG with Chain of Thought prompting?
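A sketch of such a RAG pipeline with every step stubbed out: rewrite the query, split it, retrieve per sub-query, then answer only from the retrieved context. All function bodies are illustrative placeholders for the LLM and search calls.

```python
# Sketch of the RAG pipeline stages described above. Each stub stands in
# for an LLM call or a data-API call in the real system.

def rewrite(query: str) -> str:
    # Placeholder for LLM-based query rewriting into a standard form.
    return query.strip().lower()

def split(query: str) -> list[str]:
    # Placeholder for LLM-based decomposition into sub-queries.
    return [part.strip() for part in query.split(" and ")]

def retrieve(sub: str) -> list[str]:
    # Placeholder for the data APIs / vector search.
    return [f"doc about {sub}"]

def answer_from_context(query: str, context: list[str]) -> str:
    # A real system prompts the LLM with this context; here we just cite it.
    return f"Answer to '{query}' based on: {', '.join(context)}"

def rag(query: str) -> str:
    subs = split(rewrite(query))
    context = [doc for s in subs for doc in retrieve(s)]
    return answer_from_context(query, context)
```

The key property is that the generation step never sees the question without the retrieved context, which is what pushes the model away from fabricating suppliers.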

Essentially, the big challenge we saw with agents was the reasoning process behind why they did what they did. With Chain of Thought prompting, what we tried to do was, instead of going straight from question to answer, have the LLM go through a reasoning process. The good thing here is you can actually teach this reasoning process to an LLM. Instead of retraining your entire model, you can do few-shot Chain of Thought, where you take certain pieces of conversation and some reasoning, you provide this to the LLM, and the LLM understands: this is the reasoning process I need to have when I’m working with the user.

It’s basically like giving an LLM a roadmap to follow. If you know your domain and your processes well, you can actually do this quite easily. For this, of course, we use LangChain. There were people asking, is LangChain the best framework out there? At this moment, we think that’s probably the framework that’s being used by the community the most. I’m not sure if it’s best or not, but it does the job for us.
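A few-shot Chain of Thought prompt is essentially worked examples with explicit reasoning, followed by the new question. A sketch (the example text is invented for illustration, not the production prompt):

```python
# Sketch of few-shot Chain of Thought prompt construction: each example
# shows question -> reasoning -> answer, and the prompt ends mid-pattern
# so the model continues with its own reasoning. Example text is invented.

EXAMPLES = [
    {
        "question": "We can't ship via the Suez Canal; need EU coffee roasters.",
        "reasoning": "Issue: shipping route. Need: coffee roasters. Region: EU.",
        "answer": "Search for coffee roasters located in the EU.",
    },
]

def build_cot_prompt(question: str) -> str:
    parts = []
    for ex in EXAMPLES:
        parts.append(
            f"Q: {ex['question']}\nReasoning: {ex['reasoning']}\nA: {ex['answer']}"
        )
    parts.append(f"Q: {question}\nReasoning:")
    return "\n\n".join(parts)
```

Because the prompt ends at "Reasoning:", the model is nudged to emit its reasoning first, which is exactly the roadmap-following behavior described above.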

Once the user starts to have a conversation with the system, and there is reasoning behind it, one of the patterns we saw in how users used the new generation of the product was that some people were still typing in keywords, because they’re used to the idea of using Google Search. They didn’t really know how to use a conversation-based system, so they typed in keywords. Some of the other users put in an entire story. They spoke about the problem they were having, the data points they were looking at, the kind of suppliers they wanted, the manufacturers they’d like to work with, and so on.

We ended up receiving a whole passage in the very first message, rather than a crisp "tell me what your problem is." This is going to happen a lot when you enable a product or a feature that's a bit open-ended. We decided we needed to understand what the user wants to do and transform it into a standard form. At some point we might have to split it into several queries rather than one single query. This, backed with Chain of Thought, gave us the capability to break the problem down into a, b, and c, make data calls to fetch data for each of those problems, and then use that data and that problem framing to find an answer.
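
The rewrite-then-split step can be sketched as below. In the real system both steps would be LLM calls; here simple heuristics stand in for them, so the function names, the keyword threshold, and the connective-based splitting are all illustrative assumptions, not the production logic.

```python
import re

def rewrite_query(message: str) -> str:
    """Normalize bare keywords into a standard-form request (stand-in for an LLM call)."""
    message = message.strip()
    if len(message.split()) <= 4:  # looks like a keyword search, not a conversation
        return f"Find information about: {message}"
    return message

def split_into_subqueries(message: str) -> list[str]:
    """Split a compound request into independent sub-queries (stand-in for an LLM call)."""
    parts = re.split(r"\band also\b|\. ", message)
    return [p.strip().rstrip(".") for p in parts if p.strip()]

# A keyword-style message gets expanded into a full request...
normalized = rewrite_query("coffee suppliers brazil")
# ...and a story-style message gets decomposed into separate data calls.
subqueries = split_into_subqueries(
    "Find sustainable coffee suppliers in Brazil and also check their delivery times."
)
```

Each sub-query then drives its own retrieval call, and the results are recombined into one answer.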

Once you enable all of these, it's a lot of different pieces of technology, and you have to observe it to understand what is going on. There are two parts around the LLM that you have to observe. Part one: the LLM, like any other service, has response times and a number of tokens it processes. The bigger your large language model, the slower it can get, which is why there's a lot of work happening in that space on how to make this faster and run it on smaller GPUs rather than big clusters you have to provision. Part two is the actual conversation and the result.
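
The first part, service-level observability, can be sketched as a thin wrapper that records latency and token counts per call. The stub model and the rough 4-characters-per-token estimate are illustrative assumptions; a real system would read token counts from the provider's response.

```python
import time

class ObservedLLM:
    """Wrap an LLM callable and record per-call latency and token counts."""

    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.metrics = []  # one record per call

    def __call__(self, prompt: str) -> str:
        start = time.perf_counter()
        answer = self.model_fn(prompt)
        self.metrics.append({
            "latency_s": time.perf_counter() - start,
            "prompt_tokens": max(1, len(prompt) // 4),      # rough estimate
            "completion_tokens": max(1, len(answer) // 4),  # rough estimate
        })
        return answer

# Stub model standing in for a real LLM endpoint.
llm = ObservedLLM(lambda p: "CalmCoffee is a certified supplier in Brazil.")
answer = llm("Who are sustainable coffee suppliers in Brazil?")
```

The second part, conversation quality, needs the relevance-style evaluation described next; it cannot be read off latency dashboards.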

Previously, when you built search systems and somebody typed in a query, you would find results, and those results were backed by relevance: you would compute NDCG, precision, recall. Now it's more complicated, because you need to understand whether the LLM understood the conversation, whether it picked up all of the relevant data from the data systems to answer the question, and how much of what it picked up was right. You have context precision and context recall. And of course, all of this information has to be combined into an answer the LLM can actually work with.

The good thing about the science community and the open-source world right now is, if you think you have a problem, chances are lots of people have the same problem, and there's probably already a framework being built that you can work with. We came across the Ragas framework. It's an open-source framework that scores both generation and retrieval, and gives you ideas and pointers about where you actually have to go to fix your system.
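
To make the retrieval side of those scores concrete, here is a toy version of context precision (how much of what was retrieved is relevant) and context recall (how much of the relevant material was retrieved). Ragas itself computes these with an LLM judge over statements; plain set arithmetic stands in here, and the document IDs are made up.

```python
def context_precision(retrieved: set[str], relevant: set[str]) -> float:
    """Fraction of retrieved contexts that are actually relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def context_recall(retrieved: set[str], relevant: set[str]) -> float:
    """Fraction of relevant contexts that were actually retrieved."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

retrieved = {"doc_calmcoffee", "doc_roastyourbeans", "doc_unrelated"}
relevant = {"doc_calmcoffee", "doc_roastyourbeans"}

precision = context_precision(retrieved, relevant)  # 2 of 3 retrieved are relevant
recall = context_recall(retrieved, relevant)        # 2 of 2 relevant were retrieved
```

Low precision points at a noisy retriever; low recall points at missing data or bad queries, which is exactly the "where do you go to fix your system" signal.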

A quick summary for stage 3. In the previous stage, the biggest challenge we saw was hallucination, and testing agents was the other piece. With the introduction of RAG, hallucinations drastically reduced. We're using a knowledge graph as our data source, not a vector database. We basically stored everything: all the conversations users were having and the results we showed them, to power our observability and product metrics. By eliminating agents, testing became a whole lot easier, because we could see exactly what the LLMs were trying to do, and we could work on making it better.

With this, we had other challenges to start looking at. We started showing results, and it's an open-ended environment. Our users started to interrogate the data we were showing; they wanted to have a deeper conversation with the data itself to achieve their goal, and that was not enabled in our system yet. The other challenge we saw was higher latency. With more people using the system, latency numbers were quite high, so response rates were slower. Our users started to get a bit annoyed because they had to wait a few seconds before the answer appeared on their screens.

What does reduction of hallucinations with RAG look like? We play the same story: we're looking for coffee suppliers in South America, and we want them to be sustainable. Essentially, what you can see on the right side of the screen is that, because we force the system to use data and to provide citations with data provenance, it tells the user that this calmcoffee supplier who produces coffee beans is based out of Brazil, and that we found this information in this place. It gives our users the ability to go check them out themselves if they want to, and now they can start to trust that the answer comes from a certain place, without having to sit there and wonder where the information came from.

We also see that it surfaced RoastYourBeans as another supplier, and it let the user know that they don't have any sustainability certificates; we found this information on roastyourbeans.com. The moment we unlocked this for our users, the very next thing they did was ask, can you tell us a little bit more about RoastYourBeans, in terms of their revenue and customers? How are they on delivery times? Unfortunately, we didn't have that information to share with our users.

Stage 4 – Expanding, Improving, and Scaling the Data

That leads to stage 4, which is expanding, improving, and scaling the data. You can have large language models, but the effectiveness of your LLM is going to be bounded by the quality of the data you have. Otherwise it looks something like this: you present a report to a person, and then you're just praying and keeping your fingers crossed that the numbers are right when somebody asks, can I trust your numbers? We didn't want that for our customers. We thought about different ways to enhance and scale the data. The first thing we looked at was bringing data in from different places and using our knowledge graph as a system of record.

In an enterprise landscape, if you're working in the ERP space, the CRM space, or even in your own organization, you have data sitting in different datastores. If you just vectorize that data, embed it, and put it in embedding stores, you still have the challenge of understanding how the data is even related. So on top of it, we built a knowledge graph: a semantic understanding of the data sitting in those different data fields. We started integrating different domain data, revenue from different data partners, a little bit about risk from different data partners, and data even coming from the customers themselves.
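
The idea can be sketched as a tiny triple store: explicit (subject, predicate, object) facts, each tagged with the source system it came from, so relationships and provenance stay queryable in a way raw embeddings are not. The entity and source names below are invented for illustration.

```python
class KnowledgeGraph:
    """Minimal triple store: facts as (subject, predicate, object, source)."""

    def __init__(self):
        self.triples = []

    def add(self, subject, predicate, obj, source):
        self.triples.append((subject, predicate, obj, source))

    def query(self, subject=None, predicate=None):
        """Return triples matching the given subject and/or predicate."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
        ]

kg = KnowledgeGraph()
# Facts integrated from different upstream systems, provenance attached.
kg.add("CalmCoffee", "located_in", "Brazil", source="web_crawl")
kg.add("CalmCoffee", "annual_revenue", "80M USD", source="data_partner_xyz")
kg.add("CalmCoffee", "certified", "sustainable", source="certificate_registry")

facts = kg.query(subject="CalmCoffee")
```

Because every triple carries its source, the citation-with-provenance behavior described earlier falls out of the data model itself.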

The other thing that was important for us, as we scaled the whole data operation, was not losing control of the explainability of the data itself and the provenance around it. We dabbled a bit with embeddings. On the plus side, they're relatively fast to work with, and you can take different questions and find answers that you'd probably never extract and build into a knowledge graph. The challenge was explaining embeddings to enterprise users, when you're working with a very big demographic, and correcting them was also a bit of a challenge. We still have some experiments running on the embedding side.

I think we'll get to a point where we probably use both worlds. At the moment, everything we power through our product is based on knowledge graphs. This is a sneak peek of what our knowledge graph ontology looks like. There's a saying that, at some point, most problems transform themselves into a graph. This is one snapshot of our current knowledge graph. We did this about two-and-a-half years ago, and we put a lot of effort into designing this ontology. One of the things LLMs can do very well is, given your domain, design the ontology for you. What took us about 6 to 9 months of effort to build is something you could now build for your domain in maybe a few months using an LLM.

Once we had this ontology spread across different domains and entities, we had to think about how to populate it. An ontology is great, but now we need to bring in all of this data. We previously had transformer-based models working on web content and other data types to bring this information in, but the quality we now needed the data to be in was much higher. We sat there wondering: we need high-quality data, we have access to some training data we annotated ourselves, so how can we get high-quality data in a short period of time to fine-tune a model? We used a superior LLM.

Basically, instead of taking months and a lot of human effort to generate annotated data, we took a much superior LLM and generated high-quality training data to fine-tune a smaller LLM, with humans in the loop to validate it. Our effort went down 10x to 20x compared to what we would have spent with humans alone annotating the data, while still getting that gold-standard data by using an LLM and having humans in the loop validating it.
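
The shape of that distillation workflow is sketched below: a stronger "teacher" LLM labels raw examples, a human-in-the-loop check keeps only the good labels, and the surviving pairs become fine-tuning data for the smaller model. Both the teacher and the validator here are stubs standing in for the real components, and the labels are invented.

```python
def teacher_llm(text: str) -> str:
    """Stand-in for a superior LLM producing an annotation."""
    return "supplier" if "supplies" in text else "other"

def human_validate(text: str, label: str) -> bool:
    """Stand-in for a human reviewer accepting or rejecting a label."""
    return label in {"supplier", "other"}

raw_corpus = [
    "CalmCoffee supplies roasted beans to EU retailers.",
    "The weather in Brazil was mild this week.",
]

training_data = []
for text in raw_corpus:
    label = teacher_llm(text)
    if human_validate(text, label):  # only validated pairs become fine-tuning data
        training_data.append({"text": text, "label": label})
```

Reviewing labels is far cheaper than writing them, which is where the 10x–20x effort reduction comes from.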

The reason we wanted a smaller model adapted to a certain task is that it's easier to operate, and when you're running LLMs it's much more economical, because you can't run massive models all the time: they're very expensive and take a lot of GPUs. Currently, we're struggling to get GPUs in AWS. We searched all of EU Frankfurt, Ireland, and North Virginia. It's seriously a challenge now to get big GPUs to host your LLMs.

The second part of the problem: we started getting data, it's high quality, and we started improving the knowledge graph. The interesting thing about semantic search is that when people interact with your system, even if they're working on the same problem, they don't use the same language. Which means you need to be able to translate, or understand, the range of language your users use to interact with your system.

We further expanded our knowledge graph: we used an LLM to generate facts from the data we were looking at, based on our domain. We expanded these facts with all of their synonyms, all the different ways one could potentially ask for that piece of data, and put everything into the knowledge graph itself. You can use LLMs to generate training data for your smaller models, and you can also use an LLM to augment and expand that data, if you know your domain well enough to do this.
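
The fact-expansion step can be sketched as follows: take one extracted fact and store it alongside alternative phrasings users might search with, so semantic lookups hit regardless of wording. The synonym table is hand-written here; the talk describes generating these variants with an LLM.

```python
# Illustrative synonym table; in practice an LLM would propose these variants.
SYNONYMS = {
    "revenue": ["turnover", "annual sales", "income"],
    "supplier": ["vendor", "provider"],
}

def expand_fact(fact: str) -> list[str]:
    """Return the fact plus one variant per known synonym of each matched term."""
    variants = [fact]
    for term, alts in SYNONYMS.items():
        if term in fact:
            variants.extend(fact.replace(term, alt) for alt in alts)
    return variants

# One fact becomes several searchable phrasings in the knowledge graph.
variants = expand_fact("CalmCoffee revenue is 80M USD")
```

Each variant is stored against the same underlying fact, so "turnover" and "annual sales" queries resolve to the same node.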

The third thing we did for expansion was start working with third-party data providers. There is a massive number of providers that specialize in supplying data. We started working with providers of financial information, risk information, and so on, and brought all of that together into our knowledge graph.

Then comes the engineering problem. All of this sounds great in theory. It's a POC, you run it on a few hundred documents, everything is fine. Now you need to scale it to millions and probably billions of web pages and documents. Which essentially means you have big data, you have big models, and you have big problems, because orchestrating and running this is a nightmare. Our ML pipelines had to run LLM workloads, and LLM inference time had a big impact on throughput and cost, so we were trying to figure out how to reduce it and how to run in optimal form.

Data scientists wanted to run experiments at scale, because they weren't able to at that moment. We had to make sure the ML pipelines were observable and used infrastructure efficiently, so that you know which jobs can use CPU and which need GPU, and the infrastructure scales up and comes back down when it's not being used. We ended up changing our entire ML and LLMOps platform. If we wanted to hit all of these goals, we had a very big challenge. How many of you are running Spark pipelines and ML workloads in your organization? It's going to be a bit harder to get Spark pipelines to run with LLMs as well. One of our other challenges was that our data science team was not Spark-aware; our ML engineering team was. Which essentially meant that for anything going into production or running at scale, there had to be a translator from the data science world to the Spark world.

The other challenge was that Spark is written predominantly in Java and Scala, and our data scientists are far removed from that world; they have worked with scientific packages in Python for a very long time. Observing and understanding when Spark fails was a very big challenge for them, and understanding how Spark utilizes the cluster and how exactly the distribution of compute needs to happen was getting tougher. We had our data pipelines, our ML pipelines, our LLM workloads. There were so many different pieces, and we realized that if we kept running in this direction, it would be an absolute nightmare to maintain and manage everything. So we introduced a universal compute framework for ML, LLM, and data workloads. We started using Ray, an open-source framework out of UC Berkeley.

The enterprise version of it is run by Anyscale. They provide not just a way to work with Ray; the platform also lets us host and run large language models optimized to run on smaller GPUs and to run faster, with the click of a button, rather than us managing all of that ourselves. It still runs on our infrastructure, which means we didn't take a hit on privacy and security; we just found a better way to operate. We chose the path of buy rather than build, because build, at this point, was going to take us a very long time. It's a very cool project. If you're running massive data workloads or data pipelines, Ray is a very good framework to look at, to the point where your data science team can just use decorators to scale their code onto massive infrastructure, rather than having to figure out how to schedule it.
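
Ray's actual pattern is marking a plain function with `@ray.remote`, launching it with `f.remote(...)`, and collecting results with `ray.get(...)`. The dependency-free sketch below mimics that decorator shape with a local thread pool, so the code structure a data scientist writes is visible without a Ray cluster; `remote`, `get`, and the "embedding" work are all stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=4)

def remote(fn):
    """Toy stand-in for ray.remote: wrap a function in a handle with a .remote() launcher."""
    class Handle:
        @staticmethod
        def remote(*args, **kwargs):
            return _pool.submit(fn, *args, **kwargs)  # future, roughly a Ray ObjectRef
    return Handle

def get(futures):
    """Toy stand-in for ray.get: block until all results are ready."""
    return [f.result() for f in futures]

@remote
def embed_chunk(chunk: str) -> int:
    # Placeholder "ML work": pretend the embedding is just the chunk length.
    return len(chunk)

# Fan the work out, then gather: the same code shape scales to a cluster with real Ray.
results = get([embed_chunk.remote(c) for c in ["alpha", "be", "gamma!"]])
```

The point is the ergonomics: the function body stays plain Python, and only the decorator and the `.remote()`/`get` calls express the distribution.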

Outcome of stage 4. We ran through the whole script again, and we had various scripts we were testing. Finally, we got to the point where, when the user asked us to tell them more about a supplier, we could say: based on the data we've received from XYZ data partner, we see that the revenue of the company is $80 million; here is the revenue. When they asked about the delivery quality of the supplier, that information was not available, either from our data partners or on the internet. And this is what we started doing: every time a piece of data was missing, we looked at how we could enable the users to try and get that data.

We designed a Chain of Thought prompt as a way to say: we can't get this data for you, but we do know we have their email, so we can help you draft an email and start a conversation with the supplier to see if you can get more information from them.

Summary and Takeaways

Your product should warrant LLM use. Your Elasticsearch clusters, your MongoDB databases, your actual databases all do a fantastic job. If your product doesn't warrant an LLM, you don't have to jump on the bandwagon just because it's cool; it can turn out to be very expensive. LLMs come at a cost: the cost of upskilling, running the model, and maintaining it. Brace yourself for failures and aim for continuous, sustainable improvement. An LLM is not a silver bullet. You still have to work on high-quality data, your data contracts, your data ops, and managing the entire data lifecycle.

Please compute your ROI. You'll have to invest a lot of time, money, and people in this, which means your product needs to have a return on that investment at some point. Measure everything, because it can look very cool and you can be allured by the technology. Store all the data and metadata, everything. As and when you can, get humans in the loop to validate it. The idea of LLMs needs to be about efficiency and effectiveness, not replacing humans, because it's not there.

Even if a lot of people are talking about artificial general intelligence, it's definitely not there. Please do not underestimate the value of guardrails, domain adaptation, and your user experience. There's a lot of work on the user experience side that you'll have to think about in order to bring the best out of the LLMs and their interaction with users. I think it adds a lot of value to your product.

Take care of your team. Your team is going to have prompt-engineering fatigue. They're going to have burnouts. Some of your data scientists might be looking at the work they did over the last decade and seeing that an API can now do it, so there's fear of LLMs replacing people. There are meltdowns. You have to embrace failure, because there will be a lot of failures before things come into production. Actively invest in upskilling, because nobody knows these things yet; the field is nascent. There are a lot of people producing very good content, there's free content out there, and there are workshops you can sign up for. Upskilling will help build a support system for your team.

System design: once you're past the POC stage, you have to think about sustainable improvements. Design your systems to work with flexibility, but at the same time with reliability. Version control everything, from your prompts to your data to your agents to your APIs, and tag your metadata with it, so you can always go back and run automated tests. One plus one equals two, and all of us know this, but it's not just important to know the ones; it's also very important to know the plus operator. Think about this as a whole system, rather than assuming an LLM alone can solve the problem for you.
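
"Version control everything" can be sketched as tagging each response with a prompt content hash, a data version, and a model name, so any answer can be traced back and replayed in automated tests. The field names and version strings below are illustrative assumptions, not a prescribed schema.

```python
import hashlib

def prompt_version(prompt: str) -> str:
    """Derive a short, stable content hash to use as the prompt's version tag."""
    return hashlib.sha256(prompt.encode()).hexdigest()[:12]

def tag_response(response: str, prompt: str, data_version: str, model: str) -> dict:
    """Attach prompt, data, and model versions to a response record."""
    return {
        "response": response,
        "prompt_version": prompt_version(prompt),
        "data_version": data_version,
        "model": model,
    }

prompt = "List sustainable coffee suppliers in Brazil with citations."
record = tag_response(
    "CalmCoffee (Brazil)...",
    prompt,
    data_version="kg-2024-03",   # illustrative knowledge-graph snapshot tag
    model="small-ft-v2",         # illustrative fine-tuned model name
)
```

Because the hash is derived from the prompt text, any edit to the prompt produces a new version automatically, and regression tests can pin the exact combination that produced a given answer.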

See more presentations with transcripts

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



Exchange Traded Concepts LLC Lowers Holdings in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Exchange Traded Concepts LLC decreased its stake in MongoDB, Inc. (NASDAQ:MDB – Free Report) by 23.4% during the third quarter, according to its most recent disclosure with the Securities & Exchange Commission. The fund owned 7,836 shares of the company’s stock after selling 2,394 shares during the quarter. Exchange Traded Concepts LLC’s holdings in MongoDB were worth $2,118,000 as of its most recent SEC filing.

A number of other hedge funds have also recently modified their holdings of MDB. Vanguard Group Inc. boosted its position in MongoDB by 1.0% during the first quarter. Vanguard Group Inc. now owns 6,910,761 shares of the company’s stock worth $2,478,475,000 after purchasing an additional 68,348 shares in the last quarter. Jennison Associates LLC boosted its position in MongoDB by 14.3% during the first quarter. Jennison Associates LLC now owns 4,408,424 shares of the company’s stock worth $1,581,037,000 after purchasing an additional 551,567 shares in the last quarter. Swedbank AB boosted its position in MongoDB by 156.3% during the second quarter. Swedbank AB now owns 656,993 shares of the company’s stock worth $164,222,000 after purchasing an additional 400,705 shares in the last quarter. Champlain Investment Partners LLC boosted its position in MongoDB by 22.4% during the first quarter. Champlain Investment Partners LLC now owns 550,684 shares of the company’s stock worth $197,497,000 after purchasing an additional 100,725 shares in the last quarter. Finally, Clearbridge Investments LLC boosted its position in MongoDB by 109.0% during the first quarter. Clearbridge Investments LLC now owns 445,084 shares of the company’s stock worth $159,625,000 after purchasing an additional 232,101 shares in the last quarter. Hedge funds and other institutional investors own 89.29% of the company’s stock.

Analyst Ratings Changes

A number of equities analysts have issued reports on the company. Mizuho increased their price target on MongoDB from $250.00 to $275.00 and gave the stock a “neutral” rating in a research report on Friday, August 30th. Stifel Nicolaus raised their target price on MongoDB from $300.00 to $325.00 and gave the company a “buy” rating in a research report on Friday, August 30th. Scotiabank raised their target price on MongoDB from $250.00 to $295.00 and gave the company a “sector perform” rating in a research report on Friday, August 30th. Piper Sandler raised their target price on MongoDB from $300.00 to $335.00 and gave the company an “overweight” rating in a research report on Friday, August 30th. Finally, Tigress Financial decreased their target price on MongoDB from $500.00 to $400.00 and set a “buy” rating on the stock in a research report on Thursday, July 11th. One research analyst has rated the stock with a sell rating, five have given a hold rating and twenty have assigned a buy rating to the company’s stock. Based on data from MarketBeat.com, MongoDB has an average rating of “Moderate Buy” and an average target price of $337.96.

View Our Latest Research Report on MongoDB

MongoDB Price Performance

MDB stock opened at $289.14 on Tuesday. MongoDB, Inc. has a 52-week low of $212.74 and a 52-week high of $509.62. The company has a quick ratio of 5.03, a current ratio of 5.03 and a debt-to-equity ratio of 0.84. The business has a fifty day moving average of $266.49 and a two-hundred day moving average of $286.66. The stock has a market cap of $21.21 billion, a price-to-earnings ratio of -102.90 and a beta of 1.15.

MongoDB (NASDAQ:MDB – Get Free Report) last issued its quarterly earnings data on Thursday, August 29th. The company reported $0.70 earnings per share for the quarter, beating analysts’ consensus estimates of $0.49 by $0.21. MongoDB had a negative return on equity of 15.06% and a negative net margin of 12.08%. The business had revenue of $478.11 million during the quarter, compared to analysts’ expectations of $465.03 million. During the same period in the previous year, the firm posted ($0.63) earnings per share. The company’s revenue was up 12.8% on a year-over-year basis. As a group, research analysts anticipate that MongoDB, Inc. will post -2.44 earnings per share for the current year.

Insider Activity

In other news, CAO Thomas Bull sold 154 shares of MongoDB stock in a transaction on Wednesday, October 2nd. The stock was sold at an average price of $256.25, for a total value of $39,462.50. Following the completion of the sale, the chief accounting officer now directly owns 16,068 shares of the company’s stock, valued at approximately $4,117,425. The trade was a 0.00 % decrease in their ownership of the stock. The sale was disclosed in a filing with the SEC, which can be accessed through this hyperlink. Also, CEO Dev Ittycheria sold 3,556 shares of MongoDB stock in a transaction on Wednesday, October 2nd. The shares were sold at an average price of $256.25, for a total value of $911,225.00. Following the completion of the transaction, the chief executive officer now owns 219,875 shares of the company’s stock, valued at approximately $56,342,968.75. This trade represents a 0.00 % decrease in their ownership of the stock. The disclosure for this sale can be found here. Over the last ninety days, insiders sold 15,896 shares of company stock valued at $4,187,260. 3.60% of the stock is currently owned by corporate insiders.

MongoDB Company Profile


MongoDB, Inc, together with its subsidiaries, provides general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Recommended Stories

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB – Free Report).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.

Before you consider MongoDB, you’ll want to hear this.

MarketBeat keeps track of Wall Street’s top-rated and best performing research analysts and the stocks they recommend to their clients on a daily basis. MarketBeat has identified the five stocks that top analysts are quietly whispering to their clients to buy now before the broader market catches on… and MongoDB wasn’t on the list.

While MongoDB currently has a “Moderate Buy” rating among analysts, top-rated analysts believe these five stocks are better buys.

View The Five Stocks Here

Beginners Guide To Retirement Stocks Cover

Click the link below and we’ll send you MarketBeat’s list of seven best retirement stocks and why they should be in your portfolio.

Get This Free Report

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB (NASDAQ:MDB) Price Target Increased to $340.00 by Analysts at DA Davidson

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB (NASDAQ:MDB – Free Report) had its price objective upped by DA Davidson from $330.00 to $340.00 in a report issued on Friday morning, Benzinga reports. DA Davidson currently has a buy rating on the stock.

Several other brokerages have also weighed in on MDB. Tigress Financial dropped their target price on MongoDB from $500.00 to $400.00 and set a buy rating on the stock in a research report on Thursday, July 11th. Bank of America lifted their price objective on shares of MongoDB from $300.00 to $350.00 and gave the company a buy rating in a research report on Friday, August 30th. Royal Bank of Canada restated an outperform rating and issued a $350.00 target price on shares of MongoDB in a research report on Friday, August 30th. Scotiabank lifted their price target on MongoDB from $250.00 to $295.00 and gave the company a sector perform rating in a report on Friday, August 30th. Finally, Stifel Nicolaus increased their price objective on MongoDB from $300.00 to $325.00 and gave the stock a buy rating in a report on Friday, August 30th. One analyst has rated the stock with a sell rating, five have issued a hold rating and twenty have given a buy rating to the company’s stock. According to data from MarketBeat.com, MongoDB has a consensus rating of Moderate Buy and a consensus target price of $337.96.

Check Out Our Latest Stock Report on MongoDB

MongoDB Price Performance

Shares of MDB opened at $289.14 on Friday. MongoDB has a 52 week low of $212.74 and a 52 week high of $509.62. The company has a current ratio of 5.03, a quick ratio of 5.03 and a debt-to-equity ratio of 0.84. The firm has a market cap of $21.21 billion, a price-to-earnings ratio of -102.90 and a beta of 1.15. The stock has a 50 day moving average of $266.49 and a 200-day moving average of $286.66.

MongoDB (NASDAQ:MDB – Get Free Report) last issued its earnings results on Thursday, August 29th. The company reported $0.70 earnings per share for the quarter, topping analysts’ consensus estimates of $0.49 by $0.21. MongoDB had a negative net margin of 12.08% and a negative return on equity of 15.06%. The company had revenue of $478.11 million for the quarter, compared to analysts’ expectations of $465.03 million. During the same period in the prior year, the company earned ($0.63) earnings per share. The firm’s revenue was up 12.8% compared to the same quarter last year. As a group, analysts forecast that MongoDB will post -2.44 EPS for the current year.

Insiders Place Their Bets

In related news, CEO Dev Ittycheria sold 3,556 shares of the company’s stock in a transaction that occurred on Wednesday, October 2nd. The shares were sold at an average price of $256.25, for a total value of $911,225.00. Following the completion of the sale, the chief executive officer now owns 219,875 shares in the company, valued at approximately $56,342,968.75. This represents a 0.00 % decrease in their position. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which can be accessed through the SEC website. Also, Director Dwight A. Merriman sold 2,000 shares of the firm’s stock in a transaction that occurred on Friday, August 2nd. The shares were sold at an average price of $231.00, for a total transaction of $462,000.00. Following the completion of the sale, the director now directly owns 1,140,006 shares in the company, valued at approximately $263,341,386. This trade represents a 0.00 % decrease in their ownership of the stock. The transaction was disclosed in a document filed with the SEC, which can be accessed through the SEC website. In the last quarter, insiders have sold 15,896 shares of company stock worth $4,187,260. Company insiders own 3.60% of the company’s stock.

Institutional Investors Weigh In On MongoDB

Several large investors have recently made changes to their positions in the company. Transcendent Capital Group LLC bought a new position in shares of MongoDB in the 4th quarter valued at approximately $25,000. MFA Wealth Advisors LLC acquired a new stake in shares of MongoDB in the second quarter worth $25,000. J.Safra Asset Management Corp boosted its holdings in shares of MongoDB by 682.4% during the 2nd quarter. J.Safra Asset Management Corp now owns 133 shares of the company’s stock worth $33,000 after buying an additional 116 shares during the period. Quarry LP grew its stake in MongoDB by 2,580.0% in the 2nd quarter. Quarry LP now owns 134 shares of the company’s stock valued at $33,000 after buying an additional 129 shares during the last quarter. Finally, Hantz Financial Services Inc. bought a new position in MongoDB in the 2nd quarter worth $35,000. Institutional investors own 89.29% of the company’s stock.


Further Reading

Analyst Recommendations for MongoDB (NASDAQ:MDB)



Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.

Article originally posted on mongodb google news. Visit mongodb google news

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


Binstellar: Your Premier Mean Stack Development Company – Ahmedabad Mirror

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Binstellar: Your Premier Mean Stack Development Company

In the ever-evolving world of web development, having a robust technology stack is crucial for creating scalable and efficient applications. One of the most powerful stacks available today is the MEAN stack, which consists of MongoDB, Express.js, Angular, and Node.js. Partnering with a proficient MEAN stack development company like Binstellar can significantly enhance your web application development process.

Understanding the MEAN Stack

The MEAN stack is a collection of technologies that allows developers to build dynamic web applications using JavaScript at every layer of the development process. Here’s a brief overview of each component:

1. MongoDB: A NoSQL database that stores data in a flexible, JSON-like format, making it ideal for applications with large volumes of unstructured data.

2. Express.js: A web application framework for Node.js that simplifies the development of web applications and APIs by providing a robust set of features.

3. Angular: A front-end framework maintained by Google, designed for building dynamic single-page applications (SPAs) with a rich user experience.

4. Node.js: A server-side platform built on Chrome’s JavaScript runtime, enabling developers to build scalable network applications.

By leveraging the MEAN stack, businesses can develop full-stack applications with a consistent language, reducing complexity and improving efficiency throughout the development process.

Why Choose Binstellar’s MEAN Stack Development Services?

When it comes to hiring a MEAN stack developer, choosing Binstellar ensures you receive top-notch services tailored to your business needs. Here are some reasons why you should consider our MEAN stack development services:

Expertise and Experience

At Binstellar, we boast a team of highly skilled MEAN stack developers with extensive experience in building robust applications. Our developers are proficient in the latest technologies and best practices, ensuring that your project is executed to the highest standards.

Customized Solutions

We understand that every business has unique requirements. Our team works closely with you to comprehend your vision and goals, allowing us to create customized solutions that align perfectly with your objectives. Whether you need a new application or want to enhance an existing one, we’ve got you covered.

Agile Development Process

Binstellar employs an agile development methodology, allowing for flexibility and adaptability throughout the project lifecycle. This approach enables us to deliver high-quality products while accommodating any changes or feedback you may have along the way.

Comprehensive Support

Our commitment to your success doesn’t end with the delivery of your project. We offer ongoing support and maintenance services to ensure your application remains up-to-date, secure, and optimized for performance. Our team is always available to address any issues or make enhancements as needed.

Cost-Effective Solutions

We believe that high-quality development services should be accessible to businesses of all sizes. Binstellar offers competitive pricing for our MEAN stack development services, ensuring you receive exceptional value without compromising on quality.

Proven Track Record

Our portfolio showcases a wide range of successful projects across various industries. We take pride in our ability to deliver solutions that not only meet but exceed our clients’ expectations. Our focus on quality and client satisfaction has earned us a strong reputation in the industry.

Conclusion

In the fast-paced digital landscape, choosing the right technology stack and development partner is crucial for your business’s success. The MEAN stack offers unparalleled advantages, and Binstellar stands out as a leading MEAN stack development company dedicated to delivering top-quality applications tailored to your needs. Our experienced MEAN stack developers are ready to bring your vision to life with innovative solutions that drive results.

If you’re looking to develop a scalable and dynamic web application, don’t hesitate to contact Binstellar today. Let us help you leverage the full potential of the MEAN stack for your business.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB, Inc. (NASDAQ:MDB) Shares Sold by SG Americas Securities LLC

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

SG Americas Securities LLC cut its position in shares of MongoDB, Inc. (NASDAQ:MDB) by 44.2% in the 3rd quarter, according to its most recent disclosure with the SEC. The fund owned 2,501 shares of the company’s stock after selling 1,985 shares during the quarter. SG Americas Securities LLC’s holdings in MongoDB were worth $676,000 as of its most recent filing with the SEC.

Other institutional investors also recently bought and sold shares of the company. Transcendent Capital Group LLC acquired a new position in shares of MongoDB during the 4th quarter worth $25,000. MFA Wealth Advisors LLC purchased a new position in MongoDB during the 2nd quarter valued at about $25,000. Sunbelt Securities Inc. raised its position in MongoDB by 155.1% during the first quarter. Sunbelt Securities Inc. now owns 125 shares of the company’s stock worth $45,000 after acquiring an additional 76 shares during the last quarter. J.Safra Asset Management Corp lifted its holdings in shares of MongoDB by 682.4% in the second quarter. J.Safra Asset Management Corp now owns 133 shares of the company’s stock valued at $33,000 after purchasing an additional 116 shares in the last quarter. Finally, Quarry LP grew its position in shares of MongoDB by 2,580.0% in the second quarter. Quarry LP now owns 134 shares of the company’s stock valued at $33,000 after purchasing an additional 129 shares during the last quarter. 89.29% of the stock is currently owned by hedge funds and other institutional investors.

MongoDB Price Performance

Shares of NASDAQ MDB opened at $292.86 on Monday. The company has a debt-to-equity ratio of 0.84, a current ratio of 5.03 and a quick ratio of 5.03. MongoDB, Inc. has a 1-year low of $212.74 and a 1-year high of $509.62. The company has a market capitalization of $21.48 billion, a price-to-earnings ratio of -104.22 and a beta of 1.15. The business has a 50-day moving average of $265.14 and a 200-day moving average of $287.65.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings data on Thursday, August 29th. The company reported $0.70 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.49 by $0.21. MongoDB had a negative net margin of 12.08% and a negative return on equity of 15.06%. The company had revenue of $478.11 million during the quarter, compared to the consensus estimate of $465.03 million. During the same quarter last year, the company posted ($0.63) earnings per share. MongoDB’s revenue for the quarter was up 12.8% on a year-over-year basis. On average, equities analysts anticipate that MongoDB, Inc. will post -2.44 EPS for the current fiscal year.

Insider Transactions at MongoDB

In other MongoDB news, CAO Thomas Bull sold 1,000 shares of the stock in a transaction on Monday, September 9th. The shares were sold at an average price of $282.89, for a total transaction of $282,890.00. Following the sale, the chief accounting officer directly owned 16,222 shares in the company, valued at approximately $4,589,041.58. Also, CEO Dev Ittycheria sold 3,556 shares of MongoDB stock in a transaction on Wednesday, October 2nd, at an average price of $256.25, for a total value of $911,225.00; following that sale, the chief executive officer owned 219,875 shares, valued at approximately $56,342,968.75. Both sales were disclosed in legal filings with the Securities & Exchange Commission, which can be accessed through the SEC website. Over the last quarter, insiders sold 15,896 shares of company stock valued at $4,187,260. Corporate insiders currently own 3.60% of the stock.

Analysts Set New Price Targets

A number of equities research analysts recently issued reports on the company. Stifel Nicolaus upped their target price on MongoDB from $300.00 to $325.00 and gave the company a “buy” rating in a research note on Friday, August 30th. Piper Sandler lifted their price objective on MongoDB from $300.00 to $335.00 and gave the stock an “overweight” rating in a research report on Friday, August 30th. DA Davidson boosted their target price on shares of MongoDB from $330.00 to $340.00 and gave the company a “buy” rating in a research note on Friday. Truist Financial raised their price target on shares of MongoDB from $300.00 to $320.00 and gave the stock a “buy” rating in a research note on Friday, August 30th. Finally, Morgan Stanley boosted their price objective on shares of MongoDB from $320.00 to $340.00 and gave the company an “overweight” rating in a research report on Friday, August 30th. One analyst has rated the stock with a sell rating, five have assigned a hold rating and twenty have given a buy rating to the stock. Based on data from MarketBeat.com, the company presently has an average rating of “Moderate Buy” and an average price target of $337.96.

View Our Latest Report on MDB

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Articles

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)




Article originally posted on mongodb google news. Visit mongodb google news



Podcast: Orchestrating A Path to Success – a Conversation with Bernd Ruecker

MMS Founder
MMS Bernd Ruecker

Article originally posted on InfoQ. Visit InfoQ

How Did You Become An Architect? [00:46]

Michael Stiefel: Welcome to the Architects Podcast, where we discuss what it means to be an architect and how architects actually do their job. Today’s guest is Bernd Ruecker, who is the founder and chief technologist at Camunda, a company involved with process orchestration. It now has about 500 people, and your clients include banks, insurance companies, and telcos. Bernd has contributed to various open source workflow engines over the past 20 years and has written three books, and is writing a fourth. And I’ve written two books, and I know how time-consuming book writing is. It’s great to have you here on the podcast, and I’d like to start out by asking you, were you trained as an architect? How’d you become an architect? Because it’s not something you decided one morning, you woke up and said, “Today I’m going to be an architect”.

Bernd Ruecker: Yes. Thanks for having me, Michael. No, it’s probably not like that. For me, honestly, I mean, I have probably a very typical thing that I started to do things like programming with a computer super early on, when I was like 13 or 14 or whatever. And then I think one of the key moments for me was really when a friend of mine during high school basically started to start his own business. He was selling a kind of modded graphic card, it’s a very specific thing back then, but over the internet. And he had a lot of trouble because he got successful with that, and he had a lot of piles of everything, piles of hardware lying there, piles of emails lying there, piles of invoices lying there, piles of stuff lying everywhere, and basically asked me if I could help him.

And I started to, naive as I was, I basically, from the perspective I have now, I would say I started to write an ERP system for that, and that was really by accident. And I dived into that, and that was kind of coexisting with, let’s say, my interest about a kind of software architecture in a way. I tried to get books around that. I tried to understand how to structure a system, and I could try that out. And I got into Enterprise JavaBeans back then. I got into how to structure different components, how to build them that I can run them, and I wrote a system that is, really weird, still up-and-running today, to some extent.

And that taught me a lot of things about, and also taught me that I find it more interesting to understand that kind of bigger picture around that system instead of diving into one thing very, very deep, and I think that was kind of the roots of it. And I did study computer science. I did a master’s degree for software engineering, so I really focused on that in my university, as well, but I got started, I would say, even earlier than that.

Michael Stiefel: So, in other words, it was sort of an accidental discovery of necessity that got you into architecture?

Bernd Ruecker: Probably, yes. Yes.

Michael Stiefel: In other words, the mess it is and you figured out how I’m going to organize this mess that I’m in, and that led you to sort of architecture. And then you somehow, you mentioned that you were working on essentially an ERP system. Was that sort of what got you thinking about orchestration?

Bernd Ruecker: Oh yes. 100%. So I mean, part of what the system has to do is basically looking at the data, like what orders do I have, what hardware do I have lying around and stuff? But at the same time, you also have to look at the processes, and that’s where we started, like the workflows, like the order fulfillment, like the returned-goods handling. These were basically the problems where we started, like how can we manage that process so that it runs efficiently?

And also back then I started to look into that, I found it fascinating, and probably we can discuss that later on, as well, but I’m a big fan of visuals. I also was always a big fan of UML Class diagrams, for example, for structuring, and I looked into visuals for workflows. And back then that was not yet very well-developed, I would say. There were a couple of weird things happening. There was BPEL-

Michael Stiefel: I remember BPEL.

Bernd Ruecker: Okay, yes. Hopefully a lot of people forgot about that. But there was not really a good solution for these kinds of workflows, and that got me hooked on the whole question, because there should be. And even back then, there were two options I found. One was going with super expensive enterprise software on a really complex tech stack, which was obviously not an option for that at all. And I looked at open source projects, and there were not many back then, and they were super technical. But I looked into one, I became a contributor there, and I worked with that, actually, to implement the workflow within that system, basically order fulfillment and returned-goods handling. And that got me into the whole process orchestration thing, first of all, thinking about it, also in the community. Then I started giving talks about that. That brought me into freelancing around workflow engines. And in a way that’s also the story of how I co-founded a company that does process orchestration as a product. I stumbled into that, as well, if you will.

What is Process Orchestration? [05:40]

Michael Stiefel: Well, look, life, sometimes some people know where they’re going from day one and some people just stumble their way to success. But you said two things that were kind of interesting, and I want to sort of focus in on them. You mentioned process orchestration, and process orchestration is a type of distributed system, and distributed systems are becoming increasingly common in our world. We see them all over the place, but people still really do not understand when people are involved in the distributed system. And process orchestration is all about that. And why do you think people have so much trouble dealing with process orchestration?

Bernd Ruecker: It’s a very good question. To be honest-

Michael Stiefel: And you might want to define, just for our listeners who are not familiar with this, you might want to also define what process orchestration is.

Bernd Ruecker: Probably that’s a good first step. So process orchestration is basically bringing order to certain steps you have to do. So you orchestrate those steps, you coordinate those steps, that’s kind of the wording behind that. And normally, then you have technology that can do that. And the way it works is you define models; nowadays, it’s BPMN. I’m a big fan of BPMN, Business Process Model and Notation. It’s a graphical model. You define where you say, “I have those, for example, three steps I have to do in a sequence”. So I do the first step first. When that’s done, the next step is happening. And then you can run such a workflow engine or process orchestrator that can interpret those models and can run instances of that, like the orders I talked about.

And then they run through basically all those steps. And then the workflow engine makes sure that, if there are decisions on the way, there are complex patterns, like if you are waiting for too long, you should do something else or push something to a human and whatever. So that’s one thing, process orchestrators can interpret those models so you get a visual. And the second big thing is it’s also about long-running processes. So the process orchestrator can wait for things to happen, that’s very often a human, but it might also be an asynchronous answer from a system that takes longer or a response from a customer. So there are a lot of things where you might need to wait, and that’s what a process orchestrator can do. And then it’s normally like persisted and you have all the tooling around that, where you can see what’s currently happening, where are you waiting, and so on and so forth.

What Is So Difficult About Understanding Orchestration? [08:01]

Michael Stiefel: So what do people find so difficult about this?

Bernd Ruecker: That’s the question that also puzzles me for a decade now, I think. I think that there are a couple of perceptions around that. It’s a lot about perception, I think, because I started with visual models, for example. If you ask developers about visual models, very often you get a negative feeling around that. That’s true for UML Class diagrams, I never understood that. That’s also true for process models. And I make the case with UML first, because that’s probably what most people out there know, and a lot of people probably learned that in university and it’s a drawing, it’s like dead paper. Or you even had model-driven architecture quite a while back, where we tried to generate software out of these models, which never worked very well.

But I also recall when I started there was a tool, which was actually super great, where you wrote Java code and then you could just look at the code you wrote as a UML diagram, because it knows if I have a collection typed of that type, I can just make two boxes and the right arrow. So it was kind of automatically documenting things, it was live documentation, and it worked the other way around, as well. You could add attributes there and it was adding them to the code. I found that so helpful to get an overview of what I was doing, and the same is true for these process models. They are so helpful because they are not a wishful-thinking document, they are executable code, but they are readable by every human. And that human does not actually have to be a computer scientist, it can be a business person, as well, it can be whoever. And that’s so valuable, but somehow developers are scared of these process models. That’s the first perception I see everywhere.

Michael Stiefel: I mean, some of that is sort of historical. After all, the original idea behind COBOL was that even a manager could program. I don’t know if you remember COBOL, but that was one of the first computer languages developed, and of course, from that point of view it was an utter failure. I mean, COBOL is used all over the place, but the idea that managers would program never panned out. I think some of the fear programmers have, and I remember this from when I used some of those visual tools, is sort of the paradox of programming.

Normal engineers, or a civil engineer, can essentially spend their entire career conceptually building the same bridge over and over again. But in software, if we want to build something new, we reprogram it all from scratch, because if you want another copy of Microsoft Word, you just copy the bits and you’re done. You only undertake a programming project if there’s something new to do. But the technology that we’ve automated is based on what we knew in the past. So there’s a disconnect between what we could automate in the past, but the fact that we’re doing something new that we never did before and that the tools are not capable of doing that. And that’s the frustration I had when I used some of those visual modeling tools.

Bernd Ruecker: Yes. But I would look at it from a different lens. For me, it’s really a little bit of a different view on the same thing. I mean, you can write code in whatever, Java, Node, or C#, it’s also an abstraction. You don’t write it on whatever assembler level anymore, right? And the same is for BPMN, for example. It’s a language that’s defined exactly how it should behave, and then you express problems with that language on a different abstraction level, but it’s still general purpose. You can model whatever you want to model there. And the other thing people forget then very often is you’re not, especially if you could use good tooling, that’s probably, again, something a lot of vendors did pretty bad in the past, but if that’s flexible enough or developer-friendly enough, you can say, “There is an SPI, there is a hook where it can hook in code, if I don’t get any further with BPMN, then I simply just code it”.

But there are certain elements in BPMN which are great for expressing long-running problems, and I simply should use them because that’s what I see on a weekly basis, at least. Otherwise, people start implementing their own homegrown, bespoke workflow engines without knowing it. It’s the, “Oh, we just need that order status column in that table over there”, but then we have to monitor that. We write a small scheduler looking at that every day and then we do this and that, and they start to build workflow engines. I see that so often, and it always starts with, “Oh, we don’t want to have the complexity of an additional component”, but then you have the complexity of that mess you wrote.

Michael Stiefel: Plus, you’re rethinking something from scratch and not getting the benefit of other people’s experience. So that raised something also interesting that makes me think about the low-code movement. So in other words, when you use a system such as this, you’re trying to minimize the amount of code that a developer has to write, which is generally a good thing.

However, when you’re trying to figure out why the system is not working, do these orchestration engines, or whatever you want to call them, get in the way of understanding what’s happening? Are they too much of a black box?

Bernd Ruecker: Let’s say the core engine itself should be a black box. I mean, even if you have the source available, you could look into that, nobody wants to do that. I mean, I recall times when I debugged into Hibernate, that’s nothing you ever want to do just because you can.

Michael Stiefel: Yes, I’ve done that. Yes.

Bernd Ruecker: So I would say overall that it should be a black box, and that’s good as it is. But it’s, again, very general purpose, so if you have a good product there, the core should work and it’s not… I mean, it’s still complex to get that right, especially at scale, for example. But what it does, looking at it from the outside, is not that complicated. So it basically triggers certain activities, and that’s either a connector, which is code, or it’s your code, so it’s normally not that big of a problem. So you still have the possibility to debug into things; you just don’t debug into the core engine, you debug into what you made out of that, basically the activities you put into the process. You have the possibility to write automated unit tests, for example. So that’s not gone because you used such an orchestrator.

Michael Stiefel: I guess what you’re sort of saying is that we write stuff on top of the operating system, and we don’t step into the operating system.

Bernd Ruecker: Yes. Yes. Yes, you’re right.

Michael Stiefel: And there are bugs in the operating system, but I guess what you’re saying is we look at them sort of as a, this system is supposed to X, it doesn’t do X. We have to call up the vendor and understand why that is the case.

Bernd Ruecker: And I mean, overall, it’s a bit like also the cloud discussion, “Oh, we can’t put that in the cloud. Is that secure?” I’m 100% sure it’s more secure than what most companies do at home.

Michael Stiefel: Yes, yes, yes. The only interesting thing about the cloud, just as a side issue, as you mentioned it, it just builds a more tempting target for someone to hack.

Bernd Ruecker: Probably, yes. Yes.

Michael Stiefel: Yes. Willie Sutton, who was a very famous bank robber, and he was asked why he robbed banks and he said, “Well, that’s where the money is”.

Bernd Ruecker: True, it makes sense. But at the same time, if you compare using a product that’s built for that-

Michael Stiefel: Obviously. Yes, yes, yes.

Bernd Ruecker: And used in a lot of organizations, it’s probably better than just code it yourself. Yes.

Michael Stiefel: Yes, yes. But again, when you’re dealing with human fear-

Orchestration vs. Event-Driven Architectures [15:44]

Bernd Ruecker: Yes. But honestly, the typical objections are not about, “Ooh, is my software still stable?” It’s very often about, “I don’t need that component”. It’s still that belief, because it’s so simple, my problem is so simple. It’s not. And probably because people are not really looking into, let’s say, the full scope of process orchestration, like on an enterprise level, that’s a different topic we should probably look into. And the one thing, and that kept me actually busy for a couple of years, discussing orchestration versus choreography, basically. The whole thing, do I want to control what’s happening, or do I want to let it evolve by pushing events around, for example?

And there was a belief for quite a while, I think it changed at that moment, but there was a belief that event-driven is so much easier to do. It’s so much more decoupled. It’s better, or probably more beautiful or modern or whatever you want to call it, architecture. And that led a lot of projects into we don’t need orchestration. Orchestration is kind of bad. They brought that together with, oh, centralized bottlenecks and whatever, and that’s 100% not true. And that’s another perception we saw there where people deliberately didn’t want to go into the orchestration route, especially out of IT, which is kind of weird.

Michael Stiefel: Yes, it is. But you’re not arguing against event-driven architectures altogether?

Bernd Ruecker: No. No. So my point is there is a place for event-driven and there is a place for orchestration. Basically, both are typically about different components or services communicating with each other in a distributed system. Because in my, whatever, order fulfillment process, I need to talk to the CRM system, to the ERP system, to the stock, to the, whatever, logistics system, so I need to get all those pieces together. So the question is how do those components interoperate? And with orchestration, basically I have one component that has the duty, the responsibility, of orchestrating the end-to-end process and saying, “Okay, the first thing I have to do is, for example, getting the stuff from stock or retrieving the payment”, and so on and so forth. It can control that, to some extent. With event-driven, with the choreography, you basically distribute that responsibility, saying, “There’s an event, the order just came in”, and then some component knows, okay, then I have to collect money.

And that’s an architecture question, actually, one of the core architecture questions, how do you want to distribute responsibilities amongst the components? And that’s a design choice you can make. And there are sometimes good arguments where you want to be event-driven, and there are sometimes good arguments where you want to be orchestrated. And especially if I look at these typical business processes, like order fulfillment or whatever, I want to get a new bank account, I want to get a claim settled or all these kind of business processes, then you very often, from a business perspective, you want to have somebody responsible for the end-to-end, because the customer is only happy if the parcel arrives at my door. It doesn’t look at how beautiful my events are flowing within, just if I get my stuff here or if I get my money paid out or whatever, then the customer is happy. So there is a big emphasis on the end-to-end and understanding certain KPIs (Key Performance Indicators) and bottlenecks and everything on the end-to-end level.

And because of that you want to have one owner, and because of that you should probably also have, on the IT side, one person, one component being responsible for that. And if you want to hold that component responsible for something, it needs to control certain aspects, like the sequence of things, for example, or the SLA. What happens if the logistics provider doesn’t talk back to me within two hours? Is that okay or not? And deciding all these kinds of factors, that’s an architectural decision, and that’s… To make a very long answer short, it depends on how you want to lay out the system, and in most bigger systems I see a coexistence of both event-driven and orchestration.

Michael Stiefel: Where, ideally speaking, do you see event-driven predominating in your architecture choices?

Bernd Ruecker: Every time the component that triggers something is not really responsible for doing it. I’ll give you one example, I like examples. I find them much easier to understand, and it’s actually one thing I discuss with Sam Newman regularly. He wrote these microservices books, and he has an example of an account opening in his book and I have an account opening in my book, so that’s a good collision of examples. And basically, what I look at from an orchestration perspective is the whole journey from the customer applying until the bank account is opened, because that’s the responsibility of one component, one microservice, if you will, or however you want to implement that, and that should be orchestrated.

He looks at, okay, the customer needs to be put into the loyalty points bank, he needs an account there, he probably needs a whatever letter, certain things that are not in the responsibility of that core opening process. And then events are great, because then you could just push an event on the bus saying, “Hey, the customer is created”. And then you could build different components reacting to that saying, “Okay, then we do this”, but if nobody reacts, nothing happens. The bank account is still opened. Your COO will not come to you asking, “Why is the customer not in the loyalty points bank?” It’s a different responsibility, and this kind of trade-off, I think, is very important to think about.
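The account-opening example might be sketched like this (again an editorial illustration with hypothetical names, not code from either book): the core process is orchestrated and must succeed, while downstream reactions are optional subscribers that the core process does not depend on.

```python
# Hypothetical sketch of the account-opening example.
subscribers = []  # stand-in for a message bus

def open_account(customer):
    """Core responsibility: open the account, then announce it fire-and-forget."""
    account = {"owner": customer, "open": True}
    for react in subscribers:                # zero subscribers is fine --
        react("customer_created", customer)  # the account is still opened
    return account

# An optional, independently-owned reaction (e.g. loyalty points).
loyalty_members = []
def loyalty_reaction(event, customer):
    if event == "customer_created":
        loyalty_members.append(customer)
subscribers.append(loyalty_reaction)

account = open_account("alice")
```

The point of the trade-off: removing `loyalty_reaction` changes nothing about whether the account opens, which is exactly why an event fits here and an orchestrated call would not.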

The Real Problem Is Most Systems Are Architectural Spaghetti [21:19]

Michael Stiefel: So, if I could summarize what you said in a sentence or two, and tell me if I’ve got it right, it’s basically, from a business perspective, where the responsibility lies?

Bernd Ruecker: Yes. Interesting side note here: while this is one of the things I think I’ve discussed the most at conferences over the last years, when I look into our customer base, it’s still not the predominant problem. It might be discussed, but the predominant problem is that they have integration spaghetti, where systems are talking to each other in a totally uncontrolled manner and you have no way of really extracting the end-to-end business process. It’s already hard to understand it, which makes it almost impossible to adjust it, and that’s a big, big problem at the moment because there is so much change. You might want to improve customer experience, you might want to add new services, you might want to infuse AI everywhere, and they mostly have no idea how the process is currently running and they can’t touch it. And I think that’s a much bigger problem at the moment, to be honest.

The Public Does Not Understand How Fragile These Systems Are [22:26]

Michael Stiefel: Yes. Well, it’s also a big problem from the public’s point of view, because they don’t understand how fragile these software systems actually are. I don’t know if you’re familiar with it, but in the United States a security vendor for some Microsoft systems shipped an update with a bug, and this caused tremendous outages at the airlines; Delta especially took days to recover, and there’s all kinds of finger pointing going back and forth. But the public really doesn’t understand how hodgepodge or fragile these software systems are. And when you talk about this from the IT perspective, I don’t think upper management always realizes how fragile these systems are, either. Do you have any feeling, one way or the other, about how to explain this to people, whether upper management or the public?

Bernd Ruecker: I think that’s, honestly, one of the core areas we look at when we talk about business processes and process orchestration, because it’s exactly that connection between business and IT. The IT side is implementing the process, okay, but the business side needs to understand the end-to-end process. And if I talk to the business, they have a different language. They talk about, whatever, business model canvas, customer journeys, value streams, these kinds of things. We can also get the business to a level where we talk about an end-to-end business process and probably business capabilities. And then we want to link that to the implementation, and the process, for me, is one of the key elements to do that, because that’s also what a business person understands. And then as soon as you have that running, you can even link data to that, yes.

The Architect is the Translator Between Business and IT [24:12]

Michael Stiefel: You hit on one of what I think is a key role of an architect, because the architect is the person who can talk to the business people about business and the technical people about technology. They’re sort of the interpreter of the two different worlds.

Bernd Ruecker: Yes, 100%. When you ask about architecture, what always comes to my mind first is that there are different levels to being an architect. So I started, I would say, as a solution architect, looking at a single solution only, which is a bit different from working at the enterprise architect or business architecture level. But I totally agree, especially the latter two should be able to. I like Gregor Hohpe’s metaphor, actually, of the architect elevator: you have to ride the elevator up and down, from the engine room to the penthouse, and you have to be able to talk to all of them.

Michael Stiefel: Which means that one of the skills of an architect is, even if because of their generalization they lose some of their technical chops, they still have to keep their intuition about what is technically right and wrong.

Bernd Ruecker: I agree, and I am still a big fan. I mean, I also have a lot of discussions nowadays about business value, for example, how to express business value, and I understand why it’s so important to do that, right? At the same time, I try to program, probably not on a daily basis anymore, but at least on a weekly basis. I need to touch code very regularly to keep exactly that feeling you’re talking about, so yes, 100%.

Michael Stiefel: Yes, yes. Good. So you mentioned about processes in the enterprise and you wanted to get back to that. Is there anything that comes to mind when you think about that?

Bernd Ruecker: I think I touched on that briefly. So what we are currently seeing is a lot of change in organizations. That’s probably not news. AI, or agentic AI, or whatever the buzzword of the day is, puts a lot of pressure on them, and we see organizations struggle with that. And what we also see them doing, if you look at it from an enterprise perspective, is that very often they try to optimize things in local areas, like in that one application, or, I use a, whatever, robotic process automation bot over here, I want to use that AI tool over there. But very often that’s local because that’s how the teams are siloed in that organization, very often the business teams. And then they start to do projects which, on a local level, might make sense because they have an ROI: oh, we automated that, we saved manual labor, there’s an ROI.

But zooming out, and that’s what a lot of companies miss, zooming out and looking at these end-to-end processes, what they are doing isn’t necessarily good. Either it doesn’t have any influence on the overall end-to-end process, or it can even cement certain bad behaviors, bad patterns, like making it harder to change that overall process, and it definitely doesn’t improve things end-to-end. There are tons of examples. We have seen, for example, AI used to handle support requests that you could avoid entirely if you made the overall application process much smoother. Then, of course, you should invest there and not in making the support better, even if that looks good on an isolated basis.

And all these kinds of local optimizations, I think, are a huge problem. So we need to elevate that discussion to understand the value stream and the business process, and for me, process orchestration is kind of the key element to make that happen, because it can actually look at the process, make the connection to really executable software, and then orchestrate different pieces, different tasks, together. And that’s super powerful if you look at it on an enterprise level. So not so much, in my local project, how can I orchestrate the process better? We have that as well, but really looking at it more holistically. And that’s what we are currently doing with quite a few customers, which is super interesting.

Michael Stiefel: I don’t know if you’re aware of this or not, but this is actually a broad economic phenomenon. I don’t know if you’re familiar with something called Pareto optimization, but the argument there is very much the one that you mentioned, is that if you want to optimize something and you can’t do local optimizations, you generally have to reoptimize the whole system to get an incremental optimization. And what you’re describing actually fits into a larger economic system, and you seem to say that process orchestration is a way of forcing that issue to the front.

Bernd Ruecker: Yes. And it’s a way of probably solving the riddle, at least to some extent. I mean, that’s not the only piece, but I think it’s a centerpiece. And it’s something, again, which you can make a CIO or a CEO or a COO understand because business process is not-

Michael Stiefel: A foreign language.

Bernd Ruecker: I think that’s one problem of the microservices community, for example: it’s discussed in IT, and IT only. If I go to a business process, business capability level, I can talk to the business as well. And that’s a different thing, because you need them. I mean, that’s actually why the business exists. So you need them, and that’s me saying that as kind of an engineer at heart: you need them, of course, on board to make transformation happen.

The Technology Is Fun Trap [29:24]

Michael Stiefel: I think, in my career, I’ve had the same experience: it’s amazing how many of the technology people think only in terms of technology and don’t understand the business reason for doing something. After all, it’s the business that’s paying, it’s the business that’s driving. I mean, yes, I’m fascinated with technology, too, but the idea is that it’s in service of a goal.

Bernd Ruecker: Yes. I totally agree. I thought about that quite a bit, as well, because for me it was kind of a transformation, a journey, I would say, personally because I was super nerdy and enthusiastic about technology for a long time.

Michael Stiefel: It’s fun. Technology is fun.

Bernd Ruecker: It is, and I loved it. I mean, we built an orchestration engine that can scale across geographically-distributed data centers without adding latency while keeping the same throughput, for example, so that’s obviously awesome and exciting.

Michael Stiefel: But you tell that to a business person and they go, “Duh”.

Bernd Ruecker: Yes, it’s like, “Duh”.

Michael Stiefel: Right.

Bernd Ruecker: On georedundancy, they might understand, but not how we did it.

Michael Stiefel: Right.

Bernd Ruecker: But I still had that progression, I would say, to understand why business value is important, and then also starting to understand how you can explain it better. Because very often it’s an indirect story you have to tell: why this technology enables this, what it can do in a software application, and how that will increase, whatever, customer experience, and then you can talk to them. And that’s so important. And, I would say, it’s probably a typical journey a lot of people go through, and I think that’s probably a good thing. I want to keep all those enthusiastic 20-year-old people that love technology. They are awesome, and we need them.

Architecture and Development are Two Equal Roles [31:05]

Michael Stiefel: And maybe some of them will stay that way for 40 years.

Bernd Ruecker: Yes, it’s also fine. Yes, yes, yes, true.

Michael Stiefel: I think it’s a mistake to think that everybody has to become an architect. To be a super developer is great, too.

Bernd Ruecker: It is. It is. And for me, for example, I made a very conscious decision not to go into people management. And the architect role, for example, is also a path you could take where you say, “I don’t want to manage people, but I want to get a broader view”, and, whatever, probably make bigger decisions, if you will.

Michael Stiefel: That must be tough running a company.

Bernd Ruecker: You have to find the right co-founder. Again, a piece of luck I had in my life.

Michael Stiefel: But also, it’s important to explain that story to the technology people, why they can’t fall in love with this piece of technology, and they have to do it maybe the old-fashioned way because that’s better for the business.

Bernd Ruecker: Yes. It’s not always easy. You probably know that there’s a blog post called Choose Boring Technology, or something like that, and that’s what it’s very often about.

How Do You Train Developers? [32:06]

Michael Stiefel: Yes. So two questions sort of come to mind. They’re separate questions, but I think they touch on things you’ve raised that are important. How do you train developers? If developers have trouble understanding when to use events and choreography, when to orchestrate, and how to deal with distributed systems and asynchronous systems, how do you train new developers to understand this?

Bernd Ruecker: There is, as always, nuance to that. So there are things where I just think people have to be made aware by trying things out, and that’s, I would say, for the smaller problems, for example, mastering asynchronous communication, handling consistency problems, like transactional problems, long-running things. Those you can train by doing examples, probably letting them write things, probably reviewing stuff, so that-

Michael Stiefel: And wonder why their system’s hanging.

Bernd Ruecker: Yes. I even personally learned the trouble with two-phase commit early on in my life, in a real-life scenario. I mean, that’s something that sticks with you afterwards. So that’s okay. I would say the more architectural questions, the bigger, hairy ones like event-driven versus orchestration, which might influence not only one communication link but probably how you lay out the whole architecture, that’s probably not for a newbie. That’s probably something where it’s good to have some experienced people on board. We very often do coaching, but when I talk to people, it’s very often architects that have already seen something; otherwise it might be hairy.
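For readers who have not hit the two-phase-commit trouble themselves: in long-running processes, the usual alternative is compensation (the saga pattern), where each completed step registers an undo action and, on failure, the already-completed steps are reversed. A minimal editorial sketch, with hypothetical step names not taken from the interview:

```python
# Hypothetical sketch of compensation (saga-style) instead of two-phase commit:
# each completed step registers an undo action; on failure, run them in reverse.

def run_with_compensation(steps):
    """steps: list of (do, undo) pairs. Returns True if all steps succeeded."""
    done = []
    try:
        for do, undo in steps:
            do()
            done.append(undo)
        return True
    except Exception:
        for undo in reversed(done):  # compensate only what already happened
            undo()
        return False

log = []
def fail_payment():
    raise RuntimeError("payment failed")

steps = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (fail_payment, lambda: log.append("refund payment")),
]
ok = run_with_compensation(steps)
```

Unlike two-phase commit, nothing is locked while the process runs, which is what makes this workable for processes that take hours or days; the price is that intermediate states are visible and the undo actions are business logic you have to write.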

Michael Stiefel: So sort of what you’re saying is that gradual exposure to increasingly more complex situations is basically the way people learn?

Bernd Ruecker: Yes, probably. I would say so.

The Importance of Visual Tools [33:51]

Michael Stiefel: And the other thing you talked about is sort of your love, if that’s too strong a word, but your appreciation of visual communication and visual tools.

Bernd Ruecker: Oh, I would say love is a good word.

Michael Stiefel: Okay. I think, given that people very often don’t appreciate this, and I know from the work that I’ve done with workflow and the little work that I’ve done with process orchestration, visuals are very, very important, especially with long-running processes that you can’t necessarily get in your head completely.

Bernd Ruecker: Yes. It’s a problem domain that you can express very well with visuals. It’s a flow of activities, probably some different paths, then there are events happening where you go somewhere else, so it’s a problem that fits well in a visual. And to be honest, I don’t get why people are so against it. Every time I discuss a process with people that don’t like visuals, the first thing they do is take a whiteboard and draw it.

Michael Stiefel: Yes.

Bernd Ruecker: It’s like, because otherwise you can’t follow. So I personally don’t get it. And then I sometimes get arguments like, “But how can I do diffing?” Yes, you can do visual diffing. I’m not sure that’s the most important question, but you could do it. So no, I just don’t get it.

Michael Stiefel: There’s a limit to the amount of complexity the human mind can conceive of, and these are not necessarily linear processes, and there’s feedback. It seems almost perfect for a diagram.

Bernd Ruecker: Yes, I agree. And same thing, you probably need different levels of abstraction. For example, you can do subprocesses, or certain things that might happen you might want to handle in a different process, or some things you might still want to handle in code, so you can play with that. It’s one tool in your tool belt, and if it fits, it simply should be used.

The Architect’s Questionnaire [35:39]

Michael Stiefel: Yes. So one of the things I like to do is ask all the architects that appear on the podcast the architect’s questionnaire, which gets into the more human side of being an architect. So what’s your favorite part of being an architect?

Bernd Ruecker: I think I said that right at the beginning: seeing the big picture. Not going super deep into one element, but understanding the different pieces and how they are connected. Probably also how they are connected to the business strategy, how they are connected to the technical implementation. So I think that’s my favorite part, I would say.

Michael Stiefel: So what is your least favorite part of being an architect?

Bernd Ruecker: To some extent, that’s probably connected to my current role, which means I’m very often talking to a lot of companies, but always as an external. And I think that’s also true for architects in organizations: you’re typically not part of the team. You’re very often an outsider, which I can live with, but it’s probably my least favorite thing about it.

Michael Stiefel: Is there anything creative, spiritual, or emotional about architecture or being an architect?

Bernd Ruecker: I love that question. I mean, the creativity part, I think, is clear. I think it’s super creative. You very often have to find elegant solutions for very complex problems and communicate them in a way that’s easy to understand. That’s very creative, and I love doing that. It always makes you a little bit happy if you see that it seems to be understood and works. And the spiritual or emotional element is probably harder to answer, but for me, again, I would say happy is kind of the thing. If you see that it’s working out, if you see that people are understanding it, if you see that whatever you draw helps people understand things or implement them, it’s just a good feeling.

Michael Stiefel: The world is a little better off incrementally. What turns you off about architecture or being an architect?

Bernd Ruecker: I would say it’s the typical perception of architects as kind of the naysayers, the no-sayers, the governance entity, the police, if you will, which I think is not true, at least not for good architects, but it’s the one thing I hear very often.

Michael Stiefel: Do you have any favorite technologies?

Bernd Ruecker: That’s easy. Of course, BPMN. Of course, Camunda. I contributed to a couple of other workflow engines in the past as well, so there’s great technology there. Otherwise, I would say simply programming. I am a Java person; I started with Java and I like Java. I like typed languages, to be honest. I like the dependency management, the tooling, the ecosystems, the open source around it.

Michael Stiefel: What about architecture do you love?

Bernd Ruecker: Probably repetition. I would say breaking things down into manageable pieces, solving a complex problem. I would say that’s more or less it.

Michael Stiefel: And what about architecture do you hate?

Bernd Ruecker: Discussions. Too long discussions, too many discussions, not fruitful discussions. I know it’s part of the thing, and to some extent I like it and it makes things better, but at some point in time it’s too much.

Michael Stiefel: What profession, other than being an architect, would you like to attempt?

Bernd Ruecker: I haven’t found any so far. I mean, co-founding the company, I had a couple of roles in the past, but it was always architecture related, to be honest. I think my absolute favorite was kind of what we now call a solution engineer, doing proofs of concept: flying in to the customer for three days, getting something to work in their environment, and leaving. You don’t have to finish it, but you can draw the boxes to sketch the idea and prove that it really works, so that was always my favorite.

Michael Stiefel: Do you ever see yourself as not being an architect anymore?

Bernd Ruecker: Probably, let’s see. I mean, as I said early on, what I experience at the moment is that I had that phase of my life, I would say 10 years, where I was so enthusiastic about technology. When I was at a conference and picked up something, some keyword, I had to Google it, I had to try it out. I was sitting in the hotel room in the evening hacking on something to understand it. To some extent, that has changed a little bit, probably also with family and kids and, whatever, not so much time, and I see the next generation is still there doing that, probably now with AI and stuff. So I kind of foresee probably doing more mentoring than architecture actively myself, but not yet.

Michael Stiefel: And when a project is done, what do you like to hear from the clients or your team?

Bernd Ruecker: When I read the question, my first thought was, it’s not really an answer, but my first thought was: don’t do projects anymore. Go into the product-thinking world; that’s where you should be, actually. A product is never done. If you do software, it should be kind of a product, it should be maintained, it should have a life cycle, it should keep doing things. So that’s one thing. But I totally get the question.

The other thing, and that’s probably, again, because of where I am in my life currently: I would like to hear from them, what’s the value of what we just did? What’s the story that connects it to the business value? What could we tell everybody in the company about why we did what we just did, and why is this great? I think if they achieve that, for me, that would be even better than, quote unquote, just having a great technical solution.

Michael Stiefel: Well, thank you very much. I found this conversation fascinating, and I think you’ve illuminated many dark corners for people about both orchestration and how to think about architecture, so thank you very much.

Bernd Ruecker: Thanks for having me, Michael.


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



Spotting Winners: Couchbase (NASDAQ:BASE) And Data Storage Stocks In Q2


As the craze of earnings season draws to a close, here’s a look back at some of the most exciting (and some less so) results from Q2. Today, we are looking at data storage stocks, starting with Couchbase (NASDAQ:BASE).

Data is the lifeblood of the internet and software in general, and the amount of data created is accelerating. As a result, the importance of storing the data in scalable and efficient formats continues to rise, especially as its diversity and associated use cases expand from analyzing simple, structured datasets to high-scale processing of unstructured data such as images, audio, and video.

The 5 data storage stocks we track reported a strong Q2. As a group, revenues beat analysts’ consensus estimates by 2.5% while next quarter’s revenue guidance was 1.3% above.

After much suspense, the Federal Reserve cut its policy rate by 50bps (half a percent) in September 2024. This marks the central bank’s first easing of monetary policy since 2020 and the end of its most pointed inflation-busting campaign since the 1980s. Inflation had begun to run hot in 2021 post-COVID due to a confluence of factors such as supply chain disruptions, labor shortages, and stimulus spending. While CPI (inflation) readings have been supportive lately, employment measures have prompted some concern. Going forward, the markets will debate whether this rate cut (and more potential ones in 2024 and 2025) is perfect timing to support the economy or a bit too late for a macro that has already cooled too much.

Luckily, data storage stocks have performed well with share prices up 15.2% on average since the latest earnings results.

Couchbase (NASDAQ:BASE)

Formed in 2011 with the merger of Membase and CouchOne, Couchbase (NASDAQ:BASE) is a database-as-a-service platform that allows enterprises to store large volumes of semi-structured data.

Couchbase reported revenues of $51.59 million, up 19.6% year on year. This print was in line with analysts’ expectations, but overall, it was a mixed quarter for the company with full-year revenue guidance exceeding analysts’ expectations but a miss of analysts’ billings estimates.

“I’m pleased with our hard work and execution in the quarter,” said Matt Cain, Chair, President and CEO of Couchbase.

Couchbase Total Revenue

Couchbase delivered the weakest performance against analyst estimates and weakest full-year guidance update of the whole group. Unsurprisingly, the stock is down 14.4% since reporting and currently trades at $16.24.

Is now the time to buy Couchbase? Access our full analysis of the earnings results here, it’s free.

Best Q2: Commvault Systems (NASDAQ:CVLT)

Originally formed in 1988 as part of Bell Labs, Commvault (NASDAQ: CVLT) provides enterprise software used for data backup and recovery, cloud and infrastructure management, retention, and compliance.

Commvault Systems reported revenues of $224.7 million, up 13.4% year on year, outperforming analysts’ expectations by 4.2%. The business had an exceptional quarter with an impressive beat of analysts’ billings estimates and full-year revenue guidance exceeding analysts’ expectations.

Commvault Systems Total Revenue

Commvault Systems scored the biggest analyst estimates beat among its peers. The market seems happy with the results as the stock is up 30.7% since reporting. It currently trades at $161.12.

Is now the time to buy Commvault Systems? Access our full analysis of the earnings results here, it’s free.

Weakest Q2: Snowflake (NYSE:SNOW)

Founded in 2013 by three French engineers who spent decades working for Oracle, Snowflake (NYSE:SNOW) provides a data warehouse-as-a-service in the cloud that allows companies to store large amounts of data and analyze it in real time.

Snowflake reported revenues of $868.8 million, up 28.9% year on year, exceeding analysts’ expectations by 2.1%. Still, it was a slower quarter as it posted a miss of analysts’ billings estimates.

As expected, the stock is down 8% since the results and currently trades at $124.19.

Read our full analysis of Snowflake’s results here.

DigitalOcean (NYSE:DOCN)

Started by brothers Ben and Moisey Uretsky, DigitalOcean (NYSE: DOCN) provides a simple, low-cost platform that allows developers and small and medium-sized businesses to host applications and data in the cloud.

DigitalOcean reported revenues of $192.5 million, up 13.3% year on year. This number surpassed analysts’ expectations by 2%. Overall, it was a very strong quarter as it also recorded full-year revenue guidance exceeding analysts’ expectations and a solid beat of analysts’ ARR (annual recurring revenue) estimates.

DigitalOcean scored the highest full-year guidance raise among its peers. The stock is up 48.7% since reporting and currently trades at $43.28.

Read our full, actionable report on DigitalOcean here, it’s free.

MongoDB (NASDAQ:MDB)

Started in 2007 by the team behind Google’s ad platform, DoubleClick, MongoDB offers database-as-a-service that helps companies store large volumes of semi-structured data.

MongoDB reported revenues of $478.1 million, up 12.8% year on year. This print surpassed analysts’ expectations by 3%. Overall, it was a very strong quarter as it also recorded an impressive beat of analysts’ billings estimates and full-year revenue guidance exceeding analysts’ expectations.

MongoDB had the slowest revenue growth among its peers. The company added 52 enterprise customers paying more than $100,000 annually to reach a total of 2,189. The stock is up 19.2% since reporting and currently trades at $293.01.

Read our full, actionable report on MongoDB here, it’s free.




How to invest in US cloud software stocks? Piper Sandler strongly recommends “MAC”


According to the Zhītōng Finance APP, Piper Sandler believes that Adobe (ADBE.US), MongoDB (MDB.US), and Salesforce (CRM.US) are the cloud software stocks most worth investing in.

Piper Sandler analysts led by Brent Bracelin stated: “We recommend that large-cap growth investors take on more risk and increase their positions in ‘MAC’.” MAC refers to Adobe, MongoDB, and Salesforce.

Bracelin added: “These are stocks that have underperformed market expectations, with prices still 26% below their 52-week highs on average, despite having healthy product and profit catalysts as well as favorable risk-reward profiles.”

After hitting a low point in the second quarter, MongoDB may accelerate its growth going forward. MongoDB’s stock price has fallen 29% this year. Piper Sandler believes that, as a high-quality database franchise, the stock can recover that 29% decline over the next few quarters and reach $335.

Adobe’s situation is similar, and with the benefit of its latest artificial intelligence products, the stock is expected to recover this year.

Piper Sandler pointed out: “The new innovation product cycle is underestimated, which may help revitalize growth.”

Salesforce’s stock price has risen by 9% this year, but has fallen by nearly 5% in the past six months.

Bracelin noted: “We believe Salesforce will rise by 11% to $325, and in a bullish scenario, it could rise by 39% to $405.”

Piper Sandler rates Adobe as “overweight” with a target price of $635; rates Salesforce as “overweight” with a target price of $325; and rates MongoDB as “overweight” with a target price of $335.
