MongoDB, Inc. (NASDAQ:MDB) Shares Sold by Andra AP fonden – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Andra AP fonden reduced its stake in MongoDB, Inc. (NASDAQ:MDB) by 25.4% in the second quarter, according to its most recent disclosure with the Securities & Exchange Commission. The firm owned 5,000 shares of the company’s stock after selling 1,700 shares during the period. Andra AP fonden’s holdings in MongoDB were worth $1,250,000 as of its most recent SEC filing.

Several other institutional investors also recently made changes to their positions in the stock. Vanguard Group Inc. grew its position in MongoDB by 2.9% in the 4th quarter. Vanguard Group Inc. now owns 6,842,413 shares of the company’s stock worth $2,797,521,000 after purchasing an additional 194,148 shares during the last quarter. Atalanta Sosnoff Capital LLC increased its position in MongoDB by 24.7% in the 4th quarter. Atalanta Sosnoff Capital LLC now owns 54,311 shares of the company’s stock valued at $22,205,000 after acquiring an additional 10,753 shares during the period. Artisan Partners Limited Partnership acquired a new stake in MongoDB in the 4th quarter valued at approximately $10,545,000. Prudential PLC increased its position in MongoDB by 2.4% in the 4th quarter. Prudential PLC now owns 21,169 shares of the company’s stock valued at $8,655,000 after acquiring an additional 489 shares during the period. Finally, Bornite Capital Management LP acquired a new stake in MongoDB in the 4th quarter valued at approximately $6,133,000. 89.29% of the stock is owned by institutional investors.

MongoDB Stock Up 0.7%

Shares of NASDAQ:MDB traded up $2.21 during midday trading on Friday, hitting $297.39. 24,066 shares of the stock were exchanged, compared to its average volume of 1,495,210. The company has a debt-to-equity ratio of 0.84, a quick ratio of 5.03 and a current ratio of 5.03. MongoDB, Inc. has a 52-week low of $212.74 and a 52-week high of $509.62. The business’s fifty day simple moving average is $255.38 and its two-hundred day simple moving average is $305.33. The company has a market capitalization of $21.81 billion, a PE ratio of -105.05 and a beta of 1.15.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings data on Thursday, August 29th. The company reported $0.70 earnings per share (EPS) for the quarter, topping the consensus estimate of $0.49 by $0.21. MongoDB had a negative return on equity of 15.06% and a negative net margin of 12.08%. The business had revenue of $478.11 million during the quarter, compared to analyst estimates of $465.03 million. During the same period last year, the company earned ($0.63) earnings per share. MongoDB’s quarterly revenue was up 12.8% compared to the same quarter last year. As a group, research analysts predict that MongoDB, Inc. will post -2.46 earnings per share for the current fiscal year.

Wall Street Analysts Weigh In

Several equities analysts have recently issued reports on MDB shares. Wells Fargo & Company lifted their price target on MongoDB from $300.00 to $350.00 and gave the company an “overweight” rating in a research report on Friday, August 30th. Mizuho raised their price objective on MongoDB from $250.00 to $275.00 and gave the stock a “neutral” rating in a research note on Friday, August 30th. Barclays reduced their price objective on MongoDB from $458.00 to $290.00 and set an “overweight” rating for the company in a research note on Friday, May 31st. Needham & Company LLC raised their price objective on MongoDB from $290.00 to $335.00 and gave the stock a “buy” rating in a research note on Friday, August 30th. Finally, Scotiabank raised their price objective on MongoDB from $250.00 to $295.00 and gave the stock a “sector perform” rating in a research note on Friday, August 30th. One equities research analyst has rated the stock with a sell rating, five have assigned a hold rating and twenty have issued a buy rating to the stock. According to data from MarketBeat.com, MongoDB currently has an average rating of “Moderate Buy” and an average target price of $337.56.


Insider Buying and Selling at MongoDB

In other MongoDB news, Director Dwight A. Merriman sold 3,000 shares of the firm’s stock in a transaction dated Tuesday, September 3rd. The stock was sold at an average price of $290.79, for a total transaction of $872,370.00. Following the transaction, the director now directly owns 1,135,006 shares in the company, valued at $330,048,394.74. Also, CAO Thomas Bull sold 138 shares of MongoDB stock in a transaction dated Tuesday, July 2nd. The stock was sold at an average price of $265.29, for a total value of $36,610.02. Following the transaction, the chief accounting officer now directly owns 17,222 shares in the company, valued at $4,568,824.38. Both transactions were disclosed in filings with the Securities & Exchange Commission, which are available on the SEC website. Insiders have sold a total of 33,179 shares of company stock worth $8,346,169 over the last three months. Corporate insiders own 3.60% of the company’s stock.

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

See Also

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)





Podcast: Engineering Excellence: Declan Whelan on Technical Health, Agile Practices, and Team Culture

MMS Founder
MMS Declan Whelan

Article originally posted on InfoQ. Visit InfoQ

Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today, I have the privilege of sitting down with Declan Whelan. Declan, welcome. Thanks for taking the time to talk to us today.

Declan Whelan: Pleasure to be here, Shane, so nice to see and hear your voice again, and thank you for having me.

Shane Hastie: My pleasure. It’s been a while. You and I obviously know each other, but I suspect a fair bunch of our audience haven’t come across you and your work, so who’s Declan?

Introductions [01:09]

Declan Whelan: Fair, good question. I live in Canada and I’m an electrical engineer by training, and I’ve been involved in software my entire career. I’ve opted for a career path that always kept me close to code, bits under my fingernails. Along that journey, I’ve been a coach, I’ve been an individual contributor, and I’ve been the CTO of two different start-ups. I just love doing technical work, especially in an agile context. I still do a lot of training, some coaching, and a lot of coding whenever I can.

Shane Hastie: One of the things you’ve been known for over the years, and it must be a decade at least that you and I have known each other, is a focus on strong technical practices in teams, and talking a lot about technical health as opposed to technical debt. What does technical health look like?

Technical Health over Technical Debt [02:12]

Declan Whelan: It’s a good question. Well, let me start with technical debt, because as you know, and I suspect a lot of the listeners would know, that was a term invented by Ward Cunningham back in, I’m going to say, the late 1990s. It was really a mechanism to explain how, if you take shortcuts in your software development in the short term, you may get some immediate benefit, just as you would by taking out a loan: you would have some cash in hand with which you could purchase something. If you took a bank loan, you might be able to buy a car or a house.

In technical terms, you might be able to deliver more features to customers by taking shortcuts. Ward drew a financial metaphor for that action: with your technical practices, you can choose to take shortcuts, and you will have to pay back the reaper at some point in time.

I found that metaphor to be really strong, but as with any metaphor or analogy, it has its shortcomings. I initially wanted people to think about technical debt in a more positive light, because debt has a negative connotation. You’re in debt, I’m in debt, and being out of debt is a good thing. But we know from technical work that you’re never out of debt; there’s always some work to be done. I decided that having a more positive term would be better, so I invented the term technical health as really just a different way of reframing technical debt.

Since I’ve used the term, I’ve found some unanticipated benefits to it, and one of them is that you can think about, let’s say, your own personal health. If you’re a world-class athlete who just participated in the 2024 Olympics, your notion of what is healthy is going to be very different from, say, yours or mine, where maybe we’re just looking to stay fit and keep our blood pressure at a certain level, and so on.

The point being that different systems or different contexts have different things that are applicable and important to them. I’ve been using the term technical health as a way to expand people’s thinking about their technical practices and what’s important to them in order to achieve the business outcomes they want. And then recognizing that we all have health, regardless: you can choose to have debt or not, but we all have health, and I think that’s what happens when we build systems.

There’s always going to be some work that you wish you had done differently, or would choose to do better now that you’ve done it once and know how to do it better. That’s the technical health term. I wanted it to be more holistic and more relatable so that people could actually take more positive action with it. That was how I came up with the term and that’s where it led me. Does that make sense?

Shane Hastie: It does. Every system will have technical health, and it is a result of the environment, of the system they’re in, as you said. How do we improve that system, or should we improve that system, to make the technical health of our products better?

Use metrics to help improve the organisational system to improve technical health [05:31]

Declan Whelan: Yes, great question. One of my more recent clients asked a similar question: how do we know that our technical practices are actually effective? A lot of companies would look at things like static code analysis, cyclomatic complexity, code coverage, and things like that, and those are good, but they’re not customer focused. Those things don’t matter to customers. What has shifted, and I would credit the DevOps movement with a lot of that, is really starting to think about things as they touch customers. Teams now are responsible for delivering to customers. The shift for me, in terms of what you might do, would be first of all to measure it. I think right now the metrics that I see most widely used and available would be the DORA metrics, and they happen to be really useful.

In case anyone doesn’t know, those would be deployment frequency, lead time for change, change failure rate, and mean time to recovery. Now, those just happen to be four metrics, and you could choose a whole slew of other ones, but those four in particular have strength because of the DevOps work that has been done, and these four metrics have bubbled out as being key differentiators for organizations that are delivering well. If you were to stick with DORA metrics, for example, and looping back to your question of how you would go about improving your technical health, it could be starting to measure these and decide, “Oh, wait, do we want to increase our deployment frequency? What’s standing in the way of deployment frequency? What’s slowing us down?”

Technical debt then just becomes the things in the code that slow you down, and technical health is more expansive: it’s about how well our system is working. When I’m asked the question you asked, which is really a good one, the answer is yes, start to measure. If you’re not measuring DORA metrics, DORA metrics to me are a really good indicator of how well you’re building things technically.

Then beyond that, I’ve more recently become interested in value streams and having overall flow metrics. For example, SAFe® now has six flow metrics, though I don’t have a lot of experience with those. I’m just starting to read a book called Flow Engineering from Steve Pereira and another author. It really expands on the DORA metrics to focus more on the overall flow.

For example, in the DORA metrics, lead time for change is measured from the time you commit code to when it goes into production; it doesn’t include the time for business analysis and understanding product requirements. An even better metric would be the overall cycle time or lead time, whatever works for you, measured from when the work actually comes from customers to when it touches customers, so the full cycle. Other times I’ve worked with companies that were all focused on their code coverage or something, and those are good, but they’re all internal. What’s really important is how frequently you’re able to ship value to customers, so focus on that, basically.
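The four DORA metrics Declan lists can be made concrete with a small calculation over deployment records. A minimal sketch in Python, using hypothetical data and field layouts (the record shapes are illustrative, not from any particular tool):

```python
from datetime import datetime

# Hypothetical deployment records: (deploy time, commit time, caused a failure?)
deploys = [
    (datetime(2024, 9, 2, 10), datetime(2024, 9, 1, 15), False),
    (datetime(2024, 9, 4, 9),  datetime(2024, 9, 3, 11), True),
    (datetime(2024, 9, 6, 14), datetime(2024, 9, 5, 16), False),
]
# Hypothetical production incidents: (start, resolved)
incidents = [(datetime(2024, 9, 4, 9), datetime(2024, 9, 4, 11))]

days_observed = 7

# Deployment frequency: deploys per day over the observation window
deploy_frequency = len(deploys) / days_observed

# Lead time for change: mean seconds from commit to deploy
lead_time = sum((d - c).total_seconds() for d, c, _ in deploys) / len(deploys)

# Change failure rate: share of deploys that caused a failure in production
change_failure_rate = sum(1 for *_, failed in deploys if failed) / len(deploys)

# Mean time to recovery: mean seconds from incident start to resolution
mttr = sum((end - start).total_seconds() for start, end in incidents) / len(incidents)
```

As Declan notes, the lead time here starts at commit; extending the window back to when the request came from the customer gives the fuller cycle-time picture.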

Shane Hastie: That’s the shift from internal metrics like cyclomatic complexity to customer-focused metrics like DORA, and then even thinking end to end. Haven’t we talked about shift left and shift right for a long time?

Challenges in Shifting Left and Right [08:51]

Declan Whelan: That’s true. We still need to do it. I think some of the leading companies are certainly doing well in that regard, but most companies I encounter really struggle with shift left and shift right. Those are both big shifts to make. I still see, I don’t know if you see it, but I still see a lot of separation, for example, between QA and development. In some organizations, even though they might have those members on their teams, they report differently and they’re measured differently. I’ve even worked with teams where they say they’re a team, but QA is working in a different repo than the rest of the team. You may as well call them a team in name only because of the separation. Yes, it is a shift left, but I think that’s still a really challenging shift for many organizations; even though they’re trying, they still struggle. At least that’s what I see. Do you see differently?

Shane Hastie: I do see it and I just wonder why. What is it in our systems that is making that shift left, that true value stream, the true team, what’s making it so hard?

Declan Whelan: I can only report on what I’ve seen, but one thing is definitely organizational. I worked with a company that was 150 years old, about 10 years ago. I was working with the development team and I was interested in talking to their testers, and I had to make an appointment to meet them. As soon as I met the testers, I was asked which project code we were going to bill the meeting to. I was given a lot of passive-aggressive resistance to talking to the testers, because they reported up the organization differently. Fast forward, say, seven years, and I went back; that’s no longer true. I no longer need a project code to have a conversation with a tester, but they still report differently.

They’re still managing different repos; even though it’s been seven years of working towards teams, they’re still not there. They’re physically sitting on the same team in an org chart, but they’re still working on completely separate streams of work. If you want to shift left, you really need your QA and your devs working collaboratively together. Your QA team can’t do all the heavy lifting on its own. I see organizational barriers between QA and development still standing in the way, and I see that in banking, telecom, and insurance, the more traditional regulated environments where they’re used to being separated. I would say many of these companies still have another decade or more to solve that problem.

Shane Hastie: Can they afford to wait a decade?

Declan Whelan: Well, in Canada, all of those types of companies do quite well. They don’t suffer. They’re probably more profitable than they’ve ever been, in general. Maybe that’s it: they can afford to wait. As for shift right, I don’t have a lot of experience with shifting right, which is basically the idea of being able to chaos-monkey things in production. I would say that until you can shift left, I don’t think you’re ready to shift right. Until you can reliably design, build, and release, it would be awfully difficult to put in the appropriate safeguards to shift right in production. I still think we’re a ways from that in many places I’ve worked, for sure.

Shane Hastie: If we go back to the beginnings of some of these practices, extreme programming: Kent Beck gave us some great ideas in 1999, 2000. There was a period when XP was held up as the way to build software, and then it seemed to fade away. What’s happening with those practices now?

Core Technical Practices from eXtreme Programming [12:36]

Declan Whelan: I grew up like you did; I think I got exposed to Agile through XP, extreme programming, and the technical practices it brought were very different from Scrum. Scrum specifically hasn’t had an opinion on technical practices; in fact, it addresses the world of work, not just software development. There was no guidance from Scrum or Scrum.org or any Scrum material around technical work. By the way, SAFe® and other frameworks will usually have something about technical practices, and they’ll almost always include the extreme programming practices. But where I’ve worked in SAFe®, and I’ve worked in four different SAFe® organizations, I’ve never seen the extreme programming technical practices used, even though they’re actually part of SAFe®. I think with Scrum, it doesn’t matter; they had no opinion.

With SAFe®, even though it does have an opinion, what I’ve seen is that the technical practices are so low in the order of things to take care of; there’s so much in SAFe® that they’re just another line item of things that you may or may not do. I’ve seen companies get so overwhelmed with SAFe® that they never actually get to the technical practices. I was at the Agile 2024 conference in Dallas, and I had exactly the same question that you had. I was like, “Where is Agile going, and where is extreme programming in particular going? Because I like to work in extreme programming. It’s the way I choose to work”. I think there’s one silver lining in what I would call the downturn in Agile, which is people feeling Agile has perhaps run its course.

We’re certainly in a late-majority situation with Agile frameworks now. One of the good things is that people have seen the challenges with it, and I feel a lot of those challenges exist because the lack of technical practices has led companies to accrue technical debt at a rate that just overwhelms them at some point. Certainly, the more traditional companies I’ve seen struggle so hard; putting out something every week or two is an extremely stressful situation for them, and they haven’t shifted their technical practices to enable them to deliver at a more rapid rate. In that way, I feel the lack of engineering practices has been a contributor to what is perceived as the downfall of Agile, or certainly the perception of a downfall.

One thing that really made me feel better when I was in Dallas was a session I went to on FAST, the Fluid Scaling Technology. It was really interesting, and one of the things they are really quite strong about is being opinionated about using the extreme programming practices as the core practices. I think there’s an opportunity for a resurgence in some of the technical practices precisely because some of the traditional Agile approaches haven’t panned out as well as people had hoped. I’m somewhat optimistic about the technical practices right now, so we’ll see. Maybe I’m just a wishful thinker.

Shane Hastie: What are the good technical practices that if I’m setting up an engineering team, this is the way we do things? What does that solid technical core look like today?

Modern Technical Practices Beyond XP [16:04]

Declan Whelan: Well, certainly it’s shifted since Kent Beck came up with extreme programming, but if we start back in that day, there were really four core technical practices in extreme programming. Pair programming, where now I would probably throw in ensemble or mob programming as an extension. Test-driven development, which has always been a somewhat controversial practice. Refactoring, which I would say has become entrenched in the industry; even though people may not be doing it, everyone knows what it is and everyone does it to some extent. And the last one would be simple design, the four rules of simple design.

That is what I usually do: if I’m coaching or training, those are the things I would be introducing to teams. Those are the core; if you’re doing Scrum and you were to add those four practices, I think you would be in really good shape. What that doesn’t include are some of the more modern practices around DevOps: being able to get to continuous delivery and having really good observability in your production systems goes above and beyond what extreme programming had in mind.

We’re into things like feature flags to achieve that. I would say the big shift from extreme programming has been this notion of continuous delivery. The other part, which I’ve only more recently been exposed to, is this notion of real stewardship of the services that we build.

Extreme programming was really focused on building the here and now, but this is really thinking about how I build my services so that they’re maintainable indefinitely into the future. What do I need to build in so that those who follow can maintain them as well as we can? That’s a powerful notion. I think that’s really coming from Team Topologies and the notion that we’re now building more service-oriented systems than we were in the past, and stewardship, I think, is a really nice addition to the way of thinking about technical practices.
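The feature flags mentioned above boil down to a lookup that decouples deploying code from releasing it to users. A minimal in-memory sketch, with made-up flag names; real flag services add targeting rules, persistence, and audit trails:

```python
# Hypothetical flag configuration: each flag can be switched off entirely
# or rolled out to a percentage of users while the code sits in production.
FLAGS = {
    "new_checkout": {"enabled": True, "rollout_percent": 25},
    "dark_mode": {"enabled": False, "rollout_percent": 0},
}

def is_enabled(flag: str, user_id: int) -> bool:
    """Deterministically bucket a user into a percentage rollout."""
    cfg = FLAGS.get(flag)
    if cfg is None or not cfg["enabled"]:
        return False  # unknown or switched-off flags default to off
    # Stable bucketing: the same user always lands in the same bucket,
    # so a 25% rollout shows the feature to a fixed quarter of users.
    return (user_id % 100) < cfg["rollout_percent"]
```

Because the flag check happens at runtime, a feature can be merged and deployed continuously, then turned on gradually (or off instantly) without another deploy, which is what makes flags a continuous-delivery enabler.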

Shane Hastie: Those are some of the architectural shifts, the pendulum that seems to swing back and forth: from large monoliths to microservices, to too many microservices, to distributed monoliths.

Distributed Monoliths [18:19]

Declan Whelan: Most of the companies I’ve worked with have really gone down the microservices or service-oriented approaches, for sure. I think one of the problems with service-oriented architectures that I’ve seen is that usually people will design their service boundaries pretty early. You have to, if you’re going to spin up these services. You have to make decisions about what is their domain and what do they do.

It turns out that with service-oriented systems like that, changing those service boundaries is where you’re going to have a lot of friction, because changes would potentially need to be coordinated with whoever is using your service. I think architectures that allow you to defer your service boundaries, or encapsulate them perhaps within a monolith, seem to me to be good approaches. I’m not an expert.

One technology I’m always drawn to for this is Elixir and Erlang, because they were built so that when you build something, it’s not just a single service but actually a set of cooperating services, making it much easier to roll out changes that might touch multiple parts of your system. Outside of, say, Elixir or something like that which handles this internally, another direction would be moving towards mono repos: even though I’m changing multiple services, I have just one mono repo, so when I make my change I can actually change and eventually deploy systems together, or close together in time.

A mono repo with a single monolith that might contain different services is certainly a direction some companies are going. I’ve worked with one client who was just in the middle of moving to a mono repo, so I haven’t actually seen how well they work in the wild, but I can see the value for companies struggling with microservice boundaries.

Shane Hastie: This is the Engineering Culture podcast, and we’ve been talking a lot about practices here. What are the cultural influences that allow us to get to good technical practices? What makes a good team?

Team Cultures that Enhance Technical Excellence [20:28]

Declan Whelan: For a good team, I’m not sure the cultural aspects would be different whether they were doing technical work or not, is what I’m trying to say. It would certainly be about the collective, having the idea that we’re in this together. Extreme programming had the idea of whole-code ownership: it’s our team that owns this code. And I really like the idea of stewardship, which I think I mentioned already, from Matthew Skelton and Team Topologies, the idea that we need to take care of this not just now, but for ourselves in the future. We don’t want to be woken up at 3:00 AM with beeper calls if we can avoid it, so that idea of high-quality work. I love the word stewardship: really taking care of the products we’re releasing and recognizing that we’re not just building for now, but for the future as well.

Your horizon might differ, it could be three months or a year or whatever, but we need to be cognizant of not accruing too much technical debt, taking care of our future selves. From a culture perspective, I’ve always liked the learning-organization ideas; I’m just rereading The Fifth Discipline, actually. Especially in our field in tech, things are always changing, so we need to be in a position and a culture of continuous learning.

That’s one thing that has always attracted me to the agile extreme programming practices, because they really focus on using pair programming and ensemble programming to share and learn together as a team. The rising tide raises all boats. We’re all going to learn together, and we learn from each other. I may know some keyboard shortcuts that you don’t know, Shane, and you might possibly know a little more about business analysis than I do.

We could learn a lot by working together on things, so there’s this team culture of true collaboration. What I see in a lot of companies is, as I started to say, that we need collaboration over cooperation and coordination. I often see teams being really good at coordinating their work: oh, I’ve got this story, you’ve got that story, and they coordinate that. Then once they’ve figured it out, they divide and separate. They don’t actually work together to solve the problems. I think teams that focus more on collaborative work rather than cooperative or coordinated work also show a really positive cultural aspect.

Shane Hastie: If I can dig in a little bit on pair programming and ensemble programming, one of the core XP practices that was there from the beginning: it is consistently told and shown, and there are metrics that show, that we get a better-quality product, but organizations still object that you’re getting two people to do one person’s work. How do you challenge that?

Pair and Ensemble Programming [23:22]

Declan Whelan: Well, I think there is data that shows it, but it’s not super strong. The evidence is there, but it’s not an order-of-magnitude improvement. In fact, most of the work I’ve seen says something like: with pair programming, you will be as productive as you would have been working singly, but your defects will go down. The work is higher quality, but pairing is not going to impact your speed much either way, to boil down most of the literature I’ve seen. For that, I would say just try it, just experiment, and if it works for you, then go for it. If it doesn’t work for you, then try something else. I think mob programming is more challenging: you could probably get two people to agree to pair program, but getting a whole team on board is even more challenging.

Again, the idea would be: well, let’s try it, but you have to try it for long enough. It might be six weeks or more. In a lot of my coaching work, I will coach with a learning hour if I have the opportunity; or you wouldn’t have to be a coach, you could be a team lead or something and say, “Okay, let’s get together every Friday for an hour and work on a coding problem together”. It could just be a toy exercise to improve the skills. Find ways to make small changes, small moves, and find ways to incorporate that learning into the day-to-day work. For example, one thing I don’t like is improvement sprints at the end. Instead of doing that, why don’t you try something more regular? Experiments, quick, rapid feedback, and if it doesn’t work, try something different.

Shane Hastie: An experiment. Declan, thanks very much. A lot of good advice, good ideas in here. If people want to continue the conversation, where do they find you?

Declan Whelan: Probably the best place is LinkedIn. It’s just Declan Whelan on LinkedIn; I’m easy to find. It’s funny, Shane, I tore my badge in half. Apparently, I do not know how to separate a beer ticket from a conference pass. I was ripping off a drink ticket and I ripped my badge in half, so it only said Declan. Then I realized that’s probably all I need in the Agile community. It’s Declan Whelan, and LinkedIn is probably the best way to reach me.

Shane Hastie: Wonderful. Thanks so much for taking the time to talk to us today.

Declan Whelan: Oh, it’s always a pleasure, Shane. Always lovely to chat with you.




MongoDB, Inc. (NASDAQ:MDB) Shares Sold by Principal Financial Group Inc.

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Principal Financial Group Inc. lessened its position in shares of MongoDB, Inc. (NASDAQ:MDB) by 77.9% in the second quarter, according to the company in its most recent Form 13F filing with the Securities and Exchange Commission. The firm owned 5,935 shares of the company’s stock after selling 20,918 shares during the period. Principal Financial Group Inc.’s holdings in MongoDB were worth $1,484,000 at the end of the most recent reporting period.

Other institutional investors and hedge funds also recently added to or reduced their stakes in the company. EMC Capital Management raised its stake in shares of MongoDB by 125.0% in the 2nd quarter. EMC Capital Management now owns 3,600 shares of the company’s stock valued at $953,000 after acquiring an additional 2,000 shares during the period. TrueMark Investments LLC purchased a new stake in shares of MongoDB in the second quarter worth about $1,768,000. Migdal Insurance & Financial Holdings Ltd. acquired a new stake in shares of MongoDB in the second quarter valued at about $10,498,000. Whittier Trust Co. of Nevada Inc. raised its holdings in shares of MongoDB by 4.3% during the 2nd quarter. Whittier Trust Co. of Nevada Inc. now owns 14,534 shares of the company’s stock valued at $3,633,000 after buying an additional 602 shares during the period. Finally, Whittier Trust Co. lifted its position in MongoDB by 4.9% during the 2nd quarter. Whittier Trust Co. now owns 29,764 shares of the company’s stock worth $7,440,000 after acquiring an additional 1,384 shares in the last quarter. 89.29% of the stock is currently owned by institutional investors and hedge funds.

Wall Street Analysts Forecast Growth

A number of brokerages recently commented on MDB. Citigroup boosted their target price on shares of MongoDB from $350.00 to $400.00 and gave the stock a “buy” rating in a research report on Tuesday, September 3rd. JMP Securities reiterated a “market outperform” rating and issued a $380.00 price target on shares of MongoDB in a report on Friday, August 30th. UBS Group boosted their target price on shares of MongoDB from $250.00 to $275.00 and gave the company a “neutral” rating in a research note on Friday, August 30th. Loop Capital decreased their price target on shares of MongoDB from $415.00 to $315.00 and set a “buy” rating for the company in a report on Friday, May 31st. Finally, Oppenheimer upped their target price on MongoDB from $300.00 to $350.00 and gave the company an “outperform” rating in a report on Friday, August 30th. One research analyst has rated the stock with a sell rating, five have assigned a hold rating and twenty have assigned a buy rating to the company’s stock. According to data from MarketBeat, MongoDB has an average rating of “Moderate Buy” and an average price target of $337.56.

Check Out Our Latest Research Report on MongoDB

Insider Transactions at MongoDB

In other MongoDB news, Director Hope F. Cochran sold 1,174 shares of MongoDB stock in a transaction that occurred on Monday, June 17th. The stock was sold at an average price of $224.38, for a total transaction of $263,422.12. Following the sale, the director now directly owns 13,011 shares of the company’s stock, valued at $2,919,408.18. The transaction was disclosed in a document filed with the Securities & Exchange Commission, which can be accessed through the SEC website. Also, CAO Thomas Bull sold 1,000 shares of MongoDB stock in a transaction on Monday, September 9th. The stock was sold at an average price of $282.89, for a total value of $282,890.00. Following the completion of the sale, the chief accounting officer now directly owns 16,222 shares of the company’s stock, valued at approximately $4,589,041.58. The transaction was disclosed in a legal filing with the Securities & Exchange Commission, which can be accessed through this link. Insiders sold a total of 33,179 shares of company stock worth $8,346,169 over the last three months. 3.60% of the stock is currently owned by insiders.

MongoDB Price Performance

Shares of MDB opened at $295.18 on Friday. The company has a debt-to-equity ratio of 0.84, a quick ratio of 5.03 and a current ratio of 5.03. The firm has a market cap of $21.80 billion, a PE ratio of -105.05 and a beta of 1.15. The firm’s 50 day moving average price is $255.38 and its 200 day moving average price is $305.33. MongoDB, Inc. has a twelve month low of $212.74 and a twelve month high of $509.62.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings data on Thursday, August 29th. The company reported $0.70 earnings per share for the quarter, beating analysts’ consensus estimates of $0.49 by $0.21. MongoDB had a negative return on equity of 15.06% and a negative net margin of 12.08%. The business had revenue of $478.11 million for the quarter, compared to the consensus estimate of $465.03 million. During the same period last year, the business earned ($0.63) EPS. The firm’s revenue was up 12.8% compared to the same quarter last year. As a group, research analysts expect that MongoDB, Inc. will post -2.46 EPS for the current fiscal year.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Stories

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)







CoreWCF Gets Azure Storage Queue Bindings

MMS Founder
MMS Edin Kapic

Article originally posted on InfoQ. Visit InfoQ

Microsoft released a CoreWCF service library and a WCF client library with bindings for Azure Storage Queues. The new bindings allow developers to use Azure Storage Queues for reliable and scalable messaging solutions. They also unlock a simple migration of legacy Microsoft MSMQ WCF solutions to an Azure cloud-based architecture.

The Azure Storage Queue is a cloud-based queue service, built on top of Azure Storage, that allows application components to send, store, and receive messages reliably and asynchronously.

It is worth clarifying that the Azure Storage Queue binding in CoreWCF, like the legacy MSMQ (Microsoft Message Queuing) binding, is a one-way binding, meaning the client gets no response from the service invoked via the binding. Consequently, only operations that return no data to the caller are allowed in the CoreWCF service interface. The MSMQ support in CoreWCF remains in the preview stage, and it has been in development for over a year.
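To make the one-way constraint concrete, here is a minimal sketch of what such a service contract could look like; the interface and operation names are illustrative and are not taken from the official sample.

```csharp
using System.ServiceModel;
using System.Threading.Tasks;

// Hypothetical contract for a one-way queue service. Because the Azure
// Storage Queue binding is one-way, operations must not return data to
// the caller: they are marked IsOneWay = true and return void or Task.
[ServiceContract]
public interface IService
{
    [OperationContract(IsOneWay = true)]
    Task SendDataAsync(int value);
}
```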

There are two packages for Azure Storage Queue support: one for the client scenario (sending messages to the queue) and one for the server scenario (reading messages from the queue and processing them). The Azure Storage Queue client library and the Azure Identity library are pulled in as dependencies of these packages. There is a sample showing the usage of the new bindings in the Azure SDK for .NET GitHub repository.

The bindings are released by the Azure SDK for .NET team, not by the CoreWCF team. The client library also ships as a package compatible with the Windows-only .NET Framework WCF stack. The stated goal of the packages is to help teams that currently use MSMQ migrate their existing WCF clients and services to .NET and Azure, replacing MSMQ with Azure Queue Storage.

To call an Azure Storage Queue using CoreWCF or WCF, developers must add the Microsoft.WCF.Azure.StorageQueues prerelease NuGet package to their .NET project. The first step is to authenticate the client that calls the Azure Storage Queue. The default mechanism is to leverage the DefaultAzureCredential provider by using the parameterless constructor of AzureQueueStorageBinding. The queue address is then passed to WCF’s regular ChannelFactory class to create a communication channel for the service that will be invoked.

// Create a binding instance that uses Azure Queue Storage.
// The default client credential type is Default, which uses DefaultAzureCredential.
var aqsBinding = new AzureQueueStorageBinding();

// Create a ChannelFactory using the binding and endpoint address, open it, and create a channel.
string queueEndpointString = "https://MYSTORAGEACCOUNT.queue.core.windows.net/QUEUENAME";
var factory = new ChannelFactory<IService>(aqsBinding, new EndpointAddress(queueEndpointString));
factory.Open();
IService channel = factory.CreateChannel();
// The contract itself does not expose Open, so cast to IClientChannel to open the channel.
((IClientChannel)channel).Open();

// Invoke the service.
await channel.SendDataAsync(42);

To create a CoreWCF service that consumes messages from an Azure Storage queue, developers have to use the Microsoft.CoreWCF.Azure.StorageQueues prerelease NuGet package. Creating the service binding starts with adding the queue transport to the CoreWCF services collection in the service configuration step. In the app builder configuration step, the binding itself is constructed. The service is then connected to the queue using the AddServiceEndpoint method, passing the instantiated binding and the queue address.

public class Startup
{
  public void ConfigureServices(IServiceCollection services)
  {
    services.AddServiceModelServices();
    services.AddQueueTransport();
  }

  public void Configure(IApplicationBuilder app, IHostingEnvironment env)
  {
    app.UseServiceModel(serviceBuilder =>
      {
        serviceBuilder.AddService<Service>();
        var aqsBinding = new AzureQueueStorageBinding();
        string queueEndpointString = "https://MYSTORAGEACCOUNT.queue.core.windows.net/QUEUENAME";
        serviceBuilder.AddServiceEndpoint<Service, IService>(aqsBinding, queueEndpointString);
      });
  }
}

Although these examples use the default Azure authentication, the binding library also allows the use of a storage shared key credential, a SAS (shared access signature), an OAuth token, or an Azure Storage connection string. The Security.Transport.ClientCredentialType property of the AzureQueueStorageBinding class specifies these settings.
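As a sketch of how a non-default credential type might be selected on the binding (the enum name and value below are assumptions based on the package description, so check the official sample for the exact API):

```csharp
// Illustrative only: the enum name AzureClientCredentialType and its
// ConnectionString value are assumptions, not verified against the package.
var aqsBinding = new AzureQueueStorageBinding();
aqsBinding.Security.Transport.ClientCredentialType =
    AzureClientCredentialType.ConnectionString;
```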

The CoreWCF project was officially released in April 2022, although work on it had already begun in 2019. It aims to provide a subset of the most frequently used functionality of WCF services on the modern .NET platform. It is .NET Standard 2.0 compatible, allowing services to be migrated in place on .NET Framework 4.6.2 or above. It covers the HTTP and TCP transport protocols with the mainstream WCF bindings, as well as other messaging transports such as Kafka and RabbitMQ. The current version is 1.6.0.

The Azure Storage Queue binding support for CoreWCF comes almost a year after the AWS team released a similar package supporting AWS SQS (Simple Queue Service) queues.




IIT Ropar, Excelsoft Collaborate on AI-Focused EdTech Lab – Elets Digital Learning

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

IIT Ropar

IIT Ropar has joined forces with Excelsoft Technologies to launch the ‘Professor Dhananjaya Lab for Education Design and Creative Learning’. Named in honour of renowned educationist MH Dhananjaya, this lab is designed to drive innovation in the EdTech space by merging AI-driven solutions with academic and industry expertise.

Led by Sudarshan Iyengar, head of IIT Ropar’s Computer Science and Engineering (CSE) department, the lab will focus on the development of artificial intelligence models to improve learning and assessment tools. The collaboration also seeks to create scalable, robust solutions aimed at transforming educational methodologies and enhancing the student learning experience.

This initiative represents a significant step toward positioning India as a global leader in EdTech innovation. Teams from both Excelsoft Technologies and IIT Ropar—including PhD scholars and MTech and BTech students—will work together to explore cutting-edge technologies and develop next-generation learning tools.

Also Read: IIM Raipur MDPs: Essential Skills for Today’s Dynamic Business World.

By combining academic research with real-world applications, this AI-powered lab will play a pivotal role in shaping the future of education and contributing to the rapidly growing EdTech ecosystem.





MongoDB, Inc. (NASDAQ:MDB) Shares Bought by Wedbush Securities Inc. – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Wedbush Securities Inc. grew its position in shares of MongoDB, Inc. (NASDAQ:MDB) by 20.7% during the second quarter, according to its most recent Form 13F filing with the Securities & Exchange Commission. The institutional investor owned 2,739 shares of the company’s stock after purchasing an additional 470 shares during the period. Wedbush Securities Inc.’s holdings in MongoDB were worth $685,000 as of its most recent SEC filing.

A number of other large investors also recently added to or reduced their stakes in the company. Quadrant Capital Group LLC lifted its stake in MongoDB by 5.6% during the fourth quarter. Quadrant Capital Group LLC now owns 412 shares of the company’s stock worth $168,000 after purchasing an additional 22 shares in the last quarter. EverSource Wealth Advisors LLC lifted its stake in MongoDB by 12.4% during the fourth quarter. EverSource Wealth Advisors LLC now owns 226 shares of the company’s stock worth $92,000 after purchasing an additional 25 shares in the last quarter. Raleigh Capital Management Inc. lifted its stake in MongoDB by 24.7% during the fourth quarter. Raleigh Capital Management Inc. now owns 182 shares of the company’s stock worth $74,000 after purchasing an additional 36 shares in the last quarter. Advisors Asset Management Inc. lifted its stake in MongoDB by 12.9% during the first quarter. Advisors Asset Management Inc. now owns 324 shares of the company’s stock worth $116,000 after purchasing an additional 37 shares in the last quarter. Finally, Atria Investments Inc raised its stake in shares of MongoDB by 1.2% in the first quarter. Atria Investments Inc now owns 3,259 shares of the company’s stock valued at $1,169,000 after acquiring an additional 39 shares in the last quarter. 89.29% of the stock is currently owned by institutional investors and hedge funds.

MongoDB Stock Performance

Shares of MDB stock opened at $296.69 on Thursday. The stock’s 50-day moving average is $254.78 and its two-hundred day moving average is $305.40. The firm has a market cap of $21.92 billion, a price-to-earnings ratio of -105.58 and a beta of 1.15. MongoDB, Inc. has a 1 year low of $212.74 and a 1 year high of $509.62. The company has a debt-to-equity ratio of 0.84, a quick ratio of 5.03 and a current ratio of 5.03.

MongoDB (NASDAQ:MDB) last announced its earnings results on Thursday, August 29th. The company reported $0.70 earnings per share for the quarter, topping the consensus estimate of $0.49 by $0.21. MongoDB had a negative net margin of 12.08% and a negative return on equity of 15.06%. The business had revenue of $478.11 million during the quarter, compared to the consensus estimate of $465.03 million. During the same quarter in the prior year, the firm earned ($0.63) earnings per share. The business’s quarterly revenue was up 12.8% compared to the same quarter last year. Analysts predict that MongoDB, Inc. will post -2.46 earnings per share for the current year.

Wall Street Analysts Forecast Growth

Several analysts have recently issued reports on MDB shares. Sanford C. Bernstein increased their price objective on MongoDB from $358.00 to $360.00 and gave the company an “outperform” rating in a research note on Friday, August 30th. UBS Group increased their price objective on MongoDB from $250.00 to $275.00 and gave the company a “neutral” rating in a research note on Friday, August 30th. DA Davidson increased their price objective on MongoDB from $265.00 to $330.00 and gave the company a “buy” rating in a research note on Friday, August 30th. Loop Capital reduced their price objective on MongoDB from $415.00 to $315.00 and set a “buy” rating on the stock in a research note on Friday, May 31st. Finally, Robert W. Baird reduced their price objective on MongoDB from $450.00 to $305.00 and set an “outperform” rating on the stock in a research note on Friday, May 31st. One analyst has rated the stock with a sell rating, five have given a hold rating and twenty have assigned a buy rating to the company. Based on data from MarketBeat, the company has a consensus rating of “Moderate Buy” and a consensus price target of $337.56.

Check Out Our Latest Stock Analysis on MDB

Insider Transactions at MongoDB

In other news, Director Hope F. Cochran sold 1,174 shares of the stock in a transaction dated Monday, June 17th. The shares were sold at an average price of $224.38, for a total transaction of $263,422.12. Following the completion of the sale, the director now owns 13,011 shares in the company, valued at $2,919,408.18. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which is available through this link. Also, CAO Thomas Bull sold 138 shares of the firm’s stock in a transaction dated Tuesday, July 2nd. The shares were sold at an average price of $265.29, for a total transaction of $36,610.02. Following the completion of the sale, the chief accounting officer now owns 17,222 shares of the company’s stock, valued at approximately $4,568,824.38. The disclosure for this sale can be found here. Over the last quarter, insiders sold 33,179 shares of company stock worth $8,346,169. Company insiders own 3.60% of the company’s stock.

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Stories

Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)







Presentation: Building SaaS From Scratch Using Cloud-Native Patterns: A Deep Dive Into a Cloud Startup

MMS Founder
MMS Joni Collinge

Article originally posted on InfoQ. Visit InfoQ

Transcript

Collinge: We’re going to be talking about building SaaS from scratch using cloud native patterns, a deep dive into a cloud startup. We’re going to start our session with a bit of a backstory. Our story begins back in late 2018 with Mark Fussell and Yaron Schneider. At this time, they’re working at Microsoft in a research and development team, and they’re incubating projects, one of which became the KEDA project, the Kubernetes Event-Driven Autoscaler. They’re thinking about the problem of how to make enterprise developers more productive when it comes to writing distributed and cloud native applications. They come up with an idea that they start to ideate on. It gets a bit of traction. They start to prototype it, and eventually Microsoft publicly made it open source as the Dapr project, which stands for the Distributed Application Runtime. Who’s heard of the Dapr project? Is anyone actually running Dapr in production today? The Dapr project really started with the ambition of codifying best practices when it comes to writing distributed applications, so things like resiliency, common abstraction patterns, security, observability. Can we bake all of this into an easy way for developers to consume, that is cloud agnostic, language agnostic, and framework agnostic?

This is where I join the story. My name is Joni Collinge. At the time, I was also an engineer working at Microsoft, and I had spent the last eight years seeing enterprise developers solving the same problem time and again on the Azure platform. The value proposition of Dapr really resonated with me. I became an open source maintainer of the project, and started contributing to it almost immediately. The project continued to develop and mature, and eventually Microsoft decided to move to an open governance model, where they donated it to the CNCF, and it became an incubating project. This is where we really start to see an uptick in enterprises adopting the project. Mark and Yaron decided that they were going to double down on this mission of empowering developers to build distributed systems and actually create a new company called Diagrid. For some reason, unbeknown to me, they decided to ask me to become a founding engineer at Diagrid to help build out the cloud services to deliver on this mission. After a little bit of convincing, I did just that and joined them at Diagrid. We had a vision to build two services off the bat. The first was called Conductor: remotely managing Dapr installations in users’ Kubernetes clusters. Our second ambition was building the Catalyst service, which would be fully serverless Dapr APIs, turbocharged by providing infrastructure implementations and a bunch of value-added features that you wouldn’t get through the open source project. That was the vision.

We’re going to start today’s presentation right there, and we’re going to look inside the box. I think a lot of the time we talk about clouds as just this black box, and we don’t really understand what’s going on inside. A lot of the big cloud providers treat this as their secret sauce, when really it’s just common patterns applied across each of the clouds, and the real secret sauce is the services that you’re exposing. Hopefully, this talk is insightful for anyone who’s going on this journey, and maybe it will encourage others to share how they’ve approached the same problems.

Why Do I Need a Cloud Platform to Do SaaS?

As a SaaS provider, why do you even care about a cloud platform? The cloud platform is all of the plumbing that actually delivers your service to your end users, and gives them a unified experience to adopt services across your portfolio. It includes things like self-service, multi-tenancy, scalability, and so on. There are many more things that I haven’t been able to list here. We’re just going to focus on the top five: self-service, multi-tenancy, scalability, extensibility, and reliability. There might be some of you that are thinking, is this going to be platform engineering? Am I veering into a platform engineering talk? Although a lot of the problem space that we’re going to talk about and some of the technologies are common to both platform engineering and cloud engineering, I do want to make a distinction that it’s the end user of the cloud that is different. Internally, when you’re thinking about a platform engineering team, they are delivering a platform for your internal developers to build services, whereas the cloud platform I’m talking about is delivering a cloud to your end users so that they can adopt your services with a unified experience.

Most of you will probably think GCP, AWS, or Azure, or one of the other cloud providers that are out there when you think of a cloud platform. At Diagrid, our mission was to build a higher-level cloud that would serve the needs of developers rather than just being an infrastructure provider or service provider. We were going after these high-level patterns and abstractions and trying to deliver those directly to application developers so that they could adopt them from their code directly, rather than infrastructure teams provisioning Kafka or something like that. Obviously, as a startup, we’ve only got limited capacity. We’re going to be bootstrapping ourselves on top of existing cloud infrastructure. When you are adopting cloud infrastructure, obviously each of those are going to be providing different sets of APIs and abstractions to adopt. Some of those are common, things like Infrastructure as a Service, virtual machines. Even some platforms such as Kubernetes allow you to natively use those across the cloud. Then obviously you’ve got richer and higher-level proprietary services, things like Functions as a Service and container runtimes, things like that, which are bespoke to every cloud. At Diagrid, our strategy was to be cloud agnostic and portable. That might not be your strategy. If your strategy is to go all in on one particular cloud, then you can make slightly different tradeoffs. We decided to be cloud agnostic, and this meant that our abstractions were: we went for Kubernetes as our compute abstraction, we went for MySQL as our database abstraction, and we went for Redis as our caching and stream abstraction.

Just to think a little bit about the user journey of using a cloud, and I’m sure everyone has done this: you’ve sat at your laptop, and you’ve got an SSH or an RDP session to some cloud VM running somewhere. What can we infer about that virtual machine? We know it’s running in some hypervisor, in some server, in some rack, in some data center, in something that a cloud calls a region. We’ve got this concept of regions that clouds are telling us about. How did that virtual machine get there? Presumably, you went to some centralized service that was hosted by that cloud provider, either via a web page or via some form of CLI or potentially some SDK, and you asked that cloud provider to provision that virtual machine in that region. You would have had a choice of regions when you made that request, so you could have provisioned into various regions around the world that the cloud provider supports. Obviously, we can infer something about the cardinality here: there is some global service that is able to provision resources into these regions on demand. Some terminology I want to set at this point is that you can think about this centralized service offered by the cloud provider as a cloud control plane, and then we think about regional data planes which are configured by that cloud control plane. How does this look at Diagrid Cloud? For the Catalyst service that I talked about earlier, which is this serverless API service, it looks pretty much exactly like that model. We have a centralized control plane where we can provision infrastructure into regional data planes, which users can then consume from. For Conductor, where we’re actually managing infrastructure in users’ environments, the user is responsible for provisioning. We allow them to come and create something called a cluster connection. They can configure how they want that Dapr installation to look. At the end of the day, it’s running in their Kubernetes cluster, so they are the ones that have to install it. We effectively give them an artifact to install, and from that point on, it connects back to our control plane and can then be remotely managed. There are two slightly different use cases there that we have to support within Diagrid Cloud.

The big picture, we can think about admins managing cloud resources through some centralized control plane, which is in turn configuring data planes at the regional level to expose services for users to consume. As I said earlier, our compute platform was Kubernetes. This does mean, ultimately, that we’re going to have one or more Kubernetes clusters as part of our control plane, and then many data planes provisioned within regions spread across those. Just to touch a little bit on the multi-cloud story, because many people will say, I don’t care about multi-cloud. At the control plane, I think you’ve got more flexibility about making that choice, things like egress costs and stuff you might need to consider, but that’s becoming a bit of a non-issue, given some of the legislation changes. At the data plane, you might actually have customers, if you are working for enterprises, who are going to come to you and they’re going to say, I have a regulatory or a compliance reason that I need to only store data in this cloud provider and only in this region. If you’ve gone all in on one particular cloud provider, and they’re not portable and can’t even pretend to potentially move to an on-premise model, at the data plane, you might not be able to service those customers. Just something to consider is to keep your data plane as portable as possible. You might disagree with that, but that’s just one of my pieces of advice.

The Control Plane

We’re going to click into this control plane. How can we think about actually exposing this? Most clouds effectively have the same infrastructure to support this. Really, that front door is some form of API gateway that’s going to be dressed up in many forms, but that API gateway basically has a bunch of centralized functionality that you don’t want in your control plane services, or that you don’t want to repeat in your control plane services. It does things like authentication through some IDP. It does authorization. It does audit. Then it does some form of routing to control planes and control plane services. This is effectively a solved problem. I’m not going to spend too much time here, but API gateways, there’s many vendors. Take your pick. Then, what is that API gateway actually routing to? Sometimes we think about the control plane as just a big black box again. Is it just one monolithic service that’s servicing all of our different user experiences? I’ll break that down in a couple of slides. As you scale your control plane, you’re taking on more users, more resources, and you’re having to store more tenants. You might start to think about a cellular architecture. It’s basically partitioning your cloud. You’ll partition your control plane, and then bucket different tenants into different instances of that control plane. Those cells then map onto regions. You need to map onto regions given demand. You’re not mapping onto regions for scale. You only actually move between regions possibly for some form of availability, but that’s handled at the AZ level. Mainly, it’s for data sovereignty reasons, or to serve particular customers with low latency. Really, you generally bucket those cells and map them onto regions depending on your users.

What services do we actually have inside that control plane? I’ve just taken a couple of screenshots. These are just a couple of screenshots from our Catalyst product. I think the experiences that they’re exposing are fairly common to most cloud providers. We have the concept of configuring resources, and we’ll get into that. We have some visualizations that the cloud is providing to us. We have API logs and other telemetry, and we have graphs as well from the telemetry. There’s lots of other types of data that you’ll be interfacing with as a cloud provider, but these are just some common core functions that I think you need to think about as a cloud provider. We can think about breaking that down into types of services. I’m not saying these are the services like you need to go and write a resource service. I’m just saying these are the types of services you need to think about how you handle that data. We think about resources. Think about views, which is those visualizations that’s read only data that is built within the system to expose to your users. Then you have telemetry, which usually includes things like logs, metrics, and sometimes traces as well. There’s a bunch of other stuff that you also need to support. We’ll focus on resources and views for this session.

Resources API

How should we design our control plane resources API? There is some prior art in this space. GCP, AWS, and Azure all have public APIs, and they’re all working quite well. You can have a look at their documentation. You can understand how they work. Thankfully for us, GCP has a design document about how they went about designing their cloud APIs. It really boils down to these three very simple steps. We’re going to use declarative resources so that the consumer doesn’t care how that cloud provider actually works. Those resources can be modeled in a hierarchy, which tells us that there are some relationships between those resources, and that can be nesting. Those resources can either be singular or they can be in a collection, like a list. Then we’ve got these standard methods which can be performed on every resource. We’ve got list, get, create, update, and delete. Anyone who’s thinking, this sounds an awful lot like RESTful API principles, is absolutely bang on. This is just a REST API. All they’re saying is, you need to build a REST API over whatever domain objects your cloud wants to expose. One thing they don’t really tell us, and where they’re all taking a slightly different approach, is: how should you actually shape those resources? What does a payload look like that you’re interfacing with? What does that mean for your system?

Is there something from the cloud native space that we can look to for more inspiration here, something that gives us a more fully featured API design? This is where we introduce the Kubernetes resource model. The Kubernetes resource model is effectively only the API part of Kubernetes, and it’s designed in isolation from the rest of the system. It does have ramifications on the rest of the system, but it is its own design proposal. If you actually read the design proposal, it says that it is analogous to a cloud provider’s declarative resource management system. They’ve designed it from the ground up as a cloud resource management system. How do they expose their resources? As many of you probably know, Kubernetes uses this declarative YAML format where it has some common properties, such as an API version, a kind which is effectively an API type, some metadata, and then a spec and a status. By having this common shape for all of its resources, the Kubernetes API server has a bunch of API machinery which only operates on generic objects. It has a bunch of code that it doesn’t need to rewrite for every single type in the system; it just handles generic objects. It doesn’t care about the specialization of that particular object. The specialization comes through the fields of the spec and the status. The spec is there to define the desired state of the object. The status is the feedback mechanism for the system to report back a summary of the last observed state of this resource. Even by looking at the declarative resource, we can start to infer the type of system that we’re going to have to build to perform what we call reconciliation against this resource. A resource like this is mapped onto an HTTP path like that, which is fairly intuitive.
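The common envelope described above can be sketched in a few Go types. This is a hypothetical illustration (the type names are mine, not Kubernetes source): spec and status stay opaque, which is exactly what lets generic API machinery handle every kind with one code path.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Metadata carries the identifying fields every resource shares.
type Metadata struct {
	Name      string `json:"name"`
	Namespace string `json:"namespace,omitempty"`
}

// Resource is the "generic object" the API machinery operates on: spec and
// status are kept as raw JSON, so routing, storage, and listing never need
// to know the specialization of a particular kind.
type Resource struct {
	APIVersion string          `json:"apiVersion"`
	Kind       string          `json:"kind"`
	Metadata   Metadata        `json:"metadata"`
	Spec       json.RawMessage `json:"spec,omitempty"`
	Status     json.RawMessage `json:"status,omitempty"`
}

// parseResource decodes any resource into the generic envelope.
func parseResource(raw []byte) (Resource, error) {
	var r Resource
	err := json.Unmarshal(raw, &r)
	return r, err
}

func main() {
	raw := []byte(`{
	  "apiVersion": "v1",
	  "kind": "Pod",
	  "metadata": {"name": "explorer"},
	  "spec": {"containers": [{"name": "explorer", "image": "example/explorer"}]}
	}`)
	r, err := parseResource(raw)
	if err != nil {
		panic(err)
	}
	// Generic machinery routes on kind and metadata without decoding the spec.
	fmt.Println(r.Kind, r.Metadata.Name) // prints "Pod explorer"
}
```

Only a type-specific validator or controller ever decodes `Spec` into a concrete struct; everything between the HTTP handler and the database works on the envelope.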

To look at a concrete example, this is a pod definition. The pod definition is clearly just saying that I want a container that is called explorer, and it uses this image. That API shape is defined by Kubernetes. This is one of their API types they’ve hardcoded into the API server. You can operate on these types of resources at these URL paths, and you can use these HTTP verbs. You can use GET, PUT, POST, PATCH, DELETE. It’s all fairly straightforward, fairly intuitive, and that’s what we want from our resource API. Why can’t we use the same exact approach for our own resource types? Why does it have to be a pod? Why can’t it be our own type, whatever we want to expose in our cloud? Why can’t we just use the same path structure and the same methods? There’s nothing stopping us. Just to touch on a slight tangent here, is that if you’re familiar with Azure, then you might be familiar with what are called ARM templates. If you’re familiar with AWS, you might be familiar with CloudFormation. These are a way of you basically composing resources and sending an entire unit towards the cloud, and the cloud then goes through that, parses it, and provisions all of the resources and manages the dependencies. As a cloud provider, do you think that you need something similar? If you look at KRM, it explicitly says that they don’t do that. They don’t bake in templating, but what they do do is something called resource composition, which means that you can implicitly define a higher-level resource which will ultimately break down into multiple lower-level resources. Or you could take the Crossplane approach, which is to have a resource type which explicitly defines those resources. It says, these are all the different resources, these are the dependencies. Then it’s up to the control loop, whatever that logic is, to parse that and process it. Or another alternative is to do something like Terraform or OpenTofu these days, and that is that you just defer this to the client. 
Terraform does not run on top of ARM templates or CloudFormation APIs. It runs on cloud primitive APIs, and it manages a dependency graph, and it manages the state, so you can always offload this to the client, and that might be a better experience than what you actually build natively in your cloud.
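Resource composition, as opposed to templating, can be sketched very simply: a high-level resource implies a set of lower-level ones, and the control loop does the expansion. The `App` kind and the resources it expands into are entirely made up for illustration.

```go
package main

import "fmt"

// App is a hypothetical high-level resource a user would create directly.
type App struct {
	Name string
}

// compose returns the lower-level resources an App implicitly breaks down
// into. A real control loop would create these through the resources API,
// track dependencies between them, and roll their readiness up into the
// App's status.
func compose(a App) []string {
	return []string{
		"Deployment/" + a.Name,
		"Service/" + a.Name,
		"Certificate/" + a.Name,
	}
}

func main() {
	for _, r := range compose(App{Name: "checkout"}) {
		fmt.Println(r)
	}
}
```

This is the server-side alternative to ARM templates or CloudFormation: the user sends one declarative resource, and the decomposition logic lives in the controller rather than in a template language.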

Just to summarize what I’ve covered so far. A cloud has a brain called a control plane, which configures many data planes. Authentication, authorization, audit, and routing can be provided via an API gateway. Cloud resources should be exposed via a REST like API. Kubernetes actually gives us a blueprint for how to build that API. High-level resources can compose low-level resources, which could avoid you doing things like templating.

Resources API Server

How do we actually expose that resource API? Many of you might be thinking, you’re running your control plane on Kubernetes, so you’ve got a Kubernetes API, why don’t we just expose that to the customers? Why don’t we just let users create objects in that Kubernetes API? I’m going to suggest that this is not a good idea, and that’s primarily because the Kubernetes API is not multi-tenant, so you’re effectively going to be competing as a service provider with your own users. Your users are going to be creating objects in that API server. You’re going to be creating objects in that API server. Kubernetes can’t differentiate between you as a service provider and your users, and therefore will throttle you both accordingly. What we do want to do is find another way of exposing a Kubernetes like API server. I’ve changed the terminology here to Kubernetes like, because I want you to think about this in the abstract sense. You want to think about the founding principles of what Kubernetes is exposing, the behaviors and the concepts that it’s using, and see if there are other ways that we can potentially get that, which may or may not involve running Kubernetes. I just don’t want us to box ourselves into thinking that Kubernetes is the only solution. A couple of options you might be thinking about here are: just run Kubernetes on Kubernetes, and manage your own etcd server. I’m sure some people are doing it. It comes with overhead. You might even use something like the Cluster API to provision managed clusters somewhere else that you’re going to use for your customers, or you might use technologies like vCluster or Capsule to try and build a multi-tenant model on top of the existing Kubernetes API server. I’m sure, again, you can build a system like this, where you’re provisioning independent API servers for your tenants and storing their resources, isolated, inside that API server.
There are a few projects to call out that are specifically built to try and solve this problem. One of them is KCP. KCP, I’m pretty sure, came out of Red Hat, maybe like 5 years ago, 6 years ago. It was a bit of an experiment. What they were trying to do is repurpose Kubernetes to literally build control planes. There’s lots of really good ideas and lots of good experiments that have gone into that project. Maybe going back two-and-a-half years ago, when we were building this cloud, the future of the project was a little uncertain, and it was basically just a bunch of promises and some prototypes. It’s definitely worth checking out if you are interested in this space. Basically, it has this concept of workspaces, which allows you to divvy up your API server and use it as a multi-tenant API server, which gives you stronger isolation than just namespaces, which is what you would get out of Kubernetes natively. Another technology you might have come across is Crossplane. This gives you rich API modeling abstractions, and it also gives you these providers that can spin up cloud infrastructure and various other systems. The problem with Crossplane is it needs somewhere to store those resources. You can’t just merely install Crossplane and then it runs. You need an API server in order to drive Crossplane. You have this bootstrapping problem where you still need to solve the API server problem. There are companies like Upbound who provide this as a managed API server. If you are interested in going down that road, check that out. Finally, there’s always the custom option, where we just learn from what these systems show us and try and build our own system.

I think in order to really make the decision of which way we want to go here, we really need to understand what those founding principles are. I’m just going to unpack the Kubernetes API server quickly, just so that we understand exactly what we’re going after in terms of the behavior we want to replicate. The Kubernetes like API server, as I’ve mentioned, is just a REST API, so I start up a HTTP server and start registering routes. How do those API types get into the API server? They can either be hardcoded or they can be registered through some dynamic mechanism. Then once a request comes in, you’re just going to perform the usual boilerplate stuff that you do in a REST API. You’re going to do some validation against some schema. You’re going to do defaulting and transformations. Ultimately, what you want to do is you want to store that resource somewhere. The reason you want to store that resource is you want to move from the synchronous world of the request to the asynchronous world of the processing. I’ve built systems. I’ve worked with people on systems that basically store this in all sorts of different types of storage. It could be a database, it could be a queue, could be a file system. I’ve even seen people modifying Git repositories. Basically, depends on the context of what you’re trying to solve. As a general-purpose control plane, I say the best choice here is to store it in a database. That’s what they’re good at. What you want from that database is you want to be able to enforce optimistic concurrency controls. What I mean by optimistic concurrency controls is that you can effectively get a global order of operations on any resource that’s stored in that database. The way you do that is through a sequence number. 
Every time you want to mutate a resource, let’s say you’ve got multiple requests that are going to be concurrently accessing a resource, and they all want to perform an update. If you took a system that does something like last write wins, you’re going to lose data, because they’re all just going to start writing over each other. You need to enforce an order of operations to avoid data loss. With optimistic concurrency controls, when you first read the resource, you will get a sequence number with it. Let’s say the sequence number is 3. You then perform your update, and then you write it back to the database. On that write, if that sequence number has changed to a value that you are not expecting, the database will reject that write, and you will now have to reread that resource, reapply the update, and write back to the database. This is really useful for these systems. Once the data is stored in the database engine, we then want to asynchronously, through some eventing mechanism, trigger some controllers to perform the reconciliation.
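The read-modify-write loop described above can be shown with a tiny in-memory store standing in for the database; all names here are illustrative. A write only succeeds if the caller's expected sequence number still matches, so concurrent updaters retry instead of overwriting each other.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var ErrConflict = errors.New("sequence number conflict")

type row struct {
	seq  int64
	data string
}

// store is an in-memory stand-in for a database with optimistic concurrency.
type store struct {
	mu   sync.Mutex
	rows map[string]row
}

// read returns the value and its current sequence number.
func (s *store) read(key string) (string, int64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	r := s.rows[key]
	return r.data, r.seq
}

// write succeeds only if the caller's expected sequence number still matches;
// otherwise the write is rejected and the caller must reread and retry.
func (s *store) write(key, data string, expectedSeq int64) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.rows[key].seq != expectedSeq {
		return ErrConflict
	}
	s.rows[key] = row{seq: expectedSeq + 1, data: data}
	return nil
}

// update is the read-modify-write loop: on conflict, reread and reapply.
func (s *store) update(key string, f func(string) string) {
	for {
		data, seq := s.read(key)
		if s.write(key, f(data), seq) == nil {
			return
		}
	}
}

func main() {
	s := &store{rows: map[string]row{"res": {seq: 3}}}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			s.update("res", func(d string) string { return d + "x" })
		}()
	}
	wg.Wait()
	data, seq := s.read("res")
	fmt.Println(len(data), seq) // prints "10 13": all 10 concurrent updates survive
}
```

With last-write-wins, some of those ten appends would be lost; with the sequence-number check, every update lands exactly once, in some total order.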

This is a really interesting point that we’ve been talking about Kubernetes and all of the patterns from Kubernetes, but you could build this system on AWS using serverless. You could use lambda as the API server. You could store your data in DynamoDB. You could use EventBridge to trigger your controllers, and those controllers could be lambdas. You use the context of your problem space and the decisions you’re making about what platforms you want to run on and what abstractions you want, to actually build the system, but just look at the founding principles that we’re trying to build the system on top of, and the behaviors that we’re going after. We sometimes refer to this as choreography, because it’s event based. That means that there’s clearly going to be the alternative, which we can talk about as orchestration. This might be that you basically predefine all your reconciliation logic, and you bundle it into some workflow engine, and the request comes in, and then you effectively offload that to the workflow engine to durably execute, and you expect the workflow engine to handle transient failures, do things like compensation during errors, and all the rest of it. Some technologies you might want to think of, if you’re going down this road, is something like Temporal or even Dapr workflows. My personal preference is to go with the database approach first, so write the resource to the database. The reason for that is you can then read it. Rather than going off and having some asynchronous workflow run, you have a resource that’s stored in the database that represents the latest version of that resource that you can quickly serve to your clients immediately. Then you have the eventing mechanism that triggers your controllers, and that eventing mechanism decouples the controllers from the resource, which means future use cases, as you bring them online, don’t have to reinvent everything. 
They can just simply subscribe to that eventing mechanism and start writing the logic. If those controllers themselves need to use some durable workflow to execute their logic, then go and do it, so be it. You can use both choreography and orchestration together to get the best of both worlds.

How does this actually work in Kubernetes? You’ve got the Kubernetes API server. It has some hardcoded types, things like pods, config maps, secrets, all that gubbins. It also supports custom API types via CRDs, or custom resource definitions, and then it writes to its database, which is etcd. It uses optimistic concurrency control, and it uses a sequence number that’s called resourceVersion. We’ve talked about all of that, and that makes sense. Now we’ve stored our resource in etcd, and it has this concept of namespaces, which allows you to isolate names of resources, because that’s all a namespace is. There’s no more isolation beyond literally just separating names with a prefix. Then, it has the concept of a watch cache. For every type of API that you bring to the API server, every CRD, you are going to get a watch cache that’s going to watch the keys in etcd. etcd has got this nice feature that does this natively. The API server is going to build these in-memory caches of all of your resources in order to efficiently serve clients. Some of those clients are going to be controllers, and controllers, you can build them a million different ways, using things like controller-runtime or just client-go, or whatever. They all typically follow the same pattern of having this ListWatch interface. What that means is that when the controller comes online, it initially does a list. It says, give me all of the resources of this kind. Then, from that point on, it just watches for new resources. Then, periodically, it will do a list to see if it missed anything from those watch events. That basically is the whole engine that’s driving these controllers, running the reconciliation. As we know, Kubernetes was not invented to support CRDs off the bat. What it was invented for was scheduling workloads onto nodes. You have the scheduler, and you also have all of these workload types that you might not actually need in your system, but you have because you’re using Kubernetes.
You might want to consider that baggage for the use case that we’re talking about.
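The ListWatch contract described above is small enough to sketch directly. This is a hypothetical distillation, not client-go's actual interface: seed local state with a full list, then fold watch events into it (a real controller would also relist periodically).

```go
package main

import "fmt"

// Event is a change notification for one resource of a kind.
type Event struct {
	Type string // "ADDED", "MODIFIED", or "DELETED"
	Name string
}

// ListWatcher is the minimal contract: list everything once, then watch.
type ListWatcher interface {
	List() []string      // names of all current resources of this kind
	Watch() <-chan Event // subsequent changes
}

// run seeds local state from List, then folds Watch events into it,
// invoking reconcile for every observed change.
func run(lw ListWatcher, reconcile func(name string)) map[string]bool {
	known := map[string]bool{}
	for _, name := range lw.List() {
		known[name] = true
		reconcile(name)
	}
	for ev := range lw.Watch() {
		switch ev.Type {
		case "DELETED":
			delete(known, ev.Name)
		default:
			known[ev.Name] = true
		}
		reconcile(ev.Name)
	}
	return known
}

// fakeLW simulates a source with two existing resources and a short
// stream of watch events.
type fakeLW struct{ events chan Event }

func (f *fakeLW) List() []string      { return []string{"a", "b"} }
func (f *fakeLW) Watch() <-chan Event { return f.events }

func main() {
	f := &fakeLW{events: make(chan Event, 2)}
	f.events <- Event{Type: "ADDED", Name: "c"}
	f.events <- Event{Type: "DELETED", Name: "a"}
	close(f.events)
	known := run(f, func(name string) { fmt.Println("reconcile", name) })
	fmt.Println(len(known)) // prints "2": b and c remain
}
```

The periodic relist that real controllers do is the safety net for missed watch events; it is omitted here to keep the loop readable.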

What did we do at Diagrid? People say, don’t build your own databases. They probably say, don’t build your own API servers either, but we did. Basically, we tried to take the simplest approach, which was that we did things like statically build in all of our API types into our API server. We used effectively the same API machinery as Kubernetes in order to handle our resources which were ultimately written to our database. Rather than using etcd, which is horrible to run, and no cloud provider offers a managed version, we just write it directly to our managed SQL database, and then we set up a watch. Rather than the watch cache building an in-memory buffer of all of these resources, we externalize the state to a Redis cache, and we also push onto a stream to trigger the controllers. This is like a change data feed that will drive the controllers. Notice where those controllers are. Those controllers are actually inside the API server, which means we install our API server, we get all of our types, and all of our control logic inside that single monolithic API server, which we can then scale horizontally, because all of our state is externalized. Then we also added support for remote controllers as well, which run outside of the API server, and they use the ListWatch semantics that we saw in Kubernetes as well. Just one thing to call out there is that you can efficiently scale your database by vertically partitioning by kind. Because in the Kubernetes world, you only ever access your resources by kind. You list pods. You list deployments. You don’t necessarily or very often go across resource types, so you can partition that way to efficiently scale.

Let’s dive a little deeper into the API server to look at how it actually works internally. We’ve got all the REST gubbins that you would expect, and that’s the Kubernetes like API machinery, but that then interfaces with something we call resource storage. At the resource storage layer, we are using that generic object. All of the specialization of all the types and everything is basically lost at this point. We’ve done all the validation. We’ve done all the templating and all that stuff. We’re now just working with generic objects. That resource storage is an abstraction over the top of a transactional outbox pattern. When we write to our resources table, we are transactionally writing to an event log table at the same time, and that allows us to set up a watcher that is subscribed to that event log. When it detects that there’s a change, or an offset change, it will grab the relevant resource, update the cache, and then push an event onto the stream to signal the controllers. It does all of that using peek-lock semantics, so that it won’t acknowledge the offset change until it has grabbed the resource, updated the cache, and pushed to the stream. What we’re getting from the stream is what we call level-based semantics, and this is the same as Kubernetes. What this means is, because we have ordered our changes at the resource in the database layer, we don’t have to operate on every single event, because we know the last event is already applied on top of every other resource change that has come before it. Effectively, you can compress 20, 30, 40, 100 changes, if they happen in a quick time, into a single reconciliation at the controller level. These controllers have to run idempotently to support things like retries. They basically run until they succeed. Or, if they have some fatal error, they’ll dead letter, and this will feed back through to basically report a bug in the system at this point.
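Level-based semantics can be illustrated in a few lines. Because the controller always reads the latest stored state rather than the event payload, a burst of notifications for the same resource collapses into one pending reconciliation; this toy sketch (names are illustrative) shows just that coalescing step.

```go
package main

import "fmt"

// coalesce drains a burst of change notifications into one pending flag per
// resource name. Events carry no payload; they only say "this changed", so
// later events for the same resource are absorbed, and the single resulting
// reconciliation reads the latest state from the store.
func coalesce(events []string) map[string]bool {
	pending := map[string]bool{}
	for _, name := range events {
		pending[name] = true
	}
	return pending
}

func main() {
	// 100 rapid-fire updates to one resource plus one update to another...
	var burst []string
	for i := 0; i < 100; i++ {
		burst = append(burst, "proj-a")
	}
	burst = append(burst, "proj-b")

	// ...become just two reconciliations.
	pending := coalesce(burst)
	fmt.Println(len(pending)) // prints "2"
}
```

This is the opposite of edge-triggered processing, where every event must be handled individually and a lost event means lost work.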

These controllers are clearly not Kubernetes controllers, and we’ve had to build our own framework for this. We have this generic base controller that abstracts the interface to the cache and the stream, and it also performs a resync to basically do drift detection. When it detects that there is something that we need to reconcile, it will only call add or delete. An add is for a create or an update, and a delete is obviously for a delete. It calls that on the actual controller’s reconciliation logic, and that controller will then do whatever it needs to do to reconcile that resource. That logic is completely API specific, whatever that reconciliation looks like. Another thing our controllers can do, because they are so lightweight, is just generate data. You don’t think about a Kubernetes controller that just writes a row to MySQL; you usually think about a Kubernetes controller that goes and configures some cloud resources or updates things in Kubernetes, but why not use the same pattern to just drive database changes and business logic? We actually have these lightweight controllers that can do things like that, and they can just build things like materialized views. For instance, that visualization we talked about earlier, you could just have that as some reconciliation over a graph type or whatever. You can start to think about using this really generic pattern in lots of different ways. Once the reconciliation logic is completed, it effectively calls update status, which is the feedback mechanism to close out the full reconciliation. The system detects that, ok, we don’t need to do anything else, this resource is reconciled. For anyone who’s deeply interested in controllers and that logic, we do also use finalizers for orchestrating deletes. If you are interested, check that out in Kubernetes, because it’s well documented.
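A hypothetical sketch of that base-controller contract: the framework only ever calls add (covering create and update) or delete on the concrete controller, retries idempotent reconciliations until they succeed, and stops on a fatal error, which would be dead-lettered. None of this is Diagrid's actual code; the names and retry policy are illustrative.

```go
package main

import (
	"errors"
	"fmt"
)

var ErrFatal = errors.New("fatal reconciliation error")

// Reconciler is the surface the base controller drives.
type Reconciler interface {
	Add(name string) error    // create or update
	Delete(name string) error // delete (finalizer-style cleanup)
}

// dispatch retries until success. A fatal error stops retrying; in a real
// system the event would be dead-lettered to surface a bug.
func dispatch(r Reconciler, name string, deleted bool, maxRetries int) bool {
	for i := 0; i < maxRetries; i++ {
		var err error
		if deleted {
			err = r.Delete(name)
		} else {
			err = r.Add(name)
		}
		if err == nil {
			return true
		}
		if errors.Is(err, ErrFatal) {
			break
		}
	}
	return false // dead letter
}

// flaky succeeds only after a few transient failures, to exercise the
// retry loop; reconciliations must be idempotent for this to be safe.
type flaky struct{ failures int }

func (f *flaky) Add(name string) error {
	if f.failures > 0 {
		f.failures--
		return errors.New("transient")
	}
	return nil
}

func (f *flaky) Delete(name string) error { return nil }

func main() {
	fmt.Println(dispatch(&flaky{failures: 2}, "proj-a", false, 5)) // prints "true"
}
```

The same dispatch loop works whether the concrete reconciler provisions cloud infrastructure or just writes a row to a materialized view table.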

To summarize, try to isolate user resources from your internal resources, especially through some form of tenancy or a specific Kubernetes cluster. Evaluate the various ways that you can run a Kubernetes like API server against your use case. It’s not necessarily the only option to run Kubernetes. A system can support both choreography and orchestration, and they both have advantages and disadvantages, so use them wisely. Resource composition can satisfy some templating use cases.

The Data Plane

We’ve talked about the control plane, but the data plane is where things actually get configured to give a service to end users. I like to think about this in a few different models. There’s the centralized approach, where all of the resources we’ve been talking about are being stored in an API server at the control plane level, and that’s where the compute is running, or the controllers which are reconciling those resources into the data planes. You have all of this centralized management and all the reconciliation happening centrally, but it’s reaching into all of those regions and configuring the data planes. This approach might work well at a fairly low scale, and it does have some downsides, which I’ll get onto in future slides. The second approach I think about is decentralized control, and this is where you have resources stored at the control plane level, but you are synchronizing them down to the data planes at the regional level, which is actually where the controllers run to do the reconciliation. Obviously, the API servers are only synchronizing the particular resources that they need in that data plane. I’ll quickly just touch on KCP. This is similar to how KCP basically builds its model, which is that you can have these virtualized workspaces and API servers, but you then bind them to workload clusters, which is actually where the work happens. The last approach that I’ll quickly touch on is the federated control approach, which is that no resources are stored at the control plane at all. Basically, you’ve just got a big router. That router is directing you to whichever data plane you need in order to store that resource. Then the controllers continue to run in the data plane. By extension of this model, you could also think about a mesh model, where basically all the API servers are in some form of mesh and can talk to each other, and can share resources among the regions. That’s a lot more complicated.

At Diagrid, we’ve followed the decentralized control model, which is similar to this, where you have a Kubernetes like API server in the control plane, and that’s where you store your resources. You need to somehow claim those resources from the data plane. You need to know which ones need to be synchronized down to the data plane. There is some form of claiming or binding which is assigning resources to a data plane. Then there’s a syncer, which is pulling those resources and updating the local API server, which then has the same logic we’ve already talked about, so that’s going to then drive the control loop, which will provision the end user services and shared infrastructure. One of the niceties about this approach is that the controller running in the data plane can handle all the environment variance, because if you have a multi-cloud strategy, that could be running in AWS, it could be running in Azure, it could be running in GCP, it could be running in OpenShift, it could be running anywhere. Because that controller is running natively, it can use things like pod identity and all of the native integrations with the cloud, rather than having some centralized controller having to second-guess what that particular region needs. One of the things that we saw when we followed this approach is that you quickly start to bottleneck the API server, and if this is a managed API server from some cloud provider, you’re going to get throttled pretty quickly. That’s because you are synchronizing resources from the control plane into the API server, and then you have the controllers watching the API server, and then, in turn, creating resources in the API server, which also have controllers which are watching the API server, and so on. You end up basically bottlenecking through your API server. We asked the question, could we go direct? Why are we using the API server at the data plane? Is it giving us any benefit?
We basically concluded that we could go direct, but we would have to lose Kubernetes interoperability. We would lose the ability to use native Kubernetes controllers, and we would have to go it alone using our own custom approach. We did effectively build a model around this, which is that we have this syncer, which can rebuild state at any time from the Diagrid API server using the ListWatch semantics we talked about, and then it effectively calls an actor. There’s basically an actor per resource in the data plane. I’ll touch on this in the next slide a little bit more. This is all packaged in one of those remote controllers that we talked about earlier, which can talk to the Diagrid API server. All of these messages are going over a single, bidirectional gRPC stream, so we can efficiently pick up any changes in resources from the API server almost immediately and react to that without waiting for some 30-second poll or anything like that.

Let’s look at these actors a little bit more. This is not strictly an actor by some formal actor definition, but basically what it is, is an object that represents something in the data plane. We think about things like projects or application identities or Pub/Subs, and things like that, as resources. This actor is like something that’s in the memory of the process, and it’s basically listening on an inbox for differential specifications. Changes to the specification get pushed to it through the inbox, and then when it detects that change, it updates its internal state of what that specification looks like, and then reruns a reconciliation loop, which is just using a provisioner abstraction to configure things in Kubernetes, through either native Kubernetes, or Helm, or a cloud provider. Throughout that process, it’s flushing status updates back up to the control plane, so you as a user can see it transitionally going through all of these states as it’s provisioning infrastructure and managing the various things that it needs to do. The reason I say it’s not strictly an actor is because there’s no durability. Our state can be rebuilt on demand, so we are not using any persistence for this actor. This actor is literally something that’s in memory. There’s no messaging between this actor and any other actor, which means there’s no placement, and there’s no activation. There’s none of that stuff. If you’re deeply familiar with actors and very strict on that, then me using the actor term is probably not correct, but it does give us the sense of the concurrency controls that we’re using, which is that we are blocking on an inbox channel and we are pushing through an outbox channel. In reality, this is actually leveraging Go’s concurrency primitives. This is actually a goroutine. This goroutine is listening on a Go channel, which is the inbox, and it’s writing to a Go channel on the outbox.
The Go runtime is optimized to basically schedule these goroutines efficiently. They are virtual threads, green threads, whatever you want to call them, and you can have tens of thousands, if not hundreds of thousands of these in a single process, using very little memory and CPU. Because these actors are mostly idle, or they’re doing I/O bound work talking over the network, we can really efficiently context switch between many actors and do concurrent processing at the same time.
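The actor-as-goroutine pattern described above can be sketched directly. This is a minimal, hypothetical version: the `Status` type, phase names, and the `reconcile` callback (standing in for the provisioner abstraction over native Kubernetes, Helm, or a cloud provider) are all illustrative, and the spec is reduced to a generation number.

```go
package main

import "fmt"

// Status is a hypothetical status update flushed back up to the control
// plane so users can watch the resource transition through states.
type Status struct {
	Key, Phase string
}

// runActor is one in-memory, non-durable "actor": a goroutine blocking on
// its inbox, rerunning reconciliation whenever a new desired spec arrives,
// and pushing status transitions to the outbox. There is no persistence,
// placement, or activation — state is rebuilt on demand by the syncer.
func runActor(key string, inbox <-chan int, outbox chan<- Status, reconcile func(spec int) error) {
	for spec := range inbox { // block until the syncer pushes a change
		outbox <- Status{key, "Provisioning"}
		if err := reconcile(spec); err != nil {
			outbox <- Status{key, "Failed"}
			continue
		}
		outbox <- Status{key, "Ready"}
	}
	close(outbox) // inbox closed: actor shuts down
}

func main() {
	inbox := make(chan int, 1)
	outbox := make(chan Status, 4)
	go runActor("pubsub/orders", inbox, outbox, func(spec int) error {
		fmt.Println("reconciling generation", spec) // provisioner work goes here
		return nil
	})
	inbox <- 1
	close(inbox)
	for st := range outbox {
		fmt.Println(st.Key, st.Phase)
	}
}
```

Because each actor is just a goroutine parked on a channel receive, tens of thousands of them sit idle at negligible cost, and the Go scheduler handles the context switching during I/O-bound reconciliation.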

Just coming back to the top-level picture, the last thing I wanted to talk about here is the ingress path. How are users actually talking to these end services? At the data plane level, you need to expose some form of public load balancer and ingress, and provide some way of routing to these services. Typically, like in this instance, you might use a Kubernetes ingress with a public load balancer, and then use a wildcard DNS record to do the routing. Your user will have a credential that they received when they provisioned whatever resource it was through the control plane. You will give them either a connection string, an API token, or, preferably, an X.509 certificate. They then present that to the data plane API, and you perform the routing to whichever service they are assigned to. A couple of things to think about here: you will need to offer variable isolation and performance levels at the services. It is just expected these days that if you are providing a cloud service, the performance is configurable, so users can request more CPU, more memory, more throughput, lower latency; all of that needs to be on a scale. You need to build the system so that your actors can reconcile different types of systems. They need to be able to say, I'm going to provision this inside some type of virtualization because I need stricter isolation, or I'm going to provision this using some external service because it gets higher throughput. You need to build all of this variability into your data plane to support your end users.
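The wildcard-DNS routing step reduces to parsing the tenant out of the request's host header. Here is a minimal Go sketch; the `service.tenant.<base>` layout and all names are illustrative assumptions, not Diagrid's actual DNS scheme, and credential verification is out of scope.

```go
package main

import (
	"fmt"
	"strings"
)

// tenantFromHost extracts the service and tenant labels from a wildcard
// DNS host such as "orders.acme.cloud.example.com" under the base domain
// "cloud.example.com". An ingress handler would use the result to pick the
// backend the caller is assigned to, after validating their credential
// (connection string, API token, or X.509 certificate).
func tenantFromHost(host, baseDomain string) (service, tenant string, ok bool) {
	prefix, found := strings.CutSuffix(host, "."+baseDomain)
	if !found {
		return "", "", false // not under our wildcard zone
	}
	parts := strings.Split(prefix, ".")
	if len(parts) != 2 {
		return "", "", false // expect exactly service.tenant
	}
	return parts[0], parts[1], true
}

func main() {
	svc, tenant, ok := tenantFromHost("orders.acme.cloud.example.com", "cloud.example.com")
	fmt.Println(svc, tenant, ok)
}
```

With X.509, the same tenant identity would typically also be asserted by the client certificate, so the router can cross-check the host-derived tenant against the authenticated one before forwarding.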

Lastly, to summarize: clouds can use a centralized, decentralized, federated, or mesh approach to data plane resource propagation. Try not to set fire to your API server, because it's quite hard to put out once it's burning. Consider how to handle environment variance in your data plane if you're doing multi-cloud. Provide tiers of isolation and performance at the data plane. One size does not fit all when it comes to cloud resources.

Timeline (Dapr and Diagrid)

October 2019 is when Dapr was first open sourced and made public on GitHub. It was donated to the CNCF in November 2021. I joined the project about a month after it was first announced, in November 2019. Diagrid was spun out in December 2021. We set out to build Conductor in about 7 or 8 months, which we pretty much did, with two backend engineers, one frontend engineer, and one infra engineer. That now serves hundreds of Dapr clusters and thousands of Dapr applications, and handles millions of metrics per day. It's now in production, and it's actually free. The second service is Catalyst, which went into private preview in November 2023. Again, we built that with a fairly lightweight team: four backend engineers, two frontend engineers, and two infra engineers, while also still working on Conductor and the open source Dapr project. That also runs hundreds of Dapr clusters, they just happen to be internal now, and thousands of Dapr applications, but it also has to process millions of requests per day since, obviously, it's an API as a service.

Questions and Answers

Participant 1: If you were to rewrite what you've done in the last 2 years, are there bits that you're not happy with or would change? Has anything changed in the last 2, 3 years?

Collinge: Yes. We had to iterate a few times on this. The first way we built this, we had the Diagrid API server, and then more logic in the control plane that was effectively copying these resources down to a different database table, and then another gRPC API that exposed them to the agent running in the data plane cluster. We realized we were just copying data all the way down, probably three times, to get it into the data plane. Then we had this light bulb moment: why don't we just run this as a remote controller and use a gRPC stream? The previous model was built on polling, and it basically left us taking minutes to provision resources. Although users are pretty familiar with waiting minutes to create a virtual machine, if you want to build the next-level UX for these things, being able to provision application identities and APIs in seconds is really what you're after. Moving to this model allowed us to reduce that time massively.

Participant 2: I saw that you've basically mimicked a lot of Kubernetes logic and functionality. Was it a conscious decision not to use Kubernetes, as a product decision, to decouple yourselves from the scheduling system and be agnostic so you can run on any cloud, even ones that don't offer a managed Kubernetes solution? Why didn't you just go with Kubernetes from the beginning?

Collinge: Kubernetes has a lot of interesting characteristics, which we're trying to gain. But it wasn't designed to run business logic. It wasn't designed for running lightweight controllers. In fact, it wasn't even designed for running controllers and CRDs. It was built for the kubelet to provision pods, and it's just been extended; people have repurposed it for these new use cases because they like the extensibility of the API. When we wanted to build Conductor initially, we had jobs that were literally just generating YAML and writing it to a file in S3 or in GCS. When you think about all the plumbing that goes into writing a Kubernetes controller just to do the simple job of generating some YAML and sticking it in a file, you start to see all the overhead you're buying into with Kubernetes. Basically, it came down to what I said at one point, which is: if you limit the solution space to Kubernetes, Kubernetes has lots of constraints, and you start limiting yourself more. If you step outside that and think about the more founding principles, I think you've got a lot more flexibility to explore other options. Like I said, you could; we couldn't, because we needed to be cloud agnostic. You could build all this on serverless, for sure, and it would be a lot simpler than some of the stuff I've talked about, but we didn't have that luxury.

Participant 3: The way I understood this is that you created this Kubernetes-like API partly to get around the specificities that the different Kubernetes offerings of the different cloud providers may have. Kubernetes on AKS is not the same as on EKS; between Azure and AWS you may have some differences. Now, for a data platform team that needs to build some service in a given cloud provider, say you build something on AWS and you want to build some kind of well-interfaced services, would you now take that road of building a "simple" API with a controller behind it and deal with that yourself? Or would you, in this more constrained context of one cloud provider, pick AKS, or one of the other provided Kubernetes offerings, and build a controller on top of it?

Collinge: I think this touches a little more on the platform engineering side of things, and it's a bit muddy and a bit vague. We didn't have a platform team. We were three engineers, so thinking about platform engineering would have been a bit nonsensical. You can build a cloud without using all of these cloud principles to provision your infrastructure internally. If you do want to get into the world of platform engineering, then on the infrastructure side, I would definitely not custom build stuff. For provisioning your services, for provisioning for data platform teams and all that, I would stick to Kubernetes and traditional workloads and integrations, and use as much off-the-shelf tooling as I could. The reason we built this cloud is to serve our end users efficiently and give them the experience we wanted, but all they're doing is provisioning resources. They're not building platforms on top of our cloud.

Participant 3: I also think you are probably doing some platform engineering, but as a SaaS. It's fairly similar, but the fact that you have a product and everything on top of that makes some kind of customization worthwhile.

Collinge: The closest thing to building a system like this as a SaaS, a full cloud-as-a-service offering, is Upbound, I think, but they are still very infrastructure focused. I think there probably is an opportunity to build a cloud-as-a-service offering that is a bit more flexible and supports more lightweight business logic, because you might just want to create an API key. Why do you need all this logic to create an API key?


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



GenAI Unleashed: Mastering AI Data – Tech Barcelona


The foundation of a thriving, #AI-driven business is no longer static, outdated data architectures. Today, the future belongs to those who build a dynamic real-time data framework.

On October 3, join MongoDB, Confluent, and Amazon Web Services in Barcelona for a roundtable discussion and happy hour where you will learn how to unlock new #genAI opportunities.

This event is aimed at architects, software developers, and engineers from enterprise-level and digital-native companies who love to leverage cutting-edge technologies to build amazing AI-powered applications.



MongoDB and AICTE Partner to Upskill 500,000 Indian Students – Elets Digital Learning


MongoDB has joined forces with the All-India Council for Technical Education (AICTE) under the Ministry of Education to upskill 500,000 students across India. This partnership, part of MongoDB’s “MongoDB for Academia” initiative, aims to provide in-demand full-stack development skills to students through a 60-hour virtual internship. This training includes experiential learning, boot camps, project work, and exposure to corporate-style environments.

The collaboration is backed by SmartBridge’s SmartInternz platform, offering over 150,000 students access to virtual internships and hands-on experience with MongoDB Atlas, a leading multi-cloud developer data platform. Launched in September 2023, the program provides educators with curriculum resources, students with free credits to use MongoDB tools, and certifications to jump-start careers in technology.

To further expand its reach, MongoDB has also partnered with GeeksforGeeks, a well-known computer science learning platform in India. This collaboration will offer MongoDB’s Developer Learning Path to 25 million GeeksforGeeks users, reaching over 100,000 aspiring developers through both online and offline centres.

Dr Buddha Chandrasekhar, CEO of Anuvadini AI and Chief Coordinating Officer at AICTE, emphasised that the ongoing wave of AI and modern technologies presents India with vast opportunities. He highlighted the importance of equipping developers with the right skills to capitalise on this potential.


MongoDB has already made substantial strides in India, with over 200 partnerships across educational institutions, training more than 100,000 students, and completing over 450,000 hours of learning. According to Sachin Chawla, Area Vice President of MongoDB India, the initiative is a testament to MongoDB’s commitment to nurturing the next generation of tech talent in the country.

This initiative positions MongoDB and AICTE as key contributors to India’s tech education ecosystem, offering unparalleled resources to students eager to develop cutting-edge skills.




IIM Raipur MDPs: Essential Skills for Today’s Dynamic Business World


IIM Raipur is set to launch its much-anticipated Management Development Programs (MDPs) on September 20, 2024, offering professionals a unique opportunity to upgrade their managerial skills and advance their careers. Running for just over a month until October 23, 2024, these programs are tailored to meet the needs of professionals across industries, helping them navigate today’s complex business environment.

The Indian Institute of Management (IIM) Raipur has designed six Management Development Programs that cover essential subjects, including Healthcare Management, Business Analytics, and Public-Private Partnerships in General Management. These courses aim to equip professionals with the expertise needed to drive growth and efficiency within their organisations.

According to IIM Raipur, the MDPs also include specialised training in Project Appraisal, Financing, and Project Management, as well as Financial Risk Management. Participants will also explore Innovation and Technology Management within the Strategic Management domain, ensuring they gain cutting-edge insights into managing modern business challenges.


The curriculum employs a range of practical learning methodologies, such as:

  • Case studies, simulations, and role-playing exercises
  • Critical incident techniques and In-Basket exercises
  • Interactive group discussions and program-specific assignments
  • Classroom lectures led by expert faculty

IIM Raipur’s MDPs are designed to offer professionals hands-on experience with real-world scenarios, enabling them to apply contemporary strategies and tools to manage their organisations effectively. Whether you’re looking to specialise in finance, project management, or strategic management, these programs provide the advanced knowledge needed to excel in today’s fast-changing business landscape.

