How MCP could add value to MongoDB databases – InfoWorld

MMS Founder
MMS RSS


Context-aware vibe coding via MCP clients

Another advantage of MongoDB integrating MCP with its databases is helping developers code faster, Flast said, adding that the integration will enable context-aware code generation via natural language in MCP-supported coding assistants such as Windsurf, Cursor, and Claude Desktop.

“Providing context, such as schemas and data structures, enables more accurate code generation, reducing hallucinations and enhancing agent capabilities,” MongoDB explained in the blog, adding that developers can describe the data they need and the coding assistant can generate the MongoDB query along with application code that is needed to interact with it.
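To make that concrete, here is a hypothetical sketch of the kind of query and application code such an assistant might generate once it can see the real schema over MCP. The collection name, field names, and connection string below are illustrative assumptions, not from MongoDB’s blog.

```python
# Hypothetical example of assistant-generated code for the request
# "find a customer's five most recent completed orders", assuming the
# assistant saw an "orders" collection with customer_id, status, and
# created_at fields via MCP.
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

def recent_orders(customer_id: str, limit: int = 5):
    """Return a customer's most recent completed orders.

    Because the field names come from the live schema rather than the
    model's guess, the filter and sort keys match the real documents.
    """
    cursor = (
        orders.find({"customer_id": customer_id, "status": "completed"})
        .sort("created_at", DESCENDING)
        .limit(limit)
    )
    return list(cursor)
```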

MongoDB’s efforts to introduce context-aware vibe coding via MCP clients, according to Andersen, will help enterprises reduce both financial costs and technical debt, and sustain integrations with AI infrastructure.

Article originally posted on mongodb google news. Visit mongodb google news



Schonfeld Strategic Advisors LLC Has $5.95 Million Stock Holdings in MongoDB, Inc …

MMS Founder
MMS RSS


Schonfeld Strategic Advisors LLC reduced its holdings in shares of MongoDB, Inc. (NASDAQ:MDB) by 19.1% in the 4th quarter, according to its most recent 13F filing with the Securities and Exchange Commission (SEC). The institutional investor owned 25,556 shares of the company’s stock after selling 6,018 shares during the quarter. Schonfeld Strategic Advisors LLC’s holdings in MongoDB were worth $5,950,000 as of its most recent SEC filing.
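The reported percentage follows from the share counts; as a quick arithmetic check (the implied share price is an inference from the stated figures, not something in the filing):

```python
# Sanity-checking the filing figures quoted above.
shares_after = 25_556
shares_sold = 6_018
shares_before = shares_after + shares_sold          # 31,574 shares previously held

pct_reduction = shares_sold / shares_before * 100   # ~19.1%, matching the report
implied_price = 5_950_000 / shares_after            # ~$232.82 per share at quarter end

print(f"{pct_reduction:.1f}% reduction; implied price ${implied_price:.2f}")
```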

A number of other hedge funds and other institutional investors also recently added to or reduced their stakes in the business. Vanguard Group Inc. increased its position in MongoDB by 0.3% during the 4th quarter. Vanguard Group Inc. now owns 7,328,745 shares of the company’s stock worth $1,706,205,000 after purchasing an additional 23,942 shares in the last quarter. Franklin Resources Inc. grew its holdings in shares of MongoDB by 9.7% during the fourth quarter. Franklin Resources Inc. now owns 2,054,888 shares of the company’s stock worth $478,398,000 after buying an additional 181,962 shares in the last quarter. Geode Capital Management LLC raised its position in shares of MongoDB by 1.8% in the fourth quarter. Geode Capital Management LLC now owns 1,252,142 shares of the company’s stock valued at $290,987,000 after buying an additional 22,106 shares during the last quarter. First Trust Advisors LP lifted its stake in shares of MongoDB by 12.6% during the fourth quarter. First Trust Advisors LP now owns 854,906 shares of the company’s stock valued at $199,031,000 after buying an additional 95,893 shares during the period. Finally, Norges Bank acquired a new stake in MongoDB during the fourth quarter worth approximately $189,584,000. Institutional investors and hedge funds own 89.29% of the company’s stock.

Insider Transactions at MongoDB

In related news, CFO Srdjan Tanjga sold 525 shares of the company’s stock in a transaction that occurred on Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total transaction of $90,961.50. Following the sale, the chief financial officer now directly owns 6,406 shares of the company’s stock, valued at approximately $1,109,903.56. This represents a 7.57% decrease in their position. The transaction was disclosed in a legal filing with the SEC. Also, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total value of $52,148.25. Following the transaction, the chief accounting officer now owns 14,598 shares in the company, valued at $2,529,103.50. This trade represents a 2.02% decrease in their ownership of the stock. In the last 90 days, insiders sold 36,345 shares of company stock valued at $7,687,310. Insiders own 3.60% of the company’s stock.

Wall Street Analysts Weigh In

MDB has been the topic of a number of recent analyst reports. Barclays reduced their price target on MongoDB from $330.00 to $280.00 and set an “overweight” rating on the stock in a research note on Thursday, March 6th. Stifel Nicolaus reduced their target price on MongoDB from $340.00 to $275.00 and set a “buy” rating on the stock in a research report on Friday, April 11th. Cantor Fitzgerald initiated coverage on MongoDB in a research note on Wednesday, March 5th. They issued an “overweight” rating and a $344.00 price target on the stock. Rosenblatt Securities reaffirmed a “buy” rating and set a $350.00 price objective on shares of MongoDB in a research note on Tuesday, March 4th. Finally, Daiwa America raised shares of MongoDB to a “strong-buy” rating in a research report on Tuesday, April 1st. Eight research analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has issued a strong buy rating to the company. Based on data from MarketBeat, the stock currently has an average rating of “Moderate Buy” and an average price target of $294.78.


MongoDB Stock Performance

Shares of NASDAQ MDB traded up $1.32 during midday trading on Monday, reaching $172.96. 1,254,868 shares of the company’s stock were exchanged, compared to its average volume of 1,851,850. MongoDB, Inc. has a 52-week low of $140.78 and a 52-week high of $379.06. The business’s fifty day moving average price is $184.75 and its two-hundred day moving average price is $244.79. The stock has a market capitalization of $14.04 billion, a PE ratio of -63.12 and a beta of 1.49.

MongoDB (NASDAQ:MDB) last posted its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $548.40 million for the quarter, compared to analysts’ expectations of $519.65 million. During the same quarter in the prior year, the firm earned $0.86 earnings per share. Analysts predict that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

MongoDB Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.



Article originally posted on mongodb google news. Visit mongodb google news



Presentation: Slack’s Migration to a Cellular Architecture

MMS Founder
MMS Cooper Bethea

Article originally posted on InfoQ. Visit InfoQ

Transcript

Bethea: My name is Cooper Bethea. I’ve been an infrastructure engineer, an SRE, a production engineer. I worked at Slack and Google. I worked at a couple smaller places too. I worked at Foursquare, and Sift, formerly Sift Science. I’m a software engineer, but I’m bad at math. I like people. I’m better at people than I am at math. I ended up in this place where I’m doing a lot of work on site reliability and uptime, mostly because I was willing to hold a pager.

It turns out, in the end, this all adds up to me enjoying leading large infrastructure projects where the users don’t really notice anything, except at the end, everything works better. This talk is about one of these projects. I was an engineer at Slack, and I led a project to convert all of our user-facing production services from a monolithic to a cellular topology. How many of you use Slack at work? Clearly, this project needs some justification, and that’s what my VP thought too. I’m going to start by describing how we came to start this project.

Then I’ll give you an overview of how our production environment looked before, and then how it looked after, and these will look similar. You will wonder why this was so hard that I got to do a talk, because we didn’t even invent an algorithm or anything. This project was hard. I think projects like these are hard. We tried to do this at Slack once before, and it never really got off the ground. Part of what I’m going to talk about is how and why our rendition of this project actually succeeded.

One thing I would ask you to consider as you’re listening to this talk is that these large migration projects often fail or run aground, even when you thought they would be simple, just changing one system to another system. We’re not even writing new software. A lot of the time, it’s just these reconfigurations of software. Why is this hard? This is some good old AI-generated slop that I made about a ship voyage, because a lot of these pictures are copyrighted. Similarly, we can ask why it used to be hard to sail a ship across the ocean. You know where you’re starting from, you know where you want to end up, but it was still hard. I think that big projects like these are similar to these exploratory voyages that we used to do, where you know where you’re going, but you don’t know exactly how you’ll get there. I think that’s very confusing. It can be hard for organizations. Like all the best projects, this project was born from fear, anger, lost revenue, and a bad graph.

One day, we’re just slacking along, doing our projects, running the site or whatever, and this happens. This is a graph of TCP retransmits split by availability zone in Amazon U.S.-east-1. For us, TCP retransmits have always been a pretty clear sign that something is going wrong. Packets are getting dropped somewhere, whether it’s on one of the endpoint nodes or in the network hardware in between; something is not good, and we need to take a look at that. If you look at the monitoring and do a little mental math, you can see that we have more of these dropped packets in one AZ than the others. You can see that the one AZ’s count sums to the same as the other two combined. We got the idea that this was about packets being lost traveling from that AZ into the other two AZs. We were like, that’s bad.
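As a sketch (not Slack’s actual tooling), the asymmetry described here is easy to test for: flag the AZ whose retransmit count roughly matches the sum of everyone else’s.

```python
# Flag the AZ whose TCP retransmit count roughly equals the sum of the other
# AZs' counts -- the pattern described above, suggesting packets are being
# lost leaving that AZ. The counts and AZ names are illustrative.
retransmits = {"use1-az1": 9_800, "use1-az2": 4_950, "use1-az4": 4_900}

def suspect_az(counts: dict[str, int], tolerance: float = 0.1) -> str | None:
    total = sum(counts.values())
    for az, n in counts.items():
        others = total - n
        if others and abs(n - others) / others < tolerance:
            return az  # this AZ's retransmits mirror everyone else's combined
    return None

print(suspect_az(retransmits))  # -> "use1-az1"
```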

We basically called AWS, we did the thing where you file your support ticket, and they were like, we’re looking at it, and then we’re all sitting on Zoom. Then they found the network link and they fixed it, and all our errors went away. Then a few hours later, an automated system put the link back in service, and it went down again. We called AWS again, and they processed the ticket, and they fixed it again. We were tired. We were just like, why do we have to go through this? Why do our users have to endure this? We built this distributed system, and it really ought to be able to deal with a single AZ being entirely unavailable. Why did we serve errors? Why didn’t we just steer around the bad availability zone? It’s in the name, availability zone. You’re supposed to be able to build availability out of these things, multiples of them. We do a lot of work to detect errors in distributed systems; why didn’t that save us here either?

Slack’s Architecture

First, let’s take a look at the architecture of Slack. It’s a little different from what you’ll see at other sites, but not that weird. What we do is we have these edge regions where we terminate SSL close to users. We’ll get you a WebSocket out there. We’ll serve files, stuff like that that makes sense to push close to the users. Then the most important work on the site all actually happens inside U.S.-east-1. That’s where we retrieve channel histories.

If you post a message, that’s where the fanout starts. All that happens in one region, U.S.-east-1. What we do is we forward the traffic into this internal load balancing layer that you can see here that fronts each availability zone, and then these internal load balancers direct the traffic into a webapp, which is what you’re thinking: it’s written in Hack, and it processes HTTP requests for users with the help of backend services and datastores. You would look at this and be like, we could just stop sending traffic to that AZ, and this would just work. We should just have done that when we had the outage. That didn’t work because this slide is a lie: this is actually what everything looks like behind the webapp servers. You can see what’s going on here. This is spiritually descended from these three-tier architectures that we know. You’ve got a reverse proxy.

Then you’ve got an app server that terminates HTTP. Then you’ve got some databases behind it that it gets data from to answer these questions. We’ve just got this extra layer of reverse proxies because we’re crossing a regional boundary. We’ve got a whole bunch of different services on top of the database, including Memcache. There’s a Pub/Sub system that does fanout. There’s a search tier, stuff that is useful to the webapp in answering these queries. You’ll notice that most of these services are actually being accessed cross-AZ, because that was how we built them out, basically. We were counting on failover happening. We weren’t paying attention when we set up this part of the stack.

It turns out we had even more problems than that. Our main datastore is Vitess, which is a coordination layer, a sharding layer on top of MySQL in the end, and it’s strongly consistent. It needs a single write point for any given piece of data. You have to manage failover within the system. You can’t just stop sending Vitess traffic in one zone. If that zone contains the primary for a shard, you actually need to do an operation in Vitess to fail over primariness, mastership of that shard, to a different availability zone. We’ve got our work cut out for us at this point. We couldn’t pop the AZ out of frontend load balancing, but maybe we could do this automated thing I was talking about: just have the computers figure it out themselves, have the app servers notice that an AZ is not really good anymore and fail away from the impacted AZ in their backends. This is a simplified waterfall. There’s a chunk of a waterfall graph between the webapp and its backends.

Once an API request from a user gets in there, there’s actually a lot of fanout. It’s not just these five RPCs, it’s maybe 100 RPCs. If we wave our hands: if you need to do 100 RPCs to your backends and you’re living with a 1% error rate, you’re probably never going to assemble a whole HTTP request without a bunch of retries and things getting slow and messy. Then, conversely, once the app server is missing a lot of its backends and serving a lot of errors, the only reasonable response is for it to just lame duck itself to the load balancer and get taken out. This is viscerally unnerving to me, because we are forced to face this idea that if these webapp servers all have the same backend dependencies and they’re starting to lame duck themselves because one of these backends that they’re fanning in on is failing, we’d have a big possibility for cascading failure across the site.

Another consideration: in Memcache, you mostly shard your data via clients hashing over the backends, using consistent hashing. Clients in the affected AZ had a different view of their backends in the Memcache ring, so we would get cache items duplicated in the ring. There are some subtle consistency issues in there: items can appear to be missing when they actually exist. That’s a recipe for database pressure, which again, fanning in, overloads the site in a bad way.
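A toy illustration of that failure mode (illustrative, not Slack’s implementation): two clients with different views of the ring send the same key to different servers.

```python
# Toy consistent-hash ring: clients in the impacted AZ dropped a node from
# their view, so some keys hash to a different server than everyone else
# sees, producing duplicated and apparently-missing cache entries.
import hashlib
from bisect import bisect

def ring(nodes, vnodes=100):
    return sorted(
        (int(hashlib.md5(f"{n}-{i}".encode()).hexdigest(), 16), n)
        for n in nodes for i in range(vnodes)
    )

def lookup(points, key):
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    idx = bisect(points, (h,)) % len(points)   # wrap around the ring
    return points[idx][1]

full_view = ring(["mc1", "mc2", "mc3"])
partial_view = ring(["mc1", "mc2"])            # client that lost sight of mc3

for key in ["user:42", "channel:99", "team:7"]:
    a, b = lookup(full_view, key), lookup(partial_view, key)
    if a != b:
        print(f"{key}: healthy clients use {a}, impacted clients use {b}")
```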

We thought about all this and we were like, it’s actually hard for the computers to do the right thing here. We want, perfectly, all the webapps in that one AZ to lame duck themselves and none of the webapps anywhere else to lame duck themselves. Then we’ve got to do some stuff with Vitess, which we’ll talk about later. It just felt bad. We were humans and we could see the graphs. We were sitting on the Zoom call being like, maybe we should drain this availability zone. If we could have smashed the button, we would have, and we would have done it. We were also worried that if we removed traffic entirely from one AZ, it would maybe overload the other two AZs and send the site into cascading failure. We’re limping along at this point.

Originally, we’re serving like 1% errors. We are not in our SLO. It’s not as bad as it could be. We were afraid that we could send it into cascading failure and make it worse. We had some ideas, but we didn’t really trust them. It’s really scary to do things for the first time in an incident. What actually happened is we just sat on Zoom and talked about it for a while, and then AWS fixed it while we were talking about it. Then we felt bad. I felt bad enough that I wrote a post in Slack because that’s what we did there. It was called, we should be able to drain an AZ. Everybody was like, yes, we should. Then we made a plan.

Cellular Design

I’m going to start by talking about cells. In our implementation, a cell is an AZ: a set of services in an AZ that may be drained or removed from service as a unit. This is what we ended up looking like. Here’s the goals that we needed to satisfy. We worked backwards: we arrived at the cellular design by considering the actual use case that we had, which is that we wanted to be able to drain an AZ as much as possible.

Our goals: remove as much traffic as possible from an AZ within five minutes. This is just table stakes when you’re in four or five nines and higher territory; you need tools that work fast because you only get a little bit of downtime. Drains must not result in user-visible errors. One of the things about being able to drain a data center or an AZ is that it’s a generic mitigation. As long as a failure is contained within a single AZ, you can just be like, something is weird in this AZ, I’m going to drain it. You can do that without understanding the root cause. It offers you a chance to make things better for your users while you’re still working on understanding what went wrong. If we add errors by draining, we’re probably not going to want to do that because it will make our SLOs worse. Drains and undrains must be incremental.

By the same token, what we want is to be able to pull an AZ, fix whatever we think broke inside that AZ, and then leak a little bit of traffic back in there, just like 1% or something, and be like, are we still getting errors? Did we fix the thing? It’s much better than just dumping all the traffic back in there. We wanted to be very incremental, down to a 1%-ish level. Finally, the drain mechanism must not rely on the impacted AZ. Some of the first things we did when we were trying to make this work was to SSH around to the servers and make them lame duck themselves. Especially when we have these big network outages, we just did not want to be trying to do that at the end of a bad network link in a real big incident.

How Things Were Done, and Choices Made

Now I’m going to talk about the ways that we did it and back into how we chose to do those things. The first one is pretty straightforward, and it’s the thing that you’re probably already thinking about: siloing. We have servers in one AZ and they just talk to upstream servers in that same AZ. It’s pretty simple, and for services that it works for, it works great. This is mostly a matter of configuration for services that are ok with this. We did some work in the service discovery layer and our internal service mesh to support this, but you don’t really need to do all that stuff.
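A minimal sketch of that idea at the discovery layer (names are illustrative, not Slack’s service mesh code): prefer same-AZ endpoints, and fall back to the full set only if the local silo is empty.

```python
# Siloing sketch: resolve a service's endpoints, keeping only those in the
# caller's own AZ. Falling back to all endpoints (rather than failing closed)
# is a policy choice; a hard silo would return the empty list instead.
from dataclasses import dataclass

@dataclass
class Endpoint:
    host: str
    az: str

def silo(endpoints: list[Endpoint], local_az: str) -> list[Endpoint]:
    local = [e for e in endpoints if e.az == local_az]
    return local or endpoints

backends = [Endpoint("10.0.1.5", "use1-az1"), Endpoint("10.0.2.7", "use1-az2")]
print(silo(backends, "use1-az2"))   # only the use1-az2 backend is returned
```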

At the other end of the difficulty scale, we have Vitess, where, as I mentioned, we have to internally manage the draining. Anything that’s strongly consistent in your architecture is probably going to need to have writes processed through a single primary point, so you’ll have to do some failover. In our case, we’re actually able to do this faster than it seems in Vitess because each of these shards is independent. As long as there’s not too much pressure on the database overall, you can start flipping the shards in parallel and actually go quite quickly. We ended up building features into some autohealing machinery that we have for Vitess that would let us do this really quickly.
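As a sketch of the shape of that automation (PlannedReparentShard is a real Vitess operation, but verify the CLI flags against your version; the tablet aliases and shard names here are hypothetical):

```python
# Reparent every shard whose primary lives in the bad AZ, in parallel, since
# shards are independent. Real tooling would pick replacement tablets from
# the topology service instead of a hardcoded map.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def reparent(keyspace_shard: str, new_primary: str) -> None:
    subprocess.run(
        ["vtctldclient", "PlannedReparentShard", keyspace_shard,
         "--new-primary", new_primary],
        check=True,
    )

# Hypothetical mapping of shard -> healthy replica outside the drained AZ.
moves = {"main/-80": "use1-az2-0000000201", "main/80-": "use1-az4-0000000305"}

with ThreadPoolExecutor(max_workers=8) as pool:   # bound topology-server load
    futures = [pool.submit(reparent, s, t) for s, t in moves.items()]
    for f in futures:
        f.result()   # surface any reparent failure
```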

Then we started thinking, what do we choose for each service and why? I’ve mentioned some things about state and strong consistency. It would be nice to have something a little more principled to think about here. Like I said, we like siloing best when we can do it. Service owners don’t have to manage drains for themselves, and they get three or four isolated instances of their service, so the blast radius of failures is better. In practice, they can’t all do that. Services that can and can’t do this sort of break down along this idea of stateful versus stateless. Like, is the service a system of record for some piece of data? If so, it will probably be harder to just silo.

Further than that, we can look at the CAP theorem a little bit to get an idea. We know the CAP theorem tells us that a system can be available during a network partition, or it can be consistent during a network partition, but not both. Actually, in practice, this ends up being a little more of a continuum. There are different pieces of storage software that satisfy this differently with different properties. When you look here, we’ve got the webapp, which is our canonical stateless thing. It’s just processing based on stuff it gets from datastores. Then we’ve got Vitess on the other end, and that’s strongly consistent.

In the middle, we actually have stuff like our Pub/Sub layer or the Memcache. It turned out that we were using Memcache as a strongly consistent datastore, which is the default thing that you do because you’re just always writing to one Memcache node and then reading out of that Memcache node again. This didn’t really work out great for us, but we found that there were many places in our application where we had trusted that we would have strongly consistent semantics. One of the things that we ended up doing for Memcache, and this is me trying to give you a little bit of flavor around the decision-making process, is we set up a series of per-cell Memcaches. Then we maintained a global Memcache layer, and we slowly, call site by call site, migrated from the old system to the new system.
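A sketch of how such a call-site-by-call-site migration can be done safely (illustrative; assumes memcache client objects exposing get/set, as python-memcached does):

```python
# Migration sketch: call sites flipped to this helper read the new per-cell
# tier first, fall back to the legacy global tier, and backfill on a hit, so
# each call site can move independently without a flag day.
def migrated_get(key, cell_cache, global_cache, ttl=300):
    value = cell_cache.get(key)
    if value is not None:
        return value
    value = global_cache.get(key)        # legacy global Memcache tier
    if value is not None:
        cell_cache.set(key, value, ttl)  # warm the per-cell tier for next time
    return value
```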

Let’s look at one other thing here. We’ve got this other idea of services that need to be global, but they’re built on these strongly consistent datastores. For example, we used Consul for service discovery. Consul, underneath it all, has a consensus database that is strongly consistent, and a Consul cluster tends to fail as a unit, totally. What we used to do is we had many of these Consul clusters, and they were striped across all the availability zones.

Eventually, we were able to bring the Consul clusters in each availability zone down to a high-priority and a low-priority cluster. Then we use our xDS Control Plane. xDS is the Envoy API for dynamic configuration. We were also able to use that for service discovery information, since that’s just a subset of what it’s already doing. We have this eventually consistent read layer where you see the Global xDS Control Plane over at the top, and it sits on top of these per-AZ Consul clusters, assembles the information, and presents it to clients. If you have to run global services, I really recommend that they be eventually consistent as much as possible; the data model just makes sharding and scaling much simpler. It’s a pretty safe choice for you. You can also see that we’ve brought down the number of Consul clusters. I’ll talk about that later. Along the way in this project, we were able to do some housekeeping that we’d been putting off.

Finally, there’s the drain mechanism itself. We have traffic control via the Envoys and xDS to do the drains. As mentioned, in the first implementation we would have services health-check themselves down (lame ducking, hence the duck emoji). We decided that wasn’t incremental or gentle enough. There’s a very nice feature in the Envoy configuration called traffic weighting. We can just say, by default, everybody gets 33%. Then when we drain, we will set the cell to 0%. It just works.
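A sketch of the weight math a control plane might push (illustrative; Envoy’s locality weights are relative integers, so any total works):

```python
# Compute per-AZ locality weights for the xDS layer. A drain fraction of 1.0
# removes the cell entirely; 0.99 leaks ~1% of its normal share back in, the
# incremental undrain described above. A large total keeps 1% of one AZ's
# share representable as an integer weight.
def az_weights(azs: list[str], drains: dict[str, float],
               total: int = 10_000) -> dict[str, int]:
    base = total / len(azs)
    return {az: round(base * (1 - drains.get(az, 0.0))) for az in azs}

print(az_weights(["az1", "az2", "az3"], {}))             # even ~33% split
print(az_weights(["az1", "az2", "az3"], {"az1": 1.0}))   # az1 fully drained
print(az_weights(["az1", "az2", "az3"], {"az1": 0.99}))  # az1 leaks ~1% back
```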

Now I’ll back out. Here’s our original diagram. This is where we started. Again, here’s where we ended up. I’m going to flip back and forth a couple times. You can see we have fewer lines now. Things are nice and orderly. I’m glad that the slide is laid out this way. We’ve controlled the cross-AZ traffic in some pretty nice ways. We just have a few services that are talking cross-AZ now, and we’re being choosy about them. We can treat these services with more care, because we’re letting them break a cellular boundary, and there’s some potential for them to poison data or put load on other cells.

Fortunately, these are already our databases and our systems that are the system of record for data. We’re already being careful about them. Everybody who runs databases says it’s hard. I’ve always avoided it. This also gives us a model for bringing up a new availability zone if we want to, because we can just build one of these things out, build the data storage systems, start the replication underneath, and then build the other layers up from the bottom, and eventually enable them in load balancing. If we want to move things around, or run more or fewer AZs in the future, it becomes much more tractable.

The Success Drivers

I did say we tried and failed to do this before. This looks pretty simple, and I’ve explained it to you in a nice way. Why did we succeed this time? This is the part of the talk where things get messy, and we’re going to talk about people’s feelings and why doing stuff is hard. The first time we talked about doing this, we were going to make these empty isolated cells and build all the infrastructure into them, and at some point, go lights-on and cut over. Among other things, there were going to be solid network boundaries between these cells, and there was going to be a bunch of data repartitioning. It gave us a lot of nice properties. In the end, it was too hard to do. The reason for that is because we couldn’t stop working on features. Customers do not care that you’re doing this. None of your customers care that you are rebuilding your production environment, I promise.

At the same time, we’re talking about building this very beautiful system here. This is a lot different from the old system. The whole time we’re building it, we’re maintaining two very separate environments and production topologies. This is a big drag on productivity because you have to make everything work twice and then keep it working. The environment that you’re not using is going to decay over time. Not everything is going to be covered perfectly by integration tests. Weird things will fail. God help your monitoring. There’s a lot of divergence happening when you’re working in this mode. The mental overhead is really high. We couldn’t even do this in a quarter, and you can’t take a quarter off work for your whole infrastructure organization; there’s no way we could flip it over really quick. The resource cost is also an issue.

If we’re paying double for the site the whole time, we’re going to go broke. In the end, we abandoned this design, not because the end state was flawed, but because we couldn’t really find good answers to the concerns about how we’d actually develop and roll it out. The core property this solution lacks is incrementality. We can’t move smoothly back and forth from the old regime to the new regime, and we can’t keep working on other stuff at the same time without doubling effort.

Conversely, we don’t realize any value from this project until we cut everything over to the new system. There’s a very real risk that it’ll get canceled along the way. We’ll start trying to cut over to the new systems and realize that we are worse off than before. Then we’ve just made a big mess and everybody is mad at us. None of these problems are really, again, entwined with the beginning or end states. In these large projects, it’s the messy middle where all the complexity and risk is lying. You have to respect the migration process. It’s the most important and the most fraught part of any big project like this.

At this point, we went to the literature, actually. This is called the slime mold deck, as a lot of people know it. This guy, Alex Komoroske at Google wrote it. You can see the URL here, https://komoroske.com/slime-mold/, but also if you just Google, slime mold deck, it’s the first thing, ahead of a bunch of Reddit stuff about slime and decks. It’s actually about how these coordination headwinds can slow progress on projects in large organizations. As Alex states the problem, coordination costs go up as organizations grow larger. That’s what this is all about: coordination costs. Our original plan required us to tightly coordinate across every service at the company to build these empty cells all the way up before we turned them on. We can’t really set things up so that engineering teams can take bites out of the problem for themselves quarter over quarter.

Every team is going to support two separate environments until everybody is done. What do we do instead? We embrace this approach where we’re doing bottom-up development. We went service by service and really figured out what made sense for that service; you remember that rubric with AP and CP I told you about. The people working on the core project team sat down with each service and worked on developing a one-pager: maybe you can’t just be siloed or fail over now, but can we get to that place? What do we need to do to get there? Then also, one of our tactics was that we really embraced good enough. One service not being compliant does not risk the whole project. We can’t not turn the thing on because one service got delayed a quarter. People have outages. Priorities change. We need to not coordinate so tightly. We need to make it so that things converge.

The concept laid out in the slime mold deck is this idea that instead of doing a moonshot, a straight shot for a very ambitious goal, you should do a series of roofshots where you’re not necessarily going the most direct route, but you’re locking in value at each step of the way. Each service in my example here that becomes compliant locks in value for the project and reduces our execution risk. We don’t go straight to the goal; we get there roofshot by roofshot. Another way to look at this is we’re willing to sacrifice some efficiency in return for a reduced risk of failure overall. We operated at Slack in this way where the infrastructure teams were in the same organization, but were largely independent. I think this is pretty common at large organizations now. I believe that most services operate at a local maximum of efficiency.

Some people show up and are like, “This service is terrible. That must be awful. You must’ve designed it in a dumb way”. I don’t think that’s true. All the people that you’re working with are actually smart just like you are. They have just been operating under different constraints. Services evolve over time. Every service owner in particular has a laundry list of technical debt that they want to clean up and they never have time to. That is because tech debt is the price of progress. It is not some engineering sin. Some of these things that service owners want to do and never get the time to do will make cellularizing things easier. In our big project, we can give these teams air cover to reduce some complexity before they add it back.

For example, in the service discovery system, returning to this example, we used to operate all these functionally sharded Consul clusters, and they spanned AZs. The idea here was to keep these internal Slack services from stepping on each other’s Consul clusters. That wasn’t actually the greatest idea because it turns out, once again, customers don’t care. Any of these critical services being down basically meant that Slack was down for everybody. We just knew which service to point the finger at when we had these outages. In the spirit of reducing complexity, as part of doing the work, we just moved to these high and low SLO clusters, each within an AZ. Then we assembled the service discovery information over the top using the xDS layer that I talked about before. We were able to collapse all these functional clusters into high and lower priority clusters.

The high priority clusters aren’t better and we don’t run them better. We just expect more from the customers of the high priority clusters. If you’re a critical service, you need to be attached to a high priority cluster. Then you need to be careful about how you use that shared resource. Again, this is one of these things like selecting the datastores where we’re able to zero in on these extremely critical services and expect a little more from them. As it turns out, teams love this. Even if the team that you’re working with isn’t really sold on cellularization as a project, they almost certainly think that their team has better work to do, and they’re right. They can get some value for themselves just from cleaning up the tech debt. You can see how this leads to a place where the teams have both the flexibility and enthusiasm to work on this larger project.

Project Cadence

I’m going to talk about the cadence that we used in this project. I started with this one-pager, which is, we should be able to drain an AZ. I circulated it to a bunch of engineers from different teams that I knew and trusted. The goal here is really to identify anything that just obviously won’t work. You don’t need to plan this project all the way to the end from the beginning. Most of the time, you’re looking for reasons to stop. You can bring this to institutional design review processes, just circulate it to people and listen to everybody’s objections about why it won’t work. At this point, you haven’t really spent a lot of effort. We haven’t spent a lot of effort organizationally on making this project work. If it doesn’t work, that’s fine. We didn’t waste too much effort here. At the end of this phase, in return, you should have enough confidence that you can go to engineering management, go through whatever processes you need, and get agreement to run a pilot for a quarter or two. There’s a theme here where we’re gradually committing more as we gain confidence.

At that point, you want to start with several high-value services and engage deeply to make progress. It’s important that they are high-value, critical services, both because those services are probably the most complex and because they pay off faster: when we get them cellularized, there we go, a big chunk of value from doing that service. We also learn that the complexity of the problem is tractable, and we get some ideas about the common infrastructure that we’ll build to help.

At the end of this phase, we’ll attain organizational buy-in again through the management chain to expand to all critical services. This is going to be the longest phase of the project. You can imagine there’s a graph of engineers over time, and this is definitely the fattest part of the graph; it tails off later. At the end of this phase, what we really want is that all services have items in their roadmap or we’ve decided they’re just not worth doing. We start tracking our progress really heavily during this phase. One thing we would do is regularly drain the AZ to see how much we could get out. Then, week over week, we can make a graph showing that we removed more bandwidth, and it goes up, hopefully. Then we can also build the infrastructure. I mentioned we can do some things to help people in the service mesh, but there’s also deployment tooling and job control. This is the part where your shared infrastructure tooling and systems will need to start changing to accommodate.

Finally, in the last phase of this project, we’re going to wind down, where we set things up so the happy path for new services is siloed by AZ. You have to go outside some guardrails in the configuration to make a global service. We make sure that our long-term work for non-compliant services is tracked in roadmaps and OKRs. Then, for any work that doesn’t make the cut, we have incident review processes. As long as we’re tracking these things that we decided not to do, we can always bring them back up in response to a need.

When is Good Enough, Good Enough?

Now I’m going to talk about what I mean by doing things just good enough. At some point, the juice isn’t worth the squeeze. We had data warehousing systems at Slack that were basically just used by data analysts. Maybe there are some batch processes that run, but user queries basically never touched those data warehouses. We don’t really care if they become cellular or not. That’s fine. If we decide it’s really important at some point, sure, but we’re here on behalf of the customers. It’s probably fine to stop before we get to those. Again, as with the rubric before, we want some structured way to think about this. Our two concerns are really, how do we figure out where to stop, and how do we keep what we’ve gained despite not doing literally everything in our environment? We figured out where to stop by working with each service to determine its criticality. chat.postMessage, for example, is the name of the API method that you use to post a message into Slack.

If chat.postMessage needs to touch your service before it can return, you are very critical. You’re really important for us to convert. Then, side by side, we want to have some idea about difficulty. That’s where getting together with each service team and writing this one-pager comes in: what converting would mean, what we think the best solution for that team is, and how much effort it will take to get there. You can use this to gauge difficulty. Then this tuple of criticality and difficulty is a good way that you can interface with engineering management about this project. You can be like, this is important and hard. This is important and easy. This is not important and easy. This is not important and hard, and do this Eisenhower decision matrix thing. We did a lot of these in phase one and two. Then when we got to phase three, which is the very wide phase, everyone was used to looking at and talking about these things. There wasn’t a lot of back and forth about whether we were doing this the right way or not.
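As a sketch, the matrix is simple enough to express directly (the service names and classifications here are illustrative, not Slack’s actual assessments):

```python
# Eisenhower-style sequencing on the (criticality, difficulty) tuple.
def quadrant(critical: bool, hard: bool) -> str:
    if critical and hard:
        return "start now, staff heavily"
    if critical:
        return "quick win, do first"
    if hard:
        return "defer; revisit after incidents"
    return "batch up when convenient"

services = {"webapp": (True, True), "vitess": (True, True),
            "job-queue": (True, False), "data-warehouse": (False, True)}
for name, (critical, hard) in services.items():
    print(f"{name}: {quadrant(critical, hard)}")
```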

Again, incrementality is most important. You should expect some of these services will need to take maybe a year, or more than a year, to really get everything done, because sometimes your most critical services are the most backlogged and the most difficult to maintain. We need to maintain an awareness of where the moon is as we roofshot along quarter to quarter. We made it so that each team’s engineering roadmap actually had compliance with this program as a goal. Quick sidebar: I believe that engineering roadmaps are completely crucial for infrastructure teams in larger companies. If you just have a short document that says, “This is what our service is like now. This is what we think our service will be like in a year, and here are some ways that we are getting there”, it’s just a very powerful tool to communicate outside your team to the rest of the engineering organization. It’s also a good way for team members to understand the value and importance of their own work. I love a roadmap.

Finally, we measured our progress with weekly drains. If you’ll remember, we were worried about the capacity implications of it. We were like, every Friday for a while, around noon Pacific, we would drain the AZ and see how far we got. Then we watched the number go up and up over time. We’d do like 10%, and then we’d be like, can we do 20%? Then keep pulling traffic until we got scared. I think we only broke stuff one time. It was really good because it was a great signal for us to give to the company that we were getting something done. It’s meaningful. It’s the amount of bandwidth that’s left. It is a reasonable stand-in for user requests going through there. We were able to make it move. When it stopped moving, we could have conversations about that, and when we could expect it to move again.
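A sketch of that walk-up (set_drain and error_rate are hypothetical stand-ins for the control plane and monitoring hooks, which the talk doesn’t name):

```python
# Walk a drain up in steps, checking the error rate after each step and
# backing out if it exceeds the SLO. Returns the largest fraction reached,
# the week-over-week number the team graphed.
import time

def walk_up_drain(az, set_drain, error_rate, slo=0.001,
                  steps=(0.10, 0.20, 0.50, 1.00)):
    reached = 0.0
    for fraction in steps:
        set_drain(az, fraction)
        time.sleep(300)              # let traffic shift and caches settle
        if error_rate() > slo:
            set_drain(az, 0.0)       # drains must be easy to undo
            return reached
        reached = fraction
    return reached
```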

Where Does That Leave Us?

This is where we actually ended up. Siloed services are drainable in 60 seconds via the load balancing layer. The Vitess automation reparents the nodes away from the affected AZ as fast as replication permits. We didn’t get all the critical services there 100%, but they’ve all got their roadmaps. One thing that you can use, if you feel like things are getting deprioritized for too long: is it that important if they’re not having outages? Maybe it’s not. Conversely, if they have outages, maybe cellularizing would help. This is a really good thing to include in your incident review process. We’ve built into all our relevant infrastructure systems this default happy path, which is a siloed-per-AZ configuration. We’re doing regular drains. At some point, we were like, we’re just doing these enough now that we don’t need to do them every Friday. Once you’re in the cycle, you start having a good feeling about your capacity planning, and this is something that you can do if there’s an outage. Finally, we got a little bit of savings because we reduced cross-AZ data transfer costs. That was nice.

Do We Actually Use This Thing?

The question that people always ask is, do you really drain? Yes, we do. You get to use it more over time. You can use it for these single-AZ AWS outages, but then you can also use it to help you do deploys. The same database that powers the frontend drains, we just opened up to internal service mesh customers. Then they can roll forward and roll back their services. It actually opens up this new blue-greenish deploy method, where instead of just having blue-green, you have AZ 1, 2, 3, 4.

Then if you want to deploy some software, you just drain one of those AZs, pop the new software on it, undrain to 1%, 10%, 20%, whatever, and step it up that way. You can do that. It can give you a cleaner rollback experience than just popping binaries, like some people do. Siloing is helpful in other ways too. Sometimes you can have a poisonous deploy of a service, where the service itself is ok, but it’s causing another service to crash somehow. The siloing just helps naturally there: you can only endanger your upstream services in your own AZ. In general, we got to a place where if something looks wrong and it’s bounded by an AZ, we reach for the generic mitigation: let’s just try a drain and see if it gets better. Drains are fast. Drains are easy to undo. We don’t have to root cause our problems anymore.
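A sketch of that deploy flow (the function names are hypothetical stand-ins for deploy tooling, not Slack’s actual system):

```python
# AZ-at-a-time deploy: take a cell out of service, update it with no user
# traffic on it, then step traffic back in; rollback is just re-draining.
def deploy_via_drain(az, version, set_drain, deploy, healthy):
    set_drain(az, 1.0)                  # fully drain the cell
    deploy(az, version)                 # flip binaries with no traffic on them
    for fraction in (0.99, 0.90, 0.50, 0.0):   # 1%, 10%, 50%, 100% of traffic
        set_drain(az, fraction)
        if not healthy(az):
            set_drain(az, 1.0)          # rollback: drain it again
            return False
    return True
```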

Lessons Learned

Finally, what do we learn about running infrastructure projects in a decentralized environment? You really have to listen to people and you really have to move gradually and incrementally. You have to remember that every service is operating at a local maximum. The way things are is not because people are foolish, it’s because of the evolutionary pressures that shape the development of your system. Projects like this can actually provide potential energy to help these services escape from their local maximum to something better.

Questions and Answers

Brush: Have you had someone have an outage that this would have helped and they hadn’t completed their roadmap yet? Has that happened yet?

Bethea: Yes. That actually was very satisfying while we were in progress. People would be like, we had this outage, and we’d be like, siloing would help you, actually. Have you considered doing our program?

Participant 1: At my company, we do something very similar to this every week, where we’re draining, it’s not in AWS right now, but we often run into challenges convincing leaders that this is a smart thing to do because sometimes putting strain on the system will have some impact. Did you all have that and how did you all overcome it, or address it?

Bethea: Especially when you first start doing this, people will get anxious. You are removing capacity from a system. You can either do it the inductive or the deductive way: you can pencil it out and be like, we should be able to do this; or you can do what we actually ended up doing, where we would do these drains and walk them up slowly, and back off if things got hot. Also, there is, I think, a compelling argument to be made that if you can’t do it when things are pretty normal, then you really shouldn’t be doing it when things are really bad.

Bethea: Did going to silos impact our continuous deployment? Tremendously, actually, because we ended up redoing the way that we did deploys entirely. We used to have a deployment system that did a lot of stuff, but in the end, it went from maybe 10% to 100%, spread across all AZs. We actually reworked it to fill up an AZ first and then spread across the other AZs on the idea that we could just simply revert that AZ by draining.

Participant 2: Then, as it came up, you would ship the traffic to a new AZ, 1%, and then bring the others up, or you would have old version and new version of this traffic at the same time?

Bethea: I think we were doing a little bit of a hybrid version just for historical reasons, where we would roll that AZ forward by just flipping the binaries. If we needed to revert during the process, we would just pull traffic.




JAT Capital Mgmt LP Sells 72,745 Shares of MongoDB, Inc. (NASDAQ:MDB) – MarketBeat

MMS Founder
MMS RSS


JAT Capital Mgmt LP lessened its stake in shares of MongoDB, Inc. (NASDAQ:MDB) by 84.3% in the 4th quarter, according to its most recent 13F filing with the SEC. The institutional investor owned 13,575 shares of the company’s stock after selling 72,745 shares during the quarter. MongoDB comprises approximately 0.5% of JAT Capital Mgmt LP’s holdings, making the stock its 27th largest position. JAT Capital Mgmt LP’s holdings in MongoDB were worth $3,160,000 as of its most recent SEC filing.

Several other large investors also recently made changes to their positions in MDB. Strategic Investment Solutions Inc. IL bought a new stake in shares of MongoDB in the fourth quarter valued at approximately $29,000. Hilltop National Bank increased its position in MongoDB by 47.2% during the fourth quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after acquiring an additional 42 shares during the period. NCP Inc. bought a new position in MongoDB during the fourth quarter worth $35,000. Versant Capital Management Inc increased its position in MongoDB by 1,100.0% during the fourth quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock worth $42,000 after acquiring an additional 165 shares during the period. Finally, Wilmington Savings Fund Society FSB bought a new position in MongoDB during the third quarter worth $44,000. Institutional investors own 89.29% of the company’s stock.

Analyst Ratings Changes

MDB has been the topic of a number of research reports. Daiwa Capital Markets initiated coverage on shares of MongoDB in a research report on Tuesday, April 1st. They set an “outperform” rating and a $202.00 target price on the stock. Scotiabank reaffirmed a “sector perform” rating and set a $160.00 target price (down previously from $240.00) on shares of MongoDB in a research report on Friday. Loop Capital dropped their target price on shares of MongoDB from $400.00 to $350.00 and set a “buy” rating on the stock in a research report on Monday, March 3rd. Truist Financial lowered their price target on shares of MongoDB from $300.00 to $275.00 and set a “buy” rating on the stock in a report on Monday, March 31st. Finally, Stifel Nicolaus dropped their price objective on shares of MongoDB from $340.00 to $275.00 and set a “buy” rating for the company in a research note on Friday, April 11th. Eight research analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has given a strong buy rating to the company’s stock. Based on data from MarketBeat, the company currently has a consensus rating of “Moderate Buy” and a consensus price target of $294.78.

Get Our Latest Stock Analysis on MDB

Insider Activity at MongoDB

In related news, CEO Dev Ittycheria sold 18,512 shares of the business’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total value of $3,207,389.12. Following the sale, the chief executive officer now directly owns 268,948 shares of the company’s stock, valued at approximately $46,597,930.48. The trade was a 6.44% decrease in their position. The transaction was disclosed in a legal filing with the SEC. Also, insider Cedric Pech sold 1,690 shares of the business’s stock in a transaction on Wednesday, April 2nd. The shares were sold at an average price of $173.26, for a total transaction of $292,809.40. Following the completion of the sale, the insider now directly owns 57,634 shares in the company, valued at $9,985,666.84. This represents a 2.85% decrease in their ownership of the stock. Insiders have sold 39,345 shares of company stock worth $8,485,310 in the last ninety days. Company insiders own 3.60% of the company’s stock.

MongoDB Trading Up 0.1%

Shares of NASDAQ MDB opened at $174.69 on Wednesday. The firm has a market capitalization of $14.18 billion, a PE ratio of -63.76 and a beta of 1.49. MongoDB, Inc. has a 1 year low of $140.78 and a 1 year high of $387.19. The firm’s 50-day moving average is $190.43 and its 200-day moving average is $247.31.

MongoDB (NASDAQ:MDB) last issued its earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing the consensus estimate of $0.64 by ($0.45). The business had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. During the same period last year, the business posted $0.86 EPS. As a group, equities research analysts anticipate that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



Google Unveils Ironwood TPU for AI Inference

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Google has unveiled its seventh-generation Tensor Processing Unit (TPU), Ironwood, at Google Cloud Next 25. Ironwood is Google’s most performant and scalable custom AI accelerator to date and the first TPU designed specifically for inference workloads.

Google emphasizes that Ironwood is designed to power what they call the “age of inference,” marking a shift from responsive AI models to proactive models that generate insights and interpretations. The company states that AI agents will use Ironwood to retrieve and generate data, delivering insights and answers.

A respondent in a Reddit thread on the announcement said:

Google has a huge advantage over OpenAI because it already has the infrastructure to do things like making its own chips. Currently, it looks like Google is running away with the game.

Ironwood scales up to 9,216 liquid-cooled chips, connected with Inter-Chip Interconnect (ICI) networking, and is a key component of Google Cloud’s AI Hypercomputer architecture. Developers can leverage Google’s own Pathways software stack to utilize the combined computing power of tens of thousands of Ironwood TPUs.

The company states, “Ironwood is our most powerful, capable, and energy-efficient TPU yet. And it’s purpose-built to power thinking, inferential AI models at scale.”

Furthermore, the company highlights that Ironwood is designed to manage the computation and communication demands of large language models (LLMs), mixture of experts (MoEs), and advanced reasoning tasks. Ironwood minimizes data movement and latency on-chip and uses a low-latency, high-bandwidth ICI network for coordinated communication at scale.

Ironwood will be available for Google Cloud customers in 256-chip and 9,216-chip configurations. The company claims that a 9,216-chip Ironwood pod delivers more than 24x the compute power of the El Capitan supercomputer: 42.5 exaflops per pod versus El Capitan's 1.7 exaflops. Each Ironwood chip boasts a peak compute of 4,614 TFLOPS.
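Those headline figures are internally consistent. A quick back-of-the-envelope check, using only the numbers quoted in this article (the arithmetic, and the caveat in the comments, are ours):

```python
# Back-of-the-envelope check of the pod-level claim. The per-chip peak
# (4,614 TFLOPS) and pod size (9,216 chips) are Google's figures; the
# arithmetic is ours. Usual caveat: accelerator peak TFLOPS and
# supercomputer benchmark exaflops are generally measured at different
# precisions, so this is a loose comparison.
chips_per_pod = 9_216
tflops_per_chip = 4_614

pod_exaflops = chips_per_pod * tflops_per_chip * 1e12 / 1e18
print(f"Ironwood pod: {pod_exaflops:.1f} exaflops")  # ~42.5

el_capitan_exaflops = 1.7
print(f"vs. El Capitan: {pod_exaflops / el_capitan_exaflops:.0f}x")  # ~25x, i.e. 'more than 24x'
```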

Ironwood also features an enhanced SparseCore, a specialized accelerator for processing ultra-large embeddings, expanding its applicability beyond traditional AI domains to finance and science.

Other key features of Ironwood include:

  • 2x improvement in power efficiency compared to the previous generation, Trillium.
  • 192 GB of high-bandwidth memory (HBM) per chip, 6x that of Trillium.
  • 1.2 TBps bidirectional ICI bandwidth, 1.5x that of Trillium.
  • 7.37 TB/s of HBM bandwidth per chip, 4.5x that of Trillium.

(Source: Google blog post)

Regarding the last feature, a respondent on another Reddit thread commented:

Tera? Terabytes? 7.4 Terabytes? And I’m over here praying that AMD gives us a Strix variant with at least 500GB of bandwidth in the next year or two…

While NVIDIA remains a dominant player in the AI accelerator market, a respondent in another Reddit thread commented:

I don’t think it will affect Nvidia much, but Google is going to be able to serve their AI at much lower cost than the competition because they are more vertically integrated, and that is pretty much already happening.

In addition, in yet another Reddit thread, a respondent commented:

The specs are pretty absurd. Shame Google won’t sell these chips, a lot of large companies need their own hardware, but Google only offers cloud services with the hardware. Feels like this is the future, though, when somebody starts cranking out these kinds of chips for sale.

And finally, Davit tweeted:

Google just revealed Ironwood TPU v7 at Cloud Next, and nobody’s talking about the massive potential here: If Google wanted, they could spin out TPUs as a separate business and become NVIDIA’s biggest competitor overnight.

These chips are that good. The arms race in AI silicon is intensifying, but few recognize how powerful Google’s position actually is. While everyone focuses on NVIDIA’s dominance, Google has quietly built chip infrastructure that could reshape the entire AI hardware market if it decides to go all-in.

Google states that Ironwood provides increased computation power, memory capacity, ICI networking advancements, and reliability. These advancements, combined with improved power efficiency, will enable customers to handle demanding training and serving workloads with high performance and low latency. Google also notes that leading models like Gemini 2.5 and AlphaFold run on TPUs.

The announcement also highlighted that Google DeepMind has been using AI to aid in the design process for TPUs. An AI method called AlphaChip has been used to accelerate and optimize chip design, resulting in what Google describes as “superhuman chip layouts” used in the last three generations of Google’s TPUs.

Earlier, Google reported that AlphaChip had also been used to design other chips across Alphabet, such as Google Axion Processors, and had been adopted by companies like MediaTek to accelerate their chip development. Google believes that AlphaChip has the potential to optimize every stage of the chip design cycle and transform chip design for custom hardware.




Podcast: Achieving Sustainable Mental Peace in Software Engineering with Help from Generative AI

MMS Founder
MMS John Gesimondo

Article originally posted on InfoQ. Visit InfoQ

Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture Podcast. Today, I’m sitting down with John Gesimondo. John, welcome. Thanks for taking the time to talk to us.

John Gesimondo: Yes. Thank you. Glad to be here.

Shane Hastie: Now, John, you were recently a QCon speaker. But before we get into that, tell us a little bit, who’s John?

Introductions [01:09]

John Gesimondo: Sure. Yes. I’m John. I’ve been a software engineer for quite a while now, I guess about 15 years. My undergrad was actually in business. I’m a self-taught engineer. But with enough time and diligence, you can do anything. I’ve been working at Netflix for the last six years, doing various things, mostly internal tools, and this will be relevant later, but I have ADHD and definitely some traits of autism. I’m super keen on software in the tooling space, and then I really enjoy optimizing life, especially thinking about mental health and how that touches on technology.

Shane Hastie: Your talk, Achieving Sustainable Mental Peace at Work, with the twist, using Gen AI… What brought that about?

Using generative AI for mental peace [02:07]

John Gesimondo: Well, organic. I basically just started using generative AI for basically everything in my life. There’s people at work that made fun of me for this. It’s actually hilarious, but I, in the early days, tried to apply Gen AI to everything I could possibly think of just as a pure explore. When things are brand new, it’s nice to know what they’re capable of. I really pushed the limits, and I found quite a lot of sticky areas with a lot of value that were actually in this work adjacent/work support, I guess, space. Maybe you could call it the socio side of sociotechnical. There’s been a lot there that has really helped me out at work, so tried to turn it into a framework and give it as a talk.

Shane Hastie: Let’s dig in first there… You’ve been using that to support the socio side of the sociotechnical systems. Why is mental such a challenge for us in the software engineering space?

Challenges of mental health in software engineering [03:20]

John Gesimondo: Oh, man. Software is just really complicated, the way that it’s actually made. Then, it’s obviously also technically challenging. There’s a lot of needs being managed and a lot of communication and organization. Then, I find that there’s a temporal nature of everything that really messes with it.

If you can manage to get the usual corporate stuff, let’s say, compared to a non-engineering situation in a corporation, it’s already hard. There’s a lot of organizing. You have cross-functional difficulty aligning goals and all this type of stuff. But then, with software, you have this added element of the temporal nature of “why are things the way they are? Who knows actually how this software works? Why did this person make this decision? Oh, they don’t work here anymore”. That creates a whole other aspect of things that makes it impossible to really keep things structured and predictable and on time.

You’ve also got incidents that happen. You have support rotation. You have this need to focus. But then, at the same time, there’s so much work that’s going to interrupt you by the nature of how this whole thing works. Quite a lot of forces going on here and most of them don’t have good answers. That leaves you with difficulty.

Shane Hastie: You said you came up with a framework. What is the framework?

Overview of the Sustainable Mental Peace Framework [04:58]

John Gesimondo: The concept that we’re trying to get to is sustainable mental peace, which I call basically a steadfast calm while you’re in the middle of work chaos.

If we accept that work chaos is inevitable based on the reasons we just gave, then there’s not much you can do to control the environment permanently. Therefore, you have to take matters into your own hands to make sure that you can deal with living in that environment in a peaceful way. That’s where the framework comes in.

The framework is specifically limited to stuff where you can use generative AI since that was the topic of the talk. The way that I framed it is… starting from the most unstable, is basically having tools to recover quickly from emotional difficulty. Then, next up is getting unstuck when you’re stuck. Then, the next is to enhance your planning and communication skills because proper planning and proper proactive communication can save you so much of the interruptions I was just mentioning. Structure often, in this area, gives you a sense of calm. That’s very helpful.

Then, after that, now, we’re in a state that’s more stable. You’d want to try to add more time doing the things that you intrinsically enjoy and attempt to shortcut the things that aren’t really bringing you any personal value, which we now have some tools to do. Then, lastly, try to do some divergent thinking to keep things interesting, and to see opportunities, and to have some fun sometimes.

Shane Hastie: We maybe work through the different stages. It feels to me almost like a Maslow Hierarchy Pyramid. If we start at that base, recovering quickly from emotional difficulties, that just… The stereotypical engineer is considered not to have emotions. But of course, we’re all emotional beings and do respond. How do we recover quickly?

Stage 1: Recovering from emotional difficulty and getting unstuck [07:24]

John Gesimondo: Yes, definitely. I think I can group the first two, so recovering from emotions and overcoming stuck points. I think they’re very similar. They’re both often emotional in nature. When I say recover from emotions, I’m thinking a more acute situation, like some trigger big or small, or sitting in a room with someone that you really don’t like, or something… Things happen. It just… creates these unstable mental states. For being stuck, it’s a milder version, but it’s just as practically important.

How do you do this? In either case, basically, we’re looking at an AI therapy/AI coach model. With the “recover from emotions” piece, I recommend using prompts that actually mention some therapy techniques, so using CBT techniques or using IFS, which is Internal Family Systems, can sometimes help. Especially if the person using this already goes to therapy, this is extra helpful because it’s like having the tools that your therapist may have already gone over with you available to you at any time, and you don’t have to run the whole thing yourself. You’re still having that interaction, which really helps open up.

For overcoming stuck points, it’s a blend. It’s like sometimes you need a coach, which AI… Again, similarly, you can just prompt, in a way, to ask, “Be my coach. Help me through this. Get me back to taking action”. But in other times, it can start the work for you. Sometimes you’re stuck because you have blank page syndrome, for instance. You don’t have to deal with that anymore. You can generate a crappy draft that you hate, but at least you can be the editor now, instead of a person staring at a blank screen. I think those two… You’re looking at using the AI as a coach/therapist.

Shane Hastie: Then, moving into the next level, the planning and communication, getting better at that.

Stage 2: Enhancing planning and communication with AI [09:53]

John Gesimondo: In this case, now you’re looking at, again, a mix. I guess, in this one, you’re looking at actually doing things in the standard way. I think that this one touches on the neurodivergence themes a lot because this is where, at least in my experience, as someone with ADHD, my thoughts are very scattered. Actually, getting them into a structure that other people understand is not a free exercise.

Now, with copilots and such, that exercise can be actually fairly close to free, I would say: dumping a bunch of bullet points and making sure that I covered everything, and asking for a review and to expand the set of points to a proposed audience, like, “If I were this type of person, what would I say? If you were that type of person, what would you say?” Agreeing on the content and then refactoring. Put it in programming terms, the whole thing into exactly the way that the target audience would expect to see it. You can even get that down to quite local. Giving an example and a prompt, in this case, is super helpful for structure, tone, language.

As much as I love creativity, as a neurodivergent person, when it comes to planning and communication, the less creative the end result is, the more buttoned up and structured and follows-what-people-expect it is, the less people will ask you questions and interrupt you later. They’re the reason for these things. At least in my experience, the way I’m using this is as a shim. I call it the neurodivergent planning shim. You’re just taking away the code-switching that I would have to do to write this myself and delegating that work to the AI.
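To make that shim concrete, here is a minimal, hypothetical sketch of the workflow described: dump raw bullet points, ask for a persona-based review, then have the model restructure the result for the target audience. The prompt wording and the send() placeholder are ours, not from the talk or any particular API:

```python
# Hypothetical sketch of the "planning shim": raw notes in, persona-based
# review and restructuring out. Illustrative only.
RAW_NOTES = """
- migrate job scheduler off cron
- blocked on infra team for IAM roles
- rollout plan: shadow mode first, then 10% of traffic
"""

REVIEW_PROMPT = f"""
Here are my rough planning notes:
{RAW_NOTES}
1. If you were a skeptical staff engineer, what questions would you ask?
2. If you were my engineering manager, what is missing?
3. Expand the notes to answer those questions, then restructure the result
   as a one-page plan in the format this audience expects.
"""

def send(prompt: str) -> str:
    """Placeholder for whatever chat-completion client you use."""
    raise NotImplementedError

# draft = send(REVIEW_PROMPT)
```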

Shane Hastie: But it starts, of course, with your knowledge, what you’re providing it.

John Gesimondo: Yes, exactly. But that’s the fun part, at least for me. The back and forth in that middle process, when it’s about agreeing, “Is this the right content?” is really fun. Asking different personas… “What would this person say? What would that person say?” It’s a lot more efficient than actually asking those people, which, of course, we’ll do anyway later. But it’s a nice creative brainstorming at the beginning, and then just speed run me through the structuring part because my brain structure… not a natural fit, I will say.

Stage 3: Maximizing flow and enjoyment in engineering work [12:42]

For the add time in flow bit, this is all about… Each of us have different parts of engineering that we like to do the most. Even when you look within what I’ll call, I guess, the core loop of engineering, of actually writing code and testing it, documenting it, and getting feedback, and all of this stuff, even within that process, there’s a lot of preferences.

Even within writing the code, there are preferences. Some people like to do more theoretical stuff. Other people like to do the structuring and architecture and system design. I think that with the flow aspect here, what you’re hoping to do is spend more time on the parts you really like and less time on the other parts, and especially don’t get interrupted.

Previously, before AI was around, we all had to go to Stack Overflow and read through all kinds of different correct and incorrect answers for our problems. When you have to do that in the middle of a working session, it takes you out of the flow of everything.

Now, with copilots, we often get that information at the snap of a finger. That’s already helpful. But on top of that, I recommend tuning your overall time spent coding to be tuned towards the areas that you really deeply enjoy/that feel immersive for you and do that using any latest frontier tooling that you can find. The cool thing is that this is currently where the frontier moves the fastest, is in the copilot space. This one’s super fun to keep up to date on.

Shane Hastie: And then divergent thinking.

Stage 4: Enabling divergent thinking and creativity [14:35]

John Gesimondo: Yes. Divergent thinking is really interesting. I think this is a strength of neurodivergent people. I mean, there’s a reason why “divergent” is in the name. But, in a sense, it’s interesting because this whole framework is helping neurodivergent people and others. But for neurodivergent people, it helps them be a little more neurotypical for less cost. But in this case, this is the flip. This is like, “Help neurotypicals be more neurodivergent”.

It’s funny. My story is I’m looking for a place to live. I was considering different suburbs or, back to San Francisco, in the city. I just was stuck in… There’s trade-offs everywhere, and nothing’s capturing my attention. Sometimes when I’m in this situation, including at work, the answer is zoom out and get a little more divergent with the thinking. In this case, I asked ChatGPT for some ideas of… I asked it for some housing ideas. It gave me a normal list, and then I asked it, “Get a little more divergent with this list”. It came up with some really funny stuff.

One of them is the tethered air home. This is a living pod suspended from cables between skyscrapers, or cliffs, or large trees. Nice hammock-like structure; could be retractable, and can retract to ground level when needed. Not bad. There’s one that was really funny, which was a distributed home network, so instead of one location, your home is spread across multiple cities, with cloud-connected lockers storing your essentials, minimizing the need for luggage. I don’t think that’s how the cloud works. I don’t think that’s how physical systems work, but it sounds good.

Anyway, it really got me very unstuck. Now, my mind is open to… I don’t know how to explain it. That’s what happens when you do creative thinking. It just got that problem a little bit shaken up into some new directions. You can do that at work. I think especially when you look at career, or you look at people being technical leaders, you want to spend some time thinking about what could be. Sometimes, these crazy ideas turn into… They spark some actual practical ideas. It’s a fun process.

Shane Hastie: As a neurodivergent person, what has your journey been and how has the environment supported or inhibited you?

Personal journey with neurodivergence [17:40]

John Gesimondo: Well, I didn’t know I was neurodivergent until during COVID. There’s, I guess, two eras of this journey: unaware and the aware stage. I think the only time that being unaware was blissful was certain parts of school. I actually am a very, very curious person so whenever I was interested in the topic that was in that class, I did quite well. It was miserable when I wasn’t interested, but that didn’t happen that often, luckily for me. That was I will call the mostly blissful unaware period.

Where it was no longer blissful is when I started working. I think the further I’ve gone in my career, the more difficult it’s been, to be honest. As a junior who’s just expected to learn all the time, it’s school-like. It’s somewhat structured, depending on the environment. My environment was very structured, so that was great.

During mid-career, there’s a lot of learning to be done still, and a little bit more confidence, I think, in your abilities. You start to get to know your strengths, something that helps. There’s this linearish feeling/sense of learning. It’s clear that you’re getting smarter and better at this job, even if you just look at a maybe two-week basis. Even within a sprint, you’re like, “Oh, wow. That was hard. Now, I could do it again much faster”. It’s great.

Then I think it gets a little difficult after that. As a senior, it’s like the structure lines start to blur. I think that’s especially true at Netflix, but I’m sure it’s true of many places. But the structure starts to decrease, and, as you get even further past that, you’re expected to add some structure for the less tenured folks.

This is where I think the burden of planning and communicating and structuring things in ways that other people can understand starts to bite. I don’t have an answer for this one, but managing the maker schedule and the manager schedule getting mixed in your calendar is brutal when you have ADHD.

I think this is where the rigor of my systems has had to increase a lot lately because of pursuing being a technical leader. I mean, I think, for a lot of people… They just wouldn’t. But I think, in good and bad ways, my achiever side is stronger than my neurodivergence side is suffering. That means build the tools, learn the processes, add the rigor to the systems, and rely on AI so that I can have my sustainable mental peace and be able to achieve what I’m looking to achieve.

Shane Hastie: You said you got a formal recognition during COVID. What difference has that made for you?

The impact of formal neurodivergence recognition [21:03]

John Gesimondo: Oh. Yes. It helps and it hurts. I know there’s a lot of mixed feelings about finding out that you’re neurodivergent later in life. But I think, in a practical way, for me, it’s mostly been beneficial because you really have to understand that you experience things differently than the people around you to get to some point that you can do something about it.

If you don’t know that, then it feels like the world is gaslighting you all the time. Someone proposes a system, like, “Oh, just use a task list to organize your tasks”, and that system doesn’t quite work for you… Well, if you don’t know that you have ADHD, then it just makes you feel like, “Oh, something’s wrong with me, or I’m lazy, or I didn’t try hard enough”.

But if you do know you have ADHD, then you know that you should check… “Is this an ADHD problem, or is it a me problem?” You’ll find out, in this example, that it’s an ADHD problem. You just find an alternative, or you give yourself extra care and patience on the journey of figuring out what works; usually finding a workaround or finding support tends to work better. Definitely need patience either way.

I often see this analogy with glasses. It’s like if you don’t know that your vision is bad, and then someone’s talking to you about something that’s a little too far away, and you can’t see it, you don’t have anything to work with there. It’s a crazy feeling, like, “What are they talking about? Should I say I don’t understand what they’re talking about? Because they sound pretty confident about it. Everyone else knows what they’re talking about”. Then, it’s like, “Okay, now I wear glasses. Okay, great. Well, now I know that if I’m not wearing my glasses, then that’s the problem. I can’t see the thing because I don’t have my glasses on”.

It just gives you this sense of certainty, even though that doesn’t fix the problem, but it does make you feel better about it. You understand the problem. Then, you can start the road, I think, from there, of looking for solutions. I shouldn’t even say solutions. I think that’s a little unfair. There’s no solution per se, and there’s no… We don’t have to go fully down that road, but no need to solve the problem. But you do need to adapt if you want to work with a majority neurotypical people at work, which is usually the case.

Shane Hastie: Thank you for that. Good insights there. If we swing back around to flow, you mentioned vibe coding when we were chatting earlier. What’s it like, and how do you get into that? In the swing of that today?

Vibe coding and the future of AI in development [24:02]

John Gesimondo: Yes. Vibe coding is so fun, sometimes. Vibe coding, we’ll say, is… For those that are not familiar, I think most engineers know about copilots, like GitHub Copilot, for instance, which has been around for a while. But the latest frontier of this is to have an agent-based copilot, so you’re giving it one instruction.

It can go do an entire loop of work of many different back-and-forth prompts, and you can either approve each step, “Oh, I want to write to this file”, accept, “I want to write a test now”, accept, or you can just click the auto approve, and you can just sit back and watch and hope for the best. This is what we call vibe coding in the AI developer community. It’s a fun time.
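As a rough schematic of that loop, here is an illustrative sketch; every name in it is a stand-in, not any real copilot’s API:

```python
# Schematic of the agent loop described above: the copilot proposes the
# next step (edit a file, add a test), and you either approve each step
# or enable auto-approve and watch. All names are illustrative stand-ins.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    description: str
    def apply(self) -> None:
        print(f"applying: {self.description}")

def propose_next_step(task: str, done: list[Step]) -> Optional[Step]:
    """Stand-in for the model deciding what to do next."""
    plan = [Step("write src/handler.py"), Step("add a unit test"), Step("run the tests")]
    return plan[len(done)] if len(done) < len(plan) else None

def vibe_code(task: str, auto_approve: bool = False) -> None:
    done: list[Step] = []
    while (step := propose_next_step(task, done)) is not None:
        if auto_approve or input(f"{step.description}? [y/n] ") == "y":
            step.apply()
            done.append(step)
        else:
            break  # rejected a step; stop and take over manually

vibe_code("build a CLI for our cache", auto_approve=True)
```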

For those listening that haven’t tried it, even managers, please try it because it is probably one of the most exciting yet jarring experiences I’ve had in a while, and I hear that from others as well. The jarring part is just… It’s the closest you get to that, “Are we still going to be having jobs in the next year?” But it’s just… I don’t know. It’s a mind-blowing experience.

It really flips everything that you think you know about what’s difficult, what’s easy, how I’m going to do my work from now on. When should I use this? What is it good at? But let me tell you, on the other side of it, it’s so hard to abstract any actual lessons from these experiences. What is it good at? You just try, and you find out. You’re either really delighted, or you’re basically trying to pair with an under-educated intern. One of those two, and you don’t know until you try.

Shane Hastie: Lean into the unknown and experience the flow.

John Gesimondo: Totally, totally. I see a lot of people… I guess we could combine this with divergent thinking as well because that allows you… If you’re coming up with what you think are crazy ideas, and they’re in the software space, it is easier than ever to make a prototype through vibe coding, especially that, because I think the big open question is, “What kind of code quality are we making for the maintainers of all this stuff?” But if you want to just play around and make an inspirational prototype of something, it’s never been easier. Just open up your favorite vibe coding copilot and pray.

Shane Hastie: The tester in me shudders.

John Gesimondo: Absolutely. Absolutely.

Shane Hastie: John, some great insights here, and some really interesting stuff. If people do want to continue the conversation, where can they find you?

John Gesimondo: I think LinkedIn would probably be best. My username on there is jmondo. I’m occasionally active on Twitter as well, or X, and that’s also @jmondo. I’d go with that.

Shane Hastie: Thank you so much.

John Gesimondo: Thank you.





MongoDB Sees Unusually Large Options Volume (NASDAQ:MDB) – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) was the target of some unusual options trading activity on Wednesday. Stock traders bought 23,831 put options on the company. This represents an increase of approximately 2,157% compared to the typical volume of 1,056 put options.

MongoDB Price Performance

Shares of NASDAQ:MDB opened at $172.19 on Friday. MongoDB has a 1-year low of $140.78 and a 1-year high of $380.94. The firm has a fifty day moving average price of $186.51 and a 200 day moving average price of $245.99. The stock has a market cap of $13.98 billion, a P/E ratio of -62.84 and a beta of 1.49.

MongoDB (NASDAQ:MDB) last announced its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 EPS for the quarter, missing the consensus estimate of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The firm had revenue of $548.40 million for the quarter, compared to analysts’ expectations of $519.65 million. During the same period last year, the firm earned $0.86 EPS. Sell-side analysts expect that MongoDB will post -1.78 EPS for the current fiscal year.

Wall Street Analysts Forecast Growth


MDB has been the topic of several recent research reports. Bank of America cut their price objective on MongoDB from $420.00 to $286.00 and set a “buy” rating on the stock in a report on Thursday, March 6th. Rosenblatt Securities reaffirmed a “buy” rating and set a $350.00 price objective on shares of MongoDB in a research report on Tuesday, March 4th. Scotiabank reissued a “sector perform” rating and issued a $160.00 price objective (down previously from $240.00) on shares of MongoDB in a research note on Friday, April 25th. Piper Sandler lowered their price objective on MongoDB from $280.00 to $200.00 and set an “overweight” rating for the company in a research note on Wednesday, April 23rd. Finally, China Renaissance started coverage on shares of MongoDB in a research report on Tuesday, January 21st. They set a “buy” rating and a $351.00 target price on the stock. Eight investment analysts have rated the stock with a hold rating, twenty-four have assigned a buy rating and one has issued a strong buy rating to the company’s stock. Based on data from MarketBeat.com, the company currently has an average rating of “Moderate Buy” and a consensus price target of $294.78.

Read Our Latest Analysis on MongoDB

Insider Activity at MongoDB

In related news, CFO Srdjan Tanjga sold 525 shares of the business’s stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total value of $90,961.50. Following the completion of the sale, the chief financial officer now directly owns 6,406 shares in the company, valued at $1,109,903.56. The trade was a 7.57 % decrease in their position. The transaction was disclosed in a document filed with the SEC, which is available through the SEC website. Also, CEO Dev Ittycheria sold 18,512 shares of MongoDB stock in a transaction dated Wednesday, April 2nd. The stock was sold at an average price of $173.26, for a total transaction of $3,207,389.12. Following the completion of the sale, the chief executive officer now owns 268,948 shares of the company’s stock, valued at $46,597,930.48. This trade represents a 6.44 % decrease in their position. The disclosure for this sale can be found here. Insiders sold a total of 39,345 shares of company stock worth $8,485,310 in the last quarter. Corporate insiders own 3.60% of the company’s stock.

Institutional Investors Weigh In On MongoDB

Several hedge funds have recently modified their holdings of the stock. Strategic Investment Solutions Inc. IL purchased a new position in shares of MongoDB in the 4th quarter valued at $29,000. Hilltop National Bank lifted its holdings in shares of MongoDB by 47.2% in the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after purchasing an additional 42 shares during the last quarter. Cloud Capital Management LLC bought a new stake in shares of MongoDB during the first quarter valued at approximately $25,000. NCP Inc. bought a new position in MongoDB in the 4th quarter valued at about $35,000. Finally, Wilmington Savings Fund Society FSB acquired a new stake in MongoDB in the third quarter worth approximately $44,000. 89.29% of the stock is owned by institutional investors and hedge funds.

About MongoDB


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news



MongoDB, Inc. (NASDAQ:MDB) Given Consensus Recommendation of “Moderate Buy” by Analysts

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Shares of MongoDB, Inc. (NASDAQ:MDB) have been given an average recommendation of “Moderate Buy” by the thirty-three research firms that are currently covering the company, Marketbeat Ratings reports. Eight equities research analysts have rated the stock with a hold recommendation, twenty-four have given a buy recommendation and one has issued a strong buy recommendation on the company. The average twelve-month target price among analysts that have updated their coverage on the stock in the last year is $294.78.

A number of research analysts have weighed in on the company. Rosenblatt Securities reiterated a “buy” rating and set a $350.00 target price on shares of MongoDB in a report on Tuesday, March 4th. Truist Financial decreased their price objective on MongoDB from $300.00 to $275.00 and set a “buy” rating for the company in a report on Monday, March 31st. Monness Crespi & Hardt upgraded shares of MongoDB from a “sell” rating to a “neutral” rating in a report on Monday, March 3rd. Scotiabank reiterated a “sector perform” rating and issued a $160.00 price target (down from $240.00) on shares of MongoDB in a research note on Friday. Finally, KeyCorp lowered shares of MongoDB from a “strong-buy” rating to a “hold” rating in a research note on Wednesday, March 5th.

Check Out Our Latest Report on MDB

Insider Buying and Selling

In other news, Director Dwight A. Merriman sold 3,000 shares of the business’s stock in a transaction on Monday, February 3rd. The shares were sold at an average price of $266.00, for a total transaction of $798,000.00. Following the completion of the sale, the director now owns 1,113,006 shares of the company’s stock, valued at $296,059,596. The trade was a 0.27 % decrease in their position. The sale was disclosed in a legal filing with the SEC, which is available at this link. Also, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction on Wednesday, April 2nd. The stock was sold at an average price of $173.25, for a total value of $52,148.25. Following the completion of the transaction, the chief accounting officer now directly owns 14,598 shares in the company, valued at approximately $2,529,103.50. This represents a 2.02 % decrease in their ownership of the stock. The disclosure for this sale can be found here. Insiders sold 39,345 shares of company stock valued at $8,485,310 over the last 90 days. Company insiders own 3.60% of the company’s stock.

Institutional Trading of MongoDB

Hedge funds and other institutional investors have recently bought and sold shares of the stock. Cloud Capital Management LLC bought a new stake in MongoDB during the first quarter valued at about $25,000. Strategic Investment Solutions Inc. IL purchased a new stake in shares of MongoDB during the fourth quarter worth about $29,000. Hilltop National Bank raised its stake in MongoDB by 47.2% during the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock valued at $30,000 after purchasing an additional 42 shares during the period. NCP Inc. purchased a new position in MongoDB in the 4th quarter worth approximately $35,000. Finally, Versant Capital Management Inc boosted its stake in MongoDB by 1,100.0% in the 4th quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock worth $42,000 after purchasing an additional 165 shares during the period. Hedge funds and other institutional investors own 89.29% of the company’s stock.

MongoDB Price Performance

NASDAQ MDB opened at $174.69 on Wednesday. The business’s 50-day moving average is $190.43 and its 200 day moving average is $247.31. The stock has a market capitalization of $14.18 billion, a PE ratio of -63.76 and a beta of 1.49. MongoDB has a 1 year low of $140.78 and a 1 year high of $387.19.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings data on Wednesday, March 5th. The company reported $0.19 earnings per share for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $548.40 million during the quarter, compared to the consensus estimate of $519.65 million. During the same quarter in the prior year, the business posted $0.86 earnings per share. As a group, equities research analysts anticipate that MongoDB will post -1.78 earnings per share for the current year.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Analyst Recommendations for MongoDB (NASDAQ:MDB)



Article originally posted on mongodb google news. Visit mongodb google news



MongoDB Target of Unusually High Options Trading (NASDAQ:MDB) – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB, Inc. (NASDAQ:MDB) saw unusually large options trading on Wednesday. Stock investors bought 36,130 call options on the stock. This is an increase of approximately 2,077% compared to the average daily volume of 1,660 call options.

Insider Transactions at MongoDB

In other news, CAO Thomas Bull sold 301 shares of the firm’s stock in a transaction that occurred on Wednesday, April 2nd. The shares were sold at an average price of $173.25, for a total value of $52,148.25. Following the transaction, the chief accounting officer now owns 14,598 shares in the company, valued at approximately $2,529,103.50. This trade represents a 2.02 % decrease in their ownership of the stock. The sale was disclosed in a document filed with the Securities & Exchange Commission, which is accessible through this hyperlink. Also, Director Dwight A. Merriman sold 3,000 shares of the stock in a transaction on Monday, February 3rd. The shares were sold at an average price of $266.00, for a total transaction of $798,000.00. Following the sale, the director now directly owns 1,113,006 shares of the company’s stock, valued at $296,059,596. The trade was a 0.27 % decrease in their position. The disclosure for this sale can be found here. Over the last 90 days, insiders sold 39,345 shares of company stock valued at $8,485,310. 3.60% of the stock is owned by corporate insiders.

Institutional Investors Weigh In On MongoDB

A number of institutional investors have recently bought and sold shares of MDB. Cloud Capital Management LLC purchased a new stake in MongoDB in the 1st quarter worth $25,000. Strategic Investment Solutions Inc. IL purchased a new stake in MongoDB in the 4th quarter worth $29,000. Hilltop National Bank increased its stake in MongoDB by 47.2% in the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock worth $30,000 after buying an additional 42 shares in the last quarter. NCP Inc. purchased a new position in shares of MongoDB during the fourth quarter valued at about $35,000. Finally, Versant Capital Management Inc grew its position in shares of MongoDB by 1,100.0% during the fourth quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock valued at $42,000 after purchasing an additional 165 shares in the last quarter. Hedge funds and other institutional investors own 89.29% of the company’s stock.

MongoDB Stock Performance


Shares of MongoDB stock opened at $172.19 on Friday. MongoDB has a fifty-two week low of $140.78 and a fifty-two week high of $380.94. The company has a market capitalization of $13.98 billion, a P/E ratio of -62.84 and a beta of 1.49. The company has a fifty day moving average of $186.51 and a 200 day moving average of $245.99.

MongoDB (NASDAQ:MDB) last issued its quarterly earnings results on Wednesday, March 5th. The company reported $0.19 earnings per share (EPS) for the quarter, missing analysts’ consensus estimates of $0.64 by ($0.45). MongoDB had a negative return on equity of 12.22% and a negative net margin of 10.46%. The business had revenue of $548.40 million during the quarter, compared to analysts’ expectations of $519.65 million. During the same quarter in the previous year, the company posted $0.86 EPS. Analysts forecast that MongoDB will post -1.78 EPS for the current year.

Analyst Upgrades and Downgrades

Several research firms have recently commented on MDB. Royal Bank of Canada dropped their target price on shares of MongoDB from $400.00 to $320.00 and set an “outperform” rating for the company in a research report on Thursday, March 6th. Rosenblatt Securities reaffirmed a “buy” rating and issued a $350.00 target price on shares of MongoDB in a research report on Tuesday, March 4th. Truist Financial dropped their target price on shares of MongoDB from $300.00 to $275.00 and set a “buy” rating for the company in a research report on Monday, March 31st. Macquarie lowered their price target on shares of MongoDB from $300.00 to $215.00 and set a “neutral” rating for the company in a research report on Friday, March 7th. Finally, China Renaissance started coverage on shares of MongoDB in a research report on Tuesday, January 21st. They set a “buy” rating and a $351.00 price target for the company. Eight equities research analysts have rated the stock with a hold rating, twenty-four have issued a buy rating and one has issued a strong buy rating to the company. Based on data from MarketBeat, the company currently has a consensus rating of “Moderate Buy” and an average target price of $294.78.

Get Our Latest Stock Analysis on MongoDB

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Article originally posted on mongodb google news. Visit mongodb google news



Presentation: Moving Your Bugs Forward in Time: Language Trends That Help You Catch Your Bugs at Build Time Instead of Run Time

MMS Founder
MMS Chris Price

Article originally posted on InfoQ. Visit InfoQ

Transcript

Price: I’m going to be talking about moving your bugs forward in time. This is the topic that I’ve been thinking about on and off for many years. Before we get into the meat of the talk, how many folks are up to date on the Marvel Cinematic Universe? Those of you who are not, no problem. I’m going to start with a little story related to it. In the recent movies and TV shows, they’ve been building on this concept of the multiverse, where there are all these different parallel universes that have different timelines from one another. They differ by small details along the timelines. There’s one show in particular called Loki. He’s kind of a hero/villain. In his show, there are all these different timelines where the difference in each timeline is like Loki is slightly different in each one. In one of them, for example, Loki is an alligator, which, as you might imagine, leads to all sorts of shenanigans.

In that show, there’s this concept of the sacred timeline. There’s the one timeline that they’re trying to keep everything on line with. All of the other timelines somehow diverge into some weird apocalyptic situation, so they’re working to try to keep everything on this sacred timeline. I’m going to talk about what a bug might look like on this sacred timeline. We start off with, a developer commits a bug. We don’t like for this to happen, but it’s inevitable. It happens to all of us. What’s important is what happens after that. This is a little sample bit of toy Python code where we’ve got a function called divide_by_four. It takes in an argument.

Then it just returns that argument divided by four. Somewhere else in our codebase, some well-meaning developer creates a variable that is actually a string variable, and they try to call this function and pass it as the argument. What happens in Python is that’s a runtime error where you get this error that says, unsupported operand type for the divide by operator. On the sacred timeline, we’ve got something in our CI that catches this after that commit goes in. We may have some static analysis tool that we’re running. We may have test coverage that exercises that line of code. The important thing is our CI catches it, and that prevents us from shipping this bug to production. What happens next? The developer fixes the bug.

Then the CI passes, and they’re able to successfully ship that feature to prod. This is a pretty short, pretty simple to reason about timeline. The cost of that bug was basically like one engineer, like one hour of their time, something on that order. Fixing the bug probably actually took less than an hour, but dealing with monitoring the CI, monitoring the deployment, maybe it takes an hour of their time. Still not a catastrophic expense for our business.

Now we’re going to look at that bug on an alternate timeline. We’ll refer to this as the alligator Loki timeline. In this timeline, the developer commits the bug. For whatever reason, we’re not running the static analysis tool or we don’t have test coverage, and the bug does not get caught by our CI. Then we have our continuous delivery pipeline. It goes ahead and deploys this bug to production. Say we’re a regional service that deploys to regions one at a time to reduce blast radius, so we deploy this bug to U.S.-west-1. Then, for whatever reason, this code path where the bug exists isn’t something that gets frequently exercised by all of our users, so we don’t notice it. Maybe a day passes. Then the time passes. The bug ends up getting deployed to U.S.-east. Some more time passes, we still haven’t noticed there’s a bug. Deploys to Europe. Some more time passes.

Then alligator Loki eats the original developer, or maybe something more realistic happens, like they transfer to another team, or get a promotion, or whatever. That developer is not around anymore. Our bug keeps going through the pipeline and deploys to Asia. Now we have this big problem. It turns out that in that region, there is a customer who uses that code path that we didn’t catch in the earlier regions when we deployed. Now we’ve gotten this alert from this very important customer that they’re experiencing an outage and we’ve got to do something about it. Our operator gets paged. The manager gets paged because the operator doesn’t immediately know what’s going on. Maybe some more engineers get added to a call to try to address the situation. They start going through the version control history to figure out where this bug might have come in. They identify the bad commit, so now they know where the bug came in, but this has been days.

Several other commits have probably come in since then, and now they have to spend some time thinking about whether or not it’s safe to roll back to the commit prior to that, or whether that’s going to just cause more problems. They spend some time talking about that, decide if the rollback is safe. Then, they decide it’s safe. They do the rollback in that one region, and then they confirm that that customer’s impact was remediated. That’s great. We’re taking a step in the right direction. Now we have to deal with all those other regions that we rolled it out to. Got to do rollbacks in those as well. Depending on how automated our situation is, that may be a lot of work. Then this could just keep going for a long time, but I’m going to stop here.

When we think about the cost of this bug, compared to the one on the sacred timeline, the first and most important thing is there was a visible customer outage. Depending on how big your company is and how important that customer was, that can be a catastrophic impact for your business. We also spent time and money on the on-call being engaged, the manager being engaged, additional engineers being engaged, executing all these rollbacks however much time that ended up taking. Re-engineer the feature. Now we have to assign somebody new to go figure out what that original developer was trying to achieve, redo the work in a safe way, get it fixed. They’ve also got to make sure that whatever other commits got rolled back in that process, that we figure out how to get those reintroduced safely as well.

Then the one that we don’t talk about enough is opportunity cost. Every person who was involved in this event could have spent that time on something else that was more valuable to your business, working on other features, whatever it may be. When we compare the cost of these two timelines, the first timeline looks so quaint in comparison. It looks so simple. The cost was really not that big of a deal. On the second timeline, it bubbled into this big giant mess that sucked up a whole bunch of people’s time and potentially cost us a customer. The cost is just wildly different between the two. We want to really avoid this alligator timeline. What’s the difference between those two timelines? The main difference is that in the sacred timeline, we caught that bug at build time. In the alligator timeline, we caught it at runtime. That subtle difference is the key branching factor that ends up determining where you end up between these two scenarios.

Background

That’s what my talk’s going to be about, when I say moving your bugs forward in time. I’m talking about moving them from runtime to build time. Thankfully, I think that a lot of modern programming languages have been building more features in to the language to help make sure that you can catch these bugs earlier. That’s what I want to talk about. My name is Chris Price. I am a software engineering manager/software engineer at Momento. We’re a serverless caching and messaging company. Previous to that, I worked at AWS with a lot of other folks that are at Momento now. I worked on video streaming services and some of us worked at DynamoDB. Before that, I worked at Puppet doing infrastructure as code.

Maintainability

Then zooming out before I get into the weeds on this, this phenomenon I’m talking about, moving bugs from runtime to build time, is really a subset of maintainability. As I’ve progressed through my career as a software engineer, I’ve really found that maintainability is one of the most important things that you can strive for, one of the most important skills that you can have as a software engineer.

When I first got started straight out of college, I thought that the only important thing about my job was how quickly I could produce code, how fast can I get a feature out the door, how many features can I ship and how quickly. As I got more experience in the industry and worked on larger codebases with more diverse teammates, what I realized is that that’s not really the most important skill for a software engineer. It’s way more important to think about what your code can do tomorrow and how easy it’s going to be for your teammates and your future teammates that you haven’t even met yet to be able to understand and modify and have confidence in their changes that they’re making to your code. That’s going to be the central theme of this talk.

Content, and Language Trends

These are the six specific language features that I’m going to dive into. First, we’ll talk about static types and null safety. Then we’re going to talk about immutable data and persistent collections. Then we’ll wrap up by talking about errors as return types, and exhaustive pattern matching. Some of the languages that have influenced the points that I’ll be making in this talk. I spent a lot of time working in Clojure a while back, and that is where I really got the strong sense for how valuable it is to use immutable data structures, how much that improves the maintainability of your code. Rust is one of the places where I really got used to doing a lot of pattern matching statements. Go is the first language that I worked in that really espoused this pattern of treating errors as return types rather than exceptions.

Then, Kotlin is a language that I really love because I feel like it takes a lot of these ideas that come from some of these more functional programming languages, and it makes them really approachable and really accessible in a language that runs on the JVM. You can adopt it in your Java codebase without boiling the ocean. You can ease your way into some of these patterns without having to switch out of a completely object-oriented mindset overnight. It’s a really awesome, approachable language. Two engineers have had a lot of influence on my thinking, Rich Hickey, the creator of Clojure, Martin Odersky, the creator of Scala.

If you get a chance to watch any talks that these gentlemen have given in the past, I highly recommend them. They’re always really informative, and they’ve been really foundational for me. I also highly recommend if you can find a way to buy yourself some time to do a side project in a functional programming language. The time that I spent writing Clojure, I think, was more formative for me and improved my skills as an engineer more than any other time throughout my whole career, even though I haven’t written a line of Clojure code in quite some time now.

Static Types, Even in Dynamic Languages

We’ll start off with static types, and I’m saying even in dynamic languages. I realize that that may be controversial to some folks. We’re going to go back to this bug that we started off with on the sacred timeline and the alligator timeline, where we passed the wrong data type into this Python function. A lot of times when I try to talk to people about opting into static typing in some of these dynamic languages, I hear responses like this, “I can build things faster with dynamic types, and I can spend my time thinking about my business logic rather than having to battle with this complicated type system”. Or, another thing I hear is, “I can avoid those runtime type errors that you’re talking about as long as I have good test coverage that exercises all the code”. I used to believe these two things, and they’re still definitely very reasonable opinions to have, but I’ve drifted away from these.

Working at AWS was probably the place where I really started to drift away from these. Inside of AWS, there’s a lot of language and a lot of shared vocabulary that gets used to try to give people a shared context about how you’re thinking about your work. One of the ones that really stuck with me was this one, “Good intentions don’t work, mechanisms do”. This is a Jeff Bezos quote, but it’s really widely spread through a lot of AWS blogs and other literature. Mechanisms here just means some kind of automated process that takes a little bit of the error-prone decision-making stuff out of the hands of a human and makes sure that the thing just happens correctly. It takes away your reliance on the good intentions of engineers. That’s going to be another key theme of this talk is that a lot of these types of bugs that we’re talking about, they come to play when you have something in your codebase that relies on the good intentions of your engineers to stick to the best pattern.

We’ve got these beloved dynamic languages like Clojure, JavaScript, Ruby, Python. My claim is that if you opt into the static type systems in these languages, you just completely avoid shipping that class of bug to production, no ifs, ands, or buts about it. That particular bug that we started with on the sacred timeline and the alligator timeline just goes away. What’s really powerful about it is you’re taking away this reliance on the good intentions of your engineers. You may have best practices established in your engineering organization that whenever you’re using a dynamic language, you better make sure you have thorough test coverage that’s going to prevent you from having one of these kinds of bugs go to production, but you’re relying on the engineers to adhere to that best practice.

Then you hire new people to your team, and they don’t know the best practices yet, and they’re prone to making mistakes sometimes. Putting that power in the hands of the compiler instead of the humans just eliminates that class of bugs. That doesn’t mean that we have to abandon our favorite dynamic languages. Pretty much all of these languages have added opt-in tools that you can use to get static analysis and static typing. Python has mypy. JavaScript, obviously TypeScript has become much more popular over the last five years or so. Ruby has a system called RBS. Clojure has several things, including Typed Clojure. Whenever you opt into one of these, you can usually do it pretty gradually. You don’t have to boil the ocean with your codebase. It really just boils down to adding a few little type hints to the method signatures. That little action changes this bug from a runtime bug to a build-time bug, where mypy is going to catch this up front and say you can’t pass a string to this function. That allows us to avoid that alligator timeline.
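To make that concrete, here’s a minimal Python sketch of that exact workflow. The function and values are made up for illustration, but the mypy behavior is real:

```python
def apply_discount(price: float, percent_off: int) -> float:
    # A hypothetical function. The annotations are the "few little type
    # hints" that turn this from a runtime bug into a build-time bug.
    return price * (1 - percent_off / 100)

# Running `mypy` over this file fails the build with something like:
#   error: Argument 2 to "apply_discount" has incompatible type "str";
#   expected "int"
total = apply_discount(19.99, "15")
```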

Null Safety

Second one I’ll talk about is null safety. You’ve probably all heard the phrase about this being the billion-dollar mistake in programming. If you’ve written any Java, you’re probably really familiar with this pattern where every time you write a new function, there’s 15 lines of boilerplate checking all the arguments for nulls up front. Same thing in C#. These are, again, relying on good intentions. The first thing is you’re relying on your developers to remember to put all those null checks into place. Then, even worse, if they do put the null checks into place, it’s still a runtime error that’s getting thrown, so you’re still subject to the same kind of bugs that led us to the alligator timeline. A lot of the newer languages like Kotlin have started almost taking away support for assigning nulls to normally typed variables. In Kotlin, if you declare a variable as type String, you just can’t assign a null to it. That won’t compile.

If you know that you need it to accept null, then you can put this special question mark operator on the type definition, and that allows you to assign a null to it. Now once you’ve done that, you can no longer call the normal String methods directly on that object. The compiler will fail right there. Instead, the compiler will enforce that you’ve either done an if-else and handled the null case, or you can use these special question mark operators to say that you’re willing to just tolerate passing the null along. In either case, the compiler has made you make an explicit decision up front about what you’re going to do in case it’s null, rather than you essentially finding out about this bug at runtime. Rust is another language where there is no null.

In Rust, the closest thing you have to null is this Option type. Any Option in Rust is either an instance of None or an instance of Some. This is similar to Optional in Java, but in Rust, it’s much more of a first-class concept. In this code here, you can see I declared this function called foo, and I said its argument is a String. I cannot call that function and pass a None in. That’s a compile-time error. Bar, I said its argument is an Option of String. I can pass a None in or I can pass a Some in, but again I’ve had to be explicit about it and make the decision up front. Compile-time null safety: most languages have some support for this these days.

The languages that have been around the longest like Python, C#, Java, those languages have to deal with a lot of backward compatibility concerns. They can’t just flip a switch and adopt this behavior. In those languages, you’ll probably have to work a little bit harder to figure out how to configure your build tools to disallow nulls, but they all have some support for it. An experiment that I suggest is just writing an intentional bug where you pass a null to something that you know should not accept a null, and then play with your build tool configuration until it catches that at build time rather than allowing it to possibly happen at runtime.
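Here’s roughly what that experiment looks like in Python with mypy; the functions are invented for illustration, but this is the class of error mypy reports out of the box:

```python
from typing import Optional

def shout(message: str) -> str:
    return message.upper()

def maybe_shout(message: Optional[str]) -> str:
    # Like Kotlin's `String?`, Optional forces an explicit decision:
    # handle the None case before calling any str methods.
    if message is None:
        return ""
    return message.upper()

# The intentional bug: mypy catches this at build time with something like:
#   error: Argument 1 to "shout" has incompatible type "None"; expected "str"
shout(None)
```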

Immutable Variables, and Classes

Now we’ll move on to number three, which is immutable variables and classes. There are very few things that I’ve worked with in my career that I feel improve the maintainability of my code as much as leaning into immutable variables as much as humanly possible. The main reason for this is that they dramatically reduce the amount of information that you need to keep in your brain when you’re reading a piece of code in order to reason about it and make assertions about it. As an example of that, here’s some Java code. I’ve got this function called doSomething that takes in a foo as an argument. Foo is just a regular POJO in this case. Then it calls doSomethingElse and passes that foo along. Now here’s some calling code that appears in some other file where I construct an instance of the foo, and then I pass it to that doSomething function. Then imagine we have maybe 100 lines of code right here, or maybe even more than that.

Then we eventually get to this line of code where we print out foo. If I’m an engineer working on a feature in this codebase and the change that I want to make is somewhere around this line that’s doing the print statement, what can I assert about the state of my foo at this point in the code? Were there any statements in between those two that might have modified my foo? It’s certainly possible, so I’m going to have to read all that code to find out. Was my foo passed by reference to any functions that might’ve mutated it? Yes, it was passed to doSomething, and that passed it along to doSomethingElse. Does that mean that I need to go examine the source code of all of those functions in order to be able to reason about the state that this variable is going to be in when I get to this line of code? The answer to that is basically yes. Without knowing what’s happening in every one of those pieces of code, I have no idea whether this variable got mutated in between those two points in time.

Then the situation gets infinitely more difficult if you have concurrency in your program. If this doSomethingElse function is potentially passing that reference to some pool of background threads that may be doing work in the background, then you can imagine a scenario where I add another print line here, just two print lines in a row printing this variable out twice. I can’t even assert that it’s going to have the same value in between those two print statements, because some background thread might’ve changed it in between the two.

Again, I have to go read all of the code everywhere in my application to know what I can and can’t assume about this variable at this point in time. That just slows me down a lot. An alternate way to handle this with newer versions of Java is rather than foo being a POJO, we use this new keyword called record, which basically makes it a data class. It means that it’s going to have these two properties on it and they can’t change ever. It’s an immutable piece of data.

Then I also add this final keyword, which says that nobody can reassign this variable anywhere else in this scope here. With those two changes in place, I know that nobody can have reassigned my foo to a different foo object because that would have been a compile time error. I also know that nobody can have modified this inner property, this myString, because that also would be a compile error. I don’t care anymore that we passed a reference to this variable to the Bar.doSomething method, because no matter what it does, it can’t have modified my data. I don’t have to worry about that. Also, if there’s 100 lines of code here, I know that they can’t have modified it. I no longer have to spend any time thinking about the state that might have changed in between these lines of code.

When I get down here to this print statement, I know exactly what it’s going to print. That means I can just move on with my changes that I want to make to the code without getting distracted by having to page all of the rest of this application into my brain and think about it. Most languages have some support for this these days. Kotlin definitely has data classes. Clojure, everything’s immutable by default. Java has records and final. You can find this in pretty much any programming language. TypeScript and Rust, you have to roll your own a little bit, but it’s definitely possible to follow these patterns.
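For example, in Python the closest stand-in for a Java record is a frozen dataclass; this is a rough sketch of the same idea, not code from the slides:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Foo:
    my_string: str
    my_number: int

def do_something(foo: Foo) -> None:
    # This function can read foo, but an assignment like
    # foo.my_string = "changed" raises FrozenInstanceError at runtime,
    # and mypy flags it at build time too.
    print(f"working with {foo.my_string}")

foo = Foo(my_string="hello", my_number=42)
do_something(foo)
# ...imagine 100 lines of code here...
# foo is guaranteed to still be exactly what we constructed:
print(foo)
```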

Persistent Collections, and Immutable Collections

That leads us into a related but slightly different topic, which is about collections. I also want my collections to be immutable for the same reasons, but that’s a little bit harder. You can see this line of code here where I’m constructing a Java ArrayList. I’m using this final keyword because I want this to be immutable. I want to be able to make those assumptions about the state of my list without having to spend a bunch of time reading my other code. The problem is this ArrayList provides these mutation functions, the .add, .remove, whatever else. I’m right back in the world where I was before, where these other functions that I’m calling, these other lines of code that might happen here, they can mutate that list in any number of ways.

Again, I cannot make any mental assertions about what this list has in it by the time I get to this point in my code. Recent versions of Java have added some stuff, like there’s this new List.of factory function that actually does produce an immutable list, which is what I want. Now I don’t have to worry about the fact that I’ve called doSomething, because I know that this list is immutable. Again, by the time I get to this print statement, I know what my list has in it. The flaw with that is you’ll notice this List.of factory function is still returning the normal List interface.

That List interface provides these mutating functions like add, remove, whatever else. Even worse, if I call those now, it’s a runtime error. The compiler won’t detect that this is a problem, but the program will throw an error at runtime. Now I’m back to the world of relying on good intentions. I’ve got this immutable list, which is what I wanted, but if I’m passing it around to all these other functions and only advertising it as a List, then they may try to call the mutation functions on it, and then we get a bug at runtime.

Some of the more modern languages like Kotlin have solved this problem: in Kotlin, collections are immutable by default. If I say listOf, then I get an immutable list, and it doesn’t have any methods on it like add. Again, it’s a compile-time error if somebody tries to call that. It does also have mutable variants of those collections. If I really need one, I can have one, but the key here is that it has a separate interface for the two. I can lean into the immutable interface in all the places where I want to make sure that I don’t have to worry about somebody modifying the collection underneath me.
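You can approximate that split in Python as well, by advertising the read-only Sequence interface instead of list; a small sketch, with the function invented for illustration:

```python
from typing import Sequence

def total(numbers: Sequence[int]) -> int:
    # Sequence is the read-only interface: it has no append or remove,
    # so a call like numbers.append(4) here fails mypy with something
    # like: "Sequence[int]" has no attribute "append"
    return sum(numbers)

nums = (1, 2, 3)  # a tuple gives us runtime immutability as well
print(total(nums))
```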

Whenever I talk about this concept of these immutable collections, people ask me, what about performance? Your code is going to make changes to the collection over time, otherwise your code’s not doing anything interesting. Doesn’t that mean that we have to clone the whole collection every time we need to make a modification to it, and isn’t that super slow and memory intensive? The answer to that is, no, thankfully. There’s a really cool talk from QCon 2009, from Rich Hickey, the author of Clojure, about persistent data structures, which are the data structures that he built in as the defaults in Clojure. They present themselves to you as a developer as immutable at all times. When you have a reference to one, it’s guaranteed to be immutable. They provide modifier functions like add and remove, but what those do is produce a new data structure and give you a reference to it.

Now you can have two references, one to the old one, one to the new one, and neither one of them can be modified by other code out from underneath you. The magic is, behind the scenes, they’re implemented via trees and they use structural sharing to share most of the memory that makes up the collection. It’s actually not nearly as expensive as you might fear. This was a hard thing for me to wrap my head around when I first started writing Clojure. I was like, that can’t possibly be performant. It’s a really nice solution to the problem. In practice, the way that they’re implemented, you almost never need to clone more than about four nodes in the tree in order to make a modification to it, even if there’s millions of nodes in the tree. This is a slide from Rich’s QCon talk where he talks about how these are implemented. What you can see here is two trees. The one on the left with the red outline, that’s the root node of the original collection. It has all these values in it.

On the right, he’s showing what happens when we want to add a new child node to this purple node with the red outline. To implement that, what we actually do is clone all the parent nodes on the path down to that one, and we add the new child node there. Then the rest of the child nodes of all of these new nodes that we’ve created just point back to the same exact memory from the original data structure. We’ve cloned four tiny little objects and retained 99% of the memory that we were using from the original collection. With this pattern, you can have your cake and eat it too. You can have a collection that presents itself to you as immutable, so that you know that it can’t be modified out from underneath you while you’re working on it. You don’t have to sacrifice performance when other threads, for example, need to change it.

This is hugely powerful in concurrent programming, because there’s all kinds of problems that you can run into with shared collections across multiple threads in your concurrent code, where you either have to do a lot of locking to make sure that one thread doesn’t modify it while another thread is using it, or you can end up running into these weird race conditions that cause runtime errors. With this pattern, once any thread grabs a reference to this collection, you know that that collection’s not going to change while you’re consuming it. After it’s done with it, it can go grab a new reference to the collection, which might have been updated somewhere else, but again, that one will be immutable, and we don’t have to worry about it being modified out from underneath us either. Clojure and Scala have these kinds of collections built right into their standard library, but every other programming language that I’ve looked into has great libraries available on GitHub for this, and they’re usually widely used and battle-tested.
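In Python, for instance, pyrsistent is one of those battle-tested libraries; a minimal sketch, assuming `pip install pyrsistent`:

```python
from pyrsistent import pvector

original = pvector([1, 2, 3])
updated = original.append(4)  # returns a NEW vector via structural sharing

print(original)  # pvector([1, 2, 3]) -- unchanged, safe to share across threads
print(updated)   # pvector([1, 2, 3, 4])
```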

Errors as Return Types – Simple, Predictable Control Flows

Now we’re going to move on to errors as return types. This one has mostly to do with control flow. When I’m talking about this one, I like to reflect on the history of Java, and how at the beginning of Java it was really common for us to have these checked exceptions versus unchecked exceptions. Method signatures would look really different depending on whether they’re using checked or unchecked exceptions. Checked exceptions were trying to do exactly what I’m advocating for in this talk. They were trying to give us compile-time safety to make sure that we were handling these errors that might happen.

In practice, we just collectively decided we did not like the ergonomics of how it was implemented and we drifted away from it over time. I think one of the funniest examples of that evolution is in the standard library of Java itself, the basic URI class that you use for everything that has to do with networks. It throws a checked exception called URISyntaxException whenever you call its constructor, which means you literally cannot construct one of these objects without the compiler forcing you to put this try-catch there, or without you changing your method signature to advertise that you’re going to rethrow that.

Then everybody else who’s calling your function now has to deal with the same problem. Everybody hated that, because the odds that we were going to actually pass something in there that would cause one of these exceptions were really low, and it drove people crazy. A couple of releases later in Java, they added this static factory function called create, and literally all it does is call the constructor, catch the exception, and rethrow it as a runtime exception. They put that into the standard library. That was an interesting trend to observe. Likewise, all of the JVM languages that have appeared in the last 10, 15 years, Kotlin, Scala, Clojure, they’ve all basically gotten rid of these checked exceptions in favor of runtime exceptions. That means now all of our errors are runtime errors. That again is really against the grain of what I’m pitching in this talk. It means now we have to go read the docs or the code for every function we’re calling, make sure we know what kinds of exceptions it could be throwing, and handle them correctly. We’re back to relying on good intentions.

Go is the first language in recent memory that tickled something in my brain about different ways to solve this problem. Go really leaned into the syntax of: if you’re going to call a function that might cause some kind of error, instead of there being an exception with weird control flow semantics that relies on this weird try-catch syntax, it just returns a tuple instead. You either get your result back or your error back. One of those is going to be nil whenever you call this function. Then the compiler can enforce that you’ve done some checking on that nil, and you’ve decided how to handle it.

This is, again, the compiler now doing this work rather than relying on good intentions. The other thing that I really like about this is we’re just using an if-else statement to interact with this error. It’s not a new special language construct that differs from how we’re dealing with all the other pieces of data in our code, like a try-catch is. It’s just the same type of code we’d write for any other piece of data. We get clearer control flow. It allows the compiler to enforce more explicit handling, and it prevents us from silently swallowing exceptions. Plus, again, we can use our normal language constructs rather than the special try-catch stuff. Here’s a Rust equivalent of that. In Rust, there’s this type called Result. Any instance of Result is either an Err or an Ok.

Then it’s a generic type. If it’s a success, if it’s an Ok, then the type is going to be this 32-bit integer. If it’s an error, then the value is going to be a String. Then we can use this pattern match statement and say, if it’s Ok, then I’m going to do something with the success case. If it’s an Err, then I’m going to do something with the error case. In these case statements, we get back the types that we declared in the Result declaration.
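If you want to play with this style without leaving Python, here’s a rough sketch of the Go convention transplanted over; the function and error message are invented for illustration:

```python
from typing import Optional, Tuple

def parse_port(raw: str) -> Tuple[Optional[int], Optional[str]]:
    # Go-style: return (result, error), where exactly one of them is None.
    if raw.isdigit():
        return int(raw), None
    return None, f"not a valid port: {raw!r}"

port, err = parse_port("8080")
if err is not None:
    # Plain if-else control flow, no special try-catch construct.
    print(f"handling the error: {err}")
else:
    print(f"got port {port}")
```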

Exhaustive Pattern Matching, and Algebraic Data Types

Errors as return types help us move our bugs from runtime to build time. I’ve shown you that they’re pretty ingrained in the languages in both Go and Rust, but can we do this in other languages? That leads me into my last topic that I want to talk about, which is exhaustive pattern matching and algebraic data types. I’m going to explain what those are a little bit, and then I’m going to close the loop on the error handling part of this. What is an algebraic data type? It’s basically like a polymorphic class. You can imagine if you had a parent class called shape, and then you had child classes called circle, square, octagon. It’s basically just that, except the compiler knows up front all of the subtypes that can exist, rather than it being open-ended. Most modern languages have some way of expressing these now.

Then they have these pattern matching statements that you can use to branch on which one of the types you end up getting. Here’s an example in TypeScript. You can see I’ve declared this type called shape, and this little or operator just means I’m unioning together several different types. The key in TypeScript is that I have this common property, which I happen to call type, but you could call it whatever you wanted. As long as all of the types that you’re declaring have that property, and they all have a unique value for it, then the compiler can tell the difference between all of these types. Then I can do a pattern match statement on that variable, and I can do these case statements to handle the individual branches. This is really cool, because the compiler is smart enough to know that once I get inside this circle branch, I’m going to have a radius property available, and I’m not going to have a width and height. If I tried to reference width or height here, the compiler would fail, and it wouldn’t allow me to write that code.

Conversely, the same thing with the rectangle. It gives me a lot of type safety. Exhaustive pattern matching is basically just that same concept, but the compiler can give you a build time error if your pattern match statement doesn’t cover all the possible cases. This is why algebraic data types are important, because we want the compiler to know all of the legal types that are available. Most of the languages that have this stuff, they have the support for an exhaustive pattern match statement. Not all of them have it enabled by default. In TypeScript, you’ve got to turn that on as a compiler option. If you turn it on, then this becomes an exhaustive pattern match statement. What that means is if I go modify the definition of my shape type, and I add a third one in here called square, now this shape definition may be in one file somewhere in my code, and I may have these pattern match statements scattered throughout lots of other places in my code. They’re not guaranteed to live right next to each other.

As an engineer, if I come in here and I add in this new square type, then the next thing I’ve got to do is search all over my codebase, find all the places where I might have been doing one of these pattern matches, and make sure that I add support for the square. If I don’t do that, then we can get some weird runtime failure. With an exhaustive pattern match, you’re telling the compiler that you want it to fail if it finds a pattern match statement where you’re not explicitly handling all the cases. This would fail to compile in TypeScript, because I don’t have handling for the square case here, and I have to go add it before it’ll build. That’s really powerful.
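The same idea works in Python 3.10’s match statement when you pair it with mypy; this sketch is mine rather than from the slides, and it uses the documented assert_never trick for the exhaustiveness check:

```python
from dataclasses import dataclass
from typing import Union
from typing_extensions import assert_never  # typing.assert_never in 3.11+

@dataclass(frozen=True)
class Circle:
    radius: float

@dataclass(frozen=True)
class Rectangle:
    width: float
    height: float

Shape = Union[Circle, Rectangle]

def area(shape: Shape) -> float:
    match shape:
        case Circle(radius=r):
            return 3.14159 * r * r
        case Rectangle(width=w, height=h):
            return w * h
        case _:
            # If we add Square to the Shape union but forget a branch,
            # mypy fails the build here instead of us failing at runtime.
            assert_never(shape)
```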

Similar concept in Kotlin. In Kotlin, these algebraic data types are called sealed classes. You can see here I’ve got one where it can either be a Success1 or a Success2. I’ve got this when statement that I can use as a pattern match on it. What I want to show here is, if the function that I’m using to get this result might throw an error, this is where I’m going to tie this back into the error handling, this thing might cause an error. I have to put in this try-catch statement, and I have to know what exception type might get thrown here. We’re in good intentions land again. I might forget to put that try-catch statement in there. I might not handle all of the different types of exceptions that could possibly get thrown by that function.

If I make a small change to the way I model this, and I just add the error in as a different branch of this Result sealed class, then I get to take advantage of all this other stuff that I’ve just shown you. This is what my code looks like now. I just have a new branch in my pattern match statement that handles the error case. Now I’m not relying on good intentions to put the try-catch into the code. The compiler, because this is an exhaustive pattern match statement, will fail to build if I haven’t added the branch to handle this error. This code is just cleaner and simpler. It doesn’t involve this extra level of nesting and weird special-case code. I highly recommend looking into the support for this in various languages. This is one of the more recent trends that I’ve seen. In Java, it didn’t come in until Java 17. In Python, it came in Python 3.10. You can find something that will allow you to do this in pretty much whatever programming language you’re using.
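To close the loop in Python terms, here’s a hedged sketch of that same modeling trick, with the error as just another branch of the union; all the names are invented:

```python
from dataclasses import dataclass
from typing import Union
from typing_extensions import assert_never

@dataclass(frozen=True)
class Success:
    value: int

@dataclass(frozen=True)
class Failure:
    reason: str

Result = Union[Success, Failure]

def fetch_count() -> Result:
    # Stand-in for a call that used to throw an exception.
    return Failure(reason="service unavailable")

result = fetch_count()
match result:
    case Success(value=v):
        print(f"got {v}")
    case Failure(reason=r):
        # The error is just another branch: no try-catch, and the
        # exhaustiveness check fails the build if this case is missing.
        print(f"handled the failure: {r}")
    case _:
        assert_never(result)
```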

Key Takeaways

Allowing bugs to surface at runtime can be really expensive. That can put us on that alligator timeline that we’re trying to avoid. Modern language trends are giving us really cool tools to catch bugs at build time instead of allowing ourselves to be subject to this problem. These same trends, I think, have this nice side benefit that they make the code more maintainable and easy to reason about anyway. It’s like a double win. More maintainable code obviously leads to increased developer productivity. It makes it easier for your teammates, present and future, to understand your codebase and feel confident about making changes to it. What we’re really trying to do here is find places where we can avoid relying on good intentions to solve problems.

The specific language features that I’m advocating here: leaning into type checkers for dynamic languages. Configuring your build tools to disallow nulls. Using immutable variables and data classes wherever you can. Finding a persistent collections library in your language if it’s not built into the standard library. Surfacing errors as return values, not exceptions. Using exhaustive pattern matching with algebraic data types to let the compiler make sure that you’re handling all the cases whenever you can. It also allows you to model your business logic a little bit more concretely. It’s really nice.

Questions and Answers

Participant 1: Are there any good examples in the open-source world that you could point to that use a lot of the best practices you were talking about?

Price: I’ve found that the way to find the good examples is to find projects that are built in these languages that make this stuff be core constructs. Any project that you find in Rust is going to be forced to follow a lot of these paradigms just because that’s how the language was designed. Many or most Kotlin projects that I’ve seen really lean into the immutable variables and the pattern matching stuff. Mostly I think about that by language more so than by specific projects.

Participant 2: The principles that you mentioned, let’s say primarily I’m a Java shop, and it’s one of those languages where only some of those principles are checked. Would you suggest that I try navigating to Go or Rust and start moving my platform to a mix of these languages, or would you say to stick to just one which checks everything?

Price: It probably depends a lot on your team and their interest and willingness to branch out into different languages. There is obviously a cost for managing codebases in multiple different languages. With Java in particular, like inside of AWS over the last five years, there’s been a really big shift towards teams that had big existing Java applications starting to add new features to them using Kotlin, because Kotlin has really good JVM interop, and so you don’t have to rewrite the rest of your code. You can just start adding Kotlin classes to it. When you write your Kotlin code initially, you can choose to write it in a way that makes it look almost exactly like Java code, so it’s really familiar to your engineers that already have experience with that. You can start experimenting with the features over time and gradually migrating things over.

I think in most big existing codebases, that’s always going to be a more successful long-term strategy than trying to just cut everything over all at once because that just ends up usually not being practical given the business requirements for delivering new features and stuff like that. That’s one thing to consider. If you have some isolated project, like a new microservice that isn’t tightly coupled to your existing codebase, then that’s a reasonable place to consider trying a new language. Then, like I mentioned before, just finding some little toy side project when you have the time and interest to play around with these different languages and get a sense for how you feel about them. That really helps you decide whether it’s something that you want to lean into or not.

Participant 3: When you talked about exhaustive pattern matching and you showed the switch statement, my mind immediately went to traditional interfaces and virtual classes. Why would I want to add a new enumeration to my result rather than adding a new implementation to do the different things? It was more when you did the shapes thing. When I wanted to add a square, why couldn’t I just have a shape interface class and have three different implementations? When I want to add in a polygon or whatnot, I don’t have to go everywhere, I just have an interface method like area that I would have to implement.

Price: There’s a lot of ways to skin this cat. The thing that I’m really advocating for here is choosing one that gives you the exhaustive pattern matching, so that when you do make that addition to the parent type, the compiler can automatically catch all the places in the code where you haven’t added support for it. There are definitely other ways to handle that besides this one. I like this best in TypeScript, because in TypeScript, if you use interfaces or subclasses, then you have to start using this weird instanceof keyword, and it gets into that realm of JavaScript where it behaves really differently for one type of data than it does for another type.

In JavaScript, if you’ve ever had a piece of code that’s trying to check and see if a variable is a string, it may be like, thing instanceof String, or typeof thing equals string, or several other conditions that you have to check, just because JavaScript gets wonky when you start trying to do reflection-type stuff like that. Of the different patterns that I have personally played around with in TypeScript that really work well with this exhaustive pattern matching, this is the one that has been the most foolproof for me. This is the one that Google uses in their implementation of Protobuf. Protobuf has this concept of oneof, where you can say that a piece of data is either this thing, or this thing, or this thing. If you look at the way that Google’s Protobuf libraries generate TypeScript code to handle those oneofs, this is the way that they do it. It’s just worked really well for us when we tried it out. It’s not the only solution though.
