Month: February 2025
Podcast: Your Software Will Fail, It is How You Recover That Matters: A Conversation with Randy Shoup

Article originally posted on InfoQ.

Transcript
Michael Stiefel: Welcome to the architects’ podcast where we discuss what it means to be an architect and how architects actually do their job. Today’s guest is Randy Shoup, who has spent more than three decades building distributed systems and high-performing teams. He started his architectural journey at Oracle as an architect and tech lead in the 1990s, then served as chief architect at Tumbleweed Communications. He joined eBay in 2004 and was a distinguished architect there until 2011, working mainly on eBay’s real-time search engine.
After that, he shifted into engineering management and worked as a senior engineering leader at Google and Stitch Fix. He crossed the architecture and leadership streams in 2020 when he returned to eBay as chief architect and VP for eBay’s platform engineering group. He’s currently senior vice president of engineering at Thrive Market, an organic online grocery in the U.S. It’s great to have you here on the podcast, and I would like to start out by asking you, were you trained as an architect? How did you become an architect? It’s not something you decided one morning and you woke up and said, “Today, I’m going to be an architect”.
How Did You Become An Architect? [01:41]
Randy Shoup: Great. Well, thanks for having me on the podcast, Michael. I’ve listened to so many of the, in general, InfoQ podcasts, and particularly your interviews with Baron and Lizzie, and various other ones, so super excited to be here. Yes. How did I become an architect? Yes. Well, I woke up one day and said, “Today, I will architect”. No, my background is multidisciplinary. So, when I went into university, I was not expecting to be a software engineer. I always loved math and computer science, but I was planning to be an international lawyer. When I was in college, it was the late 1980s. I graduated in 1990. So it was the height of the Cold War, and the U.S. and the Soviet Union had tens of thousands of nuclear weapons pointed at each other, and I wanted to stop that.
Anyway, long story short, through my university career, I studied political science, with a particular focus in international relations and East-West relations. And from that, I took an appreciation of nuance and seeing the big picture and really understanding the problem writ whole. I won’t go through all the boring details, but I also, while I was in college, interned at Intel as a software engineer. So I worked building software tools for Intel’s mask shop, which is one of the things that you need to do to make chips, and I love that.
So when I graduated from university, I was planning on being an international lawyer, that was the mainline career, and I ended up double majoring in political science and then mathematical and computational science because then I was like, “Well, I’m not going to go straight to grad school”. So I worked for two years as a software engineer at Oracle, then I was like, “Okay, time to take the GRE and the LSAT and go to law school and international relations school”, which I did start. For reasons, that didn’t work out. I wasn’t as interested in that as I thought I would be, and the secondary career side gig of software was really pretty fun, and so-
Michael Stiefel: So, in other words, don’t quit your night job.
Randy Shoup: Exactly, yes. Well, very well said, yes. If you have a side gig that you love more than your main gig, maybe you can make it your main gig, and so that’s what I did. So, eliding over lots of details, which we could talk about or not, I wasn’t excited about the international law career for reasons and then begged for my job back at Oracle. My friend and mentor and manager at the time welcomed me back with open arms. And so it’s really the combination then of the engineering skills and doing software with the typing, and also being able to see the big picture.
What makes a good architect? [04:01]
And so, to me, I think we’ll talk about this: a good architect is somebody who can go all the way up and all the way down. And Gregor Hohpe, whom I hope everybody who is listening to this knows, has this wonderful phrase that’s in his book. It’s called The Architect Elevator. So, from the boardroom, you can talk to executives, all the way down to the engine room where you can actually tweak the machinery. So, how did I become an architect? It was crossing those two things. I always was a techie and a fuzzy, as we used to say in college. So, both on the technical side, really enjoyed the math and the science, and also on the other side, fuzzy, like liberal arts and so on.
Software Interacts with the Real World [04:40]
Michael Stiefel: And lo and behold, here you are. So, one of the things that we’ve talked about, and I know both of us are interested in, is realizing how fragile our critical systems are. And very often, the engineers, which is surprising, and the public, which is not as surprising, do not realize it and realize the consequences of the fact that these systems are fragile. The most recent one was the example of the CrowdStrike failure. But even things such as… Let me pick an example that people are probably most familiar with. You go to Amazon, you’re told that there are three books left, but you actually have to wait until you get an email, later on in the process, to find out you’re actually getting the book.
Randy Shoup: Right.
Michael Stiefel: And even if they tell you, you get the book, what happens when in the process of getting the book, it gets damaged in the warehouse, in the process of sending it to you. Do they cancel? They ask you, “Do you want it?” So the interaction between the software and the messy world is something that I don’t think people understand.
Randy Shoup: Yes. And the real world can be messy, as you just said, like, “Okay, physical goods can be damaged in warehouses”. They could not make it in shipping. They could get rain damage. They could not be there, even though the system says they are. And also, even more often, the software can screw up. The software thinks that we decremented a counter when, in fact, it forgot to, and it should have, because of reasons, and all sorts of stuff. People can’t see me because this is a podcast, but I do not have hair. But once, I did. And so I like to joke that, when I entered software and then architecture in particular, I did have hair, that’s actually a true statement. And yes, with all the failures and trying to deal with them and trying to design around them, that’s in large part how my hair… how I went bald.
Designing For Failure [06:55]
Michael Stiefel: Well, I think you just said something very, very important that a lot of people don’t think about, and I’ve gotten pushback from people when I used to talk about this. I used to have a whole talk about designing for failure.
Randy Shoup: Yes.
Michael Stiefel: People didn’t like it. Well, some people did appreciate it, of course. I’m not saying nobody liked it. But especially if you got to the higher ups, they didn’t like it because, “Why should we think about failure? We want design for success”. And-
Randy Shoup: Michael, you’re designing for failure from the start. What are you doing?
Michael Stiefel: Right. But I think we see this also in software engineers, and this is in security, this is in lots of places. They like to design for the happy path.
Randy Shoup: Yes.
Michael Stiefel: They think this will work and this will then work. And when it comes to an error, “Okay, we’ll just throw an exception”, without thinking about where the exception goes, what the program state is, to be caught by the great exception handler in the sky. So how do we get developers and the business leaders, and the public at large to understand that software failure actually is a fact of life?
How to explain software failure to executives [08:09]
Randy Shoup: Yes. In your question, you already went boardroom to engine room, which I love. So, let’s start with the boardroom and then drop to the engine room. So, how do you have this conversation with executives? And look, if you look at talks I’ve given, sections in them, if not the entire titles of the talks, are all about designing for failure and everything fails, which is absolutely true. How does one have that conversation with executives? It’s by making it clear that this is not about, “I want it to fail”. This is about things going wrong in the world that we have to deal with, and so it’s designing in the face of failure. It’s not designing in order for it to fail, you were never saying that, but it’s designing to be resilient. That’s the term of art these days: resilient in the face of failures.
So there are lots of ways to do that, but having that conversation is, “Look, as architects, we should be able to say, must be able to say, ‘Hey, we get the happy path. There are many things that can go wrong along the way, and here is the way that the system is going to handle it when those things go wrong.'” And that could be the system retries things, that’s legit. That could be the system times out, that’s legit. That could be the system reconciles, comes back around, like in banking. You send things back and forth between banks in real time, but then there’s this reconciliation at the end of the day, at least traditionally, where you looked at the 999,000 on the one side and the 1 million on the other side, and you sorted it out with rules or with humans.
How to get engineers to care about software failure [09:31]
So, in all these situations, the architect needs to imagine… I guess, now, I’m in the engine room already. The architect needs to imagine all the things that can go wrong. And what I think trips up a lot of engineers is it’s scary to think, “Well, gosh, so many things could go wrong”, and that’s true, so many detailed things can go wrong. But if you think about it from a higher level, there are usually not a million different classes of failure. There are a million different instances of failure along the way. But even in a relatively complicated workflow, like, I don’t know, a payment system or a checkout, or something like that, there are a handful of classes of things that can go wrong. Resources could be unavailable. How do we deal with that? The logic could be wrong. How do we deal with that? Things could be slow or down. How do we deal with that?
So what trips up people that are new to this, which is fine, people are beginners, I’m a beginner in lots of things in my life, but for the beginner, it feels overwhelming. There are a million things that can go wrong. How could I check for all of them? And the answer is don’t. The answer is, think about the three or four classes of things that can go wrong and have a pattern of how to deal with each of those.
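Randy’s “three or four classes, one pattern each” idea can be sketched in a few lines of Python. Everything here is illustrative: the class names, the retry count, and the dispatch function are invented for this sketch, not taken from any system he describes.

```python
# Illustrative sketch: collapse many failure instances into a few
# classes, each handled with one pattern. All names are hypothetical.

class TransientFailure(Exception):
    """Timeouts, temporary unavailability: usually safe to retry."""

class PermanentFailure(Exception):
    """Bad input, hard declines: retrying will not help."""

def handle(step, retries=3):
    """Run one workflow step, applying a per-class pattern."""
    for _ in range(retries):
        try:
            return step()
        except TransientFailure:
            continue          # pattern for this class: retry
        except PermanentFailure:
            raise             # pattern: fail fast, surface to the caller
    raise TransientFailure("still failing after %d attempts" % retries)
```

The point is that the `except` clauses enumerate classes, not the million individual root causes; any new transient root cause is handled with zero new code.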
Michael Stiefel: In other words, think about the system state, thinking about what the safe point is, where you go back to.
Randy Shoup: Sure.
Michael Stiefel: To give, maybe, a concrete example is there’s lots of reasons why the credit card service you use may not be available, but you only care about the fact that the credit card service is not available.
Randy Shoup: Beautifully said. Exactly right, yes. I’m going through my checkout flow and I want to charge the payment method, and I can’t. We can imagine, if we thought about it, 25 different reasons together off the top of our heads in a minute why that could be true, network down, my payment system is down, their payment system is down, my credit card… blah, blah, blah, blah, blah. But from the perspective of the logic of that thing, it only matters that I cannot charge this payment method at this time. So, the wonderful thing about working with credit card systems is they’ve been around for a million years and they’re clunky, but the interface does tell you what happened. So, you can get something back that says, “Oh, the card was declined in an unrecoverable way. This is a fraudulent card”, “Okay, we’re not going to retry that one”.
But the more common case is it’s some kind of transient failure and like, “All right, we just retry it”, or if a human is there, we could say, “Hey, human, we couldn’t charge this method. Want to give us another card?” As every single person listening to this has had the experience in buying something at the grocery store, or whatever, and for whatever stupid reason, their particular card in the moment isn’t getting taken, so, “Hey, do you have another one?” “Okay, sure”. Again, patterns to deal with this exception handling, all the way up to the top of the system, which in this case, if someone is there as a human, or pop it up and then retry that part of the workflow, retry the charging payment methods, like Step. There’s lots of ways to deal with it.
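That collapse from many root causes to a few actions can be sketched as a lookup. The response strings below are made up, none of them come from a real gateway API, but they map onto the three outcomes just described: retry, ask the human for another card, or give up.

```python
# Hypothetical card-gateway response strings; the point is that many
# root causes collapse into three actions.

RETRY, ASK_FOR_ANOTHER_CARD, GIVE_UP = "retry", "ask", "give_up"

def next_action(response):
    if response in {"timeout", "gateway_unavailable", "network_error"}:
        return RETRY                 # transient: try again shortly
    if response in {"card_declined", "insufficient_funds"}:
        return ASK_FOR_ANOTHER_CARD  # recoverable with the human's help
    if response == "fraudulent_card":
        return GIVE_UP               # unrecoverable: never retry
    return RETRY                     # treat unknown failures as transient
```

Note that from the checkout logic’s perspective, only the returned action matters; the 25 underlying reasons stay inside this one function.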
Business Rules Help In Handling Failure [12:28]
Michael Stiefel: Also, there’s very often a business rule that enters in. So, for example, if the credit card service is not available and this is a customer that you know very well, and it’s $5 or $1,000, or whatever is small in your business, you may say, “Go ahead”.
Randy Shoup: Yes, yes. And actually, I’m sure you know this, but not everybody listening might, this is exactly how ATM machines work.
Michael Stiefel: Yes, that was the exact thing that popped into my mind when you said that.
Randy Shoup: Yes, yes, yes. And again, just to say… either of us could say it, but I’ll start saying it, the way ATM machines work is, “Hey, they have network connectivity to the office systems of your bank”, just like anything else, and they have rules in there that say, “If you cannot connect back to the bank, feel free to give Randy $20, but do not give him $2,000”, that kind of thing.
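That degraded-mode rule fits in a single check. This is a sketch only: the $20 limit, the function name, and the parameters are invented for illustration, not how any real ATM network is implemented.

```python
# Sketch of the offline ATM rule: when the bank's host system is
# unreachable, approve only small withdrawals.

OFFLINE_LIMIT = 20.00  # hypothetical business-rule threshold

def approve_withdrawal(amount, host_reachable, host_approves=True):
    if host_reachable:
        return host_approves          # normal path: ask the bank
    return amount <= OFFLINE_LIMIT    # degraded path: business rule
```

The design choice is that unavailability of the host is not an error to throw; it is an expected state with its own business rule.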
Michael Stiefel: And also, this is a situation where, because you may be on a network that’s not your bank’s, there has to be reconciliation among all the banks at some point in time, which is what you made reference to before.
Randy Shoup: Right.
Michael Stiefel: So, it’s a combination of business rules and software judgment.
Randy Shoup: Yes, I’m going to riff off that in two ways. I really love the way you said it. So, again, to restate, a million individual instances of things could go wrong. There are typically a small number of classes of failure. For each of those classes of failure, we may have a business rule that says, “Hey, this is how payments are supposed to work. When this kind of failure happens, you’re supposed to retry for three days and then you can give up”, or whatever. So, to your point, there could be, in an ideal situation, a business rule that says what to do. In any case, we have to solve it. Regardless of whether there’s a business rule that told us how to solve it, we, as engineers, have to solve it. That problem will occur and we need to decide what to do. We can’t throw up our hands. And so, again, we should apply some kind of pattern to that.
The other thing that, when you mentioned business rules, I really love and I want to reinforce for people, we’re talking about a payment workflow or a checkout workflow, but this is a generic comment I’m about to make. The initial instinct of the naive engineer, and I’ve been naive for more time than I should admit, is to hide that failure; better is to have it be part of the interface. Again, if you’ve not worked with payment systems or checkout systems, the interface is not, “Please make the payment, yes, no”. It is, “Please start this payment”, and there’s a workflow with states behind it, and those states are visible outside. So it’s visible to say, “Hey, the payment is started. It’s pending. It’s authorized. It’s completed”.
And my point there is, as an architect or a software engineer dealing with one of these domains, oftentimes, when you have these failures or when it’s possible to have these failures and you can’t resolve them immediately, as in this example, pop that up a level. That’s now part of your interface. Part of the workflow and part of the payment system that we just sketched is the idea that payments can be in this intermediate transient state. They can be accepted, but not yet done. And in the U.S., for those of us who live here, we still have to wait three days for… If we’re going to pay Michael for something, typically, the banks give themselves three days to get everything all sorted, even in the modern world.
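Making the states part of the interface might look like the explicit state machine below. The transition table is a guess for illustration, not any real payment system’s; the point is that the current state is readable from outside and illegal transitions are rejected.

```python
# The visible payment states ("started, pending, authorized,
# completed") as an explicit state machine. The transition table is
# illustrative only.

VALID_TRANSITIONS = {
    "started":    {"pending", "failed"},
    "pending":    {"authorized", "failed"},
    "authorized": {"completed", "failed"},
    "completed":  set(),    # terminal states: nothing follows
    "failed":     set(),
}

class Payment:
    def __init__(self):
        # The current state is part of the external interface,
        # not hidden inside the payment service.
        self.state = "started"

    def transition(self, new_state):
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(
                "illegal transition: %s -> %s" % (self.state, new_state))
        self.state = new_state
```

Callers poll or subscribe to `state` rather than being told a misleading yes/no at request time.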
The “It’s Never Going to Happen” Fallacy [15:54]
Michael Stiefel: Just as an amusing side point, I remember the days when I ran a business that took credit card payments, but this was before real computerization. You’d have a stack of credit card slips, the one the merchant had, and you go call the bank and list off all the numbers, and they would tell you, in other words, whether it was good or bad. So, in other words, part of the problem is that I think a lot of people have never worked with these manual systems and realize that, actually, the probability of failure is higher than people think. People very often think, “It’s not going to happen”. I’m sure you’ve heard engineers say that. “We don’t have to worry about that”.
Randy Shoup: Yes. The wonderful and terrible thing of working at places like eBay, Google is the things that occur one in a million times occur thousands of times a day.
Michael Stiefel: Because you’re doing it 10 million times.
Randy Shoup: Yes, yes, yes. And the things that occur one in a billion times, at Google scale, still occur thousands of times a day. So, there’s no hiding. And you didn’t really ask me to do this, but I’m going to say it anyway. It is not professional software to ignore failure. You are not being a professional. You know this, and I know you believe it, but just for the listeners, you are not being a professional software engineer if you don’t handle, in some way, failures, and handle could be, again, retry, or reconcile, or something automatic, or it can be simply fail, fail, fail all the way up. But one way or another, you can’t ignore, I was going to say, the possibility, the guarantee. I guarantee you that everything in your system will fail at one point or another. Guaranteed, absolutely guaranteed.
Michael Stiefel: Yes. And you have to leave the system however you handle a failure in a stable state.
Transactions and Workflows and Sagas, oh my! [17:44]
Randy Shoup: Correct. And I know you and I have chatted about this, so maybe this is going to make me want to take it in a transactional way. So, a way that we traditionally have approached this problem, and it’s a good one, it’s a great tool to have in our toolbox, is transactions. So, the conceptual idea… I know everybody knows, but the conceptual idea is I have several things I want to do and let’s wrap them all in a transaction and make them all happen together or none at all. So atomic, consistent, isolated, and durable, the ACID properties. And when you have a system where that can work, it implies a single database. When you have a system that can work, that’s great, that’s a wonderful tool in your toolbox you absolutely should use.
And also, even the “simple” systems that we were just talking about, payment systems, checkout, et cetera, are not like that. There’s not a single database between Bank of America and Deutsche Bank when they exchange stuff. There’s not a single database or a single distributed transaction between my local grocery store and the credit card processor.
So, how do we deal with that? It’s asynchronous, and so we make it a workflow. And workflows are made up of asynchronous messages and we figure things out, and we have the SAGA pattern, which we could talk about in detail if we’re interested, but the conceptual model at the higher level is, “Well, when I can’t make it a transaction where it’s all or nothing in this moment, I need to then think about it as a workflow or a state machine, and it’s incumbent on me, as the software engineer, to make sure that we enter that state machine from a safe system state and that we transition through the states in that state machine and we exit the state machine in a safe state”.
Do Not Hide Transient States [19:22]
And again, what I was saying before about making these transient states visible, when you’re in one of these situations, it really behooves you to not hide the fact that there are all these state transitions and transient stuff happening. That is to say, it should be explicitly part of the external interface, if that makes any sense.
Michael Stiefel: I think I can give you a simple example of that where it should be visible not only to the system, but to the end user. Let’s say you’re signing up a customer and they have to provide social security number, all kinds of information.
Randy Shoup: Right.
Michael Stiefel: They may not have all the information at once.
Randy Shoup: Of course.
Michael Stiefel: So what are you going to do? Throw them out and make them reenter everything all over again, or keep it in a semi-complete state, which may enable them to do certain things, but not other things.
Randy Shoup: Right. What a wonderful, visceral, evocative metaphor. That’s gorgeous because everybody can say, “Well, yes, when I’m getting my passport”, as my son just did, “there are steps and there’s a whole workflow”. And he doesn’t have his birth certificate and all these things at the same exact moment to transactionally enter them all at once. And even if he did, to your exact point, he spent two hours entering all this various biographical information about himself and various things and proving his identity, and so on. And if, at the end, some stupid thing at the U.S. government went wrong,
Michael Stiefel: The Internet goes down.
Randy Shoup: … and you had to go reenter all those two hours again, that would have been insane. It was annoying enough as is.
Workflows are Resilient to Failure [20:57]
So, anyway, I love that metaphor because it doesn’t mean it’s a failure… A workflow is resilient to failures. A well-designed workflow is exactly resilient to the kinds of failures we’re talking about. And also, it is “resilient”, there’s probably a better way to say it, to, “You know what? I don’t have that information right now.” So, thinking about filling out a government form, or doing a payment process, or even the software engineering that we do, thinking about it as a workflow is really freeing, and it fits the problem.
Align the Architecture With the Problem [21:31]
One of the other… It’s a meta principle for me, and this is the way I look at it, but I think the very best-architected and best-designed systems I’ve ever worked with are very directly aligned with the problem. This is exactly domain-driven design. But when you can take your overall problem and reify the real world into the software directly, if that makes any sense, there’s a thing that’s part of the world like, “Oh, we have eBay”. “Okay, people buy and sell things online”. Okay, well, just imagine what happens when you’re buying and selling things offline.
Every one of the steps that you do, like I walk into the store, I go choose the thing, I pull out my wallet, all the steps that happen should have, do have, an exact analog in the software system that we build. And if we can use the inspiration and the, often, thousands of years worth of human knowledge about how to do those things in the real world and just put those into the software essentially, then we’re in much better shape.
The example I always give in this is I’ve never worked for Lyft, or Uber, or Grab, or whatever, and I don’t know their history, but I should look it up, I guarantee though, because this is how every system evolves, they started as a monolith. Each of them, I bet. And then what’s the natural domain decomposition for an Uber? It’s driver side, rider side. So, the rider side has a bunch of concerns and apps, and so on. And then totally separately from that, there’s the driver’s side. And then totally separate from that, there’s the back office like, “Well, how do you show the rider what drivers are available?” So where I’m going with that idea is it behooves us as architects to really fully understand the real problem. What’s the real thing we’re trying to do? And in this case, it’s obvious like, “Okay, I want to get a ride from point A to point B, and somebody is going to drive me”, and then reify that or express that in the software.
And then to the point back that we were talking about, about how to deal with failure, it becomes pretty obvious what the patterns are. We have to still type and stuff, but it becomes pretty obvious what the patterns or conceptual mechanisms, if that makes any sense, to deal with these things. What happens? What’s supposed to happen when I schedule a ride and they don’t show? Well, because they’re late or whatever. Well, I don’t know. When you’re hailing a cab in New York City, how does that work? Well, okay, it’s exactly the same thing.
Michael Stiefel: Although some things become a little more difficult in the virtual world. For example, to go back to your eBay example, if the merchandise is right in front of me, I can inspect it.
Randy Shoup: Yes.
Michael Stiefel: That’s a more complicated problem in the virtual world, or to go to your example of Uber and Lyft, in the past, there was a taxi commission that I knew the police ran a security check on the drivers.
Randy Shoup: Right.
Michael Stiefel: So, in other words, how, as you say with the reification, sometimes it’s not a one-to-one mapping.
Randy Shoup: Totally. So, we are strongly agreeing, but it won’t seem like that for one moment. Yes, the conceptual problems are the same like, “Hey, I want to see if this merchandise I want to buy is good”. That is a real problem. To your exact point, the solution in the virtual world is different from this. I can’t touch and feel it, so what else can we do? And you weren’t asking, but I’ll actually tell because it’s cool, eBay has at least three mechanisms I can think of off the top of my head for that. Number one, eBay’s feedback system, that’s been around for the 28 years, or whatever, of eBay. And this is gameable, but you can develop over time a trust system for the seller and for the buyer. Okay, so that’s number one.
Number two, in terms of the specific merchandise, for various things, I think it might be broader now than when I was here before, there’s a money-back guarantee. If you get something and it doesn’t meet the description, there’s a mechanism to return it and get your money back. And also, for particular items that are very often counterfeit, think sneakers, watches, handbags, a bunch of these things, particularly for Gen Z are like traded assets, essentially, people get the… I’m going to say it wrong, but people get the Michael Jordan super sneaker, or whatever, and only made 10 of them. They got gold stars on them, or whatever.
Anyway, for that, eBay has started actually bringing those things to physical warehouses and physically inspecting them and putting a virtual stamp of approval, if that makes any sense. So, a long-winded way of saying, 100%, there is that same problem statement, which is, “Hey, I want to buy this thing. Is it good quality and is it the thing that I actually want to buy?” And in the virtual world where we can’t touch it, we actually need to do a bunch of different other schemes, essentially.
Michael Stiefel: But if you think about it, and I’m going to date myself a little bit here, this is the exact problem at Sears, Roebuck and Montgomery Ward had with mail order.
Randy Shoup: Totally.
Michael Stiefel: So, in other words, there was a reputation in the company there, as opposed to the person. But again, there are more analogies than one might imagine to help you think about this problem.
Randy Shoup: Yes, that’s great. I love it, I love it. Yes. Exactly mail order, yes.
Workflows and State Machines [27:05]
Michael Stiefel: I have found that people have trouble with workflow and state machines, especially where there are approvals involved and things are long-running. Before, we were talking about state machines in a program where there’s failure; it may be asynchronous, but, more or less, the software is waiting on itself, so to speak.
Randy Shoup: Sure.
Michael Stiefel: But when you have things like getting approvals, which is another place where workflow comes in, and it’s long-running, people have trouble with that. Especially when you have to deal with choreography or event-driven stuff, that becomes another level of distraction that you have to put on things.
The SAGA Pattern [27:50]
Randy Shoup: Yes, 100%. In fact, I’m dealing with this exact issue in my day job because, “Hey, we’re an online grocery and we take people’s payments, and we need to ship them things, and that’s a workflow”. So, the most widely known well-functioning pattern for this is called the SAGA. Anybody who wants to Google for that will find stuff from Chris Richardson and Caitie McCaffrey on what are called SAGAs. And so just very high-level, it is, we’ve got this workflow and lots of different… not one system does it all. It’s interactions between different systems.
So, A sends a message to B, B does some things, B sends a message that’s received by C, they do some things, and the SAGA is just a way of representing that at a bit of a higher level. And then if there’s a failure at C or D, then you do compensating operations. So you do individually transactional, individually durable operations along the way, but they’re separated in time. So A happens, and then at a totally different time outside that transaction, B thing happens. And totally after that, the C thing happens. And if there’s anything that goes wrong with this workflow, you do what’s called compensating operations, essentially undoes in the reverse direction. So there’s a lot of literature and techniques around that SAGA pattern. That’s a great one for people to… Oh, why did people even invent that? And exactly because this is hard.
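A minimal sketch of the compensation idea, with invented step names; a production saga would also persist its progress durably so it can resume after a crash, which this toy version omits.

```python
# Minimal saga sketch with compensating operations.

def run_saga(steps):
    """steps: list of (do, undo) pairs, executed in order.

    On any failure, run the undo (compensating) operations for the
    steps that already completed, in reverse order, then re-raise.
    """
    completed = []
    try:
        for do, undo in steps:
            do()                      # each step is individually durable
            completed.append(undo)
    except Exception:
        for undo in reversed(completed):
            undo()                    # compensate in reverse direction
        raise
```

So if reserve and charge succeed but ship fails, the saga runs the charge undo and then the reserve undo, walking the workflow back to a safe state instead of leaving it half-done.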
Orchestration and Choreography [29:13]
Relatedly though, and this is something that I’m just looking into but lots of people know a lot about, is an open-source system called Temporal. It is a way of making these workflows durable in a very easy-to-program setup. And I’m not going to do it justice because I haven’t actually done this stuff yet, but we’re going to do it real soon now. So, you mentioned choreography. So there’s choreography versus orchestration. Choreography is where events fire and propagate on their own, but there’s no central coordinator. Orchestration is where, just like a conductor in an orchestra, there’s a central “controller” that makes sure that A and B, and C, and D happen or don’t happen and controls the workflow going back and forth.
So, Temporal is an orchestrator concept, and you program that orchestration logic in a regular programming language, Python, Java, PHP, whatever, Go. They support a million of them, and the system stores where you are in that workflow. And if you have failures along the way, it brings the system back to the state where you were, and then you just keep going.
I’m not giving it full justice. But actually, if anybody googles for Temporal, they have a great website with lots of sample code and lots of great explanations. And also, there are literally 100 or more YouTube videos about how it works and all the companies that use it. And so I won’t even be able to list them all, but every Snap story is a Temporal workflow. At Coinbase, every Coinbase transaction where you’re moving crypto back and forth is a Temporal workflow. Netflix uses it as the base of Spinnaker, which is their CI/CD system. I’m actually forgetting a bunch of things, but it’s used very… oh, Stripe. Every Stripe transaction, which is also money, is also a Temporal workflow. So it’s open-source, then there’s a cloud offering by the company that supports it.
Anyway, I mentioned this only because I have this exact workflow problem in my day job and I wanted to make it easier, and I was all set to teach everybody about what I was just saying, event-driven systems and SAGAs, and compensating operations, and state machines, and so on, and those are the real thing. They actually work that way, that’s actually how the systems ultimately work at the base. But I was searching for a way to make it easier, and I think that Temporal is it. And don’t trust me, trust Stripe and Coinbase, and…
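The durability idea Randy describes, where the system stores where you are in the workflow and resumes after a failure, can be illustrated with a toy checkpointing orchestrator. To be clear, this is NOT the actual Temporal API (Temporal uses event-history replay and a real server); it is just a sketch of the underlying concept, with hypothetical step names.

```python
# Toy illustration of the idea behind durable workflow engines such as
# Temporal: persist the result of every completed step, so that after a
# crash the workflow can be replayed and resume where it left off
# instead of redoing earlier steps.

class DurableWorkflow:
    def __init__(self, history=None):
        # 'history' stands in for durable storage (a database in real life).
        self.history = history if history is not None else {}

    def step(self, name, fn):
        if name in self.history:        # completed before the crash: skip it
            return self.history[name]
        result = fn()                   # run the step for real
        self.history[name] = result     # checkpoint the result durably
        return result
```

On restart, you rebuild the workflow object from the same persisted history and replay it; steps that already ran return their checkpointed results without executing again, and execution continues from the point of failure.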
Michael Stiefel: Of course, people sometimes have trouble deciding when to use choreography and when to use orchestration.
Randy Shoup: Yes.
Michael Stiefel: But as you say, you really have to think about what’s important in the problem that you’re going to solve.
Randy Shoup: I have my own answer. Because again, as a reminder for people, if these are new to you, choreography is like a dance where there are lots of events that are happening, but there’s no central coordinator that is saying, “You step here, you step there”. Whereas by contrast, again, orchestration is the orchestra conductor, tap, tap, tap on the lectern, or whatever you call it, and getting everybody to play in rhythm. So, when to use both? If this workflow is very important and is complicated, then you want orchestration, for sure. So payment processing, checkout, all these things that we’re talking about, those, in my view, absolutely should be orchestrations. Why? Because you have a state machine that you need to make sure executes durably, reliably, and completes successfully one way or another, either all done or all the way back to the beginning.
You use choreography in those cases where you don’t… I don’t want to say don’t care, but you don’t care as much. It’s not so much a state machine as an informing of other systems to do a thing, and you’re like, “What do you mean, Randy?” Well, I’ll give you an example. So, I’ve been at eBay twice. eBay has been using an internally built Kafka-like system for many years, almost 20 years. And a thing we learned, and everybody else learned at the same time, too, is that choreography is best in those cases where you don’t have a state machine. You just want to inform people and have them do stuff.
So, an example is, when you list an item on the site, we absolutely want to make sure that all the payment and the exchange of stuff actually happens, so that stuff is orchestrated. But when you list an item on the site, there are literally tens of different other things that happen. So, you list a new item, eBay takes the image that you gave them and thumbnails it, and all these different things. It gets checked for fraud. It increments and decrements a bunch of counters about the seller’s account. Right now, the seller has sold a thousand things and, yay, they now get a gold star or a platinum star. So all these different things, and none of those is a workflow, in the sense that we could and should continue the mainline work of accepting that item and putting it on the site as those other things happen in parallel.
Michael Stiefel: Because if no one gets the gold star, nothing else is dependent on that gold star.
Randy Shoup: Yes. And I want to be clear that, ultimately, you do get the gold star, but it doesn’t have to happen in a state machine-y way.
Michael Stiefel: Yes, yes.
Randy Shoup: The backstop for this example, I think, is a reconciliation: if we didn’t process that event, every so often we come around and go, how many things did you actually sell? Anyway, but I hope that explanation makes sense.
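The eBay listing example can be sketched as a simple publish/subscribe fan-out. The topic name and subscribers here are hypothetical stand-ins; in a real system the bus would be something like Kafka and the handlers would run asynchronously on their own consumers.

```python
# Sketch of choreography: the listing flow publishes one event and moves
# on; independent subscribers each react on their own, with no central
# coordinator tracking a state machine.

class EventBus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers.get(topic, []):
            handler(event)    # synchronous here; asynchronous in real life

bus = EventBus()
results = []
bus.subscribe("item.listed", lambda e: results.append(f"thumbnail {e['item_id']}"))
bus.subscribe("item.listed", lambda e: results.append(f"fraud-check {e['item_id']}"))
bus.subscribe("item.listed", lambda e: results.append(f"update-counters {e['seller']}"))

# The listing flow neither knows nor cares who is downstream.
bus.publish("item.listed", {"item_id": 42, "seller": "some_seller"})
```

The key property is that adding a new subscriber (say, a gold-star counter) requires no change to the publisher, which is exactly why this fits the "inform people and have them do stuff" cases rather than a state machine.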
Michael Stiefel: Yes. So, I want to summarize this part by saying, and maybe this is something that will appeal to both business people and engineers. We talked about how designing for failure is a reification, or an abstraction, or an implementation of what happens in the real world. Well, if you think about the real world, failure happens, and the point is you dust yourself off when you have a failure, and you get up and go on.
Randy Shoup: Yes.
You Will Fail – How Will You Respond to Failure? [35:07]
Michael Stiefel: So the real issue is not did you fail, but how do you respond to that failure.
Randy Shoup: Yes, it’s exactly about the resilience to failure. The wonderful framing, which is not mine, I think it’s John Allspaw and the whole resilience movement, but I’m going to say what these acronyms mean in a second, is: it’s not about maximizing MTBF, it’s about minimizing MTTR. So, what do I mean? MTBF is mean time between failures. And so if you’re a hardware manufacturer, a thing you want to say is, “Hey, these hard drives that I ship, they don’t fail very often”, and you’re like, “What do you mean by very often?” “Well, our mean time between failures is one in a million, whatever, or four years for this Seagate hard drive, or whatever”. But that’s not the right way to think about software. It’s not: work so hard that nothing ever goes wrong, because that doesn’t work. Things are going to go wrong. Instead, MTTR, mean time to restore. So, instead, think about, when things fail, how can we recover as quickly as possible and get back into a correct state?
Again, whether that is retrying and trying to move forward, or rolling back, or undoing, or whatever, and trying to get back to the beginning, either way. The correct thing, and we’ve learned this over time in the industry in the last, let’s call it, decade, is that systems are easier to build, much more reliable to operate, and much cheaper if you don’t try to avoid failure, but instead try to respond to failure and be resilient to it. So this is exactly the insight behind cloud computing. It’s not: have one big system that never, ever, ever goes down. That’s mainframe-era thinking.
Instead, it is: there are, in a modern data center, literally 100,000 machines, and you don’t try to make sure none of them fail, because at any given moment handfuls, maybe hundreds, are down, thousands maybe. But whatever, who cares? Because we put stuff in three different places and we can move things around quickly. And so all these patterns at the higher level are all around letting individual components or individual steps fail, and we don’t care about that because we have a higher-level correctness that we’ve layered on.
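The MTBF/MTTR framing can be made concrete with a back-of-the-envelope availability calculation: availability ≈ MTBF / (MTBF + MTTR), so shrinking recovery time raises availability even when the failure rate is unchanged. The numbers below are invented purely for illustration.

```python
# Back-of-the-envelope availability math:
#   availability = MTBF / (MTBF + MTTR)
# Same failure rate, very different outcomes depending on recovery speed.

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# One failure a month (~720 hours between failures), hypothetical MTTRs:
slow_recovery = availability(mtbf_hours=720, mttr_hours=8.0)   # roughly 98.9%
fast_recovery = availability(mtbf_hours=720, mttr_hours=0.1)   # roughly 99.99%
```

Cutting recovery from eight hours to six minutes buys roughly an extra "nine" without making failures any rarer, which is the whole point of optimizing for MTTR.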
Resilience is Not a Castle With a Moat, Alligators, and a Drawbridge [37:26]
Michael Stiefel: I think last QCon San Francisco, there was a session that we both attended. It was about security. And I apologize for not remembering the speaker’s name, but they had this metaphor of it’s not about building a castle with a moat around, and then alligators, and a drawbridge to make sure no one gets in.
Randy Shoup: Right.
Michael Stiefel: That’s not the right metaphor because the invaders will get in.
Randy Shoup: Yes.
Michael Stiefel: What do you do to section them off, and deal with the failures that inevitably will happen, because the castle will be breached.
Randy Shoup: Yes, exactly. It’s not the hard shell with the soft center. It’s instead zero trust, where you componentize and isolate all the individual things. You assume you’re overrun. Your invaders are there, they’re in the house, but what can we do to make sure the room I’m in is safe, or even if they get in the room, they can’t harm me because I wear an Iron Man suit, or whatever? So that mental model is great. The other equivalent, I hope it’s equivalent, is the componentization. Like you say, the isolation. So that’s circuit breakers, that’s bulkheading, that’s all those kinds of patterns. And thanks to Michael Nygard for writing those all up in his fantastic book, Release It! Please go buy and read that.
Michael Stiefel: I recommend everybody read that.
Randy Shoup: Yes. So Michael Nygard and Release It popularized… because he wouldn’t even himself say he invented these things, but popularized circuit breakers, bulkheading, et cetera, which are exactly these isolated components of the system that are safe. But the other way to think about it is: every mitigation or defense that we put in place is a slice of Swiss cheese, and you make sure that the holes in the layers of Swiss cheese don’t overlap. I’m making this up, but imagine five pieces of Swiss cheese, and you orient them in such a way that, for any given path, at least one of the layers blocks it. So there’s no hole all the way through.
And also, probably related to the first, but I would say it separately, the zero trust where your mental model is you’re out there naked on the internet and you need to make sure that anybody you interact with is legit. So that’s a mutual TLS, so end-to-end encryption in transit. That’s encryption at rest, that’s integrity of the messages, that’s authentication and authorization of the identities of the people that are talking to you.
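A minimal sketch of the circuit breaker pattern Randy credits to Release It! can look like the following. Real implementations, including the one Nygard describes, also add a timeout and a half-open state for probing whether the dependency has recovered; this toy version only fails fast once open.

```python
# Minimal circuit breaker sketch: after N consecutive failures the
# breaker "opens" and fails fast, so callers stop hammering a broken
# dependency and failures stay isolated to one component.

class CircuitBreaker:
    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.open = False

    def call(self, fn):
        if self.open:
            # Fail fast instead of waiting on a dead dependency.
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True    # trip: stop calling the dependency
            raise
        self.failures = 0           # any success resets the count
        return result
```

Bulkheading is the complementary pattern: besides tripping quickly, you also partition resources (thread pools, connection pools) so one failing dependency cannot exhaust capacity needed by the others.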
Michael Stiefel: All the things that the WS-* specs tried to solve; that was a big industry struggle. But I think eventually, the industry has realized that these things are important, and it’s not just about encrypting one transaction between the user and the system.
Randy Shoup: Yes.
Architecture and Team Satisfaction [40:05]
Michael Stiefel: I do want to ask you a question. I don’t know if we’ve ever discussed this before, but something that’s become interesting to me is how architecture can affect team performance and team satisfaction with their job. You must have come across this. At one level, it seems simple. For example, if you have a loosely coupled system, it makes it easier for individual teams to do their jobs, but I think there’s something deeper here. And because you’ve been both an architect and an engineering manager, and you have had lots of different roles, you must have some unique perspective on this.
Randy Shoup: Yes. I don’t know if it’s unique, but I definitely have a perspective. Off the top of my head, I would say at least two, and it’s going to grow in a moment. So number one is, at the highest level, if your system architecture matches the problem, again, this is back what we were saying before, if you take a domain-driven design approach and you can find a place in your system that matches a part of the real problem you’re trying to solve, that’s already good. Why is that good? Because it reduces the cognitive load for the people trying to solve problems, again, because it matches the problem. So, if you understand the problem like, “Oh. Well, where does the payment processing step belong?” “Oh, it’s in the payment processor”. “Okay, cool, that’s awesome”. So, matching the problem is number one.
Number two is, to your point, componentization, whether you think about that as microservices or as components in some other way: not having to think about the entire system all at once, but instead only having to think about the payment processor part or the bank interchange part, or whatever. Taking a big problem, which is the entire thing of eBay or Google, or whatever, and instead making it a much smaller problem. And then thirdly, architecture should be a tool that helps you think, and so having the tools and the patterns to do things easily. And so, we were talking about a bunch of those things, like, “Hey, if it’s easy to do workflows…”. Workflows are complicated, but if we, in our architecture, make it easy to do them, because we have either built a system that makes it easy or we have other implementations of the SAGA pattern, or whatever, that you can go look at.
So a good architecture is one where, as an architect or as an engineer, I have a lot of different tools. And I don’t mean compilers. I mean components in the system or patterns in the system that allow me to do things. Because the best architectures that people have worked in are ones where you’re like, “Oh, let’s see, I need a data store”. “Okay, here’s this menu”. “Okay, cool, I’ll take that one”. “I need to do events back and forth”. “Okay, I’ll take that”, and having all those tools. And now, I’m going to add one more, which is a paved path, like a Netflix or a Google, where all the pieces of the system are well-supported. So I can easily spin up a new service because there’s this template. Everything is in there. It’s integrated into the monitoring system, integrated into the RPC mechanism, integrated into CI/CD, blah, blah, blah.
Should Each Team Use Its Own Set of Tools? [43:14]
Michael Stiefel: When you’re talking about tools, one thing that always comes to mind is this struggle between the desire to impose that everyone uses the same tools, because it makes it easier to move people between systems or to hire people, and each team choosing the tool that’s most appropriate for them. And this extends to languages, to high-level tools. How do you feel about that?
Randy Shoup: Yes, I feel very strongly about that. You want to have both. So, the best places that I have worked, and the most effective, call it, architectures or engineering organizations, or whatever, are ones where there is a paved path, or a very small number of them. Netflix and Google are great examples. Of all the possible programming languages in the world, at least when I was at Google 10 years ago, there was good support for four: C++, Java, Python, and Go.
Other languages are allowed, but you have to roll your own, everything. It has to integrate with a monitoring system and integrate with the testing frameworks, blah, blah, blah, blah, blah. So, certainly at large scale. I’m going to make a different comment when you’re small scale, but at large scale, having a paved path that is well-supported by people whose job it is to support it. That was my team’s job at eBay, by the way, to support the frameworks, and also allow people to go off the reservation. So paved path, but you can bushwhack.
And why do you allow bushwhacking? It’s because, sometimes, a different tool is the right thing to solve this particular problem. Like some machine learning problem, let’s say: they should do it in Python because that’s the language everything is written in. And if you’re doing some other kind of system, maybe that’s Erlang. So there’s a reason why WhatsApp was only eight people when they were acquired by Facebook, or whatever, because Erlang very much matches… that whole system naturally matches the messaging problem they’re trying to solve.
Anyway, my point is, again, paved path, plus the ability to do new things. And the ability to do new things, again, matters because it allows individual teams to match the exact problem they’re trying to solve, and also it allows growth and evolution of the common framework. So if you’re in this monoculture and you never look outside, you’re stuck. And there are lots of examples out in the world where companies have gotten themselves stuck in a rut. “Okay, we’re only Java, and we’re going to keep our blinders on and never look anywhere else”. That hasn’t been a super bad choice, but there are companies in the Microsoft ecosystem, which is better now than it was 10 years ago… But you know what I mean? “Hey, we only do Microsoft”, and even the thinking about how to do distributed systems was very isolated, if that makes any sense.
Michael Stiefel: Yes, I lived in that world for a long time.
Randy Shoup: No shade on either of those ecosystems, I use them both, but you see where I’m going with that. So, I have a strong visceral belief that you shouldn’t have a monoculture at scale. Okay. Now, when you’re small, like I am, we have 100 people in the engineering organization at Thrive Market, where I work, we really should all be working on one thing. Some individual teams need to… Again, machine learning is a great example. They need to do stuff in Python, whether or not we did that elsewhere on the site. But when you’re small, it should definitely not be like every team for itself because you don’t have a lot of time to waste, a lot of resources to waste on doing things in multiple ways. So, that’s my quick thinking on standardization versus letting a thousand flowers bloom.
Michael Stiefel: I like that approach because it differentiates between the small to the large, and you can see what happens if you adopt what you say, for small teams when you grow to larger scale. Because a lot of the problems, which we didn’t talk about and it’s another whole thing we could talk about because we don’t have the time, and this podcast could be hours-
Randy Shoup: Yes, we’ll do another one, or something, if you want.
The Surprise of Large Scale [46:56]
Michael Stiefel: Right. Because what happens when you wake up and you were at small scale, and tomorrow, you got mentioned in the press and you are now at large scale all of a sudden.
Randy Shoup: Yes. Every company that we think of as large scale had that scaling event.
Michael Stiefel: Yes.
Randy Shoup: It is rare for that to have happened very slowly. It happens, but it is rare. Everything is an S-curve, but the far more likely path is: you’re chugging along, no one knows about you, and all of a sudden, kaboom, you hit something. Again, you get mentioned in the Wall Street Journal, or you reach a critical mass of people knowing about it and telling their friends, or whatever. And we don’t have time to talk about it here, but I have thought for many years about these phases of companies and products. And there’s a starting phase and a growth phase, where the J-curve, as people talk about it, or, really, the S-curve starts to get steeper and you go faster, and then it flattens out.
Michael Stiefel: So maybe some other podcasts, we’ll talk about scaling and what other else comes up.
Randy Shoup: Sure, sure.
The Architect’s Questionnaire [48:03]
Michael Stiefel: This is the point where I like to go off and ask the questions that I like to ask all my architects. I find it also adds a human dimension to the podcast.
Randy Shoup: Great.
Michael Stiefel: So, what is your favorite part of being an architect?
Randy Shoup: I mentioned it earlier, actually. It’s Gregor Hohpe’s Architect Elevator. So, I get a lot of enjoyment out of going up to the boardroom and down to the engine room. There’s something that’s just very energizing for me about being able to see things and help solve problems in the large, but also see things and solve problems in the small, and each of those lenses informs the other. What I don’t like-
Michael Stiefel: Right. What is your least favorite part of being an architect?
Randy Shoup: I haven’t had this experience a lot, but when it is not considered important or strategic to the organization or the company, I don’t like being not productive, or useful, or valuable. So, if I am in a situation where it’s not considered valuable to do this stuff, I should go somewhere else.
Michael Stiefel: Is there anything creatively, spiritually, or emotionally satisfying about architecture or being an architect?
Randy Shoup: Yes. This is where, again, I said I’m a multidisciplinary person at my core. I’m not just interested in the computer science side. I’m not just interested in the international side. We contain multitudes. And so the thing that I love about being an architect is being able to play, again, on both sides.
The thing that really resonates with me is I tend to be more of a deductive reasoner, rather than inductive. So what does that mean? Deductive is: you have a set of principles and you apply them. My sense is more people are inductive, where you look at a bunch of examples and then derive from there. I like to do both, but my go-to model is to have a model, if that makes any sense. I like to think in terms of… People can see this if they look at my talks going back almost 20 years. I like to state, “Here are the principles. We should split things up. We should be asynchronous. We should deal with failure”. So I like to take those principles and have a clear, almost platonic statement about what the principles are, and then apply them in the real world.
And then maybe orthogonally to that, I didn’t have this word when I was younger, but I’ve always tried to be a system thinker. I get enjoyment, and value, and, I don’t know, spiritual energy, I guess, from really seeing the whole board and then being able to do interesting things within that.
Michael Stiefel: Thinking about what you said, and I’ve done a lot of teaching, and what I have found is, certainly, what you say is true that most people proceed from the concrete to the abstract, rather than the abstract to the concrete.
Randy Shoup: Right.
Michael Stiefel: But I think there’s a difference between giving a talk, as you mentioned, and teaching where you want to start from the principles, as opposed to learning where you want to start with the concrete example. Because very often, the abstract principles seem too vague.
Randy Shoup: Yes.
Michael Stiefel: Because presumably, when you give your talk, you state the principles, but then you explain them with concrete examples.
Randy Shoup: Yes. You’re not implying this, but I want to say there’s nothing wrong with either model. I use both. And so it’s not like, “Oh, you can only be an architect if you do deductive reasoning and think principles first”. I’m just saying you asked a great question, which is, “What resonates with you, Randy, at a deeper level about architecture?” And what resonates with me is this idea of coming with principles and then applying them. But if you only did that, you would not be a very effective architect. And if you only did it the other way, if you only did inductive reasoning where you only looked at examples and then abstracted from there, you would not be very effective either. Both techniques are important, and I use them both all the time.
Michael Stiefel: So, what turns you off about architecture or being an architect?
Randy Shoup: When an architect behaves in an ivory tower way. Again, lots of people get excited about lots of different things, and that’s great. Again, it’s great that we have a diversity of people and approaches in this world. I do not like not being useful. And when I say useful, I mean… A lot of us do. I have the skill and capability to pontificate. I could do that, I don’t want to. Again, boardroom to engine room, I would much rather work it so that things actually matter. I’m not interested in producing documents for document’s sake. I’m interested in changing the world, or at least the company.
Michael Stiefel: In a science fiction world, you’d like to step into the UML diagram, into the box, and see what’s in the box.
Randy Shoup: Yes. And the purpose of diagramming, and the purpose of stating things and principles, and the purpose of doing architecture at all, is to solve customer and business problems. That’s what we’re here for; you’re not applying it otherwise. It is worse than useless for some fancy, well-paid person to pontificate about stuff and not have that be very directly connected to solving a business problem we couldn’t solve before, solving a customer problem we couldn’t solve before.
Michael Stiefel: Do you have any favorite technologies?
Randy Shoup: I do. I very much, obviously, am going to date myself. So, again, as I mentioned, no hiding, I started my career… I graduated in 1990, and started doing my internships a couple of years before that. I’ve always loved SQL. My starting thing was doing Oracle database related stuff, again, in my internship at Intel for a couple of years, and then I went to work for Oracle for seven years. So, nothing about Oracle database in particular, although it’s always been really good. But SQL… I think it’s just that we have yet to move past it in the data world, and this is not a bad thing. We are still using 1970s relational algebra, whether we know it or not, when doing data systems. So I think that’s wonderful.
Again, dating myself in terms of when I was last, and it was a while ago, hands-on keyboard as my primary job, but I’m really good at Java and C++, so I did a bunch of stuff there. Not so much technologies, but again, patterns and approaches, particularly at large scale. And don’t do it if you don’t need it, but a services or microservices approach, event-driven architecture, those are things that really solve real problems and I use all the time.
Michael Stiefel: What about architecture do you love?
Randy Shoup: I love finding an elegant solution to a problem. Actually, we just had this the other day. It would be too long to explain the details. Not that they’re secret, but just the other day, one of my teams at Thrive was going down a path that would work but wouldn’t be right. And so, going in and doing a little bit of Socratic method of, “Okay, let’s restate the customer problem”, which in this case, “Hey, what does the ML team need to be able to run their models in real time?” And like, “Okay, let’s explain what you need. What do you have and what do you want back?” “Okay, now that we know that, hey, let’s think about what the interface should be on the next level down”, and like, “Okay, we have a bunch of options, but this one is more natural”.
Anyway, I love being able to see a problem and helping to reframe it in a way that makes it easier and maybe sometimes even obvious to solve. It’s such a leverage point. It’s such a force multiplier to be able to… not by myself, but help us all to see, “We think this feels hard. We’re going down this thing, we’re bushwhacking, but there’s actually a really easy, or straightforward, or natural way of approaching this problem if only we think about it differently”. And so, that’s what I get a lot of enjoyment out of.
Michael Stiefel: So, conversely, what about architecture do you hate?
Randy Shoup: I don’t like deferred gratification. So, if we’re going to really put an architect hat on and do a big architect-y thing, whatever that even means, and we can do this, I want to make sure that we get value now, as opposed to, “Hey, I sketched out this fancy architecture”. “Oh, yes, we’ll action on that in two years because it’ll take us all this time to do X, and Y, and Z”. So, the deferred gratification there. And then similarly for “big” changes, and again, “hate” is too strong a term, but a thing that is hard is dealing with what I’ll call the activation energy to get the organization to think in a new way, to start doing in a new way. So I wouldn’t say I hate it, but if I can reframe it as, what do I struggle with and not enjoy, that’s it.
Michael Stiefel: So, what profession, other than being an architect, would you like to attempt?
Randy Shoup: Yes, I think I already hinted at that in our intro. I would not do this today, but the other career, which I thought was going to be my mainline career, was international law. And thankfully, I found another way. The other thing, which would always have been true but is even true now, is gourmet chef. So, my personal creative outlet in my life and big enjoyment is: I’m a big foodie on the eater side, and so, therefore, I have learned to be a foodie on the chef side. And I know, because my sister-in-law does this herself, that it’s hard to work in a real restaurant. That’s actual real work, very, very hard and uncompromising. But just from an enjoyment perspective, gourmet chef.
Michael Stiefel: Do you ever see yourself not being an architect anymore?
Randy Shoup: Yes and no. I think I will not ever stop trying to frame things in a different, and hopefully natural, and hopefully elegant way. So, from that perspective, that’s a core part of me, I can’t turn that off. Is it possible that someday I will… You know what? No, I don’t think I ever will, to be honest. Even if I, as many of my friends have done so far these days, shift into more an advisory mode with various companies and individual people coaching, or whatever, which I love and do as a side gig as well, I will never shy away from talking about the architecture stuff, if that makes any sense. So, yes, I guess I’ll never stop.
Michael Stiefel: When a project is done, what do you like to hear from the clients or your team?
Randy Shoup: Yes. First and foremost, why do we even do any of these things? It’s because it solved a real problem. So, first and foremost, I want to hear we had a problem, or we had an opportunity, and we solved the problem, or we executed on the opportunity. So that’s number one. At the end of the day, if we don’t make things better, we should do something else. We can make something else better because there’s lots of opportunities for improving. The other though for me is if I hear back, “Wow, that was really elegant how we approached that problem, that was really extensible. I”, other engineer, not Randy, “can now see how we can evolve the system in this way, and that way, and the other way”. So, I guess that’s the other thing that I would really want to see, and this is from the team. It’s like, “Oh, man, we approached this problem in a way that opens more doors than it closes”.
Michael Stiefel: I like that.
Randy Shoup: Yes.
Michael Stiefel: Well, I know there are more topics we could talk about. As we talked through ideas that came into my head, we could go down this path, but thank you very much for being on the podcast. You’re a great guest, and I hope we can do it again sometime.
Randy Shoup: Yes, me, too. Look, between us, we have many, many decades of experience, and just being able to share some of those ideas together is great. So, thanks for having me on. Happy to do it again. Loved it.
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Louisiana State Employees Retirement System reduced its position in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 3.6% in the fourth quarter, according to the company in its most recent 13F filing with the SEC. The institutional investor owned 5,400 shares of the company’s stock after selling 200 shares during the quarter. Louisiana State Employees Retirement System’s holdings in MongoDB were worth $1,257,000 as of its most recent SEC filing.
Several other large investors have also bought and sold shares of MDB. Creative Planning boosted its stake in shares of MongoDB by 16.2% during the 3rd quarter. Creative Planning now owns 17,418 shares of the company’s stock worth $4,709,000 after acquiring an additional 2,427 shares in the last quarter. Bleakley Financial Group LLC raised its position in MongoDB by 10.5% during the third quarter. Bleakley Financial Group LLC now owns 939 shares of the company’s stock valued at $254,000 after buying an additional 89 shares during the period. Blue Trust Inc. increased its holdings in shares of MongoDB by 72.3% in the 3rd quarter. Blue Trust Inc. now owns 927 shares of the company’s stock valued at $232,000 after purchasing an additional 389 shares during the period. Prio Wealth Limited Partnership acquired a new position in MongoDB in the 3rd quarter valued at approximately $203,000. Finally, Whittier Trust Co. increased its stake in shares of MongoDB by 3.9% in the third quarter. Whittier Trust Co. now owns 30,933 shares of the company’s stock worth $8,362,000 after acquiring an additional 1,169 shares during the last quarter. Institutional investors own 89.29% of the company’s stock.
Analyst Ratings Changes
MDB has been the topic of a number of recent research reports. Robert W. Baird raised their price target on shares of MongoDB from $380.00 to $390.00 and gave the company an “outperform” rating in a report on Tuesday, December 10th. Morgan Stanley raised their price target on MongoDB from $340.00 to $350.00 and gave the company an “overweight” rating in a research report on Tuesday, December 10th. The Goldman Sachs Group upped their price target on MongoDB from $340.00 to $390.00 and gave the stock a “buy” rating in a report on Tuesday, December 10th. Oppenheimer increased their target price on shares of MongoDB from $350.00 to $400.00 and gave the company an “outperform” rating in a research note on Tuesday, December 10th. Finally, Stifel Nicolaus boosted their price target on shares of MongoDB from $325.00 to $360.00 and gave the company a “buy” rating in a report on Monday, December 9th. Two analysts have rated the stock with a sell rating, four have given a hold rating, twenty-three have assigned a buy rating and two have assigned a strong buy rating to the stock. Based on data from MarketBeat.com, the stock has a consensus rating of “Moderate Buy” and an average target price of $361.00.
Get Our Latest Report on MongoDB
Insider Buying and Selling
In other news, CEO Dev Ittycheria sold 2,581 shares of the stock in a transaction dated Thursday, January 2nd. The stock was sold at an average price of $234.09, for a total transaction of $604,186.29. Following the transaction, the chief executive officer now directly owns 217,294 shares in the company, valued at approximately $50,866,352.46. This represents a 1.17% decrease in their position. The sale was disclosed in a document filed with the Securities & Exchange Commission, which can be accessed through this link. Also, CAO Thomas Bull sold 169 shares of the stock in a transaction on Thursday, January 2nd. The shares were sold at an average price of $234.09, for a total transaction of $39,561.21. Following the transaction, the chief accounting officer now directly owns 14,899 shares in the company, valued at approximately $3,487,706.91. This represents a 1.12% decrease in their position. The disclosure for this sale can be found here. In the last 90 days, insiders sold 43,094 shares of company stock valued at $11,705,293. 3.60% of the stock is owned by insiders.
MongoDB Stock Performance
Shares of NASDAQ:MDB opened at $289.63 on Monday. The business’s 50 day simple moving average is $262.66 and its two-hundred day simple moving average is $272.16. MongoDB, Inc. has a 52-week low of $212.74 and a 52-week high of $488.00. The firm has a market cap of $21.57 billion, a P/E ratio of -105.70 and a beta of 1.28.
MongoDB (NASDAQ:MDB – Get Free Report) last issued its quarterly earnings data on Monday, December 9th. The company reported $1.16 earnings per share (EPS) for the quarter, topping analysts’ consensus estimates of $0.68 by $0.48. The business had revenue of $529.40 million during the quarter, compared to the consensus estimate of $497.39 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The firm’s quarterly revenue was up 22.3% compared to the same quarter last year. During the same period in the prior year, the firm posted $0.96 EPS. On average, equities analysts forecast that MongoDB, Inc. will post -1.78 earnings per share for the current fiscal year.
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
See Also
Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB – Free Report).
Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.
Article originally posted on mongodb google news. Visit mongodb google news

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Stephens Inc. AR reduced its position in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 4.1% during the 4th quarter, according to the company in its most recent 13F filing with the Securities & Exchange Commission. The firm owned 1,014 shares of the company’s stock after selling 43 shares during the quarter. Stephens Inc. AR’s holdings in MongoDB were worth $236,000 at the end of the most recent quarter.
Other large investors also recently added to or reduced their stakes in the company. Hilltop National Bank raised its stake in MongoDB by 47.2% during the 4th quarter. Hilltop National Bank now owns 131 shares of the company’s stock valued at $30,000 after purchasing an additional 42 shares during the period. Brooklyn Investment Group acquired a new stake in MongoDB during the 3rd quarter valued at $36,000. Continuum Advisory LLC raised its stake in MongoDB by 621.1% during the 3rd quarter. Continuum Advisory LLC now owns 137 shares of the company’s stock valued at $40,000 after purchasing an additional 118 shares during the period. Versant Capital Management Inc raised its stake in MongoDB by 1,100.0% during the 4th quarter. Versant Capital Management Inc now owns 180 shares of the company’s stock valued at $42,000 after purchasing an additional 165 shares during the period. Finally, Wilmington Savings Fund Society FSB acquired a new stake in MongoDB during the 3rd quarter valued at $44,000. 89.29% of the stock is owned by institutional investors and hedge funds.
Insider Activity
In other news, Director Dwight A. Merriman sold 922 shares of the company’s stock in a transaction dated Friday, February 7th. The shares were sold at an average price of $279.09, for a total transaction of $257,320.98. Following the completion of the transaction, the director now owns 84,730 shares of the company’s stock, valued at approximately $23,647,295.70. This represents a 1.08% decrease in their ownership of the stock. The sale was disclosed in a filing with the Securities & Exchange Commission, which is accessible through the SEC website. Also, CAO Thomas Bull sold 1,000 shares of the company’s stock in a transaction dated Monday, December 9th. The shares were sold at an average price of $355.92, for a total transaction of $355,920.00. Following the transaction, the chief accounting officer now directly owns 15,068 shares of the company’s stock, valued at $5,363,002.56. This represents a 6.22% decrease in their position. The disclosure for this sale can be found here. Over the last 90 days, insiders sold 43,094 shares of company stock valued at $11,705,293. Insiders own 3.60% of the company’s stock.
MongoDB Price Performance
MongoDB stock opened at $289.63 on Monday. The firm has a market cap of $21.57 billion, a price-to-earnings ratio of -105.70 and a beta of 1.28. MongoDB, Inc. has a fifty-two week low of $212.74 and a fifty-two week high of $488.00. The firm has a fifty day moving average price of $262.66 and a two-hundred day moving average price of $272.16.
MongoDB (NASDAQ:MDB – Get Free Report) last released its earnings results on Monday, December 9th. The company reported $1.16 earnings per share for the quarter, topping analysts’ consensus estimates of $0.68 by $0.48. The firm had revenue of $529.40 million during the quarter, compared to analyst estimates of $497.39 million. MongoDB had a negative net margin of 10.46% and a negative return on equity of 12.22%. The business’s revenue was up 22.3% on a year-over-year basis. During the same quarter in the previous year, the business earned $0.96 EPS. As a group, sell-side analysts expect that MongoDB, Inc. will post -1.78 earnings per share for the current year.
Analyst Upgrades and Downgrades
MDB has been the topic of several recent analyst reports. Canaccord Genuity Group boosted their target price on shares of MongoDB from $325.00 to $385.00 and gave the company a “buy” rating in a research note on Wednesday, December 11th. Wells Fargo & Company boosted their price target on shares of MongoDB from $350.00 to $425.00 and gave the company an “overweight” rating in a report on Tuesday, December 10th. The Goldman Sachs Group boosted their price target on shares of MongoDB from $340.00 to $390.00 and gave the company a “buy” rating in a report on Tuesday, December 10th. Loop Capital boosted their price target on shares of MongoDB from $315.00 to $400.00 and gave the company a “buy” rating in a report on Monday, December 2nd. Finally, Citigroup boosted their price target on shares of MongoDB from $400.00 to $430.00 and gave the company a “buy” rating in a report on Monday, December 16th. Two investment analysts have rated the stock with a sell rating, four have given a hold rating, twenty-three have issued a buy rating and two have given a strong buy rating to the company’s stock. According to MarketBeat, MongoDB presently has an average rating of “Moderate Buy” and a consensus price target of $361.00.
Get Our Latest Report on MongoDB
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Read More
Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.
Article originally posted on mongodb google news. Visit mongodb google news
The Future of Cybersecurity: Moksha Investigates Hybrid ML Models for Critical Infrastructure

MMS • RSS
Posted on mongodb google news. Visit mongodb google news
PRESS RELEASE
Published February 16, 2025
Moksha Shah: Cybersecurity Research Pioneer
The modern world is deeply interconnected, and cyber threats are evolving so quickly that traditional security mechanisms are becoming ineffective. Moksha Shah, a researcher embedded in Information Technology, works to change that. Focusing on the intersection of cybersecurity and AI, Moksha investigates Hybrid Machine Learning (ML) models, an emerging approach that combines diverse AI methodologies to create a more intelligent, more dynamic cybersecurity system.
Moksha Shah is a Java developer with 5+ years of experience specializing in API development, backend solutions, and cybersecurity. She has designed and implemented RESTful and SOAP APIs using Java and frameworks like Spring Boot and JAX-RS. Moksha has extensive experience integrating APIs with databases like MySQL, SQL Server, Oracle DB, and MongoDB. She is proficient in developing responsive web applications using JavaScript, HTML5, CSS3, Angular.js, and JSTL. Additionally, Moksha is proficient in Python, applying it in the fields of cybersecurity, AI and Machine Learning. She has developed security solutions, automated tasks, and built AI models for threat detection and response. She is adept at using agile methodologies, including TDD and SCRUM, and has a solid understanding of building scalable, efficient, and secure web services. Moksha is also proficient in API testing and documentation tools such as Postman and Swagger, and has a strong background in backend technologies and cloud solutions.
Moksha’s research centers on the protection of critical systems, including power grids, banks, healthcare providers, and government networks. These systems are essential to society, so their security is a matter of national and economic security. With the rise of AI-enabled cyberattacks, ransomware, and nation-state attacks, it is increasingly clear that traditional security mechanisms are falling behind. Moksha’s research aims to fill this gap by applying Hybrid ML models capable of detecting, identifying, and blocking cyber threats in real time.
The Latest Project: Hybrid ML to Secure Critical Systems
In her latest project, Moksha is investigating Hybrid Machine Learning models that combine supervised learning, unsupervised learning, and reinforcement learning techniques to build a multi-layered, self-healing cybersecurity system. Compared to traditional rule-based security mechanisms, this system is more advanced in that it applies AI that learns, adapts, and defends itself.
Why Traditional Cybersecurity is Failing
Despite advances in cybersecurity, organizations are not immune to sophisticated attacks that successfully evade traditional defenses. Some of the key deficiencies of traditional security include:
- Static rule-based systems: traditional security relies on predefined rule sets and signatures, which attackers can readily evade.
- Slow threat detection: most security systems take an after-the-fact approach, detecting a threat only once the attack has taken place, at significant cost in damage and downtime.
- Inability to predict future attacks: unlike modern solutions, classic ones lack predictive intelligence, making it impossible to anticipate future cyber threats.
How the Hybrid ML Model Resolves These Challenges
The Hybrid ML model leads the next wave in cybersecurity by overcoming these weaknesses with:
- Real-time threat detection: continuous analysis of network behavior helps identify anomalies that can indicate a cyberattack.
- Adaptive defense mechanisms: unlike systems that operate statically, the Hybrid ML model keeps evolving and learns to adjust its defenses against new threats.
- Predictive security intelligence: predictive models can determine whether a vulnerability exists and whether it is likely to be exploited before an attacker acts, with better accuracy in anticipating cyber events.
- Automated response: AI-driven systems act on threats in real time with minimal human intervention.
Moksha’s research applies cutting-edge AI algorithms to establish a proactive cybersecurity framework that stops attacks before they happen, rather than merely responding to them.
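The real-time anomaly detection described above can be illustrated with a minimal sketch. The example below is a hypothetical baseline, not Moksha Shah’s actual model: it flags a network measurement as anomalous when it deviates from the running mean of earlier measurements by more than k standard deviations.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal z-score anomaly detector: a hypothetical baseline for the kind of
// continuous network-behavior analysis the article describes.
public class AnomalyDetector {
    private final List<Double> history = new ArrayList<>();
    private final double k; // threshold, in standard deviations

    public AnomalyDetector(double k) { this.k = k; }

    // Returns true when `value` deviates from the running mean of prior
    // observations by more than k standard deviations, then records it.
    public boolean isAnomalous(double value) {
        boolean anomalous = false;
        if (history.size() >= 2) {
            double mean = history.stream().mapToDouble(d -> d).average().orElse(0.0);
            double variance = history.stream()
                    .mapToDouble(d -> (d - mean) * (d - mean))
                    .sum() / (history.size() - 1);
            double std = Math.sqrt(variance);
            // Guard against a zero standard deviation for constant histories.
            anomalous = Math.abs(value - mean) > k * Math.max(std, 1e-9);
        }
        history.add(value);
        return anomalous;
    }

    public static void main(String[] args) {
        AnomalyDetector detector = new AnomalyDetector(3.0);
        double[] traffic = {100, 101, 99, 100, 1000}; // the last value is a spike
        for (double v : traffic) {
            System.out.println(v + " anomalous=" + detector.isAnomalous(v));
        }
    }
}
```

A production hybrid system would layer supervised classifiers and reinforcement-learning policies on top of such unsupervised signals; this sketch shows only the unsupervised building block.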
Main Uses of Hybrid ML in Cybersecurity
Hybrid ML models are already making waves across several domains:
- Financial institutions: real-time fraud detection that prevents identity theft and protects banking systems from AI-driven cyberattacks.
- Healthcare: protecting patient information from unauthorized access and securing IoT-enabled medical devices against hacking.
- Government and defense: strengthening national cybersecurity, countering cyber-espionage, and automating security for critical systems.
- Smart cities and IoT: detecting infrastructure anomalies, safeguarding connected devices, and improving cloud security for smart-city data.
Why Hybrid ML Is the Future of Cybersecurity
Cybercriminals are already using AI to launch persistent and evasive attacks, and these AI-generated attacks compel an immediate response from organizations. A Hybrid ML model provides that response because it is:
- Proactive: It can forestall cyberattacks.
- Smart: It learns about new threats and adapts accordingly.
- Scalable: It can be implemented on large networks without compromising speed or efficiency.
Conclusion: Preparing for Tomorrow’s AI-Powered Cyber Threats
The future of cyber defense lies in advanced, AI-based mechanisms, and Hybrid Machine Learning models are the driving engine behind this change. Moksha Shah’s research points toward a world of cybersecurity in which organizations no longer wait in reactive mode for an attack, but prevent attacks outright with self-learning, AI-driven security.
As cyber threats grow in scale and sophistication, businesses, governments, and critical infrastructure providers must turn towards AI-enabled cybersecurity frameworks in order to safeguard themselves.
Track new cyber threats with Moksha Shah’s study on the latest in Hybrid ML-based cybersecurity.
Vehement Media
Article originally posted on mongodb google news. Visit mongodb google news
Java News Roundup: JDK 24-RC1, JDK Mission Control, Spring, Hibernate, Vert.x, JHipster, Gradle

MMS • Michael Redlich
Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for February 10th, 2025 features news highlighting: the first release candidate of JDK 24; JDK Mission Control 9.1.0; milestone releases of Spring Framework 7.0, Spring Data 2025.0.0 and Hibernate 7.0; release candidates of Vert.x 5.0.0 and Gradle 8.13.0; and JHipster 8.9.0.
OpenJDK
The release of JDK Mission Control 9.1.0 provides bug fixes and improvements such as: the ability to use custom JFR event types, i.e., those extending the Java Event class, in the JFR Writer API, followed by registering those types; and the ability to use primitive types in converters. More details on this release may be found in the list of issues.
JDK 24
Build 36 remains the current build in the JDK 24 early-access builds. Further details may be found in the release notes.
As per the JDK 24 release schedule, Mark Reinhold, Chief Architect, Java Platform Group at Oracle, formally declared that JDK 24 has entered its first release candidate as there are no unresolved P1 bugs in Build 36. The anticipated GA release is scheduled for March 18, 2025 and will include a final set of 24 features. More details on these features and predictions for JDK 25 may be found in this InfoQ news story.
JDK 25
Build 10 of the JDK 25 early-access builds was also made available this past week featuring updates from Build 9 that include fixes for various issues. Further details on this release may be found in the release notes.
For JDK 24 and JDK 25, developers are encouraged to report bugs via the Java Bug Database.
Spring Framework
The second milestone release of Spring Framework 7.0.0 delivers new features such as: improvements to the equals() method, defined in the AnnotatedMethod and HandlerMethod classes, to resolve failed Cross-Origin Resource Sharing (CORS) configuration lookups; and a refinement of the GenericApplicationContext class that adds nullability, via the JSpecify @Nullable annotation, to the constructorArgs parameter listed in the overloaded registerBean() method. More details on this release may be found in the release notes.
Similarly, versions 6.2.3 and 6.1.17 of Spring Framework have also been released to provide new features such as: improvements in MVC XML configuration that resolved an issue where a handler mapping configured with an instance of the AntPathMatcher class instead used an instance of the PathPatternParser class; and a change to the ProblemDetails class to implement the Java Serializable interface so that it may be used in distributed environments. These versions will be included in the upcoming releases of Spring Boot 3.4.3 (and 3.5.0-M2) and 3.3.9, respectively. Further details on these releases may be found in the release notes for version 6.2.3 and version 6.1.17.
The first milestone release of Spring Data 2025.0.0 ships with new features such as: support for vector search for MongoDB and Cassandra via MongoDB Atlas and Cassandra Vector Search; and a new Vector data type that allows for abstracting underlying values within a domain model, simplifying the declaration, portability and default storage options. More details on this release may be found in the release notes.
Similarly, Spring Data 2024.1.3 and 2024.0.9, both service releases, ship with bug fixes and respective dependency upgrades to sub-projects such as: Spring Data Commons 3.4.3 and 3.3.9; Spring Data MongoDB 4.4.3 and 4.3.9; Spring Data Elasticsearch 5.4.3 and 5.3.9; and Spring Data Neo4j 7.4.3 and 7.3.9. These versions will be included in the upcoming releases of Spring Boot 3.4.3 and 3.3.9, respectively.
The release of Spring Tools 4.28.1 provides: a properly signed Eclipse Foundation distribution for Windows; and a resolution to an unknown publisher error upon opening the executable for Spring Tool Suite in Windows 11. Further details on this release may be found in the release notes.
Open Liberty
IBM has released version 25.0.0.2-beta of Open Liberty, which features the ability to configure the MicroProfile Telemetry 2.0 feature, mpTelemetry-2.0, to send Liberty audit logs to the OpenTelemetry collector. As a result, the audit logs may be managed with the same solutions as other Liberty log sources.
Micronaut
The Micronaut Foundation has released version 4.7.6 of the Micronaut Framework featuring Micronaut Core 4.7.14, bug fixes and a patch update to the Micronaut Oracle Cloud module. This version also provides an upgrade to Netty 4.1.118, a patch release that addresses CVE-2025-24970, a vulnerability in Netty versions 4.1.91.Final through 4.1.117.Final where a specially crafted packet, received via an instance of the SslHandler class, is not correctly validated in all cases, which can lead to a native crash. More details on this release may be found in the release notes.
Hibernate
The fourth beta release of Hibernate ORM 7.0.0 features: a migration to the Jakarta Persistence 3.2 specification, the latest version targeted for Jakarta EE 11; a baseline of JDK 17; improved domain model validations; and a migration from Hibernate Commons Annotations (HCANN) to the new Hibernate Models project for low-level processing of an application domain model. Further details on this release may be found in the release notes and the migration guide.
The release of Hibernate Reactive 2.4.5.Final features compatibility with Hibernate ORM 6.6.7.Final and provides resolutions to issues such as: a Hibernate ORM PropertyAccessException when creating a new object via the persist() method, defined in the Session interface, with an entity having bidirectional one-to-one relationships in Hibernate Reactive with Panache; and the doReactiveUpdate() method, defined in the ReactiveUpdateRowsCoordinatorOneToMany class, ignoring the return value of the deleteRows() method, defined in the same class. More details on this release may be found in the release notes.
Eclipse Vert.x
The fifth release candidate of Eclipse Vert.x 5.0 delivers notable changes such as: the removal of the deprecated ServiceAuthInterceptor and ProxyHelper classes, along with two of the overloaded addInterceptor() methods defined in the ServiceBinder class; and support for the Java Platform Module System (JPMS). Further details on this release may be found in the release notes and the list of deprecations and breaking changes.
Micrometer
The second milestone release of Micrometer Metrics 1.15.0 delivers bug fixes, improvements in documentation, dependency upgrades and new features such as: the removal of special handling of HTTP status codes 404, Not Found, and 301, Moved Permanently, from the OkHttp client instrumentation; and a deprecation of the SignalFxMeterRegistry class (step meter) in favor of the OtlpMeterRegistry class (push meter). More details on this release may be found in the release notes.
The second milestone release of Micrometer Tracing 1.5.0 provides dependency upgrades and features a deprecation of the ArrayListSpanProcessor class in favor of the OpenTelemetry InMemorySpanExporter class. Further details on this release may be found in the release notes.
Piranha Cloud
The release of Piranha 25.2.0 delivers many dependency upgrades, improvements in documentation and notable changes such as: the removal of the GlassFish 7.x and Tomcat 10.x compatibility extensions; and the ability to establish a file upload size in the FileUploadExtension, FileUploadMultiPart, FileUploadMultiPartInitializer and FileUploadMultiPartManager classes. More details on this release may be found in the release notes, documentation and issue tracker.
Project Reactor
Project Reactor 2024.0.3, the third maintenance release, provides dependency upgrades to reactor-core 3.7.3, reactor-netty 1.2.3 and reactor-pool 1.1.2. There was also a realignment to version 2024.0.3 with the reactor-addons 3.5.2, reactor-kotlin-extensions 1.2.3 and reactor-kafka 1.3.23 artifacts that remain unchanged. Further details on this release may be found in the changelog.
Similarly, Project Reactor 2023.0.15, the fifteenth maintenance release, provides dependency upgrades to reactor-core 3.6.14, reactor-netty 1.1.27 and reactor-pool 1.0.10. There was also a realignment to version 2023.0.15 with the reactor-addons 3.5.2, reactor-kotlin-extensions 1.2.3 and reactor-kafka 1.3.23 artifacts that remain unchanged. More details on this release may be found in the changelog.
JHipster
The release of JHipster 8.9.0 features: dependency upgrades to Spring Boot 3.4.2, Node 22.13.1, Gradle 8.12.1, Angular 19.0.6 and TypeScript 5.7.3; and support in the JHipster Domain Language (JDL) for plain time fields (the Java LocalTime class) without being tied to a date. Further details on this release may be found in the release notes.
Gradle
The first release candidate of Gradle 8.13.0 introduces a new auto-provisioning utility that automatically downloads a JVM required by the Gradle Daemon. Other notable enhancements include: an explicit Scala version configuration for the Scala Plugin to automatically resolve required Scala toolchain dependencies; and refined millisecond precision in JUnit XML test event timestamps. More details on this release may be found in the release notes.
Distributed Multi-Modal Database Aerospike 8 Brings Support for Real-Time ACID Transactions

MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Aerospike has announced version 8.0 of its distributed multi-modal database, bringing support for distributed ACID transactions. This enables large-scale online transaction processing (OLTP) applications like banking, e-commerce, inventory management, health care, order processing, and more, says the company.
As Aerospike director of product Ronen Botzer explains, large-scale applications all require some horizontal scaling to support concurrent load and reduce latency, which inevitably brings the CAP Theorem into play.
The CAP theorem states that when a network partitions due to some failure, a distributed system may be either consistent or available. In contrast, both of these properties can be guaranteed in the absence of partitions. For distributed database systems, the theorem led to RDBMS usually choosing consistency via ACID, with NoSQL databases favoring availability following the BASE paradigm.
Belonging to the NoSQL camp, Aerospike was born as an AP (available and partition-tolerant) datastore. Later, it introduced support for ACID with its fourth release by allowing developers to select whether a namespace runs in a high-availability AP mode or a high-performance CP mode. CP mode in Aerospike is known as strong consistency (SC) and provides sequential consistency and linearizable reads, guaranteeing consistency for single objects.
While Aerospike pre-8.0 has been great at satisfying the requirements of internet applications […] limiting SC mode to single-record and batched commands left something to be desired. The denormalization approach works well in a system where objects are independent of each other […] but in many applications, objects actually do have relationships between them.
As Botzer explained, the existence of relationships between objects makes transactions necessary, and many developers had to build their own transaction mechanism on top of a distributed database. This is why Aerospike built native distributed transaction capabilities into Database 8, which meant providing strict serializability for multi-record updates and doing this without hampering performance.
Aerospike distributed transactions have a cost, which includes four extra writes and one extra read, so it is important to understand the performance implications they have. Tests based on Luis Rocha’s Chinook database showed results in line with those extra operations, meaning that smaller transactions are affected most while overhead is amortized in larger ones. All in all, says Botzer,
Transactions perform well when used judiciously together with single-record read and write workloads.
ACID transactions display properties designed to ensure the reliability and consistency of database transactions, i.e., atomicity, consistency, isolation, and durability. They guarantee that database operations are executed correctly. If there is any failure, the database can recover to a previous state without losing any data or impacting the consistency of the data. BASE systems opt instead for being Basically Available, Soft-stated, and Eventually consistent, thus giving up on consistency.
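The overhead figures reported above (four extra writes and one extra read per transaction) imply that the fixed cost is amortized across the records a transaction touches. The sketch below simply works through that arithmetic; the numbers are illustrative, not Aerospike benchmarks.

```java
// Amortization of a fixed per-transaction cost of 4 extra writes + 1 extra
// read, as cited in the article: the larger the transaction, the smaller
// the extra work per record.
public class TxnOverhead {
    static final int EXTRA_WRITES = 4;
    static final int EXTRA_READS = 1;

    // Extra operations per record for a transaction touching n records.
    static double extraOpsPerRecord(int n) {
        return (double) (EXTRA_WRITES + EXTRA_READS) / n;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 5, 25, 100}) {
            System.out.printf("records=%3d  extra ops per record=%.2f%n",
                    n, extraOpsPerRecord(n));
        }
    }
}
```

This matches the article’s observation that small transactions are affected the most, while the overhead is amortized in larger ones.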

MMS • Aditya Kulkarni
Article originally posted on InfoQ. Visit InfoQ

Slack recently integrated automated accessibility testing into its software development lifecycle to improve user experience for individuals with disabilities.
Natalie Stormann, Software Engineer at Slack, detailed the journey in Slack’s engineering blog, communicating the company’s ongoing adherence to Web Content Accessibility Guidelines (WCAG).
Slack maintains internal accessibility standards and also collaborates with external accessibility testers. These standards align with WCAG, an internationally recognized benchmark for web accessibility. While manual testing remains important for identifying nuanced accessibility issues, Slack recognized the need to augment these efforts with automation.
In 2022, Slack started incorporating automated tests into its development workflow to proactively address accessibility violations. The company chose Axe, a widely-used accessibility testing tool, for its flexibility and WCAG alignment. Integrating Axe into Slack’s existing test frameworks presented some challenges. Embedding Axe checks directly into React Testing Library (RTL) and Jest created conflicts, further complicating the development process.
Slack’s accessibility team envisioned using Playwright, a testing framework compatible with Axe via the @axe-core/playwright package. However, integrating Axe checks into Playwright’s Locator object posed its own set of challenges. To overcome these hurdles, Slack adopted a customized approach, strategically adding accessibility checks in Playwright tests after key user interactions to ensure content was fully rendered. Slack also customized the Axe checks by filtering out irrelevant rules and focusing initially on critical violations. The checks were integrated into Playwright’s fixture model, and developers were given a custom function, runAxeAndSaveViolations, to trigger checks within test specifications.
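The filtering step can be approximated with a small sketch. The Violation shape below is a simplified stand-in for axe-core’s result objects, and the specific rule ids are hypothetical; the real integration runs AxeBuilder from @axe-core/playwright against a live page inside Slack’s own runAxeAndSaveViolations helper, whose implementation the article does not show.

```typescript
// Simplified stand-in for axe-core's violation results; the real objects
// also carry affected DOM nodes, help URLs, and more.
interface Violation {
  id: string; // axe rule id, e.g. "color-contrast" (examples, not Slack's list)
  impact: "minor" | "moderate" | "serious" | "critical";
}

// Rules a team has decided not to act on yet (hypothetical ids).
const IGNORED_RULES = new Set(["landmark-one-main"]);

// Keep only critical violations from rules the team cares about, the way
// the article describes Slack initially narrowing Axe's output.
function filterViolations(all: Violation[]): Violation[] {
  return all.filter(
    (v) => v.impact === "critical" && !IGNORED_RULES.has(v.id)
  );
}
```

Starting from only critical violations keeps the new test suite actionable; less severe impacts and filtered rules can be phased in later without churning every existing test.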
The tech community on Hacker News took notice of this blog post. One of the maintainers of the axe accessibility testing engine, commenting under the username dbjorge, wrote:
It’s awesome to see such a detailed writeup of how folks are building on our team’s work… It’s very enlightening to see which features the Slack folks prioritized for their setup and to see some of the stuff they were able to do by going deep on integration with Playwright specifically. It’s not often you are lucky enough to get feedback as strong as “we cared about it enough to invest a bunch of engineering time into it”.
To improve reporting, Slack included violation details and screenshots using Playwright’s HTML Reporter and customized error messages. Their testing strategy involved a non-blocking test suite mirroring critical functionality tests, with accessibility checks added to avoid redundancy. Developers can run tests locally, schedule periodic runs, or integrate them into continuous integration (CI) pipelines. The accessibility team further collaborates with developers to triage automated violations, using a Jira workflow for tracking.
Regular audits ensure coverage and prevent duplicate checks, with Slack exploring AI-driven solutions to automate this process.
Slack aims to continue balancing automation and manual testing. Future plans include developing blocking tests for core functionalities such as keyboard navigation, and using AI to refine test results and automate the placement of accessibility checks.

MMS • Avraham Poupko
Article originally posted on InfoQ.

Transcript
Thomas Betts: Hello, and welcome to another episode of The InfoQ Podcast. I’m Thomas Betts. Today I’m joined by Avraham Poupko to discuss how AI can be used as a software architect’s assistant. He likes teaching and learning about how people build software systems.
He spoke on the subject at the iSAQB Software Architecture Gathering in Berlin last month, and I wanted to continue the conversation, so I figured the InfoQ audience would also get some value from it. Avraham, welcome to The InfoQ Podcast.
Avraham Poupko: Thank you, Thomas. Thank you for inviting me. A pleasure to be here. The talk that I gave at the iSAQB conference in Berlin was really fascinating for me, and I learned a lot preparing it. And as anybody that has given a talk knows, when you have a good talk, the hardest part is deciding what not to say. And in this talk that I gave, that was the hardest part. I learned so much about AI and about how AI and architects collaborate. It was painful, the amount of stuff I had to take out. Maybe we can make up for some of that here in this conversation.
Architects won’t be replaced by AI [01:21]
Thomas Betts: That’d be great. So from that research that you’ve done, give us your quick prediction. Are the LLMs and the AIs coming for our jobs as software architects?
Avraham Poupko: They’re not coming for our jobs as software architects. Like I said in the talk, architects are not going to be replaced by AI and LLMs. Architects will be replaced by other architects, architects that know how to make good use of AI and LLMs, at least in the near term. It’s not safe to make a prediction about the long term; we don’t know if the profession of software architect is even going to exist. But that’s not fundamentally different from the way it’s always been.
Craftsmen that make good use of their skill and their tools replace people that don’t. And the same goes for architects, and AI and LLM is just a tool. A very impressive tool, but it’s a tool, and we’re going to have to learn how to use it well. And it will change things for us. It will change things for us, but it will not replace us, at least not in the near future.
LLMs only give the appearance of intelligence [02:16]
Thomas Betts: So again, going back to that research that you’ve done, what is this tool? We talked about the artificial intelligence. There is an intelligence there, it’s doing some thinking. Is that what you found that these LLMs are thinking, or is it just giving the appearance of it?
Avraham Poupko: In our experience, the only entities capable of generating and understanding speech are humans. So once we have a machine that knows how to talk and knows how to understand speech, we immediately associate it with intelligence. It can talk, so it must be intelligent. And that’s very difficult for us to overcome. There are things that look like intelligence. There are things that certainly look like creative thinking or novel thinking, and as time goes on, these things are becoming more and more impressive.
But real intelligence, which is a combination of emotional intelligence and quantitative intelligence and situation intelligence and being aware of your environment, it’s not yet there. That’s where the human comes in. The main thing that AI is lacking is a body. We’ll get to that. Most AIs do not have a body. And because they don’t have a body, they do not have other things that bodies give us.
Humans learn through induction and deduction. LLMs only learn through induction [03:28]
Thomas Betts: Disregarding the fact they don’t have a body, most people are interacting with an LLM through some sort of chat window, maybe a camera or some other multimodal thing. But that gets to what’s the thinking once you give it a prompt and say, “Hey, let’s talk about this subject”. Compare the LLM thinking to human thinking. How do we think differently and what’s fundamentally going on in those processes?
Avraham Poupko: I could talk about the experience and I could talk about what’s going on under the hood. So let’s go under the hood first. We’ll talk a bit about under the hood. I am a product of my genetics, my culture and my experience. And every time that something happens, I am assimilating that into who I am and then that gives fruit to the thoughts and the actions that I do.
I do not have a very strong differentiation between my learning phase and my application phase, what we sometimes call the inference phase. I’m learning all the time and I’m inferring all the time. And I’m learning and applying knowledge, learning and applying knowledge, and it’s highly iterative and highly complex. Yes, I learned a lot as a child. I learned a lot in school, but I also learned a lot today.
LLMs, their model, even if they’re multimodal, their model is text or images. And they’re trained, and that’s when they learn, and then they infer, and that’s when they apply the knowledge. And because they are a model, they’re a model of either the language or the sounds or the videos or the texts. But my world model is comprised of many, many things. My world model is comprised of much, much more than language, than sounds. My world model is composed of my internal feelings, of what I feel as a body.
I have identity, I have mortality, I have sexuality, I have a certain locality. I exist here and not there. And that makes me into who I am and that governs my world experience. And I’m capable of things like body language, I’m capable of things like being tired, of being bored, of being scared, of growing old. And that all combines into my experience and that combines into my thinking. Can an architect, who has never felt the pain or humiliation of making a shameful mistake, can she really guide another architect in being risk-averse or in being adventure-seeking?
Only a person that has had those feelings themselves could guide others. And as architects, we bring our humanity, we bring our experience to the table. For instance, one of the things that typifies older architects as opposed to younger architects, older architects are much more tentative. They’ve seen a lot, they’re much more conservative. They understand that complexity, as Gregor Hohpe says, “Complexity is a price the organization pays for the inability to make decisions”.
And if you’re not sure what to do and you just do both, you’re going to end up paying a dear price for that. And the only way you know that is by having lived through it and done it yourself. So that’s all under the hood.
I’d like to talk a bit about how the text models work. The LLMs that we have today, the large language models, are mostly trained on texts, written texts. And these written texts, let’s say, the texts about architecture are typically scraped or gathered or studied or learned or trained on books, books written by architects, books written about architects.
And our human learning is done in a combination of induction and deduction. And I’ll explain. Induction means I learn a lot of instances, a lot of individual cases, and I’m able to extract a rule. We call that induction. So if I experience the experience once, twice, thrice, four times, five times, and I start recognizing a pattern, I will say, “Oh, every day or every time or often”, and I’ll recognize the commonality, and that is called induction.
Then deduction is when I know a rule and I encounter a new case, and I realize that it falls into the category, so I apply the generic rule in a deductive manner. And we’re constantly learning by induction and deduction, and that’s how we recognize commonalities and variabilities. And part of being intelligent is recognizing when things are the same and when they are different. And one of the most intelligent things that we can say as humans is, “Oh, those two things are the same. Let’s apply a rule. And those two things are different. We can’t apply the same rule”.
And one of the most foolish things we could say as humans is, “Aren’t those two the same? Let’s just apply the same rule”. Or, “They’re not really the same, they’re different”. And our intelligence and our foolishness both run along the lines of commonality and variability.
Now, when we write a book, the way we write a book, if anybody in the audience or anybody that’s written a book knows, you gain a lot of world experience over time, sometimes over decades. You aggregate all those world experiences into a pattern, and this is the inductive phase into a pattern, into something that you could write as a rule, something that you could write as a general application.
So architects that write books, they don’t write about everything they’ve ever experienced. That would make a very, very long and very boring book. Instead, they consolidate everything they’ve experienced. All their insights from all the experience, they consolidate it into a book. Then, when I read the book, I apply the rule back down to my specific case. So the architect experienced a lot and then she wrote a pattern. Then I read the pattern and I apply it to my case.
So writing a book, in some sense, is an inductive process. Reading a book and applying it is, in some sense, a deductive process. The LLMs, as far as I’ve seen, don’t know how to do that yet. They don’t know how to take a rule that they read in a book or in a text and apply it in a novel way to a new situation. What I know as an architect, I know because of the books that I’ve read, but I also know it because of the things that I’ve tried.
I know myself as a human being. So I read the books, I tried it, I interacted, I failed, I made mistakes. And then I read the books again and I apply the knowledge in a careful, curated manner. The LLMs don’t yet have that. So it’s very, very helpful, just like books are very helpful. Books are extremely helpful. I recommend everybody should read books, but don’t do what it says in the book. Apply what it says in the book to your particular case, and then you’ll find the books to be useful and then you’ll find the LLMs to be useful.
LLMs lack the context necessary to make effective architectural decisions [10:20]
Thomas Betts: Yes, the standard architect answer is, “It depends”, right? And it depends is always context-specific. So that moving from the general idea to the very specific, “How does this apply to me?” And recognizing sometimes that thing you read in the book doesn’t apply to your situation, but you may not have known that. You could read the book and say, “Oh, I should always do this”.
Very few architecture guidelines say, “Thou shalt always do this thing”. It always comes down to, “In this situation, do this, and in this situation, do that and evaluate those trade-offs”. So you’re saying the LLMs don’t see those trade-offs?
Avraham Poupko: First of all, they will not see them unless they’ve been made explicit, and even then, maybe not. So take the architect’s question, as you said, Thomas: “What should I do? Should I do A or B?” And the answer always is, it depends. And if I would say, “It depends on what?” The answer is, it depends on context. Context being the exact requirements, functional and non-functional, articulated and not articulated, the business needs as articulated and as not.
It depends on the capabilities of the individuals who are implementing the architecture. It depends on the horizon of the business, it depends on the nature of the organization. It depends on the nature of the problem you are solving and of the necessity or urgency or criticality or flexibility of the solution. If you are going to write all that into the prompt, that will be a very, very long prompt.
Now, the architects that do really well, are often the architects that either work for the organization that solves the problem, or who have spent a long amount of time studying the organization, embedded in the organization. That’s why the successful workshops are not one-day workshops. The successful workshops are one-week workshops because it takes time to get to know all the people and forces, and then you could guide them towards an architecture that is appropriate to all the “it depends” things.
And one of the things that LLMs are quite limited in is that they do not establish a personal relationship. Now, by a personal relationship I mean this: the only thing the LLM knows about me is what I told it about me. But what you know about me is, if you told me something and I made a face or I sneered or I looked away or I focused on you very intently, you know some things interest me and some things bore me. And that is part of our relationship as humans, where we can have a trusting technical relationship.
We don’t have that with text-based LLMs, because they don’t have a body; they don’t have eye contact yet. They can’t look back. So I could stare at the LLM through the camera, but the LLM can’t stare back at me. Again, I keep saying “yet”. Who knows, maybe in 10 years there’ll be LLMs, or not LLMs but large models, that know how to make eye contact, that know how to interpret eye contact and reciprocate eye contact, and know how to think slow and think fast, and know how to be deliberate and take their time and know how to initiate a conversation.
I have not yet had an LLM ever call me and say, “Avraham, remember that conversation we had a couple of days ago? I just thought of something. I want to change my mind because I didn’t fully think it out”. I’ve had humans do that to me all the time.
LLMs do not ask follow up questions [13:47]
Thomas Betts: Yes. I think going back to where you’re saying the LLM isn’t going to ask the next question. If you prompt it, “Help me make this decision. What do I need to consider?” then it knows how to respond. But its job, again, the fundamental purpose of an LLM is to predict the most likely next word or the next phrase.
And so if you give it a prompt and all you say is, “Design a system that solves this problem”, it will spit out a design and it’ll sound reasonable, but it has no basis in your context because you didn’t provide the context. And it didn’t stop to ask, “I need to know more information”, right? You’re saying the LLMs don’t follow up with that request for more information, and a human would do that.
Avraham Poupko: I’m saying exactly that. And humans, mature humans, accountable humans, if you ask me, “Design a system that does something”, I will know, in context, if the responsible thing for me is to design the system or if the responsible thing for me is to ask more questions and then not design. And an appropriate answer is, I’ve asked you a bunch of questions. You, Thomas, told me to design a system, and I asked you three or four questions. And then I come back and I say, “Thomas, I really don’t know how to do that”, or “I don’t feel safe doing that”, or “I’m going to propose a design, but I really would like you to check the environment in which the thing is going to work because I’m a little suspect of myself and I’m not as confident as I sound”.
Mature human beings know how to do that. Or I might say, “I actually did that before and it worked, and this is exactly what you should do. And I actually have a picture or a piece of code that I’ll be happy to share with you”. The LLM doesn’t have experience. It would never say, “Oh, somebody else asked me that question yesterday, so I’m going to just give you that same picture”, or something like that. It doesn’t have long-term memory.
Architects are responsible for their decisions, even if the LLM contributed [15:41]
Thomas Betts: But it sounds just as confident, whether it’s giving the correct answer, if you know what the correct answer is or an incorrect answer. And that’s what makes it difficult to tell, “Is that a good answer? Should I follow that guidance?” So if you aren’t an experienced architect, and let’s say you’re an architect who doesn’t know, the right answer is to ask another question. And you just ask a question and the LLM spits out a response, how do you even evaluate that if you don’t have enough experience built up to say, “I know what I should have done was ask more questions”?
Avraham Poupko: So I’ll actually tell you a tip that I give to coders. Coders have discovered LLMs. They love using LLMs to generate code, and LLMs generate code blindingly fast. And the good coders will then take that code, let’s say the C or the Python, and read it and make sure they understand it, and only then use it. And the average coders, as long as it compiles, they use it. As long as it passes some basic sanity test or as long as it passes the unit tests, they will use it.
And then you end up with a lot of what some people call very correct and very bad code. It’s code that is functionally correct but might not have some contextual integrity or might not have some mental or conceptual integrity. Now, the same thing goes with an architect. As an architect, just like I would never apply something I read in a book unchecked, I read it in a book, I would see if it makes sense for me. And I could read a book and say, “No, that doesn’t make any sense, or it doesn’t make sense in my particular context”.
Or I would read a book and say, “Yes, that does make a lot of sense”. Now, I do trust the experience of the author of a book. If she’s a good author that has had a lot of experience and designed very robust systems, I will trust their experience to an extent. The same thing goes with an LLM. When the LLM spits out an answer, that is, at most, a proposal. And I will use it as a proposal. I will read it, I will make sure I understand it, and then I will use it as an architecture or as a basis for an architecture or as a part of an architecture or as a proposal for an architecture or not.
But ultimately, I am accountable. So when the system fails and the pacemaker doesn’t work and the poor person gets a heart attack, I’m the one who’s accountable for that and I should do good architecture. I can’t roll it back on the LLM and say, “Why did you tell me that I should write a pacemaker in HTML? Why didn’t you tell me that you’re supposed to write a pacemaker in machine code, in highly optimized machine code?”
I should have known that because I’m the architect and I’m accountable for the performance decisions. But it’s an extremely helpful tool as such because it’s so fast and it gives such good responses, and many of them are correct and many of them are useful.
You can ask an LLM “why” it suggested something, but the response will lack context [18:35]
Thomas Betts: Again, looking at the interaction you’d have with another human architect, if someone came to me and says, “Here’s the design I came up with, I’d like you to review it”, I’m going to ask the, “Why did you decide that?” question. “Show me your decision making process. Don’t just show me the picture of the design. Show me what did you consider, what did you trade off?” Again, my interactions with LLMs, you ask them a question, it gives you an answer. How are they when you ask why? Because they don’t volunteer why, unless you ask them. But if you ask them, do they give a reasonable why behind the decision?
Avraham Poupko: But it’ll never be a contextual answer. That means if you give me a design and I say, “Why?” and you could say something like, “Well, last week I gave it to another organization and it worked well”, or “Last week I gave something else to the organization and it didn’t work well”. Or you’ll tell me a story. You’ll tell me, “I actually did see QRS in a previous role and I saw that it was too hard to explain to the people”, or “It ended up not being relevant”, or “I used the schema database, but then the data was so varied that it was a mistake, and I think that in your case, which is similar enough, we’re going to go schemaless”, or whatever it is.
So the why won’t always be a technical argument. It might be a business argument, it might be an experiential argument, it might be some other rhetorical device. The LLM will always give me a technical answer because technically, that’s the appropriate answer for the question. It’s a different why. And a good question to ask an architect, a very good question to ask an architect, is not why, because the hard questions are questions that have two sides, right? I’m now dealing with an issue in an AWS deployment and I’m wondering, “Should I go EKS or should I go ECS? Should I go with Amazon’s own container service or should I go with Kubernetes?”
And each one has its pluses and minuses. And I know how to argue both ways. Should I go multi-tenant or single-tenant? The question should be, what, ultimately, convinced you? Not why, because I could give you a counter-argument. But given all the arguments for and against, why were you convinced to go multi-tenant? Why were you convinced to go Kubernetes? And you’ll have to admit and say, “Well, it’s not risk-free and the trade-offs to the other, and I know how to make the other argument, but given everything that I know about everything, this is what I decided is the most reasonable thing”.
Humans are always learning, while LLMs have a fixed mindset [20:57]
Thomas Betts: I like the idea of the human is always answering the question based on the knowledge I’ve had last week, previous project, a year ago. We have that, you mentioned earlier, that humans are always going through that learning cycle. It’s not a, “I learned and now I apply. I’m done with school”. But the LLM had that learning process and then it’s done. It’s baked. It’s a built-in model. Where else does that surface in the interactions you’ve had working with this? How does it not respond to more recent situations?
Avraham Poupko: So the experiences that we remember are those experiences that caused us pain or caused us joy. We’re not really statistical models in that sense. So if I made a bad decision and it really, really hurt me, I will remember that for a long time. I got fired because I made a bad decision. I got promoted because I made a brilliant decision, or everybody clapped their hands and said, “That was an amazing thing”. And they took my architecture and framed it on a wall in the hall of fame. I’ll remember that.
And if I made a reasonably good decision 10 times and the system sort of worked, that would be okay. And that’s my experience. You might have a different experience. What got me fired, got you promoted, and what got you promoted, got me fired. And that’s why we’re different. And that’s good. And one of the things that I encourage the people to do, the people that I manage or the people that I work with, is read lots of books and talk to lots of architects and listen to lots of stories.
Because those stories and those books that you read from multiple sources and multiple experience, read old books, read new books, read and listen to stories of successes from times of old and times of new, and you’ll create your own stories and notice these things. So when you, as an architect, do a good design and it’s robust and it works well and the system deploys, tell yourself a story and try to remember why that worked well and what you felt about that.
LLMs don’t know how to do that. They’re not natural storytellers. If something happens to me and I want to tell it to my friends or to my family or over on a podcast, I will spend some time deliberately rehearsing my narrative. As I’m going, as I’m preparing this story, I’m going to tell the story, “It was a dark night and I was three days late for the deadline and I was designing”. And it’s a narrative. I’m not thinking of it the first time I’m saying it. I thought of it a long time before. And part of me rehearsing the narrative, is me processing the story, is me experiencing the story.
And you know that good storytellers, if they give a story during a talk or during an event, they tell it with a lot of emotion, and you can tell that they’re experiencing it all over again. And they’ve spent a lot of time crafting words and body language and intonation. People that get this really, really well are stand-up comedians. Stand-up comedians are artists of the spoken word. If you read a transcript of a stand-up comedy set, it’s not funny. What makes a stand-up comedian so funny is their interaction with the audience, is their sense of timing, is their sense of story, is their sense of narrative, is their sense of context.
I haven’t yet seen an LLM be a stand-up comedian. It could tell a joke, and the joke might even be funny. It might be surprising, it might have a certain degree of incongruity or a certain degree of play on words or a certain degree of double meaning or ambiguity or all those things that comprise written humor. But most stand-up comedy is a story, and it’s a combination of the familiar and the surprising, the familiar and the surprising. So the stand-up comedian tells a story, and you say to yourself, “Yes, that happened to me too. It’s really funny”. Or he tells you something surprising and you say, “That never happened to me. I only live normal things”.
And that’s a very, very human thing. And part of it, of course, has to do with our mortality and the fact that we live in time and we’re aware of time. Our bodies tell us about time. We get tired, we get hungry, we get all kinds of other things. Time might even be our most important sense, certainly as adults, much more than being hot or cold is we’re aware of time. The reason that we are so driven to make good use of our lives, of our careers, of our relationships is because our time is limited. Death might be the biggest driver to be productive.
LLMs do not continue thinking about something after you interacted with them [25:28]
Thomas Betts: Then time is also one of those factors that we understand the benefits. Sometimes I think about a problem and I’m stuck in the cycle, and then next day in the shower or overnight, whatever it is, I’m like, “Oh, that’s the brilliant idea”. I just need to get away from the problem enough and think about it and let my brain do some work. And that’s not something that’s happening with the LLMs.
Going back to your example of the LLM never calls you up two days later and says, “Hey, I thought about that a little bit more. Let’s do this instead”. But that’s the thing. I keep working, that time keeps progressing, and every LLM is an interaction. Right now, this moment I engage, it responds, we’re done.
Avraham Poupko: That’s an interesting point, and I’d like to elaborate on that a bit. As a human, I know how I learn best and I know when I’m stuck on a problem. I need to go take a walk because I know myself, I know how I learn. I’m stuck on this problem. I need to call up a friend and bounce it around, or I need to take a nap or I need to take a shower or I need to give it a few days. And the answer, as we say, will come to me.
And it’s not a divine statement, but it means I know how my mind works. I know how my mind works. I’m very, very self-aware. I know how I learn well. And the good students know how they learn well and know how they don’t learn well. I have not yet seen an LLM that knows how to say, “I don’t know the answer yet, but I know how my mind works and I’m going to think about this”.
LLMs are trained on words; Humans are trained through experiences [26:52]
Thomas Betts: Yes, this is going back to, “Is it going to replace us?” And we don’t expect our tools to do this kind of thing. We expect humans to do human things, but I don’t expect my hammer or my drill to come back with a question, “Oh, I need a few minutes to think about pounding in that nail”. I expect to pick up the hammer and pound in the nail. And the fact that it is involving language is confusing, but there is no behavior, I guess is what we’re getting to.
The interaction model is, “I give you words, you give me words in response”. And if you think about it in just those terms, you have a better expectation of what’s going to happen and you just use it as a tool. But if you expect behavior, as if it’s thinking like an architect, then we’re placing human characteristics on something that has no human characteristics. That’s where we start down the slippery slope of assuming it’s going to do more than it’s supposed to do.
Avraham Poupko: That is exactly, exactly true. And it’s becoming more and more apparent to me that so much of the world wisdom and so much of my wisdom, it’s not written in words. Maybe it’s not even spoken in words. It’s written in experience, in the way I’ve seen other people behave that has never been put into written words or spoken words.
So when I watch a good architect moderate a workshop, she doesn’t say, “And now I am going to stop for a minute and let somebody else speak”. She just does it. Or, “Now I’m going to give a moment of silence”, or “Now I’m going to interrupt”, or “Now I’m going to ask a provocative question”. They just do that. And the good architects catch on that. And then, when they run workshops, they do the same because they learned it and they saw that it works. And so much of what we do is emulated behavior and not behavior that has found expression in text or in spoken word.
When we watch art, when we look at art, some art deeply moves us. Now, an art critic might be able to say what it was about the picture that moved us, what it was about the piece of music that moved us. And then the LLM will see the picture and see the words of the art critic, and it’ll see another picture with the other words of the art critic. And then the LLM will be able to become an art critic, and it’ll see a new picture and tell you what emotions that evokes.
But when the art critic tells you in words what the music or the picture made her feel, she’s not able to do it completely in words because they’re never complete. Otherwise, I wouldn’t go to the museum. I’d just read the art criticism of the museum. But I want to go to the museum, and I want to go to the concert, and I want to go to the concert with other people, and I want to experience the music. I don’t want to read about it. I want to experience. And there is a different experience in experiencing something than reading about it.
The best engineers and architects are curious; LLMs do not have curiosity [29:42]
Thomas Betts: Yes. I thought about this a few minutes ago in the conversation: the good architect picks up on these behaviors and then starts using them in the future, adapting them to new situations, saying, "Oh, I've seen someone do this". That goes back to the LLM having its learning phase and then being done learning.
I’ve unfortunately interacted with a few architects and engineers who feel like I’ve done my learning. I went to school, I learned this thing, and that’s all the knowledge I need to have, and I’m just going to keep applying that. This is an extreme example. Maybe 10, 15 years ago, someone’s like, “I learned ColdFusion. That’s all I ever need to know to write websites”. And I don’t know anyone in 2024 that’s still using ColdFusion, but he was adamant that was going to carry him for the rest of his career.
And that idea that I’ve learned everything and all I need to know is going to get me through forever, doesn’t make you a good engineer, doesn’t make you a good architect. You might be competent, you might be okay, but you’re never going to be getting promoted. You’re never going to be at the top of your career. You’re never going to grow.
And I wonder if that’s analogous to some of these LLMs. I can use that person who has this one skill because they’re great at that one skill, but I’m only going to use them for that one thing. I’m never going to ask them to step outside their comfort zone. The LLM, I’m going to use them for the one thing that it’s good at.
Avraham Poupko: That's a great analogy. Say, as a hypothetical example, I'm a hiring manager and I have two candidates: one who thinks she knows everything, or I would even say does know everything, and isn't interested, doesn't believe there's anything left to learn because she knows it all. And the other candidate admittedly doesn't know everything but is proactively curious, constantly interested in learning and experiencing new things and new architectures.
Of course, it’s a matter of scale, like everything. But I would say I would give extra points to the one that’s curious and wants to learn new things. And if you’ve ever had a curious member on your team, either as a colleague or as a manager or as a managee, people that like learning are fun. And anybody that has children or grandchildren, one of the things we love about children is that they’re experiencing the world and they’re so curious and they’re so excited by new things.
And if it’s your own children or children that you’re responsible for, maybe nieces and nephews or grandchildren, you vicariously learn when they learn. It’s a lot of fun. You become, in some sense… The parents among us will know exactly what I mean. You become a child all over again when you watch your child learn new things and experience new things to the point that you say, “Oh, I wish I could re-experience that awe at the world and at the surprise”. And LLMs have not shown that feeling.
I have never seen an LLM say, “Wow, that was really, really interesting and inspiring. I am going to remember this session for a long time. I’m going to go back home and tell the other LLMs about what I just learned today or what I just experienced today. I have to make a podcast about this”. They don’t work like that. And humans, if you’ve done something fun and you come home energized, even though you haven’t slept for a while, and there’s a lot of brain fatigue going around, you think, “Yes, that was really, really cool. That was fun. I learned something”.
Thomas Betts: I like that. Any last-minute words you want to use to wrap up our discussion?
Avraham Poupko: Well, first of all, since there are lots of architects in our audience, learn. Learn a lot. And experiment with the LLMs, and do fun things and funny things. We were going to talk about virtual think tanks and some other ideas; we'll need to save those for another podcast.
But the main thing is experiment with new technologies, don’t be afraid of failing. Well, be a little afraid of failing, but don’t be too afraid of failing. And have fun. And Thomas, thank you so much for having me here. This has been a real pleasure and delight, and I hope we have another opportunity in the future.
Thomas Betts: Yes, absolutely, Avraham. It’s great talking to you again. Good seeing you on screen at least. And listeners, we hope you’ll join us again for a future episode of The InfoQ Podcast.

Posted on mongodb google news. Visit mongodb google news

FerretDB has announced the first release candidate of version 2.0. Now powered by the recently released DocumentDB, FerretDB serves as an open-source alternative to MongoDB, bringing significant performance improvements, enhanced feature compatibility, vector search capabilities, and replication support.
Originally launched as MangoDB three years ago, FerretDB became generally available last year, as previously reported by InfoQ. Peter Farkas, co-founder and CEO of FerretDB, writes:
FerretDB 2.0 represents a leap forward in terms of performance and compatibility. Thanks to changes under the hood, FerretDB is now up to 20x faster for certain workloads, making it as performant as leading alternatives on the market. Users who may have encountered compatibility issues in previous versions will be pleased to find that FerretDB now supports a wider range of applications, allowing more apps to work seamlessly.
Released under the Apache 2.0 license, FerretDB is broadly compatible with MongoDB drivers and tools. It is designed as a drop-in replacement for MongoDB 5.0+ for many open-source and early-stage commercial projects that prefer to avoid the SSPL license, a source-available copyleft software license.
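Because FerretDB speaks the MongoDB wire protocol in front of a Postgres backend, a local trial setup can be sketched with Docker. This is a hedged sketch only: the image names, tags, and environment variable names are assumptions based on FerretDB's published container images, not details from this article, so check the project documentation for the current values.

```shell
# Sketch of a local FerretDB 2.x setup (image and variable names are assumptions).
docker network create ferretdb

# Postgres with the DocumentDB extension pre-installed (assumed image name)
docker run -d --name postgres --network ferretdb \
  -e POSTGRES_USER=user -e POSTGRES_PASSWORD=pass \
  ghcr.io/ferretdb/postgres-documentdb

# FerretDB translating the MongoDB wire protocol to Postgres/DocumentDB
docker run -d --name ferretdb --network ferretdb -p 27017:27017 \
  -e FERRETDB_POSTGRESQL_URL=postgres://user:pass@postgres:5432/postgres \
  ghcr.io/ferretdb/ferretdb

# Any MongoDB driver or shell should then connect as if to MongoDB
mongosh "mongodb://user:pass@localhost:27017/"
```

The point of the drop-in claim is the last line: existing applications keep their MongoDB driver and connection string, while storage moves to Postgres.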
FerretDB 2.x leverages Microsoft's DocumentDB PostgreSQL extension. This open-source extension, licensed under MIT, introduces the BSON data type and related operations to PostgreSQL. It comprises two PostgreSQL extensions: pg_documentdb_core for BSON optimization and pg_documentdb_api for data operations.
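At the database level this follows PostgreSQL's standard extension model. As a hedged illustration (the exact extension identifiers and any prerequisite configuration are assumptions; the names below follow the article):

```shell
# Assumed sketch: enabling the DocumentDB extensions in a Postgres instance
# where the extension binaries are already installed.
psql -U user -d postgres <<'SQL'
CREATE EXTENSION IF NOT EXISTS pg_documentdb_core;  -- BSON data type and operators
CREATE EXTENSION IF NOT EXISTS pg_documentdb_api;   -- document data operations
SQL
```

Once enabled, FerretDB calls into these extensions rather than mapping documents onto plain JSONB, which is where the reported performance gains come from.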
According to the FerretDB team, maintaining compatibility between DocumentDB and FerretDB allows users to run document database workloads on Postgres with improved performance and better support for existing applications. Describing the engine behind the vCore-based Azure Cosmos DB for MongoDB, Abinav Rameesh, principal product manager at Azure, explains:
Users looking for a ready-to-use NoSQL database can leverage an existing solution in FerretDB (…) While users can interact with DocumentDB through Postgres, FerretDB 2.0 provides an interface with a document database protocol.
In a LinkedIn comment, Farkas adds:
With Microsoft’s open sourcing of DocumentDB, we are closer than ever to an industry-wide collaboration on creating an open standard for document databases.
In a separate article, Farkas explains why he believes document databases need standardization beyond just being “MongoDB-compatible.” FerretDB provides a list of known differences from MongoDB, noting that while it uses the same protocol error names and codes, the exact error messages may differ in some cases. Although integration with DocumentDB improves performance, it represents a significant shift and introduces regression constraints compared to FerretDB 1.0. Farkas writes:
With the release of FerretDB 2.0, we are now focusing exclusively on supporting PostgreSQL databases utilizing DocumentDB (…) However, for those who rely on earlier versions and backends, FerretDB 1.x remains available on our GitHub repository, and we encourage the community to continue contributing to its development or fork and extend it on their own.
As part of the FerretDB 2.0 launch, FerretDB Cloud is in development. This managed database-as-a-service option will initially be available on AWS and GCP, with support for Microsoft Azure planned for a later date. The high-level roadmap of the FerretDB project is available on GitHub.