Presentation: What the Data Says: Emerging Technical Trends in Front-End and How They Affect You

Laurie Voss

Article originally posted on InfoQ.

Transcript

Voss: My name is Laurie. My pronouns are he and him. I have been a web developer for about 26 years now. I, for a little while there, was co-founder of a company called npm, Inc., and a couple of other startups. What I’ve always been is a keen observer of our industry. For the last 10 years, my jobs have involved analyzing data about websites, and how web developers work. More recently, using data from surveys from tens of thousands of web developers. My talk is about a model of how web technology evolves that I’ve been working on. I’ll be using examples from history to show how the model works. Then I’m going to take a leap, and try to use the model to predict where web development is going to go in the next five years or so.

The Model (Web Development Technology Cycle)

Let’s start with the model. There’s not a lot to it. The beginning of the cycle is that people have some technical problem. They try lots of different ways to solve that problem, and they get paid a lot of money to do it. Then there is a period of wild experimentation. Once everyone has seen the problem solved, they’re like, “Give me that one, I liked that solution.” Soon, instead of solving a new problem, everyone finds themselves rebuilding the same solution over and over. You get better at doing it, and you begin to form best practices for doing it properly. Eventually, the way that you solve it arrives at consensus. Then some clever person writes it down in a blog post or a book or something, and people start giving conference talks about the way to do it. Then the problem is solved, and developers get bored, because they’ve got the design patterns locked in.

Eventually, one of those bored developers realizes that they can save time and effort by building a general solution to the problem, a framework or a product or a service. This is interesting to the developer, because the framework is more interesting to build than just solving the same problem over again. Once it’s done, you never have to build that solution again, or at least that is the idea. Turning it into a product like this is commoditization. The process of commoditization is not clean or simple. Lots of people will get the idea to commoditize the service at the same time, so there will be lots of different products and services and frameworks, all at the same time, in a period of intense competition, trying to become the solution for this problem. In addition, people form religious attachments. There are flame wars. People write blog posts accusing everybody of doing it wrong. It can be really brutal.

Eventually, you do get some consolidation to one winner, or maybe just a small handful of winners. That is because it’s simpler to have one way to do things. By this point, all the solutions that are competing in the marketplace are going to be pretty good. If everybody can agree to do it one way, that is itself an advantage, because you only need one set of documentation. You can hire people knowing that they will already know how to do it the way that you do it. The economics of the situation favors consolidation. The process is also not very enjoyable for a lot of people. It generates a lot of complaints from the people who used to make easy money by solving the same problem over and over, because doing the same thing over and over is easy. They can argue that their custom solutions are going to be better because they are paying careful attention to the circumstances of each customer and each app. They’re often right about that. A custom-built solution will probably be better than an off-the-shelf solution. It’s not going to be so much better that it is worth all the extra time and money that it takes to build a completely custom solution from scratch. Especially when there is a framework out there that you can buy off the shelf, and it will be pretty good. The economics drives people towards a framework, drives people towards commoditization. In all of these cases, I could be talking about a framework or a product or a service. I’m just going to say framework over and over. In all of these cases, I mean all three of those things.

All of this leads to stage 7, which is whining about the fundamentals. It is entirely impossible to skip this step. People who are used to building a thing from scratch will start to complain that people who are using the framework are forgetting the fundamentals. This is considered to be a major problem. The fundamentals, they’re going to assert, are key to writing good software, and to doing what you do well. If you don’t know the fundamentals, you are going to be a bad person. While those people are still whining, a whole bunch of new people will show up who do not know anything other than the framework. They do not know whatever those fundamentals are that the other people are whining about. They don’t care. They are just going to be building a bunch of fun, new stuff, and solving new problems using the framework. It’s important to keep in mind that at every stage of this cycle, you’re adding new people. In fact, the number of people involved is roughly doubling at every stage of the process. There are twice as many people who know the best practices as did the experiments. There are probably twice as many people using the commoditized product as were accustomed to building things using design patterns. The people whining about the fundamentals are going to be vastly outnumbered by the people who just adopt the winning solution and don’t care about the fundamentals.

The last stage of the process is migration. This is when those who were involved at the beginning are faced with two options. They can become the people who build the framework or the product itself. Because the framework itself is always going to need to evolve. It’s going to need to change. It’s going to need to adapt. It’s going to need to expand, because the world is always changing. The people who decide to become the framework builders, the people who decide to specialize, can get really good at those old techniques that used to be bespoke. They can get so good that nobody can ever hope to approach their level of quality. That works fine, because it’s great for the framework. It’s being written by the people who are best at it in the world. It’s great for the specialists, because they will always have a job. There will always be work for specialists maintaining infrastructure at lower levels in the stack. There’s a second option, which is that you can become part of the crowd of new people who are solving new problems using that framework. Maybe you think that the old problem is boring or difficult. Maybe you just like building new things. One thing is for certain: there are going to be a lot more jobs building with the framework than building the framework itself. Most people are going to end up using the framework rather than building it. That is the full cycle. There’s wild experimentation. There’s best practices. There’s design patterns. There’s commoditization. There’s intense competition. There’s consolidation. There is whining about the fundamentals. There is mass adoption. Then there is migration, repeat.

In reality, the cycle isn’t so clean. The stages often overlap. There are usually several of these cycles going on at the same time in the industry, in different corners of technology. The result is that technology is a layer cake of the things that developers used to have to think about, that have become products or services. We call this layer cake the stack. New layers in the stack are constantly showing up as expectations of quality in the industry continue to rise, and the old ones get abstracted away or turned into some service that everybody agrees is fine. The funny thing about the stack is that while everybody agrees that there is a stack, very few people agree on what is in the stack and where it begins and where it ends. If you take one thing away from this talk, it should be that there is no such thing as the fundamentals. That is gatekeeping nonsense. If you can get your job done without knowing stuff lower in the stack than where you are, then it doesn’t matter what you don’t know. When I became a web developer, people genuinely said that because HTML is an application of SGML, you had to know SGML to really be a good web developer, otherwise you were ignoring the fundamentals. How many people have even heard of SGML?

History Of the Web

To prove how this cycle works, let’s do a lightning recap of the history of the web. I want to show you how many times this has happened, because it’s going to be difficult to swallow the prediction I’m making. I want to really ram home that this is a real cycle, and it’s very hard to break. Because you’re going to want to whine about the fundamentals, and I would like to convince you that the best thing to do is embrace the change instead. The web was born in 1989. The way that you built a website in 1989 was you wrote some HTML by hand. You saved it to a file on your hard drive. Then you ran a piece of software that spoke HTTP, and that would serve it. That was the state of the art. Then there was a period of intense competition, in which there were hundreds and thousands of competing pieces of software that served HTTP. Soon, things began to consolidate. One extremely popular option was the Apache HTTP Server. Soon, either everything used Apache or it worked very much like Apache did. A few people were still writing their own web servers, but not many, not none, because NGINX, first released in 2004, eventually displaced Apache as the market leader. There is still, and there always will be, innovation going on at levels lower in the stack, but it is not a major focus of competition for most people.

By 1994, the major innovation was the Common Gateway Interface, or CGI. CGI let you answer POST requests from browsers with dynamic responses. That was a brand-new idea. A website, instead of being static HTML, could respond to what a user had posted. It could be interactive. It could get input and respond with output. To do that, you’d have to write a piece of software that understood HTTP, and then run it on a computer under your desk. This started another period of wild experimentation. First, you installed Apache. Then you started writing a piece of software that parsed the HTTP headers yourself. Your software would glue strings together to make HTML, and it would then send them back via HTTP to the user. This too got really old really quickly. Soon there were libraries written in languages like Perl to do this tedious parsing and output for you. Then somebody wrote some libraries that wrapped C libraries for you to make personal home pages, and thus was born PHP. PHP parsed the HTTP headers for you, and then sent HTML back to the browser for you automatically. You just wrote the code that did the logic in the middle. This was a fundamental breakthrough. There was great uproar about it. People using PHP were forgetting the fundamentals. I was told in 1996, with a straight face, by other developers, that because I used PHP, I wasn’t a real developer, because I wasn’t parsing HTTP headers myself. PHP took the world by storm.
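To make the tedium concrete, here is a rough sketch of the string-gluing pattern of the CGI era, written in modern JavaScript for readability. The function and field names are mine, purely for illustration:

```javascript
// Hypothetical sketch of what hand-rolled CGI-era code did:
// parse the raw form body yourself, concatenate HTML by hand,
// and emit headers, a blank line, then the body.
function handleFormPost(rawBody) {
  // Parse "name=Ada&lang=perl" style form data by hand.
  const params = new URLSearchParams(rawBody);
  const name = params.get("name") ?? "stranger";

  // Glue strings together to make HTML, exactly as the talk describes.
  const html =
    "<html><body>" +
    "<h1>Hello, " + name + "!</h1>" +
    "</body></html>";

  // A CGI program wrote headers, then a blank line, then the body.
  return "Content-Type: text/html\r\n\r\n" + html;
}

console.log(handleFormPost("name=Ada"));
```

Note that nothing here escapes user input, which was also typical of the era, and is exactly the kind of detail the later libraries and frameworks handled for you.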

Meanwhile, over in the hardware land, sticking a computer under your desk was getting really boring. People decided that they would make whole custom-built buildings to stack computers in with power and networking and stuff, and you could colocate your servers there. Soon, somebody got the idea that driving to the colocation facility was tedious and expensive, so why not just hire some people who hung out permanently at the colocation center, and they would rack and stack the servers for you. You just hire them by paying for it with a credit card. That was much easier. That was great. There was a period of intense competition. Suddenly, there were thousands of web hosting companies. They all priced things slightly differently. They provided different options. They varied enormously in quality and especially uptime. There was a gigantic competitive fight that drove quality upwards and prices way down. This is how commoditization works.

Everybody wanted a server. Most people wanted Apache on that server. They were ok with the server being Linux. They probably wanted a database. MySQL was popular. They usually wanted PHP installed, and thus was the LAMP stack born: Linux, Apache, MySQL, and PHP. It became the default way to host a website for a solid decade there. Even though none of those companies had anything to do with each other, and Linux didn’t even have a company. Because economics is often the driver of who wins these cycles. The cheapest thing wins, even though nobody was pushing the LAMP stack as an integrated solution, the LAMP stack became popular because those things worked together and they were cheap. The operating system was free and so were PHP and MySQL. LAMP started out cheap and got even cheaper because soon everybody knew it. It became cheaper to hire people. It became cheaper to train people. It became a self-reinforcing cycle. New projects would get started on LAMP, because that was what people already knew.

What were people building on the LAMP stack? It turned out they were mostly building blogs. In fact, somebody realized that nearly every website in the world can be generalized to being like a blog. In 1999, there were 23 blogs online. In 2006, there were 50 million. This led to a period of intense competition. Suddenly there were thousands of competing blog products and services, just an absolute blizzard of them. They were all pretty much the same. The result was that WordPress won, and WordPress became a framework. It became a way to build any website, not just a blog. It was very popular. How popular? Forty-three percent of websites are powered by WordPress today. In fact, WordPress is more popular today than it has ever been. Even though these days WordPress is sometimes used as a backend with an API to feed some other frontend.

Now you’re beginning to get the idea of how this cycle works. We have a problem. We solve the problem. There’s a whole lot of logos of two dozen products that you’ve never heard of. Then three that you have heard of, because they won. This keeps happening, so let’s go faster. Introduced in 2004, and peaking in 2006, Rails changed the game for building web apps by abstracting away a huge amount of the skeleton of web apps. People complained a lot that people who were building on Rails didn’t know the fundamentals of web development. Invented in 2006, jQuery smoothed over the web’s rough corners. It got enormously popular. jQuery got so popular that its APIs got built into the web. This was funny, because there were people complaining that jQuery developers didn’t understand the fundamentals of the web. Then jQuery became the web. Anybody who understood jQuery was understanding the fundamentals of the web. Then there was GitHub. Launched in 2008, GitHub exploded in popularity. Source control had been a thing forever. Open source software at the time was distributed mostly by SourceForge. GitHub took version control and made it much easier to use. Now there’s lots of people who don’t really know the difference between Git and GitHub, and probably never use Git by itself. Back over in the land of hardware, colocation lasted a long time, as the level of abstraction for hardware. It was really good. You were renting a physical computer. You knew roughly where it was. You knew what hardware it was, what memory, what hard drive. You installed your own software on it. If it failed, you needed to physically replace it.

Round about 2006, AWS introduced EC2, which abstracted away the hardware. Now instead of renting a specific computer, you were renting the idea of a computer. It had virtual memory and a virtual hard drive. Actually, it was just a fraction of a much larger computer that Amazon owned. You probably didn’t know where it was, except that it was probably in Northern Virginia. Any hardware-level concerns were gone. You no longer knew what hardware it was running on, and you didn’t care. Knowing what kind of hardware you were using felt like a fundamental part of being a web developer until suddenly, one day, it wasn’t. That doesn’t mean that there are not people who still know about those things. Of course there are. There are thousands of people whose whole job is running power to servers and making sure the networking for them works. There are huge companies that make billions of dollars doing it. Over here in web development land, further up the stack, we no longer need to know.

Of course, in cloud computing, there wasn’t one winner, or at least there hasn’t been one yet. In addition to AWS, there’s Azure, and Google Cloud, and Alibaba, and a solid dozen others. There’s lots of competition, but it turns out you don’t need to care, because AWS has 33% market share. Everything works like AWS, even if it’s not AWS, because that’s what everyone already understands. The other ones just modeled themselves on the leader. The concept of cloud hosting has won, completely won. Cloud hosting is just how you run a service in 2022, in a way that you probably don’t even think about, in the same way that people didn’t think about whether to use LAMP or not in 2005.

Let’s hop back to the world of software. In 2009, Node.js was invented. npm was invented very shortly after that. Initially, npm was all about Node. When npm really started taking off is when it became about JavaScript in general, around 2014. npm is a really great example of how the succeeding layer is always so much bigger than the layer before. npm is essentially a distribution channel for open source, like SourceForge. SourceForge gets 15 million downloads per week, which is a lot of downloads. In the last 7 days, npm served over 40 billion downloads. In a week, React gets 15 million downloads all by itself, as much as everything on SourceForge combined. We’re going to be talking more about React in a bit. This is not to make fun of SourceForge; my point is that the jump in scale is a key part of this cycle. WordPress was not twice as popular as all of the blog software that preceded it. It was hundreds of thousands of times more popular, just many orders of magnitude bigger. It is in a totally different space.

What’s Happening Right Now?

Let’s skip ahead to the present. Take for granted that Docker is a framework, that Kubernetes is a framework on top of that framework. TypeScript is a framework that lives on top of JavaScript. All of these went through the same cycle or are going through the same cycle. I could explain them all. If there’s one thing that you’ve picked up from this talk already, it is that web developers get bored easily. Having toured all of those past cycles, where are we now? There are two big changes that I see that are currently at stage 5, intense competition. The first is serverless. That is the thing that is happening right now. We’d been abstracting away hardware for a while, so why not go even further? AWS released Lambda in 2014. Forget hardware, forget virtual servers, forget an operating system or server software, what if you could just deploy functions directly? You just write some code, and AWS makes sure it gets the input and returns the output to your browser at essentially arbitrary scale. We call this phase serverless. I’m still mad about that name, because there are more servers than ever before, but that’s what we called it. The serverless revolution is far from done. There are still lots of people deploying containers. There are tons of people competing to be your serverless platform. Our survey data in 2021 said that 46% of developers were using serverless functions. As a sneak peek of the survey data that we will be releasing at Jamstack Conf, now 70% of people are using serverless. It has become fully mainstream. It is becoming the normal way to build things. Soon, there will be consolidation, and there will be people whining that people have forgotten the fundamentals of servers.
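The shape of that model is worth seeing in code. This is a minimal sketch of a serverless function; the event and response shapes loosely follow the AWS Lambda proxy convention, but the field names here are illustrative rather than an exact platform API:

```javascript
// A minimal sketch of the serverless model: you write only the
// function; the platform wires up HTTP, routing, and scaling.
// Event/response shapes are loosely Lambda-proxy-style and vary
// by platform; treat the exact field names as illustrative.
async function handler(event) {
  const params = event.queryStringParameters || {};
  const name = params.name || "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}` }),
  };
}

// Locally it is just a function you can call; in production, the
// platform invokes it for you on every incoming request.
handler({ queryStringParameters: { name: "QCon" } }).then((res) =>
  console.log(res.statusCode, res.body)
);
```

The appeal is the same as every earlier layer of the stack: the parsing, routing, and scaling that used to be your problem are now somebody else’s product.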

If you thought my talking about npm a little while ago was a self-serving reference, you’re going to really love this next bit, where I talk about Netlify, where I work right now. Because the other technology that I see at stage 5, intense competition, right now is the one represented by services like Netlify. We can loosely refer to that type of service as the Jamstack. Assume you’re using source control. Assume you want a CDN. Assume that you are doing continuous integration. Assume you want serverless functions. Assume that you don’t want to have to set all of that stuff up yourself, or host it yourself, or make sure that it works together really neatly, then you get Netlify. Netlify can be thought of as a framework for deploying and testing and hosting a website. It solves a bunch of problems that were getting really boring for all of us. Netlify is currently in the intense competition phase. AWS has Amplify. There’s Vercel. Azure probably has some service that does this. All of them are doing the same thing. They’re taking a thing that used to be your concern, how to run tests, how to deploy, how to preview changes before putting them into production, and they’re turning that into one thing that you don’t need to think about anymore. Who’s going to win? Obviously, I have made my bet by going to work for Netlify. The Jamstack concept, the idea that this should be a single thing that you don’t have to think about, that is well on its way to winning the market.

Now let’s talk about React. Introduced in 2013, React is enormous. React’s model for encapsulated components solved a huge pain point in web development. It also went massively against the received wisdom of how you should write web pages at the time. It threw HTML and JavaScript in together. Sometimes it throws CSS in there. Sometimes it writes the CSS in JavaScript, even more controversially. That was not how you were supposed to do it, or at least that was what everybody agreed at the time that React was released. React demonstrated that web development best practices had got stuck at a local maximum. The result was that React exploded in popularity and took over the web. React’s share of web development right now is almost unprecedented. In our latest survey, 71% of developers said that they have used React this year. React is well into stage 8. There are tons of people whining about the fundamentals, about people who only know how to build websites using React and don’t otherwise know how to build websites. It’s just as much nonsense as it was before. I’ve only seen a frontend framework get as big as React once before in my life, and that was jQuery. We were just talking about what happened to jQuery. jQuery got so big that browsers adopted it and built it into the web API, and it became the web. That could be what happens to React. It would certainly make our web pages lighter if we didn’t have to ship the React library to every single web browser in the world. How’s that going to happen? Is JSX going to become part of the web? Are we going to start building in the reactive model? I feel like something has to give at this stage, because it’s just so big.

One thing that we have already is web components. These are the native web components. They’re highly performant. They’re built into browsers. They encapsulate code behavior and style like React does. The syntax is really annoying. What I would love personally, is if somebody found a way to make React syntax work with native web components, and then we could all be happy. Even with their current issues, web components are making their way into the mainstream. Thirty-three percent of developers in our survey this year, said that they are regularly using web components, which is solidly mainstream. One way that you can tell that a solution is likely to win in this marketplace, is if some people decide it has already won. They start building new things assuming that you’ve already used the previous solution. They’d say, you’re obviously on EC2, so this is how you deploy this thing to EC2. Or they’re like, you’re obviously using WordPress so here’s a plugin. That’s happening to React. Today, there are a whole bunch of new frameworks, like Astro, and Remix, and SolidJS, and Svelte that are built with the assumption that you’ll be using React. They assume that you know how React works and that you are happy with it. They focus on solving other parts of the problem.

What’s Next?

What happens if we assume that React is going to win? Maybe it will get absorbed into the web. Maybe it will stay a framework. Assume it hits 80% or 90% adoption, can we predict the next turn in the cycle? Can we predict what happens next? That depends what you think the problem that we’re solving is. Let’s look at some of the candidates. Could it be the metaverse? Not yet. They haven’t even figured out how to draw feet. They said that they had, and then it turned out that they had actually faked the video where there were feet. They had to admit that they haven’t actually figured out how to draw feet yet, which is even more embarrassing than not being able to draw feet in the first place. The metaverse is clearly not ready. Nobody’s building en masse for AR yet. Barely anything is going on. What about Web3? No. I could do a whole talk about the problems that come from modeling your entire web as a series of financial transactions. We don’t need to argue about the merits of crypto, we can just look at the data. The number of people building in this space is still tiny. Again, a preview of the results I’m going to show at Jamstack Conf: only 3% of devs in our survey said that they were regularly building things using any of the Web3 technologies we asked about. Then there’s Web5, which I thought was a joke, but Jack Dorsey has actually announced Web5 and has a whole company building stuff around it. There is no Web4, because you take Web2 and you add Web3, and that equals Web5. The future is definitely not whatever thing that Jack got high and came up with and thinks is Web5. The way to figure out what is next is to look at what happened before: the thing that gets abstracted away is the thing that we got tired of, the thing that we got bored of solving. That’s what’s going to happen. That’s what’s about to enter stage 4.

We abstracted away the hardware and the operating system and the server, and right now there is intense competition between frameworks. We think we might see a winner. What happens if it’s React? What happens if people get bored of React? What happens if they try to abstract React away? I’m going to show you a product called React Bricks. Three years ago, I predicted that somebody would build a product like this. A couple of months ago, they DM’ed me on Twitter and said, we built that product that you said somebody would build. I’m just going to show you the general idea of React Bricks. You run npx create-reactbricks-app. It creates a development environment for you that runs on your local machine. This is what it looks like. It’s a rich editor for a web app, and it runs on localhost. React Bricks allows you to build a multi-page web app by dragging and dropping prebuilt React components. You can add content within the UI directly by just typing it in. You can use the sidebar to customize content and behavior. You don’t have to know how to code to use React Bricks. If you do, you can also create your own reusable components. You can add more bricks, as it were. The components are just React. They are React with some custom elements that tell the UI where you can enter text, and what the sidebar properties are that you can modify. The components encapsulate code, they encapsulate behavior, but they do not contain the content. The content lives in a database that is separate and attached to the UI.
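The architecture being described, components that encapsulate markup and behavior while content lives in separate data, can be sketched in a few lines of plain JavaScript. This is a hypothetical illustration of the idea, not the React Bricks API; every name in it is invented:

```javascript
// Hypothetical sketch of a visual-builder architecture: "bricks"
// encapsulate rendering and declare which fields an editor may
// change, while page content is plain data stored separately.
const registry = new Map();

// A brick declares its editable fields alongside its render logic.
function registerBrick(name, { editableFields, render }) {
  registry.set(name, { editableFields, render });
}

registerBrick("hero", {
  editableFields: ["title", "subtitle"],
  render: (content) =>
    `<section><h1>${content.title}</h1><p>${content.subtitle}</p></section>`,
});

// Page content is pure data, the part a visual editor manipulates.
const page = [
  { brick: "hero", content: { title: "Hello", subtitle: "Built by dragging" } },
];

// Rendering merges content into the registered components.
const html = page
  .map(({ brick, content }) => registry.get(brick).render(content))
  .join("\n");

console.log(html);
```

The point of the split is that the editor only ever touches the content data, so non-developers can assemble pages while developers maintain the bricks.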

Imagine a world where React Bricks gets really popular. There would be enough existing components that you would almost never need to write your own component. You’d hardly ever have to fall back to the code. If you wanted a blog, you’d grab a blog component and drag and drop it. You want images, throw in another block. You want a Stripe integration, there’s a block for that. With three drags and drops, you can create an entire e-commerce site. You could build a whole website without knowing any React at all. In fact, you could build an entire website without knowing any HTML. That is where we get to the point where I hate my own prediction, because a web developer who doesn’t know HTML doesn’t know the fundamentals. How can you be a web developer and not know HTML? I hate this prediction, even though it’s my own prediction, because I love HTML. I have been writing it for 26 years. That’s what makes me believe that this is a persuasive vision of what the future could be like, because it makes me deeply uncomfortable. The thing that wins is usually the thing that gets that reaction at first.

If you think about it, this is how web development was always supposed to work. The very first web browser, WorldWideWeb itself, was a WYSIWYG editor. The web was invented by Tim Berners-Lee from the get-go with a drag and drop GUI in mind. That was what he thought it would be. What could be more fundamental than fulfilling that original vision? Once again, the economics are pointing the way. Web developers have never been in as much demand as they are now. We earn so much money, and we are effectively building the same thing over and over. A login system, a list display, a search bar, an address form. People have got very excited about design systems recently, and those are an important and necessary step. They’re literally the design patterns. The next phase is automating them. We can’t just stop there. Because the fact of the matter is that the world needs far more websites than the current set of web developers can supply. That is a problem for the whole world.

We all earn tons of money, and yet websites still suck. Every single time I’m using some company’s website, it sucks. That is because it’s too hard. It is still too hard to build websites. There’s still too much effort involved, and there aren’t enough of us. That is making the quality of our experiences suffer. For websites to stop sucking, there need to be 10 times as many of us. The way that you do that, the way that you get there, is you make web development 10 times easier than it is today. This is one of the ways that you could do that. There are other possibilities of how you could do that. One possibility for accelerating a web developer is GitHub Copilot, which predicts what you’re going to write using machine learning and essentially autocompletes it. This, to me, seems like solving the wrong problem. If I have to write the same code so often that ML can predict what it is going to look like, then I should not be writing that code at all. That code should be in a component and I should just use the component. I shouldn’t have to write anything.

Also, you may be thinking that this is scary. Web developers who don’t know HTML: it is scary. If the future happens this way, then you are going to find yourself in another one of those great migrations that I was talking about. You will, at that point, have three options, or maybe four. The most obvious option is the default: do nothing. If you already work lower down the stack, you’re racking and stacking hardware, for instance, or you’re writing database software, these changes in the frontend world are not going to affect you very much. Your skills will remain in demand. What will change is how you’re delivering them and who you’re delivering them to. If you do interact with frontend code, it’s almost certainly going to affect you. The first option is to specialize. Become one of those people who are building the components themselves. Like I said, we’re always going to need those. Things are going to be changing. Requirements are going to be changing. People are going to be adding new functionality. Things will need to be maintained, adapted, and expanded. The developers who can build these components and build them really well are going to be very much in demand. Everybody is going to want them. They’re going to do what specialists always do. They’re going to become so good at building these components that it’s not going to be worth rolling your own.

The second option is to become one of the people who are using the framework and just start building sites this way. You can become the wise elder of the space by slapping components together more quickly than anyone else knows how to do it, because you got on board early. Then the third option is to be the person who invents this framework in the first place. You could be the person who solves this problem, because I want to be clear, the chances of the winner being React Bricks specifically at this point are basically zero. We are at the phase of wild experimentation in this space. There are a solid dozen projects that I know about, that are taking a swing at letting people build websites this way. Not everything is working yet. Not every part of the problem has been solved yet. There are still some rough edges. I am taking 26 years of experience as a web developer and 10 years of experience looking at data, and I’m staking my reputation, I’m putting a sign in the ground that says this way to the future. Of course, I could be wrong. I’m often wrong. I’ve been wrong about all sorts of stuff. This feels inevitable. If I’m wrong, then we can all come back to this conference five years from now, and you can point at me and laugh. Maybe we’ll all be at Bricks Conf, and you can say that you heard it here first. Either way, I hope I’ve given you something to think about, about your career, about our industry, and about what the future could hold.

Questions and Answers

Dunphy: Since you work at Netlify, you’re mentioning Netlify as a Jamstack framework. I find that’s an interesting concept. Can you briefly talk about what it is that Netlify is working on today that you’re most excited about or recent releases that you’re most excited about?

Voss: I think the direction that Netlify has been moving in for a while now, and all of the other Jamstack services are moving in the same direction, is to expand the capabilities of the Jamstack. The Jamstack initially got a reputation for static sites. Those two phrases used to be interchangeable. Now I think the most exciting thing about Netlify and other Jamstack platforms is the ability to run serverless functions just by throwing them into your app. In particular, Edge Functions, which are extremely fast, close to the user. Functionality that lets you really superpower your website in a way that’s far past those static-site origins that people think of when they think of Jamstack.

Dunphy: Now, back to your survey. The data says that React is dominating the web. What does the data say about alternatives like Vue, or Astro?

Voss: Vue is the most interesting one. The data said that this year, Vue lost user share. That was a surprise to us, because in previous years it had been gaining share, and we were expecting it to continue to gain share. I think personally that that might be a blip. I think it might manage to return to growth, because the people who use Vue are very happy using it. They are just less happy than they were last year. Vue has always been, to my mind, a possible successor to React. If it’s beginning to decline in user share, that means it’s missed its window, and React will continue until it hits 80%, 90% share. I still think the potential is there for it to turn things around. Competitors other than Vue are also much smaller, but it’s very hard to get over the moat that React has built. They definitely have their own niches. They definitely have good use cases. In terms of being a logical successor, I don’t really see one.

How is React Bricks different from any of the many other WYSIWYG editors that have come and gone over the years?

It’s not obvious from the high-level demo that I gave. What React Bricks does that is different is that you are building a functional piece of software. You’re not just saying, this is what it looks like and this is the text that I’ve put in. You can connect the components to each other. You can say, when you submit this form, the data goes over here, it goes to this backend, it gets this response, and that response goes here. Which is very much what you do when you are building a React app; you just do it in code. Because most of the logic of a React app is in the API. If you know how to process the API response, that’s like 90% of what your React app is usually doing. It turns out, you can express that using drag and drop rather than using code. That’s what sets it apart from other WYSIWYG editors, because it’s not just what you see, it’s what you don’t see.

Dunphy: I want to hear your thoughts about code sharing business logic or UI components between web, mobile, and using React and React Native.

Voss: I think we got a question earlier about whether React’s dominance will extend to mobile applications. I don’t think it will be React itself. I think React has had enough time in the marketplace that if React was going to become a major force in mobile development, the way that it has on the web, you would be seeing that already. The fact that it hasn’t implies there’s something that makes it not good enough for mobile developers, so they are sticking with what they know. However, if you look at SwiftUI, by Apple, it looks a lot like React. It’s not React, but it has many of the same concepts. If you were jumping from React to SwiftUI, you’d be like, I see how this works. It’s just components and props. That’s how React [inaudible 00:41:49] is translating to mobile: it’s taking the concepts, but it’s not necessarily taking React itself, not necessarily running React Native on phones.

Dunphy: There’s definitely a lot of reasons I think that React has not seen dominance on mobile, not the least of which is many issues with the platforms themselves, specifically, Apple. Apple seems to like their own little bubble. They like to put hurdles up for anyone who deviates from their recommended methodologies that either mess with their version of doing things or more importantly, their purse. Having React subserve this is not something that they’re too excited about. I think that’s one roadblock, and there are many others.

Voss: What about low code?

React Bricks is a low-code platform. You can type in some code if you really want to. I think low code is a stepping stone. If you think about people using WordPress, I think WordPress has for a very long time stealthily been a low-code platform. It mostly builds your website for you. If you’re using WordPress, you might tweak your CSS a little bit, or you might set up a plugin that requires some config files, but you’re not writing a lot of stuff. You can do a lot in WordPress knowing only a little bit of HTML. I don’t think low code’s a revolution. I think low code’s been here for a while. I think the revolution is no code. We haven’t seen that really blow up yet. I think low code is already obviously here. Things like Webflow and Wix are, in fact, low-code platforms already.

Dunphy: One thing, speaking of WordPress, that I found interesting, is it has the most adoption but the lowest satisfaction rate. Do you think that’s a result of a bias towards a more modern Jamstack-y survey respondents, or do you think this is actually representative of the community as a whole? That WordPress, maybe is seeing its last decade, perhaps, before it gets usurped, or do you think it’s going to keep going?

Voss: jQuery is still kicking around, and it’s 15 years old. We don’t need jQuery anymore because its functionality is built into the web platform, but something like 50% of people are still using it. WordPress will be around a lot more than a decade. We do see in the survey that its share of usage is declining. I do think it has begun the downslope. Web development moves away from tools slowly. It adopts new ones very quickly, but it moves away from old ones very slowly. That process has just begun. It will take a long time before it’s really evident, but I think it has started.

Dunphy: Can you talk more about improving the quality of websites and web apps? My thing is that improvements in JS frameworks like React don’t necessarily lead to better quality products, just hopefully a better experience for developers.

Voss: I think it’s correct to say that React in particular has been optimizing for the developer experience for a long time, and has to some degree sacrificed the user experience to do it. My position is that the net benefit to everyone is still better. Each individual React website might be a little bit slower, like I said, than a completely custom solution. The result of using React is that we are able to build twice as many or 5 times or 10 times as many websites as we used to. Websites have gotten a lot better since React came around. It’s because React accelerated us and it made us able to put out more stuff more quickly. We have seen more pretty good websites at the cost of a very small number of very good websites. Personally, I’m ok with that tradeoff.




Presentation: 24/7 State Replication

MMS Founder
MMS Todd Montgomery

Article originally posted on InfoQ. Visit InfoQ

Transcript

Montgomery: In this session, we’ll talk about 24/7 state replication. My name is Todd Montgomery. I’ve been working around network protocols since the ’90s. My background is in network protocols, especially as network protocols, networking, and communications apply to trading systems in the financial world. I’ve worked on lots of different types of systems. I spent some time working with systems at NASA early in my career, which was very formative. I have also worked on gaming and a lot of other types of systems. I’ll try to distill some of the things that I’ve noticed, especially over the last few years, with systems that are in the trading space, around 24/7 operation.

What’s Hard About 24/7?

What’s so hard about 24/7? If you are doing something in retail, or something which is consumer facing, you are probably wondering why we’re even talking about 24/7. Don’t most systems operate 24/7? The answer is many do, but they’re not the same. Financial systems in trading almost all do not operate 24/7. They have periods where they operate, and periods where they do not. The answer is, there’s nothing hard about it, except when the SLAs are very tight. The SLAs need to be met contractually. The fines that can be levied on an exchange, for example, can be extremely punishing if it is unavailable during trading hours. It’s not simply that it’s ok to take 10 seconds, for example. Ten seconds is an eternity for most algorithmic trading. Components need to be upgraded, often. Upgrading components should not feel like a chore; it should be something that is done often, something that is done continuously. It does not stop. This is hard for a system that has to operate in a way that it is always available, always providing its SLAs even when it’s being updated. Transactions must be processed in real time. Almost all systems have that requirement if they’re doing something that involves business. It’s not always the case when you are doing things that involve consumers, for example. These all do have some differences.

When we think about this, we also have to think about disaster recovery. In fact, we’ll talk a lot about disaster recovery. Disaster recovery is very important for applications because of the cost of losing some transactions or data. This is not the same for all businesses. Not all businesses are equal in this regard. We’re going to actually use one term, but I wanted to bring two in. Recovery time objective is one thing we will look at. That gives us an idea of what returning to operation looks like. Recovery point objective is a term that is also used with this, in terms of data protection. We’re not going to actually talk about recovery point objective. It’s its own topic that we could make a whole talk about. When we’re talking about 24/7, we’re really talking about business continuity and business impact. Just being able to continue operation when bad things happen that we’re not planning for is a business differentiator. Also, in trading, it can be table stakes. In the EU and the UK, for example, systems that offer trading services have to be available, have to have a certain minimum capacity to operate in a disaster recovery case, have to be able to resume operation, and have to be able to demonstrate it. Those are not things that are easy to do, most of the time.

Outline

Hopefully you’re thinking by now that maybe it’s not always the case that this is easy. Maybe it’s not always the case that it’s something we’re already doing. This can be difficult. We’re going to break this down into a couple of different areas. The first we’re going to talk about is fault tolerance, because the fault tolerance model that is in use for an application largely determines how 24/7 is done. Disaster recovery as well, because that is also part of it: how fast can you return to operation when you have to recover from a disaster? Also, upgrades. Components need to be upgraded, so you’re operating in an environment where sometimes things are just not on the same version. Hopefully, we’ll have some takeaways from this.

Fault Tolerance

First, fault tolerance. Providing fault tolerance is not difficult. There are lots of ways to do it. In fact, you could have a piece of hardware with a service running on it, and a client, and they’re operating just fine. You restart the service when it goes down, everything’s happy. Your first step in making this a little more fault tolerant than just restarting is you may put it on the cloud. That’s perfectly fine. That provides you with some easier mechanisms for restarting services, or lets you migrate services a little more easily, giving you some protection of data in the event of things not going very well. You can also duplicate it and shard. You could partition your dataset so that things are handled by specific servers. This provides you with some forms of fault tolerance. It does give you a few things to think about. For example, now you start to have clients that could connect to all of the servers, or one of them, or route to them. You have to think about that. We do see systems that operate in some cases just like this. There’s nothing wrong with that. In fact, there are a lot of systems that work very well in this regard. When the problem domain fits, this is a perfectly fine model. It does have one potential problem that is tough to deal with, and that is where all of the problems start to occur: the state that is in these services. A lot of times what we do is kick the can down the road. We use something else to hold the state: a database or a cache or something else that it is pushed off to. Essentially, it is still the same problem. That state is really what we need to be fault tolerant. That is fault tolerance of state. The state that we have needs to be tolerant to losing members, services, machines, everything.

When you start looking at this, there is a continuum between partitioning that state for scaling and replicating that state for fault tolerant operation, where it is replicated in several places. There are lots of ways to address this, and lots of systems do address this in different ways. We’re going to talk about one specific one, because it is a semantic that has become very popular within certain types of problems, many of the ones that I work with in trading, and also outside of trading. That is the idea of having a model where you have a contiguous log of data, or log of events, with the idea of a snapshot and replay when you restart. In other words, you have a continuous log of events, and you can periodically take snapshots. You can then restart either by rehydrating the previous state from the log, or by loading a snapshot and then replaying from there. Let’s take a look at this in a little bit more detail.

Let’s say we have a sequence of events. This could be a long sequence of events. It really doesn’t matter. We do have some state that our service goes through from one event to the next. It’s compounding the state. There might be a particular point where a lot of the previous state just collapses, because it’s no longer relevant. Think about if you’re trying to sell a particular item and you sell out of it: well, once you’ve sold out of it, you don’t have any more to sell. Why keep it around? That idea also applies to things like trading, like an exchange. You sell a commodity. You sell all that is available. You don’t have to keep track of what is currently available to sell because there are no more sellers in the market. All the buyers have bought up everything from the sellers at that particular time. Think of it as your active state. Your history is not part of that. It’s the active state of what is going on. At some point, let’s say after event 4 here, you took a snapshot of that state. That’s your active working set; you take a snapshot of it. We’re going to use the term snapshot, but a checkpoint, or various other terms, there are all kinds of different ways to think about this. The idea is that you roll up the current state into something which can then be rehydrated. You would save that, and then the events that happened after that. The idea here is that you would have a contiguous log of events, with state associated with it as it moves along. You can equate the left and the right here with the idea of having a snapshot that rolls all that previous activity up into the current active state. The left and the right here should have the same state at the end. They should have the same state at each individual place in that log.
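The snapshot-and-replay equivalence just described can be sketched with a toy example. The event format and the inventory-style state below are invented for illustration, not anyone’s real wire format:

```python
# A toy log-driven state machine: events mutate a dict of open quantities,
# and sold-out entries collapse out of the active state, as in the talk.

def apply_event(state, event):
    """Apply one log event to the active state, dropping entries that collapse."""
    kind, item, qty = event
    if kind == "add":
        state[item] = state.get(item, 0) + qty
    elif kind == "sell":
        remaining = state.get(item, 0) - qty
        if remaining <= 0:
            state.pop(item, None)  # sold out: no longer relevant, so drop it
        else:
            state[item] = remaining
    return state

def replay(events, snapshot=None):
    """Rehydrate state from scratch, or from a snapshot plus the log tail."""
    state = dict(snapshot) if snapshot else {}
    for event in events:
        apply_event(state, event)
    return state

log = [
    ("add", "AAPL", 100),
    ("sell", "AAPL", 40),
    ("add", "MSFT", 50),
    ("sell", "MSFT", 50),   # MSFT sells out and collapses out of the state
    ("sell", "AAPL", 10),
]

# Snapshot after event 4, then replay only the tail: the "left" and "right"
# of the picture end in the same state.
snapshot = replay(log[:4])
assert replay(log[4:], snapshot) == replay(log)
```

The final assertion is the whole point: loading a snapshot and replaying the tail must land on exactly the state you would get by replaying the full log.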

This allows us to build basically clustered services. Now we can have services that are doing the same thing, from the same log of events, the same set of snapshots, everything. In fact, they’re replicas; this is called a replicated state machine. If these replicas have the same event log and the same input ordering, in other words, one event comes right after another event and that’s always the case, they don’t just suddenly flip, and you can keep those logs locally, you now have a system which can be deterministic. You can play this forward. You can restart it. It should always come back to the same state. That’s an interesting thing, because the system, given the same set of input, will always result in the same state. Those checkpoints, or those snapshots, are really events in the log, if you think about it. They roll up the previous log events; they roll up the previous state. Some of that state just absolutely collapses, because it’s no longer relevant. The active state can be saved, loaded, and stored just like anything else, because it is just a set of data. Then each event is a piece of data as well. There are a couple of different things here.

What about consensus? When do we process something? If we’ve got these multiple replicas running, when is it safe to consume a log event? The idea is that the log event has to reach consensus. We have lots of different protocols, but the best one to use for this that I’ve seen is Raft. Raft is well known. It’s used in many systems. It’s implemented in many systems. The idea is that an event must be recorded and stable at a majority of replicas before being consumed by any replica. What this gives us is durability. It’s durable. It’s in storage. Now, let’s say you have three members: that majority comes into play. Once the event is on one member, that’s not good enough; it’s got to be on two. Once it reaches two, then it can be processed. That’s a very key thing. You can lose one member and continue on just fine, because another member has the event. This works for five, works for all the numbers. A clear majority of members having the event is what reaches consensus. If you now have a system that is a set of nodes, a cluster, and they’re replicas all processing the same log, you have the ability to lose a member and keep going. Or lose two members if you’re running five, or lose three if you’re running seven, and just continue on as if nothing had happened.
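The majority arithmetic behind that is simple enough to write down. This is just a sketch of the quorum rules described above; the cluster sizes and counts are illustrative:

```python
# Quorum arithmetic behind Raft-style consensus: an event is safe to consume
# only once it is durable on a majority of replicas.

def quorum(cluster_size):
    """Smallest number of replicas that constitutes a majority."""
    return cluster_size // 2 + 1

def committed(replicated_on, cluster_size):
    """True once the event is stable on a majority and may be processed."""
    return replicated_on >= quorum(cluster_size)

def tolerated_failures(cluster_size):
    """How many members can be lost while the cluster keeps operating."""
    return cluster_size - quorum(cluster_size)
```

For a three-member cluster the quorum is two, so one failure is survivable; for five it is three, surviving two; for seven it is four, surviving three, exactly as in the talk.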

Disaster Recovery

There are some things in there to think about, and they all have an impact on what we’re talking about, which is 24/7. I want to talk about a couple of specific ones. The first thing is disaster recovery, because this is very much something that is always on the mind of system developers. We’re not going to talk about recovery point objective, RPO. It’s relevant, but it’s a little different, and it is not the main thing we want to talk about. What we really want to talk about here is RTO, recovery time objective, because it’s very relevant when you’re talking about 24/7 and you have tight SLAs. Where does disaster recovery come into this picture? We’ve got services. We’ve got archives. We’ve got multiple machines that are all doing the same thing. We’re able to recover very quickly when things happen. In fact, we may not even notice, and just continue on. Those machines have an archive. Those archives are being used to store snapshots, and they have the log. If you lazily disseminate those snapshots to DR, and you stream each individual message to DR when it reaches consensus, now you have a way of actually recovering, because you’ve got the log and the snapshots and you can recover your state. None of this should be new. The systems we have work this way. I’m trying to break it down so that we can see some of the dynamics at play and what 24/7 really means in those cases. That dissemination of the snapshots and the log events helps us understand the problem we actually want to solve: to be able to meet our SLAs even in disaster recovery cases, or at least get close.

It breaks down into two things. One, is the disaster recovery cold? In other words, it’s just a backup, and it’s just on disk. Or is it warm? Is there some logic running there? Let’s take the first idea, of it being cold. This means basically that you have a cold replicated state machine for your recovery. In other words, it’s not running yet. Everything is just being saved, ready for backup. That means when it becomes hot, not warm but hot, because it now becomes active, you have to first load the snapshot and then replay the log. Let’s break that down. This picture is what we saw earlier. The snapshot takes some time to load, and then the replay of the log takes some time. Thus we end up with a particular time that it takes us to recover. If we break that down, the time to load the snapshot, the time to replay the log, and the time to recover have an interesting association with one another. The first thing is, the time to load a snapshot is usually small, because snapshots are usually just a serialized version of current state. They’re not a history. In fact, the worst thing you could put into them would be history. What you really want is just the active state that you need to maintain. The time to replay the log is going to be determined mostly by the length of the log. Yes, the length of the log is going to be the thing that has the biggest impact, along with how complex it is to actually process the log. That can be large.

In fact, if you have a log that is 24 hours old, let’s say you’re snapshotting every 24 hours, the worst case is going to be a 23:59:59.99999 log that you have to replay. Some systems are doing millions of transactions a second, and may have trillions of events in their log since the last snapshot; that can take a while to go through. It’s not so much loading that data from disk or from some other storage, it’s usually the fact that the processing of that log can take a very long time. That is something that you always have to consider: what happens if it takes a long time to process that log? I have seen systems where the processing of a log can take hours. That’s hours before that member is ready to go from disaster recovery to being live.

How do we ameliorate that? How do we make it so that the loading of a member going from cold to hot is faster? Reduce the size of the log that you have to replay. How do you do that? Snapshot more often. Instead of doing it every 24 hours, you do it every hour. That means you have an hour’s worth of log, which you may be able to run through in a few seconds, or a minute, or a couple of minutes, or 10 minutes. That may be acceptable. Snapshotting more often just means that you have a smaller log to replay. That’s good. That’s your main control in this case. You don’t have a lot else that you can do immediately, but it does come at a cost. Taking a snapshot may not be something you can do very quickly. It may take a period of time for that snapshot to actually be done. In Raft, a snapshot is an optimization. You could always have just replayed the log from scratch. You may want to think about the snapshot as having a disruptive influence on your machine, on all the replicas. In fact, the way that a lot of systems do this is that a snapshot is an event in the log, and all the replicas take a snapshot at the same time. That simplifies some of the operation, but it also introduces the fact that when you take a snapshot it had better be quick, because it will disrupt things behind it that will have to wait. What happens if you were to take snapshots asynchronously outside of the cluster and disseminate them out of band back into the cluster? That could be a way to minimize that impact. It doesn’t really change the recovery, but it does make taking an actual snapshot a little bit less disruptive.
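As a rough sketch of that tradeoff, here is a back-of-the-envelope worst-case recovery time calculation. The load time, event rate, and replay rate are made-up numbers for illustration, not measurements from any real system:

```python
# Back-of-the-envelope RTO for a cold standby: load the snapshot, then replay
# the worst-case log tail accumulated since the last snapshot.

def worst_case_rto(snapshot_load_s, snapshot_interval_s, events_per_s, replay_rate_eps):
    """Worst case: the disaster hits just before the next snapshot was due."""
    backlog = snapshot_interval_s * events_per_s   # events since the last snapshot
    return snapshot_load_s + backlog / replay_rate_eps

# Snapshotting daily, ingesting 1M events/s, replaying at 5M events/s:
daily = worst_case_rto(30, 24 * 3600, 1_000_000, 5_000_000)    # 17310.0 s, ~4.8 hours
# Snapshotting hourly shrinks the replay backlog by a factor of 24:
hourly = worst_case_rto(30, 3600, 1_000_000, 5_000_000)        # 750.0 s, 12.5 minutes
```

Moving from daily to hourly snapshots cuts the worst-case replay backlog by a factor of 24, which is exactly the main control the talk describes.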

Let’s think about this in a different way. What if we were to look at it from a warm standby perspective? Instead of it being cold, just a backup, we think about it more like a service replica running in DR, and it’s ready. It can be hot if it needs to be. It just has to be told to go hot. That is another way to handle this. Instead of it being cold, just a backup, you could be streaming those snapshots and that log, and have a service replica doing exactly what a normal service would do. This is the power of it being deterministic: the same service that’s running in the cluster could be running in your DR. It can be hot at the drop of a hat. It’s sitting there warm, and you can light a fire under it at any time. It does not need to be active in the cluster, it just has to be there getting the same log, getting the same snapshots, doing the same logic. That means that recovering from a disaster can be just a matter of making it go from warm to hot. That allows SLAs to be met.

24/7 Upgrades

That’s how you can think about making 24/7 work really well in an SLA-demanding environment, within systems that can use this model. That’s only the beginning of a lot of different pieces. The other piece I want to talk about, which is normally a problem for a lot of systems, is really continuous delivery and 24/7 upgrades: how to do upgrades within an atmosphere like this. The first thing is, there is no big bang, forklift, stop-the-world upgrade, because that is too much of a disruption to all SLAs. It blows them out of the water. You’re never going to be able to do it. It’s never going to be possible in any real system that needs to be 24/7 and has tight SLAs. That’s not something you can rely on. When you start to think about that, the implication is, since you can’t upgrade everything at once, you’re going to have to live with upgrading things piecemeal. That means that components will be on varying versions. They’re going to be on varying versions of an operating system. They’re going to be on varying versions of software that’s outside of your system. Your own system will be on varying versions. It’s just a fact.

That gets us into an area that I come from, which is protocol design. Protocols have to work in ways that mean they can operate with varying versions of various components; they just have to. How do you make them work in those situations? I’m going to take a lot of inspiration from protocol design here. I suggest you also take a look at this because, in my experience, this is the hardest part of making something where you have the ability to upgrade things 24/7 at the drop of a hat. Good protocol design is a very big part of that. The first thing is backwards compatibility: table stakes. You have to be backwards compatible. You have to make it so that new versions won’t break old versions. You have to figure out how to do that in the best way while moving things forward. That’s not always easy. It gets a little bit easier when you start to think about how to do forward compatibility. What does that mean? It means: version everything. Your messages, your events, your data, everything should have a version. It should have a version field, whether it be a set of bits, or a string, or something else. There has to be a way to look at a piece of data on the wire, or a piece of data in a database, or a piece of data in memory, and be able to look at it and go, that’s the version for that.

Not all versioning schemes are the same. They all break down in certain ways. I want to suggest you take a look at something called semantic versioning. There’s actually a very nice web page that goes into the idea behind semantic versioning. What it does, at its heart, is give you some definitions for MAJOR, MINOR, and PATCH that are useful for a lot of systems. A lot of times it’s used for versions of whole packages, things like that, but I strongly suggest looking at it from the protocol perspective too: the messages, the data, and everything else. Give each a version number, MAJOR.MINOR.PATCH. Increment the major version when you make incompatible API changes. Version two is incompatible with version one. That’s the interesting thing there: the API. Increment the minor version when you add functionality in a backwards compatible way. The minor version can change as long as the change is backwards compatible and is adding functionality, not changing it. Increment the patch version when you make a backwards compatible bug fix. Those are the big-ticket items. There are some details here that I suggest you look at.

Additional labels like pre-release and build metadata are extensions to that format, but the heart of it is those particular things. Now we have a way to think about what happens: if we introduce an incompatibility, it should be a major version change; minor if it’s compatible but we’ve added some functionality; and patch if we’ve just added some fixes. That’s very interesting. We have some semantics that we can hang our hat on.
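As a sketch, the compatibility rule those definitions imply might look like this, assuming plain MAJOR.MINOR.PATCH strings with no pre-release or build-metadata labels:

```python
# Semantic-versioning compatibility check: same major means compatible.
# A newer minor on the message side may carry fields the consumer does not
# know about, which a backwards-compatible encoding should let it skip.

def parse_semver(version):
    """Split 'MAJOR.MINOR.PATCH' into an integer triple."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def can_process(consumer_version, message_version):
    """Can a consumer built against one version handle a message of another?

    Incompatible only when the major versions differ; minor and patch
    changes are backwards compatible by definition.
    """
    return parse_semver(consumer_version)[0] == parse_semver(message_version)[0]
```

So a consumer on 1.2.0 can handle a 1.3.0 message, but a 2.1.5 message signals an incompatible change it cannot safely interpret.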

Let's take a look at this applied to that log idea. If all of our events or messages are versioned, we can see a sequence of version changes: 1.2.0, 1.2.0, 1.2.1, 1.3.0, 1.3.0, 2.1.5. That was a big change right there. What happens if an old version of a service sees that? Does it ignore it? Does it try to struggle on? Does it just let an exception happen, if that's ok? Does it terminate? What should it do? There's no hard and fast rule here; each of those actions is going to have some impact on your system. In the case of replicated state machines, termination may be the best option: you terminate, and then continue on once upgraded. If you're upgrading each individual member of a three-node cluster, you probably want to upgrade one, then the second, then the third, and only start using the new format after all of them have been upgraded. What happens if you're doing that and forget to update one? Having it terminate might be the best thing in the world, as opposed to corrupting state somewhere. Termination can be the safety net when something doesn't happen as planned.
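
As a sketch of that decision, a log consumer might key its behavior off the major version of each event it reads. The `SUPPORTED_MAJOR` constant and the policy names here are hypothetical, assumed for illustration:

```python
SUPPORTED_MAJOR = 1

def handle(version: str, *, terminate_on_unknown: bool = True) -> str:
    """Decide what an old consumer does with an event of a given version."""
    major = int(version.split(".")[0])
    if major == SUPPORTED_MAJOR:
        return "process"
    # An incompatible major version: struggling on risks corrupting state,
    # so for a replicated state machine terminating is often the safer choice.
    return "terminate" if terminate_on_unknown else "ignore"

log = ["1.2.0", "1.2.0", "1.2.1", "1.3.0", "1.3.0", "2.1.5"]
actions = [handle(v) for v in log]
# The minor and patch bumps are processed; the jump to 2.1.5 triggers
# the policy for incompatible versions.
```

The point is not that "terminate" is the right answer, but that the answer is an explicit, per-system policy rather than an accident.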

With large network systems, what could possibly go wrong? We may need a little more information here than a version alone. For example, let's think about what we could do if we were to add a couple more bits, borrowing something from protocol design. TCP has a set of options, and some of them have been added over time; the designers thought upfront about how to add them. Each option has what's called an ignore bit. Instead of a version, an option has a type. If you receive a TCP message and you don't understand an option's type, you look at the I bit to see whether that option can be ignored. If the I bit is set, you can ignore that option and continue processing the rest of them. If it's set to zero, the option can't be ignored and you have to drop the entire message, the entire TCP frame. Other protocols besides TCP do this too, but it's probably the best example that comes to mind. This is forward compatibility at its best, in my mind. You're thinking ahead: I may come into a situation where I have to throw away this whole message, or where I can just ignore this particular piece of it. Because you thought about it upfront, you have that ability.

Let's add the ability to terminate to that. You have maybe a bit that says, can this be ignored? Yes, this particular piece, whether it's a whole message or part of one, can be ignored. You might have an additional piece that says, should I terminate if I receive this and don't know what to do? Because the answer might be yes. You don't have to think of everything upfront, but you can add a few extra pieces of information that tell you how to handle these situations. When you set them up beforehand, you have logic that can be exercised when it needs to be. I'm not saying termination is always the best thing, or that dropping the message is the right thing. It may be best to let an exception occur, have it logged, and continue on. Or it might be to have the exception recorded and logged, and then have the thing terminate. You may have a policy: this is what we do when the unexpected happens in our system. I'm just saying you can give yourself a little more freedom and a little more ability to handle the unexpected when you look at it that way.
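
Those two bits might be sketched like this, loosely inspired by the TCP-options idea described above. The flag names, bit layout, and known-type set are hypothetical, invented for illustration:

```python
# Two hypothetical disposition bits carried with each message element:
# CAN_IGNORE: an unknown element may be skipped while the rest is processed.
# TERMINATE:  an unknown element that can't be ignored should stop the node
#             rather than risk corrupting replicated state.
CAN_IGNORE = 0b01
TERMINATE  = 0b10

KNOWN_TYPES = {1, 2, 3}  # element types this (old) node understands

def dispose(element_type: int, flags: int) -> str:
    """Decide what to do with one element of a received message."""
    if element_type in KNOWN_TYPES:
        return "process"
    if flags & CAN_IGNORE:
        return "ignore"          # skip this element, keep processing the rest
    if flags & TERMINATE:
        return "terminate"       # fail fast instead of guessing
    return "drop-message"        # discard the whole message, keep running
```

The sender, who knows what the new element means, decides upfront how an uncomprehending receiver should react; the receiver just follows the bits.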

Feature flags are an example of things you need to think about for upgrades. Often, feature flags are tied to a version of a protocol. The hard thing there is that you need a new version to understand the new feature flags. That hurts, because it ends up in a chicken-and-egg effect: we have to upgrade the protocol so that we have new feature flags, but we can't use the feature flags until we've updated the protocol. That's a dependency you don't want. If you version the feature flags separately, you get some interesting effects. If you can push out a new set of feature flags, and then push out a new version of the protocol, that's actually pretty powerful; you're no longer forced to do it the other way around. I also think that feature flags are not always just a bit, true or false. They often have values, they have meaning, and they carry other things with them. Having them in a self-describing format is fairly useful. Some of the things you might capture in a self-describing format: is it an integer? Is it signed or unsigned? How big is it, how many bits? Those are things I think about because I'm a protocol designer. If you step back for a moment and think about it from a usability standpoint, knowing that something is an integer versus just a string gives you the ability to reason about it a little more. When they're self-describing, they open up some doors.
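
A self-describing, separately versioned flag set might look something like this. The field names, type strings, and specific flags are all hypothetical, assumed for the sketch:

```python
# A hypothetical flag set: versioned independently of the protocol version,
# and each flag describes its own name, type, and value.
flags_v2 = {
    "flags_version": "2.0.0",
    "flags": [
        {"name": "batching",    "type": "bool",   "value": True},
        {"name": "max_batch",   "type": "uint16", "value": 512},
        {"name": "compression", "type": "string", "value": "lz4"},
    ],
}

def lookup(flag_set: dict, name: str, default=None):
    """Read a flag by name; unknown flags fall back safely on old readers."""
    for f in flag_set["flags"]:
        if f["name"] == name:
            return f["value"]
    return default
```

Because the set carries its own version and each flag carries its own description, an older reader can skip flags it doesn't know, and a new flag set can be rolled out before the protocol change that uses it.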

I want you to think about that for a moment. Let's go back to having a set of messages in different versions. The interesting thing is that the log now has distinct events of protocol change in it. That's really powerful: you can look at a running system and see the sequence of when protocols changed. I think that's neat anyway. What it comes down to with upgrades is not thinking of things as static, as one big upgrade. You have to start thinking of your system as having varying versions of things going on at once. Not everything is going to be on the same version. You may try to minimize the time that things are on different versions, but you'll never get rid of it, unless you can upgrade everything at the same time, which is not going to work for 24/7 types of upgrades.

Recap

I really did just hit on the biggest things that I've seen over the years. I've seen a lot of people with a lot of systems try to skip over some of the versioning things I mentioned, and one of the things I've noticed repeatedly is that there are no shortcuts here. You actually have to sit down and think: how do older versions handle these things? What can we do? How can we minimize the disruption when doing upgrades? This goes well beyond just the 24/7 idea. I've noticed a lot of microservices architectures with the typical story: you put a microservice architecture in place with no versioning of your protocol or data, then realize you have to add it in, because you're running different versions all over the place that speak various things, and you have to handle it piecemeal. I've seen that a few times now, and looking back on it, it's really hard to shoehorn versioning in later. I and some of the people I work with are no exception: we shipped a protocol with no versioning at the application layer into exactly these types of systems, and we've had to go back and add it. We were very careful with it.

Questions and Answers

Reisz: It's a fine line, because you don't want to prematurely optimize; you need to get to product-market fit before you start adding any boilerplate, any gold plating. It's tough sometimes to make all the right calls, so a talk like this really helps you think about some of these things.

Montgomery: That's always been my goal: to give people some ideas of things to think about. Adopt what you like, and if something doesn't matter to you, just forget about it.

Reisz: Honestly, that's what I liked about this talk, because there's a lot to think about. It's almost a checklist: these are some things to think about when you want to run a high-resilience system like this. Speaking of that, we did the panel on microservices in San Francisco, and you were on it. One of the things we talked about was slowness: getting to what it actually means when something is slow. When someone tells you they need a high-resilience, 24/7 system, how do you have that conversation with them, the way you would about slowness, to figure out what they really need? How do you go about that?

Montgomery: I always come at it from business impact, whether it be slow, or all the different qualities like, is it slow? What’s the quality? What is the business impact of that? Fault tolerance is also part of that. There are systems where losing data is ok. It can be figured out later without any business impact. There are a lot of businesses like that. Unfortunately, I don’t deal in any of them. The impact of losing a million-dollar trade, that is much more than a million dollars to large numbers of entities. A lot of times when you’re thinking of, whether it be security, quality, performance, and even fault tolerance, it comes down to what’s the business impact and how much is it worth? Certain things, the business impact is too big, therefore, you need to spend additional time and come up with the right thing.

Reisz: How do you deal with things like blue-green deployment and A/B testing? Is it all really like the trickiness around that state that you were talking about with that log replication? What are your thoughts?

Montgomery: A/B testing is an interesting mental exercise to think about with something like a replicated cluster. How do you do A/B testing? In that case, you almost have to have two different systems: one running one version, and one running a separate version, or both running the same version with different feature flags, and then you feed the results back into another system. That's really the only way to do it, because any other way introduces an opportunity for divergence of those replicas, which is hard to recognize. Once you put any type of non-deterministic behavior into a system that relies on replication, that non-determinism will replicate just as quickly as everything else. You do not want that. You almost have to think of it as two separate systems.

Reisz: There are a few things you talked about: deterministic, durable, consensus. I want to dive into a few of these specifically, like deterministic. A deterministic system is one that, given a set of inputs, gives you the same results. Dive a little more into that; what does it mean in a practical sense for folks?

Montgomery: Deterministic systems mean that the same input gives you the same output on all the replicas. What that really means is that the same input sequence, which can be thought of as immutable (you don't go back and add to it or change it), always produces the same state. There are a couple of different things you could do that add non-determinism. You could have a piece of randomness that uses a different seed on each node; that is non-deterministic because the replicas would eventually diverge somewhere that matters, ending up with different state. Another source of non-determinism is data structures that don't iterate the same way. You might iterate through, say, a hash map, but get a different iteration order on different replicas. If the iteration order has any impact on the state you end up with, that is a source of non-determinism.
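
A tiny sketch of that divergence, under stated assumptions: two replicas apply the same command log, but a node-local value leaks into the state transition. Here the leak is modeled as a per-node salt so the example is reproducible, standing in for a per-node random seed or iteration order:

```python
def replica_state(node_salt: int, commands) -> int:
    """Apply the same command log on a replica, but let a node-local value
    (a per-node salt, standing in for per-node randomness) leak into the
    state transition. This is the non-determinism to avoid."""
    state = 0
    for cmd in commands:
        state += cmd * 31 + node_salt  # node-local input contaminates state
    return state

commands = [1, 2, 3]            # the shared, immutable input sequence
a = replica_state(1, commands)  # replica A
b = replica_state(2, commands)  # replica B
# Same input log, different node-local input: the replicas have diverged.
```

Each replica is internally consistent (rerunning it gives the same answer), which is exactly why this kind of divergence is so hard to notice from inside one node.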

A lot of the things we end up doing in business logic have non-deterministic aspects that we don't realize. The data structures, some of the details of how different things function: it's interesting to see that some of the practices we've developed over the years are very non-deterministic. Here's another source of non-determinism: reading from files on disk that differ between replicas. When you have a system that is deterministic, you do have something powerful.

Reisz: What are some ways to deal with that, like reading from files, for example? Are there some frameworks out there, some projects out there that may help us along some of these lines?

Montgomery: For me, everything should come in through that log. The reason you go to a file is usually to get some other piece of data. If you're going to a file to get it, all the replicas should be going to the same place, so it should be a service, and that service should be deterministic as well and give you the same result. You can't get around those requirements. Most of the time, when you go to a file, you're looking for something very small, not a large piece of data. I like having that come into the log, because then it's a fact in the log; it's almost a command. Instead of reading it from a file, you get it pushed into the system externally. Usually, needing it is also an indication that you need that piece of data at that particular point in the log, which is itself a very interesting way to think about what you push into the log. When you come to think about it, that piece of data is updating the state in some way, so it should really come in through the log.
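
A sketch of that idea, with hypothetical command names: instead of each replica reading a config file locally (a classic source of divergence), the value is injected into the shared log as a command, so every replica applies exactly the same fact at exactly the same point:

```python
def apply_log(log: list) -> dict:
    """Replay a shared command log into replica state. Every input,
    including what would traditionally be 'read from a file', arrives
    as a log entry, so all replicas converge on identical state."""
    state = {}
    for entry in log:
        if entry["cmd"] == "set-limit":   # the 'file contents', as a log fact
            state["limit"] = entry["value"]
        elif entry["cmd"] == "order":
            state.setdefault("orders", []).append(entry["value"])
    return state

shared_log = [
    {"cmd": "set-limit", "value": 100},  # pushed in externally, once
    {"cmd": "order", "value": 42},
]
```

Any replica, at any time, that replays `shared_log` reaches the identical state; there is no node-local read to diverge on.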

Reisz: There's a question about microservices and data ownership, each microservice owning its own data. What are your thoughts on maintaining data integrity and cross-service transactions in a microservice architecture where services own their own data? Does it, again, all come back to that log? What are your general thoughts?

Montgomery: I think it actually does come back to the log. To give you an idea of how you would think about two different clustered services that are depending on one another, which is a very common pattern, is if you think that as you’re processing data, you’re sending a message to another cluster, another set of services, then it sees that as a message coming in, processes it. Sends a response back to another cluster, or the same one and places that in the log. If you think about it, from everything coming into the log, and everything going back out to other services, if you think about it that way, things get a lot cleaner. Because if you look at it from that first service, you see the logic that spawned that request, it went out to another cluster. At some point, it comes back, because the response comes back in the log. When you think about everything coming into the log that way, it’s a little bit easier to think about the sequence of events that went through.

Reisz: Finding that non-determinism, that seems like that’s going to be a challenge.

Montgomery: Yes, once you let it seep in, it is very hard to make your system deterministic again.

Reisz: Tony's got a question around legacy code. He has lots of legacy code with some older, version-intolerant protocols, and he wants to update the protocol. He's curious about your thoughts on how to introduce versioning while updating it. What strategy might you use? He asks: would it be a good idea to do a minor version update to add versioning to the protocol, and then a major version to update the protocol itself? As someone who designs protocols, how would you recommend he approach it?

Montgomery: I would actually call it a major version change. Previous versions will never understand the new protocol, the versioning scheme, or anything else, so at that point I would consider it a breaking change; in the semantic versioning sense, a major protocol change. The reason is this: it's better to take the pain at that point than to push it off. You really want to take the pain of "now we do versioning" all at once, so that going forward you have a scheme for how versioning works and what you can rely on. If you're going to upgrade everything all at once anyway, that is the best place to add your versioning: upgrade everything at once so it all understands versioning, and afterwards you can pick and choose what you need. Going into it, you do not want an older piece that doesn't even understand that versioning exists trying to do anything with it. It's got to be a big change. You take that pain up front, and later on you reap the benefits.


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



ON Semiconductor, MongoDB And 2 Other Stocks Insiders Are Selling – Investing.com UK

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

ON Semiconductor, MongoDB And 2 Other Stocks Insiders Are Selling

Benzinga – The Nasdaq 100 closed lower by over 2% on Thursday. Investors, meanwhile, focused on some notable insider trades.

When insiders sell shares, it could be a preplanned sale, or could indicate their concern in the company’s prospects or that they view the stock as being overpriced. Insider sales should not be taken as the only indicator for making an investment or trading decision. At best, it can lend conviction to a selling decision.

Below is a look at a few recent notable insider sales. For more, check out Benzinga’s insider transactions platform.

ON Semiconductor

  • The Trade: ON Semiconductor Corporation (NASDAQ: ON) CEO & President Hassane Elkhoury sold a total of 20,000 shares at an average price of $105.00. The insider received around $2.1 million from selling those shares.
  • What’s Happening: onsemi and BorgWarner expanded strategic collaboration for silicon carbide worth over $1 billion in lifetime value.
  • What ON Semiconductor Does: Onsemi is a supplier of power semiconductors and sensors focused on the automotive and industrial markets.


MongoDB

  • The Trade: MongoDB, Inc. (NASDAQ: MDB) Director Dwight Merriman sold a total of 1,000 shares at an average price of $420.00. The insider received around $420,000 from selling those shares.
  • What’s Happening: MongoDB announced an expansion of a multiyear strategic partnership agreement with Microsoft by integrating the Atlas application into Azure.
  • What MongoDB Does: Founded in 2007, MongoDB is a document-oriented database with nearly 33,000 paying customers and well past 1.5 million free users.

Winnebago Industries

  • The Trade: Winnebago Industries, Inc. (NYSE: WGO) PRESIDENT – GRAND DESIGN Jeff Donald Clark sold a total of 300,000 shares at an average price of $67.87. The insider received around $20.36 million from selling those shares.
  • What’s Happening: Winnebago Industries reported a third-quarter FY23 sales decline of 38.2% year-on-year to about $900 million, missing the consensus of $961 million.
  • What Winnebago Industries Does: Winnebago Industries manufactures Class A, B, and C motor homes along with towables, customized specialty vehicles, boats, and parts.

Darden Restaurants

  • The Trade: Darden Restaurants, Inc. (NYSE: DRI) Director Lee, Eugene I. Jr. sold a total of 33,000 shares at an average price of $170.09. The insider received around $5.61 million from selling those shares.
  • What’s Happening: Piper Sandler initiated coverage on Darden Restaurants with a Neutral rating and announced a price target of $167.
  • What Darden Restaurants Does: Darden Restaurants is the largest restaurant operator in the U.S. full-service space, with consolidated revenue of $10.5 billion in fiscal 2023 resulting in 3%-4% full-service market share (per NRA data and our calculations).


© 2023 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

Read the original article on Benzinga

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB, Inc.: Analysts Bullish on Stock’s Potential Following Strong Financial Performance

MMS Founder
MMS RSS


MongoDB, Inc. (NASDAQ:MDB) has garnered significant attention from investors as its shares have recently been assigned an average recommendation of “Moderate Buy” by twenty-four brokerages, according to Bloomberg Ratings. This rating reflects the diverse opinions and analysis conducted by financial experts to evaluate the stock’s potential.

Among these twenty-four brokerages, one analyst has given a sell rating, three have provided a hold rating, while an overwhelming majority of twenty have recommended buying shares in the company. This positive sentiment suggests a strong belief in MongoDB’s ability to deliver value to its shareholders.

To further support this outlook, analysts have established an average 12-month price objective of $366.59 for the stock over the past year. This projection serves as a guideline for investors, providing a potential target price based on extensive research and calculations.

Examining MongoDB’s recent earnings report released on June 1st enhances our understanding of why analysts maintain such favorable ratings and projections for this company. The report revealed that MongoDB achieved an earnings per share (EPS) of $0.56 for the quarter under consideration. This exceeded analysts’ consensus estimates by a notable $0.38, showcasing the company’s ability to outperform expectations.

Moreover, MongoDB exhibited noteworthy growth in revenue during this period, reporting $368.28 million compared to analysts’ projections of $347.77 million. This represents a substantial increase of 29% compared to the corresponding quarter last year.

Although it is essential to carefully assess all aspects when evaluating a company’s performance, negative return on equity at -43.25% and negative net margin at -23.58% for MongoDB should not overshadow its impressive revenue growth and earnings beat.

Utilizing its general purpose database platform worldwide, MongoDB aims to provide cutting-edge solutions tailored for various needs across industries. Its flagship product offerings include MongoDB Atlas—an innovative multi-cloud database-as-a-service solution that offers flexibility and scalability—and MongoDB Enterprise Advanced, a commercial database server designed to cater specifically to enterprise clients in cloud, on-premise, or hybrid environments.

For developers looking to explore the potential of MongoDB in their projects, the company also offers Community Server—a free-to-download version of its database that includes essential functionalities for an easy and seamless start.

These robust products and services offered by MongoDB have proven instrumental in furthering the company’s growth and solidifying its position as a leader in the database market. With analysts forecasting -2.8 earnings per share for the current year, it will be interesting to monitor how MongoDB continues to leverage its strengths to drive success in a dynamic and competitive industry.

As July 20, 2023 approaches, investors and analysts alike eagerly anticipate future developments within MongoDB. The company’s strong financial performance combined with its reputation for innovative solutions positions it well for continued success and expansion.

MongoDB, Inc. (MDB): Buy

Updated on: 21/07/2023

Price Target: Current $416.96, Consensus $388.06 (Low $180.00, Median $406.50, High $630.00)


Analyst Ratings

  • Miller Jump (Truist Financial): Buy
  • Mike Cikos (Needham): Buy
  • Rishi Jaluria (RBC Capital): Sell
  • Ittai Kidron (Oppenheimer): Sell
  • Matthew Broome (Mizuho Securities): Sell

MongoDB Stock Receives Attention from Brokerages and Insider Trading Activities


MongoDB, Inc. (NASDAQ: MDB) has recently attracted attention from various brokerages, leading to an increase in its stock price objective. Piper Sandler raised the price target on MongoDB from $270.00 to $400.00, while Oppenheimer boosted their target price to $430.00 and Royal Bank of Canada increased it to $445.00. Citigroup also lifted their price target from $363.00 to $430.00, reflecting positive sentiment towards the company’s future prospects. Moreover, William Blair reiterated its “outperform” rating on MongoDB shares.

As of July 20, 2023, NASDAQ: MDB opened at $431.21, a substantial increase from its one-year low of $135.15 and near its new one-year high of $439.00. The company has demonstrated financial stability with a debt-to-equity ratio of 1.44 and solid liquidity, with a current ratio and quick ratio of 4.19 each. Additionally, MongoDB has shown impressive momentum, with its fifty-day moving average price standing at $356.64 and its two-hundred-day moving average at $264.70.

MongoDB is renowned for providing a versatile general-purpose database platform globally, catering to diverse user needs seamlessly. The company offers multiple products including MongoDB Atlas – a multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced – a commercial database server tailored for enterprises operating in cloud environments or hybrid setups; and Community Server – a free version designed to enable developers to get started swiftly.

In recent news related to insider trading activity, Director Dwight A. Merriman sold 2,000 shares of the company's stock on April 26th at an average price of $240.00 per share, a transaction totaling approximately $480,000. Legal filings regarding this transaction were submitted to the Securities & Exchange Commission and are available through its official website.

Furthermore, CRO Cedric Pech sold 15,534 shares of MongoDB stock on May 9th for an average price of $250 per share. Following the completion of this transaction, Pech now possesses 37,516 shares valued at approximately $9,379,000 USD. The disclosure report regarding this sale can be accessed through relevant sources.

Over the past 90 days, insiders have collectively sold 117,427 shares of MongoDB stock with a total value of $41,364,961 USD. Corporate insiders currently own around 4.80% of the company’s outstanding stock.

Recent hedge fund activity suggests both additions and reductions in positions held on MongoDB stock. For instance, GPS Wealth Strategies Group LLC acquired a new position valued at $26,000 in the second quarter. At the same time, Global Retirement Partners LLC increased its holdings by an impressive 346.7% during the first quarter and now owns 134 shares worth $30,000. Another notable increase was registered by Pacer Advisors Inc., which added to their MongoDB holdings by 174.5% during the second quarter. Bessemer Group Inc., on the other hand, initiated a new stake worth about $29,000 in December last year. Finally, BI Asset Management Fondsmaeglerselskab A.S purchased a stake worth approximately $30,000 in the fourth quarter.

These figures indicate that more than 89% of MongoDB’s outstanding stock is owned by hedge funds and other institutional investors who are recognizing the company’s growth potential and are adding to their positions accordingly.

In conclusion, based on recent brokerage comments and insider trading activity, as well as extensive hedge fund involvement in MDB stock, it seems that market participants continue to show confidence in MongoDB and believe it has room for further growth and appreciation moving forward.




KeyBanc Adjusts Price Target on MongoDB to $462 From $372, Maintains Overweight Rating

MMS Founder
MMS RSS


MongoDB, Inc. is a developer data platform company. Its developer data platform is an integrated set of databases and related services that allow development teams to address the growing variety of modern application requirements. Its core offerings are MongoDB Atlas and MongoDB Enterprise Advanced. MongoDB Atlas is its managed multi-cloud database-as-a-service offering that includes an integrated set of database and related services. MongoDB Atlas provides customers with a managed offering that includes automated provisioning and healing, comprehensive system monitoring, managed backup and restore, default security and other features. MongoDB Enterprise Advanced is its self-managed commercial offering for enterprise customers that can run in the cloud, on-premises or in a hybrid environment. It provides professional services to its customers, including consulting and training. It has over 40,800 customers spanning a range of industries in more than 100 countries around the world.






CM Wealth Advisors LLC Makes a Bold Move into Database Platforms with MongoDB Acquisition

MMS Founder
MMS RSS


In a surprising move, CM Wealth Advisors LLC recently acquired a significant stake in MongoDB, Inc. (NASDAQ:MDB). This acquisition marks the firm’s entry into the fast-growing world of database platforms. CM Wealth Advisors LLC purchased 3,855 shares of MongoDB, Inc., amounting to an approximate value of $918,000. This development was revealed in the company’s most recent Form 13F filing with the Securities and Exchange Commission (SEC).

MongoDB (NASDAQ:MDB) is a global provider of general-purpose database solutions. Their flagship product offerings include MongoDB Atlas, a fully managed multi-cloud database-as-a-service platform; MongoDB Enterprise Advanced, a commercial database server tailored for enterprise customers that can be deployed in the cloud, on-premises or in hybrid environments; and Community Server, a free-to-download version aimed at developers seeking the essential functionality required to embark on their journey with MongoDB.

The latest financial performance figures released by MongoDB indicate promising results. On June 1st of this year, the company announced its quarterly earnings for the period ending late May 2023. The earnings per share reported were $0.56 for the quarter, surpassing market expectations by an impressive margin of $0.38 per share. Analysts had projected earnings of $0.18 per share.

Not just limited to beating earnings estimates, MongoDB also recorded substantial revenue growth during this quarter. The company generated a total revenue of $368.28 million compared to analysts’ projections of $347.77 million—an increase of approximately 29%. These positive figures demonstrate that MongoDB’s business is thriving and commanding significant market demand.

It is worth noting that despite delivering remarkable revenue and earnings performance, MongoDB still faces certain challenges. The company reported a negative return on equity of 43.25% and a negative net margin of 23.58%. While these figures raise concerns, they are potentially manageable through strategic adjustments and improvements in operational efficiency.

In terms of EPS comparisons, MongoDB reported a loss of ($1.15) per share for the previous fiscal year. Industry experts and research analysts predict that MongoDB, Inc. is on track to post earnings per share of approximately ($2.80) for the current fiscal year.

The recent investment made by CM Wealth Advisors LLC signifies confidence in MongoDB’s future prospects. This cutting-edge technological firm has successfully positioned itself as a market leader in providing database solutions across various sectors globally. By expanding its portfolio to include MongoDB shares, CM Wealth Advisors LLC demonstrates a recognition of the company’s potential for growth and profitability.

In conclusion, CM Wealth Advisors LLC's acquisition of MongoDB shares is indicative of the growing interest and confidence in the realm of database platforms. As businesses seek efficient and reliable data management solutions, companies like MongoDB continue to seize opportunities and deliver innovative products tailored to the evolving needs of enterprises worldwide. It will be intriguing to watch how this investment unfolds for both CM Wealth Advisors LLC and MongoDB, Inc. as they help shape the future of the database industry.

References:
– SEC Form 13F filing
– MongoDB, Inc. (NASDAQ:MDB) Quarterly Earnings Report
– Industry analyst forecasts

MongoDB, Inc. (MDB): Buy

Updated on: 21/07/2023

Price Target
Current: $412.64
Consensus: $388.06
Low: $180.00
Median: $406.50
High: $630.00


Analyst Ratings

Analyst          Firm                 Rating
Miller Jump      Truist Financial     Buy
Mike Cikos       Needham              Buy
Rishi Jaluria    RBC Capital          Sell
Ittai Kidron     Oppenheimer          Sell
Matthew Broome   Mizuho Securities    Sell


Hedge Funds and Institutional Investors Show Confidence in MongoDB’s Growth Potential


MongoDB, Inc. (MDB) has seen recent modifications in its holdings by hedge funds and institutional investors. These changes in ownership reflect the confidence that these investors have in the company’s potential for growth and profitability.

During the fourth quarter, Bessemer Group Inc. purchased a new position in MongoDB valued at $29,000, while BI Asset Management Fondsmaeglerselskab A/S acquired a new position valued at $30,000. Lindbrook Capital LLC increased its holdings by an impressive 350.0% during the period and now owns 171 shares valued at $34,000 after purchasing an additional 133 shares. Additionally, Y.D. More Investments Ltd entered the market with a new position valued at about $36,000. CI Investments Inc. also took advantage of MongoDB's potential, increasing its holdings by 126.8% to 186 shares valued at $37,000 after purchasing an additional 104 shares.

The fact that institutional investors own approximately 89.22% of MongoDB’s stock further emphasizes the growing interest and confidence in the company.

On Thursday, July 20th, MDB stock opened at $431.21 with a market capitalization of $30.44 billion. The stock has a price-to-earnings ratio of -92.34 and a beta of 1.13, indicating moderate volatility compared to the overall market.

Over the past year, MongoDB has demonstrated significant growth with a 12-month low of $135.15 and a high of $439.00 – showcasing substantial appreciation within this time frame.

The company boasts a current ratio and quick ratio of 4.19 each – indicative of strong liquidity levels to meet short-term obligations efficiently. Furthermore, MongoDB has maintained a debt-to-equity ratio of 1.44 – suggesting moderate leverage in its capital structure.

Analysts monitoring MDB have recently issued reports regarding the company’s performance. Citigroup raised their price target from $363.00 to $430.00, Piper Sandler increased their target price from $270.00 to $400.00, and Stifel Nicolaus upped their target price from $375.00 to $420.00.

William Blair has reiterated an “outperform” rating on shares of MongoDB, while Royal Bank of Canada raised their target price from $400.00 to $445.00.

Currently, Bloomberg.com data shows that of the analysts covering MongoDB, one analyst has rated the stock as a sell, three as hold, and twenty have given it a buy rating.

In separate news related to the company, Director Dwight A. Merriman sold 606 shares of MDB stock on July 10th at an average price of $382.41 per share – totaling approximately $231,740.46 in transactions.

Shortly after completing this sale, Merriman’s direct ownership in MongoDB reached 1,214,159 shares with an approximate valuation of $464,306,543.19.

Furthermore, CRO Cedric Pech sold 360 shares of the company’s stock on July 3rd at an average price of $406.79 per share – amounting to a total value of $146,444.40 for these transactions.

The disclosure for these sales by insiders can be found through the Securities & Exchange Commission (SEC) website.

Overall, MongoDB continues to attract interest from various institutional investors due to its general purpose database platform offerings such as MongoDB Atlas and MongoDB Enterprise Advanced.

With brokerages raising their price targets and a consensus rating among analysts leaning toward “Moderate Buy,” MDB presents itself as an intriguing investment option within the technology sector.

As always with investing decisions, it is important for individuals to conduct thorough research and consult with financial professionals before making any investment choices based on market trends or analyst reports.

Article originally posted on mongodb google news. Visit mongodb google news



Analyst Ratings for MongoDB – Benzinga

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Over the past 3 months, 29 analysts have published their opinion on MongoDB MDB stock. These analysts are typically employed by large Wall Street banks and tasked with understanding a company’s business to predict how a stock will trade over the upcoming year.

                Bullish   Somewhat Bullish   Indifferent   Somewhat Bearish   Bearish
Total Ratings     11            15                2                0             1
Last 30D           0             1                0                0             0
1M Ago             3             5                1                0             0
2M Ago             7             8                1                0             1
3M Ago             1             1                0                0             0

According to the 29 analysts offering 12-month price targets in the last 3 months, MongoDB has an average price target of $390.17, with a high of $490.00 and a low of $210.00.

The table above summarizes how these 29 analysts rated MongoDB over the past 3 months. The greater the number of bullish ratings, the more positive analysts are on the stock; the greater the number of bearish ratings, the more negative they are.

This current average has increased by 38.3% from the previous average price target of $282.12.

Stay up to date on MongoDB analyst ratings.

Ratings come from analysts: specialists at banks and financial firms who report on specific stocks or defined sectors, typically once per quarter for each stock. Analysts usually derive their information from company conference calls and meetings, financial statements, and conversations with key insiders to reach their decisions.

Some analysts will also offer forecasts for metrics like growth estimates, earnings, and revenue to provide further guidance on stocks. Investors who use analyst ratings should note that this specialized advice comes from humans and may be subject to error.

This article was generated by Benzinga’s automated content engine and reviewed by an editor.

Article originally posted on mongodb google news. Visit mongodb google news



ON Semiconductor, MongoDB And 2 Other Stocks Insiders Are Selling – Benzinga

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

The Nasdaq 100 closed lower by over 2% on Thursday. Investors, meanwhile, focused on some notable insider trades.

When insiders sell shares, it could be a preplanned sale, or it could indicate concern about the company's prospects or a view that the stock is overpriced. Insider sales should not be taken as the sole indicator when making an investment or trading decision; at best, they can lend conviction to a selling decision.

Below is a look at a few recent notable insider sales. For more, check out Benzinga’s insider transactions platform.

ON Semiconductor

  • The Trade: ON Semiconductor Corporation ON CEO & President Hassane Elkhoury sold a total of 20,000 shares at an average price of $105.00. The insider received around $2.1 million from selling those shares.
  • What’s Happening: onsemi and BorgWarner expanded strategic collaboration for silicon carbide worth over $1 billion in lifetime value.
  • What ON Semiconductor Does: Onsemi is a supplier of power semiconductors and sensors focused on the automotive and industrial markets.


MongoDB

  • The Trade: MongoDB, Inc. MDB Director Dwight Merriman sold a total of 1,000 shares at an average price of $420.00. The insider received around $420,000 from selling those shares.
  • What’s Happening: MongoDB announced an expansion of a multiyear strategic partnership agreement with Microsoft by integrating the Atlas application into Azure.
  • What MongoDB Does: Founded in 2007, MongoDB is a document-oriented database with nearly 33,000 paying customers and well past 1.5 million free users.

Winnebago Industries

  • The Trade: Winnebago Industries, Inc. WGO PRESIDENT – GRAND DESIGN Jeff Donald Clark sold a total of 300,000 shares at an average price of $67.87. The insider received around $20.36 million from selling those shares.
  • What’s Happening: Winnebago Industries reported a third-quarter FY23 sales decline of 38.2% year-on-year to about $900 million, missing the consensus of $961 million.
  • What Winnebago Industries Does: Winnebago Industries manufactures Class A, B, and C motor homes along with towables, customized specialty vehicles, boats, and parts.

Darden Restaurants

  • The Trade: Darden Restaurants, Inc. DRI Director Eugene I. Lee Jr. sold a total of 33,000 shares at an average price of $170.09. The insider received around $5.61 million from selling those shares.
  • What’s Happening: Piper Sandler initiated coverage on Darden Restaurants with a Neutral rating and announced a price target of $167.
  • What Darden Restaurants Does: Darden Restaurants is the largest restaurant operator in the U.S. full-service space, with consolidated revenue of $10.5 billion in fiscal 2023 resulting in 3%-4% full-service market share (per NRA data and our calculations).

 


 

Article originally posted on mongodb google news. Visit mongodb google news



Podcast: Whatever the Challenge – it’s Always About the People

MMS Founder
MMS Satish Jayanthi

Article originally posted on InfoQ. Visit InfoQ

Subscribe on:






Transcript

Shane Hastie: Good day folks. This is Shane Hastie for the InfoQ Engineering Culture Podcast. Today I’m sitting down with Satish Jayanthi, who is the CTO of Coalesce. Did I get that right Satish?

Satish Jayanthi: That’s right, Coalesce.io.

Shane Hastie: Welcome. Thanks for taking the time to talk to us today.

Satish Jayanthi: Oh, thank you for having me here.

Shane Hastie: My standard starting question for my guests is who’s Satish?

Introductions [00:28]

Satish Jayanthi: My name is Satish Jayanthi. I am CTO, co-founder of Coalesce. And my background, basically, I started my career as a programmer a long time ago, then switched to DBA, as a DBA for a startup. That was my first time getting into the data space. So when I was working there, I used to get a lot of data requests from the business and it got to a point where it was kind of overwhelming for me and started looking into how I can address that in a more scalable fashion. That’s when I encountered the concept of data warehousing. And since then I’ve been in the space, played different roles, wore several hats, engineer, architect, consultant, manager, but that has been my experience.

Shane Hastie: So we’ve grown from data warehouses to data lakes to I don’t know what the current massive mega data, what would one call them? Data is, in my experience, one of the hardest elements to get right in information technology, and yet it sits at the core of everything we do. Why are we struggling?

Data management is a hard problem in information technology [01:39]

Satish Jayanthi: Yeah, that's an excellent question and it's been that way it seems like forever, right? First thing, it's not a static target. It's not that there is something that is static that we are going after. It's dynamic in the sense it's a moving target. Every time we solve one problem from a technology standpoint, there are other things that come up. For example, we used to work with smaller data volumes at one point and we've increased the compute capacity and we said, “Hey, now we can compute and process reasonable amounts of data very quickly.” Then before we realized it, our data volumes went up like a hundred times more. So the problem is still back to the same thing pretty much because your compute capacity went up but your input is now huge, just as an example.

But it is definitely more than technology that makes this a big challenge. Technology is one thing, but I believe it’s a pretty common thing across the industry that everybody knows in the space that you need people, you need process, you need technology to be successful. And I think the people part is definitely disproportionately the harder one in this big puzzle and that’s definitely a big struggling point for a lot of organizations. Now, once you have some handle on that, then there’s the other pieces that come into play, the process and the technology. But I think these are the pillars of success and that’s why it’s kind of hard. It’s not just the technology that would solve the problem.

Shane Hastie: What are we looking for in those people? What are the inherent attitudes or specific skill sets?

Skills needed for working with data [03:14]

Satish Jayanthi: The hard skills, for example, if you’re in the data space, your data engineering skills and some experience in that, it’s all important. I mean, it gets pretty complex once you get into the weeds of this thing. So that’s a given, but that’s not enough to be successful in my opinion. Because it’s not some science experiment that you’re running, right? It’s not like you’re locked into a room and okay, figure this out and just kind of working your way for whatever you’re doing. It’s not that, especially with data. As you’ve mentioned before, data is everywhere. Everybody in the organization, it’s more and more like that, everybody in the organization has to interact with data in some fashion.

So what that means is the engineers who are pretty skilled with the technology have to have a lot of skill in terms of working with other people. Not only with other engineers, but working with business people, working with leaders, and understanding the goals. So it’s one of those skills where it comes down to collaboration, communication, being humble is important. Just because you learn some technology doesn’t make you any different than any other person. So we got to respect that. And it goes from the other way as well, like business people in how they’re communicating with engineering. But to answer your question, some of the skills are those, like the communication is key, collaboration skills are key. Of course the technical skills are a given because that’s why you’re in the job in the first place.

Shane Hastie: Most technologists don’t come into the workplace with those collaboration, communication skills, and humble is something that we often see as missing. How do we help them get there?

Building an environment of collaboration and communication starts at the top [04:56]

Satish Jayanthi: It starts at the top, for sure. For example, speaking of our own company, how we hire, I pretty much always, always focus on the potential of a candidate rather than the pedigree. You could have done so many things in the past, which is all great, but I want to see how you fit, what you can do from now on. How do you work with other people? How do you respect other people? How do you build that integrity, credibility, and all of that? I actually focus a lot more than that. I can take somebody who has not a lot of experience, but I see these other traits in them, such as willingness to learn, you have to be a curious person, constantly wants to learn. Listening skills are important, very, very important.

These are all kind of common sense things, but this is what you should be focusing on when you're hiring, not just like a Java experience or JavaScript experience. I mean, those are all kind of important. So that's what we do. That's what, when we look at people, we're looking at, as a whole, what kind of person is this person for that role? If there's a leadership role, then it's even more important. Because as a leader you're going to have a larger impact on the company. You're going to bring people that might think like you. And if you are not aligned with the culture, then it's very risky because now you have a multiplier effect. If you're good, you'll bring five other good people. If you're not aligned, you'll bring five other people who are not aligned. So that's going to be very detrimental to the company.

Shane Hastie: So that’s in terms of hiring. How do we grow those skills?

Invest in growing collaboration and communication skills in people [06:28]

Satish Jayanthi: So it starts with hiring of course. And again, it’s growth maintenance, or whatever you want to call it, but you definitely have to invest in that as a company. You have to invest in your employees, their skills, their strengths, their weaknesses, their goals. That’s extremely important to listen, constantly listen. As a manager I think that’s what you should be doing. Listen, listen, listen, right? And understand how is your employee base getting excited? What motivates them? Where do they want to go? Do they want to improve their communication skills? I mean, if you realize that they are doing everything great but they need some help with communication or they need some help with other things, you have to facilitate that and make that happen. And they should be part of the organization culture and growth program like that. I mean, again, it’s very hard when you’re a startup because there’s so many things going on. So the hiring piece is very, very critical. But as you become mature as an organization, you have to take these processes into consideration. How do you grow your employees in a more holistic way?

Shane Hastie: And touching on something that we mentioned in the conversation before we started recording, one of the things that you’ve raised is burnout in the industry. What’s happening and how do we help?

Build relationships and cultivate allies to bridge gaps [07:44]

Satish Jayanthi: This goes back to the same underlying things that I was talking about earlier, which is there’s people, there’s processes, technology. I mean, the intersection of these three things is what happens in the real world constantly. Now if you miss any one of those, you’re going to have a problem. And that manifests itself as, in this case, it’s a burnout or whatever that is, people just leaving the company or whatever. From a people standpoint, the communication between business and IT, this has been a standard problem. So that has to be addressed. And I’ve provided some examples in the past on how to address those. For instance, from a people standpoint, if you’re an IT leader, if you’re a data leader, you need to find champions in business units. You need to kind of build these partnerships with people who can help you out. But you also have to come and reach out to the business and see if you can partner with somebody. The more partners you have like that, the better it’s going to be because that’s how the communication builds.

I mean, you’re not always going to find an opportunity to talk to the CEO constantly. That’s not going to happen. But there’s going to be a leader somewhere in the business who is really struggling with something and who is like-minded, who wants to partner, and you got to build that partnership. So that’s the people part of it. And once you find that and you understand the business objectives, you understand the goals, you understand the challenges, then comes the technology. I mean, what type of technology is available out there to solve these problems? Are our tools outdated? Are we going to modernize our stack, simplify our stack? Do we have too many tools? So that’s next. But then there is the other piece, which is the process. You have great people, you have great technology, now the processes becomes very, very important to scale. Otherwise, whatever you put in place, you might be excited for a few weeks, few months because of the latest technology, great people working with great technology, but then if you don’t have a process, I can guarantee that people will get burnt out. Just because that excitement wears away pretty quickly and you start seeing problems and problems, and that’s where there’s a high chance that the employees can get burnt.

Shane Hastie: Well let’s delve a little bit into that process area. One of the motivators that Daniel Pink, for instance, talks about is the need for autonomy, mastery and purpose. How do we balance autonomy with good process?

Balancing process and autonomy [10:17]

Satish Jayanthi: Everything has to be balanced well, right? I mean the real world is everything has to be balanced. I think none of these work by itself in extreme ways. So the autonomy basically, again, when you are constantly engaged with your own team and you’re listening, you would kind of understand what type of a person they are. Everybody’s different. How do they feel respected? What does autonomy mean for them, right? Does that mean “Don’t tell me how to do it, I’ll do it”? Or does it mean that, “Hey, let me talk to the business directly and understand the problem myself. You don’t have to be in the middle every time.” Whatever that is, you need to understand. Everybody works differently. You can’t fit everybody into the same thing, same mould. I think that is important. And assuming that these people can work with other people and all of that, and then on top of that, you’re giving that freedom to engage more deeply with the business. That will motivate people.

Because if you always stand in the way that, “Hey, everything has to go through me,” you become the bottleneck. And they also may not feel that they're that important in the organization, and that's a bad mix. There always has to be an overall process that everything should fit into, and that process can be established by taking input from everyone. Now, don't force a process on anybody; understand, have a healthy debate, get to the bottom of something that you want to set as a process, and once you all decide, then it becomes much easier to stick with it. And of course, leave some room to fine tune the process because the process may not be the same forever because things are changing around it. But if you have to balance, you have to do it in that way. I mean, you've got to involve all the stakeholders in this. It's hard. It's not easy. It's definitely hard. And that's why it takes so long and people give up or people just get too self-centric and then they lose track.

Shane Hastie: Let’s explore what you’ve done in your own organization in terms of building that culture where people do have that balance of autonomy and process and relationships and engagement and so forth. How have you built that?

The role of managers is to facilitate effective teamwork [12:28]

Satish Jayanthi: We are still relatively a small company, new company, and our hiring has been the biggest piece of the whole puzzle here. So we always get the best of the best people on the team to start with. And from there, my partner and I, we are on the same page regarding what our organization culture should be, how everyone should have the freedom to share opinions, debate and feel respected. So those are all the things that help each other out. So all of those things are in our principles. The structure, the organization structure is important, but it shouldn’t be viewed as a way to take advantage of something. Rather it should be viewed as helping other people for them to succeed. If you’re a manager, your main job is to facilitate, not to monitor how much time people are spending on the keyboard. That’s not the goal of anyone.

The main thing is, “Hey, is my employee or my colleague blocked by anything? Are they not happy?” It could be even a personal thing in their life that could be happening because everything, work affects life, life affects work. So it's all in the same thing, you have to address in the same way, pretty much. So again, going back to your question, the hiring piece is a big thing and our top leadership aligned on these principles is a super, super important thing. We all kind of keep talking about how important it is for our employees to have this kind of positive environment every day. We want people to look forward to coming to work. We don't have to tell them to work. They should be motivated to work on their own. And so far, we've been very successful. Now as the company gets bigger and bigger, we're growing very fast, it becomes harder. But we are aware of it. That's the good thing. We are aware of it. I personally have experienced it many, many times. I can see the red flags, or at least watch and keep an eye out for those red flags. So that's how we do it so far.

Shane Hastie: For somebody listening to this, you’ve just mentioned red flags. What are the sort of red flags they should be looking out for if they’re trying to create this safe culture in their organization or their team?

Indicators to look out for [14:41]

Satish Jayanthi: One red flag could be the engagement, how engaged they are in anything, like in the organization. I mean, these days people are working remotely and we are all on Slack. You could see somebody who’s not engaged at all. This is just an example. We don’t do this internally or I’m not saying this is the way to do it, but I’m just saying an example is if somebody’s not engaged, they would probably not participate in any discussions on Slack for some time. And as a manager, you shouldn’t approach that alarmingly, you shouldn’t be going after them, like “What’s going on? Why are you not doing?” But instead, you should kind of understand what’s happening. Why is that? Is that because they feel like it’s a waste of time engaging in this or is it something else going on? And you have to have that conversation.

It’s not necessarily, I wouldn’t call it a red flag because a red flag indicates that it’s a bad thing. It’s more like an indicator that is helping you to kind of saying something is going on, look into it and understand. There might be a valid reason for that, but this is just one example. Or it could be something that they have said to someone and it hurt their feelings or just not having a good day or whatever. So you need to understand what that, as a manager, as a leader. I mean, those are some examples that I’ll be watching out. If you’re constantly hearing about somebody, that they’re not being a team player from multiple people, then you need to see what’s going on.

Shane Hastie: How do you amplify? How do you make things better as a manager?

Managers need to be constantly communicating with their teams [16:09]

Satish Jayanthi: As a manager, I think, again, listening is a big, big thing. So if based on what you understand from your employees, where they want to be, what they want to do, what they get excited about, if you can create or facilitate for them to achieve their goals, that would be the best thing to do. That's when people get super excited and they will continue that state of high motivation. And this has to be done in several ways. Some say do performance reviews and talk once every few months; I'm not a big fan of those formal performance review meetings and so on. You should be having pretty regular touch bases and some fun events and some team bonding events, and that constant engagement, as long as you can maintain that throughout, it'll automatically work as a push. If you have these gaps in communication, that's a problem because now you're losing track of what's going on. So it's important as a manager to have that continued communication and understanding both their frustrations as well as their excitement.

Shane Hastie: That is quite a significant commitment from a manager.

Satish Jayanthi: It is.

Shane Hastie: And again, this is, how do we teach our managers to do that well? Most of them may not have that as an inherent skillset.

Satish Jayanthi: Then they shouldn’t be a manager. I strongly believe that, actually. It’s not for everybody. See, the definition of manager has always been like, people think you manage people. The way I think is if there’s someone in the company that needs to be managed, that person doesn’t belong in the company. Nobody needs to be managed. Everybody should be motivated to do something. Because if you think as a manager that my job is to manage these five people, you’re a bad manager already. You should be looking at, I need to facilitate, help and motivate them. That’s it. That’s the only way. Given the environment that we have today, like the remote environment, all of that, you can’t possibly be measuring or monitoring people like that. That just doesn’t work and doesn’t scale.

Shane Hastie: Facilitate, help and motivate. Those are very different skillsets.

Satish Jayanthi: Yeah, they are. They are different skillsets. And management is not easy, and I don’t claim, I’m not a management expert or anything, I just go with my intuition and feel and with what I felt when I was not a manager most of the times and how I felt left out sometimes, I felt like my opinion was not heard, I was stressed. I was in some good environments, some really bad environments, but all of those experiences, my personal experiences have just kind of taught me what is a good environment. It’s not that I went and took a degree in management or I don’t have any psychology degree or anything like that. This is just based on my intuition, my interaction for a long time, that’s for sure. For more than 20 years. And how I feel as a person and how I want to be, how I want to feel, and that’s the feeling I want our employees to have. That’s how I approach this.

Shane Hastie: Satish, thanks very much. Some really interesting points there. If people want to continue the conversation, where would they find you?

Satish Jayanthi: Well, they can reach me at my LinkedIn account and also they can email me anytime, I’m available.

Shane Hastie: Wonderful. Well, again, thanks very much.

Satish Jayanthi: Thank you.




Microsoft Introduces the Public Preview of Vector Search Feature in Azure Cognitive Search

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

At its annual Inspire conference, Microsoft recently announced the public preview of Vector search in Azure Cognitive Search, a capability for building applications powered by large language models. It is a new capability for indexing, storing, and retrieving vector embeddings from a search index.

Microsoft’s Vector Search feature through Azure Cognitive Search uses machine learning to capture the meaning and context of unstructured data, including images and text, to make search faster.

Liam Cavanagh, a principal group product manager of Azure Cognitive Search, explains in a Tech Community blog post:

Vector search is a method of searching for information within various data types, including images, audio, text, video, and more. It determines search results based on the similarity of numerical representations of data, called vector embeddings. Unlike keyword matching, Vector search compares the vector representation of the query and content to find relevant results for users. Azure OpenAI Service text-embedding-ada-002 LLM is an example of a powerful embeddings model that can convert text into vectors to capture its semantic meaning.

Diagram of vector representations of data (Source: Tech Community blog post)

Users can leverage the feature for similarity search, multi-modal search, recommendation engines, or applications implementing the Retrieval Augmented Generation (RAG) architecture. The latter is driven by the growing need to integrate large language models (LLMs) with custom data. For instance, users can retrieve relevant information using Vector search, analyze and understand the retrieved data, and generate intelligent responses or actions based on the LLM’s capabilities.
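The retrieval step described above can be sketched in a few lines of plain Python. This is a minimal illustration of similarity search, not Azure's implementation: the three-dimensional "embeddings" and document ids are invented for the example, whereas a real index would hold vectors produced by an embeddings model such as text-embedding-ada-002 (1,536 dimensions).

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, index, k=2):
    """Return the ids of the k documents most similar to the query vector."""
    scored = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 3-dimensional vectors invented for illustration.
index = {
    "doc-cats":    [0.9, 0.1, 0.0],
    "doc-dogs":    [0.8, 0.2, 0.1],
    "doc-finance": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # imagine this is the embedding of "pets"
print(top_k(query, index))  # the two animal documents rank first
```

In a RAG application, the documents returned by this step would then be passed to the LLM as grounding context for generating a response.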

Pablo Castro, a distinguished engineer at Microsoft, explains in a LinkedIn post:

Vector search also plays an important role in Generative AI applications that use the retrieval-augmented generation (RAG) pattern. The quality of the retrieval system is critical to these apps’ ability to ground responses on specific data coming from a knowledge base. Not only Azure Cognitive Search can now be used as a pure vector database for these scenarios, but it can also be used for hybrid retrieval, delivering the best of vector and text search, and you can even throw-in a reranking step for even better quality by enabling it.
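The post does not detail how hybrid retrieval merges the text and vector result lists; reciprocal rank fusion (RRF) is one widely used method for combining ranked lists, sketched below with invented document ids purely for illustration.

```python
def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: each document scores sum(1 / (k + rank))
    over every ranked list it appears in; higher total ranks first."""
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc-3", "doc-1", "doc-7"]   # from text (keyword) search
vector_hits  = ["doc-1", "doc-5", "doc-3"]   # from vector similarity search
print(rrf([keyword_hits, vector_hits]))
```

Documents that appear near the top of both lists, like doc-1 here, rise above documents that score well in only one, which is the intuition behind combining the two retrieval modes.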

Since Vector search is part of Azure Cognitive Search, it brings a range of additional functionalities, including faceted navigation and filters. Moreover, by utilizing Azure Cognitive Search’s Indexer, users can pull data from various Azure data stores, such as Blob Storage, Azure SQL, and Cosmos DB, into a unified, AI-powered application.

According to the company, some of the use cases for Vector search integrated with Azure AI are:

  • Search-enabled, chat-based applications using the Azure OpenAI Service
  • Conversion of images into vector representations using Azure AI Vision for accurate, relevant text-to-image and image-to-image search experiences
  • Fast, accurate retrieval of relevant information from large datasets to help automate processes and workflows

The technique of vectorization is gaining popularity in the field of search. It involves transforming words or images into numerical vectors, encoding their semantic significance, and facilitating mathematical processing. By representing data as vectors, machines can organize and comprehend information, swiftly identifying relationships between words located closely in the vector space and promptly retrieving them from vast databases containing millions of words.
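The notion of "closeness" in vector space can be made concrete with a distance function. The word vectors below are invented toy values; real embeddings would come from a trained model, but the principle is the same: semantically related words sit nearer to each other than unrelated ones.

```python
import math

def distance(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy vectors invented for illustration only.
vecs = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.88, 0.82, 0.12],
    "car":   [0.10, 0.20, 0.90],
}
print(distance(vecs["king"], vecs["queen"]) <
      distance(vecs["king"], vecs["car"]))   # related words are closer
```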

Amazon and Google also use the technique in their offerings: Google in Vertex AI Matching Engine and in managed databases like Cloud SQL and AlloyDB, and Amazon in OpenSearch. In addition, Microsoft has another offering leveraging vectorization in Azure Data Explorer (ADX).

Lastly, Vector search in Azure Cognitive Search is currently available in all regions at no additional cost. In addition, Vector search code samples are available in a GitHub repo.
