Mobile Monitoring Solutions


Presentation: Modern API Development and Deployment, From API Gateways to Sidecars

Matt Turner

Article originally posted on InfoQ. Visit InfoQ

Transcript

Turner: Do you find you have trouble with the APIs in your organization? Do services struggle to interact with each other because their APIs are misunderstood, or they change in breaking ways? Could you point to the definition of an API for a given service, let alone to a catalog of all of them available in your estate? Have you maybe tried to use an API gateway to bring all of this under control, but do you find it doesn’t quite feel like the right tool for this job? I’m Matt Turner. In this talk, I’m going to talk about some more modern techniques and tools for managing your APIs. We’re going to see what API gateways really do, what they’re useful for, and where they can be replaced by some more modern tooling.

Outline

In the first half of this talk, I’m going to talk briefly about what an API is, just to get us back on the same page, and about what an API gateway is: where they came from, and why I’m talking about them. Then we’re going to look at sidecars, which I think are a more modern, distributed, microservice-style alternative. Then we’ll look at some more tooling around the lifecycle of an API. You might be familiar with some of these stages in the design and development of an API, and I think API gateways have been pressed into helping us with a lot of them as well. Actually, rather than using an API gateway or a sidecar, rather than using a component of the network, an online, active component at runtime, there’s a bunch of more modern tooling now that enables us to handle these concerns earlier in the dev cycle, to shift them left. I’m going to be covering that as well.

API Gateways and Sidecars

API gateways and sidecars. Let’s very briefly go over what I mean by API, at least for the purposes of this talk. Let’s start off easy: what is an API, and what isn’t? The definition I’ve come up with is that an API is the definition of the external interface of some service. Service is open to interpretation: your workload, endpoint, whatever you want to call it. There’s a way of interacting with it, of talking to it from the outside. That’s what an API is. Wikipedia says that an API is a document or a standard that describes how to build or use a software interface, and that that document is called an API specification. It says that the term API may refer to either the specification or the implementation. What I really want us to agree on, for the purposes of this talk, is that an API is not a piece of code or a running process. I’ve heard people talk about deploying an API, when what they mean is deploying a pod to Kubernetes, deploying some workload to a compute environment. What they’re really saying is that the service isn’t a batch processing job but some daemon that has an API, and I think we get confused if we start using the word API for the workload itself. An API, by my definition, how do we define it? An API is defined by an IDL, an Interface Definition Language. You may have come across OpenAPI Specs, formerly called Swagger, or protobuf files, which include the gRPC extensions. There’s Avro, and Thrift, and a lot of other protocols have their own IDLs as well. This is like a C++ header file, a Java interface, a C# interface.

Very briefly, a workload, a running process, a pod can have an API that we can talk to. If you’ve got more than one of those, you put something in front that spreads the work between them. This is your classic load balancer. You can have more than one different API on different workloads, and then your front-facing thing, whatever it is, can expose both of them. You can have a slightly fatter, more intelligent front-facing thing that combines those backend APIs in some way. Think of this as a GraphQL resolver, or even a load balancer or an API gateway doing path routing. It’s worth saying that a pod, a service, a workload can actually implement two different APIs, like a class having two interfaces, or maybe two versions of the same API, which is something that’s going to be quite relevant for us.

What Is an API Gateway?

What is an API gateway? They have a set of common features. They are network gateways; the clue is in the name. They do things like terminate TLS. They load balance between services. They route between services, doing service discovery first to find out where those services are. They can do authentication, with client certificates, JWTs, and whatever. They can do authorization: allow listing and block listing of certain hosts, certain paths, certain headers, whatever, because it’s at layer 7. They do security-related stuff; an obvious example would be injecting CORS headers. They can rate limit and apply quotas. They can cache. They can provide a lot of observability, because we send all traffic through them, so we get a lot of observability for free. Then there are some common features, more relevant to this talk, that I’ve pulled out of a lot of those different product specifications: things like the ability to upload one of those IDL files, like an OpenAPI Spec, and have the API gateway build an internal object model of your API, hosts, paths, methods, even down to a body schema, so enforcing the schema of request and response bodies coming in and going out. A lot of them also have some support for versioning APIs and different stages of deployment, so maybe a test, a staging, and a prod version of an API. A lot of them will also do format translations, maybe gRPC to SOAP, or gRPC to “REST,” JSON over HTTP. They can do schema transforms: they might take a structured body like a JSON document in one format and rearrange some of the fields, rename some of the fields. They can manipulate metadata as well, so insert, modify, delete headers.
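To make that "internal object model" idea concrete, here is a minimal sketch, in Python, of how a gateway might flatten an uploaded OpenAPI document into a route table of hosts, paths, and methods. The spec fragment and the route-table shape are invented for illustration; no real product works exactly like this.

```python
# Hypothetical sketch: build a gateway-style object model
# ({(path, method): operation info}) from an OpenAPI document.

def build_route_table(openapi: dict) -> dict:
    """Flatten an OpenAPI document into {(path, METHOD): operation} entries."""
    routes = {}
    for path, methods in openapi.get("paths", {}).items():
        for method, operation in methods.items():
            routes[(path, method.upper())] = {
                "operation_id": operation.get("operationId"),
                # The request-body schema, if declared, is what a gateway
                # could later enforce on the wire.
                "request_schema": operation.get("requestBody", {})
                    .get("content", {})
                    .get("application/json", {})
                    .get("schema"),
            }
    return routes

spec = {
    "openapi": "3.0.0",
    "paths": {
        "/users": {
            "post": {
                "operationId": "createUser",
                "requestBody": {"content": {"application/json": {
                    "schema": {"type": "object", "required": ["email"]}}}},
            },
            "get": {"operationId": "listUsers"},
        }
    },
}

table = build_route_table(spec)
print(sorted(table))  # [('/users', 'GET'), ('/users', 'POST')]
```

A real gateway would go much further (parameters, response schemas, auth scopes), but the essential move is the same: the IDL file becomes a data structure the proxy can enforce against.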

Why Am I Talking About This?

Why am I talking about these things? Back in the day, we had a network and we would probably need some load balancing. Once we got beyond a toy example, we’d have certain services, in this case I’m showing a database, where we needed more than one copy running, for redundancy and for load-carrying ability. We’d have multiple clients as well that needed to talk to those things. We’d stick a load balancer in the network, logically in the middle. These were originally hardware pizza boxes, think an F5, or a Juniper, or a NetScaler. Then there were some famous software implementations, something like HAProxy, and NGINX is quite good at doing this. They balance load service to service, but you also needed one of these at the edge so that external clients could reach your services at all, across that network boundary, and discover your services. Then they would offer those features like load balancing between different copies of the service. Here, I’ve just shown that middle proxy copied, we’ve got another instance of it, because it itself needs load balancing. This is probably the ’90s or the early 2000s: this is your web tier, and then there’s a load balancer in front of your database tier. Because this thing is a network gateway, because it’s exposed out on the internet, facing untrusted clients on an untrusted network, it needs to gain more features. Things like TLS termination, just offering a serving cert so you can do HTTPS; rate limiting; authentication; authorization; and then more advanced stuff like bot blocking, and web application firewall features like detecting injection payloads, that kind of stuff. Although it’s a very blurry definition, I think all of these features made a load balancer into what we can nebulously call an API gateway.

Now that we’ve moved to microservices, our monoliths are becoming distributed systems, and we need a bunch of those features inside the network as well. Observability doesn’t come from attaching a debugger to the monolith anymore, it comes from looking at the requests that are happening on the wire. Things that were a function call from one class into another, that were effectively instant and infallible, are now network transactions that can fail, that can time out, that can be slow, that can need retrying. Then there’s all of this routing. Yes, we used to have dependency injection frameworks: if you wanted to read different config or access a different database in test, you would just use dependency injection or a different build profile. We can’t do any of that stuff in the processes themselves anymore, because they’re too small and too simple. A lot of that stuff is now being done on the network. That fairly dumb load balancer in the middle has gained a lot of features and it’s starting to look a lot like an API gateway. In fact, a lot of API gateway software is being pressed in to perform this function.

I think the issue with this is that it can do a lot. The issue is that it can maybe do too much, and it’s probably extensible as well: you just write a bit of Lua or something to plug in. It’s very easy for this to become an enterprise service bus from 2005. It’s this all-knowing traffic director that does all the service discovery, holds all the credentials for things to talk to each other, and is the only policy enforcement point. All of those extensions, all of those things that we can plug in, even the built-in features like the ability to just add a header here or manipulate a body there, just rename a field, just so that a v1 request looks like a v2 request. All of that stuff strikes the terror of ESBs into me, and makes me think of those systems that accreted so much duct tape that nobody really understood them. Of course, that duct tape isn’t really duct tape, it’s part of a running production system, so you’ve got to consider it production code.

It’s also not giving us the security we think it is, in this day and age. The edge proxy works. The edge proxy secures us from things on the internet, because it’s the only way in and out of the network. That’s fine as long as all of our threats are outside, and it’s the only way in because the subnet is literally unroutable. Bitter experience shows us that not all threats are on the outside: compromised services attempting lateral movement, probably because they’ve had a supply chain attack or a disgruntled employee has put some bad code in, or just somebody on the internal network because they’ve plugged into an Ethernet port or cracked the WiFi or something. There are more of these devices on our networks now with Bring Your Own Device, and with cloud computing, and with one monolith becoming 3,000 microservices, there are just more units of compute, more workloads that we have to worry about that could be a potential threat. While an API gateway being used as a middle proxy might do authentication, it’s kind of opt-in: you’ve got to send your traffic through it with the right headers, and if you don’t have the right headers, what’s to stop you going sideways like this?

Sidecars

Onto sidecars. The basic sidecar idea is taking this middle proxy that’s got a bunch of logic, logic that I agree we need, I just don’t think it should live here, and moving it to each of the services. Some of these are maybe a little bit obvious, like rate limiting: each one of those sidecars can rate limit on behalf of the service it’s running alongside. Load balancing is one people often get a bit confused about. You don’t need a centralized load balancer. Each service, in this case each sidecar, can know about all of the other potential services and can choose which one to talk to. They can coordinate through something called a lookaside load balancer, to make sure that their random number generators aren’t all going to make them hit the same backend, basically. They can ask the lookaside load balancer which of the potential backends currently has the least connections from all clients. Client-side load balancing is actually perfectly valid and viable for those backend services, for internal trusted services where we have control over the code.

I’ve said sidecar a few times, but I should really talk more generally about that logic. That stuff the API gateway was doing for us on the network, retries, timeouts, caching, authentication, rate limiting, all that stuff, the first thing we did with it was factor it out into a library. Because, like that Kong slide showed earlier, we don’t want every microservice to have to reimplement that code. We don’t want the developer of each service to have to reinvent that wheel and type the code out, or copy and paste it in. We factor it out. There are a couple of early examples of this: Hystrix and Finagle, which are both very full-featured libraries that did this stuff. The problem with those libraries is that they’re still part of the same process. They’re still part of the same build, so deployments and upgrades are coupled. They’re reams of code, probably orders of magnitude more code than your business logic, so they actually have bug-fix updates a lot more often, and every time they do, you have to roll your service. You also, practically speaking, need an implementation in your language. Hystrix and Finagle were both JVM-based. If you want to do Rust or something, then you’re out of luck unless a decent implementation comes along. So we factored it out even further, to an external process, a separate daemon that could do that stuff, a process that can therefore have its configuration loaded, be restarted, be upgraded independently.
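As a flavor of the kind of logic that got factored out, here is a minimal Python sketch of retries with exponential backoff under a deadline. It is illustrative only: real libraries like Hystrix and Finagle add circuit breaking, bulkheads, selective retry policies, and much more.

```python
# Minimal sketch of library-style resilience logic: retry a call with
# exponential backoff, giving up when an overall deadline is exceeded.
import time

def call_with_retries(fn, attempts=3, base_delay=0.01, timeout=1.0):
    """Call fn(), retrying on exception, until attempts or the deadline run out."""
    deadline = time.monotonic() + timeout
    last_error = None
    for attempt in range(attempts):
        if time.monotonic() >= deadline:
            break
        try:
            return fn()
        except Exception as err:  # a real library would retry only safe errors
            last_error = err
            # Back off exponentially, but never sleep past the deadline.
            time.sleep(min(base_delay * 2 ** attempt,
                           max(deadline - time.monotonic(), 0)))
    raise TimeoutError(f"all retries failed: {last_error}")

# Usage: a flaky callable that fails twice, then succeeds on the third try.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = call_with_retries(flaky)
print(result)  # ok
```

The coupling problem the talk describes is visible even here: if this function had a bug, every service compiled against it would need rebuilding and redeploying, which is exactly what pushing the logic into a separate sidecar process avoids.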

What software can we find that will do that for us? It turns out it already existed. This is basically an HTTP proxy, running as a reverse proxy. Actually, it could be running as a forward proxy on the client side and then a reverse proxy on the server side, the called side. Even Apache can do this if we press it, but NGINX and HAProxy are better at it. The new kid on the block is a cloud-native HTTP proxy called Envoy, which has a few advantages, like being able to be configured over its API in an online fashion. If you want to change a setting on NGINX, you have to render out a config file, put it on disk, and hit NGINX with SIGHUP, I think. I don’t think you actually have to quit it, but you do have to hit it with a signal, and it will potentially drop connections while it’s reconfiguring itself. Envoy applies all of that stuff live. Envoy is a good implementation of that, and we can now use any language we want, nice cool modern programming languages, rather than being stuck in the JVM.

This is great for security, too. I talked before about how that API gateway middle proxy was opt-in, and could be bypassed fairly easily. If you’re running Kubernetes and each of these black boxes is a pod, then your sidecar is a sidecar container, a separate container in the same network namespace. The actual business logic, the application process, is unroutable from the outside. The only way traffic can reach it is through the pod’s single IP, and therefore through the sidecar, because that’s where all traffic is routed. These sidecars are also going to be present in all traffic flows in and out of the pod. Whatever tries to call a particular service, no matter where on the network it is or how compromised it is, is going to have to go through that sidecar; it can’t opt out. We therefore apply authentication, authorization, and rate limiting to everything, even stuff that’s internal to our network that would previously just have been trusted because it’s on the same network, in the same cluster, it came from a 192.168 address. We don’t trust any of that anymore, because it’s so easy for this stuff to be compromised these days. This is an idea called zero trust, because you trust nothing, basically. I’ve put a few links up for people who want to read more on that. Sidecars can also cover the egress case. Whether you’re trying to reach out to the cloud or to things on the internet, again, all traffic out of any business logic has to go through the sidecar.

These sidecars can do great things, but they can be quite tricky to configure. NGINX config files can get quite long if you want to do a lot. Envoy’s config is very long and fiddly; it’s not really designed to be written by a human. Each of these sidecars is going to need a different config depending on the service it’s serving: each one allows different connections, applies different rate limits, whatever. Managing that configuration by hand is a nightmare. We very quickly came up with the idea of a control plane: a daemon that provides a high-level, simple config API. We can give it high-level notions like "service A can talk to service B at 5 RPS," and it will then go and render the long-winded individual configs needed for the sidecars, and go and configure them. This control plane, in addition to the sidecar proxies, gives us what’s called a service mesh. Istio is probably the most famous example; there are others out there like Linkerd, or Cilium, or AWS’s native App Mesh. If you’re running your workloads in Kubernetes, then you can get this service mesh solution quite easily. You can make container images that contain just your business logic, and write deployments that deploy just your business logic, just one container in a pod with your container image in it. Then, using various Kubernetes features, you can have those sidecars automatically injected. The service mesh gets its configuration API and storage hosted for free in the Kubernetes control plane. It can be very simple to get started with these things if you’re in a friendly, high-level compute environment like Kubernetes.
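The control-plane idea above can be sketched in a few lines: take a high-level intent and render per-sidecar config from it. The intent tuples and config shape here are invented for illustration; a real mesh like Istio renders far more detailed Envoy configuration and pushes it over xDS.

```python
# Toy control plane: turn intents like ("service-a", "service-b", 5 RPS)
# into the per-sidecar configuration each proxy would receive.

def render_sidecar_configs(intents):
    """intents: list of (client, server, rps) -> {service: sidecar config}."""
    configs = {}
    for client, server, rps in intents:
        # The client's sidecar needs an outbound route to the server.
        configs.setdefault(client, {"outbound": [], "inbound_allow": {}})
        configs[client]["outbound"].append({"to": server, "rate_limit_rps": rps})
        # The server's sidecar needs to allow (and rate limit) that client.
        configs.setdefault(server, {"outbound": [], "inbound_allow": {}})
        configs[server]["inbound_allow"][client] = rps
    return configs

configs = render_sidecar_configs([("service-a", "service-b", 5)])
print(configs["service-b"]["inbound_allow"])  # {'service-a': 5}
```

The point is the division of labor: humans state one line of intent, and the control plane fans it out into the fiddly per-proxy detail nobody wants to write by hand.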

Just to recap, I think we’ve seen what an API gateway is as a piece of network equipment. What features it has. Why they used to be necessary. They are still necessary, but why in a microservice world having them all centralized in one place is maybe not the best thing. How we can move a lot of those features out to individual services through this sidecar pattern.

API Lifecycle

I want to talk about some of the stuff we haven’t touched on: some of those API gateway features like enforcing request and response bodies, doing body transformation, and all that stuff. Because, as I was saying, some API gateways do offer these features, but I don’t think that’s the right place for them. I don’t think they belong in infrastructure networking, and I don’t think they should be moved to a sidecar either. I think they should be dealt with completely differently. I’m going to go through the various stages of an API’s lifecycle and look at different tooling that we can use to help us out in each of those stages.

We want to come along and design an API. The first thing is that you should design your API up front. This idea of schema-driven development, of sitting down and writing down what the API is going to be, because that’s the service’s contract, is really powerful. I’ve found it very useful personally. It can also be great for more gated development processes. If you need to go to a technical design review meeting to get approval to spend six weeks writing your service, or you need to go to a budget holder’s review meeting to get the investment, then going in with a sketch of the architecture and the contract, the schema, I’ve found to be a really powerful thing. Schema-driven development, I think, is really useful. It really clarifies what the service is for, what it does, and what services it’s going to offer to whom. If you’re going to be writing the definitions of REST interfaces, then you’re almost certainly writing OpenAPI files; that’s the standard. You can do that with Vim, or with your IDE and various plugins. There are also software packages out there that support this first-class: things like Stoplight, and Postman, and Kong’s Insomnia. If you’re describing gRPC APIs, which I would encourage you to do, I think gRPC is great, it’s got a bunch of advantages, not just around API design but around actual runtime network usage, then you’re going to be writing proto files. Again, you can use Vim, or your IDE and some plugins. I don’t personally know of any first-class tools that support this at the moment.

Implementation

What happens when I want to come and implement the service behind this API? The first thing to do, I think, is to generate all of the things: generate stubs, basically an SDK, for the client side, and generate stub hooks on the server side. All of that boilerplate code for hosting a REST API, where you open a socket, attach an HTTP mux router thing, register logging middleware, and all that stuff: that’s boilerplate code. You can copy and paste it, you can factor it out into a microservices framework, but you can also generate it from these API definition files. They will leave you with code where you just have to write the handlers for the business logic. Same on the client side: you can generate your client libraries, SDKs, whatever you want to call them, library code where you write the business logic and then make one function call to hit an API endpoint on a remote service. You can just assume that it works, because all of the finding the service, serializing the request body, sending it, retrying it, timing it out, all of that kind of stuff is taken care of. Again, you can just focus on writing business logic.

One of the main reasons for doing this, I think, is that I often see API gateways used for enforcing request schemas on the wire. Perhaps service A will send a request to service B, and the API gateway will be configured to check that the JSON document being sent has the right schema. This just becomes unnecessary if all you’re ever doing is calling autogenerated client stubs to send requests and hooking into autogenerated service stubs to send responses. It’s not possible to send the wrong body schema, because you’re not generating and serializing it yourself. Typically, you’ll make an instance of a class and fill in the fields, so you have to fill in all of the fields and you can’t add any extra ones. Then the stubs take it from there: they serialize it, and they do any field validation, like integer size, or string length, or whatever. By using these client stubs, a whole class of errors just goes away, and what’s left gets caught a lot earlier.
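Here is a small Python sketch of why typed stubs make wire-level schema enforcement redundant. The `CreateUserRequest` type and its rules are hypothetical, standing in for what an OpenAPI or protobuf code generator would emit: you cannot omit a required field or add an extra one, and validation runs at construction time, long before anything hits the network.

```python
# Sketch of a "generated" request stub: a typed object whose constructor
# enforces the schema, so a malformed body can never be serialized.
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class CreateUserRequest:
    email: str            # required: construction fails without it
    display_name: str = ""  # optional, with a default

    def __post_init__(self):
        # Generated code can also carry fine-grained rules from the IDL.
        if "@" not in self.email:
            raise ValueError("email must contain '@'")

    def serialize(self) -> str:
        # The stub, not the caller, decides the wire format.
        return json.dumps(asdict(self))

wire = CreateUserRequest(email="matt@example.com").serialize()
print(wire)
```

Passing an unknown keyword argument raises a `TypeError`, and a bad email raises a `ValueError`, both at build-or-test time in the calling service, which is the "shift left" the talk is describing.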

For generating stubs from OpenAPI documents for REST, there are a few tools out there. There’s Azure AutoRest, which gets a fair amount of love, but only supports a few languages. There’s a project called OpenAPI Generator, whose main advantage is that it’s got templates for a zillion languages; in fact, for Python, I think it’s got four separate templates, so you choose your favorite. I do have to say, from a lot of practical experience, that most of those templates aren’t very good. The code they emit is very elaborate, very complicated, very slow, and just not idiomatic at all. Your mileage may vary, and you can write your own templates, although that’s not easy. It’s a nice idea, but I’ve not had a great amount of success with that tool. Even the AWS API Gateway can do this. It’s not a great dev experience, but if you take an OpenAPI file and upload it into AWS API Gateway, which is the same as clicking through the UI and making paths and methods and all that stuff, there’s an AWS CLI command that’ll get you a stub. It only works for two languages, and they seem pretty basic. For doing the same with gRPC, there’s the original upstream Google protoc compiler, which has a plugin mechanism: there needs to be a plugin for your language, and there are plugins for most of the major languages. It’s fine, but there’s a newer tool called Buf, which I think is a lot better.

When we’ve done that, we hopefully really do get to the point of "just add business logic." We can see the service on the bottom left is a client that’s calling the service on the top right, which is a server. That distinction often becomes irrelevant in microservices, but in this case, we’ve got one thing calling another to keep it simple. The server side has business logic, and it really can be just business logic, because network concerns like rate limiting and authentication are taken care of by the sidecar, and things like body deserialization and body schema validation are taken care of by the boilerplate, the "open this socket and set the buffer a bit bigger so we can go faster" stuff, all of which lives in the generated service stub code. Likewise, on the client, the sidecar is doing retries and caching for us. The business logic here calls on three separate services, and it has a generated client stub for each one.

Deployment

When we want to deploy these services, what about the schema validation that we used to configure on an API gateway? I’m going to say: don’t, because I’ve talked about how we can shift that left, and how we don’t make those mistakes if we use generated code. I’ve actually already covered it, but the IDLs tend only to be expressive to the granularity of "the field email is a string." There are enhancements and plugins, I think for OpenAPI, certainly for proto, where you can express more validation. You can say that the field email is a string, it’s a minimum of six characters, it’s got to have an @ in the middle, it’s got to match a certain regular expression. That fine-grained stuff can all be done declaratively in your IDL, and therefore generated into validation code at build time.
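The email example can be sketched as data-driven validation in Python. The rule format below is invented for illustration; in practice these rules live in the IDL itself (protoc-gen-validate annotations for proto, or OpenAPI's `minLength`/`pattern` keywords) and the checking code is generated, not handwritten.

```python
# Sketch of declarative, IDL-style field rules checked by generated code
# rather than by an API gateway at runtime.
import re

FIELD_RULES = {
    # "email is a string, minimum 6 characters, matching a regex"
    "email": {"type": str, "min_len": 6, "pattern": r".+@.+"},
}

def validate(payload: dict) -> list:
    """Return a list of violations; an empty list means the payload passes."""
    errors = []
    for field, rules in FIELD_RULES.items():
        value = payload.get(field)
        if not isinstance(value, rules["type"]):
            errors.append(f"{field}: wrong type")
            continue
        if len(value) < rules["min_len"]:
            errors.append(f"{field}: shorter than {rules['min_len']}")
        if not re.fullmatch(rules["pattern"], value):
            errors.append(f"{field}: does not match pattern")
    return errors

print(validate({"email": "a@b.co"}))  # []
print(validate({"email": "oops"}))    # two violations
```

Because the rules are declared once in the schema, client and server stubs in every language can enforce exactly the same constraints, which is much harder to guarantee with per-gateway configuration.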

Publication

What happens when we want to publish these things? Buf is just one example, Stoplight and Postman offer this as well, I think, but Buf has a schema registry. I can take my protobuf file on the left and upload it into the Buf schema registry; there’s a hosted version, or you can run your own. You can see that it’s done what Rust docs or Go docs do: it’s rendered this nicely as documentation with hyperlinks. Now I’ve got nice documentation for what this API is, what services it offers, and how I should call them, and by looking at the whole schema registry I’ve got a catalog of all the APIs available in my organization, all the services I can get from all the running microservices. This is really useful for discovery. The number of times in previous jobs I’ve had people say, “I’d love to write that code but this piece of information isn’t available,” or, “I’m going to spend a week writing the code to extract some data from the database and transform it,” when a service to do that already exists. We can find them a lot more easily now.

There’s this idea of ambient APIs. You just publish all your schemas to the schema registry, and then others can search them and find them. You can take that stub generation and put it in your CI system, so those stubs are automatically built in CI every time the IDL file, the proto definition, changes. The built stubs are pushed to pip, or npm, or your internal Artifactory, or whatever, so that if I’m writing a new service that wants to call a service called foo, and I’m writing in Python, I just pip install foo-service-client. I don’t have to go and grab the IDL and run the tooling on it myself and do the generation. I don’t have to copy the code into my code base. I can depend on it as a package. Then I can use something like Dependabot to automatically upgrade those stubs. If a new version of the API is published, then a new version of the client library that can call all of the new methods will be generated, and Dependabot can come along and suggest, or even do, the upgrade for me.

Modification

Now I want to modify an API; it’s going through its lifecycle. We should version APIs from day one, and we should use semver to do it. I’m probably teaching you to suck eggs, but it’s worth saying. When I go from 1.0 to 1.1, that’s a non-breaking change; we’re just adding a method. Like I’ve already said, CI/CD will spot the new IDL file, and generate and publish new clients. Dependabot can come along, and it should be safe to automatically upgrade the services that use them. Then, next time you’re hacking on your business logic, you can just call the new method. You press "clientlibrary." in your IDE, and autocomplete will tell you the latest set of methods that are available, because they’re local function calls on that SDK. When I want to go to v2, that’s a breaking change: say I’ve removed a method or renamed a field. Again, CI/CD can spot the new IDL file, generate a new client, with a v2 version on the package now, and publish that. People are going to have to do this dependency upgrade manually, because if the API changes on the wire in a breaking way, then the API of the SDK is also going to change in a breaking way. You might be calling the method on the SDK that called the endpoint that’s been removed. This is a potentially breaking change, so people have to do the upgrade manually and deal with any fallout in their code. The best thing to do is to not make breaking changes: just go to 1.2 or 1.3, and never actually have to declare v2. We can do this with breaking change detection, with the Buf tool. This is one of the reasons I like it so much; you can do this for protobuf files. Given an old and a new protobuf file coming through your CI system, Buf can tell you whether the difference between them is a breaking change or an additive one. That’s really nice for stopping people and making them think: "I didn’t mean for that to be a breaking change." Or, "Yes, that is annoying, let me think whether I can do this in a way that isn’t breaking on the wire."
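The idea behind breaking-change detection can be shown with a toy Python version. Here the "API surface" is just a set of method names; Buf actually inspects far more (fields, types, tag numbers, wire compatibility), so this is a deliberately simplified illustration, not how Buf works internally.

```python
# Toy breaking-change detector: compare an old and a new API surface
# and classify the diff the way a CI gate would.

def classify_change(old_methods: set, new_methods: set) -> str:
    removed = old_methods - new_methods
    added = new_methods - old_methods
    if removed:
        return "breaking"   # callers of a removed method will fail: bump major
    if added:
        return "additive"   # safe: bump the minor version (1.1 -> 1.2)
    return "none"

print(classify_change({"GetUser", "ListUsers"},
                      {"GetUser", "ListUsers", "DeleteUser"}))  # additive
print(classify_change({"GetUser", "ListUsers"},
                      {"GetUser"}))                             # breaking
```

Wired into CI, a "breaking" result can fail the build, forcing exactly the pause the talk wants: did you mean to break the wire format, or can this be done additively?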

Deprecation

How do we deprecate them? If I’ve had to make a v2.0, I don’t really want v1.0 to be hanging around for a long time, because realistically I’m going to have to offer both. It’s a breaking change, so maybe all the clients aren’t up to date yet. I want to get them up to date, I want to get them all calling v2, but until they are, I’ve got to keep serving v1, because a v1-only client is not compatible with my new v2. As I say, keep offering v1. A common way to do this is to take your refactored, improved code that natively has a v2 API, and write an adapter layer that keeps serving v1. If the code has to be very different, then you can have two different code bases and two different pods running. The advantage of the approach I’ve been talking about is that you can go and proactively deprecate those older clients. To do that, you need to make sure that no one’s still using v1. We’ve got v2 now, we want everybody to be using v2, we want to turn off v1, so we want to delete the code in the pod that’s offering v1, or turn off the old v1 pods, or whatever it is. We can’t do that if people are still using it, obviously, or potentially still using it. The number of people I’ve seen try to work out whether v1 is still being used by just looking at logs or sniffing network traffic. That’s only data from the last five minutes, or seven days, or something. I used to work in a financial institution; that doesn’t tell you whether, if you turn it off now, in 11 months, when it comes around to year-end, some batch process or some subroutine is going to run that expects to be able to call v1, and it’s going to blow up and you’re going to have a big problem.

If we build those client stubs into packages, and we push them to something like a pip registry, then we can use dependency scanners, because we can see which repos in our GitHub are importing foo-service-client version 1.x. If we insist that people use client stubs to call everything, and we insist that they get those client stubs from the published packages, then the only way anybody can possibly ever call v1, even if they’re not doing it now, is if their code imports the foo autogenerated client library v1. We can use a security dependency scanner to go and find that, and then we can go and talk to them. Or at least we’ve got visibility: even if they can’t or won’t change, we know it’s not safe to turn off v1.
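This dependency-manifest approach can be sketched in Python. The repo names and the `foo-service-client` package below are hypothetical, but the principle matches what real scanners do: look at what each repo declares it depends on, not at recent traffic.

```python
# Sketch: find repos that could still call v1 by scanning pinned
# requirements for major version 1 of the autogenerated client package.
import re

repos = {
    "billing": ["requests==2.31.0", "foo-service-client==1.4.2"],
    "reports": ["foo-service-client==2.0.1"],
    "web": ["flask==3.0.0"],
}

def still_on_v1(repos: dict, package: str) -> list:
    """Return the repos pinning <package> at major version 1."""
    hits = []
    for repo, requirements in repos.items():
        for req in requirements:
            match = re.fullmatch(rf"{re.escape(package)}==(\d+)\..*", req)
            if match and match.group(1) == "1":
                hits.append(repo)
    return sorted(hits)

print(still_on_v1(repos, "foo-service-client"))  # ['billing']
```

Unlike sniffing traffic, this answers the year-end batch-job question: a repo that still declares the v1 client can call v1 at any time, whether or not it has done so recently.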

Feature Mapping

This slide basically says, for all of the features you're probably getting from an API gateway, where should each one go? There are actually a couple of cases where you do need to keep an API gateway for those kinds of features: things like an advanced web application firewall, or advanced AI-based bot blocking. I haven't seen any sidecars that do that yet. That product marketplace is just less mature; it's mostly open source software at the moment, and these are big, heavy R&D value-adds, so you might want to keep a network gateway for them. For the incidental stuff, it's a question of whether you want to move that code into the service itself, into the business logic, whether you want to use a service mesh sidecar, or whether you want to shift it left.

Recap

I think API gateway is a nebulous term for a bunch of features that have been piled into what used to be ingress proxies. These features are useful, and API gateways are being used to provide them, but they're now being used in places they're not really suited to, like the middle of a microservices architecture. Service meshes and this shift-left API management tooling can take on most of what an API gateway does. Like I said, API gateways still have a place, especially for internet-facing ingress, although you probably need something like a CDN and regional caching even further left than your API gateway anyway. In this day and age, you probably shouldn't have an API gateway exposed to the raw internet. Patterns like CDNs, edge compute, and service meshes are all standard now. I wouldn't be afraid of adopting them; this is a reasonably well-trodden path.

Practical Takeaways

You can incrementally adopt sidecars. The service meshes support incremental rollout to your workloads one by one, so I wouldn't be too worried about that. I think sidecars will get more of these API gateway features, like the advanced graph stuff, over time, so I don't think you're painting yourself into a corner, and you're not giving yourself a much bigger operational overhead forever. Check out what your CDN can do once you've only got those few features left in the API gateway. CDNs can be really sophisticated these days; you might find that they can do everything that's left, and you really can get rid of the API gateway. The shift-left management tooling can also be incrementally adopted. Even if you're not ready to adopt any of this stuff, if I've convinced you this is a good way of doing things and you think it's a good North Star, then you can certainly design with this stuff in mind.

Questions and Answers

Reisz: You’ve mentioned, for example, what problem are you trying to solve. When we’re talking about moving from an API gateway to a service mesh, if you’re in more of something where the network isn’t as predominant. You’ve got more of like a modular monolith, when do you start really thinking service mesh is a good solution to start solving some of your problems? At what point like in a modular monolith, is it a good idea to begin implementing a service mesh?

Turner: I do like a service mesh. There's no reason not to do this from the start. Even in the worst case, where you have just the one monolith, it still needs that ingress piece, that path in and out of the network to the internet, which is probably what a more traditional API gateway or load balancer is doing. A service mesh will bring an ingress layer of its own that can do a lot of those features. Maybe not everything, if you've subscribed to an expensive API gateway that does AI-based bot detection and the like, but if it satisfies your needs, it can do a lot. Then, as soon as you do start to split that monolith up, you never have to be in the position of writing any of that resiliency code, or suffering outages because of networking problems. You get the mesh in there, proxying all of the traffic in and out. You can get a baseline. You can see that it works and that it doesn't affect anything. Then, as soon as you make that first split, splitting one little satellite off or implementing one new separate service, you're already used to running this thing and operating it, and you get the advantages straight away.

Reisz: One of the other common things we hear when you first start talking about service mesh, particularly in that journey from modular monoliths into microservices, is the overhead cost. We're taking a bunch of those cross-cutting features, like retries and circuit breakers, out of libraries and putting them into reverse proxies, and that has an overhead cost. How do you answer people when they say, I don't want to pay the overhead of having a reverse proxy at the ingress to each one of my services?

Turner: It might not be for you if you're doing high-frequency trading or something similarly latency-sensitive. It's going to depend on your requirements, and on knowing them; maybe, if you haven't done it yet, going through that exercise of agreeing and writing down your SLAs and your SLOs, because this might be an implicit thing and people are just a bit scared. Write it down. Can you cope with a 500-millisecond response time? Do you need 50? Where are you at now? How much budget is left? That code is either happening anyway in a library, in which case the cycles are being used within your process and you're just moving them out, or maybe it's not happening at all, so things look fast, but are you prepared to swap a few dollars of cloud compute cost for a better working product? Yes, having it as a separate process, there are going to be a few more cycles used, because it's got to do a bit of its own gatekeeping. They do use a fair amount of RAM typically; that's a cost-benefit thing. They do have a theoretical throughput limit, but what I'd say is that it's probably high: these proxies do one job and they do it well. Envoy is written in C++; the Linkerd one is written in Rust. These are high-performance systems languages. The chances of their cap on throughput being lower than that of your application, which may be written in Java, or Python, or Node, is actually fairly low. Again, it depends on your environment. Measure and test.
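For context, the resiliency code being moved out of process is small but easy to get wrong. A minimal sketch (policy values invented for illustration) of the retry-with-backoff logic that a sidecar would otherwise handle for you:

```python
import time

# Minimal retry-with-exponential-backoff wrapper: the kind of
# cross-cutting logic that either lives in a library inside your
# process, or moves out into a sidecar proxy. The attempt count and
# delays here are illustrative only.
def with_retries(call, attempts=3, base_delay=0.1):
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# Usage: a flaky upstream that succeeds on the third try.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("upstream unavailable")
    return "ok"

print(with_retries(flaky))  # ok
```

With a sidecar, this policy becomes configuration on the proxy instead of code in every service, which is exactly the cycles-for-convenience trade being discussed.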

Reisz: It’s a tradeoff. You’re focusing on the business logic and trading off a little bit of performance. It’s all about tradeoffs. You may be trading off that performance that you don’t have to worry about dealing with the circuit breakers, the retries. You can push that into a different tier. At least, those are some of the things that I’ve heard in that space.

Turner: Yes, absolutely. I think you're trading a few dollars for the convenience and the features, and trading some milliseconds of latency for the same thing. If it's slowing you down a lot, that's probably because you're introducing this stuff for the first time, and it's probably well worth it. The only place where it's not a tradeoff but a straight-up hindrance is probably that cap on queries-per-second throughput. Unless you've got some well-optimized C++, you're writing a trading system or something, the chances are it's not your bottleneck and it won't be, so there really isn't a tradeoff there.

Reisz: Whether there's a sidecar or not, we've been talking about the Istio, Envoy sidecar model. There are interesting things happening with sidecar-less service meshes. Any thoughts or comments there?

Turner: It’s a good point. This is the worst case. The model we have now gets things working. The service mesh folks have been using Envoy. It is very good at what it does, and it already exists. It’s a separate Unix process. Yes, I think things aren’t bad at the moment. With BPF moving things into the kernel, there’s this thing called the Istio CNI. If you’re really deep into your Kubernetes, then the Istio folks have written a CNI plugin which actually provides the interface into your pod’s network namespace so that you don’t have to use iptables to forcefully intercept the traffic, which means you save a hop into kernel space and back. Yes, basically technological advancement is happening in this space. It’s only getting better. You’re probably ok with the tradeoffs now. If you’re not, watch this space. Go look at some of the more advanced technologies coming out of the FD.io VPP folks, or Cilium, or that stuff.

Reisz: Any thoughts on serverless? Are these types of things all provided by the provider? There isn’t really like a service mesh that you can implement, is it just at the provider? What are your thoughts if some people are in the serverless world?

Turner: If I think of a serverless product, like Knative Serving or OpenFaaS, something that runs as a workload in Kubernetes, then as far as Kubernetes is concerned, that's opaque, and it may well be hosting a lot of different functions. If you deploy your service mesh in Kubernetes, you're going to get one sidecar that sits alongside that whole blob, so it will do something, but it's almost an ingress component into that separate little world, which may or may not be what you want. You may be able to get some value out of it; you'll get the observability piece, at least. I don't personally know of any service meshes that can extend into serverless, and I don't know enough about serverless to know what the individual platforms like Lambda or OpenFaaS offer natively.

Reisz: Outside of Knative, for example, they can run on a cluster.

Turner: Yes, but Knative is maybe its own little world, so Kubernetes will see one pod, but that will actually run lots of different functions. A service mesh is going to apply to that whole pod. It’s going to apply equally to all of those functions. It doesn’t have too much visibility inside.


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


Stock traders purchase a large number of MongoDB put options (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

On Wednesday, MongoDB, Inc. (NASDAQ: MDB) saw unusual options trading activity. Investors purchased 23,831 put options on the company's stock, an increase of roughly 2,173% over the average daily volume of 1,056 put options. This brings the total number of put options available up to 3,012.

MongoDB's most recent earnings report was released on Tuesday, December 6. The company reported earnings per share (EPS) of ($1.23) for the quarter, $0.25 better than the consensus estimate of ($1.48). Revenue for the quarter came in at $333.62 million, well above the $302.39 million analysts had anticipated. MongoDB's profit margin was -30.73% and its return on equity was -52.50%, both below their respective industry averages. Market research analysts project that MongoDB will post a loss of $4.65 per share for the current year.
MDB has also been the subject of several recent analyst reports. Goldman Sachs Group cut its price target on MongoDB from $380.00 to $325.00 in a research note distributed on Wednesday, December 7, while maintaining a "buy" rating. Needham & Company LLC reiterated a "buy" rating and raised its price target from $225.00 to $240.00 in a research report released on December 21. Morgan Stanley raised its price target from $215.00 to $230.00 and designated the stock "equal weight" in a research note published Wednesday, December 7. UBS Group reiterated its "buy" rating and raised its price target from $200.00 to $215.00 on December 7. Finally, JMP Securities upgraded MongoDB from "underperform" to "outperform" and set a $215.00 price target in a research report released on December 7.

Twenty analysts have recommended buying the stock, while four have recommended holding existing positions. According to Bloomberg.com, the overall consensus rating for MongoDB is "Moderate Buy," with an average price target of $269.83 per share.

Shares of MDB opened at $213.59 on Thursday. The company has a price-to-earnings ratio of -39.77, a beta of 0.94, and a market capitalization of $14.80 billion. Its current ratio, quick ratio, and debt-to-equity ratio are all reported as 1.66. The stock's 50-day moving average price is $204.14 and its 200-day moving average price is $214.95. Over the previous year, MongoDB has traded in a 52-week range of $135.15 to $471.96.

In related news, company insider Thomas Bull sold 399 shares of MongoDB stock on January 3. The shares were sold at an average price of $199.31, for a total transaction of $79,524.69. Following the completion of the sale, the insider now directly owns 16,203 shares of the company's stock, valued at approximately $3,229,419.93. The transaction is disclosed in a legal filing with the Securities and Exchange Commission (SEC).

Also on January 3, MongoDB executive Cedric Pech sold 328 shares. The shares were sold at an average price of $199.31 per share, for a total of $65,373.68. Following the completion of the sale, the executive now owns 33,829 shares of the company, valued at $6,742,457.99. That sale is likewise disclosed in an SEC filing.

Over the past three months, company insiders have sold 58,074 shares of stock, for total proceeds of $11,604,647. Insiders hold 5.70% of the company's shares.

Several institutional investors have recently modified their holdings in the company. Bessemer Group, Inc. bought a new position in MongoDB during the fourth quarter worth approximately $29,000. BI Asset Management Fondsmaeglerselskab A/S bought a new position in MongoDB during the fourth quarter worth approximately $30,000. Sentry Investment Management LLC bought a new stake in MongoDB during the third quarter worth approximately $33,000. Lindbrook Capital LLC grew its stake in MongoDB by 350% during the fourth quarter; after purchasing an additional 133 shares during the period, it now directly owns 171 shares of the company's stock, worth $34,000. Finally, First Horizon Advisors, Inc. boosted its stake in MongoDB by 510.3% during the second quarter; after purchasing an additional 148 shares, it now directly owns 177 shares, worth $44,000. Institutional investors and hedge funds hold 84.86% of the company's shares outstanding.

Article originally posted on mongodb google news. Visit mongodb google news



Transcend Therapeutics, a mental health-focused biotechnology company … – MarketScreener

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

NEW YORK – Transcend Therapeutics, a biotechnology company that develops medicines to treat neuropsychiatric diseases, including post-traumatic stress disorder (PTSD), today announced that it closed its Series A funding round of $40 million.

Led by Alpha Wave Global and Integrated Investment Partners, this funding round will enable Transcend to launch multiple clinical trials, including a phase II study, with its next-generation psychoactive compound. Transcend has raised nearly $42 million to date.

Transcend Therapeutics, which was founded in 2021 and incubated by leading early-stage venture studio AlleyCorp, develops next-generation psychoactive drugs aiming to benefit the more than 50 million Americans who suffer from neuropsychiatric diseases. While various psychedelics have demonstrated promise for the treatment of PTSD, Transcend focuses on a psychoactive compound that may be more accessible for the tens of millions of patients in need. Its lead compound, methylone (TSND-201), has short-acting and mild psychological effects, and thus likely requires less clinician time than other psychedelic compounds. Methylone has the potential to be used as an adjunctive treatment to existing pharmacotherapies (e.g. SSRIs), making it better suited to integrate into the existing psychiatric paradigm and healthcare infrastructure.

The Transcend leadership team has extensive experience across the full drug development lifecycle, leading to pivotal contributions on 13 FDA-approved drugs totaling $7 billion in M&A and public company value. The leadership team includes: Kevin Ryan, Co-Founder and Chairman of Transcend, and founder and CEO of AlleyCorp, where he was co-founder and chairman of MongoDB, Business Insider, Gilt Groupe, Zola, and also previously served as CEO of DoubleClick; Blake Mandell, Co-Founder and CEO of Transcend, has spent most of his career working in venture capital – leading AlleyCorp’s frontier technology practice, where he helped incubate Pearl Health – and prior to that was a consultant at BCG; Ben Kelmendi, MD, Co-Founder and Chief Scientific Advisor at Transcend, co-director of the Yale Program for Psychedelic Science and the first scientist to receive federal funding for clinical psychedelic research in more than 50 years; Martin Stogniew, PhD, Chief Development Officer at Transcend, who holds 40 patents, has led development at six biotech startups leading to nine New Drug Applications and five biopharma exits valued at more than $3.5 billion and Amanda Jones, PharmD, Senior Vice President of Clinical Development at Transcend, who most recently led clinical development at Axsome Therapeutics, where she led two compounds from IND to NDA, including the first and only oral NMDA receptor antagonist approved for treating major depressive disorder in adults (Auvelity).

‘Mental health diseases are one of the leading causes of disability in the United States and globally, yet available treatments are ineffective for many patients, can take weeks to kick in, and frequently have chronic side effects. At Transcend, we’re working to change that, starting by bringing a next-generation compound, methylone, to market as a potential rapid-acting, disease-modifying, non-hallucinogenic treatment for neuropsychiatric conditions like PTSD,’ Transcend Therapeutics Co-Founder and CEO Blake Mandell said. ‘In a published clinical case series, methylone has demonstrated robust responses in patients with PTSD. This funding will enable us to more rapidly enter clinical trials, and ultimately make this – and other – life-changing medicine available to those in need.’

‘The advancements being made right now around mental healthcare, and at Transcend in particular, reminds me of the internet in 1996; even the most hopeful people underestimate its impact on the world. The work that Transcend is doing has the potential to completely change how people are treated for PTSD and depression,’ Transcend Co-Founder and Chairman, and AlleyCorp Founder and CEO Kevin Ryan said. ‘We’ve brought together the world’s leaders in drug development and psychiatric research, and together we’re pushing this industry into new, highly promising territory.’

‘We need new and innovative approaches to dealing with the mental health crisis, and Transcend is unique in that it has the team and a well-characterized lead compound to make a meaningful impact in the lives of millions,’ Rick Gerson, Co-Founder, Chairman, and CIO at Alpha Wave Global said. ‘We are thrilled to lead this Series A investment round and confident that this funding will accelerate Transcend’s work to bring methylone to market sooner, while beginning to build out a pipeline of other promising compounds.’

The Series A round was also supported by Global Founders Capital and Emerald Development Managers, among others.

About Transcend Therapeutics

Transcend Therapeutics discovers, develops, and delivers next-generation psychoactive medicines to work toward a world in which people no longer suffer from neuropsychiatric disease. Transcend focuses on developing medicines that are accessible to a larger percentage of patients in need, specifically the tens of millions of patients who already take psychotropic medication. Transcend Therapeutics already has real-world data for its lead compound, TSND-201, demonstrating robust responses in patients with PTSD. As a Public Benefit Corporation, Transcend has pledged 10% of its founding shares toward nonprofits focused on scientific research and patient access.

About AlleyCorp

Founded by serial entrepreneur Kevin Ryan, AlleyCorp originates ideas, hires early teams, funds, launches, and grows each company, and maintains an integral leadership role from beginning through exit. On the incubation side, AlleyCorp-founded companies include MongoDB (NASDAQ: MDB), Business Insider, Gilt Groupe, Zola, Nomad Health, and more. AlleyCorp’s Healthcare Fund is one of the most active early-stage venture funds and incubators in New York dedicated to healthcare.

About Alpha Wave Global

Alpha Wave is a global investment company with offices in New York, Miami, London, Abu Dhabi, Tel Aviv, Bangalore, Jakarta, and Sydney. Its flagship global venture and growth fund, Alpha Wave Ventures, aims to invest in best-in-class venture and growth-stage companies and endeavors to be helpful long-term partners to the founders and management teams. Alpha Wave manages a variety of investment partnerships that cover several asset classes, themes, and geographies.

About Integrated Investment Partners

Integrated is a venture fund partnering with value-aligned companies transforming the health and wellbeing of our communities around the globe. Its intention is to redefine healthcare through the innovative advancement of next-generation solutions as well as disrupt legacy healthcare institutions that have not kept pace with our changing world. Integrated is committed to supporting sustainable change and donates a significant percentage of profits to Reconsider, a sister non-profit organization. Reconsider is a center for transformation that is building a bridge to ensure transformative therapies become more accessible to our communities.

Contact:

Email: info@transcendtherapeutics.com

(C) 2023 Electronic News Publishing, source ENP Newswire

Article originally posted on mongodb google news. Visit mongodb google news



Do MongoDB Inc.’s (NASDAQ:MDB) Prospects Look Stable? – Stocks Register

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB Inc. (NASDAQ:MDB) fell 0.06% below its previous day's close on Thursday, February 23, as selling pressure pushed the stock's value down to $213.46.

Looking at the stock's price movement, the close in the last trading session was $213.59, with shares moving within a range of $205.88 to $219.99. The beta value (5-year monthly) was 0.94. Turning to its 52-week performance, $471.96 and $135.15 were the 52-week high and 52-week low respectively. Overall, MDB moved 6.79% over the past month.


MongoDB Inc.’s market cap currently stands at around $15.08 billion, with investors looking forward to this quarter’s earnings report slated for Mar 08, 2023. Analysts project the company’s earnings per share (EPS) to be $0.07, which has seen fiscal year 2023 EPS growth forecast to increase to $0.3 and about $0.6 for fiscal year 2024. Per the data, EPS growth is expected to be 150.80% for 2023 and 100.00% for the next financial year.

Analysts have a consensus estimate of $337.45 million for the company’s revenue for the quarter, with a low and high estimate of $335.17 million and $351.61 million respectively. The average forecast suggests up to a 26.60% growth in sales growth compared to quarterly growth in the same period last fiscal year. Wall Street analysts have also projected the company’s year-on-year revenue for 2023 to grow to $1.27 billion, representing a 45.50% jump on that reported in the last financial year.

Estimate revisions can be used as a tool to gain insight into short-term price movement; over the past seven days, the company has seen no upward and no downward revisions. Turning to the stock's technical picture, short-term indicators on average suggest MDB is a Hold. Medium-term indicators also put the stock at a Hold on average, while long-term indicators place the stock in the 50% Sell category.

28 analyst(s) have given their forecast ratings for the stock on a scale of 1.00-5.00 for a strong buy to strong sell recommendation. A total of 6 analyst(s) rate the stock as a Hold, 19 recommend MDB as a Buy and 3 give it an Overweight rating. Meanwhile, 0 analyst(s) rate the stock as Underweight and 0 say it is a Sell. As such, the average rating for the stock is Overweight which could provide an opportunity for investors keen on increasing their holdings of the company’s stock.

MDB’s current price about -2.51% and 4.50% off the 20-day and 50-day simple moving averages respectively. The Relative Strength Index (RSI, 14) currently prints 50.41, while 7-day volatility ratio is 5.30% and 6.21% in the 30-day chart. Further, MongoDB Inc. (MDB) has a beta value of 0.99, and an average true range (ATR) of 13.01. Analysts have given the company’s stock an average 52-week price target of $254.26, forecast between a low of $205.00 and high of $325.00. Looking at the price targets, the low is 3.96% off current price level while to achieve the yearly target high, price needs to move -52.25%. Nonetheless, investors will most likely welcome a -12.43% jump to $240.00 which is the analysts’ median price.

In the market, a comparison of MongoDB Inc. (MDB) and its peers suggests the former has performed considerably worse. Data shows MDB's intraday price has changed -0.06% in the last session and -44.18% over the past year. Comparatively, Progress Software Corporation (PRGS) has moved -0.31% on the day and 31.51% in the past 12 months. Looking at another peer, Pixelworks Inc. (PXLW) has dipped -1.82% on the day and is -47.06% off its price a year ago. Elsewhere, the S&P 500 and Dow Jones Industrial Average are up 0.53% and 0.33% respectively in the last trading session.

If we refocus on MongoDB Inc. (NASDAQ:MDB), historical trading data shows that trading volumes averaged 1.3 million over the past 10 days and 1.94 million over the past 3 months. The company’s latest data on shares outstanding shows there are 68.92 million shares.

Some 2.60% of MongoDB Inc.'s shares are in the hands of company insiders, while institutional holders own 92.40% of the company's shares. Also important is the data on short interest, which shows that shares short stood at 4.5 million as of Jan 12, 2023, giving a short ratio of 2.89. That equates to 6.50% of shares outstanding, with shares short falling from the 4.76 million registered on Dec 14, 2022. The current price change has pushed the stock 8.44% higher YTD, which shows the potential for further growth is there. It is this reason that could see investor optimism for the MDB stock continue to rise going into the next quarter.

Article originally posted on mongodb google news. Visit mongodb google news



Point72 Hong Kong Ltd sells a portion of its stock in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Point72 Hong Kong Ltd reduced its holdings in MongoDB, Inc. (NASDAQ: MDB) by 20.7% during the third quarter, according to its most recent filing with the Securities and Exchange Commission. After selling 5,884 shares of company stock during the period, the firm ended the quarter with 22,494 shares, valued at $4,466,000 according to the filing.

During the past few months, several institutional investors have modified the interests that they currently hold in the company.

During the third quarter, Sentry Investment Management LLC began a new investment in MongoDB, estimated to be worth around $33,000.

Alta Advisors Ltd made a new investment in MongoDB during the third quarter worth approximately $40,000. First Horizon Advisors Inc. increased its MongoDB position by 510.3% over the second quarter, purchasing an additional 148 shares; it now holds 177 shares, a position worth about $44,000.

Huntington National Bank increased its MongoDB ownership by a striking 1,468.8% during the third quarter, buying an additional 235 shares; it now holds 251 shares, worth roughly $50,000.

And finally, Quadrant Capital Group LLC grew its MongoDB stake by 37.8% during the second quarter, purchasing an additional 115 shares to bring its holding to 419 shares, currently worth about $109,000. In total, institutional investors and hedge funds hold 84.86% of the company’s stock.

Several equity analysts have weighed in on the stock.

In a research note published on Thursday, December 15, Tigress Financial cut its price target on MongoDB from $575.00 to $365.00 while remaining positive on the company and maintaining its “buy” recommendation.

Morgan Stanley raised its price target on MongoDB from $215.00 to $230.00 in a research report released on Wednesday, December 7, while keeping an “equal weight” rating on the shares.

In a research report made available on Wednesday, December 7, Robert W. Baird raised its price objective on MongoDB from $205.00 to $230.00.

KeyCorp lifted its price objective on MongoDB from $220.00 to $255.00 in a research report published on Monday, February 6, and designated the stock “overweight.” Finally, on February 17, Sanford C. Bernstein initiated coverage of MongoDB shares in a report, giving the stock an “outperform” rating and a price target of $282.00.

Twenty analysts have recommended buying the shares, while four recommend holding existing positions. According to data from Bloomberg.com, the company carries an average rating of “Moderate Buy” and an average price target of $269.83.
In other news, Director Hope F. Cochran sold 1,175 shares of the company’s stock on December 15. The shares sold for a total of $245,163.75, an average price of $208.65 per share. Following the transaction, the director directly owns 7,674 shares, valued at $1,601,180.10. The sale was disclosed in a filing with the SEC, which is available on the SEC’s website.


Additionally, on January 3, Chief Technology Officer Mark Porter sold 635 shares of the company’s stock for a total of $119,202.20, an average price of $187.72 per share. Following the transaction, the chief technology officer directly owns 27,577 shares, worth approximately $5,176,754.44 at the current stock price. Disclosures related to the sale are available on the SEC website.

Over the preceding three months, company insiders sold 58,074 shares for a total of $11,604,647. Insiders currently own 5.70% of the shares outstanding.

Shares of NASDAQ: MDB opened at $217.25 on Thursday. MongoDB, Inc. has a one-year high of $471.96 and a one-year low of $135.15.

The company has a current ratio and quick ratio of 4.10, both measures of liquidity, and a debt-to-equity ratio of 1.66, a measure of leverage.

The company has a price-to-earnings ratio of -39.77, a market capitalization of $15.05 billion, and a beta of 0.94. The price-to-earnings ratio measures a stock’s price relative to the company’s earnings per share; the negative value here reflects MongoDB’s net loss.

The company has seen a simple moving average of $204.14 over the past 50 days, and the company has seen a simple moving average of $214.95 over the last 200 days.
MongoDB’s most recent quarterly earnings report was released on Tuesday, December 6, and the report has now been made available to the general public.

The company reported a loss of $1.23 per share for the quarter, $0.25 better than the consensus estimate of a $1.48 loss. Both profitability measures were in the red: return on equity was -52.50%, and net margin was -30.73%.

The revenue for the quarter came in at $333.62 million, which was significantly higher than the $302.39 million that analysts had anticipated the revenue would be.

Financial analysts forecast that MongoDB, Inc. will report earnings per share of -$4.65 for the full fiscal year.

MongoDB, Inc. develops and markets a general-purpose database platform. Its products include MongoDB Community Server, MongoDB Atlas, and MongoDB Enterprise Advanced, and it also offers consulting and training as part of its professional services. The company was founded by Dwight Merriman and Eliot Horowitz.

Article originally posted on mongodb google news. Visit mongodb google news



MongoDB (MDB) Stock Sinks As Market Gains: What You Should Know – Yahoo Finance

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

In the latest trading session, MongoDB (MDB) closed at $213.46, marking a -0.06% move from the previous day. This change lagged the S&P 500’s daily gain of 0.53%. Meanwhile, the Dow gained 0.33%, and the Nasdaq, a tech-heavy index, lost 5.2%.

Coming into today, shares of the database platform had gained 9.45% in the past month. In that same time, the Computer and Technology sector gained 0.96%, while the S&P 500 gained 0.67%.

Wall Street will be looking for positivity from MongoDB as it approaches its next earnings report date. This is expected to be March 8, 2023. On that day, MongoDB is projected to report earnings of $0.07 per share, which would represent year-over-year growth of 177.78%. Meanwhile, our latest consensus estimate is calling for revenue of $335.84 million, up 26.02% from the prior-year quarter.

It is also important to note the recent changes to analyst estimates for MongoDB. These revisions typically reflect the latest short-term business trends, which can change frequently. With this in mind, we can consider positive estimate revisions a sign of optimism about the company’s business outlook.

Based on our research, we believe these estimate revisions are directly related to near-term stock moves. We developed the Zacks Rank to capitalize on this phenomenon. Our system takes these estimate changes into account and delivers a clear, actionable rating model.

Ranging from #1 (Strong Buy) to #5 (Strong Sell), the Zacks Rank system has a proven, outside-audited track record of outperformance, with #1 stocks returning an average of +25% annually since 1988. The Zacks Consensus EPS estimate remained stagnant within the past month. MongoDB is holding a Zacks Rank of #2 (Buy) right now.

Investors should also note MongoDB’s current valuation metrics, including its Forward P/E ratio of 337.03. For comparison, its industry has an average Forward P/E of 41.79, which means MongoDB is trading at a premium to the group.

The Internet – Software industry is part of the Computer and Technology sector. This group has a Zacks Industry Rank of 69, putting it in the top 28% of all 250+ industries.

The Zacks Industry Rank orders industries from best to worst based on the average Zacks Rank of the individual companies within each industry. Our research shows that the top 50% of ranked industries outperform the bottom half by a factor of 2 to 1.

Make sure to utilize Zacks.com to follow all of these stock-moving metrics, and more, in the coming trading sessions.

Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report

MongoDB, Inc. (MDB) : Free Stock Analysis Report

To read this article on Zacks.com click here.

Zacks Investment Research

Article originally posted on mongodb google news. Visit mongodb google news



CloudNativeSecurityCon 2023: SBOMs, VEX, and Kubernetes

MMS Founder
MMS Mostafa Radwan

Article originally posted on InfoQ. Visit InfoQ

At CloudNativeSecurityCon 2023 in Seattle, WA, Kiran Kamity, founder and CEO of Deepfactor, led a panel discussion on software supply chain security, the practical side of SBOMs, and VEX.

Kamity started the talk by underscoring how some organizations are rushing to create SBOMs because they have until the beginning of June of this year to comply with the US executive order on cybersecurity.

He mentioned that the goal of this panel discussion is to bring together experts on the operational aspects of cybersecurity to answer questions such as how to create SBOMs, how to store them, and what to do with them.

After introducing the speakers, Kamity started the discussion by asking the panel what is an SBOM and why people should care about it.

Allan Friedman, Senior Advisor at CISA, described SBOMs as a treatment for the dependency problem that helps us achieve more transparency. He pointed out:

Transparency in the software supply chain is going to be a key part of how we make progress in thinking about software.

He mentioned that the log4j vulnerabilities crisis of December 2021 won’t be a one-off event and SBOMs can help developers build secure software, IT leaders select software, and end-users react faster.

Furthermore, he indicated that there are plenty of tools today, both proprietary and open source, to generate SBOMs including the two formats SPDX, and CycloneDX.
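As a rough illustration of what those formats contain (the component below is hypothetical and hand-built purely for the example; real SBOMs are produced by tooling), a minimal CycloneDX-style document might be sketched in Python like this:

```python
import json

# A minimal CycloneDX-style SBOM sketch listing one (hypothetical) dependency.
sbom = {
    "bomFormat": "CycloneDX",   # identifies the SBOM format
    "specVersion": "1.4",       # CycloneDX specification version
    "version": 1,               # revision of this particular BOM
    "components": [
        {
            "type": "library",
            "name": "log4j-core",
            "version": "2.14.1",
            # A package URL (purl) uniquely identifies the component,
            # which is what lets scanners match it against CVE data.
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
        }
    ],
}

print(json.dumps(sbom, indent=2))
```

An inventory like this, one entry per dependency, is what makes it possible to answer "are we shipping log4j 2.14.1?" without rebuilding the application.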

Friedman ended by referring to the Vulnerability Exploitability eXchange (VEX), which will allow organizations to assess the exploitability of vulnerabilities so they can prioritize and focus on those that matter to them.

InfoQ sat with Chris Aniszczyk, CTO of CNCF, at CloudNativeSecurityCon 2023 and talked about the event and the relevance of SBOMs.

 I love SBOMs. It is funny that we have been excited about SBOMs for a decade at the Linux foundation and now they’re everywhere. We’ve been prototyping with SBOMs for some projects in the CNCF and based on the tool used, it generates a different type of SBOM. Eventually, the tools will converge and generate a similar thing but we are not there yet.

Next, Rose Judge, Senior Open Source Engineer at VMware and maintainer of project Tern, an open source software inspection tool for containers, discussed the storage and distribution of SBOMs.

She mentioned that the focus in the community lately has been more on generating SBOMs and less on storing them.

Also, she pointed out that the considerations for SBOMs storage are no different than other types of cloud native artifacts including lifecycle management, caching, versioning, and access control. However, SBOMs’ association with artifacts is a unique thing.

She ended by underscoring that if you’re a software vendor, sharing your software SBOMs with your customers is a way to establish trust and help them understand their exposure and risk.

Kamity wrapped up the session by asking Andrew Martin, CEO of ControlPlane, how can teams start using SBOMs and what’s the payoff.

Martin pointed out that because there’s no standard way to distribute SBOMs today, end users should ask for or pull SBOMs from their software vendors and scan package manifests with a container vulnerability tool to assess CVEs. He noted that it’s a complex problem and further automation is needed.

Kamity recommended the Graph for Understanding Artifact Composition (GUAC) developed by Google as a guide to better understand how to consume, use, and make sense of SBOMs since it covers the proactive, preventative, and reactive aspects.

A Software Bill of Materials (SBOM) is a list of the components that make up an application, including its dependencies and their relationships, during development and delivery.

The breakout session recording is available on the CNCF YouTube channel.




Royal London Asset Management Ltd. Reduces Its Position in MongoDB, Inc. (NASDAQ:MDB).

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

According to its most recent report to the Securities and Exchange Commission (SEC), Royal London Asset Management Ltd sold part of its stake in MongoDB, Inc. (NASDAQ: MDB) during the third quarter. After disposing of 3,039 shares during the period, the fund held 20,683 shares of the company.

The value of the MongoDB holdings that Royal London Asset Management Ltd was responsible for at the end of the most recent reporting period came to $4,110,000.

Several of the company’s institutional investors have recently rebalanced the portion of the business’s assets held in their portfolios.

abrdn plc increased its MongoDB holdings by 0.6% during the second quarter, purchasing an additional 41 shares; it now directly owns 6,745 shares of the company’s stock, worth a combined $1,750,000.

CWM LLC grew its MongoDB position by 2.6% during the third quarter, purchasing an additional 55 shares; it now directly owns 2,144 shares, worth about $426,000 at the current share price.

During the second quarter, Cetera Advisor Networks LLC increased the size of its holdings in MongoDB by 7.4 percent. Cetera Advisor Networks LLC now owns 860 shares, valued at $223,000, thanks to the recent purchase of 59 additional shares during the most recent fiscal quarter.

Synergy Financial Group Ltd increased its MongoDB holdings by 3.2% during the third quarter, purchasing an additional 64 shares; it now holds 2,089 shares of the company’s stock, valued at $415,000.
Last but not least, Exchange Traded Concepts LLC increased its MongoDB stake by 6.6% during the third quarter, purchasing an additional 82 shares; it now holds 1,320 shares of the company’s stock, with a combined value of $262,000. In total, institutional investors hold 84.86% of the company’s shares.
In other MongoDB news, insider Thomas Bull sold 399 shares of the company’s stock on Tuesday, January 3. The shares sold for a total of $79,524.69, an average price of $199.31 per share. Following the transaction, the insider owns 16,203 shares, valued at $3,229,419. The transaction was disclosed in detail in a document filed with the Securities and Exchange Commission (SEC).

Dev Ittycheria, the company’s chief executive officer, sold 39,382 shares of company stock on Tuesday, January 3, at an average price of $199.96 per share, for a total transaction value of $7,874,824.72. Following the sale, the CEO owns 190,264 shares, worth approximately $38,045,189.44. The sale was disclosed in a document filed with the SEC and made available online.


During the most recent fiscal period, insiders sold 58,074 shares of company stock for a total of $11,604,647. Company insiders currently own 5.70% of the shares outstanding.
Several market watchers and analysts have shared their opinions on the stock.

In a research report released on December 21, Needham & Company LLC raised its price objective on MongoDB from $225.00 to $240.00.

Credit Suisse Group rated MongoDB an “outperform” in a research report published on Wednesday, December 7, while cutting its price target on the shares from $400.00 to $305.00.

In a research note released on January 9, Truist Financial lowered its price objective on MongoDB shares from $300.00 to $235.00.

Robert W. Baird raised its target price on MongoDB from $205.00 to $230.00 in a research report released on Wednesday, December 7. Finally, in a research report published on December 7, Citigroup raised its price target on MongoDB shares from $295.00 to $300.00.

Twenty analysts have issued buy recommendations on the stock, while four recommend holding. According to Bloomberg, the stock carries a consensus rating of “Moderate Buy” and an average price target of $269.83.

Shares of NASDAQ: MDB opened at $210.61 on Wednesday.

The company has a current ratio and quick ratio of 4.10, both measures of liquidity, and a debt-to-equity ratio of 1.66, a measure of leverage.

Over the past 52 weeks, MongoDB, Inc. has traded between a low of $135.15 and a high of $471.96.

The company’s simple moving average over the past 50 days is $204.08, and its simple moving average over the past 200 days is $214.95.
On Tuesday, December 6, MongoDB (NASDAQ: MDB) made its most recent quarterly results report available to the public.

The company reported a loss of $1.23 per share for the quarter, $0.25 better than the average estimate of a $1.48 loss.

The actual revenue for the quarter was $333.62 million, which was significantly higher than the average estimate of $302.39 million for the revenue generated during the quarter.

Both profitability measures were in the red: return on equity was -52.50%, and net margin was -30.73%.

Analysts who specialize in market research predicted that MongoDB, INC would incur a loss of $4.65 per share during the current financial year.

MongoDB, Inc. develops and markets a general-purpose database platform. Its products include MongoDB Community Server, MongoDB Atlas, and MongoDB Enterprise Advanced, and it also offers consulting and training as part of its professional services. The company was founded by Dwight Merriman and Eliot Horowitz.

Article originally posted on mongodb google news. Visit mongodb google news



How to enable MongoDB for remote access – TechRepublic

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Looking to use your MongoDB server from another machine? If so, you must configure it for remote access.

Code on a monitor.
Image: Maximusdn/Adobe Stock

MongoDB is a powerful and flexible NoSQL server that can be used for many types of modern apps and services. MongoDB is also scalable and can handle massive troves of unstructured data.

SEE: Hiring Kit: Database engineer (TechRepublic Premium)

I’ve outlined how to install MongoDB on both Ubuntu and RHEL-based Linux distributions, but one thing that was left out was how to configure it for remote access.

Note that the installation for RHEL-based distributions has changed to accommodate the latest version of MongoDB. The new installation requires a different repository and installation command. The repository file is created with the command:

sudo nano /etc/yum.repos.d/mongodb-org-6.0.repo

The content for that repository is:

[mongodb-org-6.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/6.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc

Finally, the installation command is:

sudo dnf install mongodb-org mongodb-mongosh -y

Now that you have MongoDB installed and running, you need to configure it for remote access. Why? Because you might want to use the MongoDB server as a centralized location to serve data to other remote machines.

What you’ll need to enable remote access in MongoDB

To enable MongoDB for remote access, you’ll need a running instance of MongoDB and a user with sudo privileges.

How to enable remote access for MongoDB

The first thing we must do is enable authentication. To do that, access the MongoDB console with the command:

mongosh

Change to the built-in MongoDB admin with:

use admin

Create a new admin user with the following:

db.createUser(
  {
    user: "madmin",
    pwd: passwordPrompt(), // or cleartext password
    roles: [
      { role: "userAdminAnyDatabase", db: "admin" },
      { role: "readWriteAnyDatabase", db: "admin" }
    ]
  }
)

You can change madmin to any username you like. You’ll be prompted to create a new password for the user. A word of warning: You only get one chance to type that password, so type it carefully.

Next, open the MongoDB configuration file with:

sudo nano /etc/mongod.conf

Locate the line:

#security:

Change that line to:

security:
    authorization: enabled

Save and close the file.

Restart MongoDB with:

sudo systemctl restart mongod

Now, we can enable remote access. Once again, open the MongoDB configuration file with:

sudo nano /etc/mongod.conf

In that file, locate the following section:

net:
  port: 27017
  bindIp: 127.0.0.1

Change that section to:

net:
  port: 27017
  bindIp: 0.0.0.0

Save and close the file. Restart MongoDB with:

sudo systemctl restart mongod

If you’re using the firewall on your server, you’ll need to open it for port 27017. For example, on Ubuntu-based distributions, that would be:

sudo ufw allow from remote_machine_ip to any port 27017

Reload the firewall with:

sudo ufw reload

Remote access granted

At this point, you should be able to connect to your MongoDB server on port 27017 using the new admin user and password you created above. That’s all there is to enabling MongoDB for remote access, and it makes it possible to use that server as a centralized database platform.
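The connection string itself is left to the reader; as a small sketch (the host address and credentials below are placeholders, and the helper function is our own, not part of MongoDB), you can assemble a URI that mongosh or any driver will accept:

```python
from urllib.parse import quote_plus

def mongo_uri(host: str, user: str, password: str,
              port: int = 27017, auth_db: str = "admin") -> str:
    """Build a MongoDB connection URI, percent-encoding the credentials
    so characters like '/' or '@' in the password don't break parsing."""
    return (f"mongodb://{quote_plus(user)}:{quote_plus(password)}"
            f"@{host}:{port}/?authSource={auth_db}")

# Placeholder host and password; substitute your server's real values.
uri = mongo_uri("203.0.113.10", "madmin", "s3cret/pass")
print(uri)  # pass this string to mongosh or a driver such as pymongo
```

The `authSource=admin` parameter matters because the madmin user was created in the admin database, so that is where the server must look up its credentials.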

Subscribe to TechRepublic’s How To Make Tech Work on YouTube for all the latest tech advice for business pros from Jack Wallen.



The Future of Databases Is Now – Datanami

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

(Immersion Imagery/Shutterstock)

Databases–those workhorses of data management–have come a long way over the past 10 years. Users no longer must accept the tradeoffs that were commonplace in 2013, and the array of features and capabilities in relational and NoSQL databases is growing every month. In some ways, we’ve already arrived at the glorious future data architects envisioned for us way back then. So what’s holding us back?

The biggest change in databases is the cloud. While you could get a DynamoDB instance from AWS as far back as 2012, the cloud was pretty much an afterthought for databases. But thanks to continued investments by database vendors, cloud database services greatly improved, and by 2018, Gartner estimated that managed cloud database services accounted for $10.4 billion of the $46.1 billion DBMS market, or about a 23% share.

One year later, Gartner went out on a limb when it declared that cloud had become the default deployment method for databases. “On-premises is the new legacy,” the analysts declared. By 2020, thanks to the COVID-19 pandemic, cloud migrations had kicked into high gear, and managed cloud deployments accounted for $39.2 billion in revenue by 2022, a whopping 49% of the total $80 billion market, Gartner found.

Today, the cloud is the default mechanism for database deployments. Database vendors work hard to eliminate as much of the complexity as possible from deployments, using containerization technology to create serverless database instances that scale up and down on demand. While data modeling continues to occupy customers’ time, operating and managing the database has practically been eliminated.

A Modern NoSQL Database

Ravi Mayuram, the senior vice president of products and engineering at NoSQL database vendor Couchbase, remembers the bad old days when database administrators (DBAs) would dictate what could and couldn’t be done with the database.

“We need to go to a place where the front- and back-end friction should go away, where more of the operational tasks of the databases are hidden, automated, and made autonomous,” Mayuram says. “All that stuff should basically go away when you’re getting to a point where the database is on tap, so to say. You just have a URL end point, you start writing to it, and it takes care of the rest of the stuff.”

NoSQL databases like Couchbase, Cassandra, and MongoDB emerged in response to limitations in relational databases, specifically the schema rigidity and lack of scale-out capabilities. Developers love the schema flexibility of document databases like Couchbase and MongoDB, which store data in JSON-like formats, while their distributed architectures allow them to scale out to meet growing data needs.
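As a quick sketch of that schema flexibility (the collection and field names here are invented for the example, not taken from any particular product), two differently shaped documents can coexist in the same logical collection:

```python
# Two documents in the same hypothetical "users" collection. No shared
# schema is enforced, which is the flexibility document stores offer.
users = [
    {"_id": 1, "name": "Ada", "email": "ada@example.com"},
    {"_id": 2, "name": "Grace", "logins": 42, "roles": ["admin", "editor"]},
]

# Each document carries its own structure, so application code must
# tolerate fields present in one document and absent in another.
for doc in users:
    print(doc.get("name"), "->", sorted(doc.keys()))
```

The trade-off is that handling optional fields moves from the database schema into application code.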

Since their introduction, many NoSQL vendors have also added multi-modal capabilities, which allows the database to shapeshift and serve different use cases from the same data store, such as search, analytics, time-series, and graph. And most NoSQL vendors have even embraced good old SQL, despite having their own query language optimized for their particular data store.

That flexibility appeals to customers and prospects alike, Mayuram says. “In Couchbase, you can write the data once and I can do a key-value lookup, I can do a relational query on it, I can do a full ACID transaction on it, I can search tokens. I can do analytics,” he says. “It’s more like a smartphone. It’s about five different data services in one place.”

Like Teslas, the differences with modern databases are under the hood  (Alexander Kondratenko/Shutterstock)

While Couchbase delivers some of the same functionality as a relational database, it goes about it in a completely different manner. Newer databases such as Couchbase’s are completely different animals than the relational databases that have roamed the land for the past 40 years. Today’s modern databases are more complex in some ways than the old guard, and it will take some time for enterprises to adjust to the new paradigm, Mayuram says.

“Sometimes you have to go slow to go fast,” he says. “There is going to be an amount of time in which we have to carry sort of both sides, if you will, until we can sort of cut over. That is not an easy task. It’s a generational shift. It’s going to take a little bit of time before your investment that you made in the past has to be transformed to the investment that we make for the future. There is a learning curve as well as an experience curve that you will go through.”

Familiarity will be critical to giving customers a sense of comfort as they slowly swap out the old databases for the newer generation of more-capable databases, Mayuram says.

“You can say there is no difference between Tesla and a regular car because it’s got the same steering wheel, the same tires, the same gas pedal, so what’s the difference?” he says. “What we are losing is our comfort. We just need to go to the next level to tackle the problem. That doesn’t mean you break away completely. You have to have the same SQL available to you. It’s the same steering wheel. Don’t take away the steering wheel. That’s where the comfort lives. Change the gas engine, which is saving all the pollution and, you know, dependency on oil and all that stuff. Change that.”

New Relational DBs

A similar but slightly different journey has taken place in the world of relational databases, which has seen its share of new entries. Vendors like Cockroach Labs, Fauna, and Yugabyte have sought to remake the RDBMS into a scale-out data store that can provide ACID guarantees for a globally distributed cluster. And like their NoSQL brethren, the new generation of relational databases can run in the cloud in a serverless manner.

Yugabyte, for example, has found success by fitting the open source Postgres database into the new distributed and cloud-first world. “Our unique advantage is we don’t enable one feature at a time,” says Karthik Ranganathan, Yugabyte’s founder and CTO. “We enable a class of features at a time.”

By starting with Postgres, YugabyteDB ensures compatibility not only with applications that have already been built for Postgres, but also ensures that the database works with the large ecosystem of Postgres tools, Ranganathan says.

However, unlike plain vanilla Postgres, YugabyteDB is a full-fledged distributed database, providing ACID guarantees for transactions in globally distributed clusters. Not every organization needs that level of capability, but the world’s biggest enterprises certainly do.
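To make that concrete, here is a sketch of what such a transaction looks like. The table and column names are hypothetical, but the syntax is standard Postgres SQL, which is the point: a statement like this runs unchanged against YugabyteDB's Postgres-compatible YSQL API, with the database coordinating atomic commit even when the affected rows live on different nodes of a distributed cluster.

```sql
-- Hypothetical schema: a simple accounts table.
CREATE TABLE accounts (
    id      INT PRIMARY KEY,
    balance NUMERIC NOT NULL
);

-- Standard Postgres transaction syntax. In a distributed deployment,
-- the two rows may be stored on different nodes, but the transfer
-- either commits atomically on both or not at all.
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```

Because the wire protocol and SQL dialect match Postgres, the same statements can be issued from existing Postgres clients and tooling without modification.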

Yugabyte wraps that Postgres compatibility and distributed capability in a cloud-native delivery vehicle, dubbed YugabyteDB Managed, enabling users to scale their database clusters up and down as needed. In addition to scaling out by adding more nodes on the fly, YugabyteDB can also scale vertically.

Yugabyte has brought together all of these features into a single package, and it’s resonating in the market, Ranganathan says.

The cloud is now driving the bulk of database revenue, per Gartner

“You need the availability, resilience, and scale in order to be cloud-native because cloud is, after all, commodity hardware and it’s prone to failures and it’s a bursty environment,” he says. “And all of the features are there and the architectural way of thinking how to build an application [is there]…because they have the ecosystem, the tooling and the feature set. So that marriage has been amazing, and we’re getting incredible pull from companies.”

Many enterprises that would have traditionally looked to the trusted relational database vendors–the Oracles, IBMs, and Microsofts of the world–are looking to open source Postgres to save money. They’re short-listing the Postgres offerings from cloud vendors, such as Amazon Aurora and Amazon RDS, and giving YugabyteDB a try in the process.

YugabyteDB is winning its share of business. Kroger, for example, relies on the database to power its ecommerce shopping cart. Another customer is General Motors, which uses YugabyteDB to manage data collected from 20 million smart vehicles. And Temenos, which is one of the world’s largest banking solutions providers, is also running core transaction processing on YugabyteDB.

Ranganathan admits that some of this success is luck. He certainly couldn’t have foreseen that Postgres would become the world’s most popular database when he and his colleagues started work on the stealth project 10 years ago. But Ranganathan and his colleagues also deserve credit for doing the hard work to create a database that contains the other features enterprises want, which is the resiliency and scale of distributed processing and the ease of use that comes with cloud.

“Sometimes we get pulled into the conversation by the research the customers do and they tell us ‘Can you help us with this?’ So we’re kind of getting the ask handed to us,” Ranganathan says. “It’s still difficult. Don’t get me wrong…But we just really love the place we’re in and where the market is pulling us.”

The times are changing when it comes to databases. Today’s cloud-native databases provide better scalability, more flexibility, and are easier to use than the relational databases of old. For customers looking to modernize their data workhorses and reap the data-driven benefits that come with it, the future has never looked brighter than it does right now.

