Mobile Monitoring Solutions


Java News Roundup: JDK 20 Released, Spring Releases, Quarkus, Helidon, Micronaut, Open Liberty

Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for March 20th, 2023 features news from OpenJDK, JDK 20, JDK 21, Amazon Corretto 20, BellSoft Liberica JDK 20, multiple Spring milestone and point releases, Quarkus 3.0.0.Beta1 and 2.16.5, Helidon 3.2.0, Open Liberty 23.0.0.3-beta, Micronaut 4.0.0-M1, Camel Quarkus 3.0.0-M1, JBang 0.105.1, Failsafe 3.3.1, Maven 3.9.1 and Gradle 8.1-RC1.

OpenJDK

JEP 431, Sequenced Collections, has been promoted from Proposed to Target to Targeted status for JDK 21. This JEP proposes to introduce “a new family of interfaces that represent the concept of a collection whose elements are arranged in a well-defined sequence or ordering, as a structural property of the collection.” The motivation was the lack of a well-defined ordering and a uniform set of operations within the Collections Framework. More details on JEP 431 may be found in this InfoQ news story.
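As a minimal sketch of what the new interfaces provide (method names as described in JEP 431; running it requires a JDK 21 build that includes the feature):

```java
import java.util.ArrayList;
import java.util.List;

// Per JEP 431, List is retrofitted to implement SequencedCollection,
// giving uniform operations at both ends plus a reversed view.
public class SequencedExample {
    public static void main(String[] args) {
        List<String> letters = new ArrayList<>(List.of("b", "c"));
        letters.addFirst("a");                  // defined on SequencedCollection
        letters.addLast("d");
        System.out.println(letters.getFirst()); // a
        System.out.println(letters.getLast());  // d
        System.out.println(letters.reversed()); // [d, c, b, a]
    }
}
```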

JEP 443, Unnamed Patterns and Variables (Preview), was promoted from JEP Draft 8294349 to Candidate status this past week. This preview JEP proposes to “enhance the language with unnamed patterns, which match a record component without stating the component’s name or type, and unnamed variables, which can be initialized but not used.” Both of these are denoted by the underscore character as in r instanceof _(int x, int y) and r instanceof _.
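The following is a minimal, hedged sketch of how this might look in practice; the Point record and the surrounding methods are hypothetical, and the code requires a JDK build with the preview feature enabled:

```java
// Hypothetical record used only for illustration.
record Point(int x, int y) {}

public class UnnamedExample {
    static String describe(Object obj) {
        // Unnamed pattern variable: only x matters, so the y component is matched but never named
        if (obj instanceof Point(int x, int _)) {
            return "x = " + x;
        }
        return "not a point";
    }

    public static void main(String[] args) {
        try {
            Integer.parseInt("not a number");
        } catch (NumberFormatException _) {   // unnamed variable: the exception is required but unused
            System.out.println(describe(new Point(1, 2)));
        }
    }
}
```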

JDK 20

Oracle has released version 20 of the Java programming language and virtual machine, which ships with a final feature set of seven JEPs. More details may be found in this InfoQ news story.

JDK 21

Build 15 of the JDK 21 early-access builds was also made available this past week featuring updates from Build 14 that include fixes to various issues. Further details on this build may be found in the release notes.

For JDK 20 and JDK 21, developers are encouraged to report bugs via the Java Bug Database.

Amazon Corretto

Amazon has released Amazon Corretto 20, their downstream distribution of OpenJDK 20, which is available on Linux, Windows, and macOS. Developers may download this latest version from this site.

Liberica JDK

Similarly, BellSoft has released Liberica JDK 20, their downstream distribution of OpenJDK 20. Developers may download this latest version from this site.

Spring Framework

It was a very busy week over at Spring as the project teams delivered milestone and point releases of Spring Boot, Spring Framework, Spring Data, Spring Integration, Spring Vault, Spring for GraphQL, Spring Authorization Server, Spring HATEOAS and Spring Modulith. Some of these releases address the Common Vulnerabilities and Exposures (CVEs) CVE-2023-20859, CVE-2023-20860 and CVE-2023-20861, discussed below.

The release of Spring Boot 3.0.5 delivers improvements in documentation, dependency upgrades and notable bug fixes such as: the EmbeddedWebServerFactoryCustomizerAutoConfiguration class should not be invoked when the embedded web server is not configured; the @ConfigurationProperties annotation no longer works on mutable Kotlin data classes; and the use of the @EntityScan annotation causes an AOT instance supplier code generation error. More details on this release may be found in the release notes.

Similarly, the release of Spring Boot 2.7.10 ships with improvements in documentation, dependency upgrades and notable bug fixes such as: loading an application.yml file fails with a NoSuchMethodError exception when using SnakeYAML 2.0; an instance of the StandardConfigDataResource class can import the same file twice if the classpath includes the '.' character; and the Maven plugin uses timezone-local timestamps when the project.build.outputTimestamp property is used. Further details on this release may be found in the release notes.

The second release candidate of Spring Boot 3.1.0 provides new features such as: a new method, withSanitizedValue(), in the SanitizableData class that returns a new instance with a sanitized value; support for auto-configuration of GraphQL pagination and sorting; and support for Spring Authorization Server. More details on this release may be found in the release notes.

Versions 6.0.7 and 5.3.26 of Spring Framework have been released to primarily address the aforementioned CVE-2023-20860 and CVE-2023-20861 vulnerabilities. Both versions also deliver new features such as: improved diagnostics in SpEL for the matches operator and repeated text; updates to the HandlerMappingIntrospector class; and allow SnakeYaml 2.0 runtime compatibility. Further details on these releases may be found in the release notes for version 6.0.7 and version 5.3.26.

The release of Spring Framework 5.2.23 also addresses the CVE-2023-20861 vulnerability and provides the same new SpEL features as Spring Framework 5.3.26. More details on this release may be found in the release notes.

Versions 2023.0-M1, codenamed Ullman, 2022.0.4 and 2021.2.10 of Spring Data have been released this past week. The service releases include bug fixes and improvements in documentation, and may be consumed in Spring Boot 3.0.5 and 2.7.10, respectively. New features in the milestone release include: a new scroll API to support offset and key-based pagination; improvements in JPA query parsing for HQL and JPQL; support for explicit field level encryption in MongoDB; and aggregate reference request parameters in Spring Data REST. Further details on the milestone release may be found in the release notes.
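To make the new scroll API more concrete, the sketch below shows how offset-based scrolling might look. The Person entity and repository are hypothetical, and while Window and ScrollPosition are the types introduced in the 2023.0 milestone, the exact factory methods and signatures may differ from this sketch.

```java
import org.springframework.data.domain.ScrollPosition;
import org.springframework.data.domain.Window;
import org.springframework.data.repository.Repository;

// Hypothetical aggregate and repository, used only for illustration.
record Person(Long id, String firstname, String lastname) {}

interface PersonRepository extends Repository<Person, Long> {

    // Derived query returning a Window that can be scrolled offset- or keyset-based
    Window<Person> findFirst10ByLastnameOrderByFirstname(String lastname, ScrollPosition position);
}

class ScrollExample {
    void scrollAll(PersonRepository repository) {
        // Start at the beginning using offset scrolling, then keep asking for the next window
        Window<Person> window = repository.findFirst10ByLastnameOrderByFirstname("Doe", ScrollPosition.offset());
        while (!window.isEmpty()) {
            window.forEach(person -> System.out.println(person));
            if (!window.hasNext()) {
                break;
            }
            window = repository.findFirst10ByLastnameOrderByFirstname("Doe", window.positionAt(window.size() - 1));
        }
    }
}
```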

Versions 6.1.0-M2, 6.0.4 and 5.5.17 of Spring Integration have been released featuring notable changes such as: improvements in the LockRegistryLeaderInitiator class such that calling a target lock provider is delayed if the current thread has been interrupted; improvements to the AbstractRemoteFileStreamingMessageSource class for remote calls; and a fix for the relationship between the code coverage tools, Sonar and JaCoCo. More details on these releases may be found in the release notes for version 6.1.0-M2, version 6.0.4 and version 5.5.17.

Versions 3.0.2 and 2.3.3 of Spring Vault have been released to address the aforementioned CVE-2023-20859 vulnerability and to deliver new features such as: refined logging after a token revocation failure; the ability to reuse library-specific configuration code in the ClientHttpRequestFactoryFactory and ClientHttpConnectorFactory classes; and AWS IAM authentication added to the EnvironmentVaultConfiguration class. Further details on these releases may be found in the release notes for version 3.0.2 and version 2.3.3.

The first milestone release of Spring for GraphQL 1.2.0 delivers new features such as: support for pagination return values and pagination requests in methods annotated with the @SchemaMapping annotation; support for custom instances of the HandlerMethodArgumentResolver interface; and a dependency upgrade to GraphQL Java 20.0. More details on this release may be found in the release notes.

Versions 1.1.3 and 1.0.4 of Spring for GraphQL have been released with new features such as: the ability to access request attributes and cookies in the WebGraphQlInterceptor interface; and a fix in which an instance of the ContextDataFetcherDecorator class ignored subscriptions when their name had changed. These releases will also be consumed in Spring Boot 3.0.5 and 2.7.10, respectively. Further details on these releases may be found in the release notes for version 1.1.3 and version 1.0.4.

The second milestone release of Spring Authorization Server 1.1.0 ships with bug fixes, dependency upgrades and new features: an implementation of RFC 8628, OAuth 2.0 Device Authorization Grant; and enable the upgradeEncoding() method defined in the PasswordEncoder interface for OAuth2 client secrets. More details on this release may be found in the release notes.

Versions 2.1-M1, 2.0.3 and 1.5.4 of Spring HATEOAS have been released this past week. The service releases include improvements in documentation and dependency upgrades. The milestone release features: support for property metadata on forms using the @Size annotation as defined in JSR-303, Bean Validation; and a new SlicedModel class, a simplified version of the PagedModel class, to navigate slices without calculating a total. Further details on these releases may be found in the release notes for version 2.1-M1, version 2.0.3 and version 1.5.4.

The release of Spring Modulith 0.5.1 provides a significant bug fix in which the spring-modulith-runtime module accidentally contained a Logback configuration file that was only intended for test usage. There was also a dependency upgrade to Spring Boot 3.0.5. More details on this release may be found in the release notes.

The Spring Data JPA team has introduced HQL and JPQL query parsers for developers to more easily customize queries in Spring Data JPA applications in conjunction with the @Query annotation.
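As a hedged illustration of where these parsers come into play (the Employee entity, repository and query below are hypothetical), a JPQL statement declared via @Query can now be parsed by Spring Data itself, which is what enables features such as applying a dynamic Sort to an annotated query:

```java
import java.math.BigDecimal;
import java.util.List;

import jakarta.persistence.Entity;
import jakarta.persistence.Id;

import org.springframework.data.domain.Sort;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

// Hypothetical entity, kept deliberately small for illustration.
@Entity
class Employee {
    @Id Long id;
    String name;
    BigDecimal salary;
}

public interface EmployeeRepository extends JpaRepository<Employee, Long> {

    // The JPQL string is what the HQL/JPQL parsers analyze, allowing Spring Data to
    // append ordering (from the Sort parameter) without string concatenation tricks.
    @Query("select e from Employee e where e.salary > :salary")
    List<Employee> findHighEarners(@Param("salary") BigDecimal salary, Sort sort);
}
```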

Quarkus

The first beta release of Quarkus 3.0.0 features support for a management interface that exposes selected routes, i.e., management routes, to a different HTTP server that avoids exposing these routes on the main HTTP server, which could lead to leaks and undesired access to these endpoints. Further details on this release may be found in the changelog.

Quarkus 2.16.5.Final, the fifth maintenance release, ships with notable changes such as: filtering out a RESTEasy-related warning when executing the test class, ProviderConfigInjectionWarningsTest; a fix for a NullPointerException upon loading workspace modules; and preventing server-sent events from the MessageBodyWriter from potentially writing an accumulation of headers. More details on this release may be found in the changelog.

Helidon

Oracle has released Helidon 3.2.0 that ships with changes such as: a fix for the overloaded create() methods defined in the WriteableMultiPart class; a fix for erroneous behavior when closing a database connection within the JtaConnection class; and a dependency upgrade to SnakeYAML 2.0. It is important to note that there are breaking changes in SnakeYAML 2.0, so a Helidon application may be impacted if SnakeYAML is used directly. It is still possible, however, to upgrade an application to Helidon 3.2.0 with a downgraded SnakeYAML 1.33. Further details on this release may be found in the release notes.

Open Liberty

IBM has released Open Liberty 23.0.0.3-beta featuring support for JDK 20, Jakarta EE 10 Platform and MicroProfile 6.0.

Micronaut

The Micronaut Foundation has provided the first milestone release of Micronaut Framework 4.0.0 featuring: experimental support for Kotlin Symbol Processing; support for virtual threads; improved error messages for missing beans; and support for filter methods.

Apache Software Foundation

As disclosed by the Apache Tomcat team, CVE-2023-28708 is a vulnerability in which, when using the RemoteIpFilter class with requests received from a reverse proxy via HTTP that include the X-Forwarded-Proto header set to https, session cookies created by Tomcat did not include the secure attribute. This vulnerability could result in an attacker transmitting a session cookie over an insecure channel. Tomcat versions affected by this vulnerability include: 11.0.0-M1 to 11.0.0-M2; 10.1.0-M1 to 10.1.5; 9.0.0-M1 to 9.0.71; and 8.5.0 to 8.5.85.

The first milestone release of Camel Quarkus 3.0.0, containing Quarkus 3.0.0.Alpha5 and Camel 4.0.0-M2, is the first Camel Quarkus release featuring a baseline of JDK 17 and Jakarta EE 10. Other notable changes include: deprecation of the ReflectiveClassBuildItem class; a fix for the exception thrown using the PerfRegressionIT class while testing with Camel 4 and Quarkus 3; and a split of Infinispan testing into separate modules for the Quarkus- and Camel-managed clients. More details on this release may be found in the release notes.

JBang

Versions 0.105.1 and 0.105.2 of JBang deliver notable changes such as: an improved jbang edit command that assumes one of the supported JBang IDE plugins is installed; continued improvements for using the modulepath over the classpath; a new jbang export jlink option that allows developers to export a JBang application or script with an embedded Java runtime; and a fix for the Apple Silicon VSCodium download.

Failsafe

Failsafe, a lightweight, zero-dependency library for handling failures in Java 8+, has released version 3.3.1 featuring API changes such as: the addition of full Java module descriptors to the Failsafe JARs; and the release of execution references inside instances of the CompletableFuture class provided by Failsafe. Further details on this release may be found in the changelog.

Maven

Maven 3.9.1 has been released with improvements such as: an improved “missing dependency” error message; a performance enhancement that replaces non-regular-expression patterns passed to the replaceAll() method with the replace() method or uses precompiled patterns; and deprecation of the Mojo plugin parameter expression, ${localRepository}, because an instance of the ArtifactFactory interface injected by ${localRepository} is not compatible with the Maven Resolver interface, LocalRepositoryManager, due to a lack of context.
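As a small illustration of the replaceAll()-versus-replace() point (the class below is purely illustrative): String.replaceAll() treats its first argument as a regular expression and compiles it on every call, while String.replace() performs a literal substitution, and a precompiled Pattern avoids repeated compilation when a real regular expression is needed.

```java
import java.util.regex.Pattern;

class ReplaceExample {
    private static final Pattern DOT = Pattern.compile("\\.");

    static String withRegexEachCall(String path) {
        return path.replaceAll("\\.", "/");   // compiles a Pattern on every invocation
    }

    static String withLiteralReplace(String path) {
        return path.replace(".", "/");        // literal replacement, no regex machinery
    }

    static String withPrecompiledPattern(String path) {
        return DOT.matcher(path).replaceAll("/");  // regex kept, compiled only once
    }
}
```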

Gradle

The first release candidate of Gradle 8.1 delivers: continued improvements in the configuration cache, now considered stable; continued improvements in the Kotlin DSL, an alternative to the Groovy DSL, that includes an experimental simple property assignment in Kotlin DSL scripts; and support for JDK 20. More details on this release may be found in the release notes.



Podcast: InfoQ Software Architecture & Design Trends 2023

Thomas Betts, Daniel Bryant, Vasco Veloso, Eran Stiller, Tanmay Deshpande

Article originally posted on InfoQ. Visit InfoQ

Transcript

Introduction [00:17]

Thomas Betts: Hello and welcome to The InfoQ Podcast. I’m Thomas Betts, and today, members of the InfoQ editorial staff and I will be discussing the current trends in software architecture and design. I’ll let each of them introduce themselves and we’ll start with Eran Stiller.

Eran Stiller: Hi, so everyone, my name is Eran Stiller. I’m the Principal Software Architect for Luxury Escapes. I’m also an editor on architecture and design in InfoQ, I’ve been doing it for several years now and I’m enjoying every minute. I can’t wait for this discussion, which is always very interesting.

Thomas Betts: Great. We’ll move over to Vasco.

Vasco Veloso: Hello, thank you for listening. My name is Vasco Veloso. I am currently working for ING Netherlands on a role connected to software design and architecture with a few interests along the way, but an all-encompassing and generic role at the moment.

Thomas Betts: I think we can all say that. Moving on to Tanmay.

Tanmay Deshpande: Hello, everyone. This is my very first time on this podcast. I’ve been working as Operations Data Platform Architect for SLB. My main work goes towards architecting systems for production optimization and well construction in the oil and gas domain, using the latest technology while also making sure that we are reducing carbon or using carbon-neutral technologies to do so. Pretty excited to be here.

Thomas Betts: We’ll wrap up with Daniel Bryant.

Daniel Bryant: Hey, everyone. Good to see everyone again. Daniel Bryant here. I run DevRel at Ambassador Labs, but a long career in software architecture and I do moonlight as a Solution Architect at Ambassador Labs, as well. You may recognize the voice. I do a few podcasts alongside Thomas and also do the news management here at InfoQ, too, so looking forward to this discussion.

Thomas Betts: So as I said today we’ll be discussing the current trends in architecture and design as part of the process to create our annual Trends Report for InfoQ. Those reports provide the InfoQ readers with a high level overview of the topics to pay attention to and also help us, the editorial team, focus on innovative technologies for our reporting. In addition to the report and the trends graph that are available on infoq.com, this podcast is a chance to hear our conversation and some of the stories that our expert practitioners have observed.

Design for Portability [02:21]

Thomas Betts: I think we want to start the conversation today with design for portability and the concept of cloud-bound applications, which has been gaining adoption. One of the popular frameworks for this is Dapr, that’s D-A-P-R, not to be confused with dapper, D-A-P-P-E-R; it’s the Distributed Application Runtime, but that’s just one implementation of this idea of designing for portability.

Vasco, can you kick us off? What do you think of the idea of designing for portability and how’s it being used today?

Vasco Veloso: Well, from what I have seen, most of the usages of these tools and these patterns, they are going back, if I may say so, to the old paradigm of build once, run everywhere that we have been hearing about for a long time. Except that this time, instead of being on different machines, the everywhere is, well, every cloud, still chasing the vendor independence dream and it is quite good to see that there are choices of frameworks to accomplish that goal.

Personally, I believe that when the framework is good and has quite a reasonable amount of choices of platforms, it also means that whoever is designing and building the system can focus on what brings value, instead of having to worry too much about the platform details that they are going to be running on.
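To make the portability idea concrete, here is a minimal, hedged sketch of an application talking to a Dapr sidecar’s HTTP state API rather than a vendor SDK, so the backing store (Redis, Cosmos DB, DynamoDB, and so on) can be swapped in configuration. The port (3500), the store name ("statestore") and the example key are common defaults and assumptions here.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class DaprStateExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Save a key/value pair through the sidecar's state API
        HttpRequest save = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3500/v1.0/state/statestore"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "[{\"key\":\"order-1\",\"value\":{\"status\":\"paid\"}}]"))
                .build();
        client.send(save, HttpResponse.BodyHandlers.discarding());

        // Read it back; the application never names the concrete database
        HttpRequest read = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3500/v1.0/state/statestore/order-1"))
                .build();
        System.out.println(client.send(read, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```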

Server-Side WebAssembly [03:44]

Thomas Betts: Then one of the other ideas that falls into this bucket and is sometimes talked about is WebAssembly, and there’s also the clarification of client-side WebAssembly versus server-side WebAssembly.

I know, Eran, this is something you called out earlier in our pre-discussion. Can you elaborate on that? How does server-side WebAssembly tie into thinking about design for portability?

Eran Stiller: A lot of the time when we think about WebAssembly, as we’ve said, it’s about running things in the browser, it’s in the name “Web” Assembly.

But I think a large benefit of it is running it on the server side, because there’s an ongoing effort to integrate a WebAssembly-based runtime instead of, let’s say, Docker. Let’s say you want to run in Kubernetes or another orchestrator, you want to run your software; then instead of compiling it to a Docker container and then needing to spin up an entire system inside that container on your orchestrator, you compile to WebAssembly, and that allows the container to be much more lightweight. It has security baked in because it’s meant to run in the browser, and it can run anywhere in any cloud, on any CPU, for that matter. So it doesn’t matter if it’s an ARM CPU or an x86 CPU; it basically just abstracts away all the details, so we don’t really need to care about them. It makes everything work in a more lightweight and more performant manner. I think that’s the key benefit there that we’ll see now as we progress.

Thomas Betts: Daniel, I can see you nodding your head along. You have anything to add to that?

Daniel Bryant: Yes, it totally makes sense. As you mentioned Eran, I can see the Docker folks leaning heavily into this as well, right? They’ve totally recognized exactly this.

I think there’s a lot of interesting connections around security. Because as Vasco mentioned, I’m old enough to remember Java. “Write once, debug everywhere,” was the phrase I think we often used to say, right?

So really, as the abstractions are going up through the chain, from the JVM, as you implied there, Vasco, we’re really at the cloud abstraction layer now. I think things like Dapr combining with Wasm, it’s all about finding the right abstractions where we can get that reusability, that high cohesion, low coupling from whatever level you are operating at.

So many of us architects are more thinking about the Dapr-level stuff, but I think the Wasm stuff really has an impact. You hinted at it, Eran, on security, for example; if I choose to run a super lightweight container, I’m sure many of us are using Go and Scratch containers, same deal. It gets rid of a whole attack vector potentially. I know we’re going to look at software supply chain later on, but really I love the idea of the different levels of abstraction here.

Thomas Betts: I can’t remember who said it one time, “Every problem can be solved with one more layer of abstraction and indirection.”

Daniel Bryant: Totally.

Thomas Betts: But the idea behind that joke is, “I want to reduce the cognitive load. I don’t want to have to think about the whole tech stack. How do I get closer to thinking about my business problem, my business logic, my domain?” If I have to think about, “How do I deploy this in Kubernetes?” all the time, that’s one more thing that my developers have to think about.

I think Dapr is one framework that says, “Hey, we’re going to take care of all of the deployment problems. We know the shapes of the things we want to deploy, the things we want to build with. We’ve now given you the right size Lego bricks to build what you need.”

Server-Driven UI [06:48]

Thomas Betts: Tanmay, you had an idea that I wanted to bring up that’s, again, somewhat related, server-driven UI. Can you tell us more about what you think that is?

Tanmay Deshpande: It’s a concept that people started talking about a couple of years back, but then didn’t really understand where to use it correctly. Any public applications that we see, any mobile-native applications that all of us need to develop, have to go through quite a lot of, let’s say, scrutiny when they get published on the app stores and the Google stores, yet you need to continuously deploy. When you’re in an era of, let’s say, 100 production check-ins every day and continuous deployments, it’s very hard to see that your code is not getting deployed unless somebody from Google or Apple is going to approve your applications, et cetera.

So I think that’s where server-driven UIs are getting popular among quite a lot of cloud-native or, let’s say, mobile-native application developers, where they want to do server-driven UI development for native applications so that they can continue to improve their applications without bothering about whether all of their users are on the latest versions, or needing to keep backups for backward compatibility, as well. So that’s where I see this trend getting kicked in again, and a lot of people will continue to follow that route, as well.

Thomas Betts: It’s some of that cyclical nature of software that we tend to see these patterns going back and forth. We had mainframes that were very client-server and they’re like, “Oh, well now we can have the client be smarter. We can put the code over there,” and everything moves over.

Then we got into the web era and it’s like, “Oh, the website does everything and we just serve up simple HTML.” Then HTML had all this JavaScript and all the app runs in the browser.

So we keep going back and forth trying to find the right balance and now we have a lot of options and people can choose based on the latest technology, what’s the right solution for you.

Tanmay Deshpande: The very first time I mentioned this to somebody, it’s like, “Isn’t that something JSP used to do?”

Daniel Bryant: Nice. It’s that cliché of, “History doesn’t repeat, but it rhymes.” But I think in software development it’s poetic, you just constantly see the same thing over and over again and hopefully we make improvements, to your point, Thomas.

Thomas Betts: Hopefully.

Eran Stiller: Just something to clarify that. When we say design for portability, it’s not necessarily about, as Daniel said, “Writing once, running everywhere” because we were there before with Java .NET and whatever.

It’s more about the fact that all the details are abstracted; you don’t necessarily need to care about them. Because, let’s say, someone mentioned vendor lock-in and wanting to avoid vendor lock-in. It’s not about the fact that tomorrow we can take our entire application and move it to another cloud and it will still work, because it won’t. There are always details that are abstracted away. If we change the database, if we change the platform, something will break, something will need to change.

It’s more about the fact that we don’t care about those details as we develop things. It makes it much easier for us as developers and as architects when we design our systems, and then later on once the need arises and we want to change something, we’ll still need to work but then it’ll be easier. But I think the focus is less about that and more about the abstraction side of things.

Thomas Betts: Yes, I think portability is, like you said not the, “I need to be able to move it,” but it’s, “I don’t have to think about how it’s anchored to the ground. My code is directly tied to this.” And if you see the evolution of… I used to write build scripts and it was an idea of let’s have our infrastructure as code. Well, now we’re a few layers abstracted from the infrastructure as code. I don’t need to talk about deploy this VM. I just say, “I need some compute that does this,” and we think about compute as the unit that I need or, “I need some data storage,” just stick my data somewhere and handle it and I’m further removed. And whatever’s underneath that could vary. Like you said, if I deployed the same thing to AWS or Azure, maybe the underlying stuff is the same but my abstraction looks identical.

Tanmay Deshpande: I just wanted to add a different perspective on that portability section, as well. I mean, in the industry where I come into the picture, quite a lot of companies which are, let’s say, national companies in general are very particular about where their data resides, and the Googles and the Microsofts of the world are not available in every single region. So building applications which are portable enough to run in every single data center, whether they’re the GCPs or the Azures or the AWSes of the world, is quite important, as well. The same goes for the WebAssembly area, as well. Imagine, let’s say, all sorts of applications, the Adobes and the Siemens of the world, when they used to build heavy desktop applications; with the cloud, they had no answer to “How do I provide these heavy desktop applications as a service to my customers?” WebAssembly has now come into its own, in the sense that people are able to stream heavy applications via the browser itself, as well, for sure.

Daniel Bryant: Can I just put a pin in that, as well? I’ve heard a few folks say that, Eran, Tanmay, there. It’s really all about the APIs. As in, because if we anchor ourselves to a vendor-specific API, it’s really hard to move away. I think that’s the secret of Dapr, if I’m being honest. It’s all about the APIs, and they abstract without doing the lowest common denominator. I think that’s one of the dangers; you always code to, “It’s got to be MySQL” or whatever, but then I can’t take advantage of Cloud Spanner or Redshift or whatever kind of thing. I think the APIs are the key, and that’s where I can see Dapr, if we can all collect around this CNCF project and go, “Hey, let’s all agree,” exactly what Eran said, we’re building for the future. It’s still painful to change but it’s not impossible to change. Right?

Tanmay Deshpande: Absolutely, I agree.

Large Language Models [11:53]

Thomas Betts: So I want to move us on because otherwise we’ll be talking about this for the entire afternoon.

Large language models is I think the next thing I want to talk about. It seems lately there’s a new product announcement–I think there were three this morning. Well, here’s a new thing that just came out. GPT-3 is now old and GPT-4 is out. We’ve got ChatGPT, Bing, Bard. Who’s going to be the best chat bot to talk to?

If you look at it, the people who are in the know have seen that we’ve been building to this slowly for years, but it seems like it’s just suddenly got upon us in the last few months.

Are we at some major inflection point? Is this all just hype or is there something there that we really need to consider? That’s my hot take. Who wants to take that and run with it?

Tanmay Deshpande: I think I’m pretty excited about the advancement of all those applications from my personal space, but I’m equally worried with my enterprise architect hat on, as well. Because I’m not sure in terms of what sort of data is being used to train those models, et cetera. When I’m using it in my personal space, I’m very happy to use those applications. But when I start wearing my enterprise architect hat, I’m equally worried about what challenges it’s going to bring to my enterprise if some of my developers are going to use ChatGPT to build applications and deploy that, as well. So that’s where I’m now very excited to see how this evolves, as well, for sure.

Eran Stiller: Yes, I think we’re seeing a revolution at the moment. Because while the ideas are not new, GPT-3 has been around for a while and it’s been used in various places, but lately we’ve seen GPT-3.5 with ChatGPT and now GPT-4, and there are other models around, it’s not only one. But I think we’re seeing large, major improvements happening all the time, and the speed and velocity at which things are happening just keeps getting faster. I think we’re at a point where the model is good enough; it’s not perfect, there are always issues, but it’s good enough to employ in various scenarios that we never thought of using it for before.

So for example, I see people at the company where I work, some of them like to hack things, and they use ChatGPT. They actually took one of our React components and they took our coding conventions at the company and fed them into ChatGPT or GPT-4. They fed it with the coding conventions, they fed it with the code for the React component, and asked, “What are the bugs here, what are we doing wrong?”

It actually found things, and it’s amazing. It actually found things that the developers never thought of. So that’s only one way to try to utilize this new thing that we still don’t know exactly what we’re going to do with. But I think the possibilities are endless and I, for one, am very excited for all the new things we’re going to see and all the new APIs that will be layered on top of ChatGPT. Because ChatGPT and GPT-4, they’re very broad; you can just input some text, get some text back. But I think the innovation here, once it’s integrated as part of other systems using the API layers and we adapt it to specific fields, that’s when I think we’ll see even more innovation coming.

Vasco Veloso: I was hearing you speaking and also looking at the amount of products that have appeared in the past couple of weeks. It almost feels like we got ChatGPT and then everybody else who was working on large language models looked at it and said, “Oh my God, we need to productize this, we are going to miss the train.” And that’s actually good because, in a sense, it feels like we may be looking at the start of a new AI summer, or AI spring maybe, where the pressure of getting a product out there may actually produce something quite useful, and everybody’s trying to see what the model can do. Well, I am indeed looking forward to what’s coming out of it.

Daniel Bryant: Perfectly said, and I’m a big fan of Simon Wardley, so Wardley Mapping, if folks have bumped into it. He talks about punctuated equilibrium, like a big disrupter, paradigm shift, exactly what you’ve all mentioned.

I like what you’re saying, Vasco. One thing I’m hearing with the productization is that a lot of it’s around the UX, and I’m thinking back to, we mentioned Docker earlier on. Containers were not new, but Docker put a fantastic UX on them and a centralized repository, Docker Hub; that was a game changer.

I’m seeing, just to your point, Thomas, this morning there was a news drop on, I think it was GitHub, about what they’re doing, and there’s the voice control with ChatGPT and so forth. The UX is what it’s about. We as developers would love to be able to chat to our pair programmer or pair architect and then get that feedback.

But there’s always that thing you mentioned Tanmay of, “I want feedback and input, but I want to make the choices.” Because I remember when RPA, was it robotic process automation, came out, UiPath, all that stuff. I remember going to a conference, an O’Reilly Software Architecture conference and everyone was super nervous, quite rightly, about this. Because they were like, “My finance team is creating this RPA thing. I’ve got no idea what security they put on it! I’ve got no idea what auth they’re using!” I’m totally seeing that could be a potential with the output of these kind of frameworks now, right?

Thomas Betts: Yes. I think you start opening it up from … Professional developers are not always thinking about security, but the citizen developers really don’t think about it because they assume it’s just baked in, because everyone picks up their iPhone and they think it’s as secure as it needs to be and they don’t think about, “What are the ramifications if I connect to this site and connect to that site? It’s like it’s all fine, right?” Then there’s a breach and you find out, “Oh, this could have been maintained.”

When we lower the bar of who can write the code, and I think that’s good in a lot of ways, we have to acknowledge we have to build better security by default and not so much that it prevents its use.

Low-Code and No-Code Integration [17:21]

Thomas Betts: That does get to the idea of, how is this one more way that, I said citizen developers, would use it and how does it integrate with low-code and no-code. I think, Eran, this was your idea, you wanted to tie in.

Eran Stiller: Yes, so there’s a lot of talk about low-code and no-code systems for a while. Seems for years it was on the InfoQ Trends Report. I don’t remember when we added it, but it was there before. I think that all of these language models are going to be a huge enabler for low-code/no-code systems.

I remember a few days ago I saw on Twitter, I don’t remember who posted it, an example of someone who integrated ChatGPT with the Unity platform, and they had this prompt where you could say, “Add 100 cubes. Add 3 sources of light.” It just did it, because behind the scenes it took the prompts, translated them into code, and then just ran them.

We all saw examples of how ChatGPT and GPT-4 can actually create usable code. ChatGPT could only create components, one file, et cetera, simple things. GPT-4 is much more improved. You can actually generate entire simple websites, but still. I think once we take that, this could actually be a new abstraction of programming languages. Because we started from Assembly and then we got C and C++ and then we got into all kinds of Java and .NET and higher level languages.

I think you can think about this as a new programming language, which is much more abstract, and I don’t know if you can do anything with it, maybe not now, maybe it’ll come in the future, but I think it’s inevitable that it will make citizen developers, but also professional developers, much more efficient, and it’s another tool that we can use and we should learn how to utilize.

Thomas Betts: Yes, I think the example someone cited to me is that a lot of people’s first programming language is Microsoft Excel. Maybe they don’t think about it-

Vasco Veloso: And it’s Turing Complete.

Thomas Betts: But when you’re saying, “This cell equals this plus that” or, “Sum up these numbers,” you’re writing code. It’s not just typing in text, it is actual code there, and you don’t think about it. You don’t think about it as programming, but in a sense, that’s what you’re doing.

I see these large language models as that enabler that gives ordinary people the ability to do a lot more, without having to know how it all works. That’s, again, that force multiplier of having an abstraction layer that’s so powerful.

I think someone pointed out it can create a full website. I saw the demo of, let me sketch out the website, take a picture, and then it generates the HTML for you.

Vasco Veloso: A wireframe, yes.

Thomas Betts: That goes to the, it’s the UX that we need to figure out. If I can take a picture of something and get a working system or I can talk to it, as opposed to I’m sitting there for an hour typing out code, that just saves me time. It’s not doing anything that I couldn’t do or some programmer couldn’t do. It’s doing the thing that’s really easy so I can work on the thing that’s really hard.

Eran Stiller: I think also a lot of people think, “Well, will this put developers out of work? Will we need any more developers?” I think that’s not the case. I think we’ll still need developers, they’ll just be more efficient and do more things. Because, as I think Vasco mentioned earlier, we’re still the ones making the decisions. For example, all these models, like GPT-3, I think, were already integrated in GitHub Copilot, for example, which I think was based on top of it and could generate test cases. I assume at some point it will be updated to GPT-4 and will provide better results.

But still, even when GPT-4 other models they generate code, you could still look at the code quality, it’s not what we expect in quality conventions. Maybe there’s some hidden bugs in there that we don’t know about, maybe some security flaws. Of course, with time it’ll get better, it will give out better results and we could go on faster, but I think we’ll still be the drivers. It’s just that the building blocks will be much bigger.

Daniel Bryant: That’s an interesting point, Eran. It does point to different skillsets we might need to learn as developers or architects. Because I think more product-minded developers will excel in this; to your point, Thomas, I sketch out a wireframe, happy days. But some folks really like writing code; sometimes I want to write the code. So that’s going to be really interesting, the things we have to learn, and as architects the way we phrase the problems, because typing is no longer the bottleneck for ourselves or for our teams. How’s that going to change our jobs? Quite a lot, I think.

Thomas Betts: Yes. I think it’s going to put Stack Overflow out of business before it kicks me out.

Daniel Bryant: Yes, super interesting.

Thomas Betts: But Stack Overflow is probably feeding half of what my questions are answered by, it’s just saving me the work of finding the answer.

Software Supply Chain Security [21:48]

Thomas Betts: I wanted to move us on, again. We don’t talk about security a lot on the Trends Report, but I wanted to bring it up this year because it’s been an interesting last few years with global supply chains being disrupted and the talk of the software supply chain and how that comes into play. We’re not moving molecules around, it’s just electrons. But those bits that we’re downloading for the hundreds or thousands of packages that my code depends on, we’re now getting into this question of trust.

How do I know what the code is that I’m using? Are we just trusting the commons that we’ve got? “Oh, it’s out on NPM or NuGet, so it’s got to be safe?” And how do we verify that the code I run is safe for what I need it to do?

Tanmay Deshpande: Every time anyone starts talking about software supply chain security, a meme pops into my head where there’s a bunch of great things that I’ve developed, and it says that it’s all based on a very tiny bit of JavaScript that somebody’s maintaining in some corner of the world that I’m not aware of. So I think that says a lot about software supply chain security.

Starting from 2021, we started seeing quite a lot of buzz around that, because there were some incidents that pushed us to make a conscious effort in that direction. I’ve seen quite a lot of, let’s say, open frameworks that are available from the Googles and the Microsofts of the world, where they’re making them available for everyone so that everyone can start understanding what level of software supply chain security they have and what else can be done, as well. So I think it’s going to be quite evident as we continue to move ahead in this journey to keep stressing more and more on that, for sure, as well.

Thomas Betts: It’s certainly something we might not be thinking about every day, but it’s something that the architects looking at the big picture of the system have to consider: “What is the foundation that we’re building on? Is it all just a house of cards one level down?”

Daniel Bryant: Getting visibility is a key thing, as Thomas was saying. I think a lot of us here work with, say, SBOMs, that kind of stuff. The first thing is actually knowing where there is a problem, and I think the Log4j stuff really triggered a lot of architects to realize, “This is everywhere in my system. I didn’t even realize there was Java running there.” Do you know what I mean? So having that SBOM, and you mentioned SLSA, Tanmay; there’s a bunch of frameworks popping up, open source frameworks. I think they’re super smart. I’d definitely encourage architects to check these things out. Visibility is the first stage of that problem solving.

Design for Sustainability [23:58]

Thomas Betts: What about the idea of designing for sustainability? Vasco, I think you mentioned this a little bit. Are there new trends or new ways that people are thinking about making their software system sustainable or measuring it?

Vasco Veloso: Indeed, I have noticed that there is more talk about sustainability these days. Let’s be honest that probably half of it is because, well, energy is just more expensive and everybody wants to reduce OPEX. But, it also means that there are new challenges.

Let’s look at a data center, for example. I am absolutely not an expert in data centers. I know they exist, I can understand them, but please don’t ask me to optimize their footprint, way out of my league. However, a data center is at a level of abstraction that is just too high, and there are limits to what can be done to reduce the footprint at that level. Initiatives such as the Green Software Foundation, for example, that is part of the Linux Foundation, are trying to come up with a methodology that people, developers, architects, can use to measure the footprint of a software system. And that, depending on the boundaries that you choose, can actually allow you to measure the footprint of individual systems within a large boundary, such as a data center. Going from there, well, the sky’s the limit, I think.

Thomas Betts: Yes, I think the measurement problem is the thing that we’re working on most right now. Because we say we want to reduce our carbon footprint, but what’s my carbon footprint? And if you don’t have a way to tell, the best guess we’ve had is, “Well, let me look at what my cloud spend costs, because that’s probably directly correlated to how much the servers cost to run, which is based off the electricity, and let’s assume the electricity is not green.” But that’s going to be wrong in a lot of cases. And it’s going to be very different. If I deploy to US East, which runs mostly on coal, that’s going to be different than a data center that runs on renewable energy, so you have to factor that in. I think that’s what the Green Software Foundation is trying to help do: not just, what is your code doing and how much does it run, but where is it and how is it run?
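For context on the methodology mentioned here: the Green Software Foundation’s Software Carbon Intensity (SCI) specification defines a score as ((E × I) + M) per functional unit R. The sketch below is a minimal, hedged illustration; all numbers are made up.

```java
// SCI = ((E * I) + M) / R, where E is energy consumed (kWh), I is the grid carbon
// intensity (gCO2e/kWh), M is embodied emissions attributed to the software (gCO2e),
// and R is the functional unit (e.g. requests served).
class SciExample {
    static double sci(double energyKwh, double gridIntensity, double embodiedGrams, double functionalUnits) {
        return ((energyKwh * gridIntensity) + embodiedGrams) / functionalUnits;
    }

    public static void main(String[] args) {
        // e.g. 12 kWh consumed on a 450 gCO2e/kWh grid, 2,000 g embodied, per 1 million requests
        System.out.printf("gCO2e per request: %.4f%n", sci(12, 450, 2_000, 1_000_000));
    }
}
```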

Vasco Veloso: Indeed. Also taking into consideration that the energy is cleaner at some points during the day and is dirtier at others, as well. So that is still yet another factor.

Thomas Betts: So you can change when your software runs. Tanmay, you had something to add?

Tanmay Deshpande: Well, yes, and I can relate more to that. I mean, I breathe in and breathe out all of those things every day as part of my job. So for sure, I feel that, the way we are talking about the software bill of materials, we are also going to start seeing every single software provider being expected to provide their carbon emissions as part of the services that they’re providing, the way we provide reliabilities, the way we provide availabilities, et cetera. I feel that that’s going to be a trend that we’re going to see in the next few years, for sure, as well. Then what’s going to drive that trend is that, the way we do our financial accounting today as a publicly-listed company, we’re soon going to be expected to do carbon accounting, as well. So it’s going to be quite evident that I may start choosing a vendor that does greener energy emissions than the other one in some context, as well, for sure. So it’s going to be quite an interesting trend to watch, for sure.

Thomas Betts: I know a lot of companies, at least in the US, they said, “Oh, we’ve gone carbon-neutral.” Well, they just waited until 2020 happened and they sent everybody home and they stopped having to pay for the electricity in their building. I’m like, “Are you counting everyone running at home and everyone’s internet connection to connect back into the data centers?”

Eran, did you have a last comment about this?

Eran Stiller: So you mentioned earlier where the software is going to run and how it’s going to operate. But it’s also about when and how we time things.

For example, today, and I think the key here is accountability, I think we’re starting to go in that direction. Because right now as developers we have FinOps running all the time and it’s getting more traction. We’re actually being measured on how much money we spend on the cloud because, for a developer, it’s his decision, her decision; they can just spin up a large instance and create a $10,000 bill out of nowhere.

But no one’s measuring our emissions right now and there isn’t really a good way of doing it. Also, I know the large providers are working on all kinds of calculators to help you estimate your carbon footprint, but it’s still not there and no one’s holding us accountable for it. I think once we get to that stage where we’ll be held accountable both as developers, but more importantly as architects, then we need to take those decisions into account.

Because right now, as an architect, I design a system, I’m well aware of how many containers I’m using, whether I’m using this database or that database, because of cost. But when we factor in carbon emissions and those calculations, I might decide, “Well, I can run this calculation at night,” not because it’s cheaper, it might be cheaper because of things like spot instances and stuff, but because it also produces less emissions because of the way the power for that data center is generated and so on.

So I think right now we’re at the start of it. It’s still an innovator’s market, but I think it’ll progress as accountability comes to mind, when the calculators become better, and it might even become a regulation thing, but who knows.

Decentralized Apps (dApps) [29:16]

Thomas Betts: So I’m going to throw out a topic that I didn’t mention earlier that I was going to bring up: Blockchain and decentralized apps. Blockchain has sat on the innovator section of the trends graph I think since it showed up and we haven’t moved it because it is only applied to crypto and NFTs and that just didn’t seem interesting.

Well, partially because Twitter was supposed to implode this year, Mastodon took off, and I think that’s maybe the first time that people saw an actual decentralized app in production and they used it. Is that enough to say it’s moved over to early adopter, or are blockchain and dApps still in that innovator, hasn’t-really-caught-on stage?

Daniel Bryant: Shall I be controversial and say I still think it’s niche or niche, depending on how you want to phrase it? I think there is something interesting there, but there’s a lot of like, “That’s almost like a house of cards built on top of it,” in terms of it’s very easy. We saw most folks, I think Meta pulling back from NFTs. There’s been all the SVB bank and a lot of crypto stuff related to that, as well. So I think there’s distrust in the market.

When we had zero interest rates, I think everyone was just like, “Build all the things!” and clearly not thinking about carbon emissions and things related to that, as well. But now I think the fact that we do have high interest rates and we are being more conscious of some of these environmental factors, I don’t see blockchains... To do with proof of work, I don’t see it being a thing; proof of stake maybe, or proof of some other things. But the way it’s originally architected, I don’t see that happening in the current macroeconomic climate.

Eran Stiller: Yes, I think when we prepared for this podcast, someone mentioned that it just doesn’t align with consumer demands or consumer requirements. You mentioned Mastodon earlier.

It’s decentralized and as a techie, as a technologist, I really like it. I think it’s cool. I think the way that when a server shuts down, you can just move your stuff over and it continues working, and no one can shut it down and there’s no censorship in it. And these are very good ideas. But when you look at the average consumer, they really don’t care about it. “I just want it to work. I want to go to a certain website, to type something in the browser, open an account, log in. I don’t care if there’s censorship or there isn’t censorship,” most people. Again, some people who live in certain countries care about this very much, but I’m talking about most of the population that uses this tech, and decentralized software is still not there, it still doesn’t offer …

I think the only case that I can think of where a decentralized app was actually successful was around file sharing and torrents and stuff like that. The reason why that was successful is because the consumer requirement actually aligns with the nature of a decentralized app: “I don’t want to get caught. I don’t want anyone to be able to block me and I want it to work fast,” so decentralized was doing it much better. So the requirements aligned with the technology there, but I don’t think there is another case of that that was very successful.

Thomas Betts: I think you hit it on the head. There’s no consumer need for that as a top level feature and so why add the complexity if we don’t need it?

Socio-Technical Architecture Trends [32:12]

Thomas Betts: So I wanted to wrap up the discussion talking about the socio-technical issues, the ways that architects are working. I think we took it off the trend chart last year, but the “architect as technical leader,” the “What role do you have as an architect?” How do you lead, and also, how are we doing with remote and hybrid work, all of that stuff about how are we doing architecture.

Just general thoughts that people have on that concept. Vasco, you want to kick that off?

Vasco Veloso: Well, I can get started by sharing my personal view, which is that regardless of whether we are designing a piece of software or an enterprise architecture at a 30,000 feet view, it is always important to, and this sounds like a cliche, but not to lose touch with reality. That is when expressions such as having one foot in the engine room or always messing around with tech, even if you don’t write code or build a system on your free time, just ensuring that there are still lines of communication open with the people who are actually building and troubleshooting and debugging and involved in those calls at 3:00 in the morning. That is the only way how we can really understand whether what we are designing works and has value and then take those lessons to the next project. That’s my take.

Thomas Betts: Tanmay?

Tanmay Deshpande: I always like the idea of documenting your architecture. I mean, if you just Google around the words, “How do I document an architecture?,” there aren’t really good answers, or very strict or very popular answers, to that. Then I always get that question from some junior people that I’m currently working with.

There are quite a lot of good things available. I mean, people generally start talking about ADRs, but then they only record the decisions part. They’re not going to give the full view of the architecture as it is right now. So I’m right now struggling to have a very good standard around that part. The way I personally try to use it is obviously with a combination of C4, ADRs, and something like arc42; that is the way for me, it is the correct answer so far for me right now. But then I certainly feel that there has to be something revolutionary here to happen, considering the fact that software architecture is such an old, in a sense, age-old practice that we have been practicing all across, and there’s still no good answer for documenting a very good software architecture in a sense, right?

Daniel Bryant: Can I just add, great answers already, but I think to lead, coming almost full circle with LLMs and ChatGPT, the what now is going to be easier. We can feed something into our systems and go, “What’s going on there?,” and it’ll spit back and we can have a conversation.

As you alluded to Tanmay, the why is not going to be there, and that’s ADRs. Thomas, you’ve talked fantastic on previous podcasts around the value of ADRs.

But I almost envision talking about UX again, but the ability to scrub the timeline. Do you know what I mean? I do it sometimes in Google Docs and things and, “Why did we get to this point?” I can go back in the version history, Git, Google Docs, choose your poison. But I think the why is going to be the real thing. The what, ChatGPT is just going to rock that better than we can. Feed the code in, give us an answer. But why? Because there’s always good intentions, good information at the right time, but then imperfect world, and you’re like, “What idiot put this code there? Oh, it was me two years ago,” is the classic. I love to be able to scrub that timeline and go, “Why did I do it at the time? Oh, these constraints were there, which have changed. Now I can make different decisions,” right?

Eran Stiller: Daniel, you’ve just blown my mind. As you were talking, I started thinking, “Well, if we feed ChatGPT, we give it our code and we also provide our ADRs along with it and they’ll all be timed and it will have access to the Git commit history, I can ask some complex questions around why we’ve done some stuff and who’s at fault for something.” That’s an interesting concept, I wonder who’s going to turn it into a product.

Daniel Bryant: We need to patent this, Eran. I think the five of us here could make some serious money, right?

Eran Stiller: Yes, indeed.

Thomas Betts: I want the GPT-enabled forensic auditor that comes in and says, “What did it look like at this time and why did you do it that way in December of 2022?” I don’t remember, but all the data should be there if you had it captured.

I personally have found that if I am writing an ADR, using ChatGPT or Bing to ask it questions helps me understand the trade-offs. That’s a surprising thing that a year ago I would not have considered, having that as my virtual assistant, my pair programming assistant.

I work in a different time zone than most of my team, so they’re not always on when I’m working late, and having that person that’s always available to ask a question helps. Then if I get lazy, “Please just write the ADR for me,” and then I compare it to what I would’ve done. So that’s a new way of working that I think gets to, we have to constantly be looking as architects at what the new technology trends are, how can we incorporate them, should we incorporate them, and how can we make our process better? And how can we get that to our developers and engineers and everyone else we work with and make their lives better?

Tanmay Deshpande: I think in this remote world, asynchronous communication is the key. ADRs and C4 and all of those things that we keep talking about are the best means to communicate your architecture if you’re working in a remote setup, for sure, as well. These are the tools, probably, alongside personal communication, as well.

Thomas Betts: I want to thank everybody for joining me and participating in this discussion of architecture and design trends for the InfoQ Trends Report. We hope you all enjoyed this and please go to the infoq.com site and download the report that’ll be available the same day this podcast comes out.

So thank you again, Vasco Veloso, Eran Stiller, Tanmay Deshpande, and Daniel Bryant.

I hope you join us again soon for another episode of the InfoQ podcast.

About the Authors

From this page you also have access to our recorded show notes. They all have clickable links that will take you directly to that part of the audio.



625 Shares in MongoDB, Inc. (NASDAQ:MDB) Bought by Yarbrough Capital LLC

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Yarbrough Capital LLC bought a new position in shares of MongoDB, Inc. (NASDAQ:MDB) during the 4th quarter, according to its most recent disclosure with the SEC. The fund bought 625 shares of the company’s stock, valued at approximately $123,000.


Other hedge funds and other institutional investors have also recently bought and sold shares of the company. Sentry Investment Management LLC bought a new stake in MongoDB during the 3rd quarter worth approximately $33,000. Alta Advisers Ltd bought a new position in MongoDB during the third quarter valued at $40,000. Huntington National Bank lifted its position in shares of MongoDB by 1,468.8% during the third quarter. Huntington National Bank now owns 251 shares of the company’s stock worth $50,000 after acquiring an additional 235 shares in the last quarter. Quadrant Capital Group LLC lifted its position in shares of MongoDB by 37.8% during the second quarter. Quadrant Capital Group LLC now owns 419 shares of the company’s stock worth $109,000 after acquiring an additional 115 shares in the last quarter. Finally, IFP Advisors Inc raised its position in shares of MongoDB by 28.9% in the third quarter. IFP Advisors Inc now owns 1,760 shares of the company’s stock worth $155,000 after buying an additional 395 shares in the last quarter. 84.86% of the stock is owned by institutional investors and hedge funds.

MongoDB Stock Performance

Shares of NASDAQ:MDB traded down $6.14 during midday trading on Monday, reaching $213.63. The company’s stock had a trading volume of 256,891 shares, compared to its average volume of 1,783,544. The company has a quick ratio of 4.10, a current ratio of 4.10 and a debt-to-equity ratio of 1.66. The stock has a fifty day moving average of $210.61 and a 200 day moving average of $197.13. MongoDB, Inc. has a twelve month low of $135.15 and a twelve month high of $471.96.

Insider Activity at MongoDB

In other news, insider Thomas Bull sold 399 shares of MongoDB stock in a transaction dated Tuesday, January 3rd. The shares were sold at an average price of $199.31, for a total value of $79,524.69. Following the sale, the insider now owns 16,203 shares in the company, valued at $3,229,419.93. The sale was disclosed in a document filed with the SEC, which can be accessed through the SEC website. Also, CTO Mark Porter sold 635 shares of the firm’s stock in a transaction dated Tuesday, January 3rd. The stock was sold at an average price of $187.72, for a total transaction of $119,202.20. Following the completion of the transaction, the chief technology officer now directly owns 27,577 shares in the company, valued at $5,176,754.44. Over the last quarter, insiders have sold 110,994 shares of company stock worth $22,590,843. 5.70% of the stock is owned by company insiders.

Analysts Set New Price Targets

A number of research analysts recently commented on the company. Piper Sandler reissued an “overweight” rating and issued a $270.00 target price on shares of MongoDB in a research note on Thursday, March 9th. Needham & Company LLC lifted their target price on shares of MongoDB from $240.00 to $250.00 and gave the company a “buy” rating in a report on Thursday, March 9th. Credit Suisse Group cut their price objective on shares of MongoDB from $305.00 to $250.00 and set an “outperform” rating for the company in a research report on Friday, March 10th. KeyCorp lifted their target price on shares of MongoDB from $220.00 to $255.00 and gave the company an “overweight” rating in a research note on Monday, February 6th. Finally, Truist Financial decreased their target price on shares of MongoDB from $300.00 to $235.00 in a research report on Monday, January 9th. Four equities research analysts have rated the stock with a hold rating and twenty have issued a buy rating to the company’s stock. According to MarketBeat, the company currently has a consensus rating of “Moderate Buy” and a consensus price target of $253.87.

MongoDB Company Profile


MongoDB, Inc engages in the development and provision of a general-purpose database platform. The firm’s products include MongoDB Enterprise Advanced, MongoDB Atlas and Community Server. It also offers professional services including consulting and training. The company was founded by Eliot Horowitz, Dwight A.

Further Reading

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news



Big Data and Analytics Market is Gaining Momentum | MongoDB, Azure, Splunk

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Article originally posted on mongodb google news. Visit mongodb google news



Article: Accelerating the Secure Software Delivery Lifecycle with GitOps

MMS Founder
MMS Gopinath Rebala

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • A key way that businesses can respond faster to today’s competitive pressures is by enabling shorter and shorter software delivery lifecycles – without sacrificing reliability, security, or compliance.
  • By integrating GitOps into the DevSecOps workflow, businesses can dramatically simplify the software delivery workflow, while establishing Git as the source of truth for state-of-service delivery.
  • The GitOps Model separates security from development by enabling security teams to specify policies independent of the software development processes, significantly improving the security of the delivered software.
  • The GitOps Model increases developer productivity because developers can simply commit their code in Git without needing to understand and use an orchestrator, conduct testing, set policies, or seek approvals.  
  • While the GitOps Model does not yet support the necessary transparency, real-time visibility, and collaboration required for production environments, we expect new solutions over the next year or two that will make true GitOps a reality.

More than ever, businesses must be able to respond faster to intense competitive pressure, increase operational efficiency, and adapt to constant disruption. One key to accomplishing this is enabling shorter and shorter software delivery lifecycles – that don’t sacrifice reliability, security, or compliance. This is the goal of the integrated DevSecOps strategy. However, today DevSecOps implementations require that developers understand and participate in setting up the delivery pipeline and are vigilant when it comes to security issues – otherwise the risks associated with cybercrime and regulatory compliance failures can soar.

GitOps offers a solution. By integrating GitOps into the DevSecOps workflow, businesses can leverage the benefits of Git to dramatically simplify the software delivery workflow, including for production releases, while also automating pipeline setup in the background and handing off security and compliance tasks to the respective teams.

The goal of GitOps is to enable software delivery and infrastructure configuration with Git as the source of truth for state-of-service delivery. GitOps also enables software delivery operations with familiar workflows and tools for developers, enabling DevOps processes with little or no additional friction.

Enabling enterprise security teams to specify security policies that apply to the software delivery process via the GitOps delivery model vastly improves the security of application delivery. For example, the security team could require applications deployed to a specific environment to perform dynamic application security tests (DAST) and ensure the results of the checks conform to expectations.

Specifying such security policies as code acts not only as documentation of security policies but also helps in the automation of compliance checks for these policies. In addition, security policy as code, managed by security teams independent of the software development processes and integrated with the tools that developers use, significantly improves the security of the delivered software.

This separation is critical for developing a zero-trust environment around software releases. It can also accelerate the delivery of new software by surfacing security issues as they arise, eliminating the time and complexity of separate security reviews.

DevSecOps – A solid foundation

Despite increasing risks associated with the software delivery lifecycle, most organizations struggle to get their operations, product development, and security teams to collaborate in a way that improves security without adding cumbersome process steps that end up slowing down the software delivery lifecycle. DevSecOps builds this collaboration into the software delivery lifecycle based on the following principles:

  • Providing visibility into security issues throughout the software delivery workflow
    The security team, developers, and project managers should all have visibility into the results of a comprehensive list of security tests, including static application security testing (SAST), dynamic application security testing (DAST), fuzz testing, dependency scanning, binary scanning, license compliance, and secret detection.
  • Identifying security issues as early as possible in the software delivery lifecycle
    The sooner risks and vulnerabilities can be surfaced, the faster and more easily they can be remediated. Automation should be used to eliminate the time-consuming manual review of the large number of logs and metrics created by the build, test, deploy, and production stages.
  • Enabling the automatic enforcement of security policies throughout the workflow
    Application security includes ensuring that production software complies with evolving regulatory requirements, especially those related to data privacy, and reducing the risk of breaches. It also includes ensuring that a release complies with internal policies, such as all the review steps required before deployment.

The currently popular Pipeline Model for DevSecOps adheres to these principles. Both AWS and Google Cloud have outlined best practices for this approach. With an orchestrator tool such as Spinnaker, changes in Git or other source repositories trigger an automated workflow (a “push model”) that goes through the required steps for deploying to a target environment. These workflows are typically represented as directed acyclic graphs, with requirements for running each stage/step and with dependent stages/steps that run based on the status or output of a stage. This model gives all the stakeholders (dev, ops, security, SRE, etc.) the required visibility into all the steps for delivery, and it encodes the rules/requirements as guardrails or gates, which are also exposed to all the stakeholders in the delivery ecosystem.
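
As a rough conceptual sketch (not Spinnaker's actual data model; stage names and the tiny runner are invented for illustration), such a pipeline boils down to stages that run only after their prerequisites succeed:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class PipelineSketch {

        // Each stage names the stages it depends on; a stage runs only once
        // all of its prerequisites have completed.
        record Stage(String name, List<String> dependsOn) {}

        static void run(List<Stage> stages) {
            Set<String> done = new HashSet<>();
            Deque<Stage> pending = new ArrayDeque<>(stages);
            while (!pending.isEmpty()) {
                Stage stage = pending.poll();
                if (done.containsAll(stage.dependsOn())) {
                    System.out.println("running stage: " + stage.name());
                    done.add(stage.name());
                } else {
                    pending.addLast(stage); // prerequisites not met yet; try again later
                }
            }
        }

        public static void main(String[] args) {
            run(List.of(
                    new Stage("build", List.of()),
                    new Stage("unit-tests", List.of("build")),
                    new Stage("security-scan", List.of("build")),
                    new Stage("deploy-staging", List.of("unit-tests", "security-scan")),
                    new Stage("deploy-prod", List.of("deploy-staging"))));
        }
    }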

However, this pipeline model puts a significant burden on the developers to be familiar with the entire process, including how to use and configure the orchestrator. This means it requires a demanding onboarding process for developers to be able to use the system.

GitOps Model for Delivery

The GitOps Model allows developers to simply commit their code in Git and be done. They don’t have to run or track anything using new tools in the delivery process. They don’t have to understand and use the orchestrator. Instead, SecOps is able to put the necessary visibility, risk identification, and rule enforcement into Git, making Git a single source for all the necessary process controls, which run in the background, invisible to the developers.

The GitOps Model relies on Kubernetes-native capabilities. Kubernetes supports a declarative model for running applications, where the configuration required for the application services can be specified in configuration files as declarative specifications. The specifications can then be applied through the Kubernetes API, and Kubernetes deduces the operations that should be performed to attain the state declared in the configuration files and executes them. This model allows for the detection of changes between the expected state in Git and the actual running state (drift detection), enabling corrective actions.
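
As a plain-Java illustration of the drift-detection idea (the maps below stand in for the desired state recorded in Git and the state observed in the cluster; the real comparison is done by Kubernetes and GitOps controllers against live API objects):

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    class DriftDetectionSketch {

        // Desired and actual state are modeled here as maps from resource name
        // to a hash of its specification. Any resource whose running spec no
        // longer matches what Git declares has drifted and needs reconciling.
        static Set<String> detectDrift(Map<String, String> desiredFromGit,
                                       Map<String, String> actualInCluster) {
            Set<String> drifted = new HashSet<>();
            for (Map.Entry<String, String> entry : desiredFromGit.entrySet()) {
                if (!entry.getValue().equals(actualInCluster.get(entry.getKey()))) {
                    drifted.add(entry.getKey()); // missing or changed: corrective action needed
                }
            }
            return drifted;
        }
    }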

Kubernetes also has an extension mechanism that augments its base functionality, allowing additional checks to run as part of verifying deployment requirements. The Kubernetes Admission Control (KAC) mechanism plays a key role in preserving GitOps simplicity for developers while providing the required reliability and security guarantees for the delivery.

KAC also provides the critical ability to separate enterprise process policies from the development process, allowing policy changes to be made independently of development. As an example, to align with the SOX requirement that more than one individual be involved in verifying changes to production, a policy at admission control can verify that the Git PR review and the QA approval were performed by two different individuals before allowing deployment, as sketched below.
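
A plain-Java sketch of that kind of check follows; in a real cluster the logic would run inside a validating admission webhook or be expressed as an OPA or Kyverno policy, and the annotation keys are assumptions made for the example:

    import java.util.Map;

    class SeparationOfDutiesCheck {

        static final String PR_REVIEWER = "delivery/pr-reviewed-by";
        static final String QA_APPROVER = "delivery/qa-approved-by";

        // Returns true only when both approvals are present and were made by
        // two different individuals, mirroring the SOX-style requirement.
        static boolean admits(Map<String, String> deploymentAnnotations) {
            String reviewer = deploymentAnnotations.get(PR_REVIEWER);
            String approver = deploymentAnnotations.get(QA_APPROVER);
            if (reviewer == null || approver == null) {
                return false; // missing evidence: deny and notify
            }
            return !reviewer.equalsIgnoreCase(approver);
        }
    }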

Taking advantage of these capabilities, the GitOps Model offers developers a huge advantage: they need to take only one step to participate in the DevSecOps workflow, committing their changes to Git; an agent then detects those changes and reconciles them in the target environment. There is no need to perform security checks, conduct testing, set policies, or seek approvals as separate steps in their DevOps processes.

Instead, in the GitOps model, the DevOps tools can be required to perform their actions asynchronously in place of sequential orchestration steps and then, at deployment time, verify that the required steps have been performed according to the specified policy.

For example, a code check-in performs continuous integration (CI) unit tests, allowing developers to have faster dev/test cycles. The binary scans, dynamic scans, and other policy checks are performed asynchronously to the developer workflow. Test cases are run automatically, or manually for integration tests, and results are published to the same or another central repository. The results stored in the central repository are associated with a signed binary artifact that can be used for verification. This is typically done using the output of CI – for example, a container that can be signed. For a production deployment, the Git configuration is updated with the new version manifests.

When the configuration for a new version is updated in the Git repository, a deployment is triggered to the Kubernetes namespace. The admission controller configuration then triggers the checks against the signed metadata for the container before allowing the deployment. These checks determine whether the static scans, dynamic scans, compliance requirements like SOX, and test results are acceptable, approving or denying the deployment and notifying users of the reasons for any failure.

Open Policy Agent (OPA) Gatekeeper and Kyverno are examples of frameworks for implementing admission control checks for deployments. Both are configured as Kubernetes extensions and evaluate security policies at admission time. When a new application or a Git change is applied, the extension recognizes the need for a new deployment and runs security policy checks to determine whether all the specified security and compliance requirements have been met. It can then approve the deployment or stop it.

While these are great frameworks for aiding the GitOps Model, they do not yet support the following:

  • The ability to centrally audit the checks required for delivery across various tools working asynchronously.
  • An interface for modifying and viewing policies and applying them to different types of applications and Kubernetes clusters.
  • Visibility into applications running on various Kubernetes clusters and the status of security, resolution requirements, etc.

What this means is that although the GitOps Model has the potential to make life simpler for developers, there is a need for additional tooling to support real-time visibility and collaboration capabilities.

In enterprise environments, the security team should be able to view the compliance status of all the applications including exceptions granted for specific applications. The system should support the ability to inform developer groups of violations and collaborate with the security team on the resolution of an issue, for example, a new vulnerability identified in a deployed application. These capabilities are critical for ensuring security without compromising the speed of software delivery.

Speed in software delivery workflows is essential. But speed without security and compliance is reckless. The GitOps Model for DevSecOps offers the best of all worlds. It can free developers to focus on their projects without having to understand security issues and learn to use an orchestrator tool in the quick dev/test cycles – while also allowing them to continue working with their tool of choice. At the same time, it can enable SecOps to add the required guardrails to the software delivery workflow and surface risks as they emerge, so they can be resolved faster and more easily – ensuring software releases are safer and more reliable. No, we aren’t quite there yet, but over the next year or two, expect to see a crop of solutions hitting the market that will make true GitOps a reality.

About the Author



How Will the Market React to Mongodb Inc (MDB) Stock Getting a Bullish Rating

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news


Monday, March 20, 2023 03:54 PM | InvestorsObserver Analysts


Mongodb Inc (MDB) stock is up 4.43% over the past week and gets a Bullish rating from InvestorsObserver Sentiment Indicator.


What is Stock Sentiment?

In investing, sentiment generally means whether or not a given security is in favor with investors. It is typically a pretty short-term metric that relies entirely on technical analysis. That means it doesn’t incorporate anything to do with the health or profitability of the underlying company.

Changes in price are generally the best indicator of sentiment for a particular stock. At its core, a stock’s trend indicates whether current market sentiment is bullish or bearish. Investors must be bullish if a stock is trending upward, and are bearish if a stock is moving down.

InvestorsObserver’s Sentiment Indicator factors in both price changes and variations in volume. An increase in volume usually means a current trend is strengthening, while a drop in volume tends to signal a reversal of the ongoing trend.

Our system also uses the options market in order to receive additional signals on current sentiments. We take into account the ratio of calls and puts for a stock since options allow an investor to bet on future changes in price.

What’s Happening With MDB Stock Today?

Mongodb Inc (MDB) stock is down 4.14% while the S&P 500 is up 0.91% as of 3:53 PM on Monday, Mar 20. MDB has fallen $9.10 from the previous closing price of $219.77 on volume of 1,052,430 shares. Over the past year the S&P 500 is down 11.41% while MDB has fallen 46.05%. MDB lost $5.03 per share over the last 12 months.

More About Mongodb Inc

Founded in 2007, MongoDB is a document-oriented database with nearly 33,000 paying customers and well past 1.5 million free users. MongoDB provides both licenses as well as subscriptions as a service for its NoSQL database. MongoDB’s database is compatible with all major programming languages and is capable of being deployed for a variety of use cases.


Article originally posted on mongodb google news. Visit mongodb google news



Google Cloud Spanner Introduces Configurable Read-Only Replicas and Zero-Downtime Move Service

MMS Founder
MMS Renato Losio

Article originally posted on InfoQ. Visit InfoQ

Google recently announced new regional and multi-regional capabilities for Cloud Spanner. The distributed SQL database now supports configurable read-only replicas and introduced a “zero-downtime” instance move service.

Cloud Spanner supports regional and multi-regional configurations, with regional configurations providing 99.99% availability and multi-regional configurations providing 99.999% availability and protection from regional outages. But low latency was hard to achieve globally, as Mark Donsky, senior product manager, explains:

Until today, read-only replicas were available in several multi-region configurations: nam6, nam9, nam12, nam-eur-asia1, and nam-eur-asia3. But now, with configurable read-only replicas, you can add read-only replicas to any regional or multi-regional Spanner instance so that you can deliver low-latency stale reads to clients everywhere.

Cloud Spanner offers three types of replicas: read-write replicas, read-only replicas, and witness replicas. Read-only replicas provide low-latency stale reads to nearby clients and help increase a node’s read scalability. As they do not participate in voting to commit writes, the read-only replicas do not contribute to the write latency.
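
A stale read is one the client explicitly allows to be slightly behind the leader, which is what lets a nearby read-only replica answer it. As a minimal sketch with the Spanner Java client (project, instance, database, and table names are placeholders), a bounded-staleness query looks roughly like this:

    import com.google.cloud.spanner.DatabaseClient;
    import com.google.cloud.spanner.DatabaseId;
    import com.google.cloud.spanner.ResultSet;
    import com.google.cloud.spanner.Spanner;
    import com.google.cloud.spanner.SpannerOptions;
    import com.google.cloud.spanner.Statement;
    import com.google.cloud.spanner.TimestampBound;
    import java.util.concurrent.TimeUnit;

    public class StaleReadSketch {
        public static void main(String[] args) {
            // Accept data that is at most 15 seconds old, so a nearby
            // read-only replica can serve the query without a round trip
            // to the leader region.
            Spanner spanner = SpannerOptions.newBuilder().build().getService();
            try {
                DatabaseClient client = spanner.getDatabaseClient(
                        DatabaseId.of("my-project", "my-instance", "my-database"));
                try (ResultSet rs = client
                        .singleUse(TimestampBound.ofMaxStaleness(15, TimeUnit.SECONDS))
                        .executeQuery(Statement.of("SELECT SingerId, FirstName FROM Singers"))) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("SingerId") + " " + rs.getString("FirstName"));
                    }
                }
            } finally {
                spanner.close();
            }
        }
    }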

The zero-downtime instance move service is designed to move production Spanner instances from any configuration and region to a different one, without downtime, supporting regional, multi-regional, and custom deployments with configurable read-only replicas. Donsky highlights the previous challenge:

So you can imagine that moving a Spanner instance from one configuration to another — say us-central1 in Iowa to nam3 with a read-only replica in us-west2 — is no small feat. Factor in Spanner’s stringent availability of up to 99.999% while serving traffic at an extreme scale, and it might seem impossible to move a Spanner instance from us-central1 to nam3 with zero downtime.

The combination of the new service and the option to customize configurations with additional read replicas now allows customers to move an instance to a different location on Google Cloud. Depending on the environment, the operation might require from a few hours to a few days to complete, but during the process Cloud Spanner maintains high availability and strong consistency, preserving the SLA guarantees.

During the change, both the source and destination instance configurations are subject to hourly compute and storage charges. The new zero-downtime move service requires opening a ticket with Google support and currently has some limitations: moving instances across projects and accounts is not supported, Spanner free trial instances are not supported and an instance must have a minimum of 1 node (1000 processing units).

In a separate announcement, Spanner fine-grained access control is now generally available, allowing database administrators to define database roles, grant privileges to the roles, and create IAM policies to grant permissions on roles to IAM principals. Earlier this year, Spanner introduced support for regional endpoints, where data is stored and processed within the same region to comply with regulatory requirements, but the feature has been rolled back and moved to a future release.

About the Author



The Economics of Database Operations – The New Stack

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts


Rising costs and uncertain economic times are causing many organizations to look for ways to do more with less. Databases are no exception. Fortunately, opportunities exist to increase efficiency and save money by moving to a document database and practicing appropriate data modeling techniques.

Document databases save companies money in two ways:

  1. Object-centric, cross-language SDKs and schema flexibility mean developers can create and iterate production code faster, lowering development costs.
  2. Less hardware is necessary for a given transaction throughput, significantly reducing operational costs.

Developer Efficiency

All modern development uses the concept of objects. Objects define a set of related values and methods for reading, modifying and deriving results from those values. Customers, invoices and train timetables are all examples of objects. Objects, like all program variables, are transient and so must be made durable by persisting them to disk storage.

We no longer manually serialize objects into local files the way Windows desktop developers did in the 1990s. Nowadays data is stored not on the computer running our application, but in a central place accessed by multiple applications or instances of an application. This shared access means not only do we need to read and write data efficiently over a network, but also implement mechanisms to allow concurrent changes to that data without one process overwriting another’s changes.

Relational databases predate the widespread use and implementation of object-oriented programming. In a relational database, data structures are mathematical tables of values. Interaction with the data happens via a specialized language, SQL, that has evolved in the past 40 years to provide all sorts of interaction with the stored data: filtering and reshaping it, converting it from its flat, deduplicated interrelated model into the tabular, re-duplicated, joined results presented to applications. Data is then painfully converted from these rows of redundant values back into the objects the program requires.

Doing this requires a remarkable amount of developer effort, skill and domain knowledge. The developer has to understand the relationships between the tables. They need to know how to retrieve disparate sets of information, and then rebuild their data objects from these rows of data. There is an assumption that developers learn this before entering the world of work and can just do it. But this is simply untrue. Even when a developer has had some formal education in SQL, it’s unlikely that the developer will know how to write nontrivial examples efficiently.

Document databases start with the idea of persisting objects. They allow you to persist a strongly typed object to the database with very little code or transformation, and to filter, reshape and aggregate results using exemplar objects, not by trying to express a description in the broken English that is SQL.

Imagine we want to store a customer object where customers have an array of some repeating attribute, in this case, addresses. Addresses here are weak entities not shared between customers. Here is the code in C#/Java-like pseudocode:
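
A plausible Java rendering of such an object is sketched below; class and field names are illustrative assumptions:

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative domain objects: a Customer owns a list of Address values.
    // Addresses are weak entities that exist only as part of their Customer.
    class Address {
        String street;
        String city;
        String postcode;
        // getters and setters omitted for brevity
    }

    class Customer {
        String id;
        String name;
        List<Address> addresses = new ArrayList<>();
        // getters and setters omitted for brevity
    }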

To store this customer object in a relational database management system (RDBMS) and then retrieve all the customers in a given location, we need the following code or something similar:
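
A hedged sketch of what that looks like with plain JDBC follows; the table and column names (customers, addresses, customer_id, city) are assumptions made for illustration, and real code would add transactions and error handling:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    class CustomerSqlStore {

        // Persisting one object means writing to two tables and wiring up the
        // foreign key by hand.
        void save(Connection conn, Customer c) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO customers (id, name) VALUES (?, ?)")) {
                ps.setString(1, c.id);
                ps.setString(2, c.name);
                ps.executeUpdate();
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO addresses (customer_id, street, city, postcode) VALUES (?, ?, ?, ?)")) {
                for (Address a : c.addresses) {
                    ps.setString(1, c.id);
                    ps.setString(2, a.street);
                    ps.setString(3, a.city);
                    ps.setString(4, a.postcode);
                    ps.addBatch();
                }
                ps.executeBatch();
            }
        }

        // Reading the object back means joining the tables and manually
        // reassembling Customer objects from flattened, duplicated rows.
        List<Customer> findByCity(Connection conn, String city) throws SQLException {
            String sql = "SELECT c.id, c.name, a.street, a.city, a.postcode "
                       + "FROM customers c JOIN addresses a ON a.customer_id = c.id "
                       + "WHERE a.city = ?";
            Map<String, Customer> byId = new LinkedHashMap<>();
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, city);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        Customer customer = byId.computeIfAbsent(rs.getString("id"), id -> {
                            Customer nc = new Customer();
                            nc.id = id;
                            return nc;
                        });
                        customer.name = rs.getString("name");
                        Address a = new Address();
                        a.street = rs.getString("street");
                        a.city = rs.getString("city");
                        a.postcode = rs.getString("postcode");
                        customer.addresses.add(a);
                    }
                }
            }
            return new ArrayList<>(byId.values());
        }
    }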

This code is not only verbose and increasingly complex as the depth or number of fields in your object grows, but adding a new field will require a slew of correlated changes.

By contrast, with a document database, your code would look like the following and would require no changes to the database interaction if you add new fields or depth to your object:
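
By comparison, a minimal sketch with the MongoDB Java driver might look like this, assuming the driver's POJO codec is registered and that Customer and Address follow JavaBean conventions (no-arg constructors plus getters and setters); database, collection, and field names are again placeholders:

    import static com.mongodb.client.model.Filters.eq;
    import static org.bson.codecs.configuration.CodecRegistries.fromProviders;
    import static org.bson.codecs.configuration.CodecRegistries.fromRegistries;

    import com.mongodb.MongoClientSettings;
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import java.util.ArrayList;
    import java.util.List;
    import org.bson.codecs.configuration.CodecRegistry;
    import org.bson.codecs.pojo.PojoCodecProvider;

    class CustomerDocumentStore {

        private final MongoCollection<Customer> customers;

        CustomerDocumentStore(String uri) {
            CodecRegistry pojoRegistry = fromRegistries(
                    MongoClientSettings.getDefaultCodecRegistry(),
                    fromProviders(PojoCodecProvider.builder().automatic(true).build()));
            MongoClient client = MongoClients.create(uri);
            customers = client.getDatabase("shop")
                    .getCollection("customers", Customer.class)
                    .withCodecRegistry(pojoRegistry);
        }

        // The whole object graph, addresses included, is persisted in one call.
        void save(Customer c) {
            customers.insertOne(c);
        }

        // Filtering on a nested field needs no joins and no manual reassembly.
        List<Customer> findByCity(String city) {
            return customers.find(eq("addresses.city", city)).into(new ArrayList<>());
        }
    }

Adding a field to Customer changes only the class itself; the save and findByCity calls stay the same.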

When developers encounter a disconnect between the programming model (objects) and the storage model (rows), they’re quick to create an abstraction layer to hide it from their peers and their future selves. Code that automatically converts objects to tables and back again is called an object relational mapper or ORM. Unfortunately, ORMs tend to be language-specific, which locks development teams into that language, making it more difficult to use additional tools and technologies with the data.

Using an ORM doesn’t free you from the burden of SQL either when you want to perform a more complex operation. Also, since the underlying database is unaware of objects, an ORM usually cannot provide much efficiency in its database storage and processing.

Document databases like MongoDB persist the objects that developers are already familiar with so there’s no need for an abstraction layer like an ORM. And once you know how to use MongoDB in one language, you know how to use it in all of them and you never have to move from objects back to querying in pseudo-English SQL.

It’s also true that PostgreSQL and Oracle have a JSON data type, but you can’t use JSON to get away from SQL. JSON in an RDBMS is for unmanaged, unstructured data, a glorified string type with a horrible bolt-on query syntax. JSON is not for database structure. For that you need an actual document database.

Less Hardware for a Given Workload

A modern document database is very similar to an RDBMS internally, but unlike the normalized relational model where the schema dictates that all requests are treated equally, a document database optimizes that schema for a given workload at the expense of other workloads. The document model takes the idea of the index-organized table and clustered index to the next level by co-locating not only related rows as in the relational model, but all the data you are likely to want to use for a given task. It takes the idea that a repeating child attribute of a relation does not need to be in a separate table (and thus storage) if you have a first-order array type. Or in other terms, you can have a column type of “embedded table.”

This co-location or, as some call it, the implicit joining of weak entity tables reduces the costs of retrieving data from storage as often only a single cache or disk location needs to be read to return an object to the client or apply a filter to it.

Compare this with needing to identify, locate and read many rows to return the same data and the client-side hardware necessary to reconstruct an object from those rows, a cost so great that many developers will put a secondary, simpler key-value store in front of their primary database to act as a cache.

These developers know the primary database cannot reasonably meet the workload requirements alone. A document database requires no external cache in front of it to meet performance targets but can still perform all the tasks of RDBMS, just more efficiently.

How much more efficiently? I’ve gone through the steps of creating a test harness to determine how much efficiency and cost savings can be achieved by using a document database versus a standard relational database. In these tests, I sought to quantify the transactional throughput per dollar for a best-in-class cloud-hosted RDBMS versus a cloud-hosted document database, specifically MongoDB Atlas.

The use case I chose represents a common, real-world application where a set of data is updated frequently and read even more frequently: an implementation of the U.K. Vehicle Inspection (MOT) system and its public and private interfaces, using its own published data.

The tests revealed that create, update and read operations are considerably faster in MongoDB Atlas. Overall, on similarly specified server instances with a similar instance cost, MongoDB Atlas manages approximately 50% more transactions per second. This increases as the relational structure becomes more complex, making the joins more expensive still.

In addition to the basic instance costs, the hourly running costs of the relational database varied between 200% and 500% of the Atlas costs for these tests due to additional charges for disk utilization. The cost of hosting the system, highly available to meet a given performance target, was overall three to five times less expensive on Atlas. In the simplest terms, Atlas could push considerably more transactions per second per dollar.

Independent tests confirm the efficiency of the document model. Temenos, a Swiss-based software company used by the largest banks and financial institutions in the world, has been running benchmark tests for more than 15 years. In its most recent test, the company ran 74,000 transactions per second (TPS) through MongoDB Atlas.

The tests resulted in throughput per core that was up to four times better than a similar test from three years ago while using 20% less infrastructure. This test was performed using a production-grade benchmark architecture with configurations that reflected production systems, including nonfunctional requirements like high availability, security and private links.

During the test, MongoDB read 74,000 TPS with a response time of 1 millisecond, all while also ingesting another 24,000 TPS. Plus, since Temenos is using a document database, there’s no caching in the middle. All the queries run straight against the database.

Summary

Unless you intend to have a single database in your organization used by all applications, moving your workloads from relational to a document model would allow you to build more in less time with the same number of developers, and spend considerably less running it in production. It’s unlikely that your business hasn’t started using object-oriented programming yet. So why haven’t you started using object-oriented document databases?


TNS owner Insight Partners is an investor in: Pragma.



Aerospike targets Java Spring devs with support for the popular framework – The Register

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Distributed NoSQL database Aerospike has boosted functionality and engineering support for its real-time database within the popular Java development environment Spring Framework.

Spring Data Aerospike adds Spring Data 3, Reactive Repositories, and Projections to help developers building applications with Spring Boot. The database biz says it has created an engineering team to support continuing integration with the Spring Data framework.

The move would help to build and bring Java applications to the Aerospike database, one analyst told us.

Included in the release are real-time object data in Java applications, more efficient data retrieval from object stores in Aerospike and expanded query features for nested and plain Java objects, the company told us.

Spring is the most popular Java framework, according to the Stack Overflow 2022 developer survey. Aerospike customers include Sony Entertainment, PayPal, and Airtel.

Speaking to The Register, Srini Srinivasan, Aerospike CTO and founder, said the open source community had begun working on support for Aerospike in Spring, but he wanted to accelerate the work.

“The idea is that people who program in Spring already know it, and they don’t have to understand the details of the Aerospike APIs underneath. It enables them to make progress really fast: the database just gets plugged in underneath using the connector. We’re adding enterprise support and adding support for the work already done in the community,” he said.

Srinivasan said the features would mean developers avoid writing complex application code using the native Aerospike APIs. They will also support “configuration-driven development” to create data queries without writing any code, he said.
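
As a rough sketch of what that looks like with Spring Data repositories (the entity and repository below are illustrative and assume spring-data-aerospike is on the classpath, not code from Aerospike's announcement):

    import java.util.List;
    import org.springframework.data.annotation.Id;
    import org.springframework.data.aerospike.mapping.Document;
    import org.springframework.data.aerospike.repository.AerospikeRepository;

    // Illustrative entity persisted as an Aerospike record.
    @Document
    class Customer {
        @Id
        String id;
        String name;
        String city;
        // getters and setters omitted for brevity
    }

    // Query methods are derived from the method name, so no native Aerospike
    // API calls or hand-written query code are needed.
    interface CustomerRepository extends AerospikeRepository<Customer, String> {
        List<Customer> findByCity(String city);
    }

A Spring Boot service would then simply inject CustomerRepository; the release's Reactive Repositories offer a non-blocking variant of the same pattern.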

Holger Mueller, vice president and principal analyst at Constellation Research, told us Aerospike’s expanded enterprise support for the latest version of Spring Boot 3.0x was positive for enterprise Java developers.

“Java is the largest developer community and Spring the most popular framework so combining both in support for a database makes it easier to build and bring applications to a database, and that is what Aerospike is doing. Given the shortfall in developer capacity that enterprises face, anything that makes developers more productive [or] increases developer velocity is critical and highly welcome by [businesses] building or porting their next generation applications,” Mueller said.

Greater Spring support follows added support for JSON documents in a slew of new features included in its Database 6 release, announced in April 2022. ®



MongoDB Administrator at Vennote Technologies Limited – The Paradise News

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Key Responsibilities:

  • Provision database instances
  • Ensure performance, security, and availability of databases
  • Prepare documentation and specifications
  • Handle common database procedures, such as upgrade, backup, recovery, migration, etc.
  • Profile server resource usage, and optimize and tweak as necessary
  • Collaborate with other team members and stakeholders
  • The MongoDB administrator will help manage, maintain, and troubleshoot the company databases housed in MongoDB. They will create the scripts to automate routine tasks and establish backup protocols to ensure the reliability and safety of data sets. In addition, the administrator will work with the development team to create security protocols, making data integrity one of the highest priorities.
  • The administrator will need to have excellent communication skills to help translate requests and shepherd through the large volume of data queries, as well as being a creative problem-solver, a motivated self-starter, and a team player. They also need to be dedicated to efficient, inventive use of MongoDB and be enthusiastic about new technologies and iterations of MongoDB.
  • Primary operational support for assigned database environments, as well as backup support for other environments
  • Responsible for database and system performance, including database monitoring, problem analysis, and tuning activities to ensure optimum database performance
  • After-hours support for application release and change management activities
  • Participate in on-call rotation and respond to critical database incidents
  • Work with application development groups and system administrators to determine the best storage, access, and distribution methods for data
  • Participate as a project team member providing vital information regarding database tools, structure, physical design, and implementation

Technical Experience:

  • Experience configuring, managing, and troubleshooting a MongoDB sharded cluster
  • Good troubleshooting skills for performance issues, replication issues, or operational issues such as alerts, alert logs, jobs, disk groups, etc.
  • Good understanding of DB schema design, performance tuning, and capacity planning
  • Experienced in MongoDB upgrades, database migrations, and patching
  • Knowledge of optimizing query performance
  • Good analytical and communication skills
  • 4+ years of experience in the NoSQL area
  • Experience in creating data models using BigTable-type NoSQL (Cassandra, HBase, etc.) modeling techniques
  • Experience in porting relational data models into BigTable-type NoSQL data models
  • Hands-on experience in performance tuning a Cassandra ring
  • Advanced knowledge and experience with Cassandra and OpsCenter
  • Hands-on experience with serialization APIs like Thrift and Hector
  • Exposure to Big Data technologies like Hadoop
  • Exposure to how MapReduce/Hive/Pig jobs can run on Cassandra
  • Experience with Unix/Linux, including basic commands and shell scripting
  • Good working knowledge of DB performance monitoring tools
  • Experience with an RDBMS such as Oracle, MySQL, or PostgreSQL and use of Hibernate, other ORMs, and/or JDBC is a plus


Click Here To Apply


Article originally posted on mongodb google news. Visit mongodb google news
