Aerospike’s new Graph database to support both OLAP and OLTP workloads – InfoWorld

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Aerospike on Tuesday took the covers off its new Graph database that can support both Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) workloads.

The new database, dubbed Aerospike Graph, adds a property graph data model to the existing capabilities of its NoSQL database and Apache TinkerPop graph compute engine, the company said.

Apache TinkerPop is an open source property graph computing framework that helps the new graph database support both OLTP and OLAP queries. It is also used by other database providers, including Neo4j, Microsoft’s Azure Cosmos DB, Amazon Neptune, Alibaba Graph Database, IBM Db2 Graph, ChronoGraph, Hadoop, and Tibco’s Graph Database.

To integrate Apache TinkerPop with its database, Aerospike exposes the TinkerPop graph API via its Aerospike Graph Service, the company said.

In its efforts to integrate further, the company said it has created an optimized data model under the hood to represent graph elements — such as vertices and edges — that map to the native Aerospike data model using records, bins, and other features.
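To make that mapping concrete, here is a minimal sketch in plain Python of how property-graph elements might be flattened onto key-value records with named bins. The record layout, key scheme, and helper functions are hypothetical illustrations, not Aerospike’s actual client API.

```python
# Conceptual sketch: flattening property-graph elements (vertices, edges)
# onto a key-value record model with named bins. The record layout and
# field names here are hypothetical, for illustration only.

def vertex_record(vertex_id, label, **properties):
    """A vertex stored as a single record; bins hold its label and properties."""
    return {"key": f"v:{vertex_id}", "bins": {"label": label, **properties}}

def edge_record(edge_id, label, out_v, in_v, **properties):
    """An edge stored as its own record, referencing its two endpoint keys."""
    return {
        "key": f"e:{edge_id}",
        "bins": {"label": label, "out": f"v:{out_v}", "in": f"v:{in_v}", **properties},
    }

alice = vertex_record(1, "person", name="Alice")
bob = vertex_record(2, "person", name="Bob")
knows = edge_record(10, "knows", out_v=1, in_v=2, since=2020)
```

Storing edges as first-class records, rather than embedding them in vertices, is one common way to let both element types scale independently.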

The new graph database, just like Apache TinkerPop, will make use of the Gremlin Query Language, Aerospike said. This means developers can write applications with new and existing Gremlin queries in Aerospike Graph.

Gremlin is the graph query language of Apache TinkerPop.
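For readers new to Gremlin, its queries are written as fluent traversals. The sketch below mimics that traversal style over a tiny in-memory graph in plain Python; real Gremlin queries execute on a TinkerPop-enabled engine, so this is only an illustration of the query pattern, not the language itself.

```python
# Mimics the shape of a Gremlin traversal, e.g.:
#   g.V().has('name','Alice').out('knows').values('name')
# over a tiny in-memory graph. Real Gremlin runs on a TinkerPop engine;
# this mock only illustrates the fluent traversal style.

class Traversal:
    def __init__(self, vertices, edges, current):
        self.vertices, self.edges, self.current = vertices, edges, current

    def has(self, key, value):
        # Keep only vertices whose property matches.
        kept = [v for v in self.current if self.vertices[v].get(key) == value]
        return Traversal(self.vertices, self.edges, kept)

    def out(self, label):
        # Step to the destination of outgoing edges with the given label.
        nxt = [dst for (src, lbl, dst) in self.edges
               if lbl == label and src in self.current]
        return Traversal(self.vertices, self.edges, nxt)

    def values(self, key):
        return [self.vertices[v][key] for v in self.current]

vertices = {1: {"name": "Alice"}, 2: {"name": "Bob"}, 3: {"name": "Carol"}}
edges = [(1, "knows", 2), (2, "knows", 3)]
g = Traversal(vertices, edges, list(vertices))

print(g.has("name", "Alice").out("knows").values("name"))  # ['Bob']
```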

Some of the applications of graph databases include identity graphs for the advertising technology industry, customer 360 applications across ecommerce and retail companies, fraud detection and prevention across financial enterprises, and machine learning for generative AI.

The introduction of the graph database also adds the graph model to Aerospike’s real-time data platform, which already supports key-value and document models. Last year, the company added support for native JSON.

While Aerospike did not reveal any pricing details, it said Aerospike Graph can independently scale compute and storage, and enterprises will have to pay for the infrastructure being used.


Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



Java News Roundup: GraalVM 23.0.0, Payara Platform, Spring 6.1-M1, QCon New York

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for June 12th, 2023 features news from OpenJDK, JDK 22, JDK 21, GraalVM 23, various releases of: GraalVM Build Tools, Spring Framework, Spring Data, Spring Shell, Payara Platform, Micronaut, Open Liberty, Quarkus, Micrometer, Hibernate ORM and Reactive, Project Reactor, Piranha, Apache TomEE, Apache Tomcat, JDKMon, JBang, JHipster, Yupiik Bundlebee; and QCon New York 2023.

OpenJDK

After its review had concluded, JEP 404, Generational Shenandoah (Experimental), was officially removed from the final feature set in JDK 21. This was due to the “risks identified during the review process and the lack of time available to perform the thorough review that such a large contribution of code requires.” The Shenandoah team has decided to “deliver the best Generational Shenandoah that they can” and will seek to target JDK 22.

Julian Waters, a member of the OpenJDK development team at Oracle, has submitted JEP Draft 8310260, Move the JDK to C17 and C++17, to allow the use of C17 and C++17 programming language features in JDK source code. With Visual Studio 2019 now the minimum required version of the Microsoft Visual C/C++ compiler, this draft proposes changes to the build system such that the existing C++ flag, -std:c++14, will change to -std:c++17, and the existing C flag, -std:c11, will change to -std:c17.

JDK 21

Build 27 of the JDK 21 early-access builds was also made available this past week featuring updates from Build 26 that include fixes to various issues. Further details on this build may be found in the release notes.

JDK 22

Build 2 of the JDK 22 early-access builds was also made available this past week featuring updates from Build 1 that include fixes to various issues. More details on this build may be found in the release notes.

For JDK 22 and JDK 21, developers are encouraged to report bugs via the Java Bug Database.

GraalVM

Oracle Labs has introduced Oracle GraalVM with a new distribution and license model for both development and production applications. GraalVM Community Components 23.0.0 provides support for JDK 20 and JDK 17 with the GraalVM for JDK 17 Community 17.0.7 and GraalVM for JDK 20 Community 20.0.1 editions of GraalVM. New features in Native Image include: support for G1 GC; compressed object headers and pointers for a lower memory footprint; and machine learning to automatically infer profiling information. InfoQ will follow up with a more detailed news story.

On the road to version 1.0, Oracle Labs has also released version 0.9.23 of Native Build Tools, a GraalVM project consisting of plugins for interoperability with GraalVM Native Image. This latest release provides notable changes such as: a fix for the compatibility of the “collect reachability metadata” task with Gradle’s configuration cache; removal of the deprecated Gradle GFileUtils class, which will ultimately be removed in Gradle 9; and the addition of the GraalVM logo to the generated Native Build Tools documentation. Further details on this release may be found in the changelog.

Spring Framework

The first milestone release of Spring Framework 6.1 delivers bug fixes, improvements in documentation, dependency upgrades and new features such as: initial support for the new Sequenced Collections interfaces; support for Coordinated Restore at Checkpoint (CRaC); compatibility with virtual threads; and a ClientHttpRequestFactory interface based on the HttpClient class provided by Jetty. More details on this release may be found in the release notes.

Similarly, versions 6.0.10 and 5.3.28 of Spring Framework have also been released featuring bug fixes, improvements in documentation, dependency upgrades and new features such as: a new remoteServer() method added to the MockHttpServletRequestBuilder class to set the remote address of a request; a new matchesProfiles() method added to the Environment interface to determine whether one of the given profile expressions matches the active profiles; and declaring the isPerInstance() method defined in the Advisor interface as default to eliminate the unnecessary requirement to implement that method. Further details on these releases may be found in the release notes for version 6.0.10 and version 5.3.28.

Versions 2023.0.1, 2022.0.7 and 2021.2.13, service releases of Spring Data, ship with bug fixes and respective dependency upgrades to sub-projects such as: Spring Data Commons 3.1.1, 3.0.7 and 2.7.13; Spring Data MongoDB 4.1.1, 4.0.7 and 3.4.13; Spring Data Elasticsearch 5.1.1, 5.0.7 and 4.4.13; and Spring Data Neo4j 7.1.1, 7.0.7 and 6.3.13.

Versions 3.1.1 and 3.0.5 of Spring Shell have been released with notable bug fixes such as: a target annotated with @ShellAvailability not registering with Ahead-of-Time processing; native mode broken on Linux; and an unexpected comma inserted at the end of a parsed message. More details on these releases may be found in the release notes for version 3.1.1 and version 3.0.5.

Payara

Payara has released their June 2023 edition of the Payara Platform that includes Community Edition 6.2023.6, Enterprise Edition 6.3.0 and Enterprise Edition 5.52.0. All three versions feature: the removal of the throwable reference of the ASURLClassLoader class to eliminate class loader leaks; and a fix for the configuration of the dependency injection kernel, HK2, for JDK 17 compilation. Further details on these versions may be found in the release notes for Community Edition 6.2023.6, Enterprise Edition 6.3.0 and Enterprise Edition 5.52.0.

Micronaut

The fourth release candidate of Micronaut 4.0 delivers bug fixes and improvements such as: add a default method to the overloaded set of writeValueAsString() methods in the JsonMapper interface; improved exception handling on scheduled jobs; and a new parameter, missingBeans=EndpointSensitivityHandler.class, for the @Requires annotation on the EndpointsFilter class to convey that endpoint sensitivity is handled externally and the filter will not be loaded. More details on this release may be found in the release notes.

Open Liberty

IBM has released Open Liberty 23.0.0.6-beta that provides: continued improvements in their InstantOn functionality; continued support for the Jakarta Data specification; and improvements for OpenID Connect clients with support for Private Key JWT client authentication and RFC 7636, Proof Key for Code Exchange by OAuth Public Clients (PKCE).

Quarkus

Quarkus 3.1.2.Final, the second maintenance release, provides improvements in documentation, dependency upgrades and bug fixes such as: a ClassNotFoundException when using the Qute Templating Engine in dev mode; a NullPointerException in version 3.1.1 when using a Config Interceptor; and the Quarkus server hanging indefinitely on startup when using the OidcRecorder class. Further details on this release may be found in the release notes.

Micrometer

Versions 1.11.1, 1.10.8 and 1.9.12 of Micrometer Metrics have been released with dependency upgrades and bug fixes such as: an improper variable argument check in the KeyValues class that leads to a NullPointerException; loss of scope and context propagation between Project Reactor and imperative code blocks; and random gRPC requests returning null upon calling the currentSpan() method defined in the Tracer class. More details on these releases may be found in the release notes for version 1.11.1, version 1.10.8 and version 1.9.12.

Similarly, versions 1.1.2 and 1.0.7 of Micrometer Tracing have been released with dependency upgrades, improvements in documentation and bug fixes such as: abstractions from the Span interface not being equal when delegating to the same OpenTelemetry object; and a fix for Project Reactor with Micrometer 1.10 by using null scopes instead of clearing thread locals. Further details on these releases may be found in the release notes for version 1.1.2 and version 1.0.7.

Hibernate

The release of Hibernate ORM 6.2.5.Final provides bug fixes such as: caching not working properly for entities with inheritance when the hibernate.cache.use_structured_entries property is set to true; generic collections not mapped correctly when using a @MappedSuperclass annotation; and JSON-B mapping of different types in a class inheritance hierarchy not working.

The release of Hibernate Reactive 2.0.1.Final ships with compatibility with Hibernate ORM 6.2.5.Final and adds support for the @Lob annotation for MySQL, MariaDB, Oracle, and Microsoft SQL Server.

Project Reactor

Project Reactor 2022.0.8, the eighth maintenance release, provides dependency upgrades to reactor-core 3.5.7 and reactor-netty 1.1.8. There was also a realignment to version 2022.0.8 with the reactor-kafka 1.3.18, reactor-pool 1.0.0, reactor-addons 3.5.1 and reactor-kotlin-extensions 1.2.2 artifacts, which remain unchanged. More details on this release may be found in the changelog.

Piranha

The release of Piranha 23.6.0 delivers notable changes such as: removal of the deprecated Logging Manager and MimeTypeManager interfaces; deprecation of the --war and --port command line arguments; and the addition of HTTPS support to the Piranha Maven plugin. Further details on this release may be found in their documentation and issue tracker.

Apache Software Foundation

Apache TomEE 9.1.0 has been released featuring bug fixes, improvements in documentation, dependency upgrades and improvements such as: the use of the ActiveMQ 5.18.0 Jakarta EE-compatible client in favor of the shade approach with TomEE; and backports of the fixes that addressed CVE-2023-24998 and CVE-2023-28708 in Apache Tomcat from version 10.1.x to version 10.0.27. More details on this release may be found in the release notes.

Versions 10.1.10 and 8.5.90 of Apache Tomcat deliver: support for JDK 21 and virtual threads; an update to HTTP/2 to use the RFC 9218, Extensible Prioritization Scheme for HTTP, prioritization scheme; dependency upgrades to Tomcat Native 2.0.4 and 1.2.37, respectively, which include binaries for Windows built with OpenSSL 3.0.9 and 1.1.1u, respectively; and deprecation of the xssProtectionEnabled property in the HttpHeaderSecurityFilter class, with its default value set to false. Further details on these versions may be found in the changelogs for version 10.1.10 and version 8.5.90.

JDKMon

Versions 17.0.67 and 17.0.65 of JDKMon, a tool that monitors and updates installed JDKs, have been made available this past week. Created by Gerrit Grunwald, principal engineer at Azul, these new versions provide: support for the new GraalVM Community builds; and a small icon added to the name of a JDK to indicate that it is managed by SDKMan. An experimental new feature in version 17.0.65 is a switch-jdk script, placed in a user’s home folder, that makes it possible to switch to a specific JDK in a shell session.

JBang

The release of JBang 0.108.0 ships with support for JEP 445, Unnamed Classes and Instance Main Methods (Preview). It is important to note that developers will be required to build and install JDK 21 early-access to use JEP 445 due to the Temurin JDK builds only providing JDK 20 as the latest version.

JobRunr

JobRunr 6.2.2 has been released with notable changes: improved caching of job analysis when using the Java Stream API; and a fix for the ElectStateFilter and ApplyStateFilter interfaces being invoked when there is no change of state.

JHipster

The first beta release of JHipster 8.0.0 delivers bug fixes and notable changes such as: the use of Consul by default; a fix for Apache Cassandra tests by dropping CassandraUnit and adding reactive tests; and a move to deny-by-default over allow-by-default by using the authorizeHttpRequests() method defined in the Spring Security HttpSecurity class. It is important to note that the AngularX configuration option has been renamed to Angular, with AngularX maintained for backward compatibility until it is removed in the GA release of JHipster 8.0. More details on this release may be found in the release notes.

Yupiik

The release of Yupiik Bundlebee 1.0.20, a light Java Kubernetes package manager, provides updates such as: additional placeholders for the default observability stack; support for namespace placeholder keywords to enable the reuse of globally configured namespaces in placeholders; and proper usage of DaemonSet for Loki. Further details on this release may be found in the release notes.

QCon New York

After a three-year hiatus due to the pandemic, the 9th annual QCon New York conference was held at the New York Marriott at the Brooklyn Bridge in Brooklyn, New York this past week featuring three days of presentations from 12 tracks and keynotes delivered by Radia Perlman, Alicia Dwyer Cianciolo, Suhail Patel and Sarah Bird. More details about this conference may be found in the InfoQ daily recaps from Day One and Day Two. InfoQ will follow-up with Day Three coverage.



Voxel51 Open-Sources Computer Vision Dataset Assistant VoxelGPT – Q&A with Jason Corso

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

Voxel51 recently open-sourced VoxelGPT, an AI assistant that interfaces with GPT-3.5 to produce Python code for querying computer vision datasets. InfoQ spoke with Jason Corso, co-founder and CSO of Voxel51, who shared their lessons and insights gained while developing VoxelGPT.

In any data science project, data exploration and visualization is a key early step, but many of the techniques are intended for structured or tabular data. Computer vision datasets, on the other hand, are usually unstructured: images or point clouds. Voxel51’s open-source tool FiftyOne provides a query language for exploring and curating computer vision datasets. However, it can be challenging for casual users to quickly and reliably use tools like FiftyOne.

VoxelGPT is a plug-in for FiftyOne that provides a natural language interface for the tool. Users can ask questions about their data, which are translated into Python code that leverages FiftyOne to produce the answers. VoxelGPT uses LangChain and prompt engineering to interact with GPT-3.5 and output the Python code. InfoQ spoke with Jason Corso, co-founder and CSO of Voxel51, about the development of VoxelGPT.
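The overall pattern, a natural-language question translated into query code that is then executed against the dataset, can be sketched as follows. This is a toy stand-in: the dataset is an in-memory list and the translation step is hard-coded, whereas the real system prompts GPT-3.5 through LangChain to emit FiftyOne query code.

```python
# Toy sketch of the VoxelGPT pattern: a natural-language question is turned
# into query code, which is then evaluated against the dataset. Here the
# "translation" step is hard-coded; the real system asks GPT-3.5 for it.

dataset = [
    {"filepath": "img1.jpg", "label": "cat", "confidence": 0.92},
    {"filepath": "img2.jpg", "label": "dog", "confidence": 0.55},
    {"filepath": "img3.jpg", "label": "cat", "confidence": 0.40},
]

def translate(question):
    """Stand-in for the LLM call: map a question to a filter expression."""
    if "low confidence" in question:
        return "[s for s in dataset if s['confidence'] < 0.5]"
    raise ValueError("unsupported question in this toy example")

code = translate("show me low confidence samples")
result = eval(code)  # the real system executes the generated query code
print([s["filepath"] for s in result])  # ['img3.jpg']
```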

InfoQ: It’s clear that while LLMs provide a powerful natural language interface for solving problems, getting the most out of them can be a challenge. What are some tips for developers?

Jason Corso: A key learning we had while developing VoxelGPT is that expecting one interaction with the LLM to sufficiently address your task is likely a bad idea.  It helps to carefully segment your interaction with the LLM to sufficiently provide enough context per interaction, generate more useful piecemeal results, and later compose them together depending on your ultimate task.

A few other lessons learned:

  • Start simple, gain intuition, and only add layers of complexity once the LLM is acting as you expect.
  • LangChain is a great library, but it is not without its issues. Don’t be afraid to “go rogue” and build your own custom LLM tooling wherever existing tools aren’t getting the job done.

InfoQ: Writing good tests is a key practice in software engineering. What are some lessons you learned when testing VoxelGPT?

Corso: Testing applications built with LLMs is challenging, and testing VoxelGPT was no different. LLMs are not nearly as predictable as traditional software components. However, we incorporated software engineering best practices into our workflows as much as possible through unit testing. 

We created a unit testing framework with 60 test cases, which covered the surface area of the types of queries we’d expect from usage. Each test consisted of a prompt, a FiftyOne Dataset, and the expected subset of the dataset resulting from converting the prompt to a query in FiftyOne’s domain-specific query language. We ran these tests each time we made a substantial change to the code or example set in order to prevent regression. 
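A test harness of the shape Corso describes, each case pairing a prompt and a dataset with the expected subset, could look roughly like the sketch below; the function names and the translator stub are hypothetical, standing in for the prompt-to-query pipeline.

```python
# Sketch of the regression-test shape described above: each case pairs a
# prompt and a dataset with the expected subset after translation. The
# translate_to_filter stub stands in for the LLM-backed pipeline.

def translate_to_filter(prompt):
    """Hypothetical stand-in for the prompt-to-query translation step."""
    if prompt == "images labeled cat":
        return lambda s: s["label"] == "cat"
    raise ValueError(f"no translation for: {prompt}")

TEST_CASES = [
    {
        "prompt": "images labeled cat",
        "dataset": [{"id": 1, "label": "cat"}, {"id": 2, "label": "dog"}],
        "expected_ids": [1],
    },
]

def run_case(case):
    flt = translate_to_filter(case["prompt"])
    got = [s["id"] for s in case["dataset"] if flt(s)]
    assert got == case["expected_ids"], (case["prompt"], got)

for case in TEST_CASES:
    run_case(case)  # re-run on every substantial change to catch regressions
```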

InfoQ: AI safety is a major concern. What were some of the safety issues you confronted and how did you solve them?

Corso: Yes, indeed AI safety is a key element to consider when building these systems. When building VoxelGPT, we were intentional about addressing potential safety issues in multiple ways.

Input validation: The first stop on a prompt’s journey through VoxelGPT is OpenAI’s moderation endpoint, so we ensure all queries passed through the system comply with OpenAI’s terms of service. Even beyond that, we run a custom “intent classification” routine to validate that the user’s query falls into one of the three allowed classes of query, is sensible, and is not out of scope.
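Such a two-stage gate, moderation first and then intent classification against a set of allowed query classes, might be structured as in the sketch below. Both checks are stubs and the class names are invented for illustration; the real system calls OpenAI's moderation endpoint and an LLM-backed classifier.

```python
# Sketch of a two-stage input gate: a moderation check, then an intent
# check that only admits the allowed classes of query. Both checks are
# stubs; the real pipeline calls external services for these steps.

ALLOWED_INTENTS = {"query", "documentation", "general"}  # hypothetical classes

def moderate(text):
    """Stub for the moderation endpoint: reject an example blocklist term."""
    return "attack" not in text.lower()

def classify_intent(text):
    """Stub intent classifier: the real system uses an LLM for this step."""
    return "query" if "show" in text.lower() else "out_of_scope"

def validate(text):
    if not moderate(text):
        return False, "failed moderation"
    intent = classify_intent(text)
    if intent not in ALLOWED_INTENTS:
        return False, f"intent {intent!r} not allowed"
    return True, intent

print(validate("show me images with small objects"))  # (True, 'query')
```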

Bias mitigation: Bias is another major concern with LLMs, which form potentially unwanted or non-inclusive connections between concepts, based on their training data. VoxelGPT is incentivized to infer as much as possible from the contextual backdrop of the user’s FiftyOne Dataset, so that it capitalizes on the base LLM’s inference capabilities without being mired in its biases.

Programmed limitations: We purposely limited VoxelGPT’s access to any functionality involving the permanent moving, writing, or deleting of data. We also prevent VoxelGPT from performing any computationally expensive operations. At the end of the day, the human working with VoxelGPT (and FiftyOne) is the only one with this power!

InfoQ: What was one of the most surprising things you learned when building VoxelGPT?

Corso: Building VoxelGPT was really quite fun.  LLMs capture a significant amount of generalizable language-based knowledge.  Their ability to leverage this generalizability in context-specific ways was very surprising.  What do I mean?  At the heart of FiftyOne is a domain-specific language (DSL), based in Python, for querying schema-less unstructured AI datasets.  This DSL enables FiftyOne users to “semantically slice” their data and model outputs to various ends like finding mistakes in annotations, comparing two models, and so on.  However, it takes some time to become an expert in that DSL.  It was wildly surprising that with a fixed and somewhat limited amount of context, we could provide sufficiently rich “training material” for the LLM to actually construct executable Python code in FiftyOne’s DSL.

The VoxelGPT source code is available on GitHub. There is also an online demo available on the FiftyOne website.



Aerospike invades the graph database space with a little help from Apache TinkerPop

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts



Real-time database vendor Aerospike is expanding its multi-model capabilities with the launch of the Aerospike Graph database.

Aerospike got its start back in 2009, providing a NoSQL database that in its early years focused on advertising applications. Over the past decade Aerospike has evolved to become a real-time database platform, useful for adtech, financial services and customer data platforms among other use cases.

In 2022, the company began its shift to offering what is known as a multi-model database, providing support for the JSON document model, which has become increasingly popular in recent years in part due to the success of document database vendor MongoDB.

Now Aerospike is expanding further with the general availability of Aerospike Graph, which brings graph data model capabilities.


A graph database is built around a data model structured to help users better understand relationships between different data points and content. There are multiple graph databases in the market today, including Neo4j and Amazon Neptune. Even Oracle has a graph database.

Graph databases are helpful for many different use cases, including fraud detection, an area where Aerospike customers have increasingly needed a solution.

“What we decided last year was setting the course and a strategy to build a multi-model, multicloud data platform really focused on real-time workloads,” Subbu Iyer, CEO of Aerospike, told VentureBeat. “Our platform is really suited well for high performance at scale, low latency and high availability, so that’s what pushed us into looking at graph to really go after this space.”

Aerospike Graph: Built with open-source Apache TinkerPop technology

Aerospike didn’t build its graph database entirely from scratch. 

Rather, it found a suitable open-source base to build upon in the Apache TinkerPop project. Apache TinkerPop is a graph computing framework that includes its own query language, known as Gremlin.

“When we found Apache TinkerPop, we realized it is a great solution, and we actually work with some of the original authors of TinkerPop,” Iyer said.

In effect, Aerospike has developed a commercially supported version of TinkerPop. Iyer explained that Aerospike Graph handles the separation of compute and storage, enabling either type of resource to scale independently as needed. The database is available both as an on-premises technology and in a database-as-a-service (DBaaS) model in the cloud.

Aerospike Graph architecture
Image credit: Aerospike

With the initial release of Aerospike Graph, the company will be supporting the Apache TinkerPop project’s Gremlin query language. In the future, Iyer said that Aerospike could support other query languages for graph databases. Today there are multiple approaches for graph queries, including the Cypher query language backed by graph database vendor Neo4j and the Property Graph Query Language (PGQL) backed by Oracle.

Is the next stop for Aerospike more AI?

As Aerospike continues to grow its platform, artificial intelligence (AI) capabilities are high on Iyer’s agenda.

The core Aerospike database platform is already being used by organizations as a feature store for AI pipelines, according to Iyer. There has also been a lot of effort in the industry overall in recent months to use existing data sources to help augment large language model (LLM) data for generative AI. That’s an area where vector databases are playing a role and it’s a space that Iyer is tracking closely.

“We’re looking at it very carefully,” Iyer said about vector databases. “It actually fits in very well with our multi-model strategy.”






AWS Launches Amazon S3 Dual-Layer Server-Side Encryption with Keys Stored in AWS KMS

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

AWS recently launched Amazon S3 dual-layer server-side encryption with keys stored in AWS Key Management Service (DSSE-KMS), a new encryption option that applies two layers of encryption to objects when they are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket.

The company designed DSSE-KMS to meet National Security Agency CNSSP 15 for FIPS compliance and Data-at-Rest Capability Package (DAR CP) Version 5.0 guidance for two layers of CNSA encryption. It will allow customers to use DSSE-KMS to fulfill regulatory requirements to apply multiple layers of encryption to their data.

With the launch of DSSE-KMS, Amazon S3 now offers four options for server-side encryption:

  • SSE-S3 – server-side encryption with Amazon S3 managed keys (the default)
  • SSE-KMS – server-side encryption with keys stored in AWS KMS
  • SSE-C – server-side encryption with customer-provided keys
  • DSSE-KMS – dual-layer server-side encryption with keys stored in AWS KMS

Source: https://aws.amazon.com/blogs/aws/new-amazon-s3-dual-layer-server-side-encryption-with-keys-stored-in-aws-key-management-service-dsse-kms/

DSSE-KMS allows users to indicate dual-layer server-side encryption (DSSE) when uploading or copying an object through a PUT or COPY request. Additionally, they can set up their S3 bucket so that DSSE is automatically applied to all new objects. By leveraging IAM and bucket policies, users can also enforce DSSE-KMS. Each encryption layer employs a distinct cryptographic implementation library with its own data encryption keys. Furthermore, DSSE-KMS helps protect sensitive data against the low probability of vulnerability in a single layer of cryptographic implementation.
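The benefit of two independent layers can be illustrated conceptually: even if one layer's implementation were compromised, the other still conceals the plaintext. The toy sketch below layers two keystream ciphers derived from distinct keys using only the Python standard library; it is not the actual AES-based DSSE-KMS construction, only an illustration of layering.

```python
import hashlib

# Toy illustration of dual-layer encryption: two independent keystream
# layers, each with its own key. This is NOT the real DSSE-KMS scheme
# (which uses AES via distinct cryptographic libraries); it only shows
# why two layers with distinct keys add defense in depth.

def keystream_xor(data, key):
    """XOR data with a SHA-256-derived keystream (symmetric operation)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def dual_encrypt(plaintext, key1, key2):
    return keystream_xor(keystream_xor(plaintext, key1), key2)  # layer 1, then 2

def dual_decrypt(ciphertext, key1, key2):
    return keystream_xor(keystream_xor(ciphertext, key2), key1)  # peel in reverse

msg = b"object contents"
ct = dual_encrypt(msg, b"key-layer-1", b"key-layer-2")
assert dual_decrypt(ct, b"key-layer-1", b"key-layer-2") == msg
# Removing only one layer does not recover the plaintext:
assert keystream_xor(ct, b"key-layer-2") != msg
```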

Users can leverage DSSE-KMS via the AWS CLI, AWS Management Console, or using the Amazon S3 REST API.

Regarding the DSSE-KMS, Rob Fuller, a Red Team tactics trainer, tweeted:

If you didn’t see this, please go have your cloud teams (or if that’s you) enable this today (or your next maintenance window).

In addition, Irshad A Buchh, a principal solutions architect at AWS, states in an AWS News blog post:

Amazon S3 is the only cloud object storage service where customers can apply two layers of encryption at the object level and control the data keys used for both layers. DSSE-KMS makes it easier for highly regulated customers to fulfill the rigorous security standards, such as the US Department of Defense (DoD) customers.

Meanwhile, in a LinkedIn post about DSSE-KMS by Joshua Bregler, a senior security manager at McKinsey Digital, Kieran Miller, a chief architect at Garantir, commented:

Dual encryption is great if the keys are stored separately and under control of different entities. What’s the threat model for this use case where both keys are stored in your AWS KMS account and all the encryption happens server-side? Is it likely that I would compromise one KMS key but not the other?

I suppose I could see value if one of the KMS keys is stored externally via AWS KMS External Key Store or in another account under a different entity’s control. Are these use cases supported?

Amazon S3 dual-layer server-side encryption with keys stored in AWS KMS (DSSE-KMS) is available today in all AWS Regions, and its pricing details are available on the Amazon S3 pricing page (Storage tab) and the AWS KMS pricing page.



National Pension Service Boosts Stock Position in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

National Pension Service grew its holdings in MongoDB, Inc. (NASDAQ:MDB) by 24.4% in the fourth quarter, according to its most recent disclosure with the Securities and Exchange Commission (SEC). The institutional investor owned 114,778 shares of the company’s stock after buying an additional 22,516 shares during the quarter. National Pension Service owned 0.17% of MongoDB worth $22,593,000 as of its most recent SEC filing.
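The filing's figures are internally consistent, as a quick arithmetic check shows (all numbers taken from the disclosure above):

```python
# Quick consistency check of the filing's figures.
shares_after = 114_778
shares_added = 22_516
shares_before = shares_after - shares_added   # 92,262 shares held previously

growth = shares_added / shares_before         # reported as a 24.4% increase
value = 22_593_000                            # reported value of the stake
implied_price = value / shares_after          # implied Q4-end price per share

print(f"{growth:.1%}")          # 24.4%
print(f"${implied_price:.2f}")  # $196.84
```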

A number of other institutional investors and hedge funds have also made changes to their positions in MDB. Truist Financial Corp boosted its stake in MongoDB by 25.2% during the fourth quarter. Truist Financial Corp now owns 4,982 shares of the company’s stock worth $981,000 after buying an additional 1,002 shares during the last quarter. Public Employees Retirement System of Ohio boosted its stake in MongoDB by 5.3% during the fourth quarter. Public Employees Retirement System of Ohio now owns 35,039 shares of the company’s stock worth $6,897,000 after buying an additional 1,759 shares during the last quarter. CI Private Wealth LLC acquired a new position in MongoDB during the fourth quarter worth $274,000. Utah Retirement Systems boosted its stake in MongoDB by 0.9% during the fourth quarter. Utah Retirement Systems now owns 11,725 shares of the company’s stock worth $2,308,000 after buying an additional 100 shares during the last quarter. Finally, Captrust Financial Advisors acquired a new position in shares of MongoDB in the 4th quarter valued at $3,648,000. 84.86% of the stock is currently owned by hedge funds and other institutional investors.

MongoDB Stock Down 1.4%

NASDAQ:MDB opened at $379.90 on Tuesday. The firm has a market capitalization of $26.61 billion, a P/E ratio of -81.35 and a beta of 1.04. The stock’s fifty day simple moving average is $282.72 and its 200-day simple moving average is $232.90. The company has a quick ratio of 4.19, a current ratio of 4.19 and a debt-to-equity ratio of 1.44. MongoDB, Inc. has a 12-month low of $135.15 and a 12-month high of $398.89.

MongoDB (NASDAQ:MDBGet Rating) last issued its earnings results on Thursday, June 1st. The company reported $0.56 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.18 by $0.38. MongoDB had a negative net margin of 23.58% and a negative return on equity of 43.25%. The business had revenue of $368.28 million during the quarter, compared to the consensus estimate of $347.77 million. During the same period last year, the business posted ($1.15) EPS. MongoDB’s revenue was up 29.0% on a year-over-year basis. Equities research analysts expect that MongoDB, Inc. will post -2.85 earnings per share for the current year.

Wall Street Analysts Forecast Growth

Several brokerages have recently commented on MDB. Needham & Company LLC upped their price target on MongoDB from $250.00 to $430.00 in a research note on Friday, June 2nd. Robert W. Baird upped their price target on MongoDB from $230.00 to $290.00 in a research note on Wednesday, May 31st. William Blair reaffirmed an “outperform” rating on shares of MongoDB in a research note on Friday, June 2nd. The Goldman Sachs Group upped their price target on MongoDB from $280.00 to $420.00 in a research note on Friday, June 2nd. Finally, Stifel Nicolaus increased their target price on MongoDB from $240.00 to $375.00 in a research note on Friday, June 2nd. One research analyst has rated the stock with a sell rating, two have given a hold rating and twenty-one have given a buy rating to the stock. According to data from MarketBeat, the stock currently has an average rating of “Moderate Buy” and an average target price of $328.35.

Insiders Place Their Bets

In other news, CAO Thomas Bull sold 605 shares of the firm’s stock in a transaction that occurred on Monday, April 3rd. The shares were sold at an average price of $228.34, for a total transaction of $138,145.70. Following the transaction, the chief accounting officer now directly owns 17,706 shares of the company’s stock, valued at $4,042,988.04. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which can be accessed through the SEC website. In other MongoDB news, CRO Cedric Pech sold 720 shares of the business’s stock in a transaction on Monday, April 3rd. The shares were sold at an average price of $228.33, for a total transaction of $164,397.60. Following the sale, the executive now owns 53,050 shares of the company’s stock, valued at $12,112,906.50. The transaction was disclosed in a legal filing with the SEC, which is accessible through this link. Also, CAO Thomas Bull sold 605 shares of the business’s stock in a transaction on Monday, April 3rd. The stock was sold at an average price of $228.34, for a total transaction of $138,145.70. Following the sale, the chief accounting officer now directly owns 17,706 shares in the company, valued at approximately $4,042,988.04. The disclosure for this sale can be found here. Insiders sold 106,682 shares of company stock worth $26,516,196 over the last three months. Insiders own 4.80% of the company’s stock.

MongoDB Profile

(Get Rating)

MongoDB, Inc provides general purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premise, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Articles

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)



Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.

Article originally posted on mongodb google news. Visit mongodb google news

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


National Pension Service Boosts Stock Position in MongoDB, Inc. (NASDAQ:MDB)

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

National Pension Service grew its holdings in MongoDB, Inc. (NASDAQ:MDB) by 24.4% in the fourth quarter, according to its most recent disclosure with the Securities and Exchange Commission (SEC). The institutional investor owned 114,778 shares of the company’s stock after buying an additional 22,516 shares during the quarter. National Pension Service owned 0.17% of MongoDB worth $22,593,000 as of its most recent SEC filing.

A number of other institutional investors and hedge funds have also made changes to their positions in MDB. Truist Financial Corp boosted its stake in MongoDB by 25.2% during the fourth quarter. Truist Financial Corp now owns 4,982 shares of the company’s stock worth $981,000 after buying an additional 1,002 shares during the last quarter. Public Employees Retirement System of Ohio boosted its stake in MongoDB by 5.3% during the fourth quarter. Public Employees Retirement System of Ohio now owns 35,039 shares of the company’s stock worth $6,897,000 after buying an additional 1,759 shares during the last quarter. CI Private Wealth LLC acquired a new position in MongoDB during the fourth quarter worth $274,000. Utah Retirement Systems boosted its stake in MongoDB by 0.9% during the fourth quarter. Utah Retirement Systems now owns 11,725 shares of the company’s stock worth $2,308,000 after buying an additional 100 shares during the last quarter. Finally, Captrust Financial Advisors acquired a new position in shares of MongoDB in the 4th quarter valued at $3,648,000. 84.86% of the stock is currently owned by hedge funds and other institutional investors.

MongoDB Stock Down 1.4%

NASDAQ:MDB opened at $379.90 on Tuesday. The firm has a market capitalization of $26.61 billion, a P/E ratio of -81.35 and a beta of 1.04. The stock’s 50-day simple moving average is $282.72 and its 200-day simple moving average is $232.90. The company has a quick ratio of 4.19, a current ratio of 4.19 and a debt-to-equity ratio of 1.44. MongoDB, Inc. has a 12-month low of $135.15 and a 12-month high of $398.89.

MongoDB (NASDAQ:MDB) last issued its earnings results on Thursday, June 1st. The company reported $0.56 earnings per share (EPS) for the quarter, beating the consensus estimate of $0.18 by $0.38. MongoDB had a negative net margin of 23.58% and a negative return on equity of 43.25%. The business had revenue of $368.28 million during the quarter, compared to the consensus estimate of $347.77 million. During the same period last year, the business posted ($1.15) EPS. MongoDB’s revenue was up 29.0% on a year-over-year basis. Equities research analysts expect that MongoDB, Inc. will post ($2.85) earnings per share for the current year.

Wall Street Analysts Forecast Growth

Several brokerages have recently commented on MDB. Needham & Company LLC upped their price target on MongoDB from $250.00 to $430.00 in a research note on Friday, June 2nd. Robert W. Baird upped their price target on MongoDB from $230.00 to $290.00 in a research note on Wednesday, May 31st. William Blair reaffirmed an “outperform” rating on shares of MongoDB in a research note on Friday, June 2nd. The Goldman Sachs Group upped their price target on MongoDB from $280.00 to $420.00 in a research note on Friday, June 2nd. Finally, Stifel Nicolaus increased their target price on MongoDB from $240.00 to $375.00 in a research note on Friday, June 2nd. One research analyst has rated the stock with a sell rating, two have given a hold rating and twenty-one have given a buy rating to the stock. According to data from MarketBeat, the stock currently has an average rating of “Moderate Buy” and an average target price of $328.35.

Insiders Place Their Bets

In other news, CAO Thomas Bull sold 605 shares of the firm’s stock in a transaction that occurred on Monday, April 3rd. The shares were sold at an average price of $228.34, for a total transaction of $138,145.70. Following the transaction, the chief accounting officer now directly owns 17,706 shares of the company’s stock, valued at $4,042,988.04. The sale was disclosed in a legal filing with the Securities & Exchange Commission, which can be accessed through the SEC website. Also, CRO Cedric Pech sold 720 shares of the business’s stock in a transaction on Monday, April 3rd. The shares were sold at an average price of $228.33, for a total transaction of $164,397.60. Following the sale, the executive now owns 53,050 shares of the company’s stock, valued at $12,112,906.50. This transaction was likewise disclosed in a filing with the SEC. Insiders sold 106,682 shares of company stock worth $26,516,196 over the last three months. Insiders own 4.80% of the company’s stock.

MongoDB Profile


MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.

Featured Articles

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.


Article originally posted on mongodb google news. Visit mongodb google news

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



What to expect during MongoDB.local NYC: Join theCUBE June 22 – SiliconANGLE

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

When it’s time to cut cloud costs, businesses make tough decisions about technology investments. In a post-earnings call, Dev Ittycheria, president and chief executive officer of MongoDB Inc., noted that customers were continuing to analyze these investments to determine the must-haves.

Based on the blowout earnings quarter reported by MongoDB at the start of this month, the document-oriented database provider appears to have been placed in the “must-have” category for a number of enterprises.

“They’re the proxy for developers, because the MongoDB database also uses a lot of cloud compute,” said industry analyst John Furrier, during a recent episode of theCUBE Podcast. “If MongoDB is doing well, that’s the canary in the coal mine that developers are continuing to drive the innovation cycle.”

TheCUBE, SiliconANGLE Media’s livestreaming studio, will cover the MongoDB .local NYC event on June 22, providing the latest news and insights through interviews with company executives, customers and industry analysts. (* Disclosure below)

Multicloud and Atlas

MongoDB’s recent popularity stems from its Atlas offering, a cloud-based version of the database. Atlas has found traction in key vertical enterprise sectors such as financial services. The banking software company Temenos AG recently announced that its cloud platform running on Atlas had processed 200 million embedded finance loans and 100 million retail accounts at a record 150,000 transactions per second, a significant increase over a prior benchmark using a relational database.

“IDC says that 715 million applications will be built over the next two to three years,” said Ittycheria, in a recent interview with theCUBE. “To put that number in perspective, that’s more apps that will be built in the next three to four years than were built in the last 40. People need platforms like MongoDB to build the next generation of applications.”

Atlas has also attracted interest from businesses looking for tools that will help developers deal with a multicloud world. Verizon Communications Inc. uses MongoDB’s technology to provide an abstraction layer that enables easier ways to manipulate data and interact with cross-platform APIs.

The two above-mentioned examples highlight MongoDB’s expanding role in enterprise IT, driven by developer interest, cloud data and the rise of new business opportunities through AI.

“MongoDB is becoming an essential part of the modern IT stack for data,” Furrier said. “They are attracting customers that include the hot new AI companies and continue to focus on developer ease of use and productivity. At their event, I expect them to focus on their key cloud partners around supercloud-like capabilities and AI-focused new features and capabilities.”

TheCUBE event livestream

Don’t miss theCUBE’s coverage of the MongoDB .local NYC event on June 22. Plus, you can watch theCUBE’s event coverage on-demand after the live event.

How to watch theCUBE interviews

We offer you various ways to watch theCUBE’s coverage of the MongoDB .local NYC event, including theCUBE’s dedicated website and YouTube channel. You can also get all the coverage from this year’s events on SiliconANGLE.

TheCUBE Insights podcast

SiliconANGLE also has podcasts of archived interview sessions, available on iTunes, Stitcher and Spotify, which you can enjoy while on the go.

SiliconANGLE also has analyst deep dives in our Breaking Analysis podcast, available on iTunes, Stitcher and Spotify.

Guests

Stay tuned for theCUBE’s complete guest list during the MongoDB .local NYC event here.

(* Disclosure: TheCUBE is a paid media partner for the MongoDB .local NYC event. MongoDB Inc. and other sponsors of theCUBE’s event coverage do not have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE


Article originally posted on mongodb google news. Visit mongodb google news



AWS MongoDB: A Comprehensive Guide – Nigeria News June 19, 2023 – NNN

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Introduction to AWS MongoDB

MongoDB is a popular document-based NoSQL database platform widely used in modern application development. AWS supports MongoDB in various capacities, offering services such as Amazon DocumentDB, Amazon EC2, and Amazon EBS that give users greater flexibility and scalability. This article provides a comprehensive guide to AWS MongoDB, including its architecture, benefits, use cases, and potential limitations.

MongoDB Architecture on AWS

The architecture of a MongoDB database on AWS comprises database nodes, application servers, load balancers, and storage volumes. Depending on the needs of an application, different strategies can be used to configure and deploy MongoDB on AWS. One such strategy is deploying MongoDB on Amazon EC2 instances, either as a single-node database or as a replica set. In a single-node deployment, the MongoDB process runs on a single EC2 instance, while in a replica set, multiple MongoDB instances are deployed across multiple EC2 instances. Additionally, a MongoDB deployment on EC2 can benefit from Auto Scaling, which automatically adds or removes EC2 instances based on traffic demand.
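For the EC2 replica-set strategy, the essential moving parts can be sketched in a minimal mongod configuration file. This is an illustrative sketch, not a production setup: the replica-set name, bind address, and data path below are placeholders.

```yaml
# /etc/mongod.conf (sketch) -- identical on each EC2 instance in the set
replication:
  replSetName: rs0          # all members must share this replica-set name
net:
  port: 27017
  bindIp: 0.0.0.0           # in practice, restrict access to the VPC CIDR
storage:
  dbPath: /data/db          # typically an attached EBS volume
```

With each mongod started against a file like this, the set is initiated once from any member via mongosh (`rs.initiate()` with the member host list), after which the set elects a primary and begins replicating.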

Another way of running MongoDB workloads on AWS is through Amazon DocumentDB, a fully managed database service with a scaling, self-healing architecture that offers compatibility with the MongoDB 3.6 API. Amazon DocumentDB automatically deploys and scales its replica instances, making clusters easier to provision, configure, and manage. Amazon DocumentDB also provides built-in security features such as encryption at rest and in transit, as well as secure connectivity to applications.

Advantages of Using AWS MongoDB

AWS provides many benefits when using MongoDB, including scalability, security, reliability, and cost efficiency.

One of the most significant advantages of deploying MongoDB on AWS is scalability. AWS’s automatic scaling features can scale the infrastructure up and down in response to changing workloads, so a deployment can grow to meet the demands of even the most complex applications. Additionally, AWS provides a broad range of server hardware offering fast performance and high availability.

Another advantage of using AWS MongoDB is security. The AWS security model applies equally to MongoDB deployed on AWS, providing a secure platform. Amazon DocumentDB automatically encrypts data at rest, and encryption in transit can be enabled using mechanisms such as TLS. AWS provides additional security features such as network isolation, IAM roles, and VPC access controls.

Reliability is another advantage of using AWS MongoDB. AWS regions offer multiple Availability Zones that provide resilient infrastructure, and automatic scaling helps keep an application available while preventing over-provisioning. The AWS-managed services, including Amazon DocumentDB, are fully managed, offering an easy-to-use platform that doesn’t require a significant investment of time and resources.

Cost-efficiency is another advantage of using AWS MongoDB. AWS provides a flexible pricing model, allowing you to pay for the resources you actually consume rather than for a larger block of capacity, and its on-demand infrastructure is billed by the hour.

Use Cases of AWS MongoDB

AWS MongoDB has many use cases, including e-commerce, IoT, gaming, and social networking. MongoDB is useful in e-commerce applications because it provides capabilities such as real-time inventory updates, customer insights, and matching products with customers. IoT applications make use of MongoDB’s ability to store documents representing sensor data. Game developers use MongoDB for player profiles and game event storage and analysis. Social networking applications use MongoDB for storing data streams, user-generated content, and messages.

Potential Limitations of AWS MongoDB

One potential limitation of self-managing MongoDB on AWS (for example, on EC2) is increased management complexity, because users are responsible for the infrastructure. Using AWS also requires users to manage their networking and security policies effectively, and to carefully consider the sizing, performance tuning, and scaling of their database.

Another potential limitation of using AWS MongoDB is lock-in. Developers using AWS’s managed services, such as Amazon DocumentDB, may find it challenging to migrate their application from AWS to another cloud platform or on premises.

Conclusion

AWS MongoDB is a versatile and scalable NoSQL database platform that offers several benefits, including security, reliability, flexibility, and cost-efficiency. AWS provides different services that support MongoDB, including Amazon DocumentDB, Amazon EC2, and Auto Scaling. AWS MongoDB has a wide range of use cases spanning different industries, including social networking, gaming, and e-commerce. However, using AWS MongoDB comes with potential limitations, such as management complexity and the lock-in effect. Before adopting AWS MongoDB, users should weigh these factors carefully to ensure the smooth running of their application.



Article: Exploring Java Records Beyond Data Transfer Objects

MMS Founder
MMS Otavio Santana

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • Records are classes that act as transparent carriers for immutable data and can be thought of as nominal tuples.
  • Records can help you write more predictable code, reduce complexity, and improve the overall quality of your Java applications.
  • Records can be applied with Domain-Driven Design (DDD) principles to write immutable classes, and create more robust and maintainable code.
  • The Jakarta Persistence specification does not support immutability for relational databases, but immutability can be accomplished with NoSQL databases.
  • You can take advantage of immutable classes in situations such as concurrency cases, CQRS, event-driven architecture, and much more.

If you are already familiar with the Java release cadence and Java 17, the latest LTS version, you can explore the Java Record feature, which enables concise immutable classes.

But the question remains: How can this new feature be used in my project code? How do I take advantage of it to make a clean and better design? This tutorial will provide some examples going beyond the classic data transfer objects (DTOs).

What and Why Java Records?

First things first: what is a Java Record? You can think of Records as classes that act as transparent carriers for immutable data. Records were introduced as a preview feature in Java 14 (JEP 359).

After a second preview was released in Java 15 (JEP 384), the final version was released with Java 16 (JEP 395). Records can also be thought of as nominal tuples.

As I previously mentioned, you can create immutable classes with less code. Consider a Person class containing three fields – name, birthday and city where this person was born – with the condition that we cannot change the data.

Therefore, let’s create an immutable class. We’ll follow the familiar JavaBean pattern, defining the class as final along with its respective fields:

public final class Person {

    private final String name;

    private final LocalDate birthday;

    private final String city;

    public Person(String name, LocalDate birthday, String city) {
        this.name = name;
        this.birthday = birthday;
        this.city = city;
    }


    public String name() {
        return name;
    }

    public LocalDate birthday() {
        return birthday;
    }

    public String city() {
        return city;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        Person person = (Person) o;
        return Objects.equals(name, person.name)
                && Objects.equals(birthday, person.birthday)
                && Objects.equals(city, person.city);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, birthday, city);
    }

    @Override
    public String toString() {
        return "Person{" +
                "name='" + name + '\'' +
                ", birthday=" + birthday +
                ", city='" + city + '\'' +
                '}';
    }
}

In the above example, we’ve created the class with final fields and accessor methods, though note that we did not follow the JavaBean convention of prefixing those methods with get.

Every immutable class follows this same path: define the class as final, then the fields, then the constructor. Since the pattern is so repetitive, can we reduce the boilerplate? The answer is yes, thanks to the Record construct:

public record Person(String name, LocalDate birthday, String city) {

}

As you can see, we reduced dozens of lines to a single one: we replaced the class keyword with the record keyword, and let the magic of simplicity happen.

It is essential to highlight that a record is still a class. Thus, it retains the capabilities of a regular Java class, such as declaring methods and implementing interfaces. Having said that, let’s move to the next section to see how to use the Record construct.
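A quick, self-contained sketch (the wrapper class name RecordDemo is illustrative) verifies what that single line generates for us: accessors without the get prefix, value-based equals() and hashCode(), and a readable toString():

```java
import java.time.LocalDate;

public class RecordDemo {

    // The same Person record from above, declared as a nested record.
    record Person(String name, LocalDate birthday, String city) {}

    public static void main(String[] args) {
        Person a = new Person("Ada", LocalDate.of(1815, 12, 10), "London");
        Person b = new Person("Ada", LocalDate.of(1815, 12, 10), "London");

        System.out.println(a.name());     // generated accessor, no "get" prefix: Ada
        System.out.println(a.equals(b));  // value-based equality: true
        System.out.println(a);            // Person[name=Ada, birthday=1815-12-10, city=London]
    }
}
```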

Data Transfer Objects (DTOs)

This is the first and by far the most popular use case you will find online, so we won’t dwell on it here; it is a good example of a Record, but not the only one.

It does not matter whether you use Spring, MicroProfile, or Jakarta EE: sample cases exist for all of them.

Value Objects or Immutable Types

In Domain-Driven Design (DDD), Value Objects represent a concept from a problem domain or context, such as a Money or Email type. Those classes are immutable by definition, and since records are immutable too, they are a natural fit.

In our first example, we’ll create an Email type whose only responsibility is to carry a validated value:

public record Email (String value) {
}

As with any Value Object, you can add methods and behavior, but the result should be a different instance. Imagine we’ll create a Money type, and we want to create an add operation. Thus, we’ll add the method to check if those are the same currency and then create a new instance as a result:

public record Money(Currency currency, BigDecimal value) {

    Money add(Money money) {
        Objects.requireNonNull(money, "Money is required");
        if (currency.equals(money.currency)) {
            BigDecimal result = this.value.add(money.value);
            return new Money(currency, result);
        }
        throw new IllegalStateException("You cannot sum money with different currencies");
    }
}

The Money Record is just an illustration, mainly because developers can use the well-known Joda-Money library in production. The point is that when you need to create a Value Object or an immutable type, a Record fits perfectly.
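To see the Value Object in action, here is a minimal, runnable sketch of the Money record above (the wrapper class name, currency, and amounts are arbitrary): every operation returns a fresh instance, so the original value never changes.

```java
import java.math.BigDecimal;
import java.util.Currency;
import java.util.Objects;

public class MoneyDemo {

    record Money(Currency currency, BigDecimal value) {
        Money add(Money money) {
            Objects.requireNonNull(money, "Money is required");
            if (!currency.equals(money.currency)) {
                throw new IllegalStateException("You cannot sum money with different currencies");
            }
            // Returns a new instance; the receiver is left untouched.
            return new Money(currency, value.add(money.value));
        }
    }

    public static void main(String[] args) {
        Currency usd = Currency.getInstance("USD");
        Money ten = new Money(usd, new BigDecimal("10.50"));
        Money sum = ten.add(new Money(usd, new BigDecimal("4.50")));
        System.out.println(sum.value()); // 15.00
    }
}
```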

Immutable Entities

But wait, did you say immutable entities? Is that possible? It is unusual, but it happens, such as when an entity captures a historical, transitional point in its life cycle.

Can an entity be immutable? Check the definition of an entity in Eric Evans’ book, Domain-Driven Design: Tackling Complexity in the Heart of Software:

An entity is anything that has continuity through a life cycle and distinctions independent of attributes essential to the application’s user.

Being an entity is not about mutability; it is about identity and continuity in the domain. Thus, we can have immutable entities, though, again, it is unusual. There is a discussion related to this question on Stack Overflow.

Let’s create an entity named Book with an ID, a title, and a release year. What happens if you want to edit a book? We don’t edit it; instead, we create a new edition. Therefore, we’ll also add an edition field.

public record Book(String id, String title, Year release, int edition) {}

This is OK, but we also need validation; otherwise, this book could hold inconsistent data. It does not make sense to have null values for the id, title, or release, or a non-positive edition. With a Record, we can use the compact constructor and place the validations in it:

public Book {
    Objects.requireNonNull(id, "id is required");
    Objects.requireNonNull(title, "title is required");
    Objects.requireNonNull(release, "release is required");
    if (edition < 1) {
        throw new IllegalArgumentException("Edition must be positive");
    }
}
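Putting the pieces together (the wrapper class name is illustrative), the compact constructor sits inside the record declaration and runs before the fields are assigned, so an invalid Book never comes into existence:

```java
import java.time.Year;
import java.util.Objects;

public class BookValidationDemo {

    record Book(String id, String title, Year release, int edition) {
        // Compact constructor: validation runs before the fields are assigned.
        Book {
            Objects.requireNonNull(id, "id is required");
            Objects.requireNonNull(title, "title is required");
            Objects.requireNonNull(release, "release is required");
            if (edition < 1) {
                throw new IllegalArgumentException("Edition must be positive");
            }
        }
    }

    public static void main(String[] args) {
        try {
            new Book("id", "Effective Java", Year.of(2001), 0); // invalid edition
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // Edition must be positive
        }
    }
}
```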

We can override equals(), hashCode() and toString() methods if we wish. Indeed, let’s override the equals() and hashCode() contracts to operate on the id field:

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        Book book = (Book) o;
        return Objects.equals(id, book.id);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(id);
    }

To make this class easier to create, especially for more complex objects, you can provide either a factory method or a builder. The code below shows a builder in use with the Book Record:

Book book = Book.builder().id("id").title("Effective Java").release(Year.of(2001)).build();
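Note that records do not generate builders, so the builder() call above implies one written by hand. A minimal sketch of such a hand-written builder (names are illustrative) could look like this:

```java
import java.time.Year;

public class BookBuilderDemo {

    record Book(String id, String title, Year release, int edition) {

        // Records don't generate builders, so we provide one by hand.
        static BookBuilder builder() {
            return new BookBuilder();
        }

        static class BookBuilder {
            private String id;
            private String title;
            private Year release;
            private int edition = 1; // sensible default for a new book

            BookBuilder id(String id) { this.id = id; return this; }
            BookBuilder title(String title) { this.title = title; return this; }
            BookBuilder release(Year release) { this.release = release; return this; }
            BookBuilder edition(int edition) { this.edition = edition; return this; }

            Book build() {
                // The record's canonical constructor runs here.
                return new Book(id, title, release, edition);
            }
        }
    }

    public static void main(String[] args) {
        Book book = Book.builder().id("id").title("Effective Java").release(Year.of(2001)).build();
        System.out.println(book.title()); // Effective Java
    }
}
```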

At the end of our immutable entity journey, we’ll also include a change method for when we need a new edition of a book. In the next step, we’ll create the second edition of Joshua Bloch’s well-known book, Effective Java. We cannot change the fact that there was once a first edition of this book; that is the historical part of our library business.

Book first = Book.builder().id("id").title("Effective Java").release(Year.of(2001)).build();
Book second = first.newEdition("id-2", Year.of(2009));

Currently, the Jakarta Persistence specification cannot support immutability for compatibility reasons, but we can explore it with NoSQL APIs such as Eclipse JNoSQL and Spring Data MongoDB.

We covered many of those topics. Therefore, let’s move on to another design pattern, this time to represent state in our code design.

State Implementation

There are circumstances where we need to implement a flow or state machine in code. We’ll apply the State design pattern in an e-commerce context, where an order must maintain its chronological flow: we want to know when it was requested, delivered, and finally received by the user.

The first step is to create an interface. For simplicity, we’ll use a String to represent a product, though a real system would use an entire object for it:

public interface Order {

    Order next();
    List<String> products();
}

With this interface ready for use, let’s create implementations that follow the flow and return the products. We want to prevent any change to the products, so we’ll override the products() method of each Record to produce a read-only list.

public record Ordered(List products) implements Order {

    public Ordered {
        Objects.requireNonNull(products, "products is required");
    }
    @Override
    public Order next() {
        return new Delivered(products);
    }

    @Override
    public List products() {
        return Collections.unmodifiableList(products);
    }
}

public record Delivered(List<String> products) implements Order {

    public Delivered {
        Objects.requireNonNull(products, "products is required");
    }

    @Override
    public Order next() {
        return new Received(products);
    }

    @Override
    public List<String> products() {
        return Collections.unmodifiableList(products);
    }
}


public record Received(List<String> products) implements Order {

    public Received {
        Objects.requireNonNull(products, "products is required");
    }

    @Override
    public Order next() {
        throw new IllegalStateException("We finished our journey here");
    }

    @Override
    public List<String> products() {
        return Collections.unmodifiableList(products);
    }
}
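One subtlety worth noting about Collections.unmodifiableList, sketched below: it returns a read-only view, not a copy, so changes to the underlying list still show through. List.copyOf would give a true snapshot instead. The class and method names here are illustrative, not from the article:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnmodifiableViewDemo {

    // Wrap a list in a read-only view, as the record accessors above do.
    static List<String> readOnlyView(List<String> backing) {
        return Collections.unmodifiableList(backing);
    }

    public static void main(String[] args) {
        List<String> backing = new ArrayList<>(List.of("Banana"));
        List<String> view = readOnlyView(backing);

        // Callers cannot mutate through the view...
        try {
            view.add("Apple");
        } catch (UnsupportedOperationException expected) {
            System.out.println("view is read-only");
        }

        // ...but it is only a view: changes to the backing list show through.
        backing.add("Apple");
        System.out.println(view); // prints [Banana, Apple]
    }
}
```

In our case the record's list is never mutated after construction, so the view is safe; for defensive copying at construction time, List.copyOf in the compact constructor is another option.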

Now that we have the states implemented, let's change the Order interface. First, we'll create a static method to start an order. Then, to guarantee that no intruder state can be introduced from outside, we'll restrict the implementations to the ones we have, using the sealed interface feature.

public sealed interface Order permits Ordered, Delivered, Received {

    static Order newOrder(List<String> products) {
        return new Ordered(products);
    }

    Order next();
    List<String> products();
}

We made it! Now let's test the code with a list of products. As you can see, the flow works, exploring the capabilities of records.

List<String> products = List.of("Banana");
Order order = Order.newOrder(products);
Order delivered = order.next();
Order received = delivered.next();
Assertions.assertThrows(IllegalStateException.class, received::next);
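Sealing the interface also pays off at the call site. The sketch below (assuming Java 21+ pattern matching for switch) shows that a switch over the permitted states is exhaustive, so no default branch is needed; the compact state records here are a simplified stand-in for the ones above:

```java
import java.util.List;

public class StateSwitchDemo {

    // Simplified stand-ins for the article's sealed hierarchy.
    sealed interface Order permits Ordered, Delivered, Received {
        List<String> products();
    }
    record Ordered(List<String> products) implements Order {}
    record Delivered(List<String> products) implements Order {}
    record Received(List<String> products) implements Order {}

    // Exhaustive switch: the compiler knows every permitted implementation,
    // so adding a new state would turn into a compile error here.
    static String describe(Order order) {
        return switch (order) {
            case Ordered o -> "ordered " + o.products();
            case Delivered d -> "delivered " + d.products();
            case Received r -> "received " + r.products();
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Ordered(List.of("Banana")))); // prints "ordered [Banana]"
    }
}
```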

The State pattern, combined with immutable classes, lets you model transactional moments, such as an entity's lifecycle, or generate an event in an event-driven architecture.

Conclusion

That is it! In this article, we discussed the power of a Java record. It is essential to mention that a record is a Java class with several benefits, such as creating methods, validating in the constructor, and overriding the accessors, hashCode(), and toString() methods.

The record feature can go beyond a DTO. In this article, we explored a few other uses, such as value object, immutable entity, and state.

Imagine where you can take advantage of immutable classes: concurrency, CQRS, event-driven architectures, and much more. The record feature can take your code design to infinity and beyond! I hope you enjoyed this article, and see you at a social media distance.
