Enhancing Java Concurrency with Scoped Values in JDK 21 (Preview)

MMS Founder
MMS Shaaf Syed

Article originally posted on InfoQ. Visit InfoQ

Scoped Values is now in JDK 21 as a preview feature. Alongside Virtual Threads and Structured Concurrency, it adds to the growing list of enhancements to Java coming out of Project Loom.

Scoped values can be accessed from anywhere, provided that a dynamic scope has been created and the desired value bound into it. Imagine a call chain of methods in which a faraway method needs to use some data. Without scoped values, the data would need to be passed down the call chain as a parameter, with the risk that any method along the way might change it before the callee is reached.

A scoped value behaves like an additional parameter for every method in the sequence of calls, but none of the methods actually declare this parameter. Only the methods that have access to the ScopedValue object can retrieve its value, which represents the data being passed. As stated in JEP 446, Scoped Values (Preview):

Scoped Values improve safety, immutability, encapsulation, and efficient access within and across threads

Applications that use transactions, security principals, and other forms of shared context in a multithreaded environment will be able to benefit from them. However, they are not intended to replace the ThreadLocal variables introduced in Java 1.2.

The difference between the two comes down to mutability and, in some cases, safety. While a thread-local variable allows values to be set and changed at any time, scoped values take a different approach to sharing data: a scoped value is immutable and bound only for the lifetime of its scope.

A ThreadLocal variable is effectively a global variable, usually declared as a static field, with accessor methods. This makes the variable mutable, as the setter can change its value. With the InheritableThreadLocal variant, every new thread starts with the value already present in the spawning thread, but it can change that value without affecting the value in the thread that spawned it.
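
A minimal sketch of that behavior, using InheritableThreadLocal (the variant that passes values to child threads):

public class ThreadLocalExample {
	private static final InheritableThreadLocal<String> USER = new InheritableThreadLocal<>();

	public static void main(String[] args) {
		USER.set("alice"); // mutable: any code with access can overwrite it
		new Thread(() -> {
			System.out.println(USER.get()); // starts with "alice", inherited from the spawning thread
			USER.set("bob");                // changes only this thread's copy
		}).start();
	}
}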

However, it also poses some challenges, such as the ThreadLocal variable being global mutable state. This can make tracing and debugging difficult in some cases, as a thread-local can be modified a long way from where it was created (sometimes referred to as “spooky action at a distance”, a reference to Einstein’s remark about quantum mechanics). A further, more minor issue is a larger memory footprint, as each thread maintains its own copy of the value.

Scoped Values, on the other hand, introduce a different way to share information between components of an application by limiting how the data is shared, ensuring it is immutable and has a clear and well-defined lifetime. A scoped value is created using the factory method newInstance() on the ScopedValue class, and a value is bound to it within the context of a Runnable, Callable, or Supplier call. The following class illustrates an example with a Runnable:

public class WithUserSession {
	// Creates a new ScopedValue
	private static final ScopedValue<String> USER_ID = ScopedValue.newInstance();

	public void processWithUser(String sessionUserId) {
		// sessionUserId is bound to the ScopedValue USER_ID for the execution of the
		// runWhere method; runWhere invokes the processRequest method.
		ScopedValue.runWhere(USER_ID, sessionUserId, () -> processRequest());
	}
	// ...
}

In the above class, the first line creates a scoped value called USER_ID, and the method processWithUser(String sessionUserId) invokes the processRequest() method within the scope via the runWhere() method. The value is valid within that method and anywhere else invoked from it. The lifespan of the scoped value is well-bounded, eliminating the risk of memory or information leaks.
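
For completeness, a hypothetical processRequest() implementation (not shown in the article) might read the bound value directly:

	private void processRequest() {
		// Reads the value bound by runWhere; throws NoSuchElementException if unbound
		String userId = USER_ID.get();
		// ... handle the request on behalf of userId
	}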


There is no set() method in ScopedValue. This ensures the value is immutable and read-only for the thread. However, the API also covers cases where the caller requires a result after the callee has finished processing: the callWhere() method binds a value for the current thread and performs a returning-value operation. In the runWhere() example above, the processRequest() method was called with no return value expected. In the following example, the value returned from the squared() method is stored in the multiplied variable. callWhere() expects a Callable, whereas runWhere() expects a Runnable.

public class Multiplier {
	// Creates a new ScopedValue
	private static final ScopedValue<BigInteger> MULTIPLIER = ScopedValue.newInstance();

	public void multiply(BigInteger number) throws Exception {
		// invokes the squared method and saves the result in the variable multiplied
		var multiplied = ScopedValue.callWhere(MULTIPLIER, number, () -> squared());
	}
	// …
}

A scoped value is bound on the current thread for the duration of a scope. The binding itself cannot be changed while its scope is executing; however, a nested scope may rebind the value for the execution of a new method, and once that nested execution completes, the original binding becomes visible again. This is different from a ThreadLocal, where the binding can be changed at any time during execution by using the setter method.
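
A minimal sketch of rebinding, reusing the USER_ID scoped value from the earlier example:

ScopedValue.runWhere(USER_ID, "alice", () -> {
    // USER_ID.get() returns "alice" here
    ScopedValue.runWhere(USER_ID, "bob", () -> {
        // USER_ID.get() returns "bob" within this nested scope
    });
    // once the nested scope completes, USER_ID.get() returns "alice" again
});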

Furthermore, to read a scoped value from the thread, simply call the get() method. However, calling get() on an unbound scoped value throws a NoSuchElementException. If unsure, check whether the scoped value is bound using the isBound() method. There are also two methods, orElse() and orElseThrow(), to provide a default value or throw a custom exception, respectively.
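
A minimal sketch of this reading API, with a hypothetical REQUEST_ID scoped value used only for illustration:

public class ReadingExample {
	// Hypothetical scoped value, for illustration
	private static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

	void handle() {
		// get() would throw NoSuchElementException if REQUEST_ID were unbound
		if (REQUEST_ID.isBound()) {
			System.out.println(REQUEST_ID.get());
		}
		// Fall back to a default when the value may be unbound
		String id = REQUEST_ID.orElse("anonymous");
	}
}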

One critical distinction between thread-local variables and scoped values is that the latter are not bound to a particular thread. A scoped value is set only for a dynamic scope – such as a method call – so within that scope it presents one immutable value, which may be shared across multiple threads.

In other words, a scoped value is useful for a “one-way transmission” of data. A ThreadLocal has an unbounded lifetime and does not control the changing of data throughout that lifetime. Moreover, inheritance is an expensive operation, with the values being copied to each child thread. A scoped value is set once and can then be shared over multiple threads, as shown in the example below, where three forks of the task share the same variable number.

        ScopedValue.runWhere(MULTIPLIER, number, () -> {
            try (var scope = new StructuredTaskScope<BigInteger>()) {

                scope.fork(() -> squaredByGet());
                scope.fork(() -> squaredByGet());
                scope.fork(() -> squaredByGet());

                scope.join(); // wait for all forks to complete before the scope closes
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

While sharing values between threads in this way is beneficial, the per-thread cache for scoped values is limited to 16 entries by default. To change the default size, set the following system property when invoking the Java program:

java.lang.ScopedValue.cacheSize
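
For example, assuming the standard -D syntax for system properties and a preview-enabled launch, doubling the cache size would look like this:

java -Djava.lang.ScopedValue.cacheSize=32 --enable-preview MyApp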

The introduction of Scoped Values aims to solve the limitations associated with ThreadLocal variables, especially in the context of virtual threads. Although it is not absolutely necessary to move away from ThreadLocal, Scoped Values significantly enhance the Java programming model by providing a more efficient and secure way to share sensitive information between components of an application. Developers can find more details on Scoped Values in the JEP 446 documentation.

We may expect a significant number of the current use cases of thread-local variables to be replaced by scoped values over time. Note, however, that Java 21 only brings Scoped Values as a preview feature; we will have to wait a bit longer before the feature becomes final.



Aerospike’s new Graph database to support both OLAP and OLTP workloads – InfoWorld

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Aerospike on Tuesday took the covers off its new Graph database that can support both Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) workloads.

The new database, dubbed Aerospike Graph, adds a property graph data model to the existing capabilities of its NoSQL database and Apache TinkerPop graph compute engine, the company said.

Apache TinkerPop is an open source property graph compute engine that helps the new graph database support both OLTP and OLAP queries. It is also used by other database providers such as Neo4J, Microsoft’s CosmosDB, Amazon Neptune, Alibaba Graph Database, IBM Db2 Graph, ChronoGraph, Hadoop, and Tibco’s Graph Database.

In order to enable integration between Apache TinkerPop and its database, Aerospike uses the graph API via its Aerospike Graph Service, the company said.

In its efforts to integrate further, the company said it has created an optimized data model under the hood to represent graph elements — such as vertices and edges — that map to the native Aerospike data model using records, bins, and other features.

The new graph database, just like Apache TinkerPop, will make use of the Gremlin Query Language, Aerospike said. This means developers can write applications with new and existing Gremlin queries in Aerospike Graph.

Gremlin is the graph query language of Apache TinkerPop.
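
As an illustration, a Gremlin traversal issued from Java through the TinkerPop driver might look like the following sketch (the host, port, and traversal source name are hypothetical):

import org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import static org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource.traversal;

public class GremlinExample {
    public static void main(String[] args) {
        // Connect to a Gremlin Server endpoint (hypothetical host and port)
        try (GraphTraversalSource g = traversal().withRemote(
                DriverRemoteConnection.using("localhost", 8182, "g"))) {
            // Count vertices labeled "person"
            long people = g.V().hasLabel("person").count().next();
            System.out.println(people);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}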

Some of the applications of graph databases include identity graphs for the advertising technology industry, customer 360 applications across ecommerce and retail companies, fraud detection and prevention across financial enterprises, and machine learning for generative AI.

The introduction of the graph database will also see the graph model added to Aerospike’s real-time data platform which already supports key value and document models. Last year, the company added support for native JSON.

While Aerospike did not reveal any pricing details, it said Aerospike Graph can independently scale compute and storage, and enterprises will have to pay for the infrastructure being used.



Java News Roundup: GraalVM 23.0.0, Payara Platform, Spring 6.1-M1, QCon New York

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for June 12th, 2023 features news from OpenJDK, JDK 22, JDK 21, GraalVM 23, various releases of: GraalVM Build Tools, Spring Framework, Spring Data, Spring Shell, Payara Platform, Micronaut, Open Liberty, Quarkus, Micrometer, Hibernate ORM and Reactive, Project Reactor, Piranha, Apache TomEE, Apache Tomcat, JDKMon, JBang, JHipster, Yupiik Bundlebee; and QCon New York 2023.

OpenJDK

After its review had concluded, JEP 404, Generational Shenandoah (Experimental), was officially removed from the final feature set in JDK 21. This was due to the “risks identified during the review process and the lack of time available to perform the thorough review that such a large contribution of code requires.” The Shenandoah team has decided to “deliver the best Generational Shenandoah that they can” and will seek to target JDK 22.

Julian Waters, of the OpenJDK development team at Oracle, has submitted JEP Draft 8310260, Move the JDK to C17 and C++17, to allow the use of C17 and C++17 programming language features in JDK source code. With the minimal required version of the Microsoft Visual C/C++ compiler being Visual Studio 2019, this draft proposes to apply changes to the build system such that the existing C++ flag, -std:c++14, will change to -std:c++17, and the existing C flag, -std:c11, will change to -std:c17.

JDK 21

Build 27 of the JDK 21 early-access builds was made available this past week featuring updates from Build 26 that include fixes to various issues. Further details on this build may be found in the release notes.

JDK 22

Build 2 of the JDK 22 early-access builds was also made available this past week featuring updates from Build 1 that include fixes to various issues. More details on this build may be found in the release notes.

For JDK 22 and JDK 21, developers are encouraged to report bugs via the Java Bug Database.

GraalVM

Oracle Labs has introduced Oracle GraalVM with a new distribution and license model for both development and production applications. GraalVM Community Components 23.0.0 provides support for JDK 20 and JDK 17 with the GraalVM for JDK 17 Community 17.0.7 and GraalVM for JDK 20 Community 20.0.1 editions of GraalVM. New features in Native Image include: support for G1 GC; compressed object headers and pointers for a lower memory footprint; and machine learning to automatically infer profiling information. InfoQ will follow up with a more detailed news story.

On the road to version 1.0, Oracle Labs has also released version 0.9.23 of Native Build Tools, a GraalVM project consisting of plugins for interoperability with GraalVM Native Image. This latest release provides notable changes such as: a fix for the compatibility of the “collect reachability metadata” task with Gradle’s configuration cache; removal of the use of the deprecated Gradle GFileUtils class, which will ultimately be removed in Gradle 9; and the addition of the GraalVM logo to the generated Native Build Tools documents. Further details on this release may be found in the changelog.

Spring Framework

The first milestone release of Spring Framework 6.1 delivers bug fixes, improvements in documentation, dependency upgrades and new features such as: initial support for the new Sequenced Collections interfaces; support for Coordinated Restore at Checkpoint (CRaC); compatibility with virtual threads; and a ClientHttpRequestFactory interface based on the HttpClient class provided by Jetty. More details on this release may be found in the release notes.

Similarly, versions 6.0.10 and 5.3.28 of Spring Framework have also been released featuring bug fixes, improvements in documentation, dependency upgrades and new features such as: a new remoteServer() method added to the MockHttpServletRequestBuilder class to set the remote address of a request; a new matchesProfiles() method added to the Environment interface to determine whether one of the given profile expressions matches the active profiles; and the isPerInstance() method defined in the Advisor interface declared as default to eliminate the unnecessary implementation requirement of that method. Further details on these releases may be found in the release notes for version 6.0.10 and version 5.3.28.

Versions 2023.0.1, 2022.0.7 and 2021.2.13, service releases of Spring Data, ship with bug fixes and respective dependency upgrades to sub-projects such as: Spring Data Commons 3.1.1, 3.0.7 and 2.7.13; Spring Data MongoDB 4.1.1, 4.0.7 and 3.4.13; Spring Data Elasticsearch 5.1.1, 5.0.7 and 4.4.13; and Spring Data Neo4j 7.1.1 7.0.7 and 6.3.13.

Versions 3.1.1 and 3.0.5 of Spring Shell have been released with notable bug fixes such as: a target annotated with @ShellAvailability not registering with Ahead-of-Time processing; native mode broken on Linux; and an unexpected comma inserted at the end of a parsed message. More details on these releases may be found in the release notes for version 3.1.1 and version 3.0.5.

Payara

Payara has released their June 2023 edition of the Payara Platform that includes Community Edition 6.2023.6, Enterprise Edition 6.3.0 and Enterprise Edition 5.52.0. All three versions feature: the removal of the throwable reference of the ASURLClassLoader class to eliminate class loader leaks; and a fix for the configuration of the dependency injection kernel, HK2, for JDK 17 compilation. Further details on these versions may be found in the release notes for Community Edition 6.2023.6, Enterprise Edition 6.3.0 and Enterprise Edition 5.52.0.

Micronaut

The fourth release candidate of Micronaut 4.0 delivers bug fixes and improvements such as: add a default method to the overloaded set of writeValueAsString() methods in the JsonMapper interface; improved exception handling on scheduled jobs; and a new parameter, missingBeans=EndpointSensitivityHandler.class, for the @Requires annotation on the EndpointsFilter class to convey that endpoint sensitivity is handled externally and the filter will not be loaded. More details on this release may be found in the release notes.

Open Liberty

IBM has released Open Liberty 23.0.0.6-beta that provides: continued improvements in their InstantOn functionality; continued support for the Jakarta Data specification; and improvements for OpenID Connect clients with support for Private Key JWT client authentication and RFC 7636, Proof Key for Code Exchange by OAuth Public Clients (PKCE).

Quarkus

Quarkus 3.1.2.Final, the second maintenance release, provides improvements in documentation, dependency upgrades and bug fixes such as: a ClassNotFoundException when using the Qute Templating Engine in dev mode; a NullPointerException in version 3.1.1 when using a Config Interceptor; and the Quarkus server hanging indefinitely on startup when using the OidcRecorder class. Further details on this release may be found in the release notes.

Micrometer

Versions 1.11.1, 1.10.8 and 1.9.12 of Micrometer Metrics have been released with dependency upgrades and bug fixes such as: an improper variable argument check in the KeyValues class that leads to NullPointerException; loss of scope and context propagation between Project Reactor and imperative code blocks; and random GRPC requests return null upon calling the currentSpan() method defined in the Tracer class. More details on these releases may be found in the release notes for version 1.11.1, version 1.10.8 and version 1.9.12.

Similarly, versions 1.1.2 and 1.0.7 of Micrometer Tracing have been released with dependency upgrades, improvements in documentation and bug fixes such as: abstractions from the Span interface not being equal when delegating to the same OpenTelemetry object; and a fix for Project Reactor with Micrometer 1.10 by using null scopes instead of clearing thread locals. Further details on these releases may be found in the release notes for version 1.1.2 and version 1.0.7.

Hibernate

The release of Hibernate ORM 6.2.5.Final provides bug fixes such as: caching not working properly for entities with inheritance when the hibernate.cache.use_structured_entries property was set to true; generic collections not mapped correctly using a @MappedSuperclass annotation; and mapping of JSON-B of different types in a class inheritance hierarchy does not work.

The release of Hibernate Reactive 2.0.1.Final ships with compatibility with Hibernate ORM 6.2.5.Final and adds support for the @Lob annotation for MySQL, MariaDB, Oracle, and Microsoft SQLServer.

Project Reactor

Project Reactor 2022.0.8, the eighth maintenance release, provides dependency upgrades to reactor-core 3.5.7 and reactor-netty 1.1.8. There was also a realignment to version 2022.0.8 with the reactor-kafka 1.3.18, reactor-pool 1.0.0, reactor-addons 3.5.1 and reactor-kotlin-extensions 1.2.2 artifacts that remain unchanged. More details on this release may be found in the changelog.

Piranha

The release of Piranha 23.6.0 delivers notable changes such as: removal of the deprecated Logging Manager and MimeTypeManager interfaces; deprecation of the --war and --port command line arguments; and add HTTPS support to the Piranha Maven plugin. Further details on this release may be found in their documentation and issue tracker.

Apache Software Foundation

Apache TomEE 9.1.0 has been released featuring bug fixes, improvements in documentation, dependency upgrades and improvements such as: the use of the ActiveMQ 5.18.0 Jakarta EE-compatible client in favor of the shade approach with TomEE; and a backport of the fixes that addressed CVE-2023-24998 and CVE-2023-28708 in Apache Tomcat from version 10.1.x to version 10.0.27. More details on this release may be found in the release notes.

Versions 10.1.10 and 8.5.90 of Apache Tomcat deliver: support for JDK 21 and virtual threads; an update to HTTP/2 to use the prioritization scheme from RFC 9218, Extensible Prioritization Scheme for HTTP; a dependency upgrade to Tomcat Native 2.0.4 and 1.2.37, respectively, which include binaries for Windows built with OpenSSL 3.0.9 and 1.1.1u, respectively; and a deprecation of the xssProtectionEnabled property in the HttpHeaderSecurityFilter class, with its default value set to false. Further details on these versions may be found in the changelogs for version 10.1.10 and version 8.5.90.

JDKMon

Versions 17.0.67 and 17.0.65 of JDKMon, a tool that monitors and updates installed JDKs, have been made available this past week. Created by Gerrit Grunwald, principal engineer at Azul, these new versions provide: support for the new GraalVM Community builds; and a small icon added to the name of a JDK to indicate that it is managed by SDKMan. An experimental new feature in version 17.0.65 is a switch-jdk script placed in a user’s home folder that makes it possible to switch to a specific JDK in a shell session.

JBang

The release of JBang 0.108.0 ships with support for JEP 445, Unnamed Classes and Instance Main Methods (Preview). It is important to note that developers will be required to build and install JDK 21 early-access to use JEP 445 due to the Temurin JDK builds only providing JDK 20 as the latest version.

JobRunr

JobRunr 6.2.2 has been released with notable changes such as: improved caching of job analysis when using the Java Stream API; and a fix for the ElectStateFilter and ApplyStateFilter interfaces being invoked when there is no change of state.

JHipster

The first beta release of JHipster 8.0.0 delivers bug fixes and notable changes such as: the use of Consul by default; a fix for Apache Cassandra tests by dropping CassandraUnit and adding reactive tests; and a move to deny-by-default over allow-by-default by using the authorizeHttpRequests() method defined in the Spring Security HttpSecurity class. It is important to note that the AngularX configuration option has been renamed to Angular; AngularX is kept for backward compatibility but will be removed in the GA release of JHipster 8.0. More details on this release may be found in the release notes.

Yupiik

The release of Yupiik Bundlebee 1.0.20, a light Java Kubernetes package manager, provides updates such as: additional placeholders for the default observability stack; support for namespace placeholder keywords to enable the reuse of globally configured namespace in placeholders; and proper usage of DaemonSet usage for Loki. Further details on this release may be found in the release notes.

QCon New York

After a three-year hiatus due to the pandemic, the 9th annual QCon New York conference was held at the New York Marriott at the Brooklyn Bridge in Brooklyn, New York this past week, featuring three days of presentations from 12 tracks and keynotes delivered by Radia Perlman, Alicia Dwyer Cianciolo, Suhail Patel and Sarah Bird. More details about this conference may be found in the InfoQ daily recaps from Day One and Day Two. InfoQ will follow up with Day Three coverage.



Voxel51 Open-Sources Computer Vision Dataset Assistant VoxelGPT – Q&A with Jason Corso

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

Voxel51 recently open-sourced VoxelGPT, an AI assistant that interfaces with GPT-3.5 to produce Python code for querying computer vision datasets. InfoQ spoke with Jason Corso, co-founder and CSO of Voxel51, who shared their lessons and insights gained while developing VoxelGPT.

In any data science project, data exploration and visualization is a key early step, but many of the techniques are intended for structured or tabular data. Computer vision datasets, on the other hand, are usually unstructured: images or point clouds. Voxel51’s open-source tool FiftyOne provides a query language for exploring and curating computer vision datasets. However, it can be challenging for casual users to quickly and reliably use tools like FiftyOne.

VoxelGPT is a plug-in for FiftyOne that provides a natural language interface for the tool. Users can ask questions about their data, which are translated into Python code that leverages FiftyOne to produce the answers. VoxelGPT uses LangChain and prompt engineering to interact with GPT-3.5 and output the Python code. InfoQ spoke with Jason Corso, co-founder and CSO of Voxel51, about the development of VoxelGPT.

InfoQ: It’s clear that while LLMs provide a powerful natural language interface for solving problems, getting the most out of them can be a challenge. What are some tips for developers?

Jason Corso: A key learning we had while developing VoxelGPT is that expecting one interaction with the LLM to sufficiently address your task is likely a bad idea.  It helps to carefully segment your interaction with the LLM to sufficiently provide enough context per interaction, generate more useful piecemeal results, and later compose them together depending on your ultimate task.

A few other lessons learned:

  • Start simple, gain intuition, and only add layers of complexity once the LLM is acting as you expect.
  • LangChain is a great library, but it is not without its issues. Don’t be afraid to “go rogue” and build your own custom LLM tooling wherever existing tools aren’t getting the job done.

InfoQ: Writing good tests is a key practice in software engineering. What are some lessons you learned when testing VoxelGPT?

Corso: Testing applications built with LLMs is challenging, and testing VoxelGPT was no different. LLMs are not nearly as predictable as traditional software components. However, we incorporated software engineering best practices into our workflows as much as possible through unit testing. 

We created a unit testing framework with 60 test cases, which covered the surface area of the types of queries we’d expect from usage. Each test consisted of a prompt, a FiftyOne Dataset, and the expected subset of the dataset resulting from converting the prompt to a query in FiftyOne’s domain-specific query language. We ran these tests each time we made a substantial change to the code or example set in order to prevent regression. 

InfoQ: AI safety is a major concern. What were some of the safety issues you confronted and how did you solve them?

Corso: Yes, indeed AI safety is a key element to consider when building these systems. When building VoxelGPT, we were intentional about addressing potential safety issues in multiple ways.

Input validation: The first stop on a prompt’s journey through VoxelGPT is OpenAI’s moderation endpoint, so we ensure all queries passed through the system comply with OpenAI’s terms of service. Even beyond that, we run a custom “intent classification” routine to validate that the user’s query falls into one of the three allowed classes of query, is sensible, and is not out of scope.

Bias mitigation: Bias is another major concern with LLMs, which form potentially unwanted or non inclusive connections between concepts, based on their training data. VoxelGPT is incentivized to infer as much as possible from the contextual backdrop of the user’s FiftyOne Dataset, so that it capitalizes on the base LLM’s inference capabilities without being mired in its biases.

Programmed limitations: We purposely limited VoxelGPT’s access to any functionality involving the permanent moving, writing, or deleting of data. We also prevent VoxelGPT from performing any computationally expensive operations. At the end of the day, the human working with VoxelGPT (and FiftyOne) is the only one with this power!

InfoQ: What was one of the most surprising things you learned when building VoxelGPT?

Corso: Building VoxelGPT was really quite fun.  LLMs capture a significant amount of generalizable language-based knowledge.  Their ability to leverage this generalizability in context-specific ways was very surprising.  What do I mean?  At the heart of FiftyOne is a domain-specific language (DSL), based in Python, for querying schema-less unstructured AI datasets.  This DSL enables FiftyOne users to “semantically slice” their data and model outputs to various ends like finding mistakes in annotations, comparing two models, and so on.  However, it takes some time to become an expert in that DSL.  It was wildly surprising that with a fixed and somewhat limited amount of context, we could provide sufficiently rich “training material” for the LLM to actually construct executable Python code in FiftyOne’s DSL.

The VoxelGPT source code is available on GitHub. There is also an online demo available on the FiftyOne website.



Aerospike invades the graph database space with a little help from Apache TinkerPop

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Real-time database vendor Aerospike is expanding its multi-model capabilities with the launch of the Aerospike Graph database.

Aerospike got its start back in 2009, providing a NoSQL database that in its early years focused on advertising applications. Over the past decade Aerospike has evolved to become a real-time database platform, useful for adtech, financial services and customer data platforms among other use cases.

In 2022, the company began its shift to offering what is known as a multi-model database, providing support for the JSON document model, which has become increasingly popular in recent years in part due to the success of document database vendor MongoDB.

Now Aerospike is expanding further with the general availability of Aerospike Graph, which brings graph data model capabilities.


A graph database is a type of data model structured to help users better understand relationships between different data points and content. There are multiple graph databases in the market today, including Neo4J and Amazon Neptune. Even Oracle has a graph database.

Graph databases are helpful for many different use cases, including fraud detection, an area where Aerospike customers have increasingly been headed and have needed a solution.

“What we decided last year was setting the course and a strategy to build a multi-model, multicloud data platform really focused on real-time workloads,” Subbu Iyer, CEO of Aerospike, told VentureBeat. “Our platform is really suited well for high performance at scale, low latency and high availability, so that’s what pushed us into looking at graph to really go after this space.”

Aerospike Graph: Built with open-source Apache TinkerPop technology

Aerospike didn’t build its graph database entirely from scratch. 

Rather, it found a suitable open-source base to build upon in the Apache TinkerPop project. Apache TinkerPop is a graph computing framework that includes its own query language, known as Gremlin.

“When we found Apache TinkerPop, we realized it is a great solution, and we actually work with some of the original authors of TinkerPop,” Iyer said.

In effect what Aerospike has done with its graph database is develop a commercially supported version of TinkerPop. Iyer explained that Aerospike Graph handles the separation of compute and storage, enabling either type of resource to scale independently as needed. The database is available both as an on-premises technology and in a database-as-a-service (DBaaS) model in the cloud.

Aerospike Graph architecture
Image credit: Aerospike

With the initial release of Aerospike Graph, the company will be supporting the Apache TinkerPop project’s Gremlin query language. In the future, Iyer said that Aerospike could support other query languages for graph databases. Today there are multiple approaches to graph queries, including the Cypher query language backed by graph database vendor Neo4j and the Property Graph Query Language (PGQL) backed by Oracle.

Is the next stop for Aerospike more AI?

As Aerospike continues to grow its platform, artificial intelligence (AI) capabilities are high on Iyer’s agenda.

The core Aerospike database platform is already being used by organizations as a feature store for AI pipelines, according to Iyer. There has also been a lot of effort in the industry overall in recent months to use existing data sources to help augment large language model (LLM) data for generative AI. That’s an area where vector databases are playing a role and it’s a space that Iyer is tracking closely.

“We’re looking at it very carefully,” Iyer said about vector databases. “It actually fits in very well with our multi-model strategy.”




AWS Launches Amazon S3 Dual-Layer Server-Side Encryption with Keys Stored in AWS KMS

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

AWS recently launched Amazon S3 dual-layer server-side encryption with keys stored in AWS Key Management Service (DSSE-KMS), a new option that applies two layers of encryption to objects when they are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket.

The company designed DSSE-KMS to meet the National Security Agency’s CNSSP 15 guidance for FIPS compliance and the Data-at-Rest Capability Package (DAR CP) Version 5.0 guidance for two layers of CNSA encryption, allowing customers to use DSSE-KMS to fulfill regulatory requirements that call for multiple layers of encryption on their data.

With the launch of DSSE-KMS, Amazon S3 now offers four options for server-side encryption: SSE-S3 with Amazon S3 managed keys (the default), SSE-KMS with keys stored in AWS KMS, SSE-C with customer-provided keys, and the new DSSE-KMS.

Source: https://aws.amazon.com/blogs/aws/new-amazon-s3-dual-layer-server-side-encryption-with-keys-stored-in-aws-key-management-service-dsse-kms/

DSSE-KMS allows users to specify dual-layer server-side encryption (DSSE) when uploading or copying an object through a PUT or COPY request. Additionally, they can configure their S3 bucket so that DSSE is automatically applied to all new objects. Users can also enforce DSSE-KMS by leveraging IAM and bucket policies. Each encryption layer employs a distinct cryptographic implementation library with its own data encryption keys, so DSSE-KMS helps protect sensitive data against the low probability of a vulnerability in a single layer of cryptographic implementation.

Users can leverage DSSE-KMS via the AWS CLI, AWS Management Console, or using the Amazon S3 REST API.
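
For illustration, an upload requesting DSSE-KMS via the AWS CLI might look like the following sketch; the bucket and object names are hypothetical, and aws:kms:dsse is the value the S3 API defines for the x-amz-server-side-encryption header:

aws s3api put-object --bucket my-bucket --key report.pdf --body report.pdf --server-side-encryption aws:kms:dsse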

Regarding the DSSE-KMS, Rob Fuller, a Red Team tactics trainer, tweeted:

If you didn’t see this, please go have your cloud teams (or if that’s you) enable this today (or your next maintenance window).

In addition, Irshad A Buchh, a principal solutions architect at AWS, states in an AWS News blog post:

Amazon S3 is the only cloud object storage service where customers can apply two layers of encryption at the object level and control the data keys used for both layers. DSSE-KMS makes it easier for highly regulated customers to fulfill the rigorous security standards, such as the US Department of Defense (DoD) customers.

Meanwhile, in a LinkedIn post about DSSE-KMS by Joshua Bregler, a senior security manager at McKinsey Digital, Kieran Miller, a chief architect at Garantir, commented:

Dual encryption is great if the keys are stored separately and under control of different entities. What’s the threat model for this use case where both keys are stored in your AWS KMS account and all the encryption happens server-side? Is it likely that I would compromise one KMS key but not the other?

I suppose I could see value if one of the KMS keys is stored externally via AWS KMS External Key Store or in another account under a different entity’s control. Are these use cases supported?

Amazon S3 dual-layer server-side encryption with keys stored in AWS KMS (DSSE-KMS) is available today in all AWS Regions. Pricing details are available on the Amazon S3 pricing page (Storage tab) and the AWS KMS pricing page.



AWS MongoDB: A Comprehensive Guide – Nigeria News June 19, 2023 – NNN

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Introduction to AWS MongoDB

MongoDB is a popular document-based NoSQL database platform that is widely used in modern application development. AWS supports the MongoDB platform in various capacities through services including Amazon DocumentDB, Amazon EC2, and Amazon EBS, providing users with greater flexibility and scalability. This article aims to provide a comprehensive guide on AWS MongoDB, including its architecture, benefits, use cases, and potential limitations.

MongoDB Architecture on AWS

The architecture of a MongoDB database on AWS comprises the database nodes, the application servers, the load balancers, and the storage volumes. Depending on the needs of an application, AWS uses different strategies to configure and deploy MongoDB architecture. One such strategy is deploying MongoDB on Amazon EC2 instances. In this strategy, MongoDB is deployed into an EC2 instance as a single-node database or a replica set. In a single-node deployment, the MongoDB process runs on a single EC2 instance, while in a replica set, multiple MongoDB instances are deployed into multiple EC2 instances. Additionally, MongoDB architected on EC2 can benefit from Auto Scaling, which can automatically add or remove EC2 instances based on traffic demand.

Another way of deploying MongoDB on AWS is through Amazon DocumentDB, a fully managed DB service that offers a scaling and self-healing architecture based on version 3.6 of MongoDB. Amazon DocumentDB automatically deploys and scales MongoDB replica sets, making it easier to provision, configure, and manage. Amazon DocumentDB also provides built-in security features such as encryption at rest and in-flight, as well as secure connectivity to applications.
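
Because Amazon DocumentDB exposes MongoDB-compatible endpoints, applications can connect with existing MongoDB drivers. A minimal Java sketch, assuming the official synchronous MongoDB driver and a hypothetical cluster endpoint and credentials (DocumentDB additionally requires TLS certificate configuration in practice):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class DocumentDbExample {
    public static void main(String[] args) {
        // Hypothetical endpoint and credentials, for illustration only
        String uri = "mongodb://user:password@my-cluster.cluster-example.us-east-1.docdb.amazonaws.com:27017/?tls=true&replicaSet=rs0";
        try (MongoClient client = MongoClients.create(uri)) {
            MongoCollection<Document> orders = client.getDatabase("shop").getCollection("orders");
            // Insert a simple document, as with any MongoDB deployment
            orders.insertOne(new Document("item", "book").append("qty", 1));
        }
    }
}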

Advantages of Using AWS MongoDB

AWS provides many benefits when using MongoDB, including scalability, security, reliability, and cost efficiency.

One of the significant advantages of deploying MongoDB on AWS is scalability. AWS’s automatic scaling features can scale the infrastructure up and down in response to changing workloads, meaning the deployment can grow to meet the demands of the most complex applications. Additionally, AWS provides a broad range of server hardware that delivers fast performance and high levels of availability.

Another advantage of using AWS MongoDB is security. The AWS security model applies equally to MongoDB deployed on AWS, providing you with a secure platform. Amazon DocumentDB automatically encrypts data at rest, and you can enable encryption in transit using mechanisms such as Transport Layer Security (TLS) and HTTPS. AWS provides additional security features such as network isolation, IAM roles, and VPC access.

Reliability is another advantage of using AWS MongoDB. Many AWS regions have multiple Availability Zones that provide reliable infrastructure, and automatic scaling helps keep an application available while preventing over-provisioning. The AWS-managed services, including Amazon DocumentDB, are fully managed, offering an easy-to-use platform that doesn’t require a significant investment of time and resources.

Cost-efficiency is another advantage of using AWS MongoDB. AWS provides a flexible pricing model, allowing you to pay for the resources you consume rather than paying for a larger block of resources. Additionally, AWS provides on-demand infrastructure, which means that you only pay for the resources you consume by the hour.

Use Cases of AWS MongoDB

AWS MongoDB has many use cases, including e-commerce, IoT, gaming, and social networking. MongoDB is useful in e-commerce applications because it provides capabilities such as real-time inventory updates, customer insights, and matching products with customers. IoT applications make use of MongoDB’s ability to store documents representing sensor data. Game developers use MongoDB for player profiles and game event storage and analysis. Social networking applications use MongoDB for storing data streams, user-generated content, and messages.

Potential Limitations of AWS MongoDB

One potential limitation when using AWS MongoDB is the increased complexity of management because users are responsible for managing the infrastructure. Additionally, using AWS requires users to manage their networking and security policies effectively. Users must also carefully consider the sizing, performance tuning, and scaling of their database.

Another potential limitation of using AWS MongoDB is the lock-in factor. Developers using AWS’s managed services such as Amazon DocumentDB may find it challenging to migrate their application from AWS to another cloud platform or on-premise.

Conclusion

AWS MongoDB is a versatile and scalable NoSQL database platform that offers several benefits, such as security, reliability, flexibility, and cost-efficiency. AWS provides different services that support MongoDB, including Amazon DocumentDB, Amazon EC2, and Auto Scaling. AWS MongoDB has a wide range of use cases that span different industries, including social networking, gaming, and e-commerce. However, using AWS MongoDB comes with potential limitations such as management complexity and the lock-in effect. Before using AWS MongoDB, users must carefully consider these factors to ensure the smooth running of their application.



Article: Exploring Java Records Beyond Data Transfer Objects

MMS Founder
MMS Otavio Santana

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • Records are classes that act as transparent carriers for immutable data and can be thought of as nominal tuples.
  • Records can help you write more predictable code, reduce complexity, and improve the overall quality of your Java applications.
  • Records can be applied with Domain-Driven Design (DDD) principles to write immutable classes, and create more robust and maintainable code.
  • The Jakarta Persistence specification does not support immutability for relational databases, but immutability can be accomplished with NoSQL databases.
  • You can take advantage of immutable classes in situations such as concurrency cases, CQRS, event-driven architecture, and much more.

If you are already familiar with the Java release cadence and the latest LTS version, Java 17, you can explore the Java Record feature, which allows for the creation of immutable classes.

But the question remains: How can this new feature be used in my project code? How do I take advantage of it to make a clean and better design? This tutorial will provide some examples going beyond the classic data transfer objects (DTOs).

What and Why Java Records?

First things first: what is a Java Record? You can think of records as classes that act as transparent carriers for immutable data. Records were introduced as a preview feature in Java 14 (JEP 359).

After a second preview was released in Java 15 (JEP 384), the final version was released with Java 16 (JEP 395). Records can also be thought of as nominal tuples.

As I previously mentioned, you can create immutable classes with less code. Consider a Person class containing three fields – name, birthday and city where this person was born – with the condition that we cannot change the data.

Therefore, let’s create an immutable class. We’ll follow the Java Bean pattern and define the class as final along with its respective fields:

public final class Person {

    private final String name;

    private final LocalDate birthday;

    private final String city;

    public Person(String name, LocalDate birthday, String city) {
        this.name = name;
        this.birthday = birthday;
        this.city = city;
    }


    public String name() {
        return name;
    }

    public LocalDate birthday() {
        return birthday;
    }

    public String city() {
        return city;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        Person person = (Person) o;
        return Objects.equals(name, person.name)
                && Objects.equals(birthday, person.birthday)
                && Objects.equals(city, person.city);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, birthday, city);
    }

    @Override
    public String toString() {
        return "OldPerson{" +
                "name='" + name + ''' +
                ", birthday=" + birthday +
                ", city='" + city + ''' +
                '}';
    }
}

In the above example, we’ve created the class with final fields and getter methods, but please note that we didn’t exactly follow the Java Bean standard of prefixing the method names with get.

Now, let’s follow the same path to create an immutable class: define the class as final, then the fields, and then the constructor. Since this is repetitive, can we reduce the boilerplate? The answer is yes, thanks to the Record construct:

public record Person(String name, LocalDate birthday, String city) {

}

As you can see, we can replace dozens of lines with a single one. We swapped the class keyword for the record keyword and let the magic of simplicity happen.

It is essential to highlight that a record is still a class. Thus, it retains many of the capabilities of a Java class, such as declaring methods and implementing interfaces. Having said that, let’s move to the next section to see how to use the Record construct.
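
For instance, given the Person record above, the compiler generates the accessors, equals(), hashCode(), and toString() automatically; a short usage sketch:

Person person = new Person("Ada", LocalDate.of(1815, 12, 10), "London");
System.out.println(person.name()); // accessor generated from the record component
System.out.println(person);        // Person[name=Ada, birthday=1815-12-10, city=London]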

Data Transfer Objects (DTOs)

This is the first and also broadly the most popular use case on the internet. Thus, we won’t focus on it too much here, but it is worth mentioning that it is a good example of a Record, though not the only one.

It does not matter if you use Spring, MicroProfile or Jakarta EE; sample cases of DTOs implemented as records are widely available online.

Value Objects or Immutable Types

In Domain-Driven Design (DDD), Value Objects represent a concept from a problem domain or context. Those classes are immutable, such as a Money or Email type. So, since Value Objects and records are both immutable, records are a natural fit.

In our first example, we’ll create an Email type that simply wraps a value:

public record Email (String value) {
}
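
If the Email type needs validation, a compact constructor can enforce it. A minimal sketch, where the @ check is only illustrative:

public record Email(String value) {
    public Email {
        Objects.requireNonNull(value, "value is required");
        if (!value.contains("@")) {
            throw new IllegalArgumentException("value is not a valid email");
        }
    }
}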

As with any Value Object, you can add methods and behavior, but the result should be a different instance. Imagine we create a Money type and want an add operation. We’ll add a method that checks that both amounts use the same currency and then creates a new instance as the result:

public record Money(Currency currency, BigDecimal value) {

    Money add(Money money) {
        Objects.requireNonNull(money, "Money is required");
        if (currency.equals(money.currency)) {
            BigDecimal result = this.value.add(money.value);
            return new Money(currency, result);
        }
        throw new IllegalStateException("You cannot sum money with different currencies");
    }
}
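
A short usage example of the operation above: add() returns a new instance and leaves the originals untouched.

Currency usd = Currency.getInstance("USD");
Money ten = new Money(usd, BigDecimal.TEN);
Money twenty = ten.add(ten); // new instance; ten remains unchanged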

The Money record was just an example, mainly because developers can use the well-known library Joda-Money. The point is that when you need to create a Value Object or an immutable type, a Record fits perfectly.

Immutable Entities

But wait, did you say immutable entities? Is that possible? It is unusual, but it happens, such as when the entity holds a historic transitional point.

Can an entity be immutable? Consider the definition of an entity in Eric Evans’ book, Domain-Driven Design: Tackling Complexity in the Heart of Software:

An entity is anything that has continuity through a life cycle and distinctions independent of attributes essential to the application’s user.

The entity is not about being mutable or not, but about its relation to the domain; thus, we can have immutable entities, but again, it is unusual. There is a discussion related to this question on Stack Overflow.

Let’s create an entity named Book with an id, title and release year. What happens if you want to edit a book entity? We don’t; instead, we create a new edition. Therefore, we’ll also add an edition field.

public record Book(String id, String title, Year release, int edition) {}

This is OK, but we also need validation; otherwise, this book could hold inconsistent data. It does not make sense to have null values for the id, title and release, nor a non-positive edition. With a Record, we can use the compact constructor and place the validations in it:

public Book {
    Objects.requireNonNull(id, "id is required");
    Objects.requireNonNull(title, "title is required");
    Objects.requireNonNull(release, "release is required");
    if (edition < 1) {
        throw new IllegalArgumentException("Edition cannot be negative");
    }
}

We can override equals(), hashCode() and toString() methods if we wish. Indeed, let’s override the equals() and hashCode() contracts to operate on the id field:

@Override
public boolean equals(Object o) {
    if (this == o) {
        return true;
    }
    if (o == null || getClass() != o.getClass()) {
        return false;
    }
    Book book = (Book) o;
    return Objects.equals(id, book.id);
}

@Override
public int hashCode() {
    return Objects.hashCode(id);
}

To make it easier to create instances of this class, or when you have more complex objects, you can either create a factory method or define a builder. The code below shows a builder being used to create a Book record; a sketch of the builder itself follows the snippet:

 Book book = Book.builder().id("id").title("Effective Java").release(Year.of(2001)).build();
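
Note that records do not generate builders automatically; the builder invoked above is hand-written. A minimal sketch of what it might look like:

public record Book(String id, String title, Year release, int edition) {

    public static BookBuilder builder() {
        return new BookBuilder();
    }

    public static class BookBuilder {
        private String id;
        private String title;
        private Year release;
        private int edition = 1;

        public BookBuilder id(String id) { this.id = id; return this; }
        public BookBuilder title(String title) { this.title = title; return this; }
        public BookBuilder release(Year release) { this.release = release; return this; }
        public BookBuilder edition(int edition) { this.edition = edition; return this; }

        public Book build() { return new Book(id, title, release, edition); }
    }
}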

To wrap up our immutable entity with a Record, we’ll also include a change method for moving the book to a new edition. In the next step, we’ll create the second edition of Joshua Bloch’s well-known book, Effective Java. We cannot change the fact that there was once a first edition of this book; that is the historical part of our library business.

Book first = Book.builder().id("id").title("Effective Java").release(Year.of(2001)).build();
 Book second = first.newEdition("id-2", Year.of(2009));
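
The newEdition() method is not shown in the article; a plausible implementation returns a new instance with an incremented edition instead of mutating the original:

public Book newEdition(String id, Year release) {
    return new Book(id, this.title, release, this.edition + 1);
}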

Currently, the Jakarta Persistence specification cannot support immutability for compatibility reasons, but we can explore it on NoSQL APIs such as Eclipse JNoSQL and Spring Data MongoDB.

We covered many of those topics. Therefore, let’s move on to another design pattern to represent the form of our code design.

State Implementation

There are circumstances where we need to implement a flow or a state inside the code. The State design pattern fits an e-commerce context where an order needs to maintain its chronological flow: we want to know when it was requested, delivered, and finally received by the user.

The first step is to create an interface. For simplicity, we’ll use a String to represent products, though in a real application we’d need an entire object for it:

public interface Order {

    Order next();
    List<String> products();
}

With this interface ready for use, let’s create implementations that follow the flow and return the products. We want to avoid any change to the products, so we’ll override the products() method to return a read-only list.

public record Ordered(List<String> products) implements Order {

    public Ordered {
        Objects.requireNonNull(products, "products is required");
    }
    @Override
    public Order next() {
        return new Delivered(products);
    }

    @Override
    public List<String> products() {
        return Collections.unmodifiableList(products);
    }
}

public record Delivered(List<String> products) implements Order {

    public Delivered {
        Objects.requireNonNull(products, "products is required");
    }
    @Override
    public Order next() {
        return new Received(products);
    }

    @Override
    public List<String> products() {
        return Collections.unmodifiableList(products);
    }
}


public record Received(List<String> products) implements Order {

    public Received {
        Objects.requireNonNull(products, "products is required");
    }

    @Override
    public Order next() {
        throw new IllegalStateException("We finished our journey here");
    }

    @Override
    public List<String> products() {
        return Collections.unmodifiableList(products);
    }

}

Now that we have the states implemented, let’s change the Order interface. First, we’ll create a static method to start an order. Then, to ensure that no outside state implementation sneaks in, we’ll only allow the implementations we have by using the sealed interface feature.

public sealed interface Order permits Ordered, Delivered, Received {

    static Order newOrder(List<String> products) {
        return new Ordered(products);
    }

    Order next();
    List<String> products();
}

We made it! Now we’ll test the code with a list of products. As you can see, we have our flow exploring the capabilities of records.

List products = List.of("Banana");
Order order = Order.newOrder(products);
Order delivered = order.next();
Order received = delivered.next();
Assertions.assertThrows(IllegalStateException.class, () -> received.next());

Modeling state with immutable classes allows you to reason about transactional moments, such as an entity’s state at a point in time, or to generate an event in an event-driven architecture.

Conclusion

That is it! In this article, we discussed the power of a Java Record. It is essential to mention that a record is a Java class with several benefits, such as the ability to create methods, validate in the constructor, and override the accessors, equals(), hashCode() and toString() methods.

The Record feature can go beyond a DTO. In this article, we explored a few, such as Value Object, immutable entity, and State.

Imagine where you can take advantage of immutable classes in situations such as concurrency cases, CQRS, event-driven architecture, and much more. The record feature can make your code design go to infinity and beyond! I hope you enjoyed this article and see you at a social media distance.



AI, ML, Data Engineering News Round Up: Vertex, AlphaDev, Function Calling, Gorilla, and Falcon

MMS Founder
MMS Daniel Dominguez

Article originally posted on InfoQ. Visit InfoQ

The latest update, spanning from June 12th, 2023, highlights the recent advancements and announcements in the domains of data science, machine learning, and artificial intelligence. This week’s spotlight is on notable entities such as Google, OpenAI, UC Berkeley, and AWS.

Generative AI Support on Vertex AI Is Now Generally Available

Google has introduced the availability of Generative AI support on Vertex AI, enabling customers to utilize the latest platform features in creating and operating personalized generative AI applications. This update empowers developers to utilize various tools and resources, including the PaLM 2 powered text model, the Embeddings API for text, and foundational models in Model Garden. Additionally, the Generative AI Studio offers user-friendly tools for fine-tuning and deploying models. With the backing of enterprise-level data governance, security, and safety measures, Vertex AI simplifies the process for customers to access foundational models, customize them with their own data, and swiftly develop generative AI applications.

DeepMind Introduces AlphaDev

Google DeepMind has unveiled AlphaDev, an artificial intelligence system that leverages reinforcement learning to discover improved computer science algorithms, surpassing those honed by scientists and engineers over many years. AlphaDev has discovered a more efficient sorting algorithm for organizing data. Such algorithms play a fundamental role in daily life, from ranking online search results and social media posts to data processing on computers and smartphones. Using AI to generate superior algorithms is poised to revolutionize computer programming and have a significant impact on all facets of our ever-growing digital society.
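For context, AlphaDev's headline result concerns very small, fixed-size sorting routines (on the order of three to five elements) of the kind used as building blocks inside larger sorting libraries. The snippet below is a rough Java illustration of a classic three-element sorting network, the textbook construction rather than AlphaDev's machine-discovered instruction sequence:

public class Sort3Network {

    // A single comparator of the network: swap the pair if out of order
    private static void compareExchange(int[] a, int i, int j) {
        if (a[i] > a[j]) {
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    // A fixed sequence of three comparators sorts any three elements,
    // regardless of their initial order
    static void sort3(int[] a) {
        compareExchange(a, 0, 1);
        compareExchange(a, 0, 2);
        compareExchange(a, 1, 2);
    }

    public static void main(String[] args) {
        int[] values = {3, 1, 2};
        sort3(values);
        System.out.println(java.util.Arrays.toString(values)); // [1, 2, 3]
    }
}

The appeal of such routines is that they execute a fixed sequence of comparisons with no data-dependent control flow, which is exactly the level at which instruction-by-instruction improvements pay off.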

OpenAI Announces Function Calling

OpenAI has introduced updates to its API, including a capability called function calling, which lets developers describe functions to GPT-4 and GPT-3.5 and have the models return a JSON object with arguments for invoking those functions. Function calling facilitates building chatbots that leverage external tools, transform natural language into database queries, and extract structured data from text. The models have been fine-tuned both to detect when a function should be invoked and to reply with JSON that adheres to the function signature.
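As a minimal sketch of the request shape, assuming the public Chat Completions REST endpoint: the get_current_weather function, its JSON Schema, and the model name below are illustrative choices made here, not details from the announcement.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FunctionCallingExample {

    public static void main(String[] args) throws Exception {
        // One function description; "parameters" is a JSON Schema object.
        // The model decides whether to call the function and, if so, replies
        // with a "function_call" field whose arguments match this schema.
        String payload = """
                {
                  "model": "gpt-3.5-turbo-0613",
                  "messages": [
                    {"role": "user", "content": "What is the weather like in Boston?"}
                  ],
                  "functions": [
                    {
                      "name": "get_current_weather",
                      "description": "Get the current weather in a given city",
                      "parameters": {
                        "type": "object",
                        "properties": {
                          "city": {"type": "string", "description": "The city name"}
                        },
                        "required": ["city"]
                      }
                    }
                  ]
                }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        // The response contains either a normal message or a function_call
        // whose "arguments" string can be parsed and dispatched to local code
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}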

UC Berkeley Develops Gorilla, an LLM Connected with APIs

Researchers from UC Berkeley released Gorilla, a fine-tuned model based on LLaMA that surpasses GPT-4 at generating API calls. Its integration with a document retriever lets Gorilla adapt to changes in documents at test time, enabling flexible API updates and version changes. Gorilla also substantially mitigates the hallucination issue often encountered when prompting LLMs directly. To assess the model's capabilities, the researchers introduced APIBench, a comprehensive dataset comprising HuggingFace, TorchHub, and TensorHub APIs. The successful combination of the retrieval system with Gorilla demonstrates the potential for LLMs to use tools more accurately, keep up with frequently changing documentation, and improve the reliability and practicality of their outputs.

AWS Introduces Falcon, a Foundational LLM Built and Trained with Sagemaker

AWS and the UAE Technology Innovation Institute have launched Falcon LLM, a foundational large language model with 40 billion parameters. Falcon matches the performance of other high-performing LLMs and is the top-ranked open-source model on the public Hugging Face Open LLM leaderboard. It is available as open source in two sizes, Falcon-40B and Falcon-7B, and was built from scratch using data preprocessing and model training jobs built on Amazon SageMaker. Open-sourcing Falcon 40B enables users to construct and customize AI tools that cater to unique user needs, facilitating seamless integration and ensuring the long-term preservation of data assets. The model weights are available to download, inspect, and deploy anywhere.



ASP.NET Core in .NET 8 Preview 5: Improved Debugging, Blazor Updates, SignalR Reconnects, and More

MMS Founder
MMS Almir Vuk

Article originally posted on InfoQ. Visit InfoQ

The latest release of .NET 8 Preview 5 brings significant additions to ASP.NET Core. Notable enhancements include an improved debugging experience, changes to servers and middleware, new features and improvements in Blazor, enhanced API authoring capabilities, seamless reconnect functionality in SignalR, and changes to authentication and authorization.

Regarding productivity, notable advancements have been made to the debugging experience in ASP.NET Core. Specifically, new debug customization attributes surface crucial information for types such as HttpContext, HttpRequest, HttpResponse, and ClaimsPrincipal within the Visual Studio debugger.

In the latest .NET 8 preview 5, developers can experience early support for “seamless reconnects” in SignalR. This new feature aims to minimize downtime for clients facing temporary network disruptions, such as switching networks or passing through a tunnel. By temporarily buffering data on both the server and client sides and acknowledging messages, it ensures a smoother user experience. Currently, this support is limited to .NET clients using WebSockets, and configuration options are not yet available; developers can opt in to the feature via the options.UseAcks setting when configuring the connection with HubConnectionBuilder. Upcoming previews are expected to introduce server-side configuration, customizable buffering settings, timeout limits, and expanded support for other transports and clients.

Blazor has also received a significant number of updates in .NET 8 Preview 5. The new Blazor Web App template is available from both the command line and Visual Studio; Webcil is now the default packaging format when publishing a Blazor WebAssembly app; and Blazor WebAssembly no longer requires unsafe-eval to be enabled when specifying a Content Security Policy (CSP).

Also, the Blazor Router component now integrates with endpoint routing to handle both server-side and client-side routing. This integration allows for consistent routing to components regardless of whether server-side or client-side rendering is employed. The new Blazor Web App template includes sample pages, such as Index.razor and ShowData.razor, which utilize endpoint routing and streaming rendering for displaying weather forecast data, with enhanced navigation support expected in future .NET 8 previews.

Blazor Server introduces the ability to enable interactivity for individual components. With the new [RenderModeServer] attribute, developers can activate interactivity for specific components by utilizing the AddServerComponents extension method. This enhancement offers greater flexibility and control when building interactive applications with Blazor Server rendering mode.

The comment section of the original release blog post has generated significant activity, with users engaging in numerous questions and discussions with the development team. Developers are encouraged to explore the comment section for further information and insights.

Generic attributes were introduced in C# 11, and API authoring now gains support for them, providing cleaner alternatives to attributes that previously relied on a System.Type parameter. Generic variants are now available for the following attributes: ProducesResponseType, Produces, MiddlewareFilter, ModelBinder, ModelMetadataType, ServiceFilter, and TypeFilter.

Authentication and authorization have also seen changes: the ASP.NET Core React and Angular project templates have removed their dependency on Duende IdentityServer and now use the default ASP.NET Core Identity UI and cookie authentication for individual user accounts. Additionally, a new Roslyn analyzer introduced in this preview facilitates the adoption of a terser syntax using the AddAuthorizationBuilder API, where applicable.

Other notable changes land in the servers and middleware area. The new IHttpSysRequestTimingFeature interface provides detailed timestamp data collected during request processing when using the HTTP.sys server, and the ITlsHandshakeFeature interface now exposes the Server Name Indication (SNI) hostname. The addition of the IExceptionHandler interface enables services to be resolved and invoked by the exception handler middleware, giving developers a callback for handling known exceptions in a centralized location.

Furthermore, regarding Native AOT, the latest preview enhances minimal APIs generated at compile time, including support for parameters decorated with the AsParameters attribute and automatic inference of metadata for request and response types.

Lastly, developers are welcome to leave feedback and follow the progress of ASP.NET Core in .NET 8 by visiting the official GitHub project repository.
