Improving Developer Experience in Non-Technical Organisations with BMK

MMS Founder
MMS Rafiq Gemmail

Article originally posted on InfoQ. Visit InfoQ

BMK Lakshminarayanan, a transformation architect at SECTION6 and New Zealand’s ambassador for the Cloud Native Computing Foundation and DevOps Institute, recently wrote an article titled The Brutal Truth: Developer Experience Challenges in Non-Tech Enterprises. Lakshminarayanan, better known as BMK, examined the practical and cultural challenges faced by engineers in non-technical enterprises.

Challenging the notion that it is possible to be a non-technical enterprise, McKinsey Digital have also recently published a report titled Every Company is a Software Company, authored by McKinsey partners Jeremy Schneider, Chandra Gnanasambandam and Janaki Palaniappan. The report shared findings from a recent survey revealing that most non-technology companies see software as a “bolt-on” without acknowledging the need for cultural change. Both BMK and McKinsey’s report point to the business benefits of investing in engineering leadership, DevOps culture and better integration of technology into product strategy.

Writing of the barriers to alignment within non-technical organisations, BMK wrote that engineers “may find it challenging to communicate with business stakeholders who lack technical knowledge.” He wrote that the consequence of this is a negative impact on productivity, including “misunderstandings” and “missed deadlines.” BMK recommended regularly bringing developers and their “dependent teams” together to “encourage openness” and “improve the quality of products.”

McKinsey’s report also discussed the need for non-technical organisations to ensure that they have technically versed leadership, stating that “one-third to one-half of a leadership team should be deep software experts.” They also wrote of the importance of empowering “software product managers” and acknowledged their impact on a company’s bottom line, stating:

You can’t build world-class software capabilities without world-class software product managers. They turn the creative force of engineers and designers into winning software products and services. They have end-to-end accountability and, in some cases, even full profit-and-loss responsibility for a specific product. In the tech world, the ascendancy and importance of product managers are well established. But few nontech companies give them commensurate responsibilities or influence. That’s a big mistake.

BMK discussed how a negative developer experience within such firms can arise from under-investment in tooling, security, training and upgrades of “outdated technology.” He wrote that by “investing in developer experience, enterprises can create a work environment that fosters innovation, collaboration, and growth.” McKinsey wrote that poor DevEx poses a risk to engineer retention. Their report stated that “developer experience is so important” that one CEO has a dashboard to “track developer satisfaction scores.”

Discussing the challenges of lasting cultural change, BMK wrote that although some non-technical organisations may have “DevOps, SRE, and Cloud-Native” titles in their leadership and organisational structures, there is “often a lack of DevOps culture.” He wrote that it is often the case in non-technical organisations that “developers have limited access to the resources and tools required for successful software development.” BMK provided the example of organisations with a “strong divide between build and run teams.” Writing of the benefit when non-technical organisations “foster psychological safety”, he wrote:

Non-tech enterprises should encourage collaboration between development and operations teams and provide developers with the tools and resources to work effectively. Help engineers to experiment, fail and acquire new knowledge from their experience.

McKinsey also highlighted that while organisational leadership are often aware of the need to build “software culture,” this requires a deeper shift which “values the artisanship of great engineering.” The report says:

Every leader we spoke with underlined the fact that building a software-centric business means building a software culture. This goes way beyond adding a few software veterans and implementing DevOps (software development and IT operations). It requires building a culture that deeply values the creativity and artisanship of great engineering, elevates product leadership and a customer-first focus, and empowers a leadership team with a strong understanding of software business models and tech.

McKinsey’s report states that “good software development can’t thrive in a hierarchical organization.” The report discusses balancing autonomous and fast-paced delivery with “guardrails to limit risk.” McKinsey’s team wrote of the surveyed CEOs, that they were aware it was critical to provide “product teams with the autonomy to experiment, try new tech, and develop their own solutions.” The report proposed providing empowered product managers with OKRs and “freedom and accountability” to lead goal-driven “cross-functional teams.” BMK described how developers in such hierarchical organisations were often disempowered from contributing to organisational success:

Non-tech enterprises may have rigid hierarchies and decision-making processes, which can limit the autonomy of developers. This can lead to developers feeling frustrated and disengaged, resulting in a poor experience. Additionally, the lack of autonomy can make it difficult for developers to take ownership of their work and contribute to the organization’s success.

McKinsey’s team wrote that successful product managers “obsess over usage data” to continuously improve their products, engaging “designers, engineers and data scientists” early in the ideation phase of product development. Similarly, BMK recommended that non-technical enterprises enable engineers by providing the resources for teams to effectively collaborate across the “Ideate, Create, Release and Operate” life cycle of a product. McKinsey’s report states that this early collaboration enables product managers to “tap a wide range of unconventional thinking.”

BMK wrote that while it can be “daunting” for non-technical organisations to address all of these factors, it is ultimately in the organisation’s interest to improve its developer experience. He wrote:

By recognizing developers’ challenges and proactively addressing them, non-tech enterprises can attract and retain top talent and improve flow, effectiveness, and efficiency, resulting in higher-quality products and a more successful business overall.


AWS Releases New Cloud-Optimized Linux Distribution With Amazon Linux 2023

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Recently, AWS announced the general availability of Amazon Linux 2023 (AL2023), a third-generation distribution with a high-security standard, predictable lifecycle, and deterministic updates.

The company released the first cloud-optimized Linux distribution in 2010, followed by a second called Amazon Linux 2. With AL2023, customers can expect a predictable two-year major release cycle and long-term support, frequent and flexible updates, improved security posture with features such as SELinux, kernel live-patching (x86-64 and ARM), OpenSSL 3.0, revised cryptographic policies, deterministic upgrades with versioned repositories, kernel hardening, and more.

There are several differences between Amazon Linux 2 and AL2023. One of the most important differences is that Amazon Linux 2 offers long-term support until June 30, 2023, while AL2023 has a predictable two-year major release cycle and long-term support.


Source: https://aws.amazon.com/blogs/aws/amazon-linux-2023-a-cloud-optimized-linux-distribution-with-long-term-support/

Furthermore, AL2023 provides customers with deterministic updates through versioned repositories, a flexible and consistent update mechanism. Sébastien Stormacq, a principal developer advocate at AWS, explains how the feature contrasts with Amazon Linux 2:

The distribution locks to a specific version of the Amazon Linux package repository, giving you control over how and when you absorb updates. By default, and in contrast with Amazon Linux 2, a dnf update command will not update your installed packages (dnf is the successor to yum). This helps to ensure that you are using the same package versions across your fleet.
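
As a sketch of how this works in practice (command names per the AL2023 documentation; the release label below is illustrative), checking for and deliberately moving to a newer repository version looks like this:

# Ask whether a newer Amazon Linux release is available
dnf check-release-update

# Deliberately move the instance to a specific (illustrative) release version
sudo dnf upgrade --releasever=2023.0.20230322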

Customers wanting to leverage AL2023 can use the EC2 run-instances API, the AWS Command Line Interface (AWS CLI), or the AWS Management Console, and one of the four Amazon Linux 2023 AMIs that AWS provides – which support two machine architectures (x86_64 and Arm) and two sizes (standard and minimal):

  • arm64 architecture (standard AMI): al2023-ami-kernel-default-arm64
  • arm64 architecture (minimal AMI): al2023-ami-minimal-kernel-default-arm64
  • x86_64 architecture (standard AMI): al2023-ami-kernel-default-x86_64
  • x86_64 architecture (minimal AMI): al2023-ami-minimal-kernel-default-x86_64
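
For example, the latest standard x86_64 AMI can be resolved at launch time through its public SSM parameter. A sketch, assuming the published parameter path and an illustrative instance type:

aws ec2 run-instances \
  --image-id resolve:ssm:/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64 \
  --instance-type t3.micro \
  --count 1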

AWS also distributes Amazon Linux 2023 as Docker images from Amazon Elastic Container Registry (Amazon ECR) and Docker Hub. These images are built from the same software components included in the Amazon Linux 2023 AMI. 

Ro’i Bandel, a DevOps engineer, concluded in a Medium blog post:

Amazon Linux 2023 is an exciting new release. There are many things to like about it, including the new Fedora base, updated packages, improved performance, and security. However, because of the many breaking changes, it is not an easy upgrade to recommend for existing Amazon Linux 2 users. The limited package availability also makes it not suitable for some workloads, which might still be better served by other popular AMIs (such as Ubuntu).

Amazon Linux 2023 is available in all AWS Regions, including the AWS GovCloud (US) and the China Regions. More details are available in the documentation pages and FAQs.


HashiCorp Consul Improves Envoy Integration, Adds Debugging Tool

MMS Founder
MMS Matt Campbell

Article originally posted on InfoQ. Visit InfoQ

HashiCorp has released Consul 1.15, adding new features that improve interacting with Envoy and troubleshooting issues within the service mesh platform. The release introduces improvements to Envoy access logging as well as adding Consul Envoy extensions. To improve the troubleshooting experience, a new service-to-service troubleshooting tool has been added.

Consul provides support for both a built-in L4 proxy, Connect, as well as first-class support for Envoy. Proxies permit applications to connect to other services within the service mesh. In previous releases, escape-hatch overrides had to be used to modify the existing Envoy configuration to rewrite Envoy resources to be compatible with Consul. This was difficult to use as it required understanding how Consul named Envoy resources.

This release introduces a new extension system to Consul that allows operators to modify Consul-generated Envoy resources without having to customize the Consul binary. Extensions can add, delete, or modify Envoy listeners, routes, clusters, and endpoints. At the time of release, the Lua and AWS Lambda extensions are supported.

Envoy extensions are configured using the EnvoyExtensions field, which is definable in both the proxy-defaults and service-defaults configuration entries. However, it is recommended to enable EnvoyExtensions with service-defaults.
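
As a sketch, a service-defaults entry wiring in the built-in Lua extension could look like the following (shape per the Consul 1.15 documentation; the service name and script are illustrative):

Kind     = "service-defaults"
Name     = "api"
Protocol = "http"
EnvoyExtensions = [
  {
    Name = "builtin/lua"
    Arguments = {
      ProxyType = "connect-proxy"
      Listener  = "inbound"
      # Illustrative Lua: tag every inbound request with a header
      Script    = <<-EOF
        function envoy_on_request(request_handle)
          request_handle:headers():add("x-lua-seen", "true")
        end
      EOF
    }
  }
]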

Within Consul, Envoy access logging provides details to help understand the incoming traffic patterns to the proxy. In previous versions, adjusting the bootstrapping configuration for Envoy to enable access logs required using escape hatches.

In Consul 1.15, logs are now centrally managed via config entries and CRDs. This simplifies enabling and disabling access logs for all proxies within the service mesh. Logs can be configured to output different request properties and to output to a stdout pipe or a file. For example, enabling the logs within the proxy-defaults configuration entry can be done as follows:

Kind      = "proxy-defaults"
Name      = "global"
AccessLogs {
  Enabled = true
  Type = "stdout"
}

This release introduces a new built-in tool for service-to-service troubleshooting. The tool will validate the Envoy configurations on both the upstream and downstream proxies for both VM and Kubernetes setups. When run, the tool performs a number of checks to help detect issues including:

  • Validating the existence of the upstream service
  • Checking if one or both hosts are unhealthy
  • Checking if a filter is affecting the upstream service
  • Validating if the certificate authority or any services have expired mTLS certificates

For example, to troubleshoot between services on Kubernetes the upstream IP address is used:

consul-k8s troubleshoot proxy -pod frontend-767ccfc8f9-6f6gx -upstream-ip 10.4.6.160

Version 1.14 of Consul introduced a new Consul Dataplane that removed the need for deploying the Consul client agent when using Kubernetes. However, this had the side effect of removing the rate-limiting support provided by Consul clients. With version 1.15 the Consul server has added support for rate limiting.

This includes a set of global limits for read and write operations for each Consul server as well as a mode to apply when that limit is reached. This mode can be one of enforcing, permissive, or disabled. When set to enforcing, the rate limiter will deny any request to the server that exceeds the configured rate. Permissive mode will continue to allow requests but will produce metrics and logs to review the traffic patterns. This mode is intended to be used during configuration and troubleshooting. Setting the mode to disabled will disable the rate limiter.
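
In agent-configuration terms, this is a sketch of how the global limits and mode are expressed on a Consul server (field names per the Consul 1.15 documentation; the rates are illustrative):

limits {
  request_limits {
    # "permissive" logs and reports without denying requests
    mode       = "permissive"
    read_rate  = 500
    write_rate = 200
  }
}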

Consul 1.15 is now generally available. More information about the release can be found on the HashiCorp blog or within the Consul documentation.


Document Databases Software Market [USD 17.39 Billion by 2032] – Enterprise Apps Today

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Document Databases Software Market: Revenue 22.8% | [A Key Resource for Industry Analysis] by 2032

Market Overview

Published Via 11Press: According to recent research reports, the global Document Databases Software market size was valued at USD 2.23 billion in 2022 and is projected to reach a value of USD 17.39 billion by 2032, growing at a compound annual growth rate (CAGR) of 22.8% between 2022 and 2032.

Document databases are non-relational databases that store semi-structured and unstructured data in JSON, BSON, or XML documents. These databases offer scalability, flexibility, and ease of use – making them ideal for modern application development.

The growing adoption of cloud computing, the need for data-driven decision-making, and real-time data processing are some of the primary factors driving growth in the document databases market. Furthermore, document databases are finding greater applications in mobile and web applications as NoSQL databases gain popularity.

The Asia Pacific region is expected to experience significant growth over the forecast period due to the increasing adoption of digital technologies in emerging economies such as India and China. North America and Europe also appear to remain key markets due to a large number of document database vendors and high cloud computing adoption rates there.

Request For Sample Report Here: https://market.us/report/document-databases-software-market/request-sample

Key Takeaway

  • The market for Document Databases is expected to expand significantly over the coming years, due to increased demand for cloud databases and the adoption of NoSQL databases.
  • MongoDB, Amazon, and ArangoDB are the main players in Document Databases.
  • The market is segmented by type, application, and region.
  • Due to its cost-effectiveness and scalability, the cloud-based segment will grow at the highest rate of all types.
  • Document Databases can be adopted faster by small and mid-sized enterprises due to their ease of use and lower cost.
  • Large enterprises and SMEs are among the main users of Document Databases.
  • North America is expected to be the market leader in Document Databases, due to the presence of major players there and the high adoption of new technologies.
  • Due to the growing adoption of digital transformation and cloud-based services, significant growth is expected for the Asia Pacific region.
  • The Document Databases market will continue to grow due to the increasing need for efficient data administration and the growing trend toward big data analysis.
  • However, cloud-based databases could pose security and privacy issues that may limit the market’s growth.

Regional Snapshot

  • North America: North America will hold the largest market share in Document Databases due to the presence of major players such as MongoDB and Amazon Web Services. This market is also driven by the high adoption of new technologies and the growing need for data management.
  • Europe: Europe will see significant growth in the Document Databases Market due to the increasing adoption of cloud-based services and the growing trend of digital transformation initiatives. This region is dominated by Google, Oracle, and Couchbase.
  • Asia Pacific: Asia Pacific is expected to see significant growth in the Document Databases market, owing to increased adoption of cloud-based services as well as digital transformation initiatives. Tencent Cloud, Alibaba Cloud, and Huawei Cloud are the key players in this region.
  • Middle East & Africa: Middle East & Africa is expected to see moderate growth of the Document Databases Market due to increased adoption of cloud-based Services and the rising trend of digital Transformation Initiatives. These are the major players: Oracle, Microsoft, and IBM.
  • Latin America: Latin America should see moderate growth in the Document Databases market due to the increased adoption of cloud-based services and the rising trend of digital transformation initiatives. Microsoft, IBM, Oracle, and others are the main players in this region.

Market Dynamics

Drivers

  • Cloud databases are experiencing an unprecedented surge in demand: Cloud databases offer scalability, adaptability, and cost efficiency that have propelled Document Database adoption among organizations.
  • Adoption of NoSQL databases: NoSQL databases, such as Document Databases, are becoming more and more popular due to their capacity for handling unstructured data and supporting flexible data models.
  • The growing trend towards digital transformation: As organizations look to implement these initiatives, Document Databases are becoming increasingly necessary in order to efficiently manage and store large amounts of data.
  • Increasing Need for Efficient Data Management: As data volume and complexity increase, companies are seeking efficient ways to manage their data – driving demand for Document Databases as one such solution.
  • The growing trend in big data analytics: As more businesses adopt this technique, demand is rising for Document Databases, which can handle large volumes of information and offer real-time analysis.
  • Document Databases offer lower costs and easier deployment than traditional relational databases, making them attractive to small and medium-sized enterprises (SMEs).
  • Support for Multiple Applications: Document Databases can accommodate a range of applications, such as content management, mobile and web apps, e-commerce stores, and social networks, making them highly versatile and widely applicable.

Restraints

  • Security and Privacy Issues: Cloud-based Document Databases may present security and privacy risks, particularly if sensitive data is stored in the cloud.
  • Lack of Standardization: The absence of standards in Document Databases makes it challenging for organizations to select the ideal solution and integrate it with their existing infrastructure.
  • Integration Issues: Integrating Document Databases with legacy systems can present some difficulties, which may deter some organizations from adopting them.
  • Limited Expertise: Adopting Document Databases necessitates specialized expertise, which may present a challenge for some organizations that lack the necessary resources or personnel.
  • Performance Issues: Document Databases may not perform as well as traditional relational databases in certain scenarios, such as complex queries or transactions involving multiple tables.
  • Limited Ecosystem: The Document Databases ecosystem is not as developed as that of traditional relational databases, making it difficult to locate compatible tools and applications.
  • Resistance to Change: Some organizations may be unwilling to embrace innovation and opt for traditional relational databases, which could restrict the growth of the Document Database market.

Opportunities

  • Growing Demand for IoT: With the increasing adoption of Internet of Things (IoT) devices, there is an escalating demand for structured data management through Document Databases.
  • Expansion of Cloud Computing: The continued rise in cloud computing is creating new opportunities for Document Databases, as they are well suited to deployment on cloud platforms.
  • Rising Demand for Real-Time Analytics: The growing need for real-time analytics is driving the demand for Document Databases, which offer fast and efficient data processing and analysis.
  • Adoption of Artificial Intelligence: As artificial intelligence (AI) technologies gain traction, document databases are being given an edge as they can handle vast amounts of unstructured data which is frequently employed in AI applications.
  • Increased focus on customer experience: The growing significance of customer experience is driving demand for Document Databases, as they can efficiently store and manage customer data.
  • Need for Agility and Flexibility Systems: The demand for agile and adaptable systems has driven the adoption of NoSQL databases, such as Document Databases, which offer more versatile data models than traditional relational databases.
  • Expansion into New Industries: Document Databases can be applied in a variety of industries, such as healthcare, finance, retail, and media – creating opportunities for expansion into new markets.

View Detailed TOC of the Report | https://market.us/report/document-databases-software-market/table-of-content/

Challenges

  • Competition from traditional relational databases: Traditional relational databases remain the dominant option on the market, and many organizations remain hesitant to switch over due to associated costs and risks.
  • Limited Awareness and Education: Despite the growing popularity of NoSQL databases, such as Document Databases, many organizations still lack awareness or the necessary knowledge and skillset to effectively utilize them.
  • Integration with Existing Systems: Integrating Document Databases with legacy systems can be complex and time-consuming, which may deter some organizations from adopting them.
  • Data Security Concerns: Document Databases can be vulnerable to cyber-attacks and data breaches, which could harm an organization’s reputation and result in financial losses.
  • Data Privacy Compliance: As organizations place greater emphasis on data privacy and protection, they must abide by various regulations such as GDPR and CCPA – something which may prove challenging when using Document Databases.
  • Scalability Limitations: While Document Databases are designed for maximum scalability, there may still be limits on how much data can be stored and processed efficiently.
  • Implementation Cost and Complexity: Implementing Document Databases can be costly and time-consuming, especially for organizations with limited resources or expertise, which could impede adoption.

Key Market Segments

Type

  • Cloud-Based
  • Web Based

Application

  • Large Enterprises
  • SMEs

Key Market Players

  • MongoDB
  • Amazon
  • ArangoDB
  • Azure Cosmos DB
  • Couchbase
  • MarkLogic
  • RethinkDB
  • CouchDB
  • SQL-RD
  • OrientDB

Report Scope

  • Market size value in 2022: USD 2.23 Bn
  • Revenue forecast by 2032: USD 17.39 Bn
  • Growth rate: CAGR of 22.8%
  • Regions covered: North America, Europe, Asia Pacific, Latin America, Middle East & Africa, and Rest of the World
  • Historical years: 2017-2022
  • Base year: 2022
  • Estimated year: 2023
  • Short-term projection year: 2028
  • Long-term projected year: 2032

Recent Development

  • One recent development in the Document Database market is the increasing adoption of multi-model databases. Multi-model databases enable organizations to support multiple data models, such as document, graph, and key-value models within one database. This provides greater flexibility and versatility by allowing organizations to use different models for various data types and applications.
  • Another recent development is the growing adoption of blockchain technology in Document Databases. Blockchain provides greater transparency and security within Document Databases, making it simpler to track changes and confirm data authenticity.

Frequently Asked Question

Q: What is the current market size for the Document Databases Software Market?
A: According to a report by Market.us, the Document Databases Software Market was valued at USD 2.23 billion in 2022 and is expected to reach USD 17.39 billion by 2032, growing at a CAGR of 22.8% during the forecast period.

Q: What are the key segments of the Document Databases Software Market?
A: The Document Databases Software Market can be segmented based on type (Cloud Based, Web Based), application (Large Enterprises, SMEs), and geography (North America, Europe, Asia-Pacific, Latin America, and Middle East & Africa).

Q: Who are the key players in the Document Databases Software Market?
A: Some of the key players in the Document Databases Software Market include MongoDB, Amazon, ArangoDB, Azure Cosmos DB, Couchbase, MarkLogic, RethinkDB, CouchDB, SQL-RD, and OrientDB.

Prudour Private Limited

The team behind market.us, marketresearch.biz, market.biz and more. Our purpose is to keep our customers ahead of the game with regard to the markets. They may fluctuate up or down, but we will help you to stay ahead of the curve in these market fluctuations.

Our consistent growth and ability to deliver in-depth analyses and market insight have engaged genuine market players. They have faith in us to offer the data and information they require to make balanced and decisive marketing decisions.


Java 20 Delivers Features for Projects Amber, Loom and Panama

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

Oracle has released version 20 of the Java programming language and virtual machine. The seven (7) JEPs in this final feature set include:

  • JEP 429: Scoped Values (Incubator)
  • JEP 432: Record Patterns (Second Preview)
  • JEP 433: Pattern Matching for switch (Fourth Preview)
  • JEP 434: Foreign Function & Memory API (Second Preview)
  • JEP 436: Virtual Threads (Second Preview)
  • JEP 437: Structured Concurrency (Second Incubator)
  • JEP 438: Vector API (Fifth Incubator)

The feature cadence for Java 20 is similar to that of the seven (7) new features in JDK 19 and nine (9) new features in JDK 18. However, this is lower than some of the more recent pre-JDK 18 releases: 14 features in JDK 17; 17 features in JDK 16; 14 features in JDK 15; and 16 features in JDK 14.

This release features JEPs that provide continued contribution toward Project Amber, Project Loom and Project Panama along with new rounds of preview and incubation. We examine a few of these new features here. It is worth noting that there were no JEPs representing Project Valhalla in JDK 20.

Project Panama

JEP 434 and JEP 438 fall under the auspices of Project Panama, a project designed to improve and enrich interoperability between the JVM and well-defined “foreign,” i.e., non-Java, APIs that will most-likely include interfaces commonly used within C libraries.

JEP 434, Foreign Function & Memory API (Second Preview), incorporates refinements based on feedback and provides a second preview of JEP 424, Foreign Function & Memory API (Preview), delivered in JDK 19; the related incubating JEP 419, Foreign Function & Memory API (Second Incubator), delivered in JDK 18; and JEP 412, Foreign Function & Memory API (Incubator), delivered in JDK 17. This feature provides an API for Java applications to interoperate with code and data outside of the Java runtime by efficiently invoking foreign functions and by safely accessing foreign memory that is not managed by the JVM. Updates from JEP 424 include: the MemorySegment and MemoryAddress interfaces are now unified, i.e., memory addresses are modeled by zero-length memory segments; and the sealed MemoryLayout interface has been enhanced to facilitate usage with JEP 427, Pattern Matching for switch (Third Preview), delivered in JDK 19.
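
As a rough sketch of the JDK 20 preview API (names shifted between preview rounds, so treat the exact calls as indicative, and compile with --enable-preview), invoking the C strlen function looks roughly like this:

import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;
import static java.lang.foreign.ValueLayout.*;

public class StrlenDemo {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // Locate strlen in the default (C standard) library
        MemorySegment strlenAddr =
                linker.defaultLookup().find("strlen").orElseThrow();
        MethodHandle strlen = linker.downcallHandle(
                strlenAddr, FunctionDescriptor.of(JAVA_LONG, ADDRESS));
        // A confined arena scopes the lifetime of the native memory
        try (Arena arena = Arena.openConfined()) {
            MemorySegment cString = arena.allocateUtf8String("Hello, Panama!");
            long length = (long) strlen.invokeExact(cString);
            System.out.println(length); // 14
        }
    }
}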

JEP 438, Vector API (Fifth Incubator), incorporates enhancements in response to feedback from the previous four rounds of incubation: JEP 426, Vector API (Fourth Incubator), delivered in JDK 19; JEP 417, Vector API (Third Incubator), delivered in JDK 18; JEP 414, Vector API (Second Incubator), delivered in JDK 17; and JEP 338, Vector API (Incubator), delivered as an incubator module in JDK 16. This feature proposes to enhance the Vector API to load and store vectors to and from a MemorySegment as defined by JEP 424, Foreign Function & Memory API (Preview).
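
To give a flavor of the API, here is a sketch of an element-wise multiply-add over float arrays (requires --add-modules jdk.incubator.vector; method names per the incubating API):

import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class FmaDemo {
    static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // c[i] = a[i] * b[i] + c[i], vectorized with a scalar tail loop
    static void multiplyAdd(float[] a, float[] b, float[] c) {
        int i = 0;
        int upperBound = SPECIES.loopBound(a.length);
        for (; i < upperBound; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            FloatVector vc = FloatVector.fromArray(SPECIES, c, i);
            va.fma(vb, vc).intoArray(c, i);
        }
        for (; i < a.length; i++) { // scalar tail
            c[i] = a[i] * b[i] + c[i];
        }
    }
}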

A working application on how to implement the Foreign Function & Memory API may be found in this GitHub repository by Carl Dea, senior developer advocate at Azul.

Project Loom

JEP 429, JEP 436 and JEP 437 fall under the auspices of Project Loom, a project designed to explore, incubate and deliver Java VM features and APIs built for the purpose of supporting easy-to-use, high-throughput lightweight concurrency and new programming models. This would be accomplished via virtual threads, delimited continuations and tail calls.

JEP 429, Scoped Values (Incubator), an incubating JEP formerly known as Extent-Local Variables (Incubator), proposes to enable sharing of immutable data within and across threads. This is preferred to thread-local variables, especially when using large numbers of virtual threads.
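
A sketch of the incubating API (module jdk.incubator.concurrent in JDK 20; treat names as indicative):

import jdk.incubator.concurrent.ScopedValue;

public class ScopedValueDemo {
    // Immutable context shared with everything called inside the scope
    static final ScopedValue<String> REQUEST_ID = ScopedValue.newInstance();

    public static void main(String[] args) {
        ScopedValue.where(REQUEST_ID, "req-42").run(ScopedValueDemo::handle);
    }

    static void handle() {
        // The value is readable here, but cannot be reassigned
        System.out.println("Handling " + REQUEST_ID.get());
    }
}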

JEP 436, Virtual Threads (Second Preview), proposes a second preview from JEP 425, Virtual Threads (Preview), delivered in JDK 19, to allow time for additional feedback and experience for this feature to progress. This feature provides virtual threads, lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications, to the Java platform. It is important to note that no changes are within this preview except for a small number of APIs from JEP 425 that were made permanent in JDK 19 and, therefore, not proposed in this second preview. More details on JEP 425 may be found in this InfoQ news story and this JEP Café screen cast by José Paumard, Java developer advocate, Java Platform Group at Oracle.
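
The programming model is deliberately the familiar one; as a sketch (a preview API, so run with --enable-preview), spawning many cheap blocking tasks looks like this:

import java.time.Duration;
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        // One virtual thread per task; a blocking sleep parks the virtual
        // thread and frees its carrier OS thread for other work
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    Thread.sleep(Duration.ofSeconds(1));
                    return null;
                });
            }
        } // close() implicitly waits for all submitted tasks
    }
}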

JEP 437, Structured Concurrency (Second Incubator), proposes to reincubate this feature from JEP 428, Structured Concurrency (Incubator), delivered in JDK 19, to allow time for additional feedback and experience. The intent of this feature is to simplify multithreaded programming by introducing a library to treat multiple tasks running in different threads as a single unit of work. This can streamline error handling and cancellation, improve reliability, and enhance observability. The only change is an updated StructuredTaskScope class to support the inheritance of scoped values by threads created in a task scope. This streamlines the sharing of immutable data across threads. Further details on JEP 428 may be found in this InfoQ news story.
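
A sketch of the incubating API (JDK 20's jdk.incubator.concurrent; fork returned Future in this round):

import java.util.concurrent.Future;
import jdk.incubator.concurrent.StructuredTaskScope;

public class StructuredDemo {
    record Response(String user, String order) {}

    static Response handle() throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            Future<String> user  = scope.fork(StructuredDemo::fetchUser);
            Future<String> order = scope.fork(StructuredDemo::fetchOrder);
            scope.join();          // wait for both subtasks
            scope.throwIfFailed(); // propagate the first failure, if any
            return new Response(user.resultNow(), order.resultNow());
        }
    }

    static String fetchUser()  { return "alice"; }
    static String fetchOrder() { return "order-1"; }
}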

Working applications on how to implement the Virtual Threads and Structured Concurrency APIs may be found in: this GitHub repository by Nicolai Parlog, Java developer advocate at Oracle; and this GitHub repository by Bazlur Rahman, Senior Software Engineer at Contrast Security.

Project Amber

JEP 432 and JEP 433 fall under the auspices of Project Amber, a project designed to explore and incubate smaller Java language features to improve productivity.

JEP 432, Record Patterns (Second Preview), incorporates enhancements in response to feedback from the previous round of preview, JEP 405, Record Patterns (Preview). This proposes to enhance the language with record patterns to deconstruct record values. Record patterns may be used in conjunction with type patterns to “enable a powerful, declarative, and composable form of data navigation and processing.” Type patterns were recently extended for use in switch case labels via: JEP 427, Pattern Matching for switch (Third Preview), delivered in JDK 19; JEP 420, Pattern Matching for switch (Second Preview), delivered in JDK 18; and JEP 406, Pattern Matching for switch (Preview), delivered in JDK 17. Changes from JEP 405 include: added support for inference of type arguments of generic record patterns; added support for record patterns to appear in the header of an enhanced for statement; and removed support for named record patterns.
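
As a brief illustration (a preview feature, so --enable-preview is required), nested record patterns deconstruct a value in a single step:

public class Geometry {
    record Point(int x, int y) {}
    record Line(Point start, Point end) {}

    static int squaredLength(Object obj) {
        // The nested pattern binds x1, y1, x2, y2 directly
        if (obj instanceof Line(Point(var x1, var y1), Point(var x2, var y2))) {
            int dx = x2 - x1, dy = y2 - y1;
            return dx * dx + dy * dy;
        }
        return 0;
    }
}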

Similarly, JEP 433, Pattern Matching for switch (Fourth Preview), incorporates enhancements in response to feedback from the previous three rounds of preview: JEP 427, Pattern Matching for switch (Third Preview), delivered in JDK 19; JEP 420, Pattern Matching for switch (Second Preview), delivered in JDK 18; and JEP 406, Pattern Matching for switch (Preview), delivered in JDK 17. Changes from JEP 427 include: a simplified grammar for switch labels; and inference of type arguments for generic type patterns and record patterns is now supported in switch expressions and statements along with the other constructs that support patterns.
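
Combined with record patterns, a switch can match on structure and add when guards; a short sketch of a method that would sit in the same class as the Point record above:

static String describe(Object obj) {
    return switch (obj) {
        case Integer i when i > 0 -> "positive int " + i;
        case Integer i            -> "non-positive int " + i;
        case Point(var x, var y)  -> "point at (" + x + ", " + y + ")";
        case null, default        -> "something else";
    };
}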

A working application on how to implement the Record Patterns and Pattern Matching for switch APIs may be found in this GitHub repository, java-19 folder, by Wesley Egberto, Java technical lead at Global Points.

JDK 21

Only one (1) JEP has been Targeted for inclusion in JDK 21 at this time. JEP 431, Sequenced Collections, has been promoted from Proposed to Target to Targeted status for JDK 21. This JEP proposes to introduce “a new family of interfaces that represent the concept of a collection whose elements are arranged in a well-defined sequence or ordering, as a structural property of the collection.” Motivation was due to a lack of a well-defined ordering and uniform set of operations within the Collections Framework. Further details on JEP 431 may be found in this InfoQ news story.
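
Per the JEP text, List and other ordered collections would retrofit the new interface; a sketch of the proposed operations (targeted at JDK 21, so names may still change):

import java.util.ArrayList;
import java.util.List;

public class SequencedDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(List.of("a", "b", "c"));
        names.addFirst("z");                  // [z, a, b, c]
        System.out.println(names.getFirst()); // z
        System.out.println(names.getLast());  // c
        System.out.println(names.reversed()); // view: [c, b, a, z]
    }
}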

However, based on recently submitted JEP drafts and JEP candidates that propose finalized features, we have surmised which JEPs have the potential to be included in JDK 21.

JEP 440, Record Patterns, has been promoted from its JEP Draft 8300541 to Candidate status this past week. This JEP finalizes this feature and incorporates enhancements in response to feedback from the previous two rounds of preview: JEP 432, Record Patterns (Second Preview), delivered in JDK 20; and JEP 405, Record Patterns (Preview), delivered in JDK 19. This feature enhances the language with record patterns to deconstruct record values. Record patterns may be used in conjunction with type patterns to “enable a powerful, declarative, and composable form of data navigation and processing.” Type patterns were recently extended for use in switch case labels via: JEP 420, Pattern Matching for switch (Second Preview), delivered in JDK 18, and JEP 406, Pattern Matching for switch (Preview), delivered in JDK 17. The most significant change from JEP 432 removed support for record patterns appearing in the header of an enhanced for statement.

Similarly, JEP 441: Pattern Matching for switch, has been promoted from its JEP Draft 8300542 to Candidate status. This JEP also finalizes this feature and incorporates enhancements in response to feedback from the previous four rounds of preview: JEP 433, Pattern Matching for switch (Fourth Preview), delivered in JDK 20; JEP 427, Pattern Matching for switch (Third Preview), delivered in JDK 19; JEP 420, Pattern Matching for switch (Second Preview), delivered in JDK 18; and JEP 406, Pattern Matching for switch (Preview), delivered in JDK 17. This feature enhances the language with pattern matching for switch expressions and statements.

JEP 442, Foreign Function & Memory API (Third Preview), has been promoted from its JEP Draft 8301625 to Candidate status. This JEP incorporates refinements based on feedback and provides a third preview, following: JEP 434, Foreign Function & Memory API (Second Preview), delivered in JDK 20; JEP 424, Foreign Function & Memory API (Preview), delivered in JDK 19, and the related incubating JEP 419, Foreign Function & Memory API (Second Incubator), delivered in JDK 18; and JEP 412, Foreign Function & Memory API (Incubator), delivered in JDK 17. This feature provides an API for Java applications to interoperate with code and data outside of the Java runtime by efficiently invoking foreign functions and by safely accessing foreign memory that is not managed by the JVM. Updates from JEP 434 include: centralizing the management of the lifetimes of native segments in the Arena interface; enhanced layout paths with a new element to dereference address layouts; and removal of the VaList class.

JEP Draft 8303683, Virtual Threads, was submitted by Ron Pressler, architect and technical lead for Project Loom at Oracle, and Alan Bateman, architect, Java Platform Group, at Oracle this past week. This JEP proposes to finalize this feature based on feedback from the previous two rounds of preview: JEP 436, Virtual Threads (Second Preview), delivered in JDK 20; and JEP 425, Virtual Threads (Preview), delivered in JDK 19. This feature provides virtual threads, lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications, to the Java platform. The most significant change from JEP 436 is that virtual threads now fully support thread-local variables by eliminating the option to opt-out of using these variables. Further details on JEP 425 may be found in this InfoQ news story and this JEP Café screen cast by José Paumard, Java developer advocate, Java Platform Group at Oracle.

The formal release date for JDK 21 has not yet been announced, but it is expected to be delivered in mid-September 2023 as per the six-month release cadence. Developers can anticipate a feature freeze in mid-June 2023. More details on additional JEP drafts and candidates may be found in this more detailed InfoQ news story.

JDK 20 may now be downloaded from Oracle with binaries from other vendors expected to become available in the coming days.


ScyllaDB Challenges DynamoDB on Latency and Pricing – The New Stack

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

ScyllaDB considers its database system to be such a close competitor to AWS’ DynamoDB that it offers ScyllaDB Alternator, an A/B testing suite for comparing the two.



ScyllaDB is a distributed database that operates at scale and is architected for data-intensive applications that need high performance and low latency. The creators consider the database system to be a close competitor to Amazon Web Services’ DynamoDB NoSQL database service. They are so confident in this claim that they have released ScyllaDB Alternator, which offers A/B testing between Scylla and Dynamo with just a few scripts and zero downtime.

At ScyllaDB Summit 2023, ScyllaDB Vice President of Product Tzach Livyatan made this case in the talk “Use ScyllaDB Alternator to Use Amazon DynamoDB API, Everywhere, Better, More Affordable, All at Once.”

Livyatan compared ScyllaDB and Amazon’s DynamoDB in a side-by-side matchup on price and latency. In his testing, the results show that Scylla had lower latency and lower cost. ScyllaDB’s Alternator tool is a DynamoDB-compatible API which makes any application using DynamoDB also ScyllaDB-compatible.

Any vendor’s self-reported numbers should be taken with more than one grain of salt. Nonetheless, there is a lot of insight that could be gained through reviewing this work.

Round One: Latency Testing

While the Scylla team had the home-court advantage of a deep understanding of ScyllaDB and how to tune it in testing, DynamoDB was a “black box” with its underlying technology unknown. They had to go through the same trial-and-error mishaps and learnings as everyone else to test the two.

Here are the specs for latency testing:

  • Yahoo! Cloud Serving Benchmark (YCSB) 0.18.0+, the “standard for NoSQL databases” per Livyatan
  • Scylla’s “latest and greatest” version, Scylla Enterprise 2022.2
  • A three-node Scylla cluster — i4i.2xlarge, split across us-east-1 zones b, c, and d. Scylla Cloud defaults to three zones, which favors reliability at the potential cost of slightly higher latency.
  • The loaders comprised eight i4i.2xlarge nodes, with each machine running three instances of YCSB. In total there were 18 processes of 40 threads each, for a parallelism of 720. A test with 50 threads showed no performance gains.
  • They found the maximum throughput and then brought the load back down to 70% of it, since latency suffers at maximum throughput.
  • Testing was done with Uniform, Zipfian, and Hotspot distributions, but this article only references Hotspot. Hotspot mimics the real-world scenario of hot partitions — many partitions, but only a few receive the bulk of the traffic.
  • 1TB of storage, since latency was being tested.

The results for DynamoDB and ScyllaDB were presented as latency graphs (not reproduced here); they show that latencies are higher in DynamoDB.

An overall use case comparison (including yearly cost) took the results from both tests and compared them against one another. In ScyllaDB’s estimation, ScyllaDB has lower latencies and delivers significant cost savings.

Similar comparisons were done for provisioned tables, which also came out in ScyllaDB’s favor, according to ScyllaDB. ScyllaDB provisions by cluster rather than table so if a cluster has additional allocation and a table is spiking, that table can sweep up the additional allocation. This concept allows for the provisioning of the entire database rather than the individual tables.

Zero Downtime Migration

Alternator, an open source DynamoDB-compatible API built on REST/HTTP, provides a way for applications using DynamoDB to adopt ScyllaDB as a drop-in replacement. With Alternator, Scylla is compatible with the same applications, SDKs, data modeling, queries, and other features. There’s an in-depth demo starting at 17:17 of the video of the presentation.



TypeScript 5 GA Extends Decorators, Stabilizes New Module Resolution Option, and More

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

After announcing TypeScript 5.0 Beta three months ago, TypeScript 5 has finally reached general availability. Among the most relevant changes are extended support for decorators, enabling their placement before or after export and export default; the new bundler module resolution option; and more.

InfoQ already provided a brief intro to the salient new features brought by TypeScript 5 after its beta was announced. Decorators are by far the biggest new feature, making it possible to decorate classes and their members to make them more easily reusable. For example, you could create a loggedMethod decorator to add some logging behaviour to an existing function. Decorators are just syntactic glue aiming to simplify the definition of higher order functions.
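
As a sketch along the lines of the TypeScript 5.0 announcement (identifiers are illustrative), such a decorator can wrap a method with logging:

function loggedMethod<This, Args extends any[], Return>(
  target: (this: This, ...args: Args) => Return,
  context: ClassMethodDecoratorContext<This, (this: This, ...args: Args) => Return>
) {
  const methodName = String(context.name);
  function replacementMethod(this: This, ...args: Args): Return {
    console.log(`Entering '${methodName}'.`);
    const result = target.call(this, ...args);
    console.log(`Exiting '${methodName}'.`);
    return result;
  }
  return replacementMethod;
}

class Greeter {
  @loggedMethod
  greet(name: string) {
    return `Hello, ${name}!`;
  }
}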

The mentioned change to decorators, making it possible to place them before or after export and export default, is basically a compatibility requirement with future releases of ECMAScript/JavaScript.

TypeScript 5 introduced a new module resolution option, named --moduleResolution bundler, aimed at improving coexistence with modern bundlers like Vite, esbuild, swc, Webpack, Parcel, and others. TypeScript 5 GA makes this option usable only with esnext modules.

This was done to ensure that import statements written in input files won’t be transformed to require calls before the bundler resolves them, whether or not the bundler or loader respects TypeScript’s module option

The older module resolution options are, on the contrary, more suitable for packages meant to be distributed through npm.
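
A minimal tsconfig for an application that a bundler compiles and emits might therefore look like this (a sketch; adjust to the bundler in use):

{
  "compilerOptions": {
    "target": "esnext",
    "module": "esnext",
    "moduleResolution": "bundler",
    "noEmit": true
  }
}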

While TypeScript 5.0 is the first to use the major version 5, it is useful to recall that TypeScript does not follow semantic versioning, meaning that moving from TypeScript 4.9 to 5.0 is not qualitatively different than going from 4.8 to 4.9.

This does not mean, though, that TypeScript 5 does not introduce a number of breaking changes and deprecations, like the complete overhaul of the enum type, which could break existing code.


The Sentence Similarity Scenario in ML.NET Model Builder

MMS Founder
MMS Robert Krzaczynski

Article originally posted on InfoQ. Visit InfoQ

Microsoft recently published information about adding the Sentence Similarity scenario in Model Builder. This scenario allows the training of custom sentence similarity models. With the addition of this scenario to Model Builder, it is also no longer necessary to install the Model Builder GPU extension. Moreover, Microsoft outlined upcoming work in the areas of deep learning, the LightGBM algorithm, and AutoML.

A few months ago, Microsoft released a preview version of the Sentence Similarity API, which provides a way to train a sentence similarity-based machine learning model using custom data. This is achieved by integrating TorchSharp‘s NAS-BERT implementation with ML.NET. This is the same underlying transformer-based model used by the Text Classification API. The Sentence Similarity API starts from a pre-trained version of this model and fine-tunes it with custom data.
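
The API surface, as previewed (the trainer name and parameter names below follow the preview announcement and may differ in later releases), pairs two text columns with a numeric similarity label:

using Microsoft.ML;
using Microsoft.ML.Data;

public class SentencePair
{
    [LoadColumn(0)] public string Sentence1 { get; set; }
    [LoadColumn(1)] public string Sentence2 { get; set; }
    [LoadColumn(2)] public float Similarity { get; set; }
}

// Sketch only: assumes a tab-separated file of sentence pairs
var mlContext = new MLContext();
var data = mlContext.Data.LoadFromTextFile<SentencePair>("pairs.tsv", hasHeader: true);

var pipeline = mlContext.Regression.Trainers.SentenceSimilarity(
    labelColumnName: "Similarity",
    sentence1ColumnName: "Sentence1",
    sentence2ColumnName: "Sentence2");

var model = pipeline.Fit(data);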

In order to start using this scenario, Model Builder must be installed or upgraded to the latest version 16.14.4. The Sentence Similarity Scenario supports local training on both CPU and GPU. For GPUs, a CUDA-compatible GPU is required. More information on configuring the GPU is available in the ML.NET GPU guide.

As of version 16.14.4 of Model Builder, it is no longer necessary to install the GPU extension. In previous versions, supporting the GPU in Model Builder required installing the Model Builder GPU extension, in addition to meeting the hardware requirements and installing the appropriate drivers.

The addition of this scenario received positive feedback from the community. For instance, a Reddit user wrote that this is a solution that he has been working on for some time and will apply this scenario in his project.

Microsoft additionally informed about development plans for machine learning solutions in the coming months. First of all, it will be expanding a deep learning scenario. This scope includes scenario APIs such as text classification and sentence similarity for object detection, question answering, and named entity recognition. Another point relates to updating the version of LightGBM supported in ML.NET and improving interoperability by allowing LightGBM models to be loaded in their native format. There are also further improvements planned for the AutoML API over the next year to enable new scenarios and customizations to simplify machine learning workflows.

The entire list of changes is available in the Model Builder release notes.


Article: Using ASP.NET Core 7 Minimal APIs: Request Filters, Parameter Mapping, and More

MMS Founder
MMS Fiodar Sazanavets

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • Minimal APIs primarily exist for writing REST API endpoints while avoiding the verbosity of using classic Web API controllers
  • Minimal APIs invoke endpoints via simple method calls, such as app.MapGet and app.MapPost.
  • Minimal APIs support any standard API functionality, such as authorization filters and dependency injection.
  • Minimal APIs can use bespoke request filters that allow validation of the request and the capability to execute code before or after processing the request.
  • Headers and query string parameters can now be auto-mapped to variables, so developers no longer need to write custom code to extract them.
  • Developers can now use Minimal APIs to upload files to the server easily.
     

Introduction

Traditionally, ASP.NET Core applications allowed developers to build REST API endpoints using the so-called controller classes. Separate controllers would represent separate domains or entities within the application. Also, typically all endpoints assigned to the same controller would share a portion of the URL path. For example, a class called UsersController would be used to manage user data, and all of its endpoints would start with the /users URL path.

Since .NET 6, Minimal APIs on ASP.NET Core allow us to build REST API endpoints without defining controllers. Instead, we can map a specific HTTP verb to a method directly in the application startup code.

There are some major benefits to using Minimal APIs. One such benefit is that it’s way less verbose than using traditional controllers. Each endpoint requires only a few lines of code to set up. It’s easy to write and easy to read. There’s also less cognitive load associated with reading it.

Another major benefit of using Minimal APIs is that it’s faster than using traditional controllers. Because there is less code and a much more simplified bootstrapping, the methods representing the endpoints become easier to both compile and execute.

Of course, there are some disadvantages to using Minimal APIs too. Firstly, organizing a REST API into separate controllers would make the code more readable and maintainable in an enterprise-grade application with a large number of endpoints. Secondly, as Minimal APIs are a relatively new technology, some features from controller-based APIs might be missing. However, since the .NET 7 release, this is limited to a subset of lesser-known features for which it is relatively easy to find a substitute; therefore, purely from the functionality perspective, Minimal APIs can now replace controller-based APIs in almost all scenarios.

In this article, we will cover the features that have been added to Minimal APIs with the .NET 7 release. We will examine why minimal APIs are now almost as powerful as the traditional controller-based APIs while being far less verbose.

Minimal API basics

Let’s imagine that we have an ASP.NET Core Web API project called ToDoApiApp. This is a .NET 7 project that uses a simplified Program.cs file as its entry point. The content of this file will be as follows:

using Microsoft.AspNetCore.Mvc;
using TodoApiApp;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddSingleton<ITodosRepository, TodosRepository>();

var app = builder.Build();

app.MapGet("/todos", (
    [FromServices] ITodosRepository repository) =>
{
    return repository.GetTodos();
});

app.MapGet("/todos/{id}", (
    [FromRoute] int id,
    [FromServices] ITodosRepository repository) =>
{
    return repository.GetTodo(id);
});

app.Run();

We can see two examples of Minimal API endpoints, as they are registered directly in the application startup logic. Once we have created the app variable, we call the AppGet method twice. We are associating the first call with the /todos URL path. The second call is associated with the /todos/{id} URL path.

In both cases, we have an anonymous method associated with the specified path. In the first method, we are returning the results of the GetTodos method of the repository object. In the other instance, we are passing the integer id parameter to the GetTodo method of the repository object and returning the results.

So, as we can see from the above example, Minimal API endpoints use a mapping method directly on the object that represents the application. The mapping methods are specific to an HTTP verb. In the above example, both instances use a mapping method specific to the HTTP GET verb. This is why it’s called MapGet. If we were to use the POST or PUT verb, we would have used MapPost or MapPut, respectively.

Once we call this method, the first parameter is the URL path that this method maps to. We can use route parameters by wrapping them in curly brackets as in the /todos/{id} path. Then we have the method that maps to this path. We can either pass a reference to an existing method or have an anonymous method, as we do in both of the examples above.

Our methods use two types of parameters. The parameters marked with the FromRoute attribute are extracted from the URL path that the method is mapped to. The parameters marked with the FromServices attribute represent objects extracted from the dependency injection system.

In our case, we are injecting an implementation of the ITodosRepository interface in both routes. We previously mapped this interface to the TodosRepository class by calling the AddSingleton method on the Services property of the builder object. The interface and the class definitions are as follows:

namespace TodoApiApp;

public interface ITodosRepository
{
    IEnumerable<(int Id, string Description)> GetTodos();
    string GetTodo(int id);
    void InsertTodo(string description);
    void UpdateTodo(int id, string description);
    void DeleteTodo(int id);
}

internal class TodosRepository : ITodosRepository
{
    private readonly Dictionary<int, string> todos
        = new Dictionary<int, string>();
    private int currentId = 1;

    public IEnumerable<(int Id, string Description)> GetTodos()
    {
        var results = new List<(int Id, string Description)>();

        foreach (var item in todos)
        {
            results.Add((item.Key,
                item.Value));
        }

        return results;
    }

    public string GetTodo(int id)
    {
        return todos[id];
    }

    public void InsertTodo(
        string description)
    {
        todos[currentId] = description;
        currentId++;
    }

    public void UpdateTodo(
        int id, string description)
    {
        todos[id] = description;
    }

    public void DeleteTodo(
        int id)
    {
        todos.Remove(id);
    }
}

Essentially, this service represents a TODO list where we can read, add, modify, and delete the items. The items are stored in the dictionary, where each item is represented as a string value while it has a unique integer identifier represented by the dictionary key. So far, we have implemented REST API endpoints for two actions: reading the whole list and reading individual items. We will now implement the remaining actions while demonstrating the Minimal APIs features added in .NET 7.

Request filters in Minimal APIs

Request filtering is a powerful feature of Minimal APIs that allows developers to apply additional processing steps to incoming requests. A filter can execute code before the request reaches the endpoint method, after it has run, or both.

Request filtering is commonly used for request validation. This is how we will use it in our example. In our Program.cs file, we will insert the following code just before the Run method call on the app object.

app.MapPost("/todos/{description}", (
    [FromRoute] string description,
    [FromServices] ITodosRepository repository) =>
{
    repository.InsertTodo(description);

}).AddEndpointFilter(async (context, next) =>
{
    var description = (string?)context.Arguments[0];
    if (string.IsNullOrWhiteSpace(description))
    {
        return Results.Problem("Empty TODO description not allowed!");
    }
    return await next(context);
});

In this example, we mapped a POST endpoint that allows us to insert an item into the TODO list. We added an endpoint filter to it by calling the AddEndpointFilter method. This is one of the ways of adding an endpoint filter. In this specific implementation, the method accepts two parameters: context and next.

The context parameter represents the context of the request. We can extract data from it and either examine or modify it. The next parameter represents the next step in the request-processing chain. We can add as many request filters as we want. The endpoint method will be executed once there are no more processing steps in the chain.

To short-circuit the request-processing chain and return a response early, we can return a result directly, such as the one produced by the static Problem method on the Results class, without calling next. In our example, we check whether the submitted TODO item is empty and, if it is, short-circuit the pipeline and return a validation error. Otherwise, we pass the request to the next processing step by calling the method represented by the next delegate.
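When the same validation is needed on several endpoints, the filter can instead be written as a reusable class implementing the IEndpointFilter interface. The following is a minimal sketch of that approach; the class name is hypothetical and not part of the original example, and it assumes the validated description is the first argument of the endpoint method it is attached to:

// A reusable endpoint filter performing the same validation as
// the inline lambda shown above.
internal class NonEmptyDescriptionFilter : IEndpointFilter
{
    public async ValueTask<object?> InvokeAsync(
        EndpointFilterInvocationContext context,
        EndpointFilterDelegate next)
    {
        // Assumes the description is the endpoint method's first argument.
        var description = (string?)context.Arguments[0];
        if (string.IsNullOrWhiteSpace(description))
        {
            return Results.Problem("Empty TODO description not allowed!");
        }
        return await next(context);
    }
}

The filter would then be attached by calling AddEndpointFilter<NonEmptyDescriptionFilter>() on the endpoint instead of passing the inline lambda.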

Automatic mapping of headers and query string parameters

Another useful feature of Minimal APIs is the ability to automatically map request parameters from the query string or the headers. All we need to do is create an object with the properties representing the expected parameters, pass this object as a parameter into the endpoint method, and mark it with the AsParameters attribute.

We have an example of it here. This is another endpoint mapping that we will add to the app object. This method is mapped to the PUT HTTP verb and is used for modifying the existing items in the TODO list.

app.MapPut("/todos/{id}", (
    [FromRoute] int id,
    [AsParameters] EditTodoRequest request,
    [FromServices] ITodosRepository repository) =>
{
    repository.UpdateTodo(id, request.Description);
});

We are passing an object of the EditTodoRequest type as a collection of input parameters. This type is defined as follows:

internal struct EditTodoRequest
{
    public string Description { get; set; }
}

Because we have a single property called Description, it will be automatically mapped to a request parameter called description; for example, a request to PUT /todos/1?description=Buy%20milk would update the item with identifier 1. If we needed to pass additional parameters, we could add further properties to this struct, and they would be mapped accordingly.
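For completeness, the one remaining repository action, deleting an item, can be wired up in the same style as the endpoints above. This endpoint is a sketch rather than part of the original listing, but it follows directly from the mapping conventions already described:

app.MapDelete("/todos/{id}", (
    [FromRoute] int id,
    [FromServices] ITodosRepository repository) =>
{
    // Remove the item with the given identifier from the TODO list.
    repository.DeleteTodo(id);
});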

Working with file data in Minimal APIs

Before .NET 7, Minimal APIs accepted only basic request types, but now we can perform more advanced actions, such as file processing. Uploading a file to an API endpoint is now as simple as adding an IFormFile parameter to the endpoint method.

To demonstrate how file upload works, we added the following endpoint method to the endpoint mappings in the Program.cs file. This method assumes that the file we are uploading is a text file where each line represents a new TODO item to be inserted.

app.MapPost("/todos/upload", (IFormFile file,
    [FromServices] ITodosRepository repository) =>
{
    using var reader = new StreamReader(file.OpenReadStream());

    while (reader.Peek() >= 0)
        repository.InsertTodo(reader.ReadLine() ?? string.Empty);

});

This method demonstrates that reading from an uploaded file is as simple as calling OpenReadStream on the IFormFile implementation. We can then check whether any characters remain by calling the Peek method on the reader and, if so, read the next line by calling the ReadLine method.
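Minimal APIs in .NET 7 can also bind a whole collection of uploaded files through the IFormFileCollection type. The following companion endpoint is a sketch rather than part of the original example, assuming the same one-item-per-line file format:

app.MapPost("/todos/upload-many", (IFormFileCollection files,
    [FromServices] ITodosRepository repository) =>
{
    // Insert one TODO item per line from each uploaded file.
    foreach (var file in files)
    {
        using var reader = new StreamReader(file.OpenReadStream());

        while (reader.Peek() >= 0)
            repository.InsertTodo(reader.ReadLine() ?? string.Empty);
    }
});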

Conclusion

This concludes our overview of the most prominent features added to ASP.NET Core Minimal APIs with the .NET 7 release; they are not the only additions, but they are the most significant.

With the current functionality, Minimal APIs are almost as powerful as traditional controller-based APIs. There are still use cases in which controller-based APIs are preferable; for example, controllers are easier to organize in applications with a large number of API endpoints. But in terms of available functionality, there is almost nothing that controllers can do that Minimal APIs cannot.



Java News Roundup: New JEPs, GraalVM 23 Early-Access, Infinispan, Mojarra, Micrometer Metrics

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for March 13th, 2023 features news from OpenJDK, JDK 20, JDK 21, GraalVM 23.0 early-access, Spring Tools 4.18, Quarkus 3.0-Alpha6, Hibernate ORM 6.2 CR4, Micrometer Metrics 1.11, Micrometer Tracing 1.1, Infinispan 14.0.7, Piranha 23.3, Project Reactor 2022.0.5, Eclipse Mojarra 4.0.2, Apache Groovy 4.0.10 and 3.0.16, JHipster Lite 0.29.0, JReleaser 1.5.1 and JobRunr 6.1.2.

OpenJDK

JEP 440, Record Patterns, has been promoted from its JEP Draft 8300541 to Candidate status this past week. This JEP finalizes the feature and incorporates enhancements in response to feedback from the previous two rounds of preview: JEP 432, Record Patterns (Second Preview), delivered in JDK 20; and JEP 405, Record Patterns (Preview), delivered in JDK 19. This feature enhances the language with record patterns to deconstruct record values. Record patterns may be used in conjunction with type patterns to “enable a powerful, declarative, and composable form of data navigation and processing.” Type patterns were recently extended for use in switch case labels via JEP 420, Pattern Matching for switch (Second Preview), delivered in JDK 18, and JEP 406, Pattern Matching for switch (Preview), delivered in JDK 17. The most significant change from JEP 432 is the removal of support for record patterns appearing in the header of an enhanced for statement.

Similarly, JEP 441, Pattern Matching for switch, has been promoted from its JEP Draft 8300542 to Candidate status. This JEP also finalizes the feature and incorporates enhancements in response to feedback from the previous four rounds of preview: JEP 433, Pattern Matching for switch (Fourth Preview), delivered in JDK 20; JEP 427, Pattern Matching for switch (Third Preview), delivered in JDK 19; JEP 420, Pattern Matching for switch (Second Preview), delivered in JDK 18; and JEP 406, Pattern Matching for switch (Preview), delivered in JDK 17. This feature enhances the language with pattern matching for switch expressions and statements.

JEP 442, Foreign Function & Memory API (Third Preview), has been promoted from its JEP Draft 8301625 to Candidate status. This JEP incorporates refinements based on feedback and provides a third preview following: JEP 434, Foreign Function & Memory API (Second Preview), delivered in JDK 20; JEP 424, Foreign Function & Memory API (Preview), delivered in JDK 19, and the related incubating JEP 419, Foreign Function & Memory API (Second Incubator), delivered in JDK 18; and JEP 412, Foreign Function & Memory API (Incubator), delivered in JDK 17. This feature provides an API for Java applications to interoperate with code and data outside of the Java runtime by efficiently invoking foreign functions and by safely accessing foreign memory that is not managed by the JVM. Updates from JEP 434 include: centralizing the management of the lifetimes of native segments in the Arena interface; enhanced layout paths with a new element to dereference address layouts; and removal of the VaList class.

JEP Draft 8303683, Virtual Threads, was submitted this past week by Ron Pressler, architect and technical lead for Project Loom at Oracle, and Alan Bateman, architect, Java Platform Group at Oracle. This JEP proposes to finalize the feature based on feedback from the previous two rounds of preview: JEP 436, Virtual Threads (Second Preview), delivered in JDK 20; and JEP 425, Virtual Threads (Preview), delivered in JDK 19. This feature provides virtual threads, lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications, to the Java platform. The most significant change from JEP 436 is that virtual threads now fully support thread-local variables by eliminating the option to opt out of using these variables. More details on JEP 425 may be found in this InfoQ news story and this JEP Café screencast by José Paumard, Java developer advocate, Java Platform Group at Oracle.

JEP Draft 8304400, Launch Multi-File Source-Code Programs, also submitted by Pressler, proposes to enhance the Java Launcher to execute an application supplied as one or more files of Java source code. This allows a more gradual transition from small applications to larger ones by postponing a full-blown project setup.

JDK 20

JDK 20 remains in its release candidate phase with the anticipated GA release on March 21, 2023. Build 36 remains the current build in the JDK 20 early-access builds. More details on this build may be found in the release notes.

JDK 21

Build 14 of the JDK 21 early-access builds was also made available this past week featuring updates from Build 13 that include fixes to various issues. Further details on this build may be found in the release notes.

For JDK 20 and JDK 21, developers are encouraged to report bugs via the Java Bug Database.

GraalVM

Oracle Labs has published the latest early-access developer builds for GraalVM 23.0.0. New features include: initial support for Native Image Bundles; improved support for AWT on Linux; and native image recommendations. More details on this release may be found in the release notes.

Spring Framework

The release of Spring Tools 4.18.0 ships with: an upgraded Eclipse 2023-03 IDE; new and improved content-assist for Spring Data repository query methods; a fix for an issue that caused regular Java content-assist in VSCode to stop working; and a fix in m2e that caused resource files, such as application.properties, to not be copied into the target folder. More details on this release may be found in the release notes.

Quarkus

The sixth alpha release of Quarkus 3.0.0 provides these two new features: enabling OpenTelemetry for JDBC by setting the quarkus.datasource.jdbc.telemetry property to true; and the CredentialsProviders interface now supporting MongoDB connections. There were also dependency upgrades to SnakeYaml 2.0, Maven Compiler Plugin 3.11.0, OpenRewrite Maven Plugin 4.41.0, SmallRye Common 2.1.0 and JBoss Threads 3.5.0.Final. More details on this release may be found in the changelog.

Hibernate

The fourth release candidate of Hibernate ORM 6.2 provides 33 bug fixes and 28 refinements based on Java community feedback. This is expected to be the last release candidate before the final release.

Micrometer

The second milestone release of Micrometer Metrics 1.11.0 delivers new features such as: a new metric, jvm.threads.started, that reports the total number of application threads started in the JVM; a new Elasticsearch endpoint, _index_template, to create index templates; the addition of the GC name to the jvm.gc.pause metric; and support for Micrometer libraries on OSGi-based Java runtimes.

Similarly, the second milestone release of Micrometer Tracing 1.1.0 features: parity with Spring Cloud Sleuth annotations; and dependency upgrades to Micrometer 1.11.0-M2 and OpenTelemetry 1.24.0.

Infinispan

Infinispan 14.0.7.Final has been released, featuring support for Spring Framework 6 and Spring Boot 3. Notable bug fixes include: a NullPointerException in the MetricsCollector class; the JSON parser not correctly reporting error locations; the Redis Serialization Protocol (RESP) endpoint being unable to parse requests larger than the packet size; and concurrent access to Spring Session integrations resulting in lost session attributes.

Piranha

The release of Piranha 23.3.0 provides notable changes such as: an updated CodeQL workflow; new JUnit tests for the DefaultAnnotationManager class; and a fix for the RuntimeException thrown when the endpoint application is still in the process of being deployed. More details on this release may be found in their documentation and issue tracker.

Project Reactor

Project Reactor 2022.0.5, the fifth maintenance release, provides dependency upgrades to reactor-core 3.5.4, reactor-addons 3.5.1, reactor-netty 1.1.5, reactor-kafka 1.3.17 and reactor-kotlin-extensions 1.2.2.

Eclipse Mojarra

Eclipse Mojarra 4.0.2 has been released with notable changes such as: cleanup of the MockServletContext class to remove unused methods and add the @Override annotation; cleanup of the ParseXMLTestCase class to remove unused methods, variables and commented-out code; a guarantee that the version() method in the @FacesConfig annotation cannot return null; and a fix for a NumberFormatException upon updating buttons inside a facet header of a data table. More details on this release may be found in the release notes.

Apache Software Foundation

The release of Apache Groovy 4.0.10 delivers notable bug fixes and improvements such as: a confusing error message from the GroovyScriptEngine class; a memory leak in which local variable values were not discarded; the @Builder annotation not working on JDK 16; and the MissingPropertyException truncating the name of a nested class. More details on this release may be found in the release notes.

Similarly, the release of Apache Groovy 3.0.16 ships with notable bug fixes such as: an inability to call methods from the BiPredicate interface on closures or lambdas on JRE 16+; the @CompileStatic annotation confusing statically-imported instances and methods; and an IllegalAccessException when using default interface methods with JDK 17 and Groovy 3.0.9. This release also supports JDK 16. More details on this release may be found in the release notes.

JHipster

The JHipster team has released JHipster Lite 0.29.0 with new features and enhancements such as: the removal of dependencies from the JHipsterModulePackageJson class based on user feedback; the removal of warning messages when more than four ApplicationContext instances are active while testing Cassandra database applications; and a new dependency and configuration for Redis. More details on this release may be found in the release notes.

JReleaser

Version 1.5.1 of JReleaser, a Java utility that streamlines creating project releases, has been released, delivering notable fixes such as: the addition of the missing graalVMNativeImage property to the Native Image assembler utility; a fix for the Java Archive utility generating the wrong format for the JAVA_OPTS environment variable; and improved error handling when executing external commands. More details on this release may be found in the release notes.

JobRunr

JobRunr 6.1.2 has been released featuring two bug fixes: a failure to update metadata, with eventual shutdown, when using MySQL with the useServerPrepStmts property set to true; and an issue with the JobRunr Quarkus Extension in which the JobRunrDocumentDBStorageProviderProducer class does not use the proper configuration.
