Java News Roundup: OpenJDK Updates, Piranha Cloud, Spring Data 2024.0.0, GlassFish, Micrometer

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for May 13th, 2024 features news highlighting: JEP 477, Implicitly Declared Classes and Instance Main Methods (Third Preview), proposed to target for JDK 23; the May 2024 edition of Piranha Cloud; Spring Data 2024.0.0; and point and milestone releases of Spring Framework, GlassFish and Micrometer.

OpenJDK

JEP 477, Implicitly Declared Classes and Instance Main Methods (Third Preview), has been promoted from Candidate to Proposed to Target for JDK 23. Formerly known as Unnamed Classes and Instance Main Methods (Preview), Flexible Main Methods and Anonymous Main Classes (Preview) and Implicit Classes and Enhanced Main Methods (Preview), this JEP incorporates enhancements in response to feedback from the two previous rounds of preview, namely JEP 463, Implicit Classes and Instance Main Methods (Second Preview), delivered in JDK 22, and JEP 445, Unnamed Classes and Instance Main Methods (Preview), delivered in JDK 21. This JEP proposes to “evolve the Java language so that students can write their first programs without needing to understand language features designed for large programs.” This JEP moves forward the September 2022 blog post, Paving the on-ramp, by Brian Goetz, Java Language Architect at Oracle. The latest draft of the specification document by Gavin Bierman, Consulting Member of Technical Staff at Oracle, is open for review by the Java community; the review is expected to conclude on May 21, 2024. More details on JEP 445 may be found in this InfoQ news story.
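
To illustrate the kind of program this feature enables, here is a minimal sketch, assuming JDK 23 with preview features enabled; the file name and the greeting are arbitrary:

// HelloWorld.java: an implicitly declared class with an instance main method.
// Run with the source launcher: java --enable-preview HelloWorld.java
void main() {
    System.out.println("Hello, World!");
}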

JEP 482, Flexible Constructor Bodies (Second Preview), has been promoted from its JEP Draft 8325803 to Candidate status. This JEP proposes a second round of preview and a name change to obtain feedback from the previous round of preview, namely JEP 447, Statements before super(…) (Preview), delivered in JDK 22. This feature allows statements that do not reference the instance being created to appear before the this() or super() calls in a constructor, while preserving the existing safety and initialization guarantees for constructors. Changes in this JEP include: a treatment of local classes; and a relaxation of the restriction that fields cannot be accessed before an explicit constructor invocation to a requirement that fields cannot be read before an explicit constructor invocation. Gavin Bierman, Consulting Member of Technical Staff at Oracle, has provided an initial specification of this JEP for the Java community to review and provide feedback.
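
As a rough sketch of what this preview permits, assuming JDK 23 with preview features enabled, consider validating a constructor argument before invoking the superclass constructor; the Person and Employee classes are invented for illustration rather than taken from the JEP:

class Person {
    final String name;
    Person(String name) { this.name = name; }
}

class Employee extends Person {
    Employee(String name) {
        // Statements may now appear before the explicit super() call,
        // as long as they do not read fields of the instance under construction.
        if (name == null || name.isBlank()) {
            throw new IllegalArgumentException("name must not be blank");
        }
        super(name);
    }
}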

JEP 481, Scoped Values (Third Preview), has been promoted from its JEP Draft 8331056 to Candidate status. Formerly known as Extent-Local Variables (Incubator), this JEP proposes a third preview, with one change, in order to gain additional experience and feedback from one round of incubation and two rounds of preview, namely: JEP 464, Scoped Values (Second Preview), delivered in JDK 22; JEP 446, Scoped Values (Preview), delivered in JDK 21; and JEP 429, Scoped Values (Incubator), delivered in JDK 20. This feature enables sharing of immutable data within and across threads. This is preferred to thread-local variables, especially when using large numbers of virtual threads. The change in this feature is that the operation parameter of the callWhere() method, defined in the ScopedValue class, is now a functional interface, which allows the Java compiler to infer whether a checked exception might be thrown. With this change, the getWhere() method is no longer needed and has been removed.
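
A hedged sketch of the revised API shape follows, assuming JDK 23 with preview features enabled; the USER scoped value and the greeting logic are invented for illustration:

public class ScopedValueExample {
    // ScopedValue lives in java.lang and is a preview API (requires --enable-preview).
    private static final ScopedValue<String> USER = ScopedValue.newInstance();

    public static void main(String[] args) {
        // callWhere() binds USER to "alice" only for the duration of the operation.
        // The operation parameter is a functional interface, so the compiler can
        // infer whether a checked exception might be thrown.
        String greeting = ScopedValue.callWhere(USER, "alice", () -> "Hello, " + USER.get());
        System.out.println(greeting);
    }
}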

JDK 23

Build 23 of the JDK 23 early-access builds was made available this past week featuring updates from Build 22 that include fixes for various issues. Further details on this release may be found in the release notes.

GlassFish

The sixth milestone release of GlassFish 8.0.0 provides bug fixes, dependency upgrades and notable improvements such as: an improved Jakarta Contexts and Dependency Injection TCK; a new Jakarta JSON Processing standalone TCK runner; and improved class loader matching for implementations of the Weld BeanDeploymentArchive interface. More details on this release may be found in the release notes.

GraalVM

Oracle Labs has released version 0.10.2 of Native Build Tools, a GraalVM project consisting of plugins for interoperability with GraalVM Native Image. This latest release provides notable changes such as: a new parameter that allows skipping the build of a native image for POM-type modules, defaulting to false for backwards compatibility; and an improved ClassPathDirectoryAnalyzer class that includes a check of the boolean value of the ignoreExistingResourcesConfig field. Further details on this release may be found in the changelog.

Spring Framework

The second milestone release of Spring Framework 6.2.0 ships with bug fixes, improvements in documentation, dependency upgrades and new features such as: a new pathVariableOrNull() method added to the ServerRequest interface as a nullable variant of the pathVariable() method for Kotlin extensions; a new generateCodeForArgument() method in the CodeFlow class to provide the same functionality as the one defined in the SpelNodeImpl class; and a new CompilableIndexAccessor interface to support compiling expressions that use a custom implementation of the IndexAccessor interface. More details on this release may be found in the release notes.

Similarly, versions 6.1.7, 6.0.20 and 5.3.35 of Spring Framework have also been released featuring bug fixes, improvements in documentation and notable dependency upgrades.

All three versions provide a dependency upgrade to Project Reactor 2023.0.6, 2022.0.19 and 2020.0.44, respectively. Further details on these releases may be found in the release notes for version 6.1.7, version 6.0.20 and version 5.3.35.

Spring Data 2024.0.0 has been released providing new features: support for value expressions in entity- and property-related annotations, aligned with the Spring Framework @Value annotation; and compatibility with the new MongoDB 5.0 driver, in which previously deprecated API has been removed. There were also upgrades to sub-projects such as: Spring Data Commons 3.3.0; Spring Data MongoDB 4.3.0; Spring Data Elasticsearch 5.3.0; and Spring Data Neo4j 7.3.0. This new version will be included in the upcoming release of Spring Boot 3.3.0. More details on this release may be found in the release notes.

Similarly, versions 2023.1.6 and 2023.0.12 of Spring Data have been released providing bug fixes and respective dependency upgrades to sub-projects such as: Spring Data Commons 3.2.6 and 3.1.12; Spring Data MongoDB 4.2.6 and 4.1.12; Spring Data Elasticsearch 5.2.6 and 5.1.12; and Spring Data Neo4j 7.2.6 and 7.1.12. These versions may also be consumed by the upcoming releases of Spring Boot 3.2.6 and 3.1.12, respectively.

The release of Spring Web Services 4.0.11 ships with a dependency upgrade to Spring Framework 6.0.20 and notable changes such as: the elimination of starting/stopping an instance of the Apache ActiveMQ Artemis EmbeddedActiveMQ server before/after every test method in the JmsIntegrationTest class to improve test performance; and the ability to override the securement password using the MessageContext interface from the Wss4jHandler class to efficiently support per-request credentials. Further details on this release may be found in the release notes.

Quarkus

The release of Quarkus 3.10.1 delivers dependency upgrades and notable changes such as: elimination of a possible NullPointerException when the getResources() method defined in the QuarkusClassLoader class returns null; and a resolution to an issue mocking an implementation of the GitInfo interface with @MockitoConfig, an annotation that makes use of the AnnotationsTransformer API, which does not work for synthetic beans, that is, beans whose metadata are programmatically created in a Quarkus extension. More details on this release may be found in the changelog.

Apache Software Foundation

The release of Apache Tomcat 10.1.24 provides bug fixes and notable changes such as: a refactoring of trailer field handling to use an instance of the MimeHeaders class to store trailer fields; corrected error handling from the onError() method defined in the AsyncListener interface for asynchronous dispatch requests; and a resolution to WebDAV locks for non-existent resources regarding thread safety and removal. Further details on this release may be found in the release notes.

Hibernate

The first alpha release of Hibernate Search 7.2.0 delivers dependency upgrades and improvements to the Search DSL such as: the ability to apply queryString and simpleQueryString predicates to numeric and date fields; the ability to define the minimum number of terms that should match using the match predicate; and a new @DistanceProjection annotation to map a constructor parameter to a distance projection. More details on this release may be found in the release notes.

Micrometer

The release of Micrometer Metrics 1.13.0 delivers bug fixes, improvements in documentation, dependency upgrades and many new features such as: support for the @Counted annotation on classes and an update to the CountedAspect class to handle when @Counted is used on a class; removal of an unnecessary call to getConventionName() defined in the PrometheusMeterRegistry class; and the ability to customize the start log message in an implementation of the PushMeterRegistry abstract class. Further details on this release may be found in the release notes.

Similarly, versions 1.12.6 and 1.11.12 of Micrometer Metrics provide dependency upgrades and resolutions to issues such as: a NullPointerException in the DefaultJmsProcessObservationConvention class; and the AnnotationHandler class being unable to see methods from the parent class. More details on these releases may be found in the release notes for version 1.12.6 and version 1.11.12.

The release of Micrometer Tracing 1.3.0 ships with bug fixes, dependency upgrades and new features such as: a new TestSpanReporter class for improved integration tests that include components declared as beans that generate traces; and an implementation of the setParent() and setNoParent() methods, declared in the Tracer interface, in the SimpleSpanBuilder class. Further details on this release may be found in the release notes.

Similarly, versions 1.2.6 and 1.1.13 of Micrometer Tracing provide dependency upgrades to Micrometer Metrics 1.12.6 and 1.11.12, respectively, and a resolution to an instance of the ObservationAwareBaggageThreadLocalAccessor class losing scope in the wrong order because the JUnit @ParameterizedTest annotation executes tests in parallel, which interferes with the baggage scopes and results in data being returned in the wrong spans and traces. More details on these releases may be found in the release notes for version 1.2.6 and version 1.1.13.

Project Reactor

The second milestone release of Project Reactor 2024.0.0 provides dependency upgrades to reactor-core 3.7.0-M2, reactor-pool 1.1.0-M2 and reactor-netty 1.2.0-M2. There was also a realignment to version 2024.0.0-M2 with the reactor-kafka 1.4.0-M1, reactor-addons 3.6.0-M1 and reactor-kotlin-extensions 1.3.0-M1 artifacts that remain unchanged. Further details on this release may be found in the changelog.

Next, Project Reactor 2023.0.6, the sixth maintenance release, provides dependency upgrades to reactor-core 3.6.6. There was also a realignment to version 2023.0.6 with the reactor-netty 1.1.19, reactor-kafka 1.3.23, reactor-pool 1.0.5, reactor-addons 3.5.1 and reactor-kotlin-extensions 1.2.2 artifacts that remain unchanged. More details on this release may be found in the changelog.

Next, Project Reactor 2022.0.19, the nineteenth maintenance release, provides dependency upgrades to reactor-core 3.5.17 and reactor-netty 1.1.19. There was also a realignment to version 2022.0.19 with the reactor-kafka 1.3.23, reactor-pool 1.0.5, reactor-addons 3.5.1 and reactor-kotlin-extensions 1.2.2 artifacts that remain unchanged. Further details on this release may be found in the changelog.

And finally, the release of Project Reactor 2020.0.44, codenamed Europium-SR44, provides dependency upgrades to reactor-core 3.4.38 and reactor-netty 1.0.45. There was also a realignment to version 2020.0.44 with the reactor-kafka 1.3.23, reactor-pool 0.2.12, reactor-addons 3.4.10, reactor-kotlin-extensions 1.1.10 and reactor-rabbitmq 1.5.6 artifacts that remain unchanged. More details on this release may be found in the changelog.

Piranha Cloud

The release of Piranha 24.5.0 delivers notable changes such as: a new Tomcat10xExtension class to provide compatibility with Tomcat 10.0+; a new Glassfish7xExtension class to provide compatibility with GlassFish 7.0+; and removal of the SourceSpy project map. Further details on this release may be found in their documentation and issue tracker.

JobRunr

Version 7.1.2 of JobRunr, a library for background processing in Java that is distributed and backed by persistent storage, has been released providing a resolution to a BeanCreationException from within the validateTables() method defined in the DatabaseCreator class due to the table-prefix property not being properly set. More details on this release may be found in the release notes.

Java Operator SDK

The release of Java Operator SDK 4.9.0 features dependency upgrades and removal of an invalid log message from the ReconciliationDispatcher class. Further details on this release may be found in the release notes.



Google Launches Gemini 1.5 Flash for Lower-Latency and More Efficient AI Serving

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Part of the Gemini family of AI models, Gemini Flash is a lighter-weight iteration that is designed to be faster and more efficient to use than Gemini Pro while offering the same “breakthrough” context window of one million tokens.

Gemini 1.5 Flash is optimized for high-volume, high-frequency tasks at scale and is less expensive to serve, Google says, but it retains the ability for multimodal reasoning across text, audio, and video.

1.5 Flash excels at summarization, chat applications, image and video captioning, data extraction from long documents and tables, and more.

According to Google, Gemini Flash requires on average less than one second for users to start seeing the model’s output after entering their query (sub-second first-token latency) for the majority of use cases.

Gemini Flash has been “distilled” from Gemini Pro, which means it retains the latter’s most essential knowledge and skills in a more compact package. This implies Gemini 1.5 Flash inherits all the improvements that went into Gemini 1.5 Pro, including its efficient Mixture-of-Experts (MoE)-based architecture, larger context window, and enhanced performance.

In its announcement, Google highlights the fact that Gemini 1.5 models are able to support a context window size of up to two million tokens, thus outperforming current competitors, with Gemini 1.5 Flash offering a one million token window by default. This is enough to process a significant amount of information in one go, up to one hour of video, 11 hours of audio, codebases with over 30k lines of code, or over 700,000 words, according to Google.

Other improvements that went into Gemini 1.5 Pro and thus also benefit Gemini 1.5 Flash are code generation, logical reasoning and planning, multi-turn conversation, and audio and image understanding. Thanks to these advances, Gemini 1.5 Pro outperforms Gemini 1.0 Ultra on most benchmarks, Google says, while Gemini 1.5 Flash outperforms Gemini 1.0 Pro.

On a related note, Google has also updated its Gemini Nano model for device-based inference, which reached version 1.0. In its latest iteration, Gemini Nano has expanded its ability to understand images in addition to text inputs, with plans to further expand it to sound and spoken language.

Both Gemini 1.5 Pro and Flash are available in preview and will be generally available in June through Google AI Studio and Vertex AI.



Charles Schwab Investment Management Inc. Acquires 11,439 Shares of MongoDB … – MarketBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Charles Schwab Investment Management Inc. boosted its holdings in MongoDB, Inc. (NASDAQ:MDB) by 4.8% in the fourth quarter, according to the company in its most recent 13F filing with the Securities and Exchange Commission (SEC). The fund owned 249,122 shares of the company’s stock after buying an additional 11,439 shares during the quarter. Charles Schwab Investment Management Inc. owned approximately 0.35% of MongoDB worth $101,854,000 at the end of the most recent reporting period.

Other institutional investors have also recently added to or reduced their stakes in the company. Blue Trust Inc. lifted its holdings in shares of MongoDB by 937.5% during the fourth quarter. Blue Trust Inc. now owns 83 shares of the company’s stock worth $34,000 after purchasing an additional 75 shares during the period. AM Squared Ltd acquired a new position in MongoDB during the 3rd quarter worth about $35,000. Cullen Frost Bankers Inc. acquired a new stake in shares of MongoDB in the third quarter valued at about $35,000. Beacon Capital Management LLC raised its position in shares of MongoDB by 1,111.1% during the fourth quarter. Beacon Capital Management LLC now owns 109 shares of the company’s stock worth $45,000 after acquiring an additional 100 shares during the last quarter. Finally, Huntington National Bank lifted its stake in shares of MongoDB by 279.3% in the third quarter. Huntington National Bank now owns 110 shares of the company’s stock worth $38,000 after acquiring an additional 81 shares during the period. 89.29% of the stock is owned by institutional investors.

Analysts Set New Price Targets

MDB has been the topic of a number of recent analyst reports. Stifel Nicolaus restated a “buy” rating and issued a $435.00 price target on shares of MongoDB in a research report on Thursday, March 14th. Citigroup upped their price target on shares of MongoDB from $515.00 to $550.00 and gave the company a “buy” rating in a research report on Wednesday, March 6th. Needham & Company LLC reissued a “buy” rating and issued a $465.00 price objective on shares of MongoDB in a research note on Friday, May 3rd. KeyCorp decreased their target price on MongoDB from $490.00 to $440.00 and set an “overweight” rating for the company in a research note on Thursday, April 18th. Finally, Tigress Financial boosted their price target on MongoDB from $495.00 to $500.00 and gave the company a “buy” rating in a research note on Thursday, March 28th. Two investment analysts have rated the stock with a sell rating, three have assigned a hold rating and twenty have issued a buy rating to the company’s stock. Based on data from MarketBeat.com, the stock has an average rating of “Moderate Buy” and a consensus target price of $443.86.

View Our Latest Stock Report on MongoDB

Insider Buying and Selling

In related news, Director Dwight A. Merriman sold 1,000 shares of the firm’s stock in a transaction dated Monday, April 1st. The shares were sold at an average price of $363.01, for a total transaction of $363,010.00. Following the completion of the sale, the director now owns 523,896 shares in the company, valued at $190,179,486.96. The transaction was disclosed in a filing with the Securities & Exchange Commission, which is accessible through this link. Also, CEO Dev Ittycheria sold 17,160 shares of the business’s stock in a transaction that occurred on Tuesday, April 2nd. The shares were sold at an average price of $348.11, for a total transaction of $5,973,567.60. Following the completion of the sale, the chief executive officer now directly owns 226,073 shares in the company, valued at approximately $78,698,272.03. The disclosure for this sale can be found here. In the last ninety days, insiders have sold 46,802 shares of company stock valued at $16,514,071. 4.80% of the stock is owned by insiders.

MongoDB Stock Up 1.5%

MDB stock traded up $5.42 during trading on Monday, reaching $358.89. The stock had a trading volume of 673,710 shares, compared to its average volume of 1,330,563. MongoDB, Inc. has a one year low of $264.58 and a one year high of $509.62. The company’s 50 day moving average is $362.63 and its two-hundred day moving average is $391.67. The company has a quick ratio of 4.40, a current ratio of 4.40 and a debt-to-equity ratio of 1.07. The company has a market capitalization of $26.14 billion, a price-to-earnings ratio of -142.53 and a beta of 1.19.

MongoDB (NASDAQ:MDB) last released its earnings results on Thursday, March 7th. The company reported ($1.03) earnings per share (EPS) for the quarter, missing the consensus estimate of ($0.71) by ($0.32). MongoDB had a negative net margin of 10.49% and a negative return on equity of 16.22%. The business had revenue of $458.00 million during the quarter, compared to the consensus estimate of $431.99 million. Equities research analysts forecast that MongoDB, Inc. will post -2.53 EPS for the current year.

MongoDB Company Profile


MongoDB, Inc., together with its subsidiaries, provides a general purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.



MongoDB Atlas Stream Processing Generally Available – I Programmer

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

The MongoDB developers have announced that MongoDB Atlas now has support for stream processing. The news was announced at MongoDB.Local NYC.

MongoDB is a NoSQL document database that stores its documents in a JSON-like format with optional schemas. MongoDB Atlas is the fully managed global cloud version of the software that can be run on AWS, Azure, or Google Cloud.


The developers say MongoDB Atlas Stream Processing makes it easier to use real-time data from a wide variety of sources. The announcement says the feature can be used to make use of data in motion and data at rest to power event-driven applications that can respond to changing conditions. Streaming data from sources like IoT devices is highly dynamic, and the stream processing is designed to let organizations do more with their data in less time and with less operational overhead.

Alongside the stream processing, the team announced that MongoDB Atlas Search Nodes is now generally available on AWS and Google Cloud, and in preview on Microsoft Azure. This provides infrastructure for generative AI and relevance-based search workloads that use MongoDB Atlas Vector Search and MongoDB Atlas Search. MongoDB Atlas Search Nodes are independent of core operational database nodes and can be used to isolate workloads, optimize costs, and reduce query times by up to 60 percent. The feature can also be used to run highly available generative AI and relevance-based search workloads at scale. The MongoDB team also announced the general availability of MongoDB Atlas Vector Search on Knowledge Bases for Amazon Bedrock, saying this will enable organizations to build generative AI application features using fully managed foundation models (FMs) more easily.

Another preview, MongoDB Atlas Edge Server, was also announced for deploying and operating distributed applications in the cloud and at the edge. Edge Server provides a local instance of MongoDB with a synchronization server that runs on local or remote infrastructure, meaning applications can access operational data even with intermittent connections to the cloud.

The new features are available now on the MongoDB website.


More Information

MongoDB Website

Related Articles

MongoDB Adds Vector Search

MongoDB 7 Adds Queryable Encryption 

MongoDB 6 Adds Encrypted Query Support

MongoDB 5 Adds Live Resharding

MongoDB Trends

MongoDB Atlas Adds MultiCloud Cluster Support

MongoDB Adds GraphQL Support

MongoDB Improves Query Language



JEP 476: Simplifying Java Development with Module Import

MMS Founder
MMS A N M Bazlur Rahman

Article originally posted on InfoQ. Visit InfoQ

After its review concluded, JEP 476, Module Import Declarations (Preview), was integrated into JDK 23. This preview feature proposes to enhance the Java programming language with the ability to succinctly import all of the packages exported by a module, with the goal of simplifying the reuse of modular libraries without requiring code to be in a module itself.

This JEP streamlines the importing of entire modules in Java, thus simplifying code and making it easier for developers, especially beginners, to utilize libraries and standard classes. This feature reduces the need for multiple import statements and eliminates the necessity of knowing the package hierarchy.

Importantly, this change does not disrupt existing code, as developers are not required to modularize their work. This feature is developed in conjunction with JEP 477, which automatically imports all public classes and interfaces from the java.base module for implicitly declared classes.

The Java programming language includes the automatic import of essential classes from the java.lang package. However, as the platform has evolved, many classes, like List, Map, and Stream, are not automatically included, forcing developers to import them explicitly.

For instance, the following code demonstrates how manually importing several packages consumes unnecessary lines:

import java.util.Map;                   
import java.util.function.Function;     
import java.util.stream.Collectors;     
import java.util.stream.Stream;

String[] fruits = new String[] { "apple", "berry", "citrus" };
Map<String, String> m =
    Stream.of(fruits)
          .collect(Collectors.toMap(s -> s.toUpperCase().substring(0,1),
                                    Function.identity()));

With module imports, the syntax simplifies significantly:

import module java.base;

String[] fruits = new String[] { "apple", "berry", "citrus" };
Map<String, String> m =
    Stream.of(fruits)
          .collect(Collectors.toMap(s -> s.toUpperCase().substring(0,1),
                                    Function.identity()));

A module import declaration follows this pattern:

import module M;

where M is the name of the module whose packages should be imported.

The effect of the import module is twofold:

  • Direct Packages: It imports all public top-level classes and interfaces in packages exported by the module M to the current module.
  • Transitive Dependencies: Packages exported by modules read via transitive dependencies are also imported.

As an example, the declaration import module java.base imports all 54 exported packages, effectively bringing a wide range of classes into scope from java.util to java.io.

However, importing entire modules increases the risk of ambiguous names when multiple packages contain classes with identical simple names. For instance, this example will trigger an error due to ambiguous List references:

import module java.base; // exports java.util.List

import module java.desktop; // exports java.awt.List

List l = ...                // Error - Ambiguous name!

The solution is to import the desired type explicitly. For example, when both java.util.Date (from java.base) and java.sql.Date (from the java.sql module) are in scope, a single-type import resolves the ambiguity:

import java.sql.Date; // resolve the ambiguity of the simple name Date!

Date d = ...                 // Ok! Date is resolved to java.sql.Date

The import module feature is currently a preview feature, available through the --enable-preview flag with the JDK 23 compiler and runtime. To compile and run, use one of the following approaches:

  • Compile the program with javac --release 23 --enable-preview Main.java and run it with java --enable-preview Main; or,

  • When using the source code launcher, run the program with java --enable-preview Main.java; or,

  • When using jshell, start it with jshell --enable-preview.

This JEP aims to provide a cleaner and more modular way to import Java libraries, reducing boilerplate code and enhancing accessibility, especially for new learners and developers working with modular libraries.

Overall, Java’s module import feature promises to improve productivity and ease of development. By simplifying imports, developers can focus more on crafting meaningful code and less on keeping their imports organized.



Article: Delivering Software Securely: Techniques for Building a Resilient and Secure Code Pipeline

MMS Founder
MMS Satrajit Basu

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • A CI/CD pipeline potentially exposes sensitive information. Project teams often overlook the importance of securing their pipelines. They should have a comprehensive plan for securing their pipelines.
  • Access to a pipeline should be restricted. Everyone should have the least privileges required to perform their assigned jobs and no more.
  • To protect sensitive information and prevent it from getting exposed, all data at rest including logs should be encrypted.
  • Build and deployment logs should be treated with the same importance as application logs. These logs should be monitored regularly to make sure that there are no security vulnerabilities.
  • As part of the build and deploy process, data are often logged and stored. This necessitates that the system be compliant with regulatory standards.

Introduction

Data protection is a key component of cloud services, and code pipelines running on public clouds are no exception. Data protection is based on several basic principles designed to protect information from misuse, disclosure, alteration, and destruction. These principles are essential to maintain the confidentiality, integrity, and availability of data in your pipelines. Thus, let’s examine what these principles mean and why they are crucial to your DevOps security posture.

In a code pipeline, data protection is based on universally recognized principles in the field of cybersecurity. The principle of minimum privilege guarantees that only necessary access is granted to resources, thereby reducing the risk of data damage. Encryption acts as a robust barrier, scrambling the data to make it unreadable to unauthorized users. In addition, redundancy prevents data loss by copying crucial data, and audit trails provide historical records of activities for review and compliance. These principles form the basis for building a safe environment that can support your continuous integration and delivery processes.

Encryption is not just a best practice; it is the core of data privacy. Encrypting data at rest ensures that your source code and build artifacts remain confidential. When data is in transit, whether between pipeline stages or external services, SSL/TLS encryption helps prevent interception and man-in-the-middle attacks. This level of data privacy is not only about protecting intellectual property; it is also about maintaining trust and complying with strict regulatory standards governing how data should be managed. In this article, I’ll discuss these topics and use Amazon Web Services (AWS) to cite examples.
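
To make the encryption-at-rest point concrete, here is a minimal, illustrative Java sketch that encrypts a build artifact with the JDK's built-in javax.crypto API before it is written to storage. In a real pipeline the key would typically come from a managed service such as AWS KMS rather than being generated in place, and the file paths below are placeholders.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.SecureRandom;

public class ArtifactEncryptor {
    public static void main(String[] args) throws Exception {
        // Generate a 256-bit AES key; in practice this would come from a key management service.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        // AES-GCM requires a unique 12-byte IV for every encryption.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));

        byte[] artifact = Files.readAllBytes(Path.of("build/artifact.jar")); // placeholder path
        byte[] ciphertext = cipher.doFinal(artifact);

        // Store the IV alongside the ciphertext; it is not secret, but it is required for decryption.
        Files.write(Path.of("build/artifact.jar.enc"), ciphertext);
        Files.write(Path.of("build/artifact.jar.iv"), iv);
    }
}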

Restricting Access

A CI/CD pipeline, like any other sensitive resource, should have restricted access. For example, in AWS, Identity and Access Management (IAM) serves as the gatekeeper of pipeline safety. IAM is an important component in managing access to services and resources securely. You can create and manage users and groups and use permissions to allow or deny access to resources. By defining roles and attaching policies that clearly define what actions are permitted, you can control who can change your pipeline, access its artifacts, and carry out deployments. This granular control is crucial to protecting your CI/CD workflow from unauthorized access and potential threats. To minimize risks, it is essential to respect the principle of minimum privilege, providing the users and services with the minimum level of access necessary to carry out their functions. The following strategies help implement this principle:

  • Create specific roles for different tasks: Design access roles based on the user’s or service’s responsibilities. This avoids a universal permission policy that can lead to excessive privileges.
  • Audit permissions: Review service permissions and ensure that they comply with current requirements – use tools where possible.
  • Use managed policies: Using pre-configured policies with permissions for specific tasks reduces the likelihood of a misconfigured permission.
  • Implement conditional access: This helps to establish the conditions for actions, such as IP whitelisting and time restrictions, to strengthen security.

These strategies ensure that, if a violation occurs, potential damage is limited by limiting access to compromised credentials.

Even with robust permission settings, passwords are vulnerable. Here, Multi-Factor Authentication (MFA) plays a role by adding an additional layer of security. MFA requires users to provide two or more verification factors for access to resources, significantly reducing the risk of unauthorized access. The benefits of implementing MFA on pipelines include:

  • Increased security: MFA requires a second factor — usually a one-time code generated by a hardware device or mobile application — so access remains protected even if password credentials are compromised.
  • Compliance: Many compliance frameworks require MFA as part of control measures. Using MFA, not only secures your pipeline but also meets regulatory standards.
  • User confidence: Demonstrating that you have multiple security control points builds confidence among stakeholders in the protection of your code and data.

Although implementing MFA is an additional step, the security benefits it brings to your pipeline are valuable.

By effectively restricting access to pipelines, you build a safe foundation for CI/CD operations. Through least privilege, you ensure that the right people have the right access at the right time. On top of that, the MFA places guards at the gate and asks all visitors to verify their identity. Together, these practices form a coherent defense strategy that makes your pipelines resistant to threats while maintaining operational efficiency.

Enhancing Logging and Monitoring

Why is improving logging and monitoring similar to installing a high-tech security system at home? In the vast digital landscape, these practices act as vigilant sentinels that mitigate potential threats and ensure that operations run smoothly. Let’s explore their importance.

Like security cameras recording everything that happens in their field of view, pipeline logging captures all actions, transitions, and changes. This information is crucial to identify and analyze potential security threats and performance bottlenecks. On the other hand, monitoring is a continuous process that analyzes these logs in real time and highlights abnormal activities that indicate security concerns or system failures. Together, they provide a comprehensive overview of the health and security situation of the system, enabling teams to react quickly to any abnormality. This combination of historical data and real-time analysis strengthens the pipelines against internal and external threats.

Some technologies that can be used to improve log management include: structured logs, a consistent format of log data to facilitate analysis; log rotation policies, preventing storage overflow by archiving old logs; and log aggregation, merging logs from different sources to create a centralized point for analysis. These tools and techniques will ensure that you have a structured, searchable, and scalable log system.

However, with logs, there are a few points that need to be considered. The most important is to know exactly what is getting logged. It is imperative to make sure that no confidential information is logged. Passwords, access tokens, or other secrets should not be present in any shape or form in a pipeline. If the code being built contains embedded passwords or files that contain sensitive information, these could get logged. So it needs to be confirmed that applications don’t embed secrets, but instead access them from a secret manager post-deployment. This ensures that secrets are not exposed via build logs.
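
To illustrate the point about keeping secrets out of code and build logs, the following hedged sketch retrieves a database password at runtime with the AWS SDK for Java v2 Secrets Manager client instead of embedding it. The secret name is hypothetical, and the sketch assumes the software.amazon.awssdk:secretsmanager dependency is on the classpath and that the runtime environment has an IAM role permitting secretsmanager:GetSecretValue.

import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;

public class SecretLookup {
    public static void main(String[] args) {
        // The client picks up credentials and region from the environment (for example, the agent's IAM role).
        try (SecretsManagerClient client = SecretsManagerClient.create()) {
            GetSecretValueRequest request = GetSecretValueRequest.builder()
                    .secretId("prod/orders-service/db-password")   // hypothetical secret name
                    .build();
            String secret = client.getSecretValue(request).secretString();

            // Use the secret, but never write its value to a log.
            System.out.println("Secret retrieved; length=" + secret.length());
        }
    }
}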

The next thing to consider is: who has access to the logs. There have been situations where pipelines are access-controlled, but the logs are publicly available in read-only mode. This is a common vulnerability that must be checked periodically to ensure only the necessary users can access the logs. As a last line of defense, it’s always a good practice to encrypt the logs.
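
As a small illustration of that last line of defense, a redaction pass such as the sketch below could run over log lines before they are stored or shared; the patterns are illustrative rather than exhaustive, and encrypting the stored logs would still apply on top.

import java.util.List;
import java.util.regex.Pattern;

public class LogRedactor {
    // Illustrative patterns for common secret shapes: key=value pairs and bearer tokens.
    private static final List<Pattern> SECRET_PATTERNS = List.of(
            Pattern.compile("(?i)(password|passwd|secret|token|api[_-]?key)\\s*[=:]\\s*\\S+"),
            Pattern.compile("(?i)bearer\\s+[A-Za-z0-9._-]+")
    );

    public static String redact(String logLine) {
        String result = logLine;
        for (Pattern pattern : SECRET_PATTERNS) {
            result = pattern.matcher(result).replaceAll("[REDACTED]");
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(redact("connecting with password=hunter2 to the build database"));
        // prints: connecting with [REDACTED] to the build database
    }
}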

Ensuring Compliance Validation

After examining the important role of logging and monitoring, we focus on the equally important aspect of compliance. Maintaining compliance is crucial to maintaining trust and ensuring your applications comply with various regulatory standards. Let’s define the regulatory requirements that affect your pipeline and see how automation and reporting features can be used to stay on the right side of these regulations.

Regulatory Requirements

Navigating the sea of regulatory requirements is a daunting task for any organization. These regulations determine how data should be used and protected, and they can vary depending on industry and region. Common frameworks such as GDPR, HIPAA, and SOC 2 are often implemented, each with a complex mandate. For example, the GDPR applies to all businesses dealing with EU citizens’ data and provides for strict data protection and privacy practices. HIPAA protects medical information, and SOC 2 focuses on cloud service security, availability, and privacy. Understanding these guidelines is the first step in designing a compliant pipeline.

Automating Compliance Checks

AWS, like other public clouds, has impressive automation capabilities. By automating compliance checks, teams can ensure that code, applications, and deployments comply with the necessary standards before reaching production. Configuration tools allow you to define rules that reflect compliance requirements. These rules automatically assess the extent to which resources comply with the policy and provide a continuous overview of your compliance status. This proactive compliance approach not only saves time but also reduces human errors and keeps your operations effective and secure.
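
As a simple illustration of what an automated compliance check can look like, the sketch below evaluates a few hypothetical rules against a pipeline configuration before deployment. In practice a managed service such as AWS Config would normally define and evaluate such rules; the configuration keys and thresholds here are made up.

import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class ComplianceChecker {
    // A rule is a name plus a predicate over the pipeline configuration.
    record Rule(String name, Predicate<Map<String, String>> check) {}

    public static void main(String[] args) {
        // Hypothetical pipeline configuration, e.g. loaded from a deployment descriptor.
        Map<String, String> config = Map.of(
                "artifactEncryption", "enabled",
                "logRetentionDays", "30",
                "mfaRequired", "false"
        );

        List<Rule> rules = List.of(
                new Rule("Artifacts encrypted at rest", c -> "enabled".equals(c.get("artifactEncryption"))),
                new Rule("Logs retained for at least 90 days", c -> Integer.parseInt(c.getOrDefault("logRetentionDays", "0")) >= 90),
                new Rule("MFA required for pipeline access", c -> "true".equals(c.get("mfaRequired")))
        );

        boolean compliant = true;
        for (Rule rule : rules) {
            boolean passed = rule.check().test(config);
            compliant &= passed;
            System.out.printf("%-40s %s%n", rule.name(), passed ? "PASS" : "FAIL");
        }
        if (!compliant) {
            throw new IllegalStateException("Compliance checks failed; blocking deployment");
        }
    }
}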

Auditing and Reporting Features for Maintaining Compliance

An audit trail is your best defense during compliance checks. It provides historical records of changes and access that may be crucial during an audit. In AWS, CodePipeline is integrated with services such as CloudTrail to track every action on your pipelines and resources. This integration ensures that no stone is left unturned when it comes to demonstrating compliance efforts. In addition, robust reporting can help you generate the evidence needed to prove compliance with various regulations. Quick and accurate reporting on compliance status can greatly ease the burden during audit periods. In essence, ensuring compliance validation requires a comprehensive understanding of the relevant regulatory requirements, strategic automation of compliance verification, and robust audit and reporting mechanisms. Focusing on these areas can build a safe, resilient, and compliant pipeline that not only protects your data but also protects the integrity of your business.

Building Resilience

In the complicated world of continuous integration and delivery, the resilience of a pipeline is similar to a security net for your deployments. Understanding the concept of resilience in this context is the key. What does it mean for a pipeline to be resilient? Simply put, it means that the pipeline can adapt to changes, recover from failures, and continue to operate even under adverse conditions.

Understanding the Concept of Resilience

Resilience in a pipeline embodies the system’s ability to deal with unexpected events such as network latency, system failures, and resource limitations without causing interruptions. The aim is to design a pipeline that not only provides strength but also maintains self-healing and service continuity. By doing this, you can ensure that the development and deployment of applications can withstand the inevitable failures of any technical environment.

Implementing Fault Tolerance and Disaster Recovery Mechanisms

To introduce fault tolerance into your pipeline, you have to diversify resources and automate recovery processes. In AWS, for example, this includes the deployment of pipelines across several Availability Zones to minimize the risk of single points of failure. When it comes to disaster recovery, it is crucial to have a well-organized plan that covers the procedures for data backup, resource provisioning, and restoration operations. This could include automating backups and using CloudFormation scripts to provision the infrastructure needed quickly.
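
One common building block for fault tolerance is retrying transient failures with exponential backoff. The sketch below shows the general idea in plain Java; the operation, attempt limit, and delays are arbitrary choices for illustration, and a production pipeline would more often rely on the retry behavior built into its deployment tooling.

import java.util.concurrent.Callable;

public class RetryWithBackoff {
    // Retries the operation up to maxAttempts times, doubling the delay after each failure.
    static <T> T retry(Callable<T> operation, int maxAttempts, long initialDelayMillis) throws Exception {
        long delay = initialDelayMillis;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                last = e;
                if (attempt == maxAttempts) {
                    break;
                }
                System.out.printf("Attempt %d failed (%s); retrying in %d ms%n", attempt, e.getMessage(), delay);
                Thread.sleep(delay);
                delay *= 2;
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical deployment step that may fail transiently (for example, a flaky network call).
        String result = retry(() -> {
            if (Math.random() < 0.5) {
                throw new RuntimeException("transient failure");
            }
            return "deployment step succeeded";
        }, 5, 500);
        System.out.println(result);
    }
}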

Testing and Validating Resilience Strategies

How can we ensure that these resilience strategies are not only theoretically effective but also practically effective? Through careful testing and validation. Use chaos engineering principles by intentionally introducing defects into the system to ensure that the pipeline responds as planned. This may include simulating power outages or blocking resources to test the pipeline’s response. In addition, ensure that your disaster recovery plan is continuously validated by conducting drills and updating based on lessons learned. A regularly scheduled game day where teams simulate disaster scenarios helps uncover gaps in their survival strategies and provides useful practice for real incidents. The practice of resilience is iterative and requires continuous vigilance and improvement.

Strengthening Infrastructure Security

After exploring resilience strategies, we should focus on strengthening the foundations on which these strategies are based: infrastructure security. Without a secure infrastructure, even the strongest resilience plan may fail. But what exactly does it mean to secure the infrastructure components and how to achieve this fortification?

Securing Infrastructure Components

The backbone of any CI/CD pipeline is its infrastructure, which includes servers, storage systems, and network resources. Ensuring that these components are secure is essential to protect the pipeline from potential threats. The first step is to complete an in-depth inventory of all assets – know what you must protect before you can protect it. From there, the principle of minimum privilege is applied to restrict access to these resources. This means that users and services have just enough permission to perform their tasks and no more. Next, consider using virtual private clouds (VPCs) and dedicated instances to isolate pipeline infrastructures. This isolation reduces the likelihood of unauthorized access and interference between services. In addition, implement network security measures such as firewalls, intrusion detection systems, and subnets to monitor and control network traffic between resources.

Vulnerability Assessment and Remediation

When vulnerabilities are not controlled, they can become the Achilles heel of a system. Regular vulnerability assessments are crucial to identifying potential security vulnerabilities. In AWS for example, tools like Amazon Inspector can automatically evaluate application exposure, vulnerabilities, and deviations from best practices. Once identified, prioritize these vulnerabilities based on their severity and correct them promptly. Patches must be applied, configurations must be tightened, and outdated components updated or replaced. Remediation is not a one-time task, but a continuous process. Automate scanning and patching processes as much as possible to maintain a consistent defense against emerging threats. These checks must be integrated into the continuous delivery cycle to ensure that each release is as safe as the last.

Embracing Security Best Practices

Security is a continuous practice embedded in software development and delivery. But what are the security practices recommended by the industry that should be applied to ensure the integrity of your CI/CD pipeline? Let’s dive into the essentials.

Overview of Industry-Recommended Security Practices

Starting with the basics, you must secure the source code. It is the blueprint of your application and deserves strict protection. Implementation of best practices for version control, such as the use of pre-commit hooks and peer review workflows, can help mitigate the risk of vulnerabilities in your code base. In addition, the use of static code analysis tools helps identify potential security problems before deployment. Furthermore, dynamic application security testing (DAST) during the staging phase can discover runtime problems that static analysis might miss. Encrypting sensitive data in your pipeline is also essential. Whether it is an environment variable, database credentials, or an API key, encryption ensures that this information remains confidential. Moreover, security practices continue beyond the technical level. It is essential to raise awareness and train developers on secure coding techniques. Encourage your team to keep abreast of the latest security trends and threats to promote an environment in which security is the responsibility of all.

Continuous Security Improvement Through Regular Assessments

Complacency is the enemy of security. In a rapidly evolving environment, regular assessments are essential to maintaining strong defenses. This includes periodic penetration tests to simulate attacks on your system and identify vulnerabilities. But it is not just about finding gaps; it is about learning from them and improving. Post-mortem analysis after any security incident is invaluable in preventing similar problems in the future. Another aspect of continuous improvement is to regularly review IAM roles and policies to ensure that the principle of minimum privilege is strictly implemented. As projects grow and evolve, so do access requirements. Regular audits can prevent unnecessary permission accumulation that could become a potential attack vector. Finally, keep your dependencies up-to-date. Third-party libraries and components may become a liability if they contain unpatched vulnerabilities. Automated tools can help track the versions you use and alert you when updates or patches are available.

Collaborative Security Culture and Knowledge Sharing Within Teams

In the context of CI/CD pipelines, it is imperative to encourage a collaborative security culture to ensure that the entire team is aligned with best security practices. This involves the creation of clear communication channels to report potential security issues and the sharing of knowledge about emerging threats and effective countermeasures. Workshops, training sessions, and even gamified security challenges can improve engagement and knowledge retention between team members. By making security a part of daily conversations, teams can proactively address risks rather than react to incidents. Furthermore, by integrating security into CI/CD pipelines, automated security checks become part of processes, and developers can respond immediately. With these practices, teams secure pipelines and establish a strong security culture permeating all operations. By continuously assessing and improving security measures, staying on the cutting edge of industry standards, and encouraging collaborative security-centered approaches, pipelines can be resilient and secure.

Resilience and Security in Other Cloud Platforms

Throughout this article, I have used AWS to exemplify the various aspects of resilience and security. It is important to note that these technologies are also available in other cloud platforms like Azure and GCP. Azure DevOps comes with a suite of products to implement modern pipelines. Azure Key Vault can be used to manage keys and other secrets. Google Cloud also provides services like Cloud Build, a serverless CI/CD platform, and other related services.

Fundamentally, the principles, techniques, and design considerations for building a resilient and secure pipeline are of prime focus. The technologies to implement these are available in all popular public cloud platforms.

Conclusion

In this article, I have elaborated on the principles of data protection, the importance of encryption as a reliable defense mechanism for maintaining data privacy, and best practices for protecting data in transit and at rest. To summarize, to move towards resilience and security, it is essential to encrypt sensitive information, work with the least privileges, and store secrets in vaults. You must also establish robust audit trails, maintain historical activity logs, and protect your DevOps practices while complying with stringent regulatory standards.



Podcast: If LLMs Do the Easy Programming Tasks – How are Junior Developers Trained? What Have We Done?

MMS Founder
MMS Anthony Alford Roland Meertens

Article originally posted on InfoQ. Visit InfoQ

Introduction [00:18]

Michael Stiefel: Welcome to the What Have I Done Podcast, where we ask ourselves do we really want the technology future that we seem to be heading for? This podcast was inspired by that iconic moment at the end of the movie, The Bridge on the River Kwai, where the British Commander, Colonel Nicholson, realizes that his obsession with building a technologically superior bridge has aided the enemy and asks himself, “What have I done,” right before he dies, falling on the detonator that blows up the bridge.

For our first episode, I wanted to discuss the impact of large language models on software development. I have two guests, well known in the world of InfoQ, Anthony Alford and Roland Meertens. Both host the “Generally AI” Podcast for InfoQ.

Anthony is a director of development at Genesys, where he’s working on several AI and NL projects related to customer experience. He has over 20 years of experience in designing and building scalable software. Anthony holds a PhD degree in electrical engineering with specialization in intelligent robotic software, and has worked on various problems in the areas of human AI interaction and predictive analytics for SaaS business optimization.

Roland is tech lead at Wayve, a company which is building embodied AI for self-driving cars. Besides robotics, he has worked on safety for dating apps, transforming the exciting world of human love into computer-readable data. I can’t wait until that and LLMs get together. He also bakes a pretty good pizza.

Welcome, both of you, to the podcast.

Anthony Alford: Thanks for having us.

Roland Meertens: Yes, thank you very much.

The Software Development Lifecycle of the Future [02:05]

Michael Stiefel: I would like to start out with the assumption that we live at some future time, not probably too far in the future, when the problems of using LLMs for writing code have largely been ironed out. The first question to ask both of you is what does the software development life cycle look like in this world?

Roland Meertens: Well, what I assume is that bots will automatically find issues within software, they can automatically raise a PR, and then other bots automatically accept the improvements. Basically, none of the code is readable anymore, which is not that much different than today.

Anthony Alford: That’s a very cynical take. But then you’ve worked with robots a lot. I start out with my general principle here is, probably what will actually happen is going to be surprising to a lot of people. I’m going to go with the safe bets and I’m going to go with the concept of what robots are for. They’re for automating the tasks that people find dangerous, dirty and dull. I’ve never really experienced a lot of danger or dirt in my software development career, but there’s a lot of dullness. I’m going to go with the idea that the automation, the LLMs, are going to take care of the dull stuff. Like Roland said, that’s definitely pull requests, code reviews for sure. But also things like writing tests, writing documentation. Things that we find hard, like naming variables.

The idea is to free up the time for the human engineers to focus on the important things, like do we use spaces or tabs?

Michael Stiefel: In other words, coding standards.

Roland Meertens: Those are things we should have, ideally, already automated, like those decisions you should give to an intern normally. I think that’s at least already something which you give away.

Who of you two is using GitHub Copilot at the moment?

Anthony Alford: I’ve used it for fun side projects. It’s not bad.

Roland Meertens: But you’re not using it for your day-to-day work?

Legal or Regulatory Issues [04:24]

Anthony Alford: That’s an interesting … One of the premises of this episode is that all of the problems have been ironed out. One of the problems for us, professionally at our company, is we don’t want to send data out into the potential training dataset. There’s also concerns about the code that you get back. Who owns the copyright to that code, for example? No, we’re not using Copilot at work.

Roland Meertens: But it’s because of legal trouble and not because of technical capabilities?

Anthony Alford: More or less, yes.

Teaching Future Developers [04:57]

Michael Stiefel: Well, both of you have hit on the idea that we’re going to use this technology to do all the dull stuff and the easy stuff. Isn’t that traditionally where novice programmers get trained? So the question then becomes, if we’ve automated all the easy stuff, how are we going to train programmers? What is the programming class of the future going to look like?

Anthony Alford: The Copilot is actually, I think, a pretty useful paradigm, if we want to use that word.

Michael Stiefel: Do you want to explain to some of the listeners what the Copilot is? Because they may or may not know what it is.

Anthony Alford: Yes, it can mean a couple things. The capital letter Copilot is a product from GitHub that is a code generating LLM. You type a comment that says, “This function does,” whatever you want the function to do and it will spit out the entire code for the function, maybe.

Michael Stiefel: In whichever language you tell it to.

Anthony Alford: Right, exactly. Copilot could also mean an LLM that’s assisting you, and I think that might be a nice model for training novices. Maybe it’s more of a real-time code reviewer; a real-time debugging assistant might be even better.

The other way to look at it is maybe the LLMs save your senior programmers so much time that they’ll have nothing else to do but mentor the younger ones. I don’t know.

Michael Stiefel: Well, I think what you’re hitting on is one of what I’ve always found to be the paradoxes of programming technology in general. It’s that, unlike other engineering … For example, if you’re a civil engineer, you can spend your entire life building the same bridge over and over again.

Anthony Alford: It’s the Big Dig.

Michael Stiefel: Well, yes. For those of you who don’t live in Boston, that was a very interesting civil engineering experience for many years, paid for with taxpayer money from throughout the United States. But in any case, there are very few projects like that in the engineering world, where you do something that has never been done before.

When I was a graduate student, I took nuclear engineering. On one of the final exams in reactor design, we were supposed to design the cooling system for a nuclear reactor. Not from physics first principles, but by taking the ASME standard and applying it to the design. Software’s very different. In software, if you want another copy of something, we just copy the bits. We tend, in software, always to do new things that we haven’t done before because otherwise, why write a software program?

The question is how do you take an LLM, or any technology that is trained on the past, and apply it to a future that we don’t necessarily know yet?

Roland Meertens: But aren’t you also frequently doing the same thing, over and over again?

Michael Stiefel: Yes.

Roland Meertens: As humankind.

Michael Stiefel: Yes.

Roland Meertens: That’s why Stack Overflow is so popular because everyone seems to be solving the same problems every day.

Michael Stiefel: Yes. But the question is … Let’s say, for example, I go back to the days, well I don’t want to say exactly how old I am, but I remember card readers and even before virtual memory. Yes, there were repetitive things, but people forget things like compilers, linkers, loaders, debuggers. These were all invented at some point in time and they were new technologies that required insight. Firewalls, load balancers. How does an LLM take all these things into consideration if it’s never seen them before?

Roland Meertens: Yes. But also, I think that we started this discussion with how do you learn if you don’t know about the past? In this case, I’m the youngest, being only 33 years old, and I unfortunately missed out on the days of punch card programming.

Michael Stiefel: You didn’t miss much.

Roland Meertens: That’s just what I’m asking is how often do you, in your daily work think, “Oh yes, I remember this from punch card days. This is exactly what I need.”

Michael Stiefel: But the point is who would have thought of a compiler? In other words, an LLM is not going to come up with something new. Is it going to look at data and say, “Ha, if we could do this, it would be great, and this is how we’re going to do something that I’ve never seen before”?

What Will Programmers Actually Understand [09:39]

Roland Meertens: I’m mostly wondering what this means for future senior developers. These are the people who are beginning with programming today. I think the question is are they going to learn faster because they can focus on code 100% of the time, instead of having to go through many obscure online fora to find the API code they need? Or are they not going to build a thorough understanding of code and what the machine actually does, because they just ask ChatGPT to generate everything they do?

Anthony Alford: Yes. I was going to say yes, it’s true that sometimes software developers have to solve a problem that has not come up before, but really I think a more common use case is, like you were talking about with the ASME standards, that you’re basically putting together pieces that you’ve already seen before. Maybe in a novel way, but quite often not really. “I need to make a REST API. I need something that stores …” This is how frameworks like Rails and Django work. They know that there are common patterns that you’re going to go to. I think that the LLM is just the next iteration of that.

Uses of Large Language Models

LLMs Embody Software Patterns of the Future [10:56]

Michael Stiefel: So it’s the next iteration: design patterns, architecture patterns, enterprise integration patterns.

LLMs as the DevOps First Responder [11:02]

Anthony Alford: Probably. The other thing is, like I said, the code itself is not the entire job. There’s a lot of other stuff. Let’s take the DevOps model. In my company, the software engineers who write code are on call for that code in production. What if we could make an LLM the first responder to the pager and have it either automate a remedy or filter out noise? Or pass it along when it’s stuck. LLMs could do other things like helping you design your APIs, helping you generate test cases, maybe even debug or optimize your code.

How Understandable Will LLM Written Code Be? [11:46]

Again, I talked about automating the parts that are dull. Or maybe not necessarily dull, but maybe harder. I don’t think we’re going to see LLMs writing all the code. Maybe we will, but I think it’s still very important. Like Roland said, we don’t want code that’s just completely LLM generated, because then nobody will know how it works.

LLMs as Code Explainers [12:06]

Michael Stiefel: Well, I think it’s hard enough to sometimes figure out how the software we write manually works.

Anthony Alford: Exactly.

Michael Stiefel: In fact, I’ve come across code that I wrote maybe two or three years ago, looked at it, and said, “How does this work?”

Anthony Alford: Sure. That’s what the LLM-

Michael Stiefel: Incidentally, sometimes I try to convince myself there’s a bug, but then I realize I was right the first time. It’s just that I forgot the intricacies of what was going on.

Roland Meertens: It is worse if there’s a comment which says, “To do: fix this,” next to it.

Anthony Alford: Maybe that’s the way that LLMs can help us. The LLM can explain the code to you maybe. That would be a great start.

Michael Stiefel: Yes.

Anthony Alford: What does this code do?

Michael Stiefel: I like your idea about the pager. Having worn a pager for a small period of time in my life, anything that would prevent me from getting beeped in the middle of something else would be a great improvement.

LLMs and Refining Requirements [13:03]

We haven’t quite figured out how you’d train the new programmers yet, because I really want to come back to that. But how do you describe requirements? One of the toughest things to do in software development is figuring out what the requirements are. I’ve worked for myself for many, many years and I’ve said, over the years, there are only three things that I do in my life: inserting levels of indirection, trading off space and time, and trying to figure out what my customers really want.

Anthony Alford: I actually made a note of that as well. If we could get an LLM to translate what the product managers write into something that we can actually implement, I think that would be huge … I’ve had to do that myself. The product managers, let’s assume they know what the customers want because they’re supposed to. They know better than I do. But what they write, sometimes you have to go back and forth with them a couple of times. “What do you really mean? How can I put this in more technical terms?”

Michael Stiefel: Well, I think the point is they very often don’t understand the technology, and neither do the customers. Because one of the things I find is that just because you can write a simple English language statement doesn’t mean it’s easy to implement. That would be interesting. How would you see that working with the LLM? In other words, the product manager says, I don’t know, that we need something fast and responsive. How would the LLM get the product manager to explain what they really mean by that?

Roland Meertens: I think here, there are two possible options again. On the one hand, I think that sometimes thinking about a problem when manually programming gives you some insights, and that also goes for junior developers who need to learn how to code. It’s often not the result which counts, but the process. It’s not about auto-generating a guitar song, it’s about slowly learning and understanding your instrument.

LLMs Generating Prototypes [15:10]

On the other hand, if you have a product manager who asks for a specific website, you can have ChatGPT generate five examples, and they can pinpoint and say, “Yes, that’s what I want.” If you can auto-generate mock-ups or auto-generate some ideas, then maybe you get it right the first time, instead of first having to spend three sprints building the wrong product.

Michael Stiefel: Another idea is to have an LLM be an advanced prototyping tool.

Anthony Alford: Absolutely. I think we’ve all seen that demo where somebody drew a picture on a napkin of a website.

Michael Stiefel: Yes.

Roland Meertens: Right.

Anthony Alford: And they give it to the image understander.

Michael Stiefel: Interesting. I think because there have been a lot of attempts to do prototyping code generation, I’m sure you’ve all seen them in the past. There’s a frustration with them, but maybe large language models can … Again, how would you train such a prototype? Would you put samples before them?

Because one of the things I think with machine learning in general, they don’t understand vagueness very well. In other words, you give a machine learning algorithm something, it comes up with an answer, but it doesn’t come up with probabilities. When you’re doing prototyping, you’re combining probabilities and there’s no certainty. How do you solve that kind of problem? If you understand what I’m trying to get at.

Roland Meertens: But isn’t this the same problem as we used to have with search engines?

Michael Stiefel: Yes.

Roland Meertens: Where people would go to a search engine and they would type, “Please give me a great recipe for a cake.” Now everybody knows to type, “Chocolate cake recipe 15 minutes,” or something.

Michael Stiefel: Right, because we trained the humans that deal with the software.

Roland Meertens: Yes.

Michael Stiefel: But ideally, it really should be the software that can deal with the humans.

Roland Meertens: Yes. I think my father already stopped using search engines and is now only asking ChatGPT for answers, which I don’t know how I feel about that.

Michael Stiefel: I asked ChatGPT to come up with a bio of myself and it came up with something that was 80% true, 20% complete fabrication, but I couldn’t tell. I know because I knew my own bio, but reading it, you couldn’t tell what was true and what was false.

Roland Meertens: But are you paying for GPT-4?

Michael Stiefel: This was on GPT-3, I think, when I was doing this.

Roland Meertens: Yes. I noticed that on GPT-3, it also knew my name. I assumed that it knows all of our names because of InfoQ. Then it generated something which, indeed, was 80% true and 20% made me look better than I actually am.

Michael Stiefel: Yes.

Roland Meertens: I was happy with that. GPT-4 actually seems to do some retrieval.

Anthony Alford: Isn’t that a game, two truths and a lie, or something like that?

Michael Stiefel: Yes. Yes, yes, it is. But isn’t that the worry about using large language models in general?

Anthony Alford: Yes, but I would submit that we already have that problem. The developers have been writing bugs for-

Michael Stiefel: Yes.

Anthony Alford: Ever since, I guess even Ada Lovelace maybe wrote a bug, I don’t know.

Michael Stiefel: Well, supposedly the term bug came about because Grace Hopper found an insect in the hardware. But I think the saving grace, I would like to think, with software developers, unlike the general public, is that they can recognize when the LLM has generated something that makes no sense. It gets caught in a test case. In other words, you could maybe have one LLM generate the test cases and another one generate the code, the way people sometimes like to have machine learning systems battle each other.

Anthony Alford: Yes, and maybe the way we’ll wind up doing this is to invert it. The human is the code reviewer, although I can’t imagine that would … I hope it doesn’t come to that point. Nobody likes reading code.

Michael Stiefel: Oh, no. Yes. I’ve done code reviews because, again, being in this business for a long time, that was a big thing. Code reviews, at some point in time, people said were going to solve the software quality problem. To do a code review is really hard.

Anthony Alford: Yes.

Michael Stiefel: Because you have to understand the assumptions of the code, you have to spend time reading it. I would hope that the LLMs could do the code reviews.

Anthony Alford: Me, too.

Roland Meertens: Well, I mostly want to make sure that we keep things balanced, not that the product manager automatically generates the requirements, and then someone automatically generates the code. Then the reviewer, at the end, has to spot that the requirements were bad to begin with.

Michael Stiefel: Well, you see, you raise an interesting point here, because when we speak of requirements right now, even when you talk about the program manager, the project manager having requirements, that’s already where the requirements are in some way fleshed out. But if you’ve ever done real requirements analysis, and I have, you sit down with the client and they don’t really know what they want, and you have to pull that out of them. There is an art form of asking open-ended questions. Because most of us, when we do requirements analysis, ask them, “Do you want A or do you want B?” But you’ve already narrowed the field and perhaps given the person you’re asking a false choice. You have to be able to deal with open-ended questions. Do you see LLMs being able to deal with open-ended questions? And then refine down, as answers come.

Anthony Alford: I really like the idea of the LLM as a rapid prototyper and even a simulator. I’ve seen projects where people have an LLM pretend to be an API, for example. I have a feeling that would be quite handy.

Michael Stiefel: In that case, you’d still be training the software developers the old way.

Let’s go with that idea, for the moment. The LLMs are the rapid prototypers. They may or may not generate reusable code. Because one thing I found with prototypes is you have to be prepared to throw the whole thing away.

Anthony Alford: Yes.

Michael Stiefel: Because you very often get in trouble when you build a prototype and then you say, “Oh, I want to salvage this code.” You have to be prepared to throw the whole thing away. So we use the LLM to come up with a rapid prototype. Then, what’s the next step?

Also, I’m thinking, when I think of the next step, how do you do the ilities? The security, the scalability. Because it’s one thing to design an algorithm, it’s another thing to design an algorithm that can scale.

Anthony Alford: Yes. Well, the security part, for example: our company, our security team, is already doing a lot of automated stuff. I think that’s another great fit for an LLM. Maybe not an LLM, but automation for sure, is something that security people use a lot. Maybe LLMs are good at reading things like logs.

Michael Stiefel: Yes.

Anthony Alford: To find an intrusion, for example. Anyway, that’s the only part of that that I have an answer for. I don’t know about scalability, other than maybe helping you generate load at scale somehow. Roland, you got an idea?

Roland Meertens: No, not really. In this case, I must also say that as a human, I don’t always know how to do things except for going to InfoQ and seeing how other experts do things. I can only imagine that an LLM has already read all of the articles you guys wrote, so it can already summarize them for me.

Michael Stiefel: Assuming I was right to begin with, in the article.

Anthony Alford: Yes.

Roland Meertens: Yes. Yes, but in that sense, those code generation requirements, I think, could be a good way to brainstorm. I think that something like ChatGPT can remind you to also think about the ilities.

Michael Stiefel: What I hear being developed is that the LLMs are essentially being used as idea generators, checkers to make sure that the human has done things: “Have you considered this? I’ve looked at the code.” Yes, it may generate a lot of stupid things just looking at the code, but it will generate a checklist. “If you use this API, have you considered this? Should you use a Mutex here?” Or something like that. Is that where we’re going with this?

What Could Go Wrong? [24:20]

Roland Meertens: Well, as this podcast is about What Have I Done, I think the dystopian thing I’m not looking forward to is that there will be a day when a junior developer adds me to a pull request, I argue that I am right and their AI-generated code is wrong, and then I probably learn that their ChatGPT-generated code was better to begin with and their AI-generated proposal is faster than my code.

Michael Stiefel: Okay, that’s humiliating for us, but that’s not necessarily a bad future. Going with that idea, what could go wrong? Let’s say we were doing a pre-mortem. You’re familiar with the idea of a pre-mortem. Something is successful and you ask yourself, “Well, what could go wrong?” What could go wrong here?

Anthony Alford: I think we’ve already touched on it. I think a big risk of having generated code like this is when something goes wrong, nobody has an idea of how to solve it.

Here’s something: I have this idea about autonomous vehicles. Some of you who are experts in that area may fact-check me here. My suspicion is that if all vehicles were autonomous, overall traffic accidents would go down. But the ones that happened would be just absurd. It would be stuff that no human would ever do, like on The Office, driving into the lake or something.

Michael Stiefel: Right.

Anthony Alford: I suspect something similar would happen with-

Michael Stiefel: Well, yes.

Anthony Alford: Generated code.

Michael Stiefel: Let’s take your analogy, because I think there’s something very interesting about this. Before you get to fully … I think with self-driving cars, the problem is the world getting to self-driving cars. When you have humans and self-driving cars at the same time, you have a problem. I’ll give you two examples.

One is there’s something called the Pittsburgh Left. For those of us who drive in the United States: generally, if you’re at an intersection, the cars going straight have the right-of-way over the cars that are turning. But in Pittsburgh, there’s a local custom that those making the left turn have the right-of-way. The question is, if you have a car that was trained on data in some other city and it comes to Pittsburgh, what happens in that situation? Or take the situation in Sweden, where they went from driving on the left side of the road to the right side of the road overnight. Humans did wonderfully. I don’t see how a self-driving car could do that.

Roland Meertens: I’m only hearing that we need to learn from situations as fast as possible, and that we need learned driving, so you can actually capture all the different areas at once.

Michael Stiefel: Yes. But also, I think the easy case is when everything is automated. As you say, Anthony, there is that case where it comes across something it didn’t expect, like a chicken running across the road. But if everything’s automated, then everything’s predictable because they all know what they should do. The problem is in the world where you’re half automated and half not, that’s where you get into a problem. I don’t know if there’s an analogy here, since you brought up self-driving cars, with using LLMs to generate code and not knowing if the LLMs are always generating the code.

Roland Meertens: Well, I still think the problem is mostly here with humans, that thinking about the code and thinking about the problem gives you insights into what you actually want to achieve. Whereas if you automate everything, at the end, you maybe have a website very quickly but why did you make this website again? Were you just happy that ChatGPT generated it? I think that’s at least one thing which I noticed when using things like ChatGPT, is that at the start, I used it quite often to help me write a message to a friend. I’m like, “I don’t want to lose the capability of writing messages to a friend.” I think we all lost the capability of remembering 10-digit phone numbers because we just stored them in our phone.

Michael Stiefel: Well, I still could do that but that’s because I got trained with that a long time ago.

Roland Meertens: Yes. The younger folks, they don’t know how to remember 10-digit numbers anymore.

Michael Stiefel: Well, it always amazes me at the checkout, when someone is at the checkout and I sometimes like to use cash instead of a credit card. I can compute the change in my head and the person at the other end is, “How’d you do that?”

Roland Meertens: Yes. Yes, maybe at some point, someone will say, “Wait, you can actually open Notepad and edit the HTML yourself.”

Michael Stiefel: Well, that’s Back to the Future.

Roland Meertens: Yes.

Anthony Alford: We’re going to turn this into “The Kids Today”.

Michael Stiefel: Actually, you raise a very interesting point there because … Let’s go back to the point you were both talking about before, about figuring out what ChatGPT has done. The question is how elegant the code will be … Because if ChatGPT can be clever, you could have the equivalent of go-tos all over the place. It could produce spaghetti code that it understands, but then if you have to, as you say, open up Notepad and look at the HTML, you’ll look at it and say, “What the hell is going on here?” Is that a danger?

Roland Meertens: Have you guys seen that DeepMind’s AlphaDev found a faster sorting algorithm?

Michael Stiefel: No.

Anthony Alford: Yes, I did see that headline.

Roland Meertens: Yes. A while ago, they trained some kind of reinforcement learning agent to generate code and their algorithm found, I think they went from, I don’t know, 33 instructions to 32 or something like that. It was a bit faster than the fastest human written code.

Michael Stiefel: But the question is in sorting algorithms, because if I go back to the good old days, we had to choose them and write them ourselves. Sorting algorithms are not universally used in all cases. For example, Bubble Sort, if I remember right, is a very good sort when the data is almost already in order. Do I recall that right? I don’t know.

The question is, in the situation you just had where you came up with a faster sort, does the algorithm now know the cases to use this sort? Or is it just going to blindly use it every place?

Roland Meertens: Or maybe you can apply this algorithm to your specific deployment.

Michael Stiefel: Yes.

Roland Meertens: You just tell it, “Optimize my P99 latency.” Then you don’t know what’s going on, what kind of A/B tests it sets up, but your P99 latency slowly goes down. You don’t know if that’s maybe because your AI starts rejecting customers from a certain country. I think that’s the real danger: what are you optimizing, at the end of the day?

Michael Stiefel: So what you’re saying is that in this world of LLMs, we have to log a lot of stuff.

Anthony Alford: Yes. Well actually, now that I’ve started thinking about it, if the LLM is going to be part of your software development pipeline, we’re going to want to check the prompts into Git. You’re going to want to commit your prompts to the source repository because now, that’s the source code.

Michael Stiefel: Right.

Anthony Alford: Maybe.

Michael Stiefel: So you have version control on the prompts, and you have … Well, the question is then … All right. Let me think about this for a moment. Because many, many years ago, I worked in the computer design world for the military. The military was one of the users of the application. When they archived the designs, they archived the software that was used to create the design, so if they ever had to revise the design, they could bring back the exact software to use. Are you suggesting perhaps not only do we archive the prompts, but we archive the LLM that was used with those prompts as well?

Anthony Alford: I think you should. It’s almost like pip freezing your requirements for a Python environment. I don’t know. It depends on the model we’re using. If the LLM is just a Copilot and it’s helping you, that’s basically the same as copying and pasting from Stack Overflow.

Michael Stiefel: Right, right. Because you have the responsibility, in the end, for what you cut and paste, or what you put in, or what was generated. The question then becomes, at some point, do LLMs become like compilers, which we just assume work?

I can remember, one time, actually finding a bug in a compiler because it put an instruction on a page boundary, and we took a page fault that actually caused the problem, but that’s really sophisticated and you have to understand what’s going on behind the scenes. Are people going to understand?

Roland Meertens: I think you can build up this understanding faster. Personally, people are probably going to laugh, but I have no clue how to write SQL queries or work with other large databases. I don’t really know how to work with, I don’t know, PySpark. But nowadays, all these tools have AI built in, so the only thing I do is say, “Fetch me this data from this table, and then do this with it, and select these specific things.” The first day that you’re doing this, you have no clue what you’re doing, but you get some auto-generated code which does the thing for you. That’s great, but then after a couple of weeks, you start seeing patterns and you actually start learning it. So it’s more interactive learning for me, slowly learning an API through AI-generated commands. Whenever something crashes, you can actually ask it to fix it, which is insanely powerful.

Michael Stiefel: But you said something very interesting. One of the things you do learn when you’ve done SQL, and believe me, I’ve written my share of SQL in my life, is that for example, if you’re doing certain types of queries, you may want to put indices on columns.

Anthony Alford: Hints.

Michael Stiefel: Or hints. Or you may want to normalize or denormalize things, for example, for performance. There are all kinds of things that you may not learn, or the thing may not know to do. Again, I guess what I’m trying to get at is there’s always some sort of contextual or meta problem. What I’m afraid of, in this new world where LLMs do a lot, is that people lose their knowledge of the meta problems. They lose their ability to make changes, they lose their ability to have long attention spans, or whatever it is, and they lose the context and they begin to trust these things. Then we find ourselves in a situation that it’s too late to get out of, except to rip the whole thing up.

Anthony Alford: I don’t know if we’ll get to that point. But I do think that … As someone with teenage children, I can see the other side. There’s a reason we’re not still writing machine code, most of us.

Michael Stiefel: Yes.

Anthony Alford: Some of us do. Very few of us, I imagine. But I’m sure that, when the compilers came along, everybody was saying, “These kids today don’t know how to write machine code.”

Michael Stiefel: Yes, they did. There were some, they did say it. They did say, “They don’t know how to write assembly.”

Anthony Alford: But I still learned it, and it wasn’t that long ago, I hope. But anyway, where I was going was I like what Roland said. If we can use these things as tools to help us learn things, help increase our productivity, I think that is a good future. I think you will surely lose a few skills, but sometimes … Really, in the grand scheme of things, is writing machine code a skill that people still need?

Michael Stiefel: No, probably-

Anthony Alford: People still can write and writing’s been around a long time.

Michael Stiefel: I know that when phones first came out, the ability to write machine language code was very important. That skill had to come back because you didn’t have virtual memory. You had to worry about memory mapping and things like that, because again, this goes back to the whole context.

Anthony Alford: My wife was still writing assembler in the 21st Century for embedded software.

Michael Stiefel: Yes.

Anthony Alford: For sure.

Michael Stiefel: Again, this comes back to I guess the point of context and knowing where the tool works and where the tool doesn’t work. I’m afraid that that would get lost in such a world, where people don’t know. I guess, as Donald Rumsfeld said, “It’s the unknown unknowns that get you.” Or the limits of the tools that get you. The more you automate, the more you run the risk of that. Where’s the happy medium?

Because again, economic efficiency is going to drive us. The most expensive thing in a programming project probably is the cost of the programmer.

Roland Meertens: Yes. Or alternatively, the highest cost is the very large SQL query this developer wrote, who had no clue how to use indices.

Michael Stiefel: Right.

Anthony Alford: I would say that’s R&D cost. What’s the cost of an outage, a multi-hour, multi-day outage of your software? It’s true that there are always going to be companies that are foolishly shortsighted in some ways, but the ones that survive, we hope, are the ones who are not.

Michael Stiefel: But the question is, what is the path for getting them to survive? How much suffering is there in the process of that evolution? Dinosaurs disappeared in the process of evolution. They didn’t make it, but that took a long time. The question then becomes how do we know what we don’t know? Because I’m convinced (I’ve seen this happen over and over again) that most managers think programmers are replaceable. Interchangeable, especially if you get to larger organizations.

Anthony Alford: Fungible.

Michael Stiefel: Fungible, okay. That is going to provide an economic incentive for people to get rid of programmers and to use technologies. I remember years ago, before even LLMs came out, people were saying automatic generation of code is around the corner.

Anthony Alford: Yes, they’ve been saying that for a while.

Michael Stiefel: Yes. But there was a strong incentive for managers to believe this because programmers are pains in the neck, they cost money. They say no, so get rid of them. I’m playing Devil’s Advocate here, to some extent, but that is going to be a big push, I think, for having LLMs. Or startups that can’t afford programmers.

Roland Meertens: But then you will lose a lot of the tribal knowledge in companies.

Michael Stiefel: Yes, you do. But are they going to care? The more I think about this, the more I realize this world that we move to is a world that … A lot of technology changes. For example, take phones. No one thought about the attention span problem. No one thought of TikTok. No one thought of spyware. No one thought of all the privacy problems.

Roland Meertens: I agree that, in that sense, the difference between a good senior developer and a bad senior developer will, I think, be how much restraint they are able to exercise: either automatically accepting every LLM-generated proposal, or taking a minute to think through what is actually generated.

Also, reading is going to become a way, way more important skill: reading code and quickly being able to understand what’s happening, and either accepting or rejecting it.

Michael Stiefel: Well, I think you’re right. One of the things that I learned very early in my programming career is that code is read far more often than it’s written. Code should be written from the point of view of the reader, the potential reader, as opposed to the writer. In other words, Donald Knuth, I’m sure you both know that name, wrote a book called Literate Programming. His idea was you program in an explanatory way.

Roland Meertens: Yes, but how often do you take a moment to read some nice code? I always think it’s weird that if you are a science fiction writer and people ask you, “What is the last book you read?” and you say, “Oh, I never read books,” people will be like, “Oh, how can he be a good writer?” But if you ask programmers, “What’s the last code you read?” people are like, “Oh, I’m not going to read code for fun.” Even though that’s a new skill to have.

Michael Stiefel: I used to read code all the time because I had to debug other people’s code and look at code. I read code so that I could understand my peers. I probably learned to read code from my betters, when I got started. Perhaps that’s a skill that’s going to have to come back in the future world.

Can We, or Should We Stop This Future? [43:18]

I’d like to sum up at this point and ask both of you. The question is, based on our conversation and any thoughts you have, what does this world look like and do we really, really want to live in this world? Because now’s the time to say, “Stop.” Now, can we really say stop? Is it like the atomic bomb that’s going to be developed by somebody and then everybody has to have it? Or is there a realistic chance that we can say that this is not a good idea, or it should be restricted to a certain area?

Anthony Alford: I was going to say if anybody can stop it, it would be the lawyers, but I don’t know if they will.

Michael Stiefel: They’re already starting. Well, for example, did you know the story about the Air Canada bot?

Anthony Alford: Yes.

Roland Meertens: I know the story about the Air Canada bot, yes. It promised things it couldn’t promise.

Michael Stiefel: Yes. But the thing is that Air Canada tried to say, “No, no, no, this is a platform. We’re not responsible.” The judge said, “No, you’re responsible.”

Most of them I think are copyright lawsuits right now, but eventually there will be lawsuits. That’s one thing that’s certainly a possibility.

Anthony Alford: If I were going to say here’s how we could turn this future into a bright future, I’d say let the machines write the code, but let’s write really good tests. That’s what I tell my development teams today already. Let’s make sure we have really good tests. If we have really good tests, if we run tests all the time, we run tests for scale, for security and all that, fantastic. We’ll keep an old timer around who can go in and debug the code, or look at the code and make the tweaks. But other than that, let’s let ‘er rip.

Michael Stiefel: What happens when the old timer retires? How do you get the next old timer?

Anthony Alford: There’s always somebody who’s an old timer at heart.

Michael Stiefel: What you’re suggesting, if I understand you correctly, is a division of labor.

Anthony Alford: Yes.

Michael Stiefel: That also means that the LLMs have to learn to write testable code.

Anthony Alford: Well, that’s the key, isn’t it? That’s all of us.

Michael Stiefel: Yes, but that’s the point. In other words, for your division of labor to work, the LLM has to generate code that can be unit tested, that can be scenario tested, that can be use-case tested. It has to know how to do that.

Anthony Alford: The way we do testing is we use our API. We write tests that call our API, so we end-to-end test it. Unit tests for sure, but end-to-end tests, that’s the truth.

Michael Stiefel: But you also have to test the user interface as well.

Anthony Alford: Right. That’s a great human skill.

Michael Stiefel: Yes.

Roland Meertens: I think for me, as someone who has been using Copilot ever since it was in beta, I learned a lot of new tricks in terms of programming from having this Copilot automatically inject code into my code. I have therefore explored APIs which I normally never would have, and found new tricks which do things faster. If I had had to think about them from scratch, I would not have implemented them this way. I found I became a better programmer with the help of LLMs. But people should show restraint. Don’t just accept every suggestion.

Michael Stiefel: Right.

Roland Meertens: Keep thinking about what you actually want to achieve. I think this restraint, for me, is the hardest part. Sometimes when I get tired, I notice myself accepting everything which gets proposed, and that’s the moment where I start writing extremely bad code, very bad prototypes, nothing makes sense anymore, it’s not maintainable anymore. You have to be at least a certain amount awake to use it responsibly. It’s like a lot of things which are addictive: you’ve got to keep using it responsibly.

Michael Stiefel: I have two more questions before we wrap up. One is, again, how do you train the new developers? You talk about exercising restraint. You’re coming from a world where you wrote code and you know what restraint is. How do you teach the next generation of programmers to fear, to respect, whatever adjective or verb you want to use, to know how to treat the technology?

Roland Meertens: You keep pressing the needs work button on GitHub until they finally get it right.

Michael Stiefel: Anthony?

Anthony Alford: Yes, one of those, maybe … I don’t have the answer, I should have the answer. I think it’s experience. We could copy and paste from Stack Overflow. How do we know whether we should or not? Some of it is experience. I think with the younger generation, sometimes you throw them in the deep end.

Michael Stiefel: Yes.

Anthony Alford: Of course, you mentor them, you don’t let them drown.

Michael Stiefel: But they come close to drowning.

Anthony Alford: Really, how people learn is by making mistakes, so we’ve got to give people an environment where they can make mistakes, where a mistake is not catastrophic, and, I was going to say, cross our fingers.

Michael Stiefel: Well, yes. There’s always a certain amount of crossing our fingers with software development.

Just to ask you the last question. Thinking about Colonel Nicholson, what would cause you to say, “What have I done with this technology?” What is it that you would fear would cause you to ask yourself, “What have we done with this world?”

Roland Meertens: I think for me, there are a lot of things which used to be fun for me because there was a certain challenge to it, from generating simple websites about something silly or building cool tech prototypes about something cool. A lot of those things which would previously be fun weekend projects, you can nowadays generate with ChatGPT in five minutes and that takes all the fun out of it. As long as I can keep doing things manually, I am extremely happy. But just knowing that something can be automated sometimes takes the fun out of it, unfortunately.

Anthony Alford: I think just losing sight of the fact that we need to stay in control, we need to exercise restraint. We need to not let the robots take over. When it comes down to it, that’s the fear we all have: that the robots take over. I don’t think that’s going to end life, but it could be very expensive for a company if something bad happened. That’s where I would see, “What have I done?” I’m not a CTO, but if I were a CTO and we implemented this, and it ruined the company, “Oops.”

Michael Stiefel: Well, thank you very much, both of you. I found this very interesting and hopefully our listeners will learn something, and hopefully we’ll have a chance to make the world a little bit better place, get people to think about stuff.

Anthony Alford: Yes, it was a lot of fun. Thanks for having me.

Roland Meertens: Thanks for having us.




Java News Roundup: New JEPs, Payara Platform, Spring Boot 10th Anniversary Podcast

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for May 6th, 2024 features news highlighting: the May edition of the Payara Platform; and new JEP candidates, namely: JEP 477, Implicitly Declared Classes and Instance Main Methods (Third Preview), JEP 480, Structured Concurrency (Third Preview), JEP 479, Remove the Windows 32-bit x86 Port, and JEP 478, Key Derivation API (Preview).

OpenJDK

JEP 467, Markdown Documentation Comments, has been promoted from Proposed to Target to Targeted for JDK 23. This feature proposes to enable JavaDoc documentation comments to be written in Markdown rather than a mix of HTML and JavaDoc @ tags. This will allow for documentation comments that are easier to write and easier to read in source form.
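As a rough illustration of the style this enables (the method itself is a made-up example), a Markdown documentation comment is written with /// line comments instead of the traditional /** ... */ block, and JavaDoc tags such as @param and @return continue to work inside it:

    /// Returns the absolute value of the argument.
    ///
    /// - If the argument is negative, its negation is returned.
    /// - Otherwise, the argument itself is returned.
    ///
    /// @param x the value whose absolute value is wanted
    /// @return the absolute value of x
    static int abs(int x) {
        return x < 0 ? -x : x;
    }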

JEP 477, Implicitly Declared Classes and Instance Main Methods (Third Preview), has been promoted from its JEP Draft 8323335 to Candidate status. Formerly known as Unnamed Classes and Instance Main Methods (Preview), Flexible Main Methods and Anonymous Main Classes (Preview) and Implicit Classes and Enhanced Main Methods (Preview), this JEP incorporates enhancements in response to feedback from the two previous rounds of preview, namely JEP 463, Implicit Classes and Instance Main Methods (Second Preview), to be delivered in the upcoming release of JDK 22, and JEP 445, Unnamed Classes and Instance Main Methods (Preview), delivered in JDK 21. This JEP proposes to “evolve the Java language so that students can write their first programs without needing to understand language features designed for large programs.” This JEP moves forward the September 2022 blog post, Paving the on-ramp, by Brian Goetz, Java Language Architect at Oracle. The latest draft of the specification document by Gavin Bierman, Consulting Member of Technical Staff at Oracle, is open for review by the Java community. More details on JEP 445 may be found in this InfoQ news story.
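For a sense of what this looks like in practice, a complete program under this feature can be as small as the sketch below; since it is a preview feature, the launch flags shown in the comment are an assumption about running it with the single-file source launcher on a JDK 23 early-access build:

    // Hello.java: no explicit class declaration, no String[] args, an instance main method
    void main() {
        System.out.println("Hello, World!");
    }

    // Run directly from source, e.g.:
    //   java --enable-preview --source 23 Hello.java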

JEP 480, Structured Concurrency (Third Preview), has been promoted from its JEP Draft 8330818 to Candidate status. This JEP proposes a third preview, without change, in order to gain more feedback from the previous two rounds of preview, namely: JEP 462, Structured Concurrency (Second Preview), delivered in JDK 22; and JEP 453, Structured Concurrency (Preview), delivered in JDK 21. This feature simplifies concurrent programming by introducing structured concurrency to “treat groups of related tasks running in different threads as a single unit of work, thereby streamlining error handling and cancellation, improving reliability, and enhancing observability.”
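A minimal sketch of the preview API in use is shown below; the subtasks are stubbed with placeholder strings, and the code has to be compiled and run with --enable-preview:

    import java.util.concurrent.StructuredTaskScope;

    // Run two subtasks as a single unit of work: if either fails, the other is
    // cancelled, and the scope is not exited until both subtasks have completed.
    public class StructuredConcurrencyExample {
        public static void main(String[] args) throws Exception {
            try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
                var user  = scope.fork(() -> "user-42");    // placeholder subtask
                var order = scope.fork(() -> "order-7");    // placeholder subtask
                scope.join().throwIfFailed();               // wait for both, propagate any failure
                System.out.println(user.get() + " / " + order.get());
            }
        }
    }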

JEP 479, Remove the Windows 32-bit x86 Port, has been promoted from its JEP Draft 8330623 to Candidate status. This JEP proposes to fully remove the Windows 32-bit x86 port following its deprecation as described in JEP 449, Deprecate the Windows 32-bit x86 Port for Removal, delivered in JDK 21. The goals are to: remove all code paths in the code base that apply only to Windows 32-bit; cease all testing and development efforts targeting the Windows 32-bit platform; and simplify OpenJDK’s build and test infrastructure, aligning with current computing standards.

JEP 478, Key Derivation API (Preview), has been promoted from its JEP Draft 8189808 to Candidate status. This JEP proposes to introduce an API for Key Derivation Functions (KDFs), cryptographic algorithms for deriving additional keys from a secret key and other data, with goals to: allow security providers to implement KDF algorithms in either Java or native code; and enable the use of KDFs in implementations of JEP 452, Key Encapsulation Mechanism.

JDK 23

Build 22 of the JDK 23 early-access builds was made available this past week featuring updates from Build 21 that include fixes for various issues. Further details on this release may be found in the release notes.

Spring Framework

It was a quiet week over at Spring, however the latest edition of A Bootiful Podcast, facilitated by Josh Long, Spring Developer Advocate at Broadcom, was published this past week featuring Spring Boot co-founders, Phil Webb, Software Engineer at Broadcom, and Dr. David Syer, Senior Staff Engineer at Broadcom, on the occasion of the 10th anniversary of the release of Spring Boot 1.0.

Payara

Payara has released their May 2024 edition of the Payara Platform that includes Community Edition 6.2024.5 and Enterprise Edition 6.13.0. Both editions feature component upgrades and resolutions to notable issues such as: not being able to delete system properties via the Admin Console; an HTTP/2 warning in the log file despite HTTP/2 having been disabled; and a JDK 21 compilation error stating that “The Security Manager is deprecated and will be removed in a future release.” More details on these releases may be found in the release notes for Community Edition 6.2024.5 and Enterprise Edition 6.14.0.

Open Liberty

IBM has released version 24.0.0.5-beta of Open Liberty featuring previews of updated Jakarta EE 11 specifications, namely: Jakarta Contexts and Dependency Injection 4.1; Jakarta Concurrency 3.1; Jakarta Data 1.0; Jakarta Expression Language 6.0; Jakarta Pages 4.0; and Jakarta Servlet 6.1. This release also includes support for using InstantOn with IBM MQ messaging.

Eclipse Foundation

The release of Eclipse Store 1.3.2 ships with bug fixes and improved Spring Framework integration featuring: the removal of the @Component annotation from the EclipseStoreConfigConverter class to prevent two beans from conflicting with each other; and the addition of configuration to disable the automatic creation of default instances of the StorageFoundation interface or Storage class. Further details on this release may be found in the release notes.

Apache Software Foundation

Versions 11.0.0-M20 and 9.0.89 of Apache Tomcat both provide bug fixes and notable changes such as: a refactor of handling trailer fields to use an instance of the MimeHeaders class to store trailer fields; improved parsing of HTTP headers to use common parsing code; a more robust parsing of patterns from the ExtendedAccessLogValve class; and additional time scale options to allow timescales to apply a “time-taken” token in the AccessLogValve and ExtendedAccessLogValve classes. More details on these releases may be found in the release notes for version 11.0.0-M20 and version 9.0.89.

Infinispan

The release of Infinispan 15.0.3.Final delivers notable changes such as: implementations of the ServerTask and ClusterExecutor interfaces should run user code in the blocking thread pool for improved control; locking Single-Instance File System directories to avoid shared usage among multiple caches mapped to the same directory; and a drop in support for OpenSSL because the performance of the JDK implementation of TLS is now comparable with the native one. Further details on this release may be found in the release notes and more information on the recent release of Infinispan 15.0.0 may be found in this InfoQ news story.

JobRunr

Version 7.1.1 of JobRunr, a library for background processing in Java that is distributed and backed by persistent storage, has been released to deliver notable bug fixes and enhancements such as: a SevereJobRunrException thrown from the BackgroundJobServer class due to not being able to resolve an instance of the ConcurrentJobModificationException class; the DeleteDeletedJobsPermanentlyTask class not using the correct configuration, resulting in jobs declared as DELETED not being permanently deleted after the configured time period; and an improvement in database migrations. More details on this release may be found in the release notes.

Testcontainers for Java

The release of Testcontainers for Java 1.19.8 ships with bug fixes, improvements in documentation, dependency upgrades and new features such as: a new getDatabaseName() method added to the ClickHouseContainer class to avoid an UnsupportedOperationException; eliminate the use of the non-monotonic currentTimeMillis() method in favor of the nanoTime() method defined in the Java System class as calculating a time lapse in the former may result in a negative number; and a new convenience method, getGrpcHostAddress(), added to the WeaviateContainer class to obtain the gRPC host. Further details on this release may be found in the release notes.

OpenXava

The release of OpenXava 7.3.1 provides bug fixes, improvements in documentation, dependency upgrades and notable new features such as: a new method, isJava21orBetter(), defined in the XSystem utility class to complement the corresponding methods to check for JDK 9 and JDK 17; and new automated tests for date, date/time and popup calendar related issues. More details on this release may be found in the release notes.




Presentation: Optimizing JVM for the Cloud: Strategies for Success

MMS Founder
MMS Tobi Ajila

Article originally posted on InfoQ. Visit InfoQ

Transcript

Ajila: I’m going to talk about optimizing JVMs for the cloud, and specifically how to do this effectively. I’ll start off with a couple of data points, just to frame the discussion that we’ll have. Ninety-four percent of companies are using cloud computing. It’s becoming rarer for developers to host machines on-prem now. Cloud computing offers the flexibility of increasing your computing resources. It decouples the task of maintaining hardware from writing software, so you can focus on writing applications and deriving business value, and you can offload the cost of maintaining infrastructure to your cloud provider. The result has been lower startup costs for a lot of small and medium-sized businesses, and faster time to market. Here’s another one. With all these benefits from cloud computing, a lot of what small to medium-sized businesses spend is on cloud computing, which makes sense. This is increasingly where more of the budget is going. It’s very important to understand these costs and how to reduce them.

The main costs in providing cloud services are network, storage, and compute. The first, network, is the cost of buying the hardware, setting it up, and maintaining it. Likewise with storage. To some extent, the same is true of compute. However, for most people, compute is where the flexibility and the cost are. You typically pay for CPUs and memory, times the duration of using them. This is probably where most people have the most flexibility in the cost. A lot of cloud providers provide a pay-as-you-go model, so you only pay for what you use. It’s very important to pay less if you can. Most people don’t want to pay more for the same service. You want to pay as little as possible. We saw on the previous slide that for compute, typically it’s CPU and memory that are the biggest cost. For most applications, demand isn’t constant. It scales over time; certain days of the week, or weeks of the year, might be busier than others. Using a deployment strategy that scales to the demand is the most effective way to do this. These can be scale-to-zero approaches: you scale up when demand is higher, you scale down when demand is low. Again, this is not new, lots of people are doing this. Technologies like Knative and OpenFaaS make this possible. Another way to reduce costs in the cloud is to increase density, essentially to do more or the same with less. A lot of applications tend to be memory bound, so in many cases it means using less memory to achieve the same goals. What needs to be done is fairly straightforward: you have to scale and you have to increase density. However, achieving these goals can be challenging. To have a successful scale-to-zero strategy, you need to have very fast startup times. Typically, you have to be under a second, and the lower the better. Or else you have to do a scale-to-one type of thing. Equally as important is how much memory your applications are consuming. If you can use 2 GB instead of 3, again, the savings are quite large there.
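To make the scale-to-zero idea concrete, a minimal Knative Service definition looks roughly like the sketch below; the names and image are placeholders, and Knative’s autoscaler scales the revision down to zero pods when no requests arrive:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: hello-java
    spec:
      template:
        spec:
          containers:
            - image: example.com/my-java-app:latest   # placeholder image
              ports:
                - containerPort: 8080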

Here’s the final one. Historically, Java has been the language of the enterprise. In the old days, Java applications were typically deployed as large monoliths, so large applications with long running times. You typically didn’t care about startup because your application was running for hours, days, sometimes even weeks. More and more we’re seeing Java workloads being moved to the cloud. In this case, the stats show that 30% of Java applications are deployed in public clouds. This is from last year. That’s just Java applications; there’s more than just Java that runs on top of the JVM. The JVM is used quite a lot in the cloud. There’s a large shift away from monoliths and a shift towards microservices. If you’re running a JVM-based app server, then it’s very important that your JVM starts up very quickly. It’s very important that you can tune your JVM to use as little memory as possible.

Background

In this presentation, I’m going to talk about techniques that you can use to improve startup time and reduce memory footprint. I am a JVM engineer for OpenJ9. Currently, my main focus is on startup enhancements, hence this talk. I’ve also done some work on Valhalla, so I’ve worked with the prototypes on OpenJ9. I did work on the virtual threads and FFI implementation of OpenJ9 as well. OpenJ9 is an open source JVM. It was open sourced back in 2017 and is based on the IBM commercial JVM, so the JVM has been around for a couple of decades. The branded name is Semeru; they’re really the same thing. We’ll talk about two things: how to improve JVM startup time and how to improve JVM memory density.

How to Improve JVM Startup Time

Now I’ll talk about some of the solutions that are currently being looked at in industry. Some have been around for some time, but some are fairly new. I’ll talk about the benefits and drawbacks of each one. First, we’ll do a little review. Here’s the JRE on the left, and you have the class libraries, and then the JVM. The JVM is the engine that runs the applications. This is not a complete diagram by any means. It’s a high-level illustration of the components. In the JVM, we have the class loader. The class loader is basically responsible for consuming the application. When you think about how applications are packaged, it’s typically JARs or modules. The main unit there that we’re interested in is the class file. If you’ve seen the class file structure, it has a lot of non-uniform repeating types. It’s not a structure that’s very easy to use. When the JVM loads the class file, it’s going to translate it into an intermediate representation. For the purposes of this talk, I’ll refer to it as class metadata. Basically, it’s an internal representation of the class file that’s a lot easier to index, a lot easier to do something with. This translation takes time. Once you have parsed the class file, you have to verify the bytecodes, which also takes time. Then once you’ve done that, you’re ready to interpret the bytecodes. Interpretation can be quite slow. If you compare interpretation with a profiled, JIT-compiled method, the JIT-compiled method is about 100x faster. In order to get to peak performance, it does take quite a while. You do your class loading, you do interpretation, profile-guided compilation, you eventually walk up the compilation tiers, and then you get to peak performance. This is historically why the JVM has had slower startup times.

Now I'm going to show you some techniques to improve this. One of the approaches is class metadata caching. The idea is that you save some of that intermediate state that the JVM builds up, so on a subsequent run you don't have to parse the class files from scratch; you just reuse what's there and save the time spent on a lot of the mechanics of class loading. OpenJ9 has a feature called the shared class cache. It's very easy to enable, just a command line option, and on newer releases it's on by default. When you turn it on, it saves the class metadata in a file, and any other JVM invocation can make use of that cache. There are minimal changes to the application, and only if you're using a custom class loader that overrides loadClass; even then the changes are fairly trivial, and if you're not doing that, no change is required. There's no impact on peak performance, because it's just the regular JVM, and no constraints on JVM features either. The drawback is that, in comparison to the other approaches I'll show you, the startup gains are relatively modest. You can get under a second with the Liberty pingperf application, which is pretty good; it's typically 30% to 40% faster for smaller applications. As you'll see with the next approaches, things can get a lot faster.
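
For illustration, the cache is typically enabled with OpenJ9's -Xshareclasses option (and sized with -Xscmx). The sketch below is my own, hypothetical example of the custom class loader caveat: a loader that simply extends URLClassLoader and leaves loadClass alone should, as I understand the OpenJ9 documentation, participate in sharing automatically, whereas a loader that takes over class definition itself may need OpenJ9's shared-class helper API.

    import java.net.URL;
    import java.net.URLClassLoader;

    // Hypothetical plugin loader used only for illustration. Because it leaves
    // loadClass()/findClass() to URLClassLoader, the JVM can transparently
    // store and find its classes in the shared class cache.
    public class PluginClassLoader extends URLClassLoader {
        public PluginClassLoader(URL[] pluginJars, ClassLoader parent) {
            super(pluginJars, parent);
        }
    }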

Static compilation. The idea here is very similar to what you're probably familiar with from C++ compilers: you're essentially compiling the entire application and creating a native image. One popular example is GraalVM Native Image; another, which was being worked on by Red Hat, is called qbicc. The idea is that you run your static initializers at build time and build up the object graph, the initial state. Once you have that state, you serialize the heap, compile all the methods, and that goes into your native image. The result is very fast startup times, under 90 milliseconds on some Quarkus native apps. You also get smaller footprints, because you're only keeping the things you need and you get to strip out all the other bits and pieces that you don't care about. The limitations go hand in hand with the benefits. Because it's a native image, you don't have the dynamic capabilities that JVMs offer, and a lot of the reason people like the JVM is those dynamic capabilities; people have built applications using them. Things like dynamic class loading and reflection are limited. Essentially, you have to inform the compiler at build time of every class and method that you're potentially going to need so that it remains in the image. There are also longer build times: compiling to native code takes a lot longer than compiling to a class file. The peak performance tends not to be as good as the JVM, because with a native image you don't have a dynamic compiler, you don't have a JIT, and you don't benefit from profile-guided recompilation, so peak performance tends to lag. Lastly, because it's not a JVM, you can't use your standard debugging tools; there's no JVMTI support, so you are limited in how you debug things.
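
To make the closed-world restriction concrete, here is a small sketch of mine (not taken from the talk): on a regular JVM the reflective lookup below works for any class on the classpath, but when compiling ahead of time to a native image the named class generally has to be declared to the compiler at build time (the exact mechanism, such as a reflection configuration file, depends on the toolchain), or it simply won't be present in the image.

    public class ReflectiveFactory {
        public static Object create(String className) throws Exception {
            // On a JVM: resolved dynamically at run time from the classpath.
            // In a closed-world native image: className must be known to the
            // AOT compiler at build time, otherwise this lookup fails.
            Class<?> type = Class.forName(className);
            return type.getDeclaredConstructor().newInstance();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(create("java.util.ArrayList"));
        }
    }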

The last one I'll show you is checkpoint/restore. The idea is that you run the JVM at build time, pause it, and then resume it at deployment. The benefit is very fast startup, because you don't have to do all that work at deployment. The other benefit is that it's still a JVM, so there's no impact on peak performance and no constraint on JVM functionality; everything works as it did. The drawback is that some code changes are required, because you do have to participate in the checkpoint. However, this might be an easier option for people who are migrating than a native image, because you can still use the same JVM capabilities; you just have to opt into the checkpoint.

Checkpoint/Restore in Userspace (CRIU)

I'm going to spend a few more minutes talking about this approach, and I'll do a little demo to show how it works. In order to checkpoint the JVM, we need a mechanism that can serialize its state and resume it at a later point in time. We did some experimentation at the JVM level and ran into a lot of hurdles, because there are actually a lot of OS resources required to recreate the initial state. We decided to use CRIU, Checkpoint/Restore In Userspace. It's a Linux utility that essentially lets you pause and serialize an application and resume it. The idea is that you can take a running container or process, suspend it, and serialize it to disk. It uses ptrace to query all the OS resources the application is using, writes all the mapped pages to file, and uses Netlink to query the networking state; all that information is written to a set of files. At restore time, it reads those files back in, replays the system calls that were used to create the initial state, maps the pages back into memory, restores register values, and then resumes the process. This is similar to what you can do with hypervisors, if you've used KVM or VMware, where you can pause a virtual machine and resume it; it's the same idea. The benefit of doing this at the OS level is that you're closer to the application, so you have more control over what is being serialized, and you can serialize only the application instead of the entire VM (VM as in a KVM virtual machine, in this case).

Here, we'll look at an example of how this works. In the typical workflow, you compile, build, and link an application at build time, and then at runtime you run the application. I have two boxes for runtime because, as I said earlier, it takes time for the application to ramp up and reach optimal performance; that's indicated in dark blue, and yellow is where it's at optimal. With checkpoint/restore, the idea is to shift as much of that startup and ramp-up into build time as possible, so that when you deploy your application it's ready to go and you get better startup performance. The way you do this, again, is to run the application at build time, then pause and serialize it, ideally before you open any external connections, because it can be challenging to maintain those connections across a restore. Then at deployment, you initiate the restore. It's not instant, it does take some time; that's the red boxes there. But it's typically a fraction of the time it would take to start the application from scratch. In the end, there is some amount of time required to restore the application, but it's a much smaller fraction.

How can Java users take advantage of this? OpenJ9 has a feature called CRIUSupport. It provides an API for users to take a checkpoint and restore it. There are a few methods to query whether the feature is supported on the system. At the bottom, there are methods that let you control how the checkpoint is taken: where to save the image, the logging level, file locks. In the middle, there are a few methods that let you modify state before and after the checkpoint. The reason you might need this is that when you take a checkpoint in one environment and restore it in another, there are often incompatibilities between the two. In the JVM, we try to compensate for a bunch of those things, for machine resources that may be different on the other side. Things like CPU counts can be baked in; for example, the default pool in java.util.concurrent sizes its thread count based on the number of CPUs, so if you go from eight CPUs to two, you could hit performance issues. We also fix up time-aware instances; things like timers get automatic compensation so that they still make sense on the other end. There will be cases where we don't know what the application is doing, so we provide hooks that let application developers do those fixups themselves.
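
As a hypothetical illustration of the CPU-dependent state being described, the pool below (my example) is sized from the processor count seen at checkpoint time; restoring the image on a machine with fewer CPUs would leave it oversized, which is exactly the kind of fixup you would do either through the JVM's automatic compensation or from one of those hooks.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CpuSizedPool {
        // Sized from the CPU count of the *checkpoint* machine; after a restore
        // on different hardware this size may no longer make sense.
        private static volatile ExecutorService workers =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        // The sort of fixup you would call from a post-restore hook:
        // re-derive the size from the restore machine's CPU count.
        public static void resizeForRestore() {
            workers.shutdown();
            workers = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        }
    }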

Demo

I'll do a little demo with a fairly rudimentary application that simulates what happens with a real one. The application starts, we simulate loading and initializing classes with a few sleeps, and then we print ready. Compile that, run it, and it does exactly what we expected; it took about 3 seconds. Now we'll try this with CRIUSupport. I've got a helper method here. All it does is query whether CRIUSupport is available on the machine, then call setLeaveRunning(false), which basically means that once we take a checkpoint we terminate the application. We call setShellJob(true): we have to inform CRIU that the shell is part of the process tree so it knows how to restore it properly. Then we call setFileLocks(true), because the JVM internally uses file locks. Then there's some logging, and then we checkpoint the image. Compile that. Then I'll make the directory that we'll put the checkpoint data in. This option is not enabled by default, so we have to turn it on. That message there is from CRIU saying it has terminated the application. Then we restore it; I'll just use the CRIU CLI to do that. It took about 60 milliseconds. This basically gives you an idea of how this would work in practice.
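
For reference, here is a hedged sketch of roughly what such a helper can look like with OpenJ9's CRIUSupport API (package org.eclipse.openj9.criu). The method names follow the talk; the exact signatures, the hook-registration method, and the flag that enables the feature (-XX:+EnableCRIUSupport, as far as I know) are assumptions that may differ between OpenJ9 releases.

    import java.nio.file.Path;
    import java.nio.file.Paths;
    import org.eclipse.openj9.criu.CRIUSupport;

    public class CheckpointHelper {
        public static void checkpoint() {
            // Only attempt a checkpoint if the JVM and OS support it.
            if (!CRIUSupport.isCRIUSupportEnabled()) {
                System.err.println("CRIU support is not enabled on this JVM/machine");
                return;
            }
            try {
                Path imageDir = Paths.get("checkpointData");   // where the image files are written
                CRIUSupport criu = new CRIUSupport(imageDir);  // assumed constructor taking the image directory
                criu.setLeaveRunning(false);  // terminate the process once the image is written
                criu.setShellJob(true);       // the launching shell is part of the process tree
                criu.setFileLocks(true);      // the JVM itself holds file locks
                criu.setLogLevel(2);          // some CRIU logging for diagnostics
                // Assumed hook name: run environment fixups after the restore.
                criu.registerPostRestoreHook(() ->
                        System.out.println("Restored on a machine with "
                                + Runtime.getRuntime().availableProcessors() + " CPUs"));
                criu.checkpointJVM();         // pause, serialize, and (because of setLeaveRunning) exit
            } catch (Exception e) {
                System.err.println("Checkpoint failed: " + e.getMessage());
            }
        }
    }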

Open Liberty (Web Services Framework)

That example was fairly simple, and again, we're interested in optimizing the JVM for the cloud; most people don't deploy Hello World applications to the cloud. So we'll try this with something more real. Next, we'll look at Open Liberty. Open Liberty is a web services framework that supports MicroProfile and Jakarta EE, and you can even run Spring Boot applications on it. We've been working with the Open Liberty team to integrate this feature; in Open Liberty, it's called InstantOn. The nice thing about doing this at the framework level is that you can abstract all the details of taking a checkpoint away from the user. As you'll see, with Liberty you don't have to change the application at all; the framework handles it for you, and it's just a matter of configuring it differently. We'll look at an example. A typical application would be set up like this: you start with the Liberty base image, then you copy in your server config, which basically tells it the hostname, the port, and so on. Then you run the features step, so Liberty only installs the features that you'll use. Then you copy in your application WAR file. The last step runs the JVM and warms up the shared class cache, so it generates the class metadata and does some AOT compiles. That's it. It's already built, so I'll just run it. We see here that it started in about 3.4 seconds. Let me go to my web page; there we go, everything's working as expected.

Now we'll try the same thing, except we'll do it with InstantOn. We'll take a look at the Dockerfile. It's identical, except that we've added this last step. Essentially, we're going to take a checkpoint while we're building the container image: we run the server, pause it, serialize it, save its state, and bundle everything into the image, so that when we restore it we can just resume. Let's do that. One thing to note is that CRIU requires a lot of OS capabilities to serialize the state of the application. Before Linux 5.8, you would have had to do this with a privileged container. As of Linux 5.8, there is a new capability, CHECKPOINT_RESTORE, which encapsulates a lot of the functionality CRIU needs. To take the checkpoint in a container, you need the CHECKPOINT_RESTORE, SYS_PTRACE, and SETPCAP capabilities; those are the only ones you need. When you're restoring, you only need CHECKPOINT_RESTORE and SETPCAP; you don't have to add SYS_PTRACE. We'll restore the image. As you can see, it took 2.75 seconds on my machine, which is not the fastest; on a performance machine, the numbers are a lot better. The idea is that we're seeing about a 10x reduction in startup time, which is what we want: do as much as possible at build time so that there's less to do at deployment time. These are the kinds of performance numbers you would get on a performance machine, typically a 10x to 18x improvement. With a small application, you're around the 130-millisecond range. It's not quite Native Image, but it's pretty close, and you don't have to give up any of the dynamic JVM capabilities. Our first response time is pretty low, too.

How to Improve JVM Density

Now we'll transition to how to improve JVM density. When you look at the big contributors to footprint in the JVM, it's typically classes, the Java heap, native memory, and the JIT. There are other contributors, like stack memory and logging, but these tend to be the big four. With classes, again, JVMs typically don't operate on the class file directly; they operate on an intermediate structure. If you've loaded a lot of classes and compiled a lot of code, you're going to have a lot of metadata associated with that, and it consumes a lot of memory. There are ways to reduce this cost; earlier, I talked about shared classes. OpenJ9 creates its class metadata in two parts. The first part is what we call the ROM class. Essentially, this is the static part of the class, the part that will never change throughout the lifetime of the JVM: things like bytecodes, string literals, constants. Everything else goes in the RAM class: resolution state, linkage state, things like that, and class layout, because if you're using a different GC policy the layout is different. The thing about ROM classes is that if you parse the same class file, the result is always the same; it's identical. If you have multiple JVMs on a node loading the same classes, you're going to create the same ROM class for each one. Typically, you'll at the very least be loading the JCL, which is identical, so a lot of these structures being generated are identical. The idea is that you can use the cache: generate it once, and all the other JVMs can make use of it. It's not only a startup improvement; it also reduces footprint, because now you only have one copy of the ROM class. With some applications, you can see up to a 20% reduction in footprint if you have a lot of classes.

Another one is the Java heap. The Java heap captures the application state. A lot of JVMs are configurable: there are a lot of different options you can pass in to set the heap geometry, and you can also make heap expansion less aggressive. A lot of these things come at a cost; if you make heap expansion less aggressive, you often get more GCs and you take a throughput hit, so there's a bit of a tradeoff in how you approach this one. With native memory, there aren't as many options. Anything that uses a direct byte buffer or Unsafe is going to allocate into native memory. There is an option, -XX:MaxDirectMemorySize, that puts a hard limit on the amount of direct-buffer native memory you can use; but again, if your application needs more, it just means you hit an OutOfMemoryError.
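
As a small illustration of that direct-memory limit (my example, not the speaker's), running the sketch below with a low cap such as -XX:MaxDirectMemorySize=64m ends in an OutOfMemoryError once the budget is exhausted, which is the hard-limit behavior described above.

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class DirectMemoryDemo {
        public static void main(String[] args) {
            List<ByteBuffer> buffers = new ArrayList<>();
            try {
                while (true) {
                    // Each allocation takes 16 MB of *native* memory, not Java heap.
                    buffers.add(ByteBuffer.allocateDirect(16 * 1024 * 1024));
                }
            } catch (OutOfMemoryError e) {
                System.out.println("Hit the direct-memory limit after "
                        + buffers.size() + " buffers: " + e.getMessage());
            }
        }
    }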

Lastly, the JIT. There are some things you can do to minimize the amount of memory used by the JIT. You can try to drive more AOT compilations with heuristics: AOT code can be shared between multiple JVMs, so you have one copy instead of many, whereas JIT-compiled code cannot be shared; it's always local to the JVM. The JIT can also be configured to be more conservative with inlining. When you inline a method, you're essentially placing a copy of it at the call site, and the more copies you have, the more memory you use. The less inlining you do, the less memory you use, but again there's a tradeoff, because less inlining also means lower throughput. In a lot of these cases, the solutions require a throughput tradeoff. There are a lot of options you can play around with to optimize the JVM, and not everyone is familiar with them, so it can be tricky to figure out which ones to use. OpenJ9 has an ergonomics option called -Xtune:virtualized. The idea is that this option tunes the JVM for a constrained environment: we assume that you're limited in terms of CPU and memory, and we try to be as conservative as possible. The effect is that you can reduce your memory consumption by about 25%, at a roughly 5% throughput cost. If that's a cost you're willing to live with, this is an option you can use.

OpenJ9: JITServer (Cloud compiler)

The last thing I'll talk about is something called JITServer, or the cloud compiler. Essentially, it's a remote compilation technique. If you're running a configuration with multiple JVMs, you're going to be doing a lot of compilation, and there's going to be a large overlap in the classes being compiled; each JVM has its own JIT that compiles the same code each time, which is wasteful. The idea is to decouple the JIT from the JVM and run it as a service, with the JVMs as clients that request compilations from the JITServer. The big benefit is that it makes the job of provisioning your nodes a lot simpler: you don't have to account for the memory spikes that occur when you do JIT compilation. If you look at the memory profile of an application, when there's a lot of JIT activity the memory goes up, because you need a lot of scratch space to do the compilations. With a JITServer, it's a lot easier to tune nodes more conservatively in terms of memory, because you know there won't be that variance in memory usage. Another benefit is more predictable performance: you don't have the JIT stealing CPU from you. You also get improved ramp-up performance, because the CPU on the node is used to run the application, and the compilation is done remotely. This is very noticeable with smaller, shorter-lived applications.

Demo – Improve Ramp-up Time with JITServer

I'll do a little demo here. There are going to be three configurations: one JVM with 400 Megs, another with 200, and another with 200 using JITServer. Then we'll drive some load and output the results to Grafana. The first step is to start the JITServer; that's the command we're using there. Essentially, we're going to give it 8 CPUs and 1 gigabyte of memory. We're also going to keep track of the metrics; you don't have to do that when you deploy, but for the purposes of this demo I chose to. Here are the metrics for the JITServer: on the top left is the number of clients, on the right is CPU utilization, on the bottom left is the number of JIT compilation threads (we start with one, and it's just idling), and then memory is on the right. Now we'll start the nodes. There are three worker nodes. The first two are not connected to the JITServer: one is just a 400-Meg node and the other is a 200-Meg node. The last is a 200-Meg node that is connected to the JITServer. You'll see the option above; that's basically how you connect to a JITServer, and then you provide the other options to tell it which host and which port to connect to. We'll start those. Once it's connected, we should see the number of clients spike up to one; there we go. On the right, you can see there's an increase in CPU activity. When you start up a JVM, there's compilation to compile the classes in the class library; typically it doesn't take very long, so you can see we've already gone back down, and CPU utilization is back to zero. We only needed one JIT compilation thread. Now we're going to apply load. You can see that CPU is already starting to go up; now it's just spiked. We've spiked up to three compilation threads, because as more load is applied to the application there are more JIT requests and increased JIT activity. Now we're up to four compilation threads. We see that memory usage is also increasing: we started at 1000, and I think at this stage we're down to 900 Megs, so we're using about 100 Megs of memory. The important thing is that this is memory being used by the JITServer and not the node. Normally you'd have to account for this when sizing your node; with JITServer, you only have to account for it on the server. You basically size your nodes for steady state, because you don't have these spikes.

Now we'll look at the three nodes. The top left is the 400-Meg node, the top right is the 200-Meg node, and the bottom left is the 200-Meg node with JITServer. You can already see that the ramp-up is much faster with JITServer. The 400-Meg node will get there eventually, but it'll take a bit more time. The 200-Meg node is going to be limited, because when you're constrained you have to limit the amount of compilation activity you can do. As you can see, we're back down to one JIT thread on the JITServer, because we've done most of the compilation; CPU utilization has gone back down, and memory consumption is dropping now that we don't need the scratch space for compilation anymore. These are the spikes you would typically incur in every node; with the remote server, you only incur them in the JITServer. You can see that the 400-Meg node is still ramping up, but the 200-Meg node with JITServer has already reached steady state. That's basically how it works.

Here's another example with two configurations. At the top, we have three nodes with roughly 8 Gigs, and at the bottom, we have two nodes with 9 Gigs using JITServer. You can achieve the same throughput with both configurations; the savings are that with the bottom configuration, each node is sized more conservatively, because you don't have to account for the variation in memory usage. Here are some charts that show the performance in constrained environments. The left one is unconstrained: you can see that with JITServer you get a ramp-up improvement, and peak throughput is the same. As you constrain it, peak throughput drops: when you have less memory, you have less memory for the code cache and less scratch space, which limits what you can do from a performance perspective. The more you constrain the node, the more limited you get, which is why the more constrained the environment, the better JITServer tends to do.

When Should You Use JITServer?

When should you use it? There are tradeoffs with this approach. Like any remote service, latency is very important; typically, the requirement is that you have less than a millisecond of latency. If you're above that, the latency overheads dominate and there really isn't a point to using it at all. It's really good in resource-constrained environments: if you're packing a lot into a node, this is where it's going to perform well. It also tends to perform well when you're scaling out.

Summary

We've looked at the main requirements for optimizing your experience in the cloud: why startup is important and why memory density is important, along with a few approaches to each. For memory density, the big one is definitely remote compilation; it seems to have the biggest impact. For startup, there are a few things you can do, with tradeoffs for each approach. Technologies like checkpoint/restore with CRIUSupport are a way to combine the best of static compilation with the existing class metadata caching techniques.

Azul’s OpenJDK CRaC

Azul has a feature called OpenJDK CRaC (Coordinated Restore at Checkpoint), which is very similar to CRIUSupport. There are a couple of differences in the API, but for the most part it achieves the same thing.

Questions and Answers

Beckwith: When I last looked at it, maybe five years ago, with respect to JITServers, there were different optimizations being tried. At first, the focus was mostly on how to deliver to the worker nodes and things like that. Now I see there are these memory limit constraints. Are there any optimizations based just on the memory limit constraints, or is it equal for everything?

Ajila: The default policy when using JITServer is to go to the JITServer first and see if it's there. There is a fallback; I'm not sure what the default heuristics are there, but if it's not there and it would be faster to compile locally, you can compile it locally. We don't actually jettison the local JIT, we just don't use it, because if you don't use it, there's no real cost to having it there.

Beckwith: Is it a timeout or is it just based on the resources?

Ajila: I do know we track the latency, and if the latency goes beyond a threshold, then yes.

Beckwith: There was a graph that you showed at the JVMLS when talking about startup and the [inaudible 00:45:01]. Maybe you can shed light on where the boundaries are? In the JVM world, what exactly do we call startup?

Ajila: Again, people sometimes measure this differently. Startup, in the JVM context, is basically how long it takes to get to main. For a lot of end-user applications, though, main is actually where things start: it's after main that you load your framework code and so on. Main is the beginning in a lot of those cases, and there's a lot of code that has to run before your application does something useful, so from that perspective startup can actually end later. Even then, after you've received that first request, the performance is not optimal yet; it takes time to get there. A lot of JVMs are profile dependent, meaning you have to execute the code a number of times before you generate something optimal, something that performs very well. That's the ramp-up phase: there's a lot of JIT activity, the JVM is profiling, figuring out which branches are taken and which are not, and recompiling. After that phase, when JIT activity goes down, you're past ramp-up and at steady state.



Modern Data Architecture, ML, and Resilience Topics Announced for QCon San Francisco 2024

MMS Founder
MMS Artenisa Chatziou

Article originally posted on InfoQ. Visit InfoQ

QCon San Francisco returns November 18-22, focusing on innovations and emerging trends you should pay attention to in 2024. With technical talks from international software practitioners, QCon will provide actionable insights and skills you can take back to your teams.

The QCon San Francisco 2024 Program Committee has met and finalized the 2024 tracks (conference topics). The 12 tracks include:

  • Generative AI in Production & Advancements: This track dives into the latest advancements, exploring how to translate groundbreaking research into real-world applications across various industries.
  • Architectures You’ve Always Wondered About: This track will explore real-world examples of innovative companies pushing the limits with modern software systems.
  • Modern Data Architectures: This track showcases how innovators are finding new ways to store, analyze, and scale data components.
  • Hardware Architectures You Need To Know: This track explores how hardware has moved significantly in different directions to serve the needs of our increasingly complex services, with Machine Learning, Networking, Graphics Processing, and more.
  • Programming Languages and Paradigms for the Next Decade: This track offers a deep dive into innovative programming languages, language features, cutting-edge programming models, and paradigm shifts that aid in making code more efficient, safe, and maintainable.
  • Sociotechnical Resilience: Explore how artificial intelligence is reshaping traditional leadership models and the skills required for engineering leaders to effectively guide teams in the age of AI.
  • Getting Started in Machine Learning: This track gets to grips with the theoretical and practical considerations in ML and the tools and platforms that make up the discipline.
  • Explore all tracks.

Learn modern practices to evolve your skills with QCon training days

As in previous years, QCon San Francisco offers two additional days of training for senior developers/leaders following the main conference. This year, QCon features 12 training sessions on November 21-22, covering topics such as:

  • A Practitioner’s View on Implementing Team Topologies
  • Soft Skills for Tomorrow’s Technical Leaders
  • AI/ML Mastery: From Theory to Implementation
  • AI Engineering: Building Generative AI Apps
  • Mastering Serverless
  • Microservices Bootcamp
  • Architecting Scalable APIs
  • Applied Domain-Driven Design
  • Mastering Cloud, K8s, and DevOps
  • Building Modern Data Analytics
  • Securing Modern Software
  • Building Scalable Java Applications

Don’t miss out on practical, hands-on learning with software domain experts at QCon San Francisco. Add Training Days to your conference pass, or get a training-only pass. If you change your mind, QCon’s Training Refund Guarantee Policy offers a no-questions-asked refund up to September 23, 2024.

Why Attend QCon San Francisco?

  • Walk away with actionable insights: Learn how your peers are solving similar complex challenges right now.
  • Level-up with senior software leaders: Learn what’s next from world-class leaders pushing the boundaries.
  • No hype. No hidden marketing. No sales pitches: At QCon, there are no product pitches or hidden marketing. Your time is focused on learning, exploring, and researching, not being sold to.
  • Save valuable time without wasting resources: Get the assurance you are adopting the right technologies and skills.

Join QCon San Francisco this Nov 18-22, and take advantage of the last early bird tickets. Don’t miss the chance to learn from software leaders at early adopter companies.

