NIST Launches Program to Discriminate How Far Gen AI Generated Summaries Are from “Human Quality”

MMS Founder
MMS Olimpiu Pop

Article originally posted on InfoQ. Visit InfoQ

The US National Institute of Standards and Technology (NIST) launched a public generative AI evaluation program developed by the international research community. The pilot program focuses on the text-to-text and text-to-image modalities. The general objectives include, but are not limited to, evolving benchmark dataset creation, multi-modal authenticity detection, comparative analysis, and detection of fake or misinformation sources. The first-round submission deadline is in August 2024.

The pilot aims to measure and understand system behaviours for discriminating between synthetic and human-generated content in the text-to-text (T2T) and text-to-image (T2I) modalities. It mainly seeks to answer two questions: “How does human content differ from synthetic content?” and “How can users differentiate between the two?”

Teams can act as generators, discriminators, or both. Generator teams will be evaluated on their system’s ability to generate synthetic content that is as close as possible to human-generated content. Discriminator teams will be evaluated on their system’s ability to detect synthetic content created by generative AI (LLMs and deepfake tools).

The Text-to-Text Discriminators (T2T-D) task requires systems to detect whether a target output summary was generated using generative AI. Each trial consists of a single summary, and the T2T-D detection system must return a confidence score (any real number), with higher numbers indicating that the target summary is more likely to have been generated by an LLM-based model. The primary metric for measuring detection performance will be the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC).
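
To make the scoring concrete, the following is a minimal, illustrative sketch of how AUC can be computed from per-summary confidence scores and ground-truth labels using the rank-sum (Mann-Whitney) formulation; it is not part of NIST’s evaluation tooling, and the trial data shown is hypothetical.

import java.util.Arrays;
import java.util.Comparator;

public class AucDemo {

    // scores[i]: detector confidence that summary i is AI-generated
    // labels[i]: true if summary i really was AI-generated
    static double auc(double[] scores, boolean[] labels) {
        Integer[] order = new Integer[scores.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble(a -> scores[a]));

        // Assign 1-based ranks, averaging ties.
        double[] ranks = new double[scores.length];
        int i = 0;
        while (i < order.length) {
            int j = i;
            while (j + 1 < order.length && scores[order[j + 1]] == scores[order[i]]) j++;
            double avgRank = (i + j) / 2.0 + 1.0;
            for (int k = i; k <= j; k++) ranks[order[k]] = avgRank;
            i = j + 1;
        }

        // Mann-Whitney U statistic for the positive class, normalised to [0, 1].
        long positives = 0;
        double positiveRankSum = 0.0;
        for (int k = 0; k < labels.length; k++) {
            if (labels[k]) { positives++; positiveRankSum += ranks[k]; }
        }
        long negatives = labels.length - positives;
        return (positiveRankSum - positives * (positives + 1) / 2.0) / (positives * (double) negatives);
    }

    public static void main(String[] args) {
        // Hypothetical trial set: two AI-generated summaries (true) and two human-written ones.
        double[] scores = {0.9, 0.2, 0.7, 0.4};
        boolean[] labels = {true, false, true, false};
        System.out.println("AUC = " + auc(scores, labels)); // prints AUC = 1.0
    }
}

An AUC of 0.5 corresponds to guessing at random, while 1.0 means every AI-generated summary received a higher confidence score than every human-written one.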

The Text-to-Text Generators (T2T-G) task is designed to automatically generate high-quality summaries from a “topic” (a statement of an information need) and a set of target documents (about 25). The summary must address the information need expressed in the topic statement. Participants should assume that the target audience of the summary is a supervisory information analyst who will use it to inform decision-making. Submissions have to adhere to the following rules:

  • All processing of documents and generation of summaries must be automatic
  • The summary can be no longer than 250 words (whitespace-delimited tokens)
  • Summaries longer than the size limit will be truncated
  • No bonus will be given for creating shorter summaries
  • Only linear formatting is allowed (i.e. plain text)

There will be about 45 topics in the test data for generator teams. The set of summaries from all generator teams will serve as the test data for discriminator teams. The summary output will be evaluated by determining how easy or difficult it is to distinguish AI-generated summaries from human-generated ones (the goal of the generators is to produce summaries that are indistinguishable from human-generated summaries).

Participants are not allowed to use the test dataset for training, modelling, or tuning their algorithms. All machine learning or statistical analysis algorithms must complete training, model selection, and tuning before the system is run on the available test data; learning or adaptation during processing is not permitted. Each participant is allowed to submit system output for evaluation only once per 24-hour period.

The first pilot focuses on text-to-text and will run throughout 2024. The platform is designed to support multiple modalities and technologies for teams from academia, industry, and other research labs. Those interested in participating can register on the program’s website until May 2025. The test phases are scheduled for June, September, and November. After the evaluation closes in January 2025, the results will be released in February 2025, and a GenAI evaluation workshop will be organised in March 2025.

Other such contests include the generative AI hackathon organised by Google, the RTX developer challenge proposed by Nvidia, the generative AI competition organised by members of Harvard, and AI for Life Sciences, organised with the support of the University of Vienna.

Kotlin 2.0 Launched with New, Faster, More Flexible K2 Compiler

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

JetBrains has released Kotlin 2.0 along with the new K2 compiler. While the language itself introduces no new syntax, the K2 compiler brings several benefits, including faster builds, extended language capabilities with smart casts, and multiplatform support out of the box.

This release introduces the K2 compiler, unifying all platforms that Kotlin supports, as all compiler backends now share a lot of logic and a unified pipeline. This allows us to implement most features, optimizations, and bug fixes once for all platforms, drastically increasing the development speed of new language features.

K2 currently supports four backends: JVM, JavaScript, Wasm, and native. One of the benefits of targeting all platforms through the same compiler is the possibility of easily supporting multiplatform library development through the definition of a new format for multiplatform library distribution, which will enable the creation of universal Kotlin libraries from any host.

Additionally, as Michail Zarečenskij explained in his Kotlin 2.0 talk at the Kotlin 2024 Conference, multiplatform support came about piecemeal, which made support for the different platforms hard to maintain and evolve.

On the performance front, K2 significantly speeds up compilation time for real-world projects. JetBrains says K2 doubles compilation speed on average, with some projects compiling faster and others slower than that. The speedup is mostly related to improvements in the initialization phase, which is up to 488% faster, and the analysis phase, which is up to 376% quicker.

Besides performance and multiplatform support, another key reason to switch to a new compiler was to make the language smarter when interpreting developers’ intentions with their code.

This was achieved by making the Frontend Intermediate Representation (FIR) support earlier desugaring, so the compiler has more chances to analyze the code; implementing a phased approach to analysis across imports, annotations, and types, which brings more opportunities to integrate IDEs and compiler plugins; and bringing in a new control flow engine with improvements to type inference and resolution. The new control flow engine helps detect unusual code, bugs, and other potential issues, thus contributing to language safety.

As a case in point for the improvements to language expressiveness these changes bring, Kotlin 2.0 now better supports combinations of operators and numeric conversions. For example, the statement longList[0] += 1 is now allowed and also works in combination with nullable values and the safe-call operator ?.

Control flow is one of the main tasks of developers today, says Zarečenskij. This is the reason why JetBrains focused on extending the capabilities of the language (syntax) to inspect data and describe conditions, with the effect of improving readability and removing layers of nesting. Additionally, he says, smart casts reduce cognitive load since you do not need to learn new constructs.

For example, Kotlin 2.0 will propagate a smart cast through a local variable, as in the following example:

fun petAnimal(animal: Any) {
    // The result of the type check is stored in a local variable; Kotlin 2.0
    // propagates the smart cast through it, so animal can be used as a Cat here.
    val isCat = animal is Cat
    if (isCat) {
        animal.purr()
    }
}

Likewise, smart-cast information will be propagated for nullability checks, is-checks, as-casts, and contracts.

Another case where Kotlin 2.0 applies new smart casts is with variables captured inside closures as read/write.

Kotlin will continue enhancing its control flow engine by adding features like pattern matching without binding, context-sensitive resolution, generalized ADTs to support even more smart casts, an effect system, and more.

Many of these new features are in the language roadmap for Kotlin 2.1 or 2.2. Covering all of the newly announced features goes beyond the scope of this article, so do not miss the talk at the Kotlin 2024 Conference for the full details.

Java News Roundup: Java Turns 29, Kotlin 2.0, Semantic Kernel for Java 1.0, More OpenJDK Updates

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for May 20th, 2024 features news highlighting: Java’s 29th birthday; the release of Kotlin 2.0 and Semantic Kernel for Java 1.0; JEP 477, Implicitly Declared Classes and Instance Main Methods (Third Preview), targeted for JDK 23; and four JEPs proposed to target for JDK 23.

Java Turns 29

On May 23rd, 1995, Java 1.0 was introduced to developers who attended the Sun World ‘95 conference. A Java community was born and, 29 years later, that community has evolved to include 370 global Java User Groups, 370 Java Champions, and many Java-related conferences in a given calendar year. There have been 22 formal releases of Java to date.

At Devnexus 2024, Sharat Chander, Senior Director, Product Management & Developer Engagement at Oracle, announced that JavaOne will return to celebrate Java’s 30th birthday. It will be held March 17-20, 2025 in Redwood Shores, California. Please stay tuned for the call for papers.

OpenJDK

After its review has concluded, JEP 477, Implicitly Declared Classes and Instance Main Methods (Third Preview), has been promoted from Proposed to Target to Targeted for JDK 23. Formerly known as Unnamed Classes and Instance Main Methods (Preview), Flexible Main Methods and Anonymous Main Classes (Preview) and Implicit Classes and Enhanced Main Methods (Preview), this JEP incorporates enhancements in response to feedback from the two previous rounds of preview, namely JEP 463, Implicit Classes and Instance Main Methods (Second Preview), delivered in JDK 22, and JEP 445, Unnamed Classes and Instance Main Methods (Preview), delivered in JDK 21. This JEP proposes to “evolve the Java language so that students can write their first programs without needing to understand language features designed for large programs.” This JEP moves forward the September 2022 blog post, Paving the on-ramp, by Brian Goetz, Java Language Architect at Oracle. The latest draft of the specification document by Gavin Bierman, Consulting Member of Technical Staff at Oracle, is open for review by the Java community. More details on JEP 445 may be found in this InfoQ news story.

JEP 482, Flexible Constructor Bodies (Second Preview), has been promoted from Candidate to Proposed to Target for JDK 23. This JEP proposes a second round of preview and a name change to obtain feedback from the previous round of preview, namely JEP 447, Statements before super(…) (Preview), delivered in JDK 22. This feature allows statements that do not reference the instance being created to appear before the this() or super() calls in a constructor, and preserves existing safety and initialization guarantees for constructors. Changes in this JEP include: a treatment of local classes; and a relaxation of the restriction that fields cannot be accessed before an explicit constructor invocation to a requirement that fields cannot be read before an explicit constructor invocation. Gavin Bierman, Consulting Member of Technical Staff at Oracle, has provided an initial specification of this JEP for the Java community to review and provide feedback. The review is expected to conclude on May 27, 2024.
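
To illustrate the kind of code this enables, here is a minimal, hypothetical sketch (a preview feature, so it requires --enable-preview): an argument is validated and a field is written before the explicit super() call, while no field is read before it.

public class Name {
    private final String value;

    public Name(String value) {
        // Statements before the explicit super() call are allowed in the preview,
        // as long as they do not read the instance under construction.
        if (value == null || value.isBlank()) {
            throw new IllegalArgumentException("name must not be blank");
        }
        this.value = value.trim(); // writing a field early is allowed; reading it is not
        super();
    }

    public String value() {
        return value;
    }
}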

JEP 481, Scoped Values (Third Preview), has been promoted from Candidate to Proposed to Target for JDK 23. Formerly known as Extent-Local Variables (Incubator), this JEP proposes a third preview, with one change, in order to gain additional experience and feedback from one round of incubation and two rounds of preview, namely: JEP 464, Scoped Values (Second Preview), delivered in JDK 22; JEP 446, Scoped Values (Preview), delivered in JDK 21; and JEP 429, Scoped Values (Incubator), delivered in JDK 20. This feature enables the sharing of immutable data within and across threads. This is preferred to thread-local variables, especially when using large numbers of virtual threads. The change in this feature is that the operation parameter of the callWhere() method, defined in the ScopedValue class, is now a functional interface, which allows the Java compiler to infer whether a checked exception might be thrown. With this change, the getWhere() method is no longer needed and has been removed. The review is expected to conclude on May 29, 2024.
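
As a rough illustration of the API described above (a sketch only; names and signatures may still change between preview rounds), a scoped value is bound for the duration of a single call, and the operation passed to callWhere() may now throw a checked exception that the compiler infers:

public class ScopedValueDemo {

    // A scoped value holding the current user for the duration of a call.
    private static final ScopedValue<String> CURRENT_USER = ScopedValue.newInstance();

    static String handleRequest() throws java.io.IOException {
        // CURRENT_USER is bound to "alice" only while the operation runs; the
        // checked IOException thrown by readData() is inferred and propagated.
        return ScopedValue.callWhere(CURRENT_USER, "alice", () -> readData());
    }

    static String readData() throws java.io.IOException {
        return "data for " + CURRENT_USER.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handleRequest()); // requires --enable-preview on JDK 23
    }
}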

JEP 480, Structured Concurrency (Third Preview), has been promoted from Candidate to Proposed to Target for JDK 23. This JEP proposes a third preview, without change, in order to gain more feedback from the previous two rounds of preview, namely: JEP 462, Structured Concurrency (Second Preview), delivered in JDK 22; and JEP 453, Structured Concurrency (Preview), delivered in JDK 21. This feature simplifies concurrent programming by introducing structured concurrency to “treat groups of related tasks running in different threads as a single unit of work, thereby streamlining error handling and cancellation, improving reliability, and enhancing observability.” The review is expected to conclude on May 27, 2024.
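
For context, a minimal sketch of how the preview API is typically used (method names as in the JDK 21 through 23 previews; illustrative only): related subtasks are forked inside a scope, joined as a single unit, and the first failure cancels the remaining work.

import java.util.concurrent.StructuredTaskScope;

public class StructuredDemo {

    record Page(String header, String body) {}

    static Page loadPage() throws Exception {
        // Both subtasks run in their own threads but form one unit of work:
        // if either fails, the other is cancelled and the failure propagates.
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            var header = scope.fork(() -> fetch("header"));
            var body = scope.fork(() -> fetch("body"));
            scope.join().throwIfFailed();
            return new Page(header.get(), body.get());
        }
    }

    static String fetch(String part) {
        return "<" + part + ">";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadPage()); // requires --enable-preview
    }
}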

JEP 471, Deprecate the Memory-Access Methods in sun.misc.Unsafe for Removal, has been promoted from Candidate to Proposed to Target for JDK 23. This JEP proposes to deprecate the memory access methods in the Unsafe class for removal in a future release. These unsupported methods have been superseded by standard APIs, namely: JEP 193, Variable Handles, delivered in JDK 9; and JEP 454, Foreign Function & Memory API, delivered in JDK 22. The review is expected to conclude on May 27, 2024.

JDK 23

Build 24 of the JDK 23 early-access builds was made available this past week featuring updates from Build 23 that include fixes for various issues. Further details on this release may be found in the release notes.

Jakarta EE 11

In his weekly Hashtag Jakarta EE blog, Ivar Grimstad, Jakarta EE Developer Advocate at the Eclipse Foundation, has provided an update on the upcoming GA release of Jakarta EE 11. Nine (9) specifications, namely – Jakarta Annotations 3.0, Jakarta Authorization 3.0, Jakarta Contexts and Dependency Injection 4.1, Jakarta Expression Language 6.0, Jakarta Interceptors 2.2, Jakarta RESTful Web Services 4.0, Jakarta Persistence 3.2, Jakarta Validation 3.1 and Jakarta WebSocket 2.2 – have been finalized for Jakarta EE 11. The remaining seven (7) updated specifications are in various stages of review.

Spring Framework

It was a busy week over at Spring as the various teams have delivered numerous milestone and point releases on Spring Boot, Spring Framework, Spring Cloud Data Flow, Spring Security, Spring Authorization Server, Spring for GraphQL, Spring Session, Spring Integration, Spring Modulith, Spring Batch, Spring AMQP, Spring for Apache Kafka and Spring for Apache Pulsar. More details may be found in this InfoQ news story.

Kotlin

JetBrains has released version 2.0 of Kotlin that finalizes the new frontend for the Kotlin compiler, codenamed K2, to unify all supported Kotlin platforms by which all compiler backends now share a significant amount of logic and a unified pipeline. Kotlin 2.0 promises faster compilation speed and support for Compose Multiplatform projects. JetBrains has also been developing a K2 Kotlin Mode, currently in alpha, for IntelliJ IDEA. Further details on this release may be found in the what’s new page. InfoQ will follow up with a more detailed news story.

Quarkus

Quarkus 3.10.2, the second maintenance release, ships with notable changes such as: a resolution to a QuarkusErrorHandler runtime error upon invoking a REST client endpoint that accepts a @BeanParam on a bean containing a List field annotated with @RestForm; and setting the correct configuration key upon generating a native build from Gradle. More details on this release may be found in the changelog.

Open Liberty

IBM has released version 24.0.0.5 of Open Liberty featuring resolutions to the following Common Vulnerabilities and Exposures (CVEs), in which a remote attacker can send a specially crafted request causing the server to consume memory resources, resulting in a denial of service:

  • CVE-2024-27268, a vulnerability in WebSphere Application Server Liberty versions 18.0.0.2 through 24.0.0.4.
  • CVE-2024-22353, a vulnerability in WebSphere Application Server Liberty versions 17.0.0.3 through 24.0.0.4.
  • CVE-2024-25026, a vulnerability in WebSphere Application Server versions 8.5 and 9.0, and IBM WebSphere Application Server Liberty 17.0.0.3 through 24.0.0.4.

Notable bug fixes include: a ClassCastException upon using the SIP Servlet 1.1 and WebSocket features; and the featureUtility connection test to a base URL of a custom repository returning an HTTP 400 response code and failing to recognize it as a working repository. Further details on all of the bug fixes may be found in this list of issues.

Microsoft

After more than a year in development, Microsoft has introduced the general availability of Semantic Kernel for Java, an SDK that meshes Large Language Models (LLMs) with popular programming languages. Version 1.0 delivers: tool calling that enables an AI service to request the invocation of native Java functions; support for both text-to-audio and audio-to-text conversions with their audio service; enhanced type conversion that allows users to register types and serialize/deserialize them to and from prompts; and the introduction of hooks to monitor key points such as function calls, enabling users to log or intercept them for better tracking and debugging. InfoQ will follow up with a more detailed news story.

Infinispan

Infinispan 15.0.4, the fourth maintenance release, provides notable changes such as: a resolution to a flaky test found in the TracingSecurityTest class; the addition of node names in the tracing spans; and a simplification of the default server configuration files. More details on this release may be found in the release notes and in this InfoQ news story on the release of Infinispan 15.0.0.

JHipster Lite

The release of JHipster Lite 1.9.0 ships with bug fixes, dependency upgrades and new features/enhancements such as: the addition of the Gradle frontend server plugin; removal of the gradleapp property from generate.sh as Gradle is now fully supported; and a simplification of lint-staged configuration. Further details on this release may be found in the release notes.

Langchain4j

Version 0.31.0 of LangChain for Java (LangChain4j) features new integrations: embedding models, Cohere and Jina; web search engines, Google and Tavily; the Jina scoring (re-ranking) model; and the Azure Cosmos DB for NoSQL embedding store. Breaking changes include: a rename of the Judge0 package from dev.langchain4j.code to dev.langchain4j.code.judge0; and a migration of the Anthropic language model from Gson to Jackson. More details on this release may be found in the release notes.

JEP 477 Enhances Beginner Experience with Implicitly Declared Classes and Instance Main Methods

MMS Founder
MMS A N M Bazlur Rahman

Article originally posted on InfoQ. Visit InfoQ

JEP 477, Implicitly Declared Classes and Instance Main Methods (Third Preview), has been promoted from its Proposed to Target to Targeted status. This JEP proposes to “evolve the Java language so that students can write their first programs without needing to understand language features designed for large programs.” This JEP moves forward the September 2022 blog post, Paving the on-ramp, by Brian Goetz, the Java language architect at Oracle. The latest draft of the specification document by Gavin Bierman, a consulting member of the technical staff at Oracle, is open for review by the Java community.

Java has long been recognized for its capabilities in building large, complex applications. However, its extensive features can be daunting for beginners who are just starting to learn programming. To address this, this JEP has introduced new preview features to simplify the language for new programmers. These features allow beginners to write their first programs without needing to understand complex language constructs designed for larger applications and empower experienced developers to write small programs more succinctly, enhancing their productivity and code readability.

Consider the classic Hello, World! example that is often a beginner’s first program:

public class HelloWorld {

    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}

With this JEP, the above program can be simplified to:

void main() {
    println("Hello, World!");
}

The proposal introduces several key features designed to simplify Java for beginners while maintaining its robust capabilities. One of the main highlights is the introduction of implicitly declared classes, allowing new programs to be written without explicit class declarations. In this new approach, all methods and fields in a source file are considered part of an implicitly declared class, which extends Object, does not implement interfaces, and cannot be referenced by name in source code. Additionally, the proposal introduces instance main methods, which no longer need to be static or public, and methods without parameters are also recognized as valid program entry points.

With these changes, developers can now write Hello, World! as:

void main() {
    System.out.println("Hello, World!");
}

Top-level members are interpreted as members of the implicit class, so we can also write the program as:

String greeting() { 
    return "Hello, World!"; 
}

void main() {
    System.out.println(greeting());
}

Or, using a field as: 

String greeting = "Hello, World!";

void main() {
    System.out.println(greeting);
}

Following the initial preview in JDK 21 (JEP 445) and subsequent updates in JDK 22 (JEP 463), the proposal has been refined further based on community feedback. For instance, in this JEP, implicitly declared classes now automatically import the following three static methods from the new java.io.IO class for simple textual I/O:

public static void println(Object obj);
public static void print(Object obj);
public static String readln(String prompt);

This change eliminates the need for the System.out.println incantation, thereby simplifying console output. Consider the following example:

void main() {
    String name = readln("Please enter your name: ");
    print("Pleased to meet you, ");
    println(name);
}

Many other classes declared in the Java API are useful in small programs. They can be imported explicitly at the start of the source file:

import java.util.List;

void main() {
    var authors = List.of("James", "Bill", "Bazlur", "Mike", "Dan", "Gavin");
    for (var name : authors) {
        println(name + ": " + name.length());
    }
}

However, with JEP 476, Module Import Declarations, implicitly declared classes automatically import all public top-level classes and interfaces from the java.base module, removing the need for explicit import statements for commonly used APIs such as java.util.List. This makes the development process more seamless and reduces the learning curve for new programmers. More details on JEP 476 may be found in this InfoQ news story.
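
For illustration, under that behaviour the earlier list program can drop its import entirely (a sketch assuming JDK 23 with preview features enabled):

// No explicit import: java.util.List is visible via the implicit java.base module import.
void main() {
    var authors = List.of("James", "Bill", "Bazlur", "Mike", "Dan", "Gavin");
    for (var name : authors) {
        println(name + ": " + name.length());
    }
}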

This is a preview language feature, available through the --enable-preview flag with the JDK 23 compiler and runtime. To try the examples above in JDK 23, you must enable the preview features:

  • Compile the program with javac --release 23 --enable-preview Main.java and run it with java --enable-preview Main; or,
  • When using the source code launcher, run the program with java --enable-preview Main.java; or,
  • When using jshell, start it with jshell --enable-preview.

Rather than introducing a separate dialect of Java, this JEP streamlines the declaration process for single-class programs. This approach facilitates a gradual learning curve, allowing beginners to start with simple, concise code and progressively adopt more advanced features as they gain experience. By simplifying syntax and minimizing boilerplate code, Java continues to uphold its reputation as a versatile and powerful programming language suitable for a wide range of applications. This enhancement not only makes Java more accessible to new programmers but also boosts productivity and readability for experienced developers working on smaller projects.

Podcast: Architecture Modernization with Nick Tune

MMS Founder
MMS Nick Tune

Article originally posted on InfoQ. Visit InfoQ

Introduction

Thomas Betts: Hello and welcome to another episode of the InfoQ podcast. I’m Thomas Betts, and today’s guest is Nick Tune, who recently co-authored the book, Architecture Modernization: Socio-technical Alignment of Software, Strategy, and Structure.

At the recent Explore DDD conference, Nick’s sessions were some of the most popular with more people trying to sign up than the room could hold. And since there was clearly a lot of interest in the talk of architecture modernization, I thought it’d be great to get Nick on the podcast and allow the InfoQ audience to hear some of his ideas. So Nick, welcome to the InfoQ podcast.

Nick Tune: Thank you.

When does architecture modernization make sense? [01:18]

Thomas Betts: So I think we can have a baseline that the subject of architecture modernization assumes that we have some amount of legacy software, legacy architecture. We’re not talking about doing this for greenfield and in most successful companies that legacy software is also what allows them to be profitable. This is the stuff they built years ago and now it’s just running the business. And that can sometimes lead to a mindset of “if it’s not broke, don’t fix it.” Why is it important for companies to invest in modernizing architecture?

Nick Tune: Well, that’s a good question and maybe it’s not sometimes. Sometimes it’s not the software that makes a company successful, it’s the brand reputation. You could have the worst product in your industry, you could have new competitors, new startups who are all building better products than you, but if people know your name, they’ve got a big investment with your company and your products.

Yes, maybe you can still be a successful market leader even in those kinds of situations. But it’s a very risky strategy. For a lot of companies, the more legacy you’ve got, the older the legacy gets and it slows you down for building new features and can make costs certainly more expensive. And it can also pose reliability risks as your business grows and you start to scale and build new features.

You might be building on the foundations that were designed to support one country and now it’s running in five or six countries and suddenly the whole non-functional requirements are completely different. So yes, I think becoming unreliable, they could prevent your company from growing. It’s definitely a situation I see a lot. Companies want to do things like move into new countries, support different customer segments, but it’s just not possible with a tech or it’s too expensive to achieve that.

Consider the business risks of not modernizing [02:59]

Thomas Betts: Yes, I like the idea of looking at your risks and if your company doesn’t have any imminent risks right now, maybe it’s fine, but if you can start to analyze and say, we can project that we’re going to have some of these risks that we need to deal with in the next two years because we’re on a platform that’s retiring or there’s too many vulnerable packages we’re pulling in or whatever it is. Sometimes you’re built on a framework that is just getting out of support. So are you seeing companies that you work with looking at those risks when they’re looking long-term and saying, “Okay, the way we need to address this is to look at our tech stack?”

Nick Tune: I think some yes, as some no. I’m currently working for a French company called Payfit, and one thing I’m really enjoying about this company is they’ve recognized that where they want to be in a few years requires an investment and some big changes to their technology. And so yes, I would say this company’s very forward-looking, understands that investing in technology now can have big business benefits in the future.

On the other side of things, I think there are some companies that just want to keep building features until they can’t build features anymore, until things have totally broken, everything’s failing, the system’s not working, they can’t build new features, they’ve got serious competition. It feels like in some companies you have to be in a real crisis situation before they’re like, “Okay, let’s modernize.” But to summarize, I think there is a mix. Some companies are very good at looking forward and some very short-term focused.

Thomas Betts: Yes, you talked about that crisis and that’s when you make very reactive decisions. And where I think you’re talking about for architecture modernization is let’s look at what we have now and make a proactive plan that avoids those crisis moments, how do you recognize that those crises are going to show up. So how do we recognize what they are and find a plan to get away from it.

Nick Tune: Yes, definitely. One of my favorite questions as a consultant is how long would you be a market leader or how long could you keep your current position if you made no improvements in your tech? So it’s kind of like what happens if you don’t modernize, basically? How long has your company got? Some companies are like, “Yes, we have to do something now.” Worked with one of the big companies and Amazon entered their industry and they were like, it’s like now crisis is here now we can’t afford to wait at all.

We should have started two years ago. Some companies are like, “Yes, we could probably go without doing anything for 18 months and still be the market leader for two years.” And I always think 18 months, two years is a good number actually because it’s far enough away that you’re not being reactive, but it’s kind of close where it’s like, yes, we do need to do something. And I think that’s a good balance where you’ve got some pressure but not too much pressure that you make bad decisions or you’re looking for silver bullets, that kind of thing.

Look ahead 18 months to two years [05:43]

Thomas Betts: And I like that because I think a lot of companies tend to do one year planning and then they’ll do longer three to five year planning, and that fits in that sweet spot of we don’t need to deal with this right now, but if you know it’s on the horizon, you start doing the advance work so that you avoid, “Oh, we only have six months to do the 18-month plan.”

Nick Tune: Yes, definitely. If people feel like it’s more than two years away, it’s like, oh yes, I don’t need to worry about that. We can do that next year.

How do you get started? [06:08]

Thomas Betts: Yes, that’s definitely true. So let’s say that we’ve got the company on board and they’ve decided we’re going to do this modernization. It’s a good idea. How do you get started? How do you look at all of the legacy systems you have and determine what do you need to prioritize?

Nick Tune: Yes, great question. In terms of where to get started, there are a lot of different ways you can approach this, I think, and depending on the size of your company, it can also vary. I think if you go to the extreme approach on one side, you would spend all this time mapping out all of your systems, building up a full picture. You’ll have a map of all of your systems, you’ll have a full target architecture defined, you’ll have a three-year roadmap. That’s probably a bit too much.

I’m more on the side of try and deliver something in three to six months: do some assessment of the current state, talk to teams, get the information that they have available, but try and pick something that you can modernize in three to six months that will give you some value. It’s not a wasted project. It might not be the most important thing or the most valuable thing, but it would be in your top five to 10.

So I would say try and get started early, validate some assumptions, try and start showing people that this is happening, you are delivering modernization, deliver some business wins, and then whilst you’re doing that, you can be doing more discovery and mapping in other areas.

I think most of the modernizations I work on are kind of distributed in nature. It’s not like a centralized project. It’s like, we’ve got all of these teams and these teams are all going to be involved in modernization. Some of the decision comes down to the teams themselves to decide how much effort they want to put in, and if you are leading modernization, you might try and encourage certain teams to do more work, do more modernization work, do more discovery, and you might be happy for others to step back. And that can also be driven by the people involved.

Your first modernization project to focus on might not be the area you think is best to modernize. It might be that those are the right people to deliver the first piece, they’ll modernize the bit that they worked on before, so you pick the right people to start modernizing. So yes, it’s a difficult one to give a concrete answer to, I think. But I would say having some overall mapping of the system at quite a reasonable level, at least understanding what are all of the different pieces of our architecture, and then also trying to deliver something in three to six months. Having those two things pulling you in different directions, I would say.

Finding a quick win to get buy-in on the modernization journey [08:34]

Thomas Betts: Yes, I think if you take that 18 month/two year mindset and you look at all the components of your system, all the big systems, all the small systems, you’re probably going to see things that say, “This is going to be a bigger problem if we don’t address it in 18 months.” And others like, “Well, it’d be nice to do but it’s going to take too long” or “it’s not as much of a priority.”

And when you’re saying pick something that you can do in three to six months, that kind of sets the mindset of let’s show a quick win, a relatively quick win, that we can do this architecture modernization. And does that help build buy-in with the company and the decision makers that now we can take on more, and then we can start planning more broadly? We can apply this mindset of modernization to the rest of our tech stack.

Nick Tune: And I think there’s different ways you can do that first three to six months. On one approach, it might be let’s just pick this one thing and modernize it and show what’s possible. But you could also have more of a narrative around that. So connecting the business strategy to the modernization, the tech strategy, you could have your system mapped out at a high level and you can say, “We know the business is going in this direction where we want to move into multiple countries and we know the current system is optimized for the current country we’re in, but it won’t support these three other countries.”

So at least you can say we’ve got this overall strategy of being multi-country or multi-product, for example, and everything we do is connected back to that. It could be something else. It could be our focus is on more stability or our focus might be on making our data more accessible so that we can feed it into LLMs and stuff. So having some narrative, so even though you haven’t figured out the full modernization plan, at least you can connect everything back to this theme that your company’s trying to achieve.

Thomas Betts: Yes, you mentioned feeding data into an LLM, and that was probably not on anyone’s roadmap two years ago, and now everyone’s talking about we have to have an LLM somewhere in our system or we won’t look like we’re keeping up. And I think that gets to: this is not just a technical problem, this is a business problem. And if you have business requirements, like you said, we want to expand multinationally, does our system support that? No.

Well, where are the pain points going to be in getting to that, and finding those things where you can say to the business, “We could just build something new, or we can take the time to modernize what we have, and that’s going to set us up for more success.” Have you seen that approach where somebody says let’s just do the new thing versus modernize the old thing?

Building new vs. modernizing what exists [11:04]

Nick Tune: I think this is quite a common thing. Should we do the new thing or should we modernize the old thing? I don’t know. I think it depends really. And I’m not sure what to say on that one. Maybe if I reframe the question to, should you modernize the old thing or build the new thing, how would I go about making that decision?

So I think building the new thing is good. Let’s say you’ve got an old tech stack, 10 years old, 20 years old, 30 years old, some legacy. The benefit of building something new is that you can start from a blank canvas and use all the latest technologies and you can say, “Here’s what good looks like. Here’s the best possible thing that we can create in this company.” That’s the standard for everything we modernize. You want to get to this level.

Sometimes they call that the new world and the old world, like here’s what the new world looks like. We want to get all of the old world into this new world. And obviously you can get things out to market quicker with that approach. Caveat being, you might have to still talk to the legacy or you might have to integrate with the legacy later, and that could be the legacy infrastructure. You build some stuff in the cloud on a test environment, but then it all has to connect back to the on-prem stuff and it’s like, yes, building all of this stuff in the new world didn’t get that much benefit anyway.

But the other problem with that approach is you’re not actually modernizing, you are not learning about how you modernize your system. Sometimes companies do all this new stuff, and when they come to modernize, they realize, “Oh, it might not be possible to do that. We built this new thing in AWS Lambda on the cloud, but because of our network rules and other kinds of policies, we’re not going to be able to do that anyway.”

So if you do build something new, you are missing out on learning opportunities about how you actually start modernizing your legacy. Whereas if you start by modernizing something, you start building up this playbook and knowledge of how to modernize other things afterwards.

So I think pros and cons with both approaches. I’ve seen both approaches being successful. If you can hedge your bets, that’s always the best option. If you can do something new and start to modernize something else, then that’s always helpful. You get the best of both.
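To picture what modernizing incrementally alongside a “new world” can look like, here is a minimal sketch of a strangler-fig style facade: a thin routing layer that sends the capabilities that have already been rebuilt to the new implementation and everything else to the legacy system. This is only an illustration of one common technique, not something Nick prescribes in the episode, and the route and handler names are hypothetical.

```python
# Minimal sketch of a strangler-fig style facade (illustrative only).
# Requests hit this thin routing layer; capabilities that have been
# "modernized" go to the new implementation, the rest still call legacy.

from typing import Callable, Dict


def legacy_handle(path: str, payload: dict) -> dict:
    # Placeholder for a call into the existing monolith / legacy system.
    return {"handled_by": "legacy", "path": path, "payload": payload}


def new_onboarding_handler(payload: dict) -> dict:
    # Placeholder for the first capability rebuilt in the "new world".
    return {"handled_by": "new-world", "customer": payload.get("customer")}


# Routes migrated so far; this grows as modernization proceeds.
MIGRATED_ROUTES: Dict[str, Callable[[dict], dict]] = {
    "/onboarding": new_onboarding_handler,
}


def handle_request(path: str, payload: dict) -> dict:
    """Route to the new implementation when one exists, else fall back to legacy."""
    handler = MIGRATED_ROUTES.get(path)
    if handler is not None:
        return handler(payload)
    return legacy_handle(path, payload)


if __name__ == "__main__":
    print(handle_request("/onboarding", {"customer": "acme"}))   # new world
    print(handle_request("/billing/invoice", {"id": 42}))        # still legacy
```

One reason this shape is often used is the learning Nick mentions: every route you move teaches you how to carve capabilities out of the legacy, instead of only proving what a green-field build looks like.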

Thomas Betts: Somebody gave the house analogy one time for building something new and it’s like, “Do you want to remodel what you have and you get to keep living in it and it’s going to be disruptive while all these contractors are showing up and building out the new kitchen, or are you going to build a new house down the street? And when it’s done, now you have to figure out how to move everything out of your old house and into the new house.”

And it’s two very different mindsets and it’s like, well, you’re throwing away all of the benefits of the old house. You’ve lived there for years. You have well-established routines, your stuff fits in all the places, and now you have to find new places to store everything. So when you say that there’s pros and cons to both, yes, most people can see that for non-software problems where you’re looking at do we change what we have or do we build something new?

Hedging your bets [13:53]

When you say, “Hedge your bets,” with the idea of building something new, you said that creates a new world, like a standard, this is what it’s going to be. Would you see that as the same functionality of what you have in the legacy system, or something that you don’t have yet, just to show here’s an example of what the architecture looks like?

Nick Tune: Yes, good question. So it can be either with this new world concept. I think the new world concept is something that can really blow people away. You might have people who don’t work in tech, more on the business side, and they’ve got used to deploying once every six weeks or once a month. They’ve got used to all of this bureaucracy. They’ve got used to developers deploying on weekends and doing three days of regression tests after every deployment, and they think that’s normal. Even if you build something new that’s new functionality in this new world, they can see like, “Oh wow, we’re deploying every day. All the tests are automated, we’ve got our compliance baked in. That’s amazing.”

Yes, it’s like you want to blow their minds basically, but it could also be existing functionality as well. But I think, yes, it depends on what projects come up in your roadmap and what learnings and what effect you want to have with that new world because if you get this amazing new world, but then the next step is you can’t actually get your existing system to look like that, things can fall apart at that point. So you want to be able to blow people away but be able to back that up.

Measuring success of a modernization journey [15:18]

Thomas Betts: Well, since you talked about releases, that gets to one of my next questions, which is, how do you measure success? I think you hinted at this a little bit when you were first talking about why you want to do architecture modernization, but I think some business people don’t understand how much of a culture shift it is and how much the patterns and the way of working can change significantly. So talk more about how you see those things change, not just release times shortening, but what does it look like for the business to adopt those new modern practices?

Nick Tune: So I think the metrics are crucial. Every modernization investment should be contributing to certain kinds of business metrics. I don’t want to give too much away about the company I’m currently working for, nothing sensitive, but I’ll talk in general, let’s say. So some things might be around customer efficiency. So it takes a week to onboard a customer, let’s get that down to a day or an hour.

There might be some manual steps involved. Let’s try and automate the whole onboarding process, for example. It could be around customer support, it could be like, for every 100 transactions placed on your platform, there are five customer support tickets, and that can get very expensive and that can stop your business scaling. So I think, yes, all kinds of metrics, either customer-facing or internal efficiency, and modernization should be supporting those in some way.
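To make those metrics concrete, here is a small sketch, with made-up field names and numbers, of how the two measures Nick mentions, onboarding lead time and support tickets per 100 transactions, could be computed from hypothetical event records. It is an illustration only, not taken from the episode.

```python
# Illustrative only: computing two modernization metrics from hypothetical
# event records. Field names and figures are made up for the example.

from datetime import datetime
from statistics import mean

onboardings = [
    {"signed_up": datetime(2024, 5, 1), "activated": datetime(2024, 5, 8)},
    {"signed_up": datetime(2024, 5, 2), "activated": datetime(2024, 5, 3)},
]

transactions = 1200      # transactions placed on the platform this period
support_tickets = 58     # support tickets raised in the same period


def onboarding_lead_time_days(records) -> float:
    """Average days from sign-up to activation."""
    return mean((r["activated"] - r["signed_up"]).days for r in records)


def tickets_per_100_transactions(tickets: int, txns: int) -> float:
    """Support load relative to platform usage."""
    return 100 * tickets / txns


print(f"Onboarding lead time: {onboarding_lead_time_days(onboardings):.1f} days")
print(f"Tickets per 100 transactions: {tickets_per_100_transactions(support_tickets, transactions):.1f}")
```

The point of tracking numbers like these is exactly what Nick describes: each modernization investment should move a customer-facing or internal-efficiency metric, not just improve the code.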

Resetting developer expectations [16:35]

Thomas Betts: So what does that change for the developer experience? That’s the next thing, because I think you can see significant improvements from, “I write code in this old way and it takes six weeks or a month or whatever it is for our release cycle, and I know I won’t see my code in production until after it goes through that round of testing and that release,” versus, “I can have my code deployed in minutes.” And that’s, like you said, it blows people away. They don’t realize just how different it is. And that gets to developers behave differently when they have that different expectation of how soon their stuff is out in production, don’t they?

Nick Tune: Yes, in my career, it’s definitely the case that when you have a reliable way of quickly getting code into production, there’s less bureaucracy, there’s less need to estimate things, and the whole process becomes more efficient. When I first moved to London almost 15 years ago, I worked at one company and we had this release process, and it would be like two or three days of manual testing, and there were a lot of safeguards in the process, a lot of checks, a lot of inefficiencies. And then I went to work for a company called Seven Digital, and we were clicking one-click deployments to production every day.

We would come to work in the morning, someone’s raised a bug, “Okay, let’s just fix that bug, deploy to production before we go for our mid-morning coffee break and then carry on with the work we were supposed to do.” When you have that super-efficient pipeline, you can quickly make changes like that.

You don’t have all of these games about trying to get work put into the backlog and deployed in the next sprint, or doing some emergency release. So yes, you can be much more responsive to business needs, much more focused on getting work done, and time to market is much faster.
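As a rough picture of what a “one-click” path to production can look like, here is a hypothetical sketch of a deploy script that runs the automated tests and only ships if they pass. It is an illustration only, not how Seven Digital’s pipeline actually worked, and the build and deploy commands are placeholders for whatever a given stack uses.

```python
# Rough sketch of a "one command to production" script (illustrative only;
# the test, build, and deploy commands are placeholders).

import subprocess
import sys


def run(cmd: list) -> None:
    print(f"$ {' '.join(cmd)}")
    subprocess.run(cmd, check=True)  # raise if any step fails


def main() -> int:
    try:
        run(["pytest", "-q"])                        # fast automated test suite
        run(["docker", "build", "-t", "app:latest", "."])
        run(["./scripts/deploy.sh", "production"])   # hypothetical deploy step
    except subprocess.CalledProcessError as err:
        print(f"Pipeline stopped: {err}", file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```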

And there are other benefits as well in that area, definitely. Hiring and retention, I would say: if you’ve got some old legacy systems, people can lose motivation very quickly working in those kinds of environments. Whereas if you’re using modern tech and modern approaches, it’s much easier to hire really good people.

Thomas Betts: Like I said, these all tie together technology benefits but also a lot of business benefits. And so we were talking about developers get to see their code deployed faster, but that means they get to solve a customer bug faster. And that goes back to your example of, “Oh, we get five bugs reported for every hundred things released.” Well, that’s not a problem if you can fix them in minutes. It’s a problem if they sit open for two weeks.

Nick Tune: Yes, exactly. The cost of fixing bugs gets much lower. All the work you do is more efficient. And as another benefit of that, developers typically get to spend a bit more time thinking about the product, being involved in the domain itself, so they become more knowledgeable about your business, able to contribute more to actually improving the product and solving problems, and less time figuring out how to get things into production or testing things that don’t really add much value.

When is architecture modernization done? [19:26]

Thomas Betts: So if we go through an architecture modernization journey, are we ever done or is this something that there’s always, once you’ve started the path, you just keep finding ways to continually improve the architecture?

Nick Tune: Yes, I mean continuous improvement is crucial. In terms of modernization, is it done? I would say there’s not really an end date in most cases. I would say the amount of time people talk about modernization just goes down until it’s not that important anymore. And there’s a blurry line between modernizing and not modernizing, but sometimes you’ve got clear milestones, like get out of the data center onto the cloud.

Once that’s been achieved, you’ve hit a big milestone and people might think, “Oh yes, modernization is done at that point,” but you might have done a lot of lift and shift to get there, and so you still might be actually improving the architecture itself, might be investing a lot after that. So yes, it’s normally not like a party at the end where it’s all done, it’s just that people stop talking about it, even if there are still bits of modernization going on, I would say.

Thomas Betts: Yes, I think that was actually one of the talks at QCon London last month about modernization. They had to do a lift and shift first to get into the cloud and then they could start refactoring. But if they had tried to refactor all their code at once on the way into the cloud, it was going to take so long that they would’ve lost their customers, who were asking for a cloud-based solution rather than an on-prem solution, because it was COVID times and no one was going into the office anymore. They couldn’t access the software, they needed it cloud-based. So sometimes it is a two-step process. And so I like that you’re talking about the project may initially be “get us out of the data center, get us into the cloud,” but the process is ongoing.

Prerequisites for a modernization journey [21:06]

And my next question was going to be are there ever prerequisites either technical or organizational before a company can start on their modernization journey? And the one that came to mind was having a product-led mindset as opposed to a project mindset.

Nick Tune: I think even that one is… You could challenge that one. I think there are a lot of companies that don’t really have a clear strategy. Where I’m working at Payfit now, the business strategy is very clear and it’s very obvious how some big improvements in the architecture over the next few years will enable that. In some companies it can be the opposite. It can be, yes, we might move into some new countries.

You get the chief product officer who’s talking about, yes, we’re going to move into new countries, but the CEO’s like, “No, we probably won’t do that.” And so on the technical side you are like, “What are we trying to achieve here?” You might not have this clear strategy, but you might still modernize in a way where when they do have more clarity on the strategy, you’ll be able to respond. It’s a difficult situation because you might optimize things that actually aren’t that useful to how the company grows in the future. You might end up modernizing some legacy that just gets thrown away. It is not needed anymore.

So I wouldn’t say that’s a prerequisite, but if you do have a very compelling business need and you can actually say, “Look, the business has this specific objective to achieve, the business knows it can double, triple, quadruple revenue, but we can’t do that with the current system.” That’s easy mode I would say, because you know what you need to modernize to achieve the business goals.

So it’s not a prerequisite, but I would say definitely do your homework, put in a lot of effort, because if you can get clarity on the strategy, then it will make your modernization much easier. You’re not guessing what are we trying to optimize for, where’s the company going? You’ve got those clear benchmarks. So not a prerequisite, but I would say that one is something I would focus on as much as possible.

Having or hiring the skills to do the modernization work [23:04]

I think the other prerequisite is having the skills.

Sometimes I feel sorry for these teams. Some new leader’s come in, or the company’s got some ambition, and they’re going to modernize, and it’s like these developers who’ve been working in a monolith in this more classical way of working. The company’s like, “Right, now you’ve got to modernize all this stuff and we’ve got to be able to innovate much faster in the next couple of years.” If you haven’t been modernizing systems, if you’ve just been working in one legacy system, fixing bugs, adding small features, it’s very difficult to… You just don’t have the skills or experience basically. So I think recognizing what skills need to be brought in-house.

I’m not saying hire consultants or partner with consultancies. I think there are different strategies there, definitely multiple ways to achieve that, but usually you need to bring in some outside help, external help. And also, kind of a counterpoint to that one is, bringing in outside help, they can’t do all of that by themselves in any situation. I think a definite prerequisite for me is you need to put aside time and money for learning and training and up-skilling of your employees. Having the business narrative that drives the strategy and investing in up-skilling the people, I would say those are probably the two most important.

Thomas Betts: You don’t want to bring in a consultant to do the work of the 10 people you have. You want them to make the 10 people you have start working in that modernization mindset.

Nick Tune: Yes, exactly. 100%. Bringing in people who have the skills you want and using them in a way where you end up with those skills in your company, so you’re not dependent on those consultants. But there are always different scenarios. You could hire a consultancy to come in and build something and show what’s possible, but I would only treat that as a very short-term solution. I wouldn’t do that for the whole project.

Team Topologies [25:01]

Thomas Betts: When you’re doing this, what do you like to see for team structures? I know Team Topologies gets mentioned a lot. Is that something you’re finding, that people don’t have a Team Topologies mindset, they don’t have stream-aligned teams if they’re doing that monolithic legacy development, and you think they should have them for doing more modern work?

Nick Tune: Yes, I’m quite lucky in that respect. My views maybe aren’t representative of what other people would find, because I’ve been talking about domain-driven design a lot. People normally come to me at the point where that’s something they want to achieve: we want to have these different domains owned by different teams so we can have different backlogs and have those teams all working independently. For me, people have already made the decision that we need to have teams that can work more independently, applying concepts like DDD and Team Topologies.

Put in the effort to change old business processes [25:49]

Thomas Betts: Also, you said you’ve been brought in as a consultant, but is that one of those, here’s how we’re going to build the new system and here’s how we’re going to change? You have to look at the whole picture. It’s not just here’s how we write code on the new tech stack. It’s also we’re going to change some of how we do our day-to-day work. And that, again, takes business buy-in as well, to get new processes that maybe don’t have the bureaucracy they used to have.

Nick Tune: Yes, definitely, 100%. Those old processes can be very, very ingrained and difficult to change. And you get companies where you’ve got someone who’s responsible for this long, complex release process that involves manual sign-offs and approvals, and they’re like, well, in this vision you are proposing, do I even have a job? So you encounter situations like that. There are all kinds of frictions and difficulties.

But to answer your point, I would say that’s definitely one of the things people don’t think about or are not aware of or don’t anticipate upfront. So yes, can we modernize? Can we change your organization structure? But in terms of how you prioritize work, how you estimate things, how you release things, all those discovery and delivery processes, sometimes they’re not really ready to change those things.

It can also be around budgeting and finance, which I would say is always an interesting area. I worked with one company and they wanted to modernize, build this platform, but the platform team had no budget. The only way the platform team could get money to build stuff was if there was another team in the company who asked them to build something and gave them some of their budget. And so the platform team had no autonomy, they weren’t able to have a more long-term perspective. It was very much like they were just an internal outsourcing provider.

They just get told what to build, how to build it, and the leadership of the company thought that was a good approach, a good idea. They didn’t see why that wouldn’t work, why that wasn’t effective. So yes, some things can be quite ingrained, whether it’s processes, compliance, bureaucracy, funding and finance budgets, all that kind of stuff. Yes, it touches almost everything really.

Creating personal relationships is key [27:59]

Thomas Betts: And so when you’re trying to do this, are there any key techniques? I mean we mentioned team topologies. Are there other tools or techniques or processes that you would recommend and what are the benefits of using those?

Nick Tune: To be honest, this might not sound like a great answer. I wasn’t always good at this. Maybe I’m still not good at this, but I think just have good relationships with people.

One thing I’ve learned over 15 years is if you just build good relationships with people, you can solve a very hard problem sometimes by just going to lunch together. When I worked with one company, there was a head of IT, he’d been there for quite a while. I was proposing new ideas, tried to be polite, we didn’t have a great relationship. And then after six months someone said to me, “Oh yes, he’s been saying some stuff about how you did some good work together.” I’m like, “Oh, that just seemed like any other meeting I had with him.” But basically after six months, something changed in him where he was like, “I kind of trust that Nick guy now. Yes, it’s not so bad after all.”

And after that point, having some of the deeper, more difficult conversations became a lot easier. So I don’t know if it’s the answer you’re looking for, and there are no silver bullets, but if there was a silver bullet, I would definitely say it’s building good relationships with all the stakeholders involved, learning about what they’re working on, what their challenges are, what their problems are, just being someone that they trust.

And then when you do have these difficult moments, difficult conversations, at least they respect you and trust you on a personal level and might be willing to open up more with you, give you the benefit of the doubt, those kinds of human aspects.

Thomas Betts: Yes, nothing can beat having good trusting relationships. If you aren’t trusted, especially in your case of coming in as a consultant or new to an employer and you’re saying, “I’m going to switch stuff up,” that can be jarring if they don’t say, “I trust him to be able to go and switch stuff up.”

Trade-offs when deciding to modernize architecture [29:47]

My final question, I wanted to end on kind of a curve ball. So architects are familiar with making trade-offs. If we go down this modernization path, what are some of the things that we’d be giving up?

Nick Tune: You mean what would the architects be giving up?

Thomas Betts: Not just from an architecture perspective, but let’s say it’s the business deciding we’re going to prioritize this architecture modernization work for the next 18 months, two years. What are some of the trade-offs? They’re saying, we’re going to do this instead of what? What’s the trade-off?

Nick Tune: Oh yes, definitely. Well, this is the name of the game really. Modernization is about taking some short-term, I would like to say pain, but let’s say compromises. You are reducing the things you deliver now so that you can do more in the future. So I think it’s all about compromises. It’s all about maybe you don’t fix those bugs, maybe you don’t build those new features. Maybe you don’t expand into a new country, but you wait for a year or two years and now you can expand into four countries and operate at scale more effectively. So yes, I think the compromises usually come on the product side, usually not adding more features, I would say.

Thomas Betts: Yes, that seems to be the answer usually when it comes to tech debt: I need to work on my tech debt, and that means I’m not working on features. But this takes that to a whole other order of magnitude potentially, where we’re going to be doing a big tech debt rewrite, for lack of a better term, and the trade-off is the business is not going to get stuff for a little while.

Nick Tune: Sometimes it’s not features. Sometimes it’s like whole new products or expanding into new countries. We’re going to put on hold this big business initiative, not just the features in the backlog, but something quite big and significant.

Thomas Betts: Not just one little bug, yes. So the trade-off is, we don’t get to do that big win, that big objective right now, but you’re going to make it easier to do more of those in the future. Instead of one country now and then one country next year, if you wait two years, maybe I’ll give you five countries much more easily.

Nick Tune: And the thing is, a lot of the time it might look like you are compromising or sacrificing, but a lot of those projects might not have been feasible anyway. You see this sometimes: you get six months in or a year in and you are like, “Well, trying to do this rollout to a new country on our existing architecture has just gone completely not as we expected. It’s more expensive. We’re hitting all these kinds of problems and edge cases. We get more support tickets than we thought.” So you might even feel like you’re compromising, but you are not actually compromising in some cases.

Thomas Betts: Yes, I think someone pointed out to me one time, it’s not that if we go down path A versus path B, one is going to take a longer timeline to get there. It was, if we go down path A, we cannot get to our destination. We have to switch to path B.

Nick Tune: Absolutely. Yes. In some cases it’s just not possible to get there with the current system.

Thomas Betts: Well, I think that’s a great place to wrap things up. Nick, thanks again for talking with me today. If people want to follow you or see you, are you going to be at conferences or anywhere in the future? How can they get in touch with you?

Nick Tune: Yes, normally quite a few conferences. I’m doing a few less this year, so probably you can follow me on LinkedIn. I put all my notifications there.

Thomas Betts: And then listeners, we hope you’ll join us again for another episode of the InfoQ podcast.

Consider the business risks of not modernizing [02:59]

Thomas Betts: Yes, I like the idea of looking at your risks and if your company doesn’t have any eminent risks right now, maybe it’s fine, but if you can start to analyze and say, we can project that we’re going to have some of these risks that we need to deal with in the next two years because we’re on a platform that’s retiring or there’s too many vulnerable packages we’re pulling in or whatever it is. Sometimes you’re built on a framework that is just getting out of support. So are you seeing companies that you work with looking at those risks when they’re looking long-term and saying, “Okay, the way we need to address this is to look at our tech stack?”

Nick Tune: I think some yes, as some no. I’m currently working for a French company called Payfit, and one thing I’m really enjoying about this company is they’ve recognized that where they want to be in a few years requires an investment and some big changes to their technology. And so yes, I would say this company’s very forward-looking, understands that investing in technology now can have big business benefits in the future.

On the other side of things, I think there are some companies that just want to keep building features until they can’t build features anymore, until things have totally broken, everything’s failing, the system’s not working, they can’t build new features, they’ve got serious competition. It feels like in some companies you have to be in a real crisis situation before they’re like, “Okay, let’s modernize.” But to summarize, I think there is a mix. Some companies are very good at looking forward and some very short-term focused.

Thomas Betts: Yes, you talked about that crisis and that’s when you make very reactive decisions. And where I think you’re talking about for architecture modernization is let’s look at what we have now and make a proactive plan that avoids those crisis moments, how do you recognize that those crises are going to show up. So how do we recognize what they are and find a plan to get away from it.

Nick Tune: Yes, definitely. One of my favorite questions as a consultant is how long would you be a market leader or how long could you keep your current position if you made no improvements in your tech? So it’s kind of like what happens if you don’t modernize, basically? How long has your company got? Some companies are like, “Yes, we have to do something now.” Worked with one of the big companies and Amazon entered their industry and they were like, it’s like now crisis is here now we can’t afford to wait at all.

We should have started two years ago. Some companies are like, “Yes, we could probably go without doing anything for 18 months and still be the market leader for two years.” And I always think 18 months, two years is a good number actually because it’s far enough away that you’re not being reactive, but it’s kind of close where it’s like, yes, we do need to do something. And I think that’s a good balance where you’ve got some pressure but not too much pressure that you make bad decisions or you’re looking for silver bullets, that kind of thing.

Look ahead 18 months to two years [05:43]

Thomas Betts: And I like that because I think a lot of companies tend to do one year planning and then they’ll do longer three to five year planning, and that fits in that sweet spot of we don’t need to deal with this right now, but if you know it’s on the horizon, you start doing the advance work so that you avoid, “Oh, we only have six months to do the 18-month plan.”

Nick Tune: Yes, definitely. If people feel like it’s more than two years away, it’s like, oh yes, I don’t need to worry about that. We can do that next year.

How do you get started? [06:08]

Thomas Betts: Yes, that’s definitely true. So let’s say that we’ve got the company on board and they’ve decided we’re going to do this modernization. It’s a good idea. How do you get started? How do you look at all of the legacy systems you have and determine what do you need to prioritize?

Nick Tune: Yes, great question. In terms of where to get started, a lot of different ways you can approach this, I think, and depending on the size of your company, can also vary. I think if you go to the extreme approach in one side you would spend all this time mapping out all of your systems, building up a full picture. You’ll have one of your systems, you’ll have a full target architecture to find, you’ll have a three-year roadmap. That’s probably a bit too much.

I’m more on the side of try and deliver something in three to six months, do some assessment of the current states, talk to teams, get the information that they have available, but try and pick something that you can modernize in three to six months and it will give you some value. It’s not a wasted project. It might not be the most important thing or the most valuable thing, but it would be in your top five to 10.

So I would say try and get started early, validate some assumptions, try and start showing people that this is happening, you are delivering modernization, deliver some business wins, and then whilst you’re doing that, you can be doing more discovery of mapping in other areas.

I think most of the modernizations I work on is kind of a distributed nature. It’s not like a centralized project. It’s like we’ve got all of these teams and these teams are all going to be involved in modernization. Some of the decision that comes up to the teams themselves to decide how much effort they want to do and if you are leading modernization, you might try and encourage certain teams to do more work, do more modernization work, do more discovery, and you might be happy for others to step back. And that can also be driven by the people involved.

Your first modernization project to focus on, it might not be the area you think is best to modernize. It might be those are the right people to deliver the first piece, will modernize the bit that they worked on before, so will pick the right people to start modernizing. So yes, difficult one to give a concrete answer I think. But I would say having some overall mapping of the system at quite a reasonable level, at least understanding what are all of the different pieces of our architecture, and then also trying to deliver something in three to six months. Having those two things pulling you in different directions, I would say.

Finding a quick win to get buy-in on the modernization journey [08:34]

Thomas Betts: Yes, I think if you take that 18 month/two year mindset and you look at all the components of your system, all the big systems, all the small systems, you’re probably going to see things that say, “This is going to be a bigger problem if we don’t address it in 18 months.” And others like, “Well, it’d be nice to do but it’s going to take too long” or “it’s not as much of a priority.”

And when you’re saying pick something that you can do in three to six months, that kind of sets the mindset of let’s show a quick win, relatively quick win that we can do this architecture modernization and does that help buy in with the company and the decision makers that now we can take on more, then we can start planning a more broad spread. We can apply this mindset of modernization to the rest of our tech stack.

Nick Tune: And I think there’s different ways you can do that first three to six months. On one approach, it might be let’s just pick this one thing and modernize it and show what’s possible. But you could also have more of a narrative around that. So connecting the business strategy to the modernization, the tech strategy, you could have your system mapped out at a high level and you can say, “We know the business is going in this direction where we want to move into multiple countries and we know the current system is optimized for the current country we’re in, but it won’t support these three other countries.”

So at least you can say we’ve got this overall strategy of being multi-country or multi-product, for example, and everything we do is connected back to that. It could be something else. It could be our focus is on more stability or our focus might be on making our data more accessible so that we can feed it into LLMs and stuff. So having some narrative, so even though you haven’t figured out the full modernization plan, at least you can connect everything back to this theme that your company’s trying to achieve.

Thomas Betts: Yes, you’re talking about, you mentioned feeding data into an LLM, that was probably not on anyone’s roadmap two years ago, and now everyone’s talking about we have to have an LLM somewhere in our system or we won’t look like we’re keeping up. And I think that gets to this is not just a technical problem, this is a business problem. And if you have business requirements, like you said, we want to expand multinational, does our system support that? No.

Well, where are the pain points going to be in getting to that and finding those things that you can say, “We need to get to the business, we could just build something new” or we can take the time to modernize what we have and that’s going to set us up for more success. Have you seen that approach where somebody says let’s just do the new thing versus modernize the old thing?

Building new vs. modernizing what exists [11:04]

Nick Tune: I think this is quite a common thing. Should we do the new thing or should we modernize the old thing? I don’t know. I think it depends really. And I’m not sure what to say on that one. Maybe if I reframe the question to, should you modernize the old thing or build the new thing, how would I go about making that decision?

So I think building the new thing is good. Let’s say you’ve got an old tech stack, 10 years old, 20 years old, 30 years old, some legacy. The benefit of building something new is that you can start from a blank canvas and use all the latest technologies and you can say, “Here’s what good looks like. Here’s the best possible thing that we can create in this company.” That’s the standard for everything we modernize. You want to get to this level.

Sometimes they call that the new world and the old world, like here’s what the new world looks like. We want to get all of the old world into this new world. And obviously you can get things out to market quicker with that approach. Caveat being, you might have to still talk to the legacy or you might have to integrate with the legacy later, and that could be the legacy infrastructure. You build some stuff in the cloud on a test environment, but then it all has to connect back to the on-prem stuff and it’s like, yes, building all of this stuff in the new world didn’t get that much benefit anyway.

But the other problem with that approach is you’re not actually modernizing, you are not learning about how you modernize your system. Sometimes companies do all this new stuff and it’s like when they come to modernize, they realized, “Oh, it might not be possible to do that. We built this new thing in AWS Lambda on the cloud, but because of our network rules and other kind of policies, we’re not going to be able to do that anyway.”

So if you do build something new, you are missing out on learning opportunities about how you actually start modernizing your legacy. Whereas if you start by modernizing something, you start building up this playbook and knowledge of how to modernize other things afterwards.

So I think pros and cons with both approaches. I’ve seen both approaches being successful. If you can hedge your bets, that’s always the best option. If you can do something new and start to modernize something else, then that’s always helpful. You get the best of both.

Thomas Betts: Somebody gave the house analogy one time for building something new and it’s like, “Do you want to remodel what you have and you get to keep living in it and it’s going to be disruptive while all these contractors are showing up and building out the new kitchen, or are you going to build a new house down the street? And when it’s done, now you have to figure out how to move everything out of your old house and into the new house.”

And it’s two very different mindsets and it’s like, well, you’re throwing away all of the benefits of the old house. You’ve lived there for years. You have well-established routines, your stuff fits in all the places, and now you have to find new places to store everything. So when you say that there’s pros and cons to both, yes, most people can see that for non-software problems where you’re looking at do we change what we have or do we build something new?

Hedging your bets [13:53]

When you say, “Hedge your bets,” the idea of building something new, you said that creates a new world, like standard, this is what it’s going to be. Would you see that as the same functionality of what you have in the legacy system or something that you don’t have yet just to show here’s an example of what the architecture looks like?

Nick Tune: Yes, good question. So it can be either of this new world concept. I think the new world concept is something that can really blow people away. You might have people who don’t work in tech more on the business side, and they’ve got used to deploying once every six weeks or once a month. They’ve got used to all of this bureaucracy. They’ve got used to developers deploying on weekends and doing three days of regression tests after every deployment, and they think that’s normal. Even if you build something new that’s new functionality in this new world and they can see like, “Oh wow, we’re deploying a day. All the tests automated, we’ve got our compliance baked in. That’s amazing.”

Yes, it’s like you want to blow their minds basically, but it could also be existing functionality as well. But I think, yes, it depends on what projects come up in your roadmap and what learnings and what effect you want to have with that new world because if you get this amazing new world, but then the next step is you can’t actually get your existing system to look like that, things can fall apart at that point. So you want to be able to blow people away but be able to back that up.

Measuring success of a modernization journey [15:18]

Thomas Betts: Well, since you talked about releases, that got to one of my next questions, which is, how do you measure success and I think you hinted at this a little bit and when you were first talking about why do you want to do architecture modernization, but I think some business people don’t understand how much of a culture shift and how much of the patterns and the way of working can change significantly. So talk more about how you see those things change and not just release times shortening down, but what does it look like for the business to adopt those new modern practices?

Nick Tune: So I think the metrics are crucial. Every modernization investment should be contributing to certain kind of business metrics. I don’t want to give too much away about the company I’m currently working for, nothing sensitive, but I’ll talk in general, let’s say. So some things might be around customer efficiency. So it takes a week to onboard a customer, let’s get that down to a day or an hour.

There might be some manual steps involved. Let’s try and automate the whole onboarding process for example. Could be around customer support, it could be like for every 100 transactions placed on your platform, there’s like five customer support tickets and that can get very expensive and that can stop your business scaling. So I think, yes, all kinds of metrics, either customer facing or internal efficiency modernization should be supporting those in some way.

Resetting developer expectations [16:35]

Thomas Betts: So what does it change for the developer experience? That’s the next thing because I think you can see significant improvements from, “I write code in this old way and it takes six weeks or a month or whenever it is for our release cycle, and I know I won’t see my code in production until after it goes through that round of testing and that release,” versus, “I can have my code deployed in minutes.” And that’s, like you said, it blows people away. They don’t realize just how different it is. And that gets to developers behave differently when they have that different expectation of how soon their stuff is out in production, don’t they?

Nick Tune: Yes, in my career it’s definitely when you have a reliable way of quickly getting code into production, there’s less bureaucracy, there’s less need to estimate things and the whole process becomes more efficient. When I first moved to London almost 15 years ago, I worked at one company and we had this release process and it would be like two or three days of manual testing and there was a lot of safeguards in the process, a lot of checks, a lot of inefficiencies. And then I went to work for a company called Seven Digital, and we were clicking one-click deployments to production every day.

We would come to work in the morning, someone’s raised a bug, “Okay, let’s just fix that bug, deploy to production before we go for our mid-morning coffee break and then carry on with the work we were supposed to do.” When you have that super-efficient pipeline, you can quickly make changes like that.

You don’t have all of these games about trying to get work put into the backlog and deployed for the next sprints or doing some emergency release. So yes, you can be much more responsive to business needs, much more focused on getting work done, time to market is much faster.

And there’s other benefits as well in that area definitely. Hiring and retention, I would say definitely if you’ve got some old legacy systems, people can lose motivation, let’s say very quickly working in those kind of environments. Whereas if you’re using modern tech modern approaches, it’s much easier to hire really good people.

Thomas Betts: Like I said, these all tie in technology benefits but also a lot of business benefits. And so we were talking about developers get to see their code get deployed faster, but that means they get to solve a customer bug faster. And that’s going back to your example of, “Oh, we get five bugs reported for every a hundred things released.” Well that’s not a problem if you can fix them in minutes. It’s a problem if they sit open for two weeks.

Nick Tune: Yes, exactly. The cost of fixing bugs gets much lower. All the work you do is more efficient. And as another benefit of that, developers typically get to spend a bit more time thinking about the products being involved in the domain itself so they become more knowledgeable about your business, able to contribute more to actually improving the product and solving problems and less figuring out how to get things in production or test things that don’t really add too much value.

When is architecture modernization done? [19:26]

Thomas Betts: So if we go through an architecture modernization journey, are we ever done or is this something that there’s always, once you’ve started the path, you just keep finding ways to continually improve the architecture?

Nick Tune: Yes, I mean continuous improvement is crucial. In terms of modernization, is it done? I would say there’s not really an end date in most cases. I would say the amount of time people talk about modernization just goes down until it’s not that important anymore. And there’s a blurry line between modernizing and not modernizing, but sometimes you’ve got clear milestones, like get out of the data center onto the cloud.

Once that’s been achieved, you’ve hit a big milestone and people might think, “Oh yes, modernization is done at that point,” but you might have done a lot of lift and shift to get there, and so you still might be actually improving the architecture itself, might be investing a lot after that. So yes, it’s normally not like a party at the end where it’s all done, it just people stop talking about it, even if there is still bits of modernization going on I would say.

Thomas Betts: Yes, I think that was actually one of the talks that QCon London last month was about modernization. They had to do a lift and shift first to get into the cloud and then they could start refactoring. But if they tried to refactor all their code once into the cloud, it was going to take so long, they would’ve lost their customers who were asking for a cloud-based solution rather than on-prem solution because it was covid times and no one was going into the office anymore. They couldn’t access the software, they needed it cloud-based. So sometimes it is a two-step process. And so I like that you’re talking about the project may be initially get us out of the data center, get us into the cloud, but the process is ongoing.

Prerequisites for a modernization journey [21:06]

And my next question was going to be are there ever prerequisites either technical or organizational before a company can start on their modernization journey? And the one that came to mind was having a product-led mindset as opposed to a project mindset.

Nick Tune: I think even that one is… You could challenge that one. I think there are a lot of companies that don’t really have a clear strategy. Where I’m working at Payfit now, the business strategy is very clear and it’s very obvious how some big improvements in the architecture over the next few years will enable that. In some companies it can be the opposite. It can be, yes, we might move into some new countries.

You get the chief product officer who’s talking about, yes, we’re going to move into new countries, but the CEO’s like, “No, we probably won’t do that.” And so on the technical side you are like, “What are we trying to achieve here?” You might not have this clear strategy, but you might still modernize in a way where when they do have more clarity on the strategy, you’ll be able to respond. It’s a difficult situation because you might optimize things that actually aren’t that useful to how the company grows in the future. You might end up modernizing some legacy that just gets thrown away. It is not needed anymore.

So I wouldn’t say that’s a prerequisite, but if you do have a very compelling business need and you can actually say, “Look, the business has this specific objective to achieve, the business knows it can double, triple, quadruple revenue, but we can’t do that with the current system.” That’s easy mode I would say, because you know what you need to modernize to achieve the business goals.

So it’s not a prerequisite, but I would say definitely do your homework, put a lot of effort because if you can get clarity on the strategy, then it will make your modernization much easier. You’re not guessing what are we trying to optimize for, where’s the company going? You’ve got those clear benchmarks. So not a prerequisite, but I would say that one is something I would focus on as much as possible.

Having or hiring the skills to do the modernization work [23:04]

I think the other prerequisite is having the skills.

Sometimes I feel sorry for these teams. Some new leader’s come in or the company’s got some ambition and they’re going to modernize and it’s like these developers who’ve been working in a monolith in this more classical way of working. Company’s, “Right now, you’ve got to modernize all this stuff and we’ve got to be able to innovate much faster in the next couple of years.” If you haven’t been modernizing systems, if you’ve just watching in one legacy system, fixing bugs, adding small features, it’s very difficult to… You just don’t have the skills or experience basically. So I think recognizing what skills need to be bought in-house.

I’m not necessarily saying hire consultants or partner with consultancies. There are different strategies there, definitely multiple ways to achieve that, but usually you need to bring in some outside, external help. And a counterpoint to that: the outside help can’t do all of it by themselves in any situation. I think a definite prerequisite for me is that you need to put aside time and money for learning, training and up-skilling your employees. Having the business narrative that drives the strategy and investing in up-skilling the people, I would say those are probably the two most important.

Thomas Betts: You don’t want to bring in a consultant to do the work of the 10 people you have. You want them to make the 10 people you have start working in that modernization mindset.

Nick Tune: Yes, exactly, 100%. Bring in people who have the skills you want and use them in a way where you end up having those skills in your company, so you’re not dependent on those consultants. But there are exceptions in different scenarios. You could hire a consultancy to come in and build something and show what’s possible, but I would only treat that as a very short-term solution. I wouldn’t do that for the whole project.

Team Topologies [25:01]

Thomas Betts: When you’re doing this, what do you like to see for team structures? I know Team Topologies gets mentioned a lot. Is that something where you’re finding people don’t have a Team Topologies mindset, they don’t have stream-aligned teams because they’re doing that monolithic legacy development, and you think they should have that for doing more modern work?

Nick Tune: Yes, I’m quite lucky in that respect. My views maybe aren’t representative of what other people would find, because I’ve been talking about domain-driven design a lot. People normally come to me at the point where that’s something they want to achieve: we want to have these different domains owned by different teams so we can have different backlogs and have those teams all working independently. For me, people have already made the decision that they need teams that can work more independently and are applying concepts like DDD and Team Topologies.

Put in the effort to change old business processes [25:49]

Thomas Betts: Also, you said you’ve been brought in as a consultant, but is that one of those, here’s how we’re going to build the new system and here’s how we’re going to change? You have to look at the whole picture. It’s not just here’s how we write code on the new tech stack; it’s also we’re going to change some of how we do our day-to-day work. And that, again, takes business buy-in as well, to get new processes that maybe don’t have the bureaucracy they used to have.

Nick Tune: Yes, definitely, 100%. Those old processes can be very, very ingrained and difficult to change. And you get companies where you’ve got someone who’s responsible for this long, complex release process that involves manual sign-offs and approvals, and they’re like, well, in this vision you’re proposing, do I even have a job? So you encounter situations like that. There are all kinds of frictions and difficulties.

But to answer your point, I would say that’s definitely one of the things people don’t think about, aren’t aware of, or don’t anticipate upfront. So yes, we can modernize, we can change the organization structure, but in terms of how you prioritize work, how you estimate things, how you release things, all those discovery and delivery processes, sometimes they’re not really ready to change those things.

It can also be around budgeting and finance, which is always an interesting area. I worked with one company and they wanted to modernize and build this platform, but the platform team had no budget. The only way the platform team could get money to build stuff was if another team in the company asked them to build something and gave them some of their budget. So the platform team had no autonomy; they weren’t able to take a more long-term perspective. They were very much just an internal outsourcing provider.

They just got told what to build and how to build it, and the leadership of the company thought that was a good approach, a good idea. They didn’t see why that wouldn’t work, why that wasn’t effective. So yes, some things can be quite ingrained, whether it’s processes, compliance, bureaucracy, funding and finance budgets, all that kind of stuff. Yes, it touches almost everything really.

Creating personal relationships is key [27:59]

Thomas Betts: And so when you’re trying to do this, are there any key techniques? I mean, we mentioned Team Topologies. Are there other tools, techniques or processes that you would recommend, and what are the benefits of using those?

Nick Tune: To be honest, this might not sound like a great answer. I wasn’t always good at this. Maybe I’m still not good at this, but I think just have good relationships with people.

One thing I’ve learned over 15 years is that if you just build good relationships with people, you can sometimes solve a very hard problem just by going to lunch together. When I worked with one company, there was a head of IT who’d been there for quite a while. I was proposing new ideas and trying to be polite, but we didn’t have a great relationship. And then after six months someone said to me, “Oh yes, he’s been saying some stuff about how you did some good work together.” I’m like, “Oh, that just seemed like any other meeting I had with him.” But basically, after six months something changed in him where he was like, “I kind of trust that Nick guy now. He’s not so bad after all.”

And after that point, having some of the deeper, more difficult conversations became a lot easier. So I don’t know if it’s the answer you’re looking for, and there are no silver bullets, but if there were one, I would definitely say building good relationships with all the stakeholders involved, learning about what they’re working on, what their challenges are, what their problems are, just being someone that they trust.

And then when you do have these difficult moments, difficult conversations, at least they respect you and trust you on a personal level and might be willing to open up more with you, give you the benefit of the doubt, those kinds of human aspects.

Thomas Betts: Yes, nothing can beat having good, trusting relationships. If you aren’t trusted, especially in your case of coming in as a consultant or being new to an employer, and you’re saying, “I’m going to switch stuff up,” that can be jarring if they don’t say, “I trust him to be able to go and switch stuff up.”

Trade-offs when deciding to modernize architecture [29:47]

My final question: I wanted to end on kind of a curveball. Architects are familiar with making trade-offs. If we go down this modernization path, what are some of the things that we’d be giving up?

Nick Tune: You mean what would the architects be giving up?

Thomas Betts: Not just from an architecture perspective, but let’s say it’s the business deciding we’re going to prioritize this architecture modernization work for the next 18 months, two years. What are some of the trade-offs? They’re saying, we’re going to do this instead of what? What’s the trade-off?

Nick Tune: Oh yes, definitely. Well, this is the name of the game really. Modernization is about taking some short-term, I would like to say pain, but let’s say compromises. You are reducing the things you deliver now so that you can do more in the future. So I think it’s all about compromises. Maybe you don’t fix those bugs, maybe you don’t build those new features. Maybe you don’t expand into a new country, but you wait a year or two and then you can expand into four countries and operate at scale more effectively. So yes, I think the compromises usually come on the product side, usually not adding more features, I would say.

Thomas Betts: Yes, that’s usually the answer when it comes to tech debt: if I need to work on my tech debt, that means I’m not working on features. But this takes that to a whole other order of magnitude, potentially. We’re going to be doing a big tech debt rewrite, for lack of a better term, and the trade-off is that the business is not going to get stuff for a little while.

Nick Tune: Sometimes it’s not features. Sometimes it’s like whole new products or expanding into new countries. We’re going to put on hold this big business initiative, not just the features in the backlog, but something quite big and significant.

Thomas Betts: Not just one little bug, yes. So the trade-off is, we don’t get to do that big win, that big objective, right now, but you’re going to make it easier for me to do more of those in the future. Instead of one country now and then one country next year, if you wait two years, maybe I’ll give you five countries much more easily.

Nick Tune: And the thing is, a lot of the time it might look like you are compromising or sacrificing, but a lot of those projects might not have been feasible anyway. You see this sometimes: you get six months in or a year in and you’re like, “Well, trying to do this rollout to a new country on our existing architecture has just gone completely not as we expected. It’s more expensive, we’re hitting all these kinds of problems and edge cases, we’re getting more support tickets than we thought.” So you might feel like you’re compromising, but in some cases you’re not actually compromising.

Thomas Betts: Yes, I think someone pointed out to me one time, it’s not that if we go down path A versus path B, one is going to take a longer timeline to get there. It was, if we go down path A, we cannot get to our destination. We have to switch to path B.

Nick Tune: Absolutely. Yes. In some cases it’s just not possible to get there with the current system.

Thomas Betts: Well, I think that’s a great place to wrap things up. Nick, thanks again for talking with me today. If people want to follow you or see you, are you going to be at conferences or anywhere in the future? How can they get in touch with you?

Nick Tune: Yes, normally I do quite a few conferences. I’m doing a few fewer this year, so probably the best place is to follow me on LinkedIn. I post all my announcements there.

Thomas Betts: And then listeners, we hope you’ll join us again for another episode of the InfoQ podcast.
