Month: April 2023
Web Framework Astro Now Features Static, Server, and Hybrid Rendering for Faster Web Sites
MMS • Bruno Couriol
Article originally posted on InfoQ. Visit InfoQ
The HTML-first web framework Astro recently released version 2.0, which complements the previously available static and dynamic server rendering with new hybrid rendering capabilities. Hybrid rendering allows specific pages to be prerendered for faster performance.
The Astro web framework, which seeks to popularize a front-end architecture known as Island Architecture, explained the motivation for hybrid rendering as follows:
For almost a year now, Astro has let you choose between static (SSG) and server (SSR) build output. Static websites offer incredible performance, yet they lack the ability to generate HTML on-demand for each request.
Astro 2.0 brings together the best of both worlds with Hybrid Rendering.
Mixing static and dynamic content together unlocks new possibilities:
- Improve render performance of popular pages
- Improve build performance of large sites
- Add an API to your existing static site
In previous versions, Astro developers had to choose either static rendering (targeting static, content-dominated sites) or server-side rendering for all web pages. With hybrid rendering, developers can prerender specific pages or server endpoints at build time without giving up their deployed server.
Large sites often feature sections that are suitable for prerendering, while other sections require generating their content at request time. An e-commerce site may, for instance, benefit from prerendering its homepage and miscellaneous marketing-focused content, while product, pricing, or discount pages may continue to be server-rendered to incorporate the latest available data. The hybrid approach may reduce the volume of computing resources required to deliver web pages and, with that, the associated costs.
Other optimizations valuable for large sites in a Jamstack context include Incremental Static Regeneration as popularized by the application framework Next.js.
The new Astro release also includes redesigned error overlays, improves support for hot module reloading in development, and builds content with the newly released Vite 4.0.
Astro is a web framework primarily targeting optimal user experience for content-focused websites. To that end, Astro strives to send the minimal amount of JavaScript necessary to ensure interactive pages. For entirely static pages, no JavaScript is sent at all. Astro named the architecture it uses to achieve this aim Island Architecture. Web pages can be seen as divided into static HTML content interspersed with interactive UI components termed Astro islands. Islands render in isolation and can use any UI framework (e.g., React, Preact, Svelte, Vue, Solid, Lit).
Astro self-describes as “the all-in-one web framework designed for speed.” According to its own benchmark based on performance data measured in the wild (The Chrome User Experience Report (CrUX), The HTTP Archive, and the Core Web Vitals Technology Report), Astro often outperforms a selected set of compared web frameworks (SvelteKit, Gatsby, Remix, WordPress, Next, Nuxt).
Astro is an open-source project distributed under the MIT license. Contributions and feedback are welcome.
MMS • Michael Redlich
Article originally posted on InfoQ. Visit InfoQ
This week’s Java roundup for March 27th, 2023 features news from OpenJDK, JDK 21, GlassFish 7.0.3, Spring point and milestone releases, Payara Platform, Quarkus 3.0.CR1, Micronaut 3.8.8, WildFly 28 Beta1, Hibernate ORM 6.2, Groovy 4.0.11, Camel 3.20.3, James 3.7.4, Eclipse Vert.x 4.4.1, JHipster Quarkus Blueprint 2.0, JHipster Lite 0.30, JBang 0.106, Gradle 8.1-CR2 and the new Foojay.io calendar.
OpenJDK
The results of the 2023 Governing Board Election show that Andrew Haley, technical lead, Open Source Java at Red Hat, and Phil Race, consulting member of technical staff at Oracle, have been elected to the board to fill the two At-Large member seats. They will serve a term of one calendar year effective April 1, 2023. InfoQ will follow up with a more detailed news story.
JEP 444, Virtual Threads, was promoted from its JEP Draft 8303683 to Candidate status, then quickly promoted from Candidate to Proposed to Target status for JDK 21. This JEP proposes to finalize the feature based on feedback from the previous two rounds of preview: JEP 436, Virtual Threads (Second Preview), delivered in JDK 20; and JEP 425, Virtual Threads (Preview), delivered in JDK 19. The feature provides the Java platform with virtual threads, lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications. The most significant change from JEP 436 is that virtual threads now fully support thread-local variables by eliminating the option to opt out of using these variables. More details on JEP 425 may be found in this InfoQ news story and this JEP Café screencast by José Paumard, Java developer advocate, Java Platform Group at Oracle. The review is expected to conclude on April 7, 2023.
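For readers unfamiliar with the shape of the API, the following is a minimal sketch of the virtual threads API finalized by JEP 444; it requires JDK 21 (or JDK 19/20 with preview features enabled), and the class and task names are only illustrative:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {
    public static void main(String[] args) throws Exception {
        // Start a single virtual thread directly
        Thread vt = Thread.ofVirtual()
                .start(() -> System.out.println("Hello from " + Thread.currentThread()));
        vt.join();

        // Run many blocking tasks, one cheap virtual thread per task
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int id = i;
                executor.submit(() -> {
                    Thread.sleep(100); // blocking is fine; the carrier thread is released
                    return id;
                });
            }
        } // close() waits for submitted tasks to finish
    }
}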
Version 7.2 of jtreg, the Regression Test Harness for the JDK, has been released and is ready for integration into the JDK. The most significant new feature is the ability to run tests using virtual threads. Further details on this release may be found in the release notes.
JDK 21
Build 16 of the JDK 21 early-access builds was made available this past week featuring updates from Build 15 that include fixes to various issues. More details on this build may be found in the release notes.
Mark Reinhold, chief architect, Java Platform Group at Oracle, formally proposed the release schedule for JDK 21 as follows:
- Rampdown Phase One (fork from main line): June 8, 2023
- Rampdown Phase Two: July 20, 2023
- Initial Release Candidate: August 10, 2023
- Final Release Candidate: August 24, 2023
- General Availability: September 19, 2023
For JDK 21, developers are encouraged to report bugs via the Java Bug Database.
GlassFish
The release of GlassFish 7.0.3 ships with bug fixes, improvements in documentation and dependency upgrades such as: Mojarra 4.0.2, EclipseLink 4.0.1, Helidon Config 3.2.0 and ASM 9.5. Further details on this release may be found in the release notes.
Spring Framework
The Spring Integration team has announced that the Spring Integration Extension for Amazon Web Services (AWS), version 3.0.0-M2, and Spring Cloud Stream Binder for AWS Kinesis, version 4.0.0-M1, projects have been moved to the AWS Java SDK. Notable changes in each of these milestone releases include: AWS Java SDK 2.20.32, the latest version; a dependency upgrade to Spring Cloud AWS 3.0.0 with the new SQS listener API; a DynamoDbLockRegistry class, an implementation of the ExpirableLockRegistry and RenewableLockRegistry interfaces, to provide proper TTL support; and removal of XML configuration.
Spring Cloud 2022.0.2, codenamed Kilburn, has been released featuring updates to sub-projects such as: Spring Cloud Vault 4.0.1, Spring Cloud Kubernetes 3.0.2, Spring Cloud OpenFeign 4.0.2 and Spring Cloud Config 4.0.2. There are, however, breaking changes with the removal of sub-projects: Spring Cloud CLI, Spring Cloud for Cloud Foundry and Spring Cloud Sleuth. More details on this release may be found in the release notes.
The first release candidate of Spring Web Flow 3.0.0 delivers new features: a migration of Spring Faces to Spring Framework 6, Jakarta EE, and JSF 4; and an update of the JSF samples to Jakarta EE. Further details on this release may be found in the release notes.
Payara
Payara has released their March 2023 edition of the Payara Platform that includes Community Edition 6.2023.3, Enterprise Edition 5.49.0 and the formal release of Payara Enterprise 6.0. All of these editions now support Jakarta EE 10 and MicroProfile 6.0. It is important to note that a known issue is currently being investigated: when deploying an application that contains a Java Record, a warning is logged in the server logs about Records not being supported. The Payara team assures that an application will still deploy and operate as expected.
Community Edition 6.2023.3 delivers bug fixes, component upgrades and improvements such as: an update to the REST SSL alias extension for Payara 6; an upgrade of the cacerts.jks and keystore.jks certificates to PKCS#12; and the ability to configure all SameSite cookie attributes for an HTTP network listener. More details on this release may be found in the release notes.
Enterprise Edition 5.49.0 also ships with bug fixes, component upgrades and the same SameSite cookie improvement as noted in the Community Edition. Further details on this release may be found in the release notes.
The Payara team has also published CVE-2023-28462, a vulnerability affecting server environments running on JDK 8 on updates lower than version 1.8u191. This vulnerability allows a remote attacker to load malicious code into a public-facing Payara Server installation using remote JNDI access via unsecured object request broker (ORB) listeners. Developers are encouraged to install a version of JDK 8 higher than 1.8u191.
Quarkus
After six alpha releases and one beta release, the first release candidate of Quarkus 3.0.0 was made available to the Java community this past week. New features include: an initial version of the non-application root path endpoint, /q/info; the use of SmallRye BeanBag to initialize the Maven RepositorySystem interface for compatibility with Maven 3.9; and a new plugin mechanism for the Quarkus CLI. More details on this release may be found in the release notes.
Micronaut
The Micronaut Foundation has released Micronaut Framework 3.8.8 featuring bug fixes and updates to modules: Micronaut Data, Micronaut Views, Micronaut OpenAPI, Micronaut Security and Micronaut Maven Plugin. There was also a dependency upgrade to Netty 4.1.90. Further details on this release may be found in the release notes.
WildFly
The first beta release of WildFly 28 delivers new features such as: support for Micrometer, including integration of Micrometer with WildFly’s implementation of the MicroProfile Fault Tolerance specification; and support for the MicroProfile Telemetry and MicroProfile Long Running Actions (LRA) specifications. Support for the MicroProfile Metrics and MicroProfile OpenTracing specifications has been removed. More details on this release may be found in the release notes.
Hibernate
After four release candidates, the formal release of Hibernate ORM 6.2 delivers support for: structured SQL types; Java records; unified generated persisted values; database partitions; proprietary SQL types; and the ability to use the SQL MERGE command to handle updates against optional tables.
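As an illustration of the Java records support, a record may be mapped as an embeddable value type. The sketch below uses standard Jakarta Persistence annotations; the entity and field names are hypothetical, and the mapping assumes Hibernate 6.2’s out-of-the-box handling of records as embeddables:

import jakarta.persistence.Embeddable;
import jakarta.persistence.Embedded;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;

// A Java record used as an embeddable value type
@Embeddable
public record Address(String street, String city, String postalCode) {}

@Entity
class Customer {
    @Id
    Long id;

    @Embedded
    Address address; // persisted as flattened columns on the customer table
}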
Apache Software Foundation
Paul King, principal software engineer at Object Computing, Inc., director at ASERT and vice president at Apache Groovy, has announced three point releases of Apache Groovy as described below. Developers should expect fewer point releases in the 3.0 and 2.5 release trains as the team will be focusing on Groovy 5.0.
Version 4.0.11 delivers bug fixes and new features such as: new methods, asReversed() and reverseEach(), that will map directly to the descendingSet() and descendingIterator() methods, respectively, defined in the NavigableSet interface; a dependency upgrade to ASM 9.5; and a new constant for JDK 21. Further details on this release may be found in the changelog.
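For context, the JDK methods that these new Groovy extension methods map to come from the standard NavigableSet API; the plain-Java sketch below shows what the underlying calls do (the Groovy method names in the comments are taken from the release notes, the rest is illustrative):

import java.util.List;
import java.util.NavigableSet;
import java.util.TreeSet;

public class NavigableSetDemo {
    public static void main(String[] args) {
        NavigableSet<Integer> numbers = new TreeSet<>(List.of(1, 2, 3, 4, 5));

        // descendingSet() returns a reverse-order view of the set
        // (the target of Groovy's new asReversed() extension method)
        System.out.println(numbers.descendingSet()); // prints [5, 4, 3, 2, 1]

        // descendingIterator() walks the set in reverse order
        // (the target of Groovy's new reverseEach() extension method)
        numbers.descendingIterator().forEachRemaining(System.out::println);
    }
}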
Version 3.0.17 features bug fixes, improvements in documentation and a dependency upgrade to ASM 9.5. More details on this release may be found in the changelog.
Similarly, version 2.5.22 features bug fixes, improvements in documentation and a dependency upgrade to ASM 9.5. Further details on this release may be found in the changelog.
The release of Apache Camel 3.20.3 provides bug fixes, dependency upgrades and new features/improvements such as: health checks for components that have an extension for connectivity verification (camel-health); a user configuration file in the camel-jbang component; and favoring instances of the CompositeMeterRegistry class in the Camel Registry API. More details on this release may be found in the release notes.
The release of Apache James 3.7.4 addresses CVE-2023-26269, Privilege Escalation through Unauthenticated JMX, a vulnerability in which versions of Apache James Server 3.7.3 and earlier provide a JMX management service without authentication by default, allowing an attacker to achieve privilege escalation. Further details on this release may be found in the release notes.
Eclipse Vert.x
Eclipse Vert.x 4.4.1 has been released with bug fixes and dependency upgrades to GraphQL-Java 20.1, Netty 4.1.90, SnakeYAML 2.0, Micrometer 1.10.5 and Apache Qpid Proton-J 0.34.1. More details on this release may be found in the release notes and deprecations and breaking changes.
JHipster
The JHipster team has released version 2.0.0 of the JHipster Quarkus Blueprint with notable changes such as: a fix for the OIDC settings in the production profile; updated blueprint dependencies and an upgrade of Quarkus to 2.16.2; fixes for Keycloak authorization and the Cypress tests; and a fix for the SQL Docker images. Further details on this release may be found in the release notes.
The JHipster team has also released JHipster Lite 0.30.0, which features bug fixes, dependency upgrades and enhancements such as: the removal of a duplicated JSON Web Token dependency; a new getUsername() method in the ApplicationAuthorizations class; and a fix for Angular OAuth2 with Keycloak. More details on this release may be found in the release notes.
JBang
Versions 0.106.0 and 0.106.1 of JBang introduce support for the use of GPT in the jbang init command by calling the ChatGPT API to initialize and create a jbang script that attempts to execute what is expressed on the command line. Further details on this new feature may be found in this YouTube video, and InfoQ will follow up with a more detailed news story.
Gradle
The second release candidate of Gradle 8.1 provides: continued improvements to the configuration cache; support for dependency verification; improved error reporting for Groovy closures; support for Java lambdas; and support for building projects with JDK 20. More details on this release may be found in the release notes.
Foojay.io
Foojay.io, the Friends of OpenJDK resource for Java developers, has provided its Java community calendar for developers to view and add events. The calendar is open for adding content without the need for a special account, and the content is moderated.
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
Interview: Couchbase is a JSON document database with global users including PayPal, eBay, and travel distribution system Amadeus, companies with a combined revenue of nearly $40 billion. Its total annual income was $154.8 million, so why is such a small database outfit trusted by big biz?
We caught up with CTO Ravi Mayuram to find out about the history of the system which became a leader in the NoSQL movement. You can view the video below.
Mayuram argued that since relational systems were first designed in the 1970s, the IT landscape has changed. “Networks are faster, memory is cheaper, and there is a cloud-based consumption model. These were not available when the relational systems were built, so they are showing their age. We wanted to build something that would stand the test of time, just like relational systems did,” he said.
One of the significant departures Couchbase took from the relational approach to databases was to avoid designing the data schema up front. This move has attracted criticism from some database experts.
“Having the schema is when you really pay the price. If you have a schema, data is in a solid brick form. Whenever you want to move that, you have to sort of re-cast that into a new schema when you go from system to system,” he said.
Instead, Couchbase employs something called a late-binding schema, which is not tied to the underlying database objects that it references.
“You always need a schema — without that there is no structure — but a schemaless database removes that friction between front end and back end. There is no separate application schema and database schema; there is only one schema which is what the application defines,” he said.
But from Couchbase 7.0, it began to look a bit more like a relational database by offering multi-statement SQL transactions, which cover what relational systems can offer from the standpoint of OLTP guarantees while also preserving performance, scale, and flexibility, the company argued.
In the video, Mayuram talks listeners through these design choices and elaborates on how they change the role of developers and DBAs within tech teams. ®
MMS • Franklyne Omondi
Article originally posted on InfoQ. Visit InfoQ
Key Takeaways
- Since security professionals are scarce and outnumbered by developers, automation and DevSecOps practices are key to building secure software.
- Security is not an afterthought: DevSecOps emphasizes the importance of integrating security into every stage of the development process.
- Collaboration is key: DevSecOps requires collaboration between development, operations, and security teams to ensure security is considered throughout the development process.
- Software supply chain compromise is an emerging security issue that must be addressed.
- State-sponsored actors have added complexity to the ever-evolving threat landscape; now more than ever, organizations need to ensure security at every stage of the SDLC, and DevSecOps practices go a long way toward meeting this requirement.
I have now been working in cybersecurity for about two years and the lessons have been immense. During this time, my morning routine has largely remained unchanged: wake up, get some coffee, open my Spotify app, and get my daily dose of CyberWire Daily news. Credit where due, Dave Bittner and his team have done an amazing job with the show. The one thing that has remained constant over the years is the cyber attacks, the data breaches, and the massive thefts and sale of personally identifiable information of many people on the dark web.
We continue to hear the outcry from the cybersecurity world about the acute shortage of cyber talent. Many organizations are putting in place initiatives to try and fill this gap and even train and retain cybersecurity talent. The truth, though, is that developers outnumber security professionals tremendously. So how did we get here? And more importantly, what is the industry doing to address these cybersecurity concerns of their platforms and products?
Historically, in many organizations, security has been treated as an afterthought. It has always been one of those checklist items at the end of development, given the least priority and effort. The development team works on their software together with the operations team for deployment and maintenance. This was mostly successful and became widely known as the DevOps process: a collaboration between the developers and the operations teams, working side by side to build, ship, and maintain software. However, security was not often a priority with DevOps. In addition, software development, deployment, and maintenance have continually increased in complexity at scale.
Nowadays, security can no longer be an afterthought, and this has become the general consensus among most professionals in the technology space. In fact, Hackerone has noted that fixing security defects in production is much more expensive than in development. It is becoming a standard practice in the SDLC to ensure security is a consideration during development.
Security and privacy are, more than ever, necessary components of all software. This is a challenge as it is not easy to change age-old processes overnight. But the change is needed as security incidents grow in volume and data breaches continue to be extremely expensive for organizations. A lot of work is being put in place to ensure security is ingrained in all products right from ideation. This is commonly referred to as shifting security left or DevSecOps.
Most companies are committed to churning out secure software to their customers. Currently, the speed at which new software is produced is lightning-fast. In many settings, developers are pushing new updates to products at hourly intervals. There needs to be an intricate balance between ensuring that the speed of innovation is not slowed down and that the security of products is not compromised. In recent years, there has been a huge push, even by governments, for better security in the SDLC processes. This has given rise to new practices in application and software supply chain security.
The Challenges
What are some of the challenges currently with how everything is set up? If organizations are committed to security and privacy, why are they still developing vulnerable software? This is not a question with a single answer. There are a lot of possibilities, but for the sake of this article, I will focus on a few points that most people tend to agree with.
The “Cybersecurity Tech Talent Crisis”
There have been reports and never-ending discussions about the shortage of experienced cybersecurity professionals. This is a contributing factor to the security challenges experienced by most organizations.
Advanced Persistent Threats
Nation-state-sponsored threat actors have become a normal thing in the recent past. These sophisticated actors continue to wreak havoc stealthily, breaching defenses and causing chaos along the way.
Attribution largely remains a complex topic, but most security researchers have pointed fingers at certain Asian states and Eastern European state actors. Andy Greenberg’s book Sandworm provides great details and insights into this with first-hand accounts from Robert M. Lee, a renowned Industrial control systems security engineer.
Ransomware Attacks
In the past couple of years, there has been an increase in ransomware attacks on various institutions and infrastructure. The crooks break into networks, encrypt data, and demand colossal amounts of money in order to provide the decryption keys. The Conti ransomware gang is perhaps the most prolific actor out there, affiliated with some high-profile and particularly damaging attacks.
Cloud Security Risks
Most of us probably still remember the Capital One hack. Cloud misconfigurations continue to present organizations with a huge security challenge. The cloud has been described by some as the wild west when proper guardrails are not put in place.
Software Supply Chain Attacks
This topic has attracted a lot of attention lately. Some of the common ways employed by bad actors to carry out these attacks include: compromising software build tools and/or the infrastructure used for updates, as in the case of SolarWinds, which is widely believed to have been conducted via a compromised FTP server; compromised code being baked into hardware or firmware; and stolen code-signing certificates, which are then used to sneak malicious applications into download stores and package repositories.
DevSecOps and Software Supply Chain Security to the Rescue?
Is DevSecOps the answer to all the security challenges? Sadly, the answer is no; there is no single magic bullet that addresses all the security challenges faced by different organizations. However, it is generally agreed that DevSecOps practices and securing the software supply chain play a pivotal role in greatly reducing software and application security risks.
DevSecOps advocates for knitting security into the software development process, from inception all the way to release. Security is considered at each stage of the development cycle. A lot of initiatives have been put in place to help drive this adoption. My two favorites, which are also big within the organization that I work for, are:
- Automation of Security Testing: Building and automating security testing within CI/CD pipelines greatly reduces the human effort needed to review each code change. DAST (dynamic application security testing), SAST (static application security testing), IAST (interactive application security testing), and even SCA (software composition analysis) scans are pivotal in modern-day security scanning within pipelines. In most cases, if any security defect is detected, that particular build fails, and the deployment of potentially vulnerable software is subsequently stopped.
- Security Champions: This is by far my favorite practice. Perhaps it has different names in different organizations. However, training developers to have a security mindset has turned out to be highly beneficial, especially in helping to address the cybersecurity talent shortage problem.
In general, to implement DevSecOps, organizations often adopt a set of best practices that promote security throughout the software development lifecycle. Some key practices are summarized below:
Security as Code
Security should be integrated into the codebase and treated as code, with security policies and controls written as code and stored in version control. Code-driven configuration management tools like Puppet, Chef, and Ansible make it easy to set up standardized configurations across hundreds of servers. This is made possible by using common templates, minimizing the risk that bad actors can exploit one unpatched server. Further, this minimizes any differences between production, test, and development environments.
All of the configuration information for the managed environments is visible in a central repository and under version control. This means that when a vulnerability is reported in a software component, it is easy to identify which systems need to be patched. And it is easy to push patches out, too.
I was once part of a team that was tasked with ensuring cloud operational excellence. At the time, there was a problem with how virtual machines were spun up in the cloud environment. We needed a way to control which type and version of AMIs could be launched in the environment. Our solution was to create golden images according to the CIS Benchmark standards. We then implemented policies to restrict the creation of virtual machines to only our specific, golden Amazon Machine Images.
In the end, we had EC2 instances that adhered to best practices. To further lock things down, we created Amazon CloudFormation templates, complete with the policies to be attached to the VMs. To achieve our goal, we leveraged these CloudFormation templates along with Service Control Policies (SCPs) to implement a standard and secure way of creating VMs. The point that I am trying to drive home with this story is that code-driven configuration can be used to ensure automated, standard, and secure infrastructure deployment within environments.
Automated Security Testing
Automated security testing should be performed as part of the continuous integration and deployment (CI/CD) pipeline, with tests for vulnerabilities, code quality, and compliance. Using such tools, it is often possible to set up and run automated pentests against a system as part of the automated test cycle.
During my stint as a cybersecurity engineer, I have been involved in setting up these tools within a pipeline whose sole purpose was to conduct automated fuzzing and scans on applications to search for any known OWASP Top 10 vulnerabilities. A popular open-source tool that can be used to achieve similar results is OWASP ZAP.
At the time, this had two immediate results: the developers became more aware of the OWASP Top 10 vulnerabilities and actively tried to address them during development. On top of that, some of the routine and mundane tasks were taken off the plate of the already stretched application security team. This is a simple example of just how automated testing can go a long way in addressing the software security challenges we currently face.
Continuous Monitoring
Continuous monitoring of applications and infrastructure is essential to detect and respond to security threats in real-time. A popular design pattern is the implementation of centralized logging within an environment. Logs from networks, infrastructure, and applications are collected into a single location for storage and analysis. This can provide teams with a consolidated view of all activity across the network, making it easier to detect, identify and respond to events proactively.
This article goes into great detail on some of the freely available solutions that are often used to implement centralized logging. Logging forms the foundation upon which metrics for monitoring can be set within an environment. The Target Breach from 2013 has often been used as a case study as to why proper investment in logging and monitoring is crucial.
Collaboration
Collaboration between development, operations, and security teams is critical to ensure security is considered throughout the development process. A lot of companies have embraced the agile way of working in a bid to give teams flexibility during software development, which further fosters collaboration within the teams.
DevSecOps is primarily intended to avoid slowing down the delivery pipeline. The DevSecOps methodology, being an evolution of DevOps, advocates for application development, release, and security teams to work together more efficiently, with more cooperation, in order to deliver applications with higher velocity while adhering to security requirements. The end goal is to achieve delivery at scale with speed while ensuring collaboration and the free flow of ideas among the different teams involved.
Training and Awareness
Security training and awareness programs should be provided to all team members to ensure everyone understands their role in keeping software secure. In my first job in cybersecurity, one of my key tasks was to implement a culture of secure coding within the organization. As a result, we decided to embark on massive training and awareness for the developers within the company. Our reasoning behind this decision was that if the simulated phishing tests carried out within most organizations usually work, then the same concept could be applied to the developers: train their eyes to spot the common problems.
This was quite the task, but in the end, we had two avenues to implement this. The first was to use commercial software that integrates with IDEs and scans code as the developers write, providing them with suggestions to address any security defects in the code. This was a huge success.
The second thing we did was implement regular training for the developers. Every fortnight, I would choose a topic, prepare some slides, and perform a demo on how to leverage different security misconfigurations to compromise infrastructure. The two tools that were vital in achieving this were PortSwigger’s Web Security Academy and Kontra, which provided a lot of practice on API and web security misconfigurations.
At the time of my departure from the organization, this was a routine event and developers were more aware of certain common security misconfigurations. We also leveraged Capture the Flag events and provided some incentives to the winning teams to keep everyone motivated. This was one of the most successful initiatives I have undertaken in my career, and it was a win for both parties: the developers gained crucial knowledge, and the security team had some work taken off their plates.
Preventing Software Supply Chain Attacks
Any time I come across these words, my mind automatically goes back to the 2020 SolarWinds attack and the 2021 Log4j vulnerability. Most people in the security world are quite familiar with these two, not only because of the damage they caused but also due to the fact that they struck right around the Christmas holidays! The conversations around software supply chain security gained a lot of traction with these two incidents.
There was already much talk about the software supply chain, but there was little traction in terms of actually putting in some measures to address this problem. The whole frenzy that was caused by the Log4j vulnerability seems to have been the push that was needed for organizations to act. The US National Security Agency has been giving continuous advisories to developers and organizations on how to better address this problem. The problem with software supply chain compromise is not one that will be addressed overnight. To date, we still see reports of malicious Python or JavaScript packages, and even malicious applications finding their way to the Google Play Store.
I came across one of the simplest analogies describing software supply chains as I was reading blogs and attempting to gain more understanding about this topic, back in 2020. The author of the blog compared software supply chain attacks to the ancient kingdoms, where the enemy soldiers would poison the common water well, rendering every person in that village basically weak and unable to fight.
This is quite accurate with respect to supply chain attacks. Instead of compromising many different targets, an attacker only has to taint the one common thing that many unknowing victims use. This makes an attacker’s work quite easy thereafter, and this is exactly what happened with SolarWinds, and later with the Log4j vulnerability. This is obviously extremely dangerous given how software is currently made: there is a huge dependence on open-source libraries and packages.
This post by John P. Mello Jr. provides ten high-profile software supply chain attacks we can learn from, including the one staged on Okta in 2022. From the blog post, the compromises of npm and the Python Package Index (PyPI) alone affected over 700,000 customers. In this case, the attackers did not have to compromise each of the seven hundred thousand individual victims; they simply found a way to tamper with third-party software building components and enjoyed their loot. As in the poisoned well analogy, the consequences are felt downstream by anyone who uses the compromised packages. Third-party risk assessment is now a whole thing in most organizations; however, this is still no defense against the poisoned well.
The big question is, how secure are these third-party libraries and packages? This is a topic that deserves its own article; there is simply so much to cover. With the problem highlighted, what are governments and private organizations doing to prevent these kinds of attacks from happening?
- Zero Trust Security Architectures (ZTA): This concept encourages us to always assume a breach is present and act as if the attackers are already in our environment. Cisco Systems and Microsoft have some amazing products for implementing ZTA through passwordless authentication and continuous monitoring of authenticated users.
- More organizations are opting to conduct regular third-party risk assessments. There have been cases in which certain organizations have been compromised by first compromising a less secure vendor or contractor, as was the case in the famous Target breach.
- Enhanced security in build and update infrastructure should definitely be at the top of the list. If properly implemented, attackers would technically not be able to tamper with and deliver vulnerable software or software updates to downstream clients.
- Proper asset inventory, together with a comprehensive software bill of materials (SBOM). When Log4j hit back in 2021, many of us were in much bigger trouble because we had no idea where to look in our environments, as we didn’t have an accurate SBOM for the various applications we were running. It is crucial to maintain this inventory because it’s impossible to defend and investigate what you do not know.
DevSecOps plays a huge role in ensuring secure software development, and this is becoming more apparent every day. Ensuring that software build materials remain safe is also key to preventing most software supply chain attacks. It remains to be seen how the threat landscape evolves; for now, however, we must focus on getting the basics right.
As cyber threats continue to evolve, it is essential to integrate security into the software development process. DevSecOps is a culture shift that promotes collaboration, shared responsibility, and continuous improvement, with security integrated into every stage of the development process. By adopting DevSecOps best practices, organizations can build more secure software faster and reduce the risk of security breaches, while also improving collaboration and reducing costs.
MMS • Sergio De Simone
Article originally posted on InfoQ. Visit InfoQ
The latest release of Swift introduces support for piecemeal adoption of upcoming features, which allows developers to start using new features that will become stable in Swift 6. Additionally, it opens the way for making new features retroactively available in earlier OSes.
The main reason for Swift 5.8 to support upcoming language features is to allow developers to start preparing for the migration of their programs. This is especially relevant given the number of Swift 6 features that bring some level of source incompatibility, says Swift team member Alexander Sandberg. Additionally, the new feature may help Apple gather feedback from early adopters.
Upcoming feature support is controlled by a new compiler flag, -enable-upcoming-feature X, where X is the feature to enable. At the moment, there are four upcoming features that can be selectively enabled in Swift 5.8: concise magic file names, forward-scan matching for trailing closures, existential any, and regex literals.
To make sure an upcoming feature is actually available before using it, a new #if check is available, #if hasFeature(ImplicitOpenExistentials), which can be used along with a compiler(>=x.y) check, in case it is needed.
Swift 5.8 also introduces support for a @backDeployed attribute aimed at making it easier to backport new capabilities to older versions of a framework. For example, a new capability can be added through an extension and annotated with both the well-known @available and the new @backDeployed attributes:
extension FrameworkAPI {
  @available(FrameworkAPIVersion 1.0, *)
  @backDeployed(before: FrameworkAPIVersion 2.0)
  public func newCapability(...) -> ResultType { ... }
}
In the provided example, the newCapability function is natively available only from version 2.0 of FrameworkAPI, but using the @backDeployed attribute, developers can provide an implementation of that capability that can be injected into previous versions of the framework.
This new feature is meant to make it easier for developers to create resilient libraries and can only be applied to functions, methods, subscripts, and computed properties. This implies, for example, that new types cannot be supported using this mechanism. Additionally, the bodies of back-deployed functions must conform to the same restrictions as @inlinable functions, e.g., they can only reference declarations that are accessible to the client, such as public and @usableFromInline declarations.
Another area where Swift 5.8 brings significant change is result builder implementation, which improves compile-time performance, code completion results, and diagnostics. Specifically, the new implementation leverages Swift 5.7 extended multi-statement closure inference, which enables the compiler to optimize type inference and error messages. In particular, developers will welcome the removal of several limitations on local variable declarations in result builders, such as the requirement to have an initializer, and lack of support for computed variables, observers, and property wrappers.
There are many more changes in Swift 5.8 than can be covered here, so do not miss the official release announcement for the full details.
MMS • Alen Genzic
Article originally posted on InfoQ. Visit InfoQ
Entity Framework Core 8 Preview 2 was released on March 14th. The most notable feature in EF Core 8 preview 2 is support for SQL Server hierarchical data.
The EntityFrameworkCore.SqlServer.HierarchyId package has been an unofficial way to use hierarchical data with Entity Framework and has been available for a few years already (version 1.0.0 of the package has been available since April 2020). In EF Core 8 Preview 2, however, this feature has an official implementation, which is based on this community package. The new official package is Microsoft.EntityFrameworkCore.SqlServer.HierarchyId.
HierarchyId is enabled by installing the aforementioned package and adding the following code to your application startup code:
options.UseSqlServer(
connectionString,
x => x.UseHierarchyId());
After installing and configuring the HierarchyId feature, you can use it to represent hierarchical data such as organizational structures, folder structures and web site page tree structures.
On the entity type itself, the type is used as any other property type:
public HierarchyId NodePath { get; set; }
The type represents an entity’s path in a tree structure. For example, for a node with the path /1/2/3:
- / is the root of the tree,
- 1 is the grandparent
- 2 is the parent of the node
- 3 is the identifier for the node itself
Taking this same example, the paths /1/2/4 and /1/2/5 are siblings of our initial example node, whilst /1/3 and /1/4 are siblings of the node’s parent. Nodes can also be inserted between two other nodes by using decimal values. The node /1/3.5 is between the nodes /1/3 and /1/4.
Whilst this format is human readable in code, SQL Server itself uses a compact binary format to store this identifier (varbinary).
The type also has some limitations in SQL Server directly:
- A SQL Server hierarchyId doesn’t by itself represent a tree structure. It is the responsibility of the application to assign hierarchyId values so that the relationships between rows in a table represent a tree
- There is no guarantee that a hierarchyId will be unique, thus it is also the application’s responsibility to ensure appropriate concurrency control
- There is no foreign key constraint on the values of hierarchyId. For example, the application should make sure all descendants of a node in the hierarchy have their hierarchyIds updated upon deletion of their parent.
EF Core 7 introduced JSON Columns support for SQL Server; in this version, that feature is extended to support SQLite databases as well.
EF8 previews can currently be used with .NET 6 (LTS) and .NET 7. The EF8 release is aligned with .NET 8, the next LTS version of .NET, scheduled for release in November 2023.
MMS • Roland Meertens
Article originally posted on InfoQ. Visit InfoQ
At the recent QCon London conference, Hien Luu, Senior Engineering Manager for the Machine Learning Platform at DoorDash, delivered a talk on “Strategies and Principles to Scale and Evolve MLOps at DoorDash.” Luu shared insights on how to overcome the challenge of ML systems not adding value to production, which is a common issue faced by companies. In his talk, Luu outlined three key principles that have proven effective in addressing this challenge at DoorDash.
Luu started his talk by highlighting that according to Gartner, 85% of Machine Learning (ML) projects fail. This is largely caused by them not hitting production, and he emphasized that ML models have zero Return on Investment (ROI) until they are in production. MLOps should be seen as an engineering discipline, and adopting the right strategies is crucial for building a successful MLOps infrastructure from scratch.
The AI Infrastructure Alliance provides a helpful blueprint for successful MLOps, which is based on four key factors: use cases, culture, technology, and people.
Focusing on the use case, or “the game you are playing,” is essential for understanding the critical aspects of your ML project. Collaborating with stakeholders and decision-makers is crucial in this process. It might also involve dealing with issues such as fairness, biases, and explainability of the predictions. It is also important to understand the company culture. Companies may differ in their approach to projects, either being innovative, collaborative, results-driven, traditional, customer-focused, or inclusive. Identifying the expectations for progress and adapting to the company culture enables smoother implementation of MLOps strategies.
Naturally, the technology used plays a significant role in MLOps, and it is essential to identify any hidden tech debt surrounding the systems. Assessing the maturity level of all dependencies helps in making informed decisions. This should be combined with the previous point, where people and technology are ideally aligned to reach the desired impact. Involving stakeholders in the infrastructure planning and maintaining effective communication patterns ensures that the MLOps strategy aligns with the organization’s needs and goals.
Hien Luu shared three core principles at DoorDash that have been instrumental in scaling and evolving MLOps, ensuring the success of ML systems in production. These principles are “Dream Big, Start Small,” “1% Better Every Day,” and “Customer Obsession.” Each principle highlights a specific approach that has driven DoorDash’s MLOps success.
1. Dream Big, Start Small: This principle emphasizes the need for a clear vision and ambitious goals, while also focusing on progress and impact through incremental improvements. By starting small, companies can make steady progress and achieve their grand vision over time.
2. 1% Better Every Day: Luu shared a real-world example of how DoorDash has embraced this principle. They decided to adopt Redis for feature storage and moved from storing each attribute separately to storing each piece of information as a JSON string, forming a whole profile. They developed a method to minimize the amount of encoded bits in the key and value, resulting in reduced CPU time and memory usage. This led to a 3x cost reduction and a 38% latency reduction. Their experience is documented in a blog post titled “Building a Gigascale ML Feature Store with Redis”. Constantly striving for small improvements each day can lead to significant overall enhancements in the MLOps infrastructure.
3. Customer Obsession: This principle stresses the importance of not only listening to your customers but also inventing on their behalf. DoorDash believes in delighting customers with “French fry moments,” which refers to surprising customers with something they don’t expect. By being genuinely obsessed with customer satisfaction, companies can create MLOps strategies and systems that truly cater to their users’ needs and improve their overall experience.
During Hien Luu’s talk, attendees raised questions regarding the use of existing tooling and aligning stakeholders. Luu recommended considering existing solutions before building a custom tool, citing examples such as Triton or Bento. As for aligning stakeholders, Luu emphasized the importance of understanding their goals and the desired impact on the company.
In conclusion, Hien Luu shared three principles at the QCon London conference to scale and evolve your MLOps projects. By following the three principles of “Dream Big, Start Small,” “1% Better Every Day,” and “Customer Obsession,” as well as considering existing tools and aligning stakeholders, companies can significantly enhance the success of their ML systems in production.
MMS • Roland Meertens
Article originally posted on InfoQ. Visit InfoQ
At the QCon London conference, Mehrnoosh Sameki, principal product manager at Microsoft, delivered a talk on “Responsible AI: From Principle to Practice“. She outlined six key principles for responsible AI, detailed the four essential building blocks for implementing these principles, and introduced the audience to useful tools such as Fairlearn, InterpretML, and the Responsible AI dashboard.
Mehrnoosh Sameki opted for the term “Responsible AI” over other alternatives such as “Ethical AI” and “Trusted AI”. She believes that Responsible AI embodies a more holistic and proactive approach that is widely shared among the community. Those discussing this field should demonstrate empathy, humility, and a helpful attitude. As the AI landscape is currently evolving at a rapid pace, with companies accelerating the adoption of AI technologies, societal expectations will shift and regulations will emerge. It is thus becoming a best practice to give individuals the right to inquire about the rationale behind AI-driven decisions.
Mehrnoosh outlined Microsoft’s Responsible AI principles, which are based on six fundamental aspects:
1. Fairness
2. Reliability and safety
3. Privacy and security
4. Inclusiveness
5. Transparency
6. Accountability
She also outlined four building blocks she deemed essential to effectively implement these principles: “tools and processes”, “training and practices”, “rules”, and “governance”. In the presentation, she mostly talked about the tools, processes, and practices around responsible AI.
The importance of fairness can be best understood through the potential harms it prevents. Examples of such harms include different qualities of service for various groups of people, such as varying performance for genders in voice recognition systems or considering skin tone when determining loan eligibility. It is crucial to evaluate the possibility of these harms and understand their implications. To address fairness, Microsoft developed Fairlearn, a tool that enables assessment through evaluation metrics and visualizations, as well as mitigation using fairness criteria and algorithms.
InterpretML is another useful tool aimed at understanding and debugging AI algorithms. It covers both glassbox models, such as explainable boosting machines, and so-called “opaquebox” explanations. This allows users to see through model predictions and determine the top-k factors impacting them. InterpretML also offers counterfactuals as a powerful debugging tool, enabling users to ask questions like, “What can I do to get a different outcome from the AI?”. Counterfactuals give a machine learning engineer insight into how far away certain samples are from the decision border and which features are most likely to “flip” a decision. For example, an outcome could be that samples where the gender feature is switched suddenly get a different prediction, which could indicate an unwanted bias in your model.
Mehrnoosh also gave a demo of Microsoft’s Responsible AI dashboard. The analysis of errors in predictions is vital for ensuring reliability and safety. The tool provides insights into the various factors leading to errors, and allows you to create cohorts to dive deeper into causes of bias and errors.
Mehrnoosh Sameki also discussed the potential dangers associated with large language models, specifically in the context of Responsible AI for Generative AI, such as GPT-3, which is used for zero-shot, one-shot, and few-shot learning. Some considerations for responsible AI in this context include:
1. Discrimination, hate speech, and exclusion. It is easy to let models generate such content automatically.
2. Hallucination: the generation of unintentional misinformation. Models generate text; they are not knowledge engines.
3. Information hazards. It is possible for models to leak information in an unintended way.
4. Malicious use by bad actors to automatically generate text.
5. Environmental and socioeconomic harms.
To address these challenges, Sameki proposed several solutions and predictions for improving AI-generated output:
1. Provide clearer instructions to the model. This is something which individuals should do.
2. Break complex tasks into simpler subtasks
3. Structure instructions to keep the model focused on the task
4. Prompt the model to explain its reasoning before answering
5. Request justifications for multiple possible answers and synthesize them
6. Generate numerous outputs and use the model to select the best one
7. Fine-tune custom models to maximize performance and align with responsible AI practices
To explore Mehrnoosh Sameki’s work on Responsible AI, consider visiting the following resources:
Microsoft’s Responsible AI Dashboard. This impressive tool allows users to visualize the different factors that contribute to errors in AI systems.
Responsible AI Mitigations Library and Responsible AI Tracker. These newly launched open-source tools provide guidance on mitigating potential risks and tracking progress in the development of Responsible AI.
Fairlearn. This toolkit helps assess and improve fairness in AI systems, providing both evaluation metrics and visualization capabilities as well as mitigation algorithms.
InterpretML. This tool aims to make machine learning models more understandable and explainable, offering insights and debugging capabilities for both glassbox models and opaquebox explainers.
Microsoft’s Responsible AI Guidelines
Last but not least: her talk Responsible AI: From Principle to Practice
MMS • Tobi Ajila Thomas Watson
Article originally posted on InfoQ. Visit InfoQ
Key Takeaways
- Fast startup time is essential to the success of your cloud-native strategy.
- While there are different solutions in the JVM space to improve startup time, InstantOn is the only one that provides fast startup without compromising your applications.
- InstantOn is based on checkpoint/restore technology, which brings some challenges with regard to the checkpoint and restore environments. InstantOn addresses these challenges, offering a seamless experience for developers deploying new and existing applications.
- InstantOn integrates with container-based technologies.
A shift to cloud-native computing has been on the minds of many developers in recent years so that their business applications can benefit from reduced IT infrastructure costs, increased scalability, and more. Scale-to-zero is the standard provisioning policy when it comes to deploying applications on the cloud in order to save costs when demand is low. As demand increases, more instances of the application, and runtime, are provisioned; this scaling out must happen very quickly so that end users don’t experience a lag in response times. The startup time of your runtime can play a big part in scale-out performance.
Open Liberty is a cloud-native Java runtime and, like other Java runtimes, is built on JVM technology. The performance, debugging capabilities, and class libraries that the JVM (more broadly, the whole JDK) offers make it a compelling technology to base your applications on. Although JVMs are known for excellent throughput, their startup time lags behind statically compiled languages like Go and C++. Given the requirements of scale-to-zero, significantly improving startup time has been a key area of innovation for all JVM implementations for many years. Metadata caching techniques such as AppCDS (HotSpot) and Shared Classes Cache (Eclipse OpenJ9) have shown impressive improvements in startup time but don’t achieve the order-of-magnitude start time reduction required in scale-out scenarios such as serverless computing.
Compiling to a native image to reduce startup time
Graal Native Image gained attention when it was announced that it can achieve sub-100ms startup times with its compile-to-native approach. This was a significant shift in the JVM landscape because, for the first time, Java applications were competing with C++ in startup time. While Graal Native Image significantly reduced startup time, this came with several trade-offs.
Firstly, static compilation requires a global view of the application at build-time. This imposes some limitations on the use of dynamic capabilities that developers building applications on the JVM have traditionally relied on. For example, operations such as reflection, dynamic class loading, and invokedynamic need special treatment because they interfere with the requirements of the static analysis needed to produce a native image. This, in turn, means you might need to modify your applications significantly for the native image to work and worse, your dependencies might also need updating.
Secondly, debugging becomes challenging because you are no longer debugging a JVM application but rather a native executable. You’ll need to trade your familiar Java debugger for a native debugger like gdb to investigate issues. One way to work around this is by using a JVM in development and a native image in production. However, this means that your production environment will not match your development environment, and you could end up having to fix bugs on two different runtimes!
Finally, one of the great things that JVMs offer is excellent throughput, with just-in-time compilers optimizing the application at runtime based on live data to achieve optimal performance. This, too, must be sacrificed in a native image because developers get only one shot at compilation, at build time. Several frameworks, such as Spring Native, have built up capabilities to help Java developers work within the native-image constraints, but there is no getting away from the fact that the developer does have to give up something to obtain the startup-time benefits of native image.
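For illustration, producing a native executable with GraalVM’s native-image tool is a single ahead-of-time compilation step, so all optimization decisions are fixed at build time (a sketch; myapp.jar and the output name are placeholders):
# Compile the application jar into a standalone native executable
native-image -jar myapp.jar myapp-native
# Run the resulting binary; there is no JVM or JIT at this point
./myapp-native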
Skipping startup with checkpoint/restore
The Liberty runtime has taken a different approach to improve startup. Liberty aims to offer fast startup without compromise with a feature called Liberty InstantOn. This feature offers all the capabilities with which Java developers are familiar while improving runtime startup times by up to 10 times in comparison to JVM runs without InstantOn.
Liberty InstantOn is fundamentally based on checkpoint/restore technology. You start an application, then pause it and persist the state — checkpoint — at some well-defined points offered by Liberty. This checkpoint then becomes your application image, and when you deploy your application, you just resume the image from the saved state — restore — so that the application skips the startup and initialization process that it would normally go through (as those steps have already run).
Liberty uses OpenJ9 CRIU support, a technology based on Linux CRIU which enables any application to be checkpointed and resumed. Because you are still running with a JVM in the Liberty InstantOn approach, there is no loss to throughput performance. Java debugging works as expected, and all the libraries that depend on dynamic JVM capabilities will also work.
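Conceptually, this is the same dump-and-restore cycle that the underlying Linux CRIU tool exposes for any process (a simplified sketch of raw CRIU usage, shown only to illustrate the mechanism; it is not the Liberty InstantOn workflow):
# Freeze a running process tree and write its state to an images directory
criu dump -t <pid> -D /tmp/checkpoint --shell-job
# Later, recreate the process from the saved state
criu restore -D /tmp/checkpoint --shell-job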
Resolving the limitations of checkpoint/restore
While the concept of checkpoint/restore sounds very simple, in reality, there are some constraints (arising out of how CRIU works) that need to be addressed by the runtime and JVM working together for an application to experience these benefits. When a checkpoint is taken (when building the image), CRIU takes the environment and “freezes” it within the checkpointed state: environment variables, knowledge of computing resources (CPU, memory), and time itself are all baked into the image. Any of those things can be different in the restored environment, causing inconsistencies in the application that can be difficult to track. Additionally, there can be some data captured in the checkpoint image that would not be ideal if, for example, images are to be shipped across public networks via container registries to deployment environments. This data can include external connections to endpoints that do not exist in the restore environment and security tokens that you don’t want to embed in the checkpoint image.
For these reasons, OpenJ9 CRIU support has built-in compensations to ensure that a checkpointed application behaves correctly and safely when restored. Time-sensitive APIs are modified to compensate for the downtime between the checkpoint and restore. Random APIs like SecureRandom are re-seeded upon restore to ensure that each time a checkpoint is restored, it is restored as a unique instance.
The JVM can address the things it knows about, but there might be application code that needs similar treatment. The Liberty runtime helps to shield developers from the complexities of checkpoint/restore by working with the JVM to address any remaining issues that the JVM cannot deal with on its own. To facilitate this, OpenJ9 offers a hook mechanism that developers can use to register the methods that will run before and after a checkpoint. This mechanism is used by Liberty extensively, for example, to re-parse the configuration at deployment time to ensure that the correct configuration is used for the environment.
So, while OpenJ9 offers the tools to leverage checkpoint/restore technology effectively, the straightforward way to enhance the startup time of an existing application is to run it on Liberty with Liberty InstantOn. Liberty InstantOn abstracts the checkpoint/restore process, simplifying the developer’s choices to only a few, such as determining whether a checkpoint should be before or after the application starts.
Ultimately, the end goal is to improve the cloud-native experience of Java applications, which means that whatever technology you use must work effectively in a cloud environment. Liberty InstantOn integrates seamlessly with container engines like Docker and Podman, and it also works with Kubernetes-based platforms like Knative and Red Hat OpenShift Container Platform. We have done work to ensure that Liberty InstantOn runs in unprivileged modes because this is essential for the security of production environments. This work is being contributed back to the CRIU project.
Trying out Liberty InstantOn with your own app
Liberty InstantOn is publicly available as a beta, and developers can try it with their existing applications to see the improvements (up to 10 times faster) in startup time. They just need to create a container image of their application using the Liberty InstantOn tools. Open Liberty publishes production-ready container images that make it easy to containerize applications to run in a container engine such as Docker or Podman, or in Kubernetes environments like Red Hat OpenShift.
The Open Liberty container images contain all the necessary dependencies for running an application with the Open Liberty runtime. The following instructions describe how developers can create a base application container image with their application on top of the provided Open Liberty beta-instanton image (icr.io/appcafe/open-liberty:beta-instanton) and then how to create and add a layer on top that contains the checkpoint process state. The beta-instanton image contains all the prerequisites needed to checkpoint an Open Liberty process and store that checkpoint in a container image layer, including an early access build of OpenJ9 CRIU support and Linux CRIU.
How to containerize your app to start faster with Liberty InstantOn
The following instructions use Podman to build and run the application container and use the sample application from the Open Liberty getting started guide. Developers can substitute their own application if they have one to hand.
The completed getting started application contains a Dockerfile that looks like this:
FROM icr.io/appcafe/open-liberty:full-java11-openj9-ubi
ARG VERSION=1.0
ARG REVISION=SNAPSHOT
COPY --chown=1001:0 src/main/liberty/config/ /config/
COPY --chown=1001:0 target/*.war /config/apps/
RUN configure.sh
First, the developer needs to update the FROM instruction to utilize the beta-instanton image:
FROM icr.io/appcafe/open-liberty:beta-instanton
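With that single change, the complete Dockerfile for the getting-started application looks like this (everything other than the FROM line is unchanged from the original shown above):
FROM icr.io/appcafe/open-liberty:beta-instanton
ARG VERSION=1.0
ARG REVISION=SNAPSHOT
COPY --chown=1001:0 src/main/liberty/config/ /config/
COPY --chown=1001:0 target/*.war /config/apps/
RUN configure.sh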
After that, the application container image can be built with the updated Dockerfile using the following command:
podman build -t getting-started .
The command creates the application container image, but no checkpoint process has been created yet. The checkpoint process for the application is created by running the application container image with some additional options using the following command:
podman run \
    --name getting-started-checkpoint-container \
    --privileged \
    --env WLP_CHECKPOINT=applications \
    getting-started
The WLP_CHECKPOINT variable specifies that the Open Liberty runtime will checkpoint the application process at the point after the configured applications have been started but before any ports are opened to take incoming requests for the applications. When the application process checkpoint has been completed, the running container will stop. This results in a stopped container that contains the checkpoint process state.
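As an optional check (not part of the original steps), you can confirm that the checkpoint run completed by listing the exited container under the name given above:
podman ps -a --filter name=getting-started-checkpoint-container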
The final step is to layer this checkpoint process state on top of the original application container image. This is done by committing the stopped application container called getting-started-checkpoint-container to a new container image with the following command:
podman commit \
    getting-started-checkpoint-container \
    getting-started-instanton
The final result is the getting-started-instanton container image, ready to run.
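As another optional check, a quick listing confirms the committed image is now available locally:
podman images | grep getting-started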
Running the container with privileged Linux capabilities
When running the getting-started-instanton container, developers must grant it a set of Linux capabilities so that the CRIU binary in the container image can perform the restore process:
cap_checkpoint_restore
cap_net_admin
cap_sys_ptrace
When the checkpoint process was created earlier, the container was run with the --privileged flag, which granted the CRIU binary in the container image the Linux capabilities it required. The following Podman command runs the container with the three required capabilities:
podman run \
    --rm \
    --cap-add=CHECKPOINT_RESTORE \
    --cap-add=NET_ADMIN \
    --cap-add=SYS_PTRACE \
    -p 9080:9080 \
    getting-started-instanton
The getting-started-instanton container now runs with the necessary privileges to perform the restore process, and the application starts up to 10 times faster than the original getting-started application.
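As a quick smoke test, you can hit the restored application once the container is up; the /system/properties path is an assumption based on the sample service in the getting started guide and will differ for other applications:
curl http://localhost:9080/system/properties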
Future Improvements
The Open Liberty beta releases publish regular updates to Liberty InstantOn. Some improvements are planned for future releases to make the process of building and running an application image with Liberty InstantOn easier. For example, additional work has been done to remove the need for the NET_ADMIN Linux capability. There is also a plan to remove the requirement for SYS_PTRACE when restoring the application process. This would reduce the required capability list to only the CHECKPOINT_RESTORE capability when running the application.
Other plans include performing the application process checkpoint during the container image build step itself, removing the need for separate container run and container commit commands to store the application process state in a container image layer.
Let us know what you think
While going cloud-native requires many changes to how organizations approach their business, with Liberty InstantOn developers won’t have to worry about altering their application development approach.
Developers are encouraged to try Liberty InstantOn in beta using Open Liberty 22.0.0.11-beta or a later version. Feedback is welcome and can be shared through the project’s mailing list. Developers who encounter an issue can post a question on Stack Overflow, and if they discover a bug, they are welcome to raise an issue.
Background notes
Open Liberty and Eclipse OpenJ9 are open-source projects. IBM builds its commercial WebSphere Liberty Java runtime and IBM Semeru Runtimes Java distributions from these projects. Liberty InstantOn uses the checkpoint/restore technology made available by the Linux Checkpoint/Restore In Userspace (CRIU) project and works with CRIU to contribute code back to the project.