Article: The Future of DevOps Is No-Code

MMS Founder
MMS Nahla Davies

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • DevOps usage is snowballing, but organizations are having problems finding enough experienced personnel to build DevOps teams
  • DevOps tools and workflows allow organizations to think beyond traditional staffing sources in building DevOps teams
  • Low-code and no-code tools, in particular, let organizations integrate less experienced developers into their DevOps teams, effectively bridging the talent gap
  • In one report, 72% of new low-code developers can build apps within the first three months of learning how to use the tools
  • Low-code and no-code tools also free up existing developers by reducing the time spent on integrating and administering DevOps toolsets

The global DevOps market has expanded rapidly in recent years, reaching more than $7 billion in 2021. That number will grow to nearly $40 billion, a five-fold increase, by the end of the decade. 

At the same time, the talent gap in the DevOps chain is also growing steadily. According to the U.S. Department of Labor, the global shortage of development engineers will reach more than 85 million by 2030. Annually, the demand for DevOps professionals will likely grow more than 20% for the rest of the decade.

These two conflicting trends place software and application companies in an extremely complicated position. On the one hand, they have an opportunity to substantially increase revenue by filling the growing demand for new and improved applications. But the increasing lack of ability to find the right people to build these products limits their ability to take advantage of that opportunity. So, what can companies do to compete effectively in the global market? 

One potential solution is to integrate more low-code and no-code tools into the DevOps cycle. These tools provide DevOps teams with numerous benefits and efficiencies, from streamlining the work of existing DevOps professionals to allowing companies to look beyond traditional personnel sources for expanding their teams. Indeed, it is likely that organizations that fail to integrate these tools into the DevOps process will quickly fall behind their competitors.

The rise of DevOps

DevOps is a relatively recent phenomenon, only beginning to make itself known around 2008. But it is a trend that has quickly overtaken the software and application industry. 

DevOps arose as a way to streamline the overall software development lifecycle. Before DevOps, the teams involved in the various stages of the lifecycle operated independently in very insulated silos. Communication between teams was often lacking and ineffective when it did occur.

Because one hand never knew what the other was doing, software development was often highly inefficient. Worse yet, the different teams frequently had different goals and objectives, and these objectives were often conflicting. Speed of release vs. functionality vs. quality assurance strained against each other, making the development teams compete with each other rather than working together towards the same goal – a quality product that reaches the end customer as quickly as possible.

DevOps offered a new, collaborative model. While the term DevOps comes from combining development and operations (i.e., deployment), as an overall philosophy, DevOps means much more. Amazon Web Services defines DevOps as:

DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes.

But DevOps is more than just better communication and integrated teams working towards common goals. Instead, the truly effective DevOps team extends beyond traditional development and deployment. It also tightly integrates monitoring (for example, Java logging), quality assurance, and security to ensure that customers receive the best product possible and one they can trust with their information. 

DevOps also demands applying the right tools and workflows to achieve those goals. Indeed, the automation of workflows is one of the most essential practices of DevOps. And well-implemented automation further enhances the communication between parts of the DevOps team.

In the 15 years since organizations began applying DevOps, it has seen rapid adoption with excellent results. According to one recent survey, 61% of IT decision-makers said DevOps practices and methodologies helped them deliver better products to their customers. And 49% of companies relying on DevOps could reduce time to market for their products.

DevOps was not a perfect solution

DevOps was unquestionably a substantial improvement over traditional software development methodologies. In addition to removing communications obstacles throughout the development chain, DevOps provided benefits such as:

  • Increased speed of development: Because all parts of the chain are working effectively together, they can resolve issues more quickly.
  • Reduced time to market: Improved workflows and automation, including continuous integration (CI) and continuous delivery (CD), allow for more frequent and rapid distribution of products to consumers.
  • Enhanced scalability: With many automated and robust testing and production environments in place, teams can more easily scale products to meet new demands.
  • Built-in security: Many DevOps teams now employ processes such as policy-as-code that integrate security into the development process rather than having it be an afterthought.
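
The policy-as-code idea above can be made concrete with a small sketch: deployment settings are validated against machine-readable rules before anything ships. The Python below is a hypothetical illustration only (real teams typically reach for dedicated tools such as Open Policy Agent); the rule names and config schema are invented:

```python
# Minimal policy-as-code sketch: a deployment config is validated
# against machine-readable security rules before release.
# The rules and config schema here are invented for illustration.

def check_policies(config: dict) -> list[str]:
    """Return a list of policy violations for a deployment config."""
    violations = []
    if config.get("run_as_root", False):
        violations.append("containers must not run as root")
    if not config.get("tls_enabled", True):
        violations.append("TLS must be enabled for all endpoints")
    for port in config.get("open_ports", []):
        if port not in (80, 443):
            violations.append(f"port {port} is not on the allowed list")
    return violations

config = {"run_as_root": True, "tls_enabled": False, "open_ports": [80, 8080]}
for violation in check_policies(config):
    print("POLICY VIOLATION:", violation)
```

In a pipeline, a non-empty violation list would fail the build, making security a gate in the development process rather than an afterthought.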

Despite its clear advantages, DevOps was not without its issues. One of the most significant challenges facing organizations transitioning to DevOps was the need to create a new mindset focused on collaboration. Extensive cultural and philosophical shifts inevitably generate anxiety as team members move away from well-known and comfortable workflows.

But the move to DevOps requires more than just a cultural change. It also necessitates learning new governance structures, tools, and workflows. As anyone who has participated in the rollout of a new tool knows, it is never as simple as it sounds, especially when it involves letting go of legacy systems.

DevOps tools themselves compounded the difficulty of the transition, for many different reasons. Siloed development and operations teams typically had separate toolsets and they used them to facilitate different goals and metrics. Finding the right set of tools to bridge these differences can be challenging. And asking both teams to learn a new set of tools raises concerns about morale and the use of time. 

All of this makes it doubly important to focus on cultural changes before toolset changes. Tell your development team that they have to take time away from their primary tasks to learn new tools, and more likely than not you will get disgruntled developers. But if you first show them how those new tools will make their lives easier not just now, but well into the future, you will more quickly develop buy-in. Low-code and no-code tools can do just that by allowing citizen developers to take simpler tasks off the developers’ plates, leaving them to focus on higher-end work.

Even with full buy-in, however, new tools can still pose problems. Until teams become comfortable with the new processes and structures, there is a danger of overreliance on tools, many of which seem to have features that can address any problem under the sun. And with the wide assortment of tools available, developers frequently spend more time making their tools work together than actually completing projects. Indeed, developers report spending up to 40% of their time on integration tasks.

Another major hurdle organizations face in today’s workplace is finding the right people to staff their DevOps teams. Although interest in information technology is expanding and more and more young people have a substantial amount of self-taught IT knowledge, the talent gap for developers remains problematic. According to a McKinsey study, 26% of organizations identified IT, mobile, and web design as their business area with the greatest shortage of personnel.

These are just a few of the challenges organizations face when moving to and becoming proficient in the DevOps environment. But organizations quickly discover that the benefits of DevOps are more than worth the time, money, and effort they invest in the change.

The case for integrating low-code and no-code tools in the DevOps cycle

Companies are looking outside the box to fill the talent gap, and one of the most successful approaches currently is upskilling their existing workforce. As a side benefit, upskilling improves employee satisfaction and retention, which is increasingly important as, according to recent surveys, 90% of the workforce reports being dissatisfied with their current work environment.

For DevOps, the starting point for upskilling is to train non-DevOps personnel to become effective members of the DevOps team. And this is where no-code and low-code DevOps tools come in. With no-code and low-code tools, even complete development novices can learn to build websites and applications. If someone has enough computer knowledge to drag-and-drop, they can probably learn no-code tools. And those with a little more computer savvy can put low-code tools to good use.

As their name suggests, no-code and low-code tools facilitate software and application development with minimal need for writing or understanding code. Instead of building code, developers rely on visual, drag-and-drop processes to piece together pre-made functionality. So instead of needing to understand the intricacies of specific programming languages, developers need only have a good feel for the business’s needs, the overall application architecture, and the application’s workflows.
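
The composition model described above can be sketched in a few lines: the platform supplies pre-built, pre-tested blocks, and the builder only declares the order in which they run. The Python below is purely an analogy; the block names and workflow format are invented:

```python
# Analogy for low-code composition: pre-built, pre-tested blocks are
# chained together; the "developer" only declares the workflow.
# Block names and the pipeline format are invented for illustration.

PREBUILT_BLOCKS = {
    "validate_form": lambda data: {**data, "valid": bool(data.get("email"))},
    "save_record":   lambda data: {**data, "saved": data.get("valid", False)},
    "send_email":    lambda data: {**data, "notified": data.get("saved", False)},
}

def run_workflow(steps: list[str], data: dict) -> dict:
    """Execute a drag-and-drop style workflow as a chain of blocks."""
    for step in steps:
        data = PREBUILT_BLOCKS[step](data)
    return data

# The citizen developer's whole "program" is this list of steps:
result = run_workflow(["validate_form", "save_record", "send_email"],
                      {"email": "user@example.com"})
print(result)
```

The entire “program” here is the list of step names, which is essentially what a visual drag-and-drop canvas produces behind the scenes.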

These so-called ‘citizen developers’ fill a glaring need at a much lower cost than competing for the few available experienced developers on the market. And sometimes, they can be the only truly viable option.

While building a stable of citizen developers sounds good in theory, companies may wonder whether they can really gain development benefits from previously unskilled workers. The numbers, however, are impressive. According to studies of companies using low-code tools, 24% of their citizen developers had absolutely no programming experience before taking up low-code application development. Yet 72% of new low-code developers can build apps within the first three months of learning how to use the tools. Is it any wonder that 84% of organizations are now either actively using these tools or putting plans in place to implement them in the near future?

As the workforce gets younger, the likelihood that new employees will have little or no programming experience decreases. Many new employees are coming into the workplace having already built their own websites or blogs, and perhaps even their own e-commerce businesses and applications. And they probably did it using personal low-code and no-code tools like WordPress, Wix, or Square. Businesses should leverage this experience in filling their development needs.

But no-code and low-code tools also benefit more experienced developers and optimize developer time. Rather than spending a large part of their limited work hours on pipelines and integration, they can focus more fully on substantive development and delivery. And because low-code and no-code tools use pre-built and pre-tested modules, there is less need to chase down bugs and rewrite code, further easing the workload of already overburdened developers.

Another key benefit of low-code and no-code tools is that they can help businesses automate and simplify cybersecurity tasks. Many tools have built-in security features that are simple for even the most novice developers to put in place. And IT staff can use low-code and no-code tools to build security “playbooks” for development teams so that everyone is always on the same page when it comes to the critical issue of application and network security. 

Both businesses and customers see substantial benefits from citizen developers using low-code and no-code tools. Deployment velocity increases quickly, as much as 17 times according to one study, so businesses can push new and improved products out to their customers more frequently. And customers are getting more and more functionality, along with more stable and reliable products.

While organizations of all sizes can (and should) put low-code and no-code tools into their development toolboxes, it is small and medium enterprises (SMEs) that stand to gain the most benefit. SMEs frequently have few IT staff and limited resources to compete for talent in the increasingly competitive IT labor market. But with low-code and no-code tools, SMEs can use existing staff to effectively fill the development talent gap.

What low-code and no-code tools are available?

The number of no-code and low-code tools is growing almost as rapidly as the DevOps market. And they cover every stage of the software development cycle, from building functionality to testing and quality assurance to security. 

Consider Microsoft’s Power Platform, which includes the Power Apps, Power BI, and Power Automate products. Microsoft recently expanded the suite to include a new module called Power Pages. This product lets users build high-end business websites without any coding expertise. 

Although Power Pages is geared towards the citizen developer, more experienced developers can take the no-code development tool and optimize it as needed using their own DevOps tools. But with more people in the chain and experienced developers focused on the most critical parts of the delivery cycle, organizations will find themselves delivering better products far more quickly than before.

Low-code and no-code tools can do far more than just build websites. There are also tools geared towards developing internal applications to help employees be more efficient (e.g. Appian, Retool, SalesForce Lightning, Creatio). And some tools allow developers to build cross-platform applications, taking advantage of an ever-increasing demand for mobile applications that can work on any device no matter what the operating system (e.g. Zoho Creator).

Naturally, these are just a few examples. Major providers from Amazon (Honeycode) to IBM (Automation Workstation) to Oracle (APEX) and others also offer low-code and no-code tools for almost any application. It is not a matter of finding low-code and no-code tools; it is only a matter of finding the right ones for your organization.

Conclusion

If you aren’t a DevOps organization already, chances are you soon will be. And you will need as many qualified DevOps team members as you can get your hands on. No-code and low-code DevOps tools are an easy way to build your stable of developers while freeing up your existing developers to focus their time on getting quality products out the door. 

About the Author

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



Google Introduces Cloud Workstations in Public Preview

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

During its Cloud Next event, Google introduced the public preview of Cloud Workstations, which provides fully-managed and integrated development environments on the Google Cloud Platform. 

Brian Dorsey, a developer advocate at Google, explains in a Google Cloud Tech video what Cloud Workstations are exactly:

It is a web application hosted in the cloud console to create and manage container images, which are then used as templates to create development environments that run on a dedicated VM for each developer.

Cloud Workstations addresses two personas, according to the company: developers and administrators. Developers can quickly access a secure, fast, and customizable development environment, while administrators can quickly provision, scale, and secure development environments. 


Source: https://cloud.google.com/blog/products/application-development/introducing-cloud-workstations/

Under the hood, Cloud Workstations manage resources for the workstation, such as Compute Engine VMs and persistent disks (PDs). Workstations are contained in and managed by workstation clusters, each with a dedicated controller connected to a VPC in which workstations reside with Private Service Connect. In addition, it is also possible to enable a fully private gateway so that only endpoints inside a private network have access to Cloud Workstations.

 
Source: https://cloud.google.com/workstations/docs/architecture

The company states it focuses on three core areas with Cloud Workstations:

  • Fast developer onboarding via consistent environments: organizations can set up one or more workstation configurations as their developer teams’ environment templates.
  • Customizable development environments, providing developers flexibility with multi-IDE support such as VS Code, IntelliJ IDEA, and Rider. Google also partners with JetBrains. Furthermore, there is support for third-party tools like GitLab and Jenkins, and developers can customize the container images.
  • Security controls and policy support, extending the same security policies and mechanisms organizations use for their production services in the cloud to their developer workstations. For example, running workstations on dedicated virtual machines and automatically applying Identity and Access management policies.

Max Golov, a principal technical instructor at MuleSoft, explained in a JetBrains blog post what the partnership with Google brings for developers:

Cloud Workstations provides preconfigured but customizable development environments available anywhere and anytime. With this partnership, Cloud Workstations now has support for the most popular IDEs, such as IntelliJ IDEA, PyCharm, Rider, and many more, allowing users to take advantage of managed and customizable developer environments in Google Cloud in their preferred IDEs.

Besides Google, Microsoft and AWS also offer development environments in the cloud. Microsoft offers Codespaces and Microsoft Dev Box as coding environments in the cloud. With Codespaces, developers can quickly get a VM with VS Code; similarly, with Microsoft Dev Box, they can get an entire preconfigured developer workstation in the cloud. And AWS offers Cloud9, which gives developers a cloud-based integrated development environment (IDE) in a browser to develop, run, and debug code.

The question is whether developers will adopt the available cloud-based development environments or cloud-based Integrated Development Environments (IDEs). Corey Quinn, a cloud economist, concludes in his blog post on Cloud IDE adoption:

Unfortunately, the experience of cloud development, while periodically better on a variety of axes, hasn’t been a transformative breakthrough from the developer experience perspective so far. Until that changes, I suspect driving the adoption of cloud-based development environments is going to be an uphill battle.

In addition, Richard Seroter, a director of developer relations and outbound product management at Google Cloud, tweeted:

I’ve used @googlecloud Workstations a bit this week for coding, but not yet sure it’ll be my primary dev environment. It’s got promise!

Lastly, more details are available on the documentation landing page, and while in preview, the specifics on pricing can be found on the pricing page.



Java News Roundup: Payara Platform 6, Spring Updates and CVEs, Asynchronous Stack Trace VM API

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for October 31st, 2022 features news from OpenJDK, JDK 20, JavaFX 20, Generational ZGC, Spring Framework milestone, point and release candidates, Payara Platform 6, Micronaut 3.7.3, MicroProfile 6.0-RC2, Hibernate ORM point releases, Apache TomEE 9.0-RC1, Apache Camel 3.18.3, GraalVM Native Build Tools 0.9.17, JReleaser 1.3.1, JobRunr 5.3.1, JDKMon 17.0.39 and J-Fall 2022.

OpenJDK

JEP 435, Asynchronous Stack Trace VM API, was promoted from its Draft 8284289 to Candidate status this past week. This HotSpot JEP proposes to define a well-tested, efficient, and reliable API to asynchronously collect stack traces and include information on both Java and native stack frames.

JDK 20

Build 22 of the JDK 20 early-access builds was also made available this past week, featuring updates from Build 21 that include fixes to various issues. Further details on this build may be found in the release notes.

For JDK 20, developers are encouraged to report bugs via the Java Bug Database.

JavaFX 20

Build 6 and Build 5 of the JavaFX 20 early-access builds were made available to the Java community. Designed to work with the JDK 20 early-access builds, they allow JavaFX application developers to build and test their applications with JavaFX 20 on JDK 20.

Generational ZGC

Build 20-genzgc+2-20 of the Generational ZGC early-access builds was also made available to the Java community and is based on an incomplete version of JDK 20.

Spring Framework

On the road to Spring Framework 6.0.0, the third release candidate was made available that delivers 22 bug fixes and improvements that include: support for @RequestPart arguments in the methods defined in the @HttpExchange annotation; introduce the SimpleValueStyler class for use with the ToStringCreator class; and provide AOT support for clients of the HttpServiceProxyFactory class. This is the last release candidate before the planned GA release in November 2022. More details on this release may be found in the release notes.

The second release candidate of Spring Data 2022.0.0, codenamed Turing, was made available featuring numerous bug fixes and a refined integration of observability through Micrometer for the Spring Data MongoDB, Spring Data Redis, and Spring Data for Apache Cassandra modules. All of the modules were also upgraded to their RC2 equivalents. Further details on this release may be found in the release notes.

Versions 5.7.5 and 5.6.9 of Spring Security have been released featuring fixes for: the AuthorizationFilter class incorrectly extending the OncePerRequestFilter class; and incorrect scope mapping. More details on this release may be found in the release notes for version 5.7.5 and version 5.6.9.

On the road to Spring Cloud 2022.0.0, the first release candidate was made available that ships with upgrades to the RC1 equivalents of all of the subprojects except Spring Cloud CLI, Spring Cloud for Cloud Foundry and Spring Cloud Sleuth which were removed from the release train. Further details on this release may be found in the release notes.

The first release candidate of Spring Authorization Server 1.0.0 was made available with new features that include: a requirement that the @Configuration annotation be used in conjunction with the @EnableWebSecurity annotation; the replacement of the loadContext() method with the loadDeferredContext() method defined in the SecurityContextRepository interface; and merged enhancements from the 0.4 release train into main. More details on this release may be found in the release notes.

Similarly, the first release candidate of Spring Authorization Server 0.4.0 was made available featuring improvements to custom endpoints related to the OidcUserInfoEndpointFilter and OidcClientRegistration classes. Further details on this release may be found in the release notes.

On the road to Spring Modulith 0.1, the second milestone release delivers new features such as: the removal of the obsolete spring.factories property in the observability module; and ensuring that test autoconfiguration is ordered first. InfoQ will follow up with a more detailed news story on Spring Modulith that was introduced in late October 2022.

VMware has published three Common Vulnerabilities and Exposures (CVEs) this past week.

Developers are encouraged to upgrade to Spring Tools 4.16.1 and Spring Security versions 5.7.5 and 5.6.9.

Payara

Payara has released their November 2022 edition of the Payara Platform that introduced Payara Community 6.2022.1 as the first stable release of Payara 6 Community and serves as a compatible implementation for the Jakarta EE 10 Platform, Web Profile and Core Profile. Payara 6 will now serve as the updated, current version of Payara Platform Community. More details on this release may be found in the release notes.

Payara Community 5.2022.4 is the second-to-last release in Payara 5 Community. Further details on this release may be found in the release notes.

Payara Enterprise 5.45.0 delivers five bug fixes, one security fix and two improvements. More details on this release may be found in the release notes.

All these new versions address a zero-day vulnerability in which attackers can explore the contents of the WEB-INF and META-INF folders if an application is deployed to the root context.

Micronaut

The Micronaut Foundation has released Micronaut 3.7.3 featuring bug fixes and patch releases of Micronaut Test Resources, Micronaut Servlet, Micronaut Security, Micronaut Kafka, and Micronaut Redis. There were also dependency upgrades to SnakeYAML 1.33 and Netty 4.1.84. Further details on this release may be found in the release notes.

MicroProfile

On the road to MicroProfile 6.0, the MicroProfile Working Group has provided the second release candidate of MicroProfile 6.0 that delivers updates to all the specifications. It is also important to note that the MicroProfile OpenTracing specification has been replaced with the new MicroProfile Telemetry specification. The anticipated GA release of MicroProfile 6.0 is expected by late-November/early-December 2022.

Hibernate

A particular pattern of code that triggers a severe performance penalty on large multi-core servers has been identified by the Red Hat performance team. Many libraries, including Hibernate ORM, have been affected. The release of Hibernate ORM 6.1.5.Final ships with some patches as an initial step in mitigating this issue. The Hibernate team claims that early tests are promising.

Hibernate ORM 5.6.13.Final has been released featuring bug fixes and enhancements such as the access modifier of the getOp() method defined in the SimpleExpression class was changed from protected to public to assist developers in migrating from the legacy Criteria API. There were also dependency upgrades to ByteBuddy 1.12.18 and Byteman 4.0.20.

Shortly after the release of Hibernate ORM 5.6.13, a critical regression was discovered in which a ClassCastException was thrown via a check for an implementation of the Managed interface rather than an implementation of the ManagedEntity interface. Hibernate ORM 5.6.14.Final has been released to address this issue.

Apache Software Foundation

The release of Apache TomEE 9.0.0-RC1 ships with full compatibility with MicroProfile 5.0 and dependency upgrades such as: Eclipse Mojarra 3.0.2, HSQLDB 2.7.1, Hibernate 6.1.4.Final, Log4J2 2.18.0, Tomcat 10.0.27 and Jackson 2.13.4. More details on this release may be found in the release notes.

Apache Camel 3.18.3 has been released featuring 52 bug fixes, improvements and dependency upgrades that include: Spring Boot 2.7.5, camel-hbase 2.5.0 and kamelets 0.9.0 in the camel-jbang module. Further details on this release may be found in the release notes.

GraalVM Native Build Tools

On the road to version 1.0, Oracle Labs has released version 0.9.17 of Native Build Tools, a GraalVM project consisting of plugins for interoperability with GraalVM Native Image. This latest release provides improvements such as: a new requiredVersion property to check for a minimal version of GraalVM; and make the GraalVM installation check lazy. More details on this release may be found in the changelog.

JReleaser

Version 1.3.1 of JReleaser, a Java utility that streamlines creating project releases, has been released featuring a fix of the Nexus2 query status after close/release/drop operations were not reported if those remote operations failed. Further details on this release may be found in the release notes.

JobRunr

JobRunr 5.3.1 has been released featuring fixes for: JobRunr does not fail on null values for an instance of the MDC class; DB Migration is applied multiple times if the time to execute the first run takes an excessive amount of time; and inheritance in background jobs not always working.

JDKMon

Version 17.0.39 of JDKMon, a tool that monitors and updates installed JDKs, has been made available this past week. Created by Gerrit Grunwald, principal engineer at Azul, this new version ships with a CVE detection tool for builds of GraalVM in which the CVEs are sorted by severity.

J-Fall Conference

J-Fall 2022, sponsored by the Nederlandse Java User Group (NLJUG), was held at the Pathé Ede in Ede, Netherlands this past week featuring speakers from the Java community who presented keynotes, technical sessions, workshops and hands-on labs.



Kubernetes 1.24 Released with Network Policy Status, Contextual Logging, and Subresource Support

MMS Founder
MMS Mostafa Radwan

Article originally posted on InfoQ. Visit InfoQ

The Cloud Native Computing Foundation (CNCF) released Kubernetes 1.24, named Stargazer, in May. The release introduces new features such as Network Policy Status, contextual logging, and the signing of release artifacts; promotes features such as PodOverhead, CSI volume expansion, and CSR duration to stable; moves features such as OpenAPI v3, gRPC probes, and volume populators to beta; and deprecates features such as DynamicKubeletConfig. Version 1.24 also removes dockershim.

In the new release, kubectl, the command-line tool to run commands against clusters, includes a new subresource flag to fetch and update subresources. The new flag makes it easier to update subresources instead of using curl commands.

Contextual logging is introduced to make log output more useful: libraries are passed a logger instance by their caller and use it for logging instead of accessing a global logger.
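
Kubernetes implements contextual logging in Go around the logr interface, but the pattern translates to any language: the caller builds a logger that carries context and hands it to the library, which never touches a global. A rough Python analogy using only the standard library (the function and context values are invented for illustration):

```python
import logging

# Contextual logging analogy: the caller builds a logger that carries
# request context and passes it to the library; the library never
# reaches for a global logger.

def reconcile_pod(logger: logging.LoggerAdapter, pod_name: str) -> None:
    # The library logs only through the logger it was handed.
    logger.info("reconciling pod %s", pod_name)

logging.basicConfig(level=logging.INFO)
base = logging.getLogger("controller")

# The caller attaches context once; downstream log lines inherit it.
ctx_logger = logging.LoggerAdapter(base, {"namespace": "default"})
reconcile_pod(ctx_logger, "web-0")
```

Because the context travels with the logger instance, the same library code produces differently-scoped output depending on who called it, which is the point of the Kubernetes change.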

To increase supply chain security, container images pertaining to release artifacts can now be signed and verified using cosign, one of Sigstore’s tools to sign, verify, and protect software.

In version 1.24, a status subresource has been added to network policies to make it easier to troubleshoot network-related issues since network policies are implemented differently by the different CNIs.

OpenAPI v3 support moved to beta in version 1.24 and is turned on by default. This feature allows the kube-apiserver, the server that validates and configures data for API objects such as pods and services, to serve objects in OpenAPI v3 format.

In addition, mixed protocols in Services of type LoadBalancer are turned on by default in beta. This allows a Service of type LoadBalancer to serve different protocols (e.g., TCP and UDP) on the same port.
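
As an illustration, the Service below (written as a Python dict for brevity; it would be YAML in practice) declares both TCP and UDP on port 53 of a single LoadBalancer, which this beta feature permits; the small helper just collects the protocols sharing a port:

```python
# A LoadBalancer Service exposing both TCP and UDP on port 53,
# expressed as a Python dict for illustration only.
service = {
    "kind": "Service",
    "spec": {
        "type": "LoadBalancer",
        "ports": [
            {"name": "dns-tcp", "protocol": "TCP", "port": 53},
            {"name": "dns-udp", "protocol": "UDP", "port": 53},
        ],
    },
}

def protocols_on_port(svc: dict, port: int) -> set[str]:
    """Collect the protocols a Service serves on a given port."""
    return {p["protocol"] for p in svc["spec"]["ports"] if p["port"] == port}

# With the feature enabled, both protocols may share one port.
print(protocols_on_port(service, 53))  # {'TCP', 'UDP'} in some order
```

DNS is the classic case: clients speak both UDP and TCP to port 53, which previously required two separate load balancers.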

Graceful node shutdown, first introduced in version 1.21, is now in beta. The feature distinguishes between the termination of regular pods and critical pods running on the node and gives pods extra time to stop.

CSI volume expansion became generally available in this release and is enabled by default. This feature can dynamically resize persistent volumes whenever the underlying CSI driver supports volume expansion.

PodOverhead also became stable in this release and is enabled by default. It allows Kubernetes, when scheduling a pod, to account for the pod's infrastructure overhead on top of the container requests and limits. A RuntimeClass that defines the overhead field is required to use the feature.
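A minimal RuntimeClass declaring a fixed per-pod overhead might look like this; the handler name and resource values are illustrative:

```shell
# RuntimeClass with a fixed per-pod overhead; the scheduler adds these
# amounts to the pod's resource requests.
cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-containers
handler: kata
overhead:
  podFixed:
    cpu: 250m
    memory: 120Mi
EOF
```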

Storage capacity tracking moved to stable in version 1.24 allowing the Kubernetes scheduler to make sure there’s enough capacity on a node’s associated storage before placing a pod. That way, it minimizes multiple scheduling attempts by filtering out nodes that do not have enough storage.

Kubernetes is an open-source production-grade orchestration system for deploying, scaling, and managing application containers.

According to the release notes, Kubernetes version 1.24 has 46 enhancements including 13 new, 13 becoming generally available or stable, and 15 moving to beta. In addition, six features have been deprecated.

CNCF held a webinar on May 24, 2022, to review the major features and answer questions.

About the Author

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



Amazon EC2 Introduces Replace Root Volume to Patch Guest Operating System and Applications

MMS Founder
MMS Renato Losio

Article originally posted on InfoQ. Visit InfoQ

AWS recently introduced the ability to replace the root volume of EC2 instances using an updated AMI without stopping them. The Replace Root Volume helps patch the guest operating system and applications but still triggers a reboot of the instance.

The Replace Root Volume option allows developers to patch software quickly without having to perform instance store data backups or replication. Changing the AMI of a running instance will update applications and the operating system but will retain the instance store data, networking, and IAM configuration. An improvement on replacing root volumes using a snapshot, the new option can help developers with stateful workloads, simplifying the operating system’s patching and improving the deployment’s security.
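Using the AWS CLI, replacing the root volume from an updated AMI is a single call; both IDs below are placeholders:

```shell
# Replace the root volume of a running instance with the root volume of an
# updated AMI. The instance reboots but keeps instance store data,
# networking, and IAM configuration.
aws ec2 create-replace-root-volume-task \
  --instance-id i-0123456789abcdef0 \
  --image-id ami-0123456789abcdef0
```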

Frank Fioretti, principal infrastructure architect at Huron Consulting Group, tweets:

This seems more like orchestration/automation than anything new really (…) For those using an Instance Store I can see the benefit in the event they want to swap out their root volume and maintain the instance store data.

One option of the new API is to restore a root volume to its launch state, with the replacement volume automatically restored from the snapshot that was used to create the initial volume during the launch. The replacement volume gets the same type, size, and delete on termination attributes as the original root volume. Jason Axley, principal security engineer at Amazon, tweets:

This makes using D.I.E. (Distributed Immutable Ephemeral) paradigm for cloud security way easier for legacy EC2: replace root volume by reverting to launch state.
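Reverting to launch state, as described above, can be sketched as a call with no snapshot or AMI specified (the instance ID is a placeholder):

```shell
# With no snapshot or AMI given, the task restores the root volume from
# the snapshot that was used to create it at launch.
aws ec2 create-replace-root-volume-task --instance-id i-0123456789abcdef0
```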

According to the documentation, the EC2 instance remains on the same physical host, retaining its public and private IP addresses and DNS name. All network interfaces remain associated with the instance, and all pending network traffic is flushed when the instance becomes available.

Corey Quinn, cloud economist at The Duckbill Group, comments in his newsletter:

Okay, this is awesome for a number of use cases. Sadly, it requires the instance to reboot quickly, but other than that it’s way more streamlined. Some people are going to hate this because it’s treating an instance as a pet instead of cattle, but… well, my development instance is a pet just as your laptop probably is to you.

A successful replacement task transitions through three states: pending, while the replacement volume is being created; in-progress, while the original volume is detached and the replacement volume attached; and succeeded, when the process completes and the instance is available again.
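These states can be observed from the CLI; the instance ID below is a placeholder:

```shell
# Monitor the state of replacement tasks for an instance; TaskState moves
# through pending, in-progress, and succeeded.
aws ec2 describe-replace-root-volume-tasks \
  --filters Name=instance-id,Values=i-0123456789abcdef0 \
  --query 'ReplaceRootVolumeTasks[].TaskState'
```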

Replacing a root volume using an AMI will not change the encryption status of the root volume. If the AMI has multiple block device mappings, only the root volume of the AMI is used and the other volumes are ignored. If the instance supports the Nitro Trusted Platform Module (NitroTPM), the NitroTPM data for the instance is reset and new keys are generated.

The Replace Root Volume API is available in all AWS regions using the console, CLI, or SDKs. If performed using the AWS console, the new functionality is available in the new console only.



How Slack Engineers Addressed their Most Common Mobile Development Pain Points

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

In a rather detailed article, Slack engineers Ahmed Eid and Arpita Patel provide an interesting peek into the processes they adopted over the years to improve developer experience in a number of distinct areas, and the tools they used to that end.

Developer experience at Slack is the responsibility of a dedicated team of eight people, formed in response to growing costs as the organization and its development team grew. Among the areas of the development process that incurred the highest costs, Slack engineers focused on merge conflicts, long-running CI jobs, flaky tests, and CI infrastructure failures.

While developers can learn to resolve some of these issues themselves, the time spent and the cost incurred are not justifiable as the team grows. A dedicated team that can focus on these problem areas and identify ways to make development teams more efficient ensures that developers can maintain an intense product focus.

Estimated at a yearly cost of $2,400,000 per 100 developers, merge conflicts were the single most expensive pain point. They resulted from Xcode project merge conflicts, concurrent merges to main, and a lengthy pull request review process.

Xcode project files are notoriously hard to merge without incurring multiple conflicts. To address this problem, Slack engineers used XcodeGen to generate .xcodeproj files from a YAML file, a much more merge-friendly format.
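The idea can be sketched with a minimal, hypothetical XcodeGen spec; the project and target names below are invented for illustration and are not Slack's actual configuration:

```shell
# Minimal XcodeGen spec describing one iOS application target.
cat > project.yml <<'EOF'
name: DemoApp
targets:
  DemoApp:
    type: application
    platform: iOS
    sources: [Sources]
EOF

# Regenerate the .xcodeproj from the YAML spec; the generated project can
# be kept out of version control, avoiding .xcodeproj merge conflicts.
xcodegen generate
```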

Multiple concurrent merges to main bring an increased risk of merge conflicts, halting the merging of additional PRs until the conflict is resolved. For this, Slack adopted Aviator to queue all PRs and process them one by one. Instead of merging a PR into main directly, Aviator attempts to merge main into the developer branch. If that step breaks main, the PR is rejected and the author notified.

Finally, to speed up the pull request lifecycle, Slack engineers found it useful to introduce timed alerts for PR assignments, comments, and approvals, as well as direct messages for successful builds, including the option to merge the PR without leaving Slack. To accomplish all of this, they created their own GitHub bot, called MergeBot.

MergeBot has helped shorten the pull request review process and keep developers in flow. It is yet another example of how saving just five minutes of developer time can save ~$240,000 per year for a 100-developer team.

Luckily, GitHub supports a similar feature, called scheduled reminders, although it does not provide one-click merge from the message itself.

Improving the PR/merge process was not the only action taken at Slack to improve developer experience. Another area that incurred high costs was testing and failures in their CI infrastructure. On the first count, the solution was parallel test execution, along with a strategy to run only the tests strictly required for a given PR, based on the PR diff. On the second count, BuildKite proved effective at increasing CI-infrastructure reliability.

According to Slack, improving developer experience both made developers happier and reduced overall development costs. If you are interested in the full details of how Slack achieved that, do not miss the original article.
