Mobile Monitoring Solutions


GitHub Actions API Released Into Public Beta

MMS Founder
MMS Matt Campbell

Article originally posted on InfoQ. Visit InfoQ

GitHub announced the release into public beta of their Actions API. The Actions API can be used to manage GitHub Actions via a REST API. Endpoints available within the API allow for managing artifacts, secrets, runners, and workflows.

GitHub Actions allow for automating workflows based on repository events such as a push, issue creation, or the creation of a new release. These workflows can be created either via a visual editor or by writing YAML. GitHub released CI/CD for GitHub Actions in August, allowing for building and deploying software directly within GitHub without the need for third-party services.

The workflows API allows for viewing workflows associated with a repository. This includes listing all the workflows or getting a specific workflow. For example, the request:

curl -u username:token \
  "https://api.github.com/repos/my-org/hello-world/actions/workflows/72844"

will return the following details about workflow 72844 within the hello-world repo:

{
  "id": 72844,
  "node_id": "MDg6V29ya2Zsb3cxNjEzMzU=",
  "name": "CI",
  "path": ".github/workflows/blank.yml",
  "state": "active",
  "created_at": "2020-01-08T23:48:37.000-08:00",
  "updated_at": "2020-01-08T23:50:21.000-08:00",
  "url": "https://api.github.com/repos/my-org/hello-world/actions/workflows/72844",
  "html_url": "https://github.com/my-org/hello-world/blob/master/.github/workflows/72844",
  "badge_url": "https://github.com/my-org/hello-world/workflows/CI/badge.svg"
}
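The fields in this response are straightforward to consume programmatically. As a minimal sketch (assuming the JSON above has already been fetched), extracting a few fields in JavaScript:

```javascript
// Parse the workflow response shown above and pull out a few fields.
const body = `{
  "id": 72844,
  "name": "CI",
  "path": ".github/workflows/blank.yml",
  "state": "active",
  "badge_url": "https://github.com/my-org/hello-world/workflows/CI/badge.svg"
}`;

const workflow = JSON.parse(body);
console.log(`${workflow.name} (${workflow.state}): ${workflow.path}`);
// CI (active): .github/workflows/blank.yml
```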

Details from a run of a workflow can be accessed through the workflow runs API. A workflow run within GitHub Actions is an instance of a workflow triggered by its configured event. This API provides access to view runs, review the logs for a run, re-trigger a workflow, and cancel active runs. These endpoints expose the status of runs, including the overall outcome and duration. The workflow jobs API provides access to details about the individual jobs, including log files.
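The run-related routes all hang off a common /actions prefix on the repository. A small sketch of how these endpoint URLs fit together (the route shapes below match the public REST API; the actionsUrl helper itself is just illustrative):

```javascript
// Build workflow-run URLs under the common /actions prefix. The helper name
// is illustrative; the route shapes are the documented REST endpoints.
function actionsUrl(owner, repo, path = '') {
  return `https://api.github.com/repos/${owner}/${repo}/actions${path}`;
}

const listRuns = actionsUrl('my-org', 'hello-world', '/runs');           // GET: list workflow runs
const runLogs  = actionsUrl('my-org', 'hello-world', '/runs/42/logs');   // GET: redirect to log archive
const rerun    = actionsUrl('my-org', 'hello-world', '/runs/42/rerun');  // POST: re-trigger the run
const cancel   = actionsUrl('my-org', 'hello-world', '/runs/42/cancel'); // POST: cancel an active run

console.log(listRuns);
// https://api.github.com/repos/my-org/hello-world/actions/runs
```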

The API route for accessing the workflow run logs provides a redirect URL to download an archive of the log files. This link is accessible to anyone with read access to the repository and expires after one minute. The URL is provided in the Location header of the response. As an example, the archive can be downloaded in a single curl request by enabling verbose output to see the header and using -L to follow the redirect to the Location value:

curl -v -L -u username:token -o logs.zip \
"https://api.github.com/repos/my-org/my-repo/actions/runs/30209828/logs"

With the artifacts API, users are able to download, delete, and retrieve information about workflow artifacts. Artifacts allow for sharing data between jobs in a workflow and for persisting data after a workflow finishes.

While GitHub Actions run within Docker containers on GitHub’s servers by default, there is also support for self-hosted runners. Self-hosted runners allow for executing workflows on servers hosted within your own environment. The self-hosted runners API allows for listing the runners within a repository, getting a specific runner, removing a runner, and creating the appropriate tokens for authorizing the runner. With this it is now possible to automate the creation and clean-up of runners.

To facilitate working with the new API endpoints, GitHub has added additional data to the runner context. Each Actions run now includes a run_id and run_number. These values can be integrated into workflow scripts to more easily interact with the new API endpoints.
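A workflow step could, for instance, feed these values back into the API for its own run. A sketch (assuming the value is exposed in the workflow as github.run_id and that the built-in GITHUB_TOKEN secret has sufficient scope):

```yaml
steps:
  - name: Fetch details for the current run
    run: |
      curl -u ${{ github.actor }}:${{ secrets.GITHUB_TOKEN }} \
        "https://api.github.com/repos/${{ github.repository }}/actions/runs/${{ github.run_id }}"
```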

The Actions API is available for authenticated users, OAuth Apps, and GitHub Apps. Access tokens require the repo scope on private repositories and the public_repo scope on public repositories.

The GitHub Actions API is available to all users who currently have access to GitHub Actions. GitHub Actions is available with most of GitHub’s pricing plans; however, Actions is not available for repositories owned by accounts that are using the legacy per-repository plans. While the API is in beta, GitHub will be posting changes to its blog based on developer feedback.



ZetZ is a Formally Verified Dialect of C

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

ZetZ, or ZZ for short, is a Rust-inspired C dialect that is able to formally verify your code by executing it symbolically at compile time in a virtual machine.

ZZ is targeted at software that runs close to the hardware, but it can also be used to build cross-platform, ANSI-C compliant libraries. In practice, ZZ works as a transpiler to C code, which is then fed into any standard C compiler. In contrast to how many modern languages approach safety, ZZ does not preclude or limit features that are deemed “unsafe”, such as raw pointer access. Rather, it uses static single assignment form (SSA) to prove at compile time that your code is free of undefined behaviour, using an SMT solver such as Yices 2 or Z3.

The following snippet shows what ZZ code looks like:

using <stdio.h>::{printf}

export fn main() -> int {
    let r = Random{
        num: 42,
    };
    printf("your lucky number: %u\n", r.gen());
    return 0;
}

struct Random {
    u32 num;
}

fn gen(Random *self) -> u32 {
    return self->num;
}

To prevent any undefined behaviour from leaking into your code, ZZ requires all memory access to be correct. For example, indexing into an array requires you to tell the compiler the accessed index is valid using the where keyword, which allows you to specify a constraint the caller has to satisfy:

fn bla(int * a)
    where len(a) >= 3
{
    a[2];
}

Analogously, the model keyword allows you to specify a constraint on the behaviour of a function:

fn bla(int a) -> int
    model return == 2 * a
{
    return a * a;           //-- This would fail here
}
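ZZ discharges where and model clauses statically in the SMT solver rather than at run time, but the contracts they express are easy to picture as runtime checks. A rough JavaScript analogy (illustrative only; ZZ would reject the offending code at compile time instead of failing at run time):

```javascript
// Runtime analogy (illustrative only) for ZZ's `where` and `model` clauses.
function bla(a) {
  // `where len(a) >= 3`: the caller must supply at least three elements.
  if (a.length < 3) throw new RangeError('where clause violated: len(a) >= 3');
  return a[2];
}

function doubled(a) {
  // `model return == 2 * a`: the result must satisfy the stated model.
  const result = a * a; // the buggy body from the example above
  if (result !== 2 * a) throw new Error('model clause violated: return == 2 * a');
  return result;
}

console.log(bla([10, 20, 30])); // 30
// doubled(3) throws, just as ZZ's prover rejects `a * a` against the model
```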

To make all of this possible, ZZ imposes a number of constraints on C syntax to make it more amenable to formal proving. For example, ZZ enforces east-constness and enables function pointers through function types.

InfoQ has spoken with ZZ creator and maintainer Arvid Picciani to learn more about ZZ.

InfoQ: Could you describe your background and your current professional endeavours?

Picciani: My background is in large scale hardware, specifically I worked for Nokia on all of its Linux phones.

Currently, I am the founder of devguard.io. We do cloudless IoT management tools for data sovereignty.

The killer feature that gets developers excited is carrier shell, which allows you to open a remote shell to millions of devices without configuring any network in between. Just with a single ed25519 identity, you can talk to any device behind any physical network, of course peer-to-peer encrypted with us neither seeing nor storing any data.

We shipped about a quarter million devices with Rust on them, but want to expand to lower-level embedded stuff that Rust cannot and does not want to target.

InfoQ: Could you tell us how ZZ was born?

Picciani: In order to offer our products for extremely low-level hardware (ESP32, 8-bit AVR), we needed tools that enable extreme memory efficiency and portability. But coming from the Rust language, transitioning back to C felt a little sad. Everyone here really loves Rust, and C is full of pitfalls.

At the same time, lots of commercial tools exist for C that we need but are not available in Rust, such as compilers for obscure chips and regulatory approval processes.

ZZ is the result of 6 months of research into what current computer science can do to make programming better. Specifically, it is inspired by Microsoft research such as F* and Z3; it just transfers those ideas to a more pragmatic language. We use F* at devguard as well, specifically for cryptography. Alloy is also worth mentioning, as it inspired the idea of counter-example based checking.

InfoQ: What are the main benefits ZZ provides and in which scenarios? Can it be considered a general-purpose language or is it more of a niche language?

Picciani: While Rust has a magic bullet for making programming safer (just don’t allow a mutable borrow twice), ZZ is really just a layer on top of C. You can do whatever unsafe stuff you want, as long as you additionally write a formal proof for it. It’s a very different way of programming, more like pair programming with a math professor who is constantly throwing corner cases at you under which your code will break.

Formal proofs can get very, very boring and long, and as such ZZ isn’t really a generic programming language. You totally can write a web service in ZZ, and we in fact do, but it will never ever compete with the quick-to-go nature of the likes of NodeJS. That may change when we integrate Alloy directly into the state machine generation, at which point you might want to try implementing security-critical web applications in ZZ.

InfoQ: How do you assess ZZ’s current level of maturity?

Picciani: Either way, ZZ is very new and the two big ideas (a transpiler for C, and symbolic verification) still have to be fleshed out in detail. It already works today and I encourage people to try and build stuff with it, but be prepared for major compiler bugs and big language changes. For commercial code I would encourage people to use ZZ sparingly, side by side with Rust; specifically, ZZ is good for interfacing with C system code.

My company, devguard.io, will ship a new product for cloudless automotive telemetry sometime around April which is mostly built in ZZ on top of the Zephyr OS and a Nordic nRF91, so we’re committed to it. The biggest open-source code base in ZZ is probably the ZZ branch of devguard carrier. As soon as that is done, I feel like ZZ will be ready for production.

To use ZZ, you need to install Rust for bootstrapping and provide a .toml file, used by the Rust package manager Cargo, to compile your code.



How Leaders Can Foster High-performing Teams

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

To foster high-performing teams, a leader can act as a coach, provide opportunities for ownership, and find out what motivates people. It also helps if leaders have powerful and meaningful conversations with team members and give them vocal feedback face to face.

InfoQ attended Stretch 2020, a leadership and management conference that took place on February 13 and 14 in Budapest, Hungary. InfoQ interviewed several speakers from this conference to explore what leaders can do to foster high-performing teams.

Ivett Ördög, a senior software engineer and creator of Lean Poker, suggested starting by defining what we actually mean when we talk about high-performing teams:

For me the more important question is what we mean by a high-performing team, and what we mean by “the team”? When a company struggles to deliver value, it’s rarely because the development team couldn’t churn out code faster.

She continued by explaining who should be on the team and what we should expect from high-performing teams:

For a high performing team, the team should include everyone from product to engineering and they should all work towards a single well-defined goal: what is the next small product increment that brings the most value within the next 1 or 2 weeks?

To lead high-performing teams, leadership qualities like being able to coach people and give feedback can make a difference, as Amy Tez, a communications trainer and speaker, explained:

A leader is a coach. That doesn’t mean they have to be nice, but they have to be truthful and constructive and kind so they help people forward – to help the company forward. And give regular constructive insight and coaching and vocal feedback. So that people feel heard and supported towards a common goal. People are still too anxious to talk face to face about whatever elephant is in the room. So problems continue to fester and grow – and before long, people lose motivation and resentment brews. Teams disintegrate and people leave.

Amber Vandenburg, director of human resources at Paradigm Shift, mentioned the importance of ownership for high-performing teams:

Leaders can foster high performing teams by providing more opportunities for ownership. Opportunities for ownership provide more space for innovation, creativity, experiments, and conversation, rather than complacency. I believe that in every industry and team there are opportunities for ownership in our processes, methods, projects, or roles. It is only when we trust our teams and empower them to take ownership that they can truly perform at their highest level.

Teams consist of individuals who can be motivated in different ways. Jurgen Appelo, CEO at Happy Melly, and Kate Wardin, senior engineering manager at Target and founder of Developer First Leadership, both explained why understanding such differences matters for leaders:

Jurgen Appelo: Leaders should figure out what motivates people and makes them happy. Engaged and satisfied team members are more productive, more eager to solve problems, and more creative at finding innovative solutions.

Kate Wardin: As leaders, we need to work to understand how each individual on our team derives meaning from their work. This enables us to effectively motivate and inspire team members and the collective team to reach their full potential. I also believe that diverse and inclusive teams are essential to the productivity and happiness of a team.

Appelo suggested that leaders should be an example for their teams and take action to increase motivation:

Literally thousands of practices are available to achieve motivation and happiness. It doesn’t matter where you begin. Just do something and lead by example. I started by inviting teams to my home for collaborative cooking, by sharing vacation photos with each other during lunch breaks, and by always being open and honest about what was great and what sucked and needed improvement. When team members see leaders actually lead, they will follow.

In this digital era, communication increasingly happens through chat tools and collaboration environments. Tez reminded us that leaders should also remain in direct contact with their employees:

Leaders need to start having much more powerful and meaningful conversations with everyone in the room. Move away from that computer, stop the incessant digital conversation, and actually talk to people face to face – have meaningful conversations. Ask your employees open questions, find out what’s working and what problems they have – individually and as a team. And attack the problem, never the person.



Building (And Re-Building) the Airbnb Design System – React Conf 2019

MMS Founder
MMS Bruno Couriol

Article originally posted on InfoQ. Visit InfoQ

Maja Wichrowska and Tae Kim, engineers at Airbnb, explained how Airbnb’s design system evolved its architecture and implementation in response to the business and technical challenges encountered.

Airbnb’s design system is driven by UX and product needs. In 2016, Airbnb Experiences were introduced publicly, and Airbnb applications and sites moved their focus away from just accommodations to the whole end-to-end trip. At the time, the Airbnb codebase was suffering from three technical challenges, enumerated by Wichrowska as fragmentation, complexity, and performance.

The fragmentation resulted from engineers using any framework or library they were most familiar with. That meant a mix of JavaScript, CoffeeScript, jQuery, React, CSS, Sass and more. The complexity was driven by the growth of the codebase and the styling needs. Engineers were continually adding CSS files to override existing files. Wichrowska gave the following example of CSS files used to style buttons:

core.scss

.button {
  background: #ff5a5f;
  color: #ffffff;
}

custom_page.scss

.button {
  background: #008489;
}

yet_another_custom_page.scss

.button {
  background: #a61d55;
  padding: 1px 1px;
  display: block;
}

The root cause behind this additive behaviour was that it was difficult to track whether a CSS selector was still used by some part of the codebase. The problem was particularly acute when it came to positioning and layout. Removing existing selectors or files could change the page appearance in unpredictable ways, and tracking down the culprit selector after the fact could be an arduous task. Keeping the unused CSS, however, led to growing bundle sizes, which in turn led to slow page loads, to the point that performance became a key concern.

The fragmentation pain point was addressed by using only React for UI and component concerns. A Design Language System (DLS) was refined. The customization of components was achieved through components’ public interface (props, in the case of React). Consistency and predictability were achieved by requiring style customizations to be expressed as props. A button with an alpaca color would, for instance, require an isAlpaca prop rather than a style or className prop that could be freely customized.

Kim additionally explained that CSS-in-JS, by tightly coupling the component’s CSS to the component’s markup, solved the maintainability issues previously encountered. CSS-in-JS prevented style overrides, enabled theming and allowed shipping only critical CSS through a combination of lazy loading and tree shaking. This helped considerably with complexity and performance issues. As an Airbnb engineer expressed at the time, praising the DLS:

Consistency and reusability with things like responsiveness and reusability all taken care of. DLS is definitely the way forward.

Fast-forward to 2018. As DLS adoption grew and the range of Airbnb businesses continued to expand, pain points became more apparent. By 2018, the product portfolio had increased considerably, including new tiers of accommodation with market-specific branding, business travel, concierge services, peer spaces, and more. Those new products required their own layout, typography, use of images, and more.

Fragmentation reappeared. As the DLS had to be updated to include new style-customizing props, every update potentially meant going through a lengthy process of deciding whether the extra prop actually fit into the DLS in the first place and whether it followed the DLS guidelines. When pressed by incoming deadlines, engineers would, rather than resort to the design system’s components, quickly write their own, defeating the point of the design system in the first place. As a result, Kim witnessed the same component being written over and over because that was the more productive thing to do. This lowered the consistency and predictability of components, not only in styling but also in accessibility.

Furthermore, as developers added customizing props to components, the component codebase grew and featured convoluted, unwieldy logic. Kim mentioned that the button component grew over time from 1 KB to 33 KB minified, not including external libraries and with no tree shaking possible. Kim exclaimed:

This is a big number for a button.

Kim also scrolled through the <button> component codebase, which featured 30+ props in its public interface, each prop bringing its own configuration complexity. The ability to customize components was thus creating implementation, documentation, usage, and maintainability challenges. The complexity and performance pain points, previously solved in 2016, were surfacing anew.

In a subsequent iteration, Airbnb’s design system moved to a monochromatic design, making it easy, for instance, for different Airbnb businesses to add their own palette and differentiate themselves through the use of color. The design system also adopted a modular architecture (as favored by React) and a base + variant pattern. The base + variant pattern consists in isolating the core parts of a component that are not subject to change from the parts that are open to customization. This translated into a component file implementing the core behavior, structure, and appearance of a component (such as accessibility or default styling). The base component could later be extended for customization purposes, for example to modify its appearance. A customized component, however, does not allow style overrides beyond those that it already implements.

The Airbnb <button> core component thus featured callbacks to customize behavior (onClick, onHover props), and structural placeholders for components preceding or following the button’s label (renderLeading, renderTrailing props). A primary button is implemented in a primaryButton.jsx file. A secondary button is implemented in a secondaryButton.jsx file. Both primary and secondary buttons extend the base button, implemented in a baseButton.jsx file.
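Stripped of React specifics, the base + variant pattern can be sketched in plain JavaScript (the function names mirror the file names above; the styling values reuse the colors from the CSS example earlier and are otherwise illustrative):

```javascript
// baseButton: core behaviour, structure, and defaults; closed to style overrides.
function baseButton({ label, background }) {
  return { tag: 'button', label, style: { background, padding: '8px 16px' } };
}

// Variants fix the customizable parts instead of exposing free-form
// style or className props.
const primaryButton = ({ label }) => baseButton({ label, background: '#ff5a5f' });
const secondaryButton = ({ label }) => baseButton({ label, background: '#008489' });

console.log(primaryButton({ label: 'Book' }).style.background); // #ff5a5f
```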

The chosen approach restores the capability of customizing a component’s behavior with the full flexibility of the JavaScript language, rather than being constrained to a hard-to-change prop interface. At the same time, while base components can be freely extended, constraints are imposed on variant components (no overrides), ensuring consistency and predictability. The approach also reduces the bundle size, as developers only import the components they use and do not pay a bundle-size tax for features they do not use.

The full talk is available on React Conf’s site and contains further code snippets and detailed explanations. React Conf is the official Facebook React event; the 2019 edition was held in Henderson, Nevada, on October 24 and 25.



Storybook 5.3 Released, Targets Design Systems, Supports Web Components

MMS Founder
MMS Bruno Couriol

Article originally posted on InfoQ. Visit InfoQ

Michael Shilman, a key Storybook contributor, recently announced the release of Storybook 5.3. The new version of Storybook strives to allow developers to build production design systems faster. Storybook users can now document their components with MDX, have a documentation site automatically generated, and integrate with popular design tools like Sketch, Figma, Adobe XD and more. Storybook 5.3 also now officially supports web components.

Shilman explained in Storybook 5.3’s release note the motivation behind the recent evolution of Storybook:

Storybook began with a simple goal: help developers build UI components and their key states.
(…)
In 2020, it’s not enough to provide great tooling for building and testing components. Today’s modern frontend teams also need to document and package their components into reusable design systems.

Users of the new Storybook can now write stories and docs in MDX. Storybook 5.2 had already included the DocsPage tool to auto-generate documentation from existing stories; Storybook 5.3 goes a step further and allows developers to write fully customized documentation. Shilman explained:

MDX enables you to customize Storybook’s auto-generated documentation with your own components, mix & match DocBlocks, and loop in non-technical teammates.

The recently open-sourced ING design system is an example of a design system whose documentation relies on Storybook’s documentation features. The displayed documentation for the Tabs component originates from a 20-lea-tabs.stories.mdx file, whose first lines are as follows:

import '../lea-tabs.js';
import '../lea-tab.js';
import '../lea-tab-panel.js';

<Meta title="Intro/Tabs Example" parameters={{ component: 'lea-tabs' }} />

# lea Tabs

> This is not a real implementation!
>
> It is an example of how you can reuse the functionality of Lion to create your own Style System

`lea-tabs` implements tabs view to allow users to quickly move between a small number of equally important views.

<Preview>
  <Story name="default">
    {html`
      <lea-tabs>
        <lea-tab slot="tab">Info</lea-tab>
        <lea-tab-panel slot="panel">
          Info page with lots of information about us.
        </lea-tab-panel>
        <lea-tab slot="tab">Work</lea-tab>
        <lea-tab-panel slot="panel">
          Work page that showcases our work.
        </lea-tab-panel>
      </lea-tabs>
    `}
  </Story>
</Preview>

## How to use

### Installation

(...)

Storybook 5.3’s Storybook Docs currently covers Vue, Angular, Ember, and Web components, with more frameworks planned in the future, as Storybook Docs strives to be a universal tool for UI docs.

Storybook 5.3 also shipped with new ways to connect Storybook with external design tools through an ecosystem of addons. Shilman praised the extension of the current ecosystem of addons:

First, InVision built Storybook support into Design System Manager. Now, there’s an addon for every major design tool, including Sketch, Figma, Abstract, and Adobe XD.

Storybook 5.3 now officially supports web components through the new @storybook/web-components package. The package strives to provide isolated component development and documentation for all web components projects, and follows the Open-WC recommendations.

The learnstorybook.com website serves as extensive documentation for the design-system-related features of Storybook. The site includes specific parts dealing with design systems workflow, documentation and distribution, in addition to the traditional section on isolated component testing. Documentation for the new @storybook/web-components package is available in the GitHub repository.

Storybook is used by front-end development teams at Airbnb, Lyft, Slack, Twitter, and more companies. It is used to build design systems like Shopify Polaris, IBM Carbon, Salesforce Lightning and more. Storybook supports popular frameworks like React, Vue, Svelte, Ember, Angular, React Native and more. Storybook also supports web components and vanilla HTML. Storybook is localized in 5 languages.

Storybook is available under the MIT open-source license. Contributions are welcome via the Storybook GitHub project and should follow Storybook’s contribution guidelines and code of conduct.



OpsRamp Introduces AI-Driven Suggestions

MMS Founder
MMS Helen Beal

Article originally posted on InfoQ. Visit InfoQ

OpsRamp, a SaaS platform for hybrid infrastructure discovery, monitoring, management, and automation, has launched OpsQ Recommend Mode, a capability for incident remediation. OpsQ Recommend Mode provides predictive analytics to digital operations teams with the goal of reducing mean time to resolution (MTTR).

OpsQ is OpsRamp’s event management, alert correlation, and remediation engine. New AIOps capabilities help teams ingest, analyse, and extract insights from events for event and incident management. The OpsQ Bot works with the new Recommend Mode to auto-suggest actions alongside alert escalation policies. Other new artificial intelligence for IT operations (AIOps) capabilities in the release include visualisation of alert similarity patterns and new alert stats widgets that provide transparency into machine-learning-driven decisions.

The alert seasonality patterns feature in OpsQ can learn which environment alerts recur at a predictable frequency (a seasonality pattern) and automatically suppress them. Teams can visualise the seasonality patterns that OpsQ has learned, which helps them understand the auto-suppress decisions OpsQ makes and trace recurring alert patterns to underlying activity. The new Alert Stats widget shows the total number of raw events, correlated alerts, inference alerts, auto-ticketed alerts, and auto-suppressed alerts handled by the OpsQ event management engine. The widget shows how OpsQ reduces event volume at each stage, so that IT teams can build confidence in machine-learning-based techniques for alert optimisation.

The release drives full-stack visibility for multi-cloud workloads with nineteen new cloud monitoring integrations (added to the existing one hundred and twenty), including: AWS Transit Gateway, AppSync, CloudSearch, and DocumentDB; Azure Application Insights, Traffic Manager, Virtual Network, Route Table, Virtual Machine Scale Sets, SQL Elastic Pool, and Service Bus; and GCP Cloud Bigtable, Cloud Composer, Cloud Filestore, Firebase, Cloud Memorystore for Redis, Cloud Run, Cloud TPU, and Cloud Tasks. In addition to AWS cloud topology maps, OpsRamp now offers topology discovery and mapping for Azure and GCP. Teams can apply cloud topology maps to analyse the impact of changes in their multi-cloud environments. Cloud topology is also used in OpsQ’s event correlation engine to increase the accuracy of its machine learning models.

OpsRamp offers agentless discovery for Linux and VMware compute, network, and storage resources, and the new release introduces agentless discovery and monitoring for Windows compute resources. OpsRamp’s enhanced synthetic monitoring provides insights and analysis for troubleshooting multi-step transactions. Application owners can break down each synthetic transaction and gain visibility into the performance of each step in a web transaction. InfoQ spoke with Michael Fisher, product manager at OpsRamp, about the new release.

InfoQ: What are some examples of typical seasonality patterns teams experience?

Fisher: Seasonality patterns are frequently rooted in human routines. These routines are generally expressed in daily, weekly, monthly, or yearly patterns. For our customers, the most common patterns are daily or weekly. For example, they might see high spikes in network traffic when their end users log in to the network in the morning, or increased disk reads/writes when they are performing their weekly backup jobs on their virtual machines. OpsRamp has the ability to learn these seasonal patterns and suppress alerts that occur seasonally, thus reducing false-positive alerts and alert fatigue.
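As a toy illustration of the idea (not OpsRamp’s actual model), a suppression check keyed on hour-of-day recurrence might look like:

```javascript
// Toy seasonality check (illustrative only): treat an alert as seasonal if the
// same alert key has fired at the same hour of day on at least 3 distinct days.
function isSeasonal(history, alert, minRecurrences = 3) {
  const hour = new Date(alert.timestamp).getUTCHours();
  const days = new Set(
    history
      .filter(h => h.key === alert.key && new Date(h.timestamp).getUTCHours() === hour)
      .map(h => new Date(h.timestamp).toISOString().slice(0, 10))
  );
  return days.size >= minRecurrences;
}

const history = [
  { key: 'backup-io', timestamp: '2020-03-01T02:05:00Z' }, // nightly backup job
  { key: 'backup-io', timestamp: '2020-03-02T02:07:00Z' },
  { key: 'backup-io', timestamp: '2020-03-03T02:04:00Z' },
];
console.log(isSeasonal(history, { key: 'backup-io', timestamp: '2020-03-04T02:06:00Z' })); // true
```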

InfoQ: How does OpsRamp provide insights to Kubernetes, containers and microservices?

Fisher: OpsRamp has a variety of mechanisms to provide insight into Kubernetes environments. Native Kubernetes instrumentation allows teams to gain insight into the overall health of the cluster, down to the individual container runtimes. This monitoring visibility is coupled with our Kubernetes topology, which maps the cluster to its nodes and containers. This topological context is fed into our machine learning models to enhance correlation and alert deduplication, which aids site reliability engineers (SREs) when troubleshooting Kubernetes-related events.

InfoQ: How does OpsRamp handle security related incidents?

Fisher: OpsRamp provides the capability to monitor common firewalls, and other security-centric hardware, for overall health and performance. On top of this, OpsRamp has a generic webhook API framework which can be leveraged to ingest security events from various vendors; these are fed through OpsRamp’s correlation models for further analysis.

InfoQ: Does OpsRamp have any features that help teams perform blameless retrospectives post incident?

Fisher: OpsRamp’s native help desk, dashboarding, and reporting allow teams to track the lifecycle of an incident as it moves from creation to resolution.

InfoQ: If a team has a “we build it, we own it” mentality, how might this change the way in which OpsRamp is used?

Fisher: As an extensible platform, teams that seek to build their own custom monitoring, or integrations, are encouraged to do so. The strength in the OpsRamp platform is not only what we provide out of the box, but what we enable teams to do with the tools that we provide.

InfoQ: Can teams extract business metrics relating to web based customer journeys from the tool?

Fisher: OpsRamp provides several synthetic offerings, from monitoring the round-trip time (RTT) between an SMTP server and OpsRamp’s globally located data centres, to creating a synthetic transaction that models a user’s flow within your application. These synthetic options give businesses visibility into their critical applications while helping them stay ahead of outages that may affect their end users’ experience.

InfoQ: What is a machine learning model?

Fisher: For OpsRamp, a machine learning model represents our ability to ingest data and then interpret it using features such as topology relationships, resource attributes and time. OpsRamp has several models, each with a different intended purpose and degree of automation. For example, OpsRamp’s new recommend mode keeps businesses informed of the machine learning model’s proposed action (“analyst in the loop”) and leaves them as the final decision maker on whether, for example, an alert should be suppressed or turned into an incident. Recommend offers an opinion on how to handle alerts but leaves it to the operator to “push the button”, providing a blend of automated response and operator control.

InfoQ: How does OpsRamp perform topology discovery and visualisation?

Fisher: OpsRamp’s topology discovery spans L2 to L7. At the bottom of the stack, OpsRamp leverages discovery protocols (such as CDP, LLDP and OSPF) to map the relationships between infrastructure components. Moving up the stack, OpsRamp can also model business applications, public cloud services and Kubernetes workloads. In addition to visualising these services, OpsRamp’s machine learning models can train on the discovered relationship data to more accurately correlate and deduplicate alerts, which can reduce alert fatigue and MTTR for operators.
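The way discovered topology can feed alert correlation can be sketched as follows. This is an illustrative model, not OpsRamp's implementation: it builds an adjacency map from neighbor pairs (the kind of data a CDP/LLDP poll returns) and treats alerts from resources within a few hops of each other as correlation candidates.

```python
from collections import defaultdict, deque

def build_topology(links):
    """Build an undirected adjacency map from discovered neighbor pairs."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def related(graph, node_a, node_b, max_hops=2):
    """True if two resources are within `max_hops` of each other, making
    their alerts candidates for correlation into a single incident."""
    frontier, seen = deque([(node_a, 0)]), {node_a}
    while frontier:
        node, dist = frontier.popleft()
        if node == node_b:
            return True
        if dist == max_hops:
            continue  # do not expand beyond the hop limit
        for nxt in graph[node] - seen:
            seen.add(nxt)
            frontier.append((nxt, dist + 1))
    return False
```

Two servers hanging off the same discovered switch would be flagged as related, so simultaneous alerts from both could be deduplicated into one incident rather than paging an operator twice.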

Learn more about the OpsRamp Winter 2020 release here.

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.



PyTorch 1.4 Release Introduces Java Bindings, Distributed Training

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

PyTorch, Facebook’s open-source deep-learning framework, announced the release of version 1.4. This release, which will be the last version to support Python 2, includes improvements to distributed training and mobile inference and introduces support for Java.

This release follows the recent announcements and presentations at the 2019 Conference on Neural Information Processing Systems (NeurIPS) in December. For training large models, the release includes a distributed framework to support model-parallel training across multiple GPUs. Improvements to PyTorch Mobile allow developers to customize their build scripts, which can greatly reduce the storage required by models. Building on the Android interface for PyTorch Mobile, the release includes experimental Java bindings for using TorchScript models to perform inference. PyTorch also supports Python and C++; this release will be the last to support Python 2 and C++11. According to the release notes:

The release contains over 1,500 commits and a significant amount of effort in areas spanning existing areas like JIT, ONNX, Distributed, Performance and Eager Frontend Improvements and improvements to experimental areas like mobile and quantization.

Recent trends in deep-learning research, particularly in natural-language processing (NLP), have produced larger and more complex models such as RoBERTa, with hundreds of millions of parameters. These models are too large to fit within the memory of a single GPU, but a technique called model-parallel training allows different subsets of the model's parameters to be handled by different GPUs. Previous versions of PyTorch have supported single-machine model parallelism, which requires that all the GPUs used for training be hosted in the same machine. By contrast, PyTorch 1.4 introduces a distributed remote procedure call (RPC) system that supports model-parallel training across many machines.
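The data flow of model-parallel training can be illustrated with a minimal sketch. This is not the PyTorch RPC API itself, just the core idea it enables: the model's parameters are partitioned into stages owned by different workers, and only activations, never parameters, cross the boundary between them.

```python
class Stage:
    """One partition of a model, owning its own subset of parameters.

    In real model-parallel training each stage would live on a different
    GPU (or, with PyTorch 1.4's RPC framework, a different machine);
    here both run in one process purely to show the data flow.
    """
    def __init__(self, weight, bias):
        self.weight, self.bias = weight, bias  # this stage's parameters

    def forward(self, x):
        # A toy "layer": elementwise affine transform of the activations.
        return [self.weight * v + self.bias for v in x]

# The full model is split so that no single worker holds all parameters.
stage_on_worker0 = Stage(weight=2.0, bias=0.0)  # first half of the layers
stage_on_worker1 = Stage(weight=1.0, bias=3.0)  # second half of the layers

def forward_pipeline(x):
    # Only activations cross the device/machine boundary between stages.
    return stage_on_worker1.forward(stage_on_worker0.forward(x))
```

With PyTorch's RPC system, the second `forward` call would become a remote call to another machine, but the shape of the computation is the same.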

After a model is trained, it must be deployed and used for inference or prediction. Because many applications are deployed on mobile devices with limited compute, memory, and storage resources, the large models often cannot be deployed as-is. PyTorch 1.3 introduced PyTorch Mobile and TorchScript, which aimed to shorten end-to-end development cycle time by supporting the same APIs across different platforms, eliminating the need to export models to a mobile framework such as Caffe2. The 1.4 release allows developers to customize their build packages to only include the PyTorch operators needed by their models. The PyTorch team reports that customized packages can be “40% to 50% smaller than the prebuilt PyTorch mobile library.” With the new Java bindings, developers can invoke TorchScript models directly from Java code; previous versions only supported Python and C++. The Java bindings are only available on Linux.

Although rival deep-learning framework TensorFlow ranks as the leading choice for commercial applications, PyTorch has the lead in the research community. At the 2019 NeurIPS conference in December, PyTorch was used in 70% of the presented papers that cited a framework. Recently, both Preferred Networks, Inc (PFN) and research consortium OpenAI announced moves to PyTorch. OpenAI claimed that “switching to PyTorch decreased our iteration time on research ideas in generative modeling from weeks to days.” In a discussion thread about the announcement, a user on Hacker News noted:

At work, we switched over from TensorFlow to PyTorch when 1.0 was released, both for R&D and production… and our productivity and happiness with PyTorch noticeably, significantly improved.

The PyTorch source code and release notes for version 1.4 are available on GitHub.
 



Equifax Hackers Charged With Crime

MMS Founder
MMS Erik Costlow

Article originally posted on InfoQ. Visit InfoQ

The United States Department of Justice has identified and charged four members of the Chinese military with hacking Equifax to steal private data on over 145 million individuals. The theft had previously been suspected to be the work of a state actor, as the stolen information has never been found for sale on the dark web.

China has since denied responsibility for the attack through foreign ministry spokesperson Geng Shuang: “The Chinese government, military and relevant personnel never engage in cyber theft of trade secrets.”

A crucial distinction for developers and architects is that attribution often does not matter. Once an application is hacked and data is stolen, that data is gone, and the organization that was hacked is financially responsible for any clean-up. The primary areas of focus for software are detection, knowing when and where security issues or CVEs exist, and prevention, protecting against and/or patching those security issues.

The hacking quartet allegedly compromised Equifax through the company’s failure to identify and patch a known vulnerability in the Apache Struts library. Because the library in the running application was never updated, the hackers were able to access the system, move laterally across other environments, and ultimately exfiltrate the information. The Apache Software Foundation issued an official statement noting that a security patch had been available but was never applied, and included recommendations for other organizations that do not wish to be hacked:

  • Understand which supporting frameworks and libraries are used in your software products and in which versions. Keep track of security announcements affecting these products and versions.
  • Establish a process to quickly roll out a security fix release of your software product once supporting frameworks or libraries needs to be updated for security reasons. Best is to think in terms of hours or a few days, not weeks or months. Most breaches we become aware of are caused by failure to update software components that are known to be vulnerable for months or even years.

Two sets of tools help development teams identify where these types of vulnerabilities exist in their code. By using both Software Composition Analysis and Runtime Application Self-Protection, development teams can more quickly identify flaws and roll out updates, minimizing the window of exposure. The four members of China’s military carried out their heist in the window after the vulnerability became known:

  • CVE-2017-5638 was publicly disclosed on March 10, 2017.
  • The breach occurred in May, approximately two months after public disclosure.
  • Equifax discovered the breach towards the end of July.
  • Official statements were released by Equifax and Apache in September.

If the application using the insecure library had been patched following the vulnerability disclosure, this attack would not have worked and determined hackers would have required more effort to locate a different vulnerable path.

Software Composition Analysis (SCA) is a technique that helps teams track where different libraries are used and where CVEs are located. As new vulnerabilities are discovered, these systems will leverage this information to pinpoint which applications need to update which libraries in order to avoid exploitation. Development teams can then update the application’s libraries and roll a patched version into production, typically through CI/CD controls.
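The core check an SCA tool performs can be sketched as follows. The advisory data below is illustrative (CVE-2017-5638 is real, but the version list is not an exhaustive record of affected releases); real tools consume curated vulnerability feeds such as the NVD.

```python
# A toy advisory database: {package: [(vulnerable_version, cve_id), ...]}.
# Real SCA tools pull this from vulnerability feeds; the entries here are
# illustrative only.
ADVISORIES = {
    "struts2-core": [("2.5.10", "CVE-2017-5638")],
}

def scan_dependencies(dependencies):
    """Return (package, version, cve) triples that need patching.

    `dependencies` maps package names to the exact version in use,
    as a dependency manifest would list them.
    """
    findings = []
    for package, version in dependencies.items():
        for vulnerable_version, cve in ADVISORIES.get(package, []):
            if version == vulnerable_version:
                findings.append((package, version, cve))
    return findings
```

Wiring a scan like this into CI/CD means a newly published advisory immediately flags every application still shipping the vulnerable version.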

The other technique, Runtime Application Self-Protection (RASP), offers a mitigating control for organizations that want an additional layer of defense or cannot upgrade the software in a timely manner. Unlike traditional Web Application Firewall (WAF) defenses, which are unaware of what they defend, RASP works inside the application to defend APIs with the context of what it is defending. Many techniques can bypass WAFs, and attacks such as the deserialization exploit of CVE-2017-5638 are not typically caught by them. Deserialization attacks work by exploiting the object hierarchy to make the runtime execute arbitrary code as part of deserializing the object. While the particular payload used against Equifax is undisclosed, many attackers use tools such as ysoserial or ysoserial.net, as well as GadgetProbe, to execute arbitrary commands against Java and .NET applications that deserialize user input. Rather than relying on layer-7 network signatures of an unknown class hierarchy, RASP uses techniques such as instrumentation to monitor inside APIs as data is being deserialized, trapping sensitive methods at the method entry point before the body executes.
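The interception idea behind RASP can be illustrated with Python's pickle module, whose `Unpickler.find_class` hook is called at the entry point of class resolution during deserialization. This is a simplified analogue of what RASP products do inside Java and .NET runtimes, not a description of any particular product.

```python
import io
import pickle

# Only these (module, class) pairs may be resolved during deserialization.
SAFE_CLASSES = {("builtins", "dict"), ("builtins", "list"), ("builtins", "str")}

class GuardedUnpickler(pickle.Unpickler):
    """Trap class resolution at the entry point of deserialization, the
    same interception point RASP tools instrument in Java/.NET
    deserializers (shown here with Python's pickle for brevity)."""
    def find_class(self, module, name):
        if (module, name) not in SAFE_CLASSES:
            raise pickle.UnpicklingError(
                f"blocked deserialization of {module}.{name}")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    """Deserialize `data`, allowing only whitelisted classes."""
    return GuardedUnpickler(io.BytesIO(data)).load()
```

Plain data structures deserialize normally, while a payload that tries to resolve an unexpected callable is rejected before any of its code can run.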

Although charges have been filed, it is unclear what the result will be as China does not have an extradition treaty with the United States.



Podcast: Diana Larsen on the Origins of Agility and Agile Fluency

MMS Founder
MMS Diana Larsen

Article originally posted on InfoQ. Visit InfoQ

In this podcast, recorded at Agile 2019, Shane Hastie, lead editor for Culture & Methods, spoke to Diana Larsen about the origins of what became agile development, where business agility is headed, and the Agile Fluency Project.

Key Takeaways

  • There is a deep history of business improvement initiatives that predates the agile manifesto
  • It was a part of a cultural movement that was moving more toward more humane workplaces that could deliver more value
  • When you give people a good environment and good support to do their work, you get better work and better products
  • The ideas of business agility predate the work in agile development – engaging support structures in organisations to enable change
  • You can’t change one part of a system without it having effects on other parts of the system
  • The Agile Fluency Model is a tool to help teams diagnose themselves and to expose the system to leadership 

Subscribe on:

Show Notes

  • 00:25 Could you give us the two-minute overview?
  • 00:27 Yes. I had a career in work process redesign and organization design for a while. Then in 1997 I bumped into some guys, Martin Fowler and Ron Jeffries and Ward Cunningham and Joshua Kerievsky and some other folks like that. They started telling me about what they were thinking about right then, and I started telling them about the things I had been doing, and we decided there was some resonance.
  • 00:55 And shortly after that I started presenting at agile conferences and writing and doing coaching and consulting. And I’ve been in the agile community ever since – I found my tribe. It was my home. So part of that was writing the book on Agile Retrospectives with Esther Derby and writing the Liftoff book with Ainsley Nies and developing the Agile Fluency Model, with James Shore. and just trying to find ways that I can make my best contribution back to the community because it’s given me so much.
  • 01:26 One of the things in terms of your contribution to this particular event is you gave an introduction to a little bit of history and it’s probably a useful thing for us every now and again to step back and think about where did these ideas come out of? I know there’s often a perception that Agile leaped into being on, what was it, the 15th of February, 2001 and nothing had happened before then. We know that it wasn’t that case, but maybe with your background and research, tell us a bit of the story
  • 01:58 I started thinking about it and you know, Frederick Taylor gets kind of a bad rap, but really he was the first one to say, “I want to take a systematic look at what it takes for work to work”. Right, and he was focused mostly on time and motion and efficiencies and those kinds of things. But he really was the first one to sort of take, at least the first one that we know a lot about, to actually take the time to study that what was going on. And he kind of kicked off a whole movement of people who did that.
  • 02:31 And that led into the twenties and thirties where Elton Mayo was doing the Hawthorne effect studies, you know, what really will help these women be more productive? And what if we turn the lights up? Oh, that seems to work. What if we turn the lights down? Oh, that also seems to work. And then looking at it more deeply and saying, oh, when people are working, they appreciate people who are trying to help them do better work. And so that was the actual result that came out of that. And then we rolled into World War Two and lots of things were going on then, like Training Within Industry in the US, where the ideas of employee development and training, helping people really understand how to do a good job and how to be successful in their jobs, got started, giving them the skills that they needed.
  • 03:24 And then over in the UK, Fred Emery, Eric Trist, and Elliot Jacobs were looking at the coal mines, because coal was so central to the UK war effort, and they noticed that some coal miners were doing better than other coal miners. They wanted to figure that out, so they went down into the mines with them and discovered self-organizing teams. Right? Whatever the managers told them to do, once they got down there, they made some of their own autonomous choices about what was the best way to get the work done that day and who was best equipped to do which kind of work. And then that kept rolling along. After the war, we got Theory X and Theory Y, and a lot from the coal mine studies turned into what we call sociotechnical systems study, which is how do we blend social systems and technical systems in work, and how does the environment affect that? And that was the Tavistock model. And that took off for a long time. And also out of that came statistical process control, which had started during the war, and after the war, Edwards Deming discovered that nobody had any interest in that anymore because there wasn’t a war.
  • 04:35 So he took it off to Japan and then we got quality circles and more emphasis on Total Quality Management, which for a while we were just calling Japanese management. Nobody knew where it came from. And that ended up in the Toyota Production System, which became Lean Manufacturing, which also floated into the 80s as Business Process Reengineering.
  • 04:58 And then that gave rise to Theory of Constraints. And so, now we have DevOps and we have Lean and Kanban, and then there was also the technological improvements that were made during the war. Bletchley Park and Grace Hopper and her crew. We began thinking about how do we work with these machines and software development life cycle came to be, and Winston Royce’s paper and everybody got the wrong idea about waterfall, right?
  • 05:29 And then improvements started being made in that, and we got Evolutionary Project Management. And then Jim Highsmith’s Adaptive Software Development and all of that rolled up into how we got Agile. I mean, all of those ideas were in the wind. Now, whether any of the manifesto signers studied any of that stuff or not, I don’t know.
  • 05:53 But I do know it was part of a cultural movement that was moving toward more humane workplaces, workplaces that could deliver more value, and I saw that those went hand in hand. A lot of work went into understanding that when you give people a good environment and good support to do their work, you get better work and better products.
  • 06:17 So that was just fun. And I was stimulated by a couple of things that I really want to give credit to: Kevin Behr did a talk at Lean Agile Scotland a couple of years ago called something about coal mine in DevOps. I’ve got the citations in my slide and it really got me reflecting on some stuff I already knew, but I started thinking about it in a new way and its connection to software development, the kind of work that we do.
  • 06:45 And then a couple of months ago, Jessica Kerr did a talk at the J On The Beach conference about something she called symmathesy, which is what happens when you make a space for people to do learning together in a creative way, and what results from that. And she went all the way back to the Florentine Camerata.
  • 07:12 And came forward with that different groups of folks who got together for the express purpose of how can we be creative and how can we support each other in our creative efforts, and then of course, she was able to apply that to teamwork and software teams. And so it just seemed like there was a lot going on that was really interesting to me, and I’ve been thinking about this a lot because as James Shore and I have been working on Agile Fluency, I think we talked about this a year ago in March, we gave a whole update of the article. We expanded it quite a bit, added a lot more material to it that we had learned over time.
  • 07:52 And what I’ve noticed about the model is, you know, we wrote the model. We researched it from like 2009 through 2011 and then really wrote the article in 2012 and that’s when it was published, the original one, and I’ve been pleased and amazed as new ideas have been coming into the agile world that they just fit right into the model.
  • 08:18 We talked about focusing teams and delivering teams and delivering teams could release at will and did continuous integration, and we talked a little about pairing, but mobbing fits there too, and none of us had been thinking about mobbing when we were writing the article and DevOps, you know, and where does that fit in?
  • 08:35 And the emphasis on UX and design now, and, all those things that there’s place for them in the agile fluency model, even though we didn’t know there was going to need to be a place for them for anything, you know? And so that’s been a real joy for me to watch as that happens, that it absorbs and we intended it to be a very inclusive model, and it’s turning out to be inclusive in ways that we didn’t even anticipate, you know? And that’s been great. Yeah.
  • 09:01 If you step back a tiny bit and look at agility. What is the state of agility? What are the things that are happening in the world today?
  • 09:10 Well, certainly the movement toward business agility that you’ve been so much a part of, that is true. Way back in my work process design days, where we always started was going to talk to the HR people and the facilities people long before we ever talked to the people that we suspected would form themselves into a team.
  • 09:30 And we would get those folks on board. And I think one of the things is that awareness that agile isn’t just something that happens in the IT department or just to teams; that’s becoming more and more widespread. I mean, there was even the Harvard Business Review article, I think it was last fall, about how when ING and some of these other big companies started to initiate an agile effort, they discovered that it cannot succeed without the involvement of other parts of the organization. And that, of course, fits into open systems theory perfectly. Right? You can’t change one part of a system without it having effects on other parts of the system.
  • 10:10 And so we are seeing that more clearly now. And business agility is a piece of that, you know, the desire of the organizations to just take advantage of that resilience and that quickness in their marketplaces. So that’s a really important one. The other thing that we’re noticing, we’re just beginning to notice because we pay attention to this from these weak signals because of our strengthening zone.
  • 10:34 You know, it’s like there’s something out there that’s the future of agile and we’re not exactly sure what it looks like, but we have some glimmers. Some of those things are things like team self-selection teams that are more devoted to the overall health and wellbeing of the company than they are to their own product, that they’re willing to make adjustments in the work they’re doing on their product so that the company can survive.
  • 10:58 And things like companies making spaces, I’ve been hearing about this more and more, companies making spaces for startup incubators because they know that if there’s a free flow of ideas, they’re going to get good ideas from those startup incubators and they can give them mentoring and so there’s a really nice quid pro quo going on there and so just those kinds of things, software becoming more aware that it’s a part of a larger community. And I think as it has been less just embedded in pockets, you know, I mean, it was just Silicon Valley and then Route whatever, 68 or whatever it is, outside of Boston, , there were just these few places that were the software places.
  • 11:40 Well now Seattle, Portland, Des Moines, Iowa. I mean, all kinds of places are becoming the software places in their regions, and Austin. And so I just feel like there’s more of a sense that we aren’t separate from our communities, we are a part of them. And the obvious way is by giving space for user groups to have meetings, community groups to have meetings, those kinds of things.
  • 12:08 So that feels like the edge of something new to me.
  • 12:12 Tell us about the Agile Fluency Model, perhaps a starting point, a very quick introduction for the audience that haven’t come across it before
  • 12:19 The Agile Fluency Model is a way of thinking about the needs of a business for agile. It’s a model of team behaviours, and we use the metaphor of language fluency that depending on your need, the fluency that you might need to speak a language that’s not your native tongue is going to be different. If you’re traveling, just touristing, you need one kind of set of language, be able to ask and answer certain kinds of questions, be able to use greetings, those kinds of things, but if you’re going to stay longer, if you’re going for an extended homestay or something like that.
  • 12:58 You gotta be able to tell stories about what you did last week and, that gets a little more sophisticated. And then if you’re there to open a business or teach at a university, that’s a whole other level of fluency. What we saw was that fluency in agile, the proficiencies that teams need to develop to be able to work in agile ways had the same thing.
  • 13:20 And so we named those different kinds of fluency, Focusing, Delivering, Optimizing, and Strengthening. If your need, for instance, for your business and your kinds of products is something that serves your customer, but maybe overall only in the short term, but you still need to be able to shift with the marketplace. The team’s work needs to be able to be redirected, but the team is really devoted to what is going to bring customer and business value. That’s a Focusing team. And those teams don’t have to worry so much about technical debt, because maybe their products aren’t long lasting and maybe they’re working with the marketing department and they roll out a new web design every three months or something, new functionality every three months.
  • 14:10 So, that has a very different need, or some internal IT projects are like that. They have a very different need than a Flickr, which needs to be able to release, that’s an old example now, but which needs to be able to release several times a day and has to be able to do that with confidence that nothing’s going to break.
  • 14:28 So that’s what the model is, is it looks at what are these areas of fluent proficiency that company needs from its teams. And then the other piece of that is in order to get that, what do they need to invest? And so we sort of codify that. What are the benefits, what are the investments? And then what impacts can you expect from that?
  • 14:51 So it’s a model that assesses where the team is, right? But the organization invests.
  • 14:57 Yes, because it has enormous organizational implications. Along with the model, we now have some additional tools that we use. The first part was the model, but since then we’ve developed a diagnostic instrument and we’ve developed what we call the improvement cycle.
  • 15:13 It’s kind of a protocol for how can you effectively help an organization transform. And what we look at is if you can get three to five or better than that, even more teams themselves do a diagnostic and they get the benefit of their own diagnostic, but then you aggregate that and look at the overall picture.
  • 15:34 Then you have something to give to the organizational leaders as a picture of their system. How is their system working? Not how are a bunch of individual teams working, but in aggregate, altogether, what is that like? And that enables leaders and managers to better manage the work system, and also to spend their investment dollars, or their investment of attention, more wisely, as opposed to “we always approach agile transitions or agile transformations in this way, you have to do all these steps”. Well, maybe you don’t have to do all those steps, and maybe you don’t have to have all these various trainings for your teams. Maybe they already have some fluency in some of that, but where don’t they? And where can they use help? And it helps to amplify the team’s voice.
  • 16:21 They get an opportunity to ask for what they think they need. And if we see that as reflective of a broader impact, or then we will recommend to the leaders, you know. You’ve got great team members here. They’re doing great work. They’re doing these kinds of practices. They could do so much better if they were co-located, if they weren’t spread across all the cubicles in your whole open office area.
  • 16:46 Or if you have remote teams, if you give them an adequate electronic team space that they can own and they can work in without the noise of everything else that’s going on in the organization. So those are the kinds of things or better access to business information. How do they really get to know what’s the next most valuable thing?
  • 17:06 Some organizations have very strong product ownership that really communicates well with the team. Other organizations, not so much. And so it helps us pinpoint where would be the best place to invest what you have to invest, and to give some choices around tradeoffs. If you don’t have enough to invest in everything, well, what are the most important things you want to invest in?
  • 17:28 Where are you going to get the most leverage? Because we know any place you make a change, it’s going to cause other changes. So yeah, that works there too.
  • 17:36 Many teams, many organizations are going through the diagnostic process. Are you consolidating that information and using it to look at trends?
  • 17:45 We’d love to be able to do that, and we do not demand that those companies disclose their outcomes to us.
  • 17:53 We thought about doing that at one point in time, but it’s very difficult to ensure that you’re getting that back from all the people who are using our tools, so it’s hard to gather. A lot of companies feel that that’s proprietary in nature and they don’t really want to share it. Even if we shared it all anonymously, it would show some trending for folks, a potential benefit.
  • 18:16 But, there’s such privacy issues now in our world that we’ve just let go of that for now. But we do hope that they keep their own information in their organizations so that in a number of months after they’ve made some investments, they can run the diagnostic again with their teams and they can tell what impact has that had.
  • 18:38 And now what’s the next thing to do? ‘Cause there’s always going to be more to do. but that way you can sequence, we’re going to try this first and then see what kind of an impact we have, and then we’ll try that.
  • 18:50 Inspect and adapt.
  • 18:51 Yes exactly. On a grand scale. Yeah.
  • 18:56 Shane: What else is happening with the agile fluency project?
  • 18:59 Diana: Well, so we formed a project. We now have a business and an organization, and we were getting enough demand from folks to be able to use our things that it made sense. In this last year, we’ve actually gotten our own office space. I mean, it’s an interesting thing, when all of a sudden I find myself in what some people would call their retirement years, and I’ve got a startup as my retirement project. So we’re going through all those same kinds of startup things, and it’s very different from gardening or boating or golfing. I mean, it’s not that different from some volunteer things that some people really seriously get involved in. But I’ve always loved the work I do, and so this was a logical extension of that.
  • 19:40 We are continuing to expand that community of people. We have a couple of things. We’ve got what we call the licensed facilitators. Both Jim and I had come out so strongly against certifications over the years, we knew we couldn’t run a certification program. But we do license people to use our materials if they’ve gone through the training with goodwill, which is an extensive training program, and have completed it, done all the work and had all the discussions. It’s a lot of application. Then we give them a license and they can go and use the materials with whomever they want. And so we’re growing. We had licensed people before, but it was a much more lightweight process.
  • 20:21 So two years ago we started something new. So last year, at this time, there were about six licensed facilitators, and now we’re moving up toward more like 60 worldwide. So the trending is more and more and so we’ve got a community of people who can support each other and speak the same language, and I’m very proud of them.
  • 20:42 And then we have another small group of people who are carrying the word about the agile fluency game and using that simulation of two and a half years in a software project to help educate people, help educate teams about what practices they might be missing, but then also other folks in the organization.
  • 21:01 Just last week, I played the game in an organization where I was working with the business leaders and the product development leaders. So, the engineering, product management, marketing, sales, and customer support, all the leaders of all those came together. And we ended up with four games going at the same time.
  • 21:24 And they played the game and they all did not win the first time, and then we talked about why that was. And then we played the game again. And this time they all did very well, and we talked about why that was and how some of that reflected what’s happening in their organization and how they might want to change what’s happening in their organization.
  • 21:47 It’s a very powerful tool. One person told me it was like they got nine months of agile training in three hours, or four hours. So that is also a powerful tool.
  • 21:56 They’re two separate trainings, but we’re just finding that they both add richness on their own and they are very powerful put together.
  • 22:05 So I’m interested in what trends and new horizons you’re seeing, Shane. I’d like to add that to my bucket of things that are going on in the world.
  • 22:16 Shane: Perhaps it’s a personal bias, but what I’m seeing is that what we call agile coaching is something that needs a lot of help.
  • 22:28 Diana: Yeah. I’ve been hearing in different places in the world that the need for that function is growing and there are not enough folks who have skilled proficiency, enough fluency in being an agile coach.
  • 22:42 To explore that, a couple of months ago I put out a question on a LinkedIn group for agile coaches and on the LinkedIn group called Lean Agile Development. On both of those, I put the same question: “When you think of the agile coaches that you really admire, that you think are really superior, what are the couple of qualities that come to mind?” And I was really interested. I’m going to write about this because it was just such an interesting collection, and it didn’t necessarily match what I had predicted would come out, but there were an awful lot of responses like, oh, they must have empathy, or they must have good communication skills.
  • 23:23 And I thought, you know, those are sufficiently vague that how does that really work? And I think you’re right. I think we aren’t clear about what it takes to do that really well, yet. And it’s different from management consulting. I mean, I’ve been a management consultant. It’s different from that work. It’s certainly different from executive coaching or life coaching or those kinds of things.
  • 23:48 It’s not sports coaching, but it very definitely has elements of that.
  • 23:56 Lyssa Adkins’ model gives a really nice overview, but it’s got to go deeper.
  • 24:02 Personally, I’ve been exploring the professional coaching areas, and all of the professional coaching bodies have a requirement that you align with a code of conduct. And you can become an agile coach by printing a business card.
  • 24:23 Pretty much.
  • 24:23 There is no requirement to put yourself under the discipline of a professional body, and coaching can do harm. I think that this is a gap.
  • 24:34 I agree with you. I agree with you. And I know the need is out there. Absolutely. The need is out there. People are still trying to figure out, well, who do the scrum masters report to at our organization? And they very often end up reporting to someone who’s wildly inappropriate just because there just isn’t any place else for them to go. I mean, there has to be someplace and then, you know, I think that’s a really interesting line of inquiry that you’re on. I would support you in that for sure.
  • 25:03 Diana, thanks so much, it’s been great to talk to you.

More about our podcasts

You can keep up-to-date with the podcasts via our RSS Feed, and they are available via SoundCloud, Apple Podcasts, Spotify, Overcast and the Google Podcast. From this page you also have access to our recorded show notes. They all have clickable links that will take you directly to that part of the audio.



A Guide to Writing Properties of Pure Functions – John Hughes at Lambda Days 2020

MMS Founder
MMS Bruno Couriol

Article originally posted on InfoQ. Visit InfoQ

John Hughes, co-designer of Haskell and QuickCheck, recently discussed property-based testing at Lambda Days 2020. In his talk, Hughes presented five different strategies for coming up with properties and compared their effectiveness. Metamorphic and model-based properties reportedly showed the highest effectiveness.

Hughes first discussed a common problem faced by developers using property-based testing (PBT) to test pure functions. With example-based testing, the expected output of the function under test can be computed ahead of time and compared to the actual output of the function. Property-based testing, however, typically generates a large number of semi-random test inputs. Computing the expected outputs of the function under test for a large set of inputs that are not known ahead of time is no easier than implementing the function in the first place. This challenge is called the test oracle problem and is described by Prof. Tsong Yueh Chen as follows:

The oracle problem refers to situations where it is extremely difficult, or impossible, to verify the test result of a given test case

Developers may end up replicating the original code in the tests, and as such introducing the same bugs in the testing code as in the tested code.

To solve this conundrum, Hughes recommended identifying and checking properties of the output of the function under test. One such property for the reverse function, which takes a list and returns the list in reverse, is:

reverse (reverse xs) === xs

The challenge now turns to finding a good-enough set of properties that allows developers to either find bugs or ensure the correctness of the function implementation. Hughes demonstrated that a single property is often not enough by giving the following erroneous `reverse` implementation:

reverse xs = xs
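The point can be made concrete with a short Python sketch: the identity "reverse" passes the round-trip check on random inputs, while a second, more discriminating property catches the bug. The function names and the random-sampling harness below are illustrative, standing in for a real PBT library's generators.

```python
import random

def reverse_ok(xs):
    # correct implementation
    return list(reversed(xs))

def reverse_bad(xs):
    # buggy "implementation": returns the list unchanged
    return list(xs)

def round_trip_holds(rev, trials=200):
    # Check reverse (reverse xs) === xs on randomly generated lists.
    for _ in range(trials):
        xs = [random.randint(0, 9) for _ in range(random.randint(0, 10))]
        if rev(rev(xs)) != xs:
            return False
    return True

print(round_trip_holds(reverse_ok))    # True
print(round_trip_holds(reverse_bad))   # True: the bug slips through
print(reverse_bad([1, 2]) == [2, 1])   # False: a second property catches it
```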

Hughes then proceeded to give five systematic ways of generating properties and analyzed their effectiveness. The first method consists of identifying invariants of the function under test. To illustrate the method, Hughes considered a binary search tree (BST) data structure, endowed with the operations find, insert, delete, and union. Such a data structure has the property that inserting a key/value pair into a valid tree results in a valid tree. The same applies to the delete and union operations. A tree itself is considered valid if it adheres to the contracts of the binary search tree (in particular, the ordering of keys).
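The invariant can be sketched in Python by checking validity after every insertion into randomly built trees. The tuple-based tree representation and the names below are illustrative assumptions, not code from Hughes' talk:

```python
import random

# A tree is either None (nil) or a tuple (key, value, left, right).
def insert(k, v, t):
    if t is None:
        return (k, v, None, None)
    key, val, left, right = t
    if k < key:
        return (key, val, insert(k, v, left), right)
    if k > key:
        return (key, val, left, insert(k, v, right))
    return (k, v, left, right)   # same key: overwrite the value

def valid(t, lo=float("-inf"), hi=float("inf")):
    # Invariant: every key lies strictly between the bounds
    # imposed by its ancestors.
    if t is None:
        return True
    key, _, left, right = t
    return lo < key < hi and valid(left, lo, key) and valid(right, key, hi)

# Property: inserting into a valid tree yields a valid tree.
for _ in range(100):
    t = None
    for k in random.sample(range(100), 10):
        t = insert(k, str(k), t)
        assert valid(t)
print("invariant holds")
```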

The second method consists of finding a postcondition verified by the output of the function under test. For instance, Hughes gave the following post-condition for the BST’s find operation:

After calling insert, we should be able to find the key inserted, and any other keys present beforehand.
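This postcondition can be sketched as follows, again with an illustrative tuple-based BST rather than Hughes' Haskell implementation; a dict tracks which key/value pairs should be present:

```python
import random

# A tree is either None (nil) or a tuple (key, value, left, right).
def insert(k, v, t):
    if t is None:
        return (k, v, None, None)
    key, val, left, right = t
    if k < key:
        return (key, val, insert(k, v, left), right)
    if k > key:
        return (key, val, left, insert(k, v, right))
    return (k, v, left, right)

def find(k, t):
    while t is not None:
        key, val, left, right = t
        if k < key:
            t = left
        elif k > key:
            t = right
        else:
            return val
    return None

# Postcondition: after an insert, the new key is findable and every
# previously inserted key still maps to its value.
for _ in range(100):
    t, contents = None, {}
    for k in random.sample(range(100), 10):
        t = insert(k, str(k), t)
        contents[k] = str(k)
        assert all(find(key, t) == val for key, val in contents.items())
print("postcondition holds")
```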

The third method relies on metamorphic properties. Metamorphic properties relate the outputs produced by the system under test when fed related inputs. Hughes gave the following metamorphic property for the BST’s insert operation:

insert k v (insert k' v' t)
===
insert k' v' (insert k v t)

The previous property simply means that the contents of the resulting tree do not depend on the order in which two key/value pairs (with distinct keys) are inserted. Metamorphic testing is an active research area. A thorough examination of the technique can be found in Prof. Tsong Yueh Chen’s paper Metamorphic Testing: A Review of Challenges and Opportunities.
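A Python sketch of this metamorphic property follows, assuming distinct keys. Since the two insertion orders may produce differently shaped trees, the check compares contents via an in-order traversal rather than structural equality (all names are illustrative):

```python
import random

# A tree is either None (nil) or a tuple (key, value, left, right).
def insert(k, v, t):
    if t is None:
        return (k, v, None, None)
    key, val, left, right = t
    if k < key:
        return (key, val, insert(k, v, left), right)
    if k > key:
        return (key, val, left, insert(k, v, right))
    return (k, v, left, right)

def to_list(t):
    # In-order traversal: the tree's contents as a sorted association list.
    if t is None:
        return []
    key, val, left, right = t
    return to_list(left) + [(key, val)] + to_list(right)

# Metamorphic property: for distinct keys, either insertion order
# yields a tree with the same contents.
for _ in range(100):
    ks = random.sample(range(100), 12)
    t = None
    for k in ks[:10]:
        t = insert(k, str(k), t)
    k1, k2 = ks[10], ks[11]
    a = insert(k1, "x", insert(k2, "y", t))
    b = insert(k2, "y", insert(k1, "x", t))
    assert to_list(a) == to_list(b)
print("metamorphic property holds")
```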

The fourth method is based on what Hughes termed inductive properties. Scott Wlaschin, senior software architect, explained in one of his talks on PBT:

These kinds of properties are based on structural induction – that is, if a large thing can be broken into smaller parts, and some property is true for these smaller parts, then you can often prove that the property is true for a large thing as well.
(…)
Induction properties are often naturally applicable to recursive structures such as lists and trees.

Hughes gave the following inductive property for the BST:

union nil t =~= t
union (insert k v t) t'
=~=
insert k v (union t t')

In other words, a union operation on a tree can be related to a union operation on a smaller tree. By induction on the size of the tree, it can be shown that the only function satisfying these properties is the correct union operation of the BST. Inductive properties are often themselves metamorphic properties, with the peculiarity that, being linked to inductive proofs, they form a complete specification.
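These inductive properties can be sketched in Python, with equivalence (=~=) checked up to contents via an in-order traversal. The union below is a hypothetical left-biased implementation (pairs from the first tree win), chosen so that both equations hold:

```python
import random

# A tree is either None (nil) or a tuple (key, value, left, right).
def insert(k, v, t):
    if t is None:
        return (k, v, None, None)
    key, val, left, right = t
    if k < key:
        return (key, val, insert(k, v, left), right)
    if k > key:
        return (key, val, left, insert(k, v, right))
    return (k, v, left, right)

def to_list(t):
    if t is None:
        return []
    key, val, left, right = t
    return to_list(left) + [(key, val)] + to_list(right)

def union(t1, t2):
    # Left-biased union: pairs from t1 overwrite pairs in t2.
    for k, v in to_list(t1):
        t2 = insert(k, v, t2)
    return t2

def random_tree(n=8):
    t = None
    for k in random.sample(range(100), n):
        t = insert(k, str(k), t)
    return t

# Inductive properties, checked up to contents (=~=):
#   union nil t             =~= t
#   union (insert k v t) t' =~= insert k v (union t t')
for _ in range(100):
    t, t2 = random_tree(), random_tree()
    assert to_list(union(None, t)) == to_list(t)
    k = random.randrange(100)
    lhs = union(insert(k, "v", t), t2)
    rhs = insert(k, "v", union(t, t2))
    assert to_list(lhs) == to_list(rhs)
print("inductive properties hold")
```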

The fifth method relies on model-based properties. This implies the existence of a model of the system under test. In the BST’s case, a simple model is the list of key/value pairs stored in the tree, as obtained by traversing the tree data structure. Operations on the system under test translate into operations on the model. Checking properties of the data structure translates into checking properties of the model, the idea being that the latter check is significantly simpler or easier to perform reliably.
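As an illustrative sketch, a plain Python dict can serve as the model, with an in-order traversal acting as the abstraction function from tree to model (the tuple-based tree and all names are assumptions, not code from the talk):

```python
import random

# A tree is either None (nil) or a tuple (key, value, left, right).
def insert(k, v, t):
    if t is None:
        return (k, v, None, None)
    key, val, left, right = t
    if k < key:
        return (key, val, insert(k, v, left), right)
    if k > key:
        return (key, val, left, insert(k, v, right))
    return (k, v, left, right)

def to_list(t):
    # Abstraction function: the tree's contents as a sorted association list.
    if t is None:
        return []
    key, val, left, right = t
    return to_list(left) + [(key, val)] + to_list(right)

# Model-based property: after every operation, the tree's contents
# agree with a plain dict model updated in the same way.
for _ in range(100):
    t, model = None, {}
    for k in random.sample(range(100), 10):
        v = str(k)
        t = insert(k, v, t)
        model[k] = v
        assert to_list(t) == sorted(model.items())
print("model-based property holds")
```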

To assess the effectiveness of the identified properties, Hughes introduced bugs in every operation of a correct BST implementation and counted the bugs found by each property generation method:

Type of property   Number of properties   Effectiveness
Invariant          4                      38%
Post-condition     5                      79%
Metamorphic        16                     90%
Model-based        5                      100%

The model-based properties reach 100% effectiveness and are the closest thing to having an oracle for the system under test. The model acts as a simplified version of the system under test for which an oracle is available. Metamorphic properties are also very effective at finding bugs. This is in line with results from different industries. As related in Metamorphic Testing: A Review of Challenges and Opportunities, a major feat of metamorphic testing was the discovery of more than 100 new bugs in two popular C compilers (GCC and LLVM).

Hughes appropriately concluded in the paper associated with his talk:

These results suggest that, if time is limited, then writing model-based properties may offer the best return on investment, in combination with validity properties to ensure we don’t encounter confusing failures caused by invalid data. In situations where the model is complex (and thus expensive) to define, or where the model resembles the implementation so closely that the same bugs are likely in each, then metamorphic properties offer an effective alternative, at the cost of writing many more properties.

The full video recorded at Lambda Days 2020 is available online. Lambda Days is a developer conference focusing on functional programming methods and techniques. Lambda Days 2020 was held on the 13th and 14th of February 2020 in Kraków, Poland.
