Mobile Monitoring Solutions


Article: The Value and Purpose of a Test Coach

MMS Founder
MMS Matt Stephens

Article originally posted on InfoQ. Visit InfoQ

Introducing business-oriented automated testing can involve a huge cultural change. For this we really need a Test Coach role, just like we have agile coaches and scrum masters. In this article we hear from someone living this new role, using Domain Oriented Testing on a daily basis to ensure acceptance tests have full story coverage, and unit tests verify business behavior, not implementation.

By Matt Stephens

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.

  • This field is for validation purposes and should be left unchanged.


Optimization Strategies for the New Facebook.Com – Ashley Watkins at React Conf

MMS Founder
MMS Bruno Couriol

Article originally posted on InfoQ. Visit InfoQ

Ashley Watkins discussed at React Conf some of the technologies and strategies powering FB5, the new facebook.com, addressing topics such as data-driven dependencies, phased code and data downloading, and more.

The new facebook.com website is a single-page web application built on a React/GraphQL/Relay stack. GraphQL is a query language with which developers may specify the pieces of data that they need. Relay is a framework that integrates with React for constructing GraphQL queries and handling data fetching. Watkins explained that by standardizing Facebook’s tech stack around these three technologies, Facebook was able to rethink how to operate at scale while optimizing for user experience.

A key challenge of single-page applications is the minimization of the time-to-visually-complete, i.e. the amount of time between when a user navigates to a website and when the above-the-fold visible content is rendered. In a standard implementation of a single-page app, the client requests a page, and the server sends the HTML and JavaScript for the page, which then triggers the fetching of some data. While data is being fetched, a loading screen is displayed.

With the new facebook.com architecture, the HTML document requested by the browser is streamed down to the client; as it arrives, the browser incrementally parses it and starts downloading the scripts it references while the rest of the file is still in flight. Facebook’s servers write the HTML page so that it starts with the CSS and JavaScript resources that all users need, and continues with page- and user-specific resources, which are slower to compute. At this point, the server can start fetching data as indicated by Relay. The HTML page ends with a script tag containing the data prefetched by Relay, once it becomes available. The end result is that JavaScript and data are downloaded in parallel. This optimization is made possible by HTML flushing and by the centralized description, with GraphQL, of the data required by the page. In the best cases, there is no need to display a loading screen at all.
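The flushing order described above can be sketched as follows; the function and chunk names here are illustrative, not Facebook's actual implementation, but the ordering mirrors the described process:

```javascript
// Hypothetical sketch of the server-side flushing order described above.
// Names (commonChunks, pageChunks, prefetchedData) are invented for
// illustration; a real server would stream these parts with res.write().
function buildStreamedDocument(commonChunks, pageChunks, prefetchedData) {
  const parts = [];
  // 1. Flush resources every user needs, so the browser can start
  //    downloading them while the rest of the page is still computed.
  parts.push('<html><head>');
  for (const href of commonChunks) {
    parts.push(`<link rel="preload" href="${href}">`);
  }
  // 2. Page- and user-specific resources, slower to compute server-side.
  for (const href of pageChunks) {
    parts.push(`<script src="${href}" defer></script>`);
  }
  parts.push('</head><body><div id="root"></div>');
  // 3. Data prefetched by Relay on the server, embedded at the end so
  //    JavaScript and data effectively download in parallel.
  parts.push(
    `<script>window.__RELAY_DATA__ = ${JSON.stringify(prefetchedData)};</script>`
  );
  parts.push('</body></html>');
  return parts;
}
```

The key point is only the ordering: common resources first, user-specific resources next, prefetched data last.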

Facebook also optimizes for time-to-first-paint, which is the time necessary to display the first pixels on the screen (and thus necessarily shorter than the time-to-visually-complete). To that end, Facebook uses phased code-splitting, in which the code to be downloaded is split across three buckets of increasing priority. The first bucket relates to the loading page, the second contains the code that visually affects the page, and the third gathers the code and data that are orthogonal to display concerns (like analytics). To support phased code-splitting at build time, Facebook added two APIs, importForDisplay() (phase 2) and importForAfterDisplay() (phase 3), to assign phases. Code is thus downloaded in three phases, and a render occurs at the end of each of the first two. The first render therefore occurs before the full code for the page is fetched, shortening the time-to-first-paint. Because phase 3 only includes code that does not affect the screen, the screen is complete by the end of phase 2, and the time-to-visually-complete is also shortened.
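As a rough illustration of the three-bucket split, the scheduler below is a simplified stand-in, not Facebook's build-time tooling (although importForDisplay() and importForAfterDisplay() are the API names given in the talk):

```javascript
// Illustrative sketch of phased code-splitting: modules are assigned to
// one of three priority buckets, and each bucket is downloaded (and a
// render performed) before the next one starts.
const PHASE_LOADING = 1;       // needed for the loading state / first paint
const PHASE_DISPLAY = 2;       // affects what the page looks like
const PHASE_AFTER_DISPLAY = 3; // orthogonal to display (e.g. analytics)

function planDownload(modules) {
  // Group modules by phase, preserving order within each phase.
  const plan = { 1: [], 2: [], 3: [] };
  for (const m of modules) plan[m.phase].push(m.name);
  return [plan[1], plan[2], plan[3]];
}

const [loading, display, afterDisplay] = planDownload([
  { name: 'Shell', phase: PHASE_LOADING },
  { name: 'NewsFeed', phase: PHASE_DISPLAY },
  { name: 'Analytics', phase: PHASE_AFTER_DISPLAY },
]);
// A render happens at the end of phase 1 and phase 2; by the end of
// phase 2 the screen is visually complete.
```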

Facebook additionally optimizes to get the primary content on screen as early as possible, by minimizing time-to-meaningful-paint. While the previous optimization strategy involves smart code-splitting, the time-to-meaningful-paint strategy involves data splitting: critical data must arrive first and be used immediately. It is, for instance, rarely necessary to show more than one newsfeed post for the initial render of the page. To that end, Relay introduced streamed queries, in which the parts of a query that can be streamed down are annotated with the @stream directive:

fragment HomepageData on User {
  newsFeed(first: 10) {
    edges @stream
  }
}

The @defer directive similarly allows indicating which parts of the queried data are not critical. The meaningful paint thus comes earlier, and as additional data is received, the view is hydrated and the screen updated. Additionally, the time-to-visually-complete may also be lowered, as the critical code and data delivered first typically relate to above-the-fold content.
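The progressive hydration described above can be modeled with a simple simulation; this sketch only captures the idea of patching a view as streamed payloads arrive, not Relay's actual wire format or API:

```javascript
// Simplified simulation: critical fields arrive first and are rendered
// immediately; @stream/@defer payloads then patch the view as they come in.
function createView(initialData) {
  return { data: { ...initialData }, renders: 1 };
}

function applyStreamedPayload(view, payload) {
  // Each payload targets one field (e.g. additional newsFeed edges) and
  // triggers a re-render of the affected part of the screen.
  const existing = view.data[payload.field] || [];
  view.data[payload.field] = existing.concat(payload.items);
  view.renders += 1;
  return view;
}

// First render: one post is enough for the meaningful paint.
let view = createView({ newsFeed: ['post-1'] });
// Remaining edges stream in afterwards and hydrate the view.
view = applyStreamedPayload(view, { field: 'newsFeed', items: ['post-2', 'post-3'] });
```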

The last optimization strategy consists of not fetching code that will not be used. Watkins gave the example of two variations of the same component, as may occur for A/B testing purposes. The second component version offers distinct features and comes with additional code which is only necessary when the user is in the A/B test group. Watkins observed that a first idea, implementing the feature toggle with dynamic imports, does not mix well with streamed renders: a dynamic import only starts once previously downloaded code has been parsed and executed on the client, so gaps open up between imports, further delaying rendering. Facebook instead implements feature toggles so that they can be resolved when the request is received by the server, which then ships only the right code.
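A minimal sketch of server-side toggle resolution might look as follows; the bundle names and the isInExperiment() check are invented for illustration:

```javascript
// Hypothetical server-side feature toggle: the variant is decided at
// request time, so the client never downloads (or waits on a dynamic
// import for) the component bundle it will not render.
const BUNDLES = {
  control: 'ComposerClassic.js',
  experiment: 'ComposerNew.js',
};

function isInExperiment(user) {
  // Stand-in for a real experimentation-service lookup.
  return user.abGroup === 'experiment';
}

function selectComposerBundle(user) {
  return isInExperiment(user) ? BUNDLES.experiment : BUNDLES.control;
}
```

The server would then reference only the selected bundle in the streamed HTML response.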

Another case is that of components with several variants, one of which is selected dynamically depending on the fetched data. For example, a Post component may delegate to a VideoPost or PhotoPost component depending on the fetched post type or content. Each of these components may have its own data requirements, so a standard implementation leads to downloading the code and data for all variants. In this case, facebook.com applies an optimization strategy dubbed data-driven code-splitting, which relies on the GraphQL description of the variant components’ queries and the @module Relay directive:

... on Post {
  ... on PhotoPost {
    photo_data @module(name: "PhotoComponent.js")
  }
  ... on VideoPost {
    video_data @module(name: "VideoComponent.js")
  }
  ... on SongPost {
    song_data @module(name: "SongComponent.js")
  }
}

As code and data are streamed in parallel, the page is progressively rendered, and component rendering needs to be coordinated to avoid having content moving around as data and images arrive at different times. The facebook.com website developers leverage React Suspense for rendering coordination purposes, optimizing the perceived user loading experience.

Watkins’s full talk is available on ReactConf’s site and contains further code snippets and detailed explanations. React Conf is the official Facebook React event. React Conf was held in 2019 in Henderson, Nevada, on October 24 & 25.



Presentation: Metaphors We Create By

MMS Founder
MMS Jabari Bell

Article originally posted on InfoQ. Visit InfoQ

Transcript

Bell: The importance of receiving quality medical care cannot be overstated. It is often the difference between life and death for many people. A person should have the right to fair care when visiting the doctor regardless of that person’s race, age, or gender. This is something that many of us agree on in theory, but in practice, things are far from ideal. Statistics show that different groups of people in this country have wildly different experiences when going to the doctor. There have been reports of patients that have not been believed when they told their doctor that they were feeling intense pain. In many of these cases, this misdiagnosis has led to death.

Gender and Racial Bias

Gender and racial bias in the medical field has been determined as the cause for a majority of these misdiagnoses. Having biases is human, but when those biases are the basis for making life and death decisions, people die unnecessarily. Not only that, but marginalized people will become skeptical that they will receive good care in the first place. As a result, they’ll pass up on valuable opportunities for preventative treatments that can lead to further health complications.

Women are twice as likely as men to die in the year following a heart attack, yet only 40% of women’s health care routines include a heart risk check. Furthermore, a recent Wall Street Journal article stated that women who come to the hospital with heart attack symptoms are seven times as likely to be sent home as their male counterparts. Women are also twice as likely as men to suffer from chronic pain. Studies show that they are twice as likely to be dismissed from the hospital as men as well.

Things aren’t much better if you’re a minority. A New York Times article last year showed that black infants in America are more than twice as likely to die as white infants in childbirth. The gap is wider than it was in 1850 when slavery was still legal. Upward mobility is not a protector against these statistics either. A black woman with an advanced degree is more likely to lose her baby than a white woman with less than an eighth grade education. It can be hard to remove bias in a field that has textbooks, like the nursing textbook that was reported on in 2017, in which some of the following quotes were taken. “Hispanics may believe that pain is a form of punishment, and suffering must be endured if they are to enter heaven.” “Native Americans may prefer to receive medications that have been blessed by a tribal shaman.” “Blacks often report higher pain intensity than other cultures.” These quotes were taken from the Focus on Diversity section of this nursing book.

This is crazy. These are some tough statistics to digest. It’s hard enough to change the minds of people who hold these biases, front of mind. What’s even harder is changing the behaviors of people who aren’t even aware that they carry these biases in the first place. This systemic change can take generations, and even then it’s not even guaranteed. That’s a long time to wait. We don’t have a lot of time. I sincerely believe that we don’t have to wait that long to get equal access to quality medical care for everyone.

I believe that with today’s technological advances in machine learning, big data, and anti-discriminatory algorithms, we can solve the problem of gender and racial bias in the medical field. What do we need to get this future of equality today? After talking to doctors and patients, we’ve identified two main groups to focus on. We’ve designated one group, at risk, and the other group, not at risk. After thinking about it, we decided to call the not at risk group, just normal. After analyzing our data points collected from many of our hypothetical users, you start to see that there are two main qualities that affect one’s chances of being in the at-risk group. One is being a woman and the other is being a minority. We combine these two attributes to derive our core key metrics that we focus on since the start of our efforts. We ended up combining survivability, general satisfaction, and experience to compute our overall user satisfaction score. It turns out, to get the max satisfaction, a person’s got to be not a woman and not a minority.

We’re at the point now, where we’re focusing on collaborating with the normal group, who are 99.9% made up of white guys. Furthermore, from the data that we’ve gathered, we’ve determined that white guys are in possession of a special privilege that everyone could benefit from. What if we could figure out how to democratize the privilege that these white guys have today? Sharing it with others could actually save lives. The data from our user interviews with white guys showed that white guys really want to share it, but they just have no idea how to. I’ve got news for you. Those white guys can drop that burden. We’ve figured out how to transcend the biological limitations of white male privilege today.

We are proud to present, I Need a White Guy, the first two-sided marketplace for white male privilege. Is your doctor not listening to you? No problem. Are you afraid that nurse handling your IV thinks that you have thicker skin because it’s darker than hers? We’ve got you covered as well. With, I Need a White Guy, access to white male privilege is always just around the corner. That’s not it. If you got white male privilege, and you’re not sure what to do with it, we’ve got you covered as well. All you have to do is sign up and get certified, and helping people in need is just two taps away.

PaaS (Privilege as a Service)

I Need a White Guy, is a first-in-class PaaS, Privilege as a Service technology. We are ready to deploy white male privilege to our users so that they can get the care that they deserve. Here’s how our privilege pipeline works. Jamila is a Mexican teenager who’s been having a nagging ankle pain. She’s gone to the doctor three times, and she’s been told not to worry about it. After spending five minutes on WebMD, she’s determined that she has tendinitis in her ankle. She returned to her doctor with the diagnosis and her doctor told her that she was wrong, and in fact, her ankle did not hurt. He suggested that the pain was all in Jamila’s head. Since she’s Mexican, she probably is thinking that the pain is an essential requirement of getting into heaven.

No service no problem, Jamila. Jamila pulls out, I Need a White Guy, and searches for a certified white guy in the area. She confirms her location. Stan arrives in three minutes and explains to Jamila’s doctor that Jamila is in fact suffering from tendinitis and looks like she may need surgery. Jamila was rushed to the ER immediately. Problem solved. With, I Need a White Guy, spreading the benefits of white male privilege really is only a few taps away.

This is something completely fake that I created two months ago. I saw an episode of John Oliver. He did an episode on bias in the medical field. Towards the end, Wanda Sykes comes on, and she takes over and they do this skit with Larry David. Where she’s like, “If you need help…” It was a website where Larry David has his canned answers to questions that doctors can answer. I was sitting on the couch with my fiancé and I was like, “I don’t know if that’s funny enough. I feel I can do better.” Instantly, because all startups are about making Uber for something, I was like, “What if we had Uber for white privilege?” I took two days and put together, I Need a White Guy. I was really excited creating this because, if you couldn’t tell, I might be in one of the at-risk groups. On the other side, the episode was really eye-opening for me. I didn’t know about a lot of these statistics. I just hate the doctor.

When I took a step back and really thought about, I Need a White Guy, and when I started sharing with people and seeing how people were reacting, there were some lessons that stood out to me that I think we can take and apply to our mindsets when creating or attempting to create socially-conscious software. Some of that is going to take us taking a look at conceptual metaphor, and how it works, and how it affects people’s behavior. From that, maybe we can take some things away that we can put into practice when we create.

Conceptual Metaphor

Let’s dive right into conceptual metaphor. Metaphor is commonly thought of as a poetic device. At its core, it’s understanding one thing in terms of another, where those two things have a shared characteristic. Let’s look at an example. “You are my sunshine.” In this sentence, it is implied that just like the sunshine brings life and warmth, you do the same by bringing happiness and warmth to my day. Said another way, I’m describing you in terms of something else, sunshine. You become associated with all the characteristics of sunshine by virtue of using the metaphor. If you look at the box here, you become enclosed in this encasement of metaphor that if somebody is seeking to understand you in terms of sunshine, there’s no way you can actually get to understanding you without going through some of the characteristics of sunshine. I’m going to explain this a little bit more with some more examples.

This sentence is an example of linguistic metaphor. It also follows that our conceptual systems are metaphorical in nature as well. A concept is a general idea of something. Concepts determine what we perceive, how we get around in the world, and also how we relate to other people. Our conceptual systems define a large part of how we actually experience reality. The crazy thing about conceptual systems is that, though we use them, we’re normally not aware that we’re using them in the first place. It’s like a fish swimming in the fish tank. The fish is moving through the water, but the water is invisible to the fish. It swims and moves without really knowing what’s going on.

In a similar way, this is how we move through conceptual systems, otherwise known as cultures. We use them without necessarily being aware that they’re there. Sometimes it’s easy to confuse a conceptual notion with actual reality. Since these things are largely invisible, how can we get clues into our conceptual systems? How are they structured? Furthermore, how can we investigate how these structured, conceptual systems determine how we actually feel and act? When we were looking at the linguistic metaphor earlier, we took a look at how the sentence, “You are my sunshine” was structured. This gave us clues as to how metaphor was working to describe you in terms of sunshine. Maybe looking at a concept and trying to determine what it is defined in terms of, is a good place to start looking for clues about how a metaphor shapes those actual concepts.

Argument is War

In the opening of his book, “Metaphors We Live By,” George Lakoff uses the example that argument is war. The implication here is that we treat argument conceptually like we treat war. Looking at a few sentences can help to clarify this fact. “Your claims are indefensible.” “He attacked every weak point in my argument.” “Her criticisms were right on target.” “I demolished his argument.” “You shot down all of my arguments.” The way that we shape the language about argument shows that we do treat argument as war. War here being a two-sided battle with a determined winner and loser. The conceptual metaphor not only shapes the way that we speak about argument, but it also shapes the way that we prepare, how we act, and attitudes that we carry while arguing. We’re not usually aware that we’re referring to argument in this way because of the invisible nature of conceptual metaphor. We think that war is the stuff that argument is made of. Normally, we’re not really aware of this distinction. It’s not front of mind. This is the conceptual water that we swim in. We move through it, but we’re not necessarily aware that it’s actually there. What happens when we begin to poke a hole in our conceptual fish tank?

Imagine for a moment that we are transported to another culture, where instead of argument being war, argument is seen as dance. Let’s imagine that we attend a debate in this culture where a group of people, they’re arguing. Our attempt to understand what’s happening might look a little something like this. Instead of one side trying to defeat the other, we would bear witness to a group of people seeking to create an aesthetic performance pleasing to us, the audience. It’s doubtful that we would even recognize it as an argument at all. The different metaphor, the different conceptual system breaks the line of sight of understanding. We would probably look at this argument and it wouldn’t even click that it’s argument at all. How does this affect how we behave and how we feel? We talk about argument as war, because we conceive of argument as war. It follows that we act in accordance with how we conceive of things. This is another connection that can be easy to miss at times. Said another way, it’s a cup half-empty, half-full. If you view situations as a cup half-full, then your behaviors and your associated feelings will most likely be positive. It would be really weird if you saw the cup half-full but you were still feeling crappy and down.

How can we take this understanding and apply it to the major conceptual metaphor that’s working in, I Need a White Guy? How can we use that to expose some of the things that make it really seductive? “Maybe this could work, but then, why does it feel icky at the same time?” What behaviors does this conceptual metaphor promote? How do these behaviors relate to the stated goal of, I Need a White Guy? There’s this relationship between the metaphor and the behaviors it promotes. How does that relate to the goal that, I Need a White Guy, says that it’s trying to accomplish?

Let’s start with some of the sentences from the website, instead of with the metaphor, and see if we can infer some of the characteristics about the conceptual metaphor at play. “Share some of your privilege, white bro.” “We delivered privilege to 3 million customers this quarter.” “I got fair treatment at a great price.” “Get access to our cohort of over 300,000 vetted white guys.” “Subscribe here”.

Privilege is Commodity

I Need a White Guy, treats privilege as a commodity. What types of behaviors are associated with commodity? You have a buyer. You have a seller. You have a marketplace. You have a transaction. The value that gets applied or projected onto privilege is that of a commodity: it can fluctuate over time. On the other hand, we have privilege, which at its core is something that we claim should be static and fixed. If you are human and you’re born, you should just be treated fairly. We have a conflict between something that’s dynamic and something that’s static and fixed.

This is at the core of what makes, I Need a White Guy, ultimately a self-defeating effort. It’s claiming to reinforce the static nature of a human right using a metaphor that promotes its variability. The more effort you pour into the transactional metaphor, the greater the distance you’ll create from your goal of inclusion. In other words, people feel like shit.

Seeing here, what’s happening is that, I Need a White Guy, is trying to take something that’s static, from the static conceptual metaphor and pull that over into this dynamic space. There’s a conflict. There’s tension going on there. That tension actually reflects to what we do with humans in real life. That’s what, I Need a White Guy, is seeking to expose. Seen another way, we’re taking a human right and trying to pull it into the space of commodity.

I Need a White Guy, is a commentary on the trend of commodity fetishism, not only in tech, but in society today. What are the costs of commodification? How does that affect society? We have kids right now who are really suffering from self-esteem issues because their self-esteem and their self-worth is measured from this arbitrary determiner of likes. We have relationships that are just passed through these algorithms on apps like Tinder. Elections, you just need to turn on the news to see how trying to commodify our democratic system is going for us.

Let’s zoom in on human rights. No matter how hard you attempt to be inclusive using transactional metaphors, you won’t get to inclusion. In fact, the harder you try the worse things will get. It’ll be better if you didn’t try in the first place.

3 Takeaways for Creators

What are three takeaways that we can walk away with for us as creators? Because we’re all susceptible to this, all of us are immersed in culture. One, metaphor of compatibility, you really want to make sure that the idea or the change that you’re effecting is rooted in an idea or a concept that will promote the behavior that you’re looking to change. Does my conceptual metaphor encourage the behaviors that I’m looking to promote? For example, if we were trying to start a business that promoted the general well-being of mental health of dogs, you probably don’t want to push a cup half-empty conceptual metaphor, because you’re going to get some unhappy dogs. Where are some places that we can look to get a better sense of how metaphors work, because this stuff can really be hard to keep front of mind?

Comedians, they’re really good at this, at focusing in on metaphors and playing with them and inverting them. When we see them out of place, all of a sudden we gain this awareness and we get to laugh. Another benefit of satire in comedy is that it can remove the inherent guilt or the inherent defensiveness that often happens when we start to bring up some of these more sensitive issues. That can facilitate greater conversation and collaboration. Poets are really good at this. Musicians, linguists, psychoanalysts, they’re all really good at this, of making you aware of the metaphors that are in play in your life and how they’re affecting your behaviors.

Second, using personal experience. When you make something personal, you get a free pass into this metaphoric conceptual space, because it’s core and essential to you. That’s why when we try to abstract the pain of another, oftentimes, we stumble and drop it. It’s because it’s our actual experience in reality. It roots us in the conceptual metaphor, and it makes it easier to keep those things in mind.

For example, when I did, I Need a White Guy, that was rooted in personal experience. When I started the talk, I said I didn’t really know much about the bias in medical field, all the statistics behind it. I am a black dude in America, and I did grow up really scared of the police. There was one day at work, where I had a nervous breakdown. It was one of those summers where every news report was some unarmed black dude getting killed on tape, and I just snapped. That resonated with that part of me. I was like, “I can create something.” It was immediately cathartic. When we create in these ways, it can be really powerful because what it does is it’ll draw other people to it.

These are some ways that you can become aware of what’s going on with you without having a nervous breakdown. Some reflective journaling, therapy, intense physical training, or meditation can offer you an outlet to create some distance between your perception of metaphor and your actual experience of it. Non-violent communication is a great way to get an insight into some of these things, and also, systems thinking.

Collaborate

Last, rubbing your mental models and conceptual metaphors up against other people’s, is a great way to make you aware of inconsistencies. This is one of the very pragmatic and practical benefits of diversity or having different ideas and different points of view. Those different ideas and those different points of view, they’re contingent upon having psychological safety, because if people don’t feel safe, they’re not going to be open about what they really think. If they aren’t open about what they really think, we don’t really collaborate well. This is one of the downsides of using the metaphor of competition, when we apply to our teams and our groups. Because when you have a competitive environment, the behaviors that subsequently come from that, work in direct opposition of collaboration.

In terms of how to collaborate, active listening. I was reading a book about organizational theory. There was one guy who put some of these ideas into play. The guy who wrote the book interviewed him and said, “How did you know that this stuff was actually working?” He said, “I was in a meeting one day and people were having an argument,” or some conflict. He noticed that there was a shift: before they put these things into play, people would really say, “I know the truth.” Afterwards, once you start to understand how these concepts work, you might say, “I have a truth.” Then you make space for other truths. There isn’t this need to fight for this either/or, which is often coupled with competitive spaces, and by extension, argument is war. Meditation, non-competitive creation, and systems thinking.

I would urge you, if you want to hear more about this or you’re interested, I’m going to be sharing some information. If you go to ineedawhiteguy.com and sign up for early access, you can get access to that.

See more presentations with transcripts



Facebook's CSS-in-JS Approach – Frank Yan at React Conf 2019

MMS Founder
MMS Bruno Couriol

Article originally posted on InfoQ. Visit InfoQ

Frank Yan discussed at React Conf some of the technologies and strategies powering FB5, the new facebook.com, addressing topics such as Facebook’s approach to CSS-in-JS.

Facebook’s website started as a simple PHP webpage. As richer and more interactive features such as messaging and live videos were added to the platform, the stack grew more complex to include a combination of CSS, vanilla JavaScript, Flux, React, and Relay, with a negative impact on page size and performance. To build the user experience they needed for their products, Facebook decided to rebuild from the ground up with the following requirements: a modern design system with support for theming, improved accessibility, faster page loads, and seamless interactions. The new Facebook website is a single-page app with a React/GraphQL/Relay stack.

Given Facebook’s specific constraints, it decided to create its own CSS-in-JS library. The underlying idea was to not discard idiomatic CSS but to make it easier to maintain and keep the good parts of CSS that developers are used to enjoying. The number one priority was readability and maintainability, which are issues compounded at scale. Yan gave a code sample featuring Facebook’s CSS-in-JS version in action:

const styles = stylex.create({
  blue: {color: 'blue'},
  red: {color: 'red'}
});

function MyComponent(props) {
  return (
    <span className={styles('blue', 'red')}>
      I'm blue
    </span>
  )
}

In the previous example, the CSS specificity issues that can lead to unpredictable styling are eliminated: the last style applied (red) wins.
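The merge semantics can be modeled with a few lines of plain JavaScript; this is an illustrative re-implementation of the described behavior, not Facebook's actual stylex source:

```javascript
// Minimal model of "last style applied wins", assuming a stylex-like API.
function createStylex(styleMap) {
  return function stylex(...names) {
    // Later arguments override earlier ones per CSS property, so the
    // result is predictable regardless of stylesheet order or specificity.
    const merged = {};
    for (const name of names) Object.assign(merged, styleMap[name]);
    return merged;
  };
}

const styles = createStylex({
  blue: { color: 'blue' },
  red: { color: 'red' },
});
// styles('blue', 'red') resolves to { color: 'red' }
```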

One of the first things designers asked for was dark mode. To implement theming, the Facebook team used CSS variables, also known as custom CSS properties. CSS variables propagate their values to an element’s subtree the same way that React Context does, which allowed replacing the use of Context for holding theme-related values with a native browser feature. Instead of passing SVG files to img tags, SVG icons were passed as React components, which allowed customizing an icon according to the theme. Icons can thus change color at runtime without requiring additional downloads.
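Generating such theme declarations can be sketched as follows; the variable names and color values are invented for illustration:

```javascript
// Hypothetical sketch: a theme object is turned into CSS custom
// properties scoped to a selector. Components then reference
// var(--bg-color) etc.; switching the class on a root element re-themes
// the whole subtree, like React Context propagation but handled
// natively by the browser.
function themeToCss(selector, theme) {
  const lines = Object.entries(theme).map(
    ([name, value]) => `  --${name}: ${value};`
  );
  return `${selector} {\n${lines.join('\n')}\n}`;
}

const darkCss = themeToCss('.theme-dark', {
  'bg-color': '#18191a',
  'text-color': '#e4e6eb',
});
```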

Accessibility-wise, font sizes have been changed from a fixed pixel (px) basis to a font-size-proportional rem basis.
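Assuming the common browser default of 16px per rem, the conversion is straightforward; the helper below is a hypothetical illustration:

```javascript
// Convert a fixed pixel size to a rem value proportional to the user's
// base font size: users who raise their base font size then scale the
// whole UI proportionally.
function pxToRem(px, basePx = 16) {
  return `${px / basePx}rem`;
}
// pxToRem(12) → "0.75rem"; pxToRem(16) → "1rem"
```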

As styles are embedded in JavaScript files, JavaScript tooling can be used to perform static checks (like type-checking style objects, or linting) as well as runtime checks. In Facebook’s development environment, if some elements appear at runtime to be inaccessible, incompatible with dark mode, or slow to render, they are visibly and functionally blocked, and developers are provided with actionable recommendations to solve the detected problems.

From the style rules provided in JavaScript files, atomic CSS files are generated, in which each class refers to a CSS property/value pair. For instance, the previous code example may lead to the following CSS declarations:

.c0 { color: blue; }
.c1 { color: red; }

Those CSS declarations can then be reused anywhere a developer uses a blue or red color. This reuse, combined with the short class names, not only reduces the size of the generated stylesheet but also breaks the linear growth pattern that had been observed between CSS file size and the number of features. Yan mentioned that the CSS file size went from 413 KB down to 73 KB, while still including the dark mode implementation.
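As a rough illustration of how such a build step can deduplicate styles, here is a minimal Python sketch. This is not Facebook’s actual stylex compiler, just an illustration of the mapping it performs; the function and variable names are hypothetical.

```python
# Illustrative sketch (not the real stylex compiler): a build step that
# deduplicates CSS property/value pairs into atomic single-purpose classes.

def atomize(style_sheets):
    """Map each unique (property, value) pair to one short class name."""
    classes = {}  # (prop, value) -> generated class name
    for styles in style_sheets:
        for rule in styles.values():
            for prop, value in rule.items():
                classes.setdefault((prop, value), f"c{len(classes)}")
    return classes

# Two components both using 'color: blue' share one generated class.
component_a = {"blue": {"color": "blue"}, "red": {"color": "red"}}
component_b = {"highlight": {"color": "blue"}}  # duplicate pair, no new class

classes = atomize([component_a, component_b])
css = "\n".join(f".{name} {{ {p}: {v}; }}" for (p, v), name in classes.items())
print(css)
# .c0 { color: blue; }
# .c1 { color: red; }
```

Because every new feature mostly reuses existing property/value pairs, the generated stylesheet grows with the number of *distinct* pairs rather than with the number of components, which is what breaks the linear growth pattern.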

Further optimizations may apply. Yan showed how a stylex style declaration can disappear entirely when constant CSS properties are replaced in the component code by their values, eliminating a large fraction of the runtime cost of using the CSS-in-JS library while maintaining compatibility with server-side rendering.

The described CSS-in-JS approach is reminiscent of the Atomic CSS library, which relies on single-purpose styling units; the no-runtime CSS-in-JS library Linaria; and Facebook’s Prepack, a JavaScript bundle optimizer.

Yan’s full talk is available on ReactConf’s site and contains further code snippets and detailed explanations. React Conf is the official Facebook React event. React Conf was held in 2019 in Henderson, Nevada, on October 24 & 25.



Presentation: Machine Learning on Mobile and Edge Devices With TensorFlow Lite

MMS Founder
MMS Daniel Situnayake

Article originally posted on InfoQ. Visit InfoQ

Transcript

Situnayake: Today, we’re going to be talking about machine learning on mobile and edge devices, specifically with a TensorFlow Lite flavor. I’ll talk a little bit about what that is in a moment.

My name is Daniel Situnayake. I work at Google. I’m a developer advocate for TensorFlow Lite, which means I’m an engineer who works on the TensorFlow Lite team, but helps the TensorFlow Lite team understand and integrate with our community. I do stuff like building examples, and working on bugs that we find from our community. I’m also the co-author of this book, which is coming out in mid-December, called “TinyML.” It’s the first book about machine learning, specifically Deep Learning on devices that are really small. These are the models Wes mentioned that are 15, 20 KB, but can do speech recognition or gesture detection.

What is TensorFlow Lite?

TensorFlow Lite, which is what I work on at Google, is a production framework for deploying ML on all different devices. That’s everything from mobile devices on down. I’ll talk a little bit about some of those categories of devices as we get further along.

Goals

In terms of our goals today, I want to inspire you by letting you know what is possible with machine learning on-device, at the edge of the network. I also want to make sure we all have the same level of understanding of what machine learning is, the things it can do, and how we do that stuff. Finally, I want to give some actionable next steps. If you’re interested in this space, how do you get involved? Where can you learn more? How can you get started? I wanted to see right at the beginning, who has a background in machine learning? Who’s heard of TensorFlow? Then, who has worked on edge devices? Maybe you’re a mobile, web, or embedded developer.

I’m going to be able to give an intro to ML. Presumably, you’ve all heard of ML, to some degree, but I’ll cover the basics of what it is. Then we’ll talk a little bit about ML on-device and why that makes sense. Then I’ll go into some specifics of TensorFlow. Some of the hairy stuff I’ll skip over really quickly, because it might not be relevant if you haven’t used TensorFlow a lot already. We can always talk at the end about that, too.

What is Machine Learning?

First of all, I want to talk about what is machine learning. The easiest way to talk about this is, talk about what is not machine learning. The standard way that we build programs obviously is not machine learning. If I’m going to build a program, generally, I am writing some rules that apply to some data. I might write a function here. Here we’re doing a calculation based on some data, and that happens through rules that we express in code. When that function runs, we get some answers back. The computation happens in that one place where we’re taking the data, running it through some rules, and getting some answers.

Similarly, a video game works in the same way. There’s some stuff going on in a virtual environment. There are some rules which apply whenever stuff happens. All these types of things that we’re familiar with as engineers, in the past, generally use this type of programming. We’re coming up with rule sets that handle stuff that happens in an environment. Pretty much what is going on is we create some rules and we create some data, and we feed them into this box, and out of the box we get answers. Machine learning screws this up a little bit. Instead of feeding in rules and data, we feed in answers and data. Then our box actually figures out how to return some rules, which we can then apply in the future.

Activity Recognition

Imagine we’re doing the classical style of building an application to determine what activity people are performing. In this case, we can look at the person’s speed. Imagine someone is walking. If they’re going less than 4 miles per hour, maybe we can say that they are walking. If they’re going 4 miles per hour or above, their status can be running. Then maybe we can come up with a rule that says, “If this person is going even faster, faster than a human can run, they’re probably cycling.” Then, what do we do if they’re doing something completely different, like golfing? Our rules just don’t work for this; they break down, and the simple heuristic that we’ve chosen doesn’t make sense anymore. This is how the traditional programming model works. Let’s have a look at how this might work in a machine learning application.
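The speed-based heuristic above can be sketched in a few lines of Python. The thresholds are illustrative assumptions (the talk only mentions the 4 mph boundary), and the closing comment shows where the approach breaks down:

```python
# Sketch of the traditional, rule-based approach described above.
# The 4 mph threshold comes from the talk; the 12 mph cycling
# boundary is an illustrative assumption.

def classify_activity(speed_mph):
    if speed_mph < 4:
        return "walking"
    elif speed_mph < 12:
        return "running"
    else:
        # Faster than a human can run, so probably cycling.
        return "cycling"

print(classify_activity(3))   # walking
print(classify_activity(6))   # running
print(classify_activity(20))  # cycling

# Golfing involves standing still, walking, and fast swings, so a
# speed-only heuristic has no sensible answer for it: the rules break down.
```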

In this case, we have maybe a sensor that’s attached to a smart device that the person is wearing. We’re taking raw data from that sensor, and we’re going to train a machine learning algorithm to understand that. In the case that the person is walking, we feed in the data for that, and we feed in a label that says they are walking into our model. We do the same for running, the same for biking, and the same for golfing. We have basically fed all of these things into this model that we’re training. We’ve said, here is what the data looks like for walking. Here’s what the data looks like for running. Our model can learn to distinguish between these categories without us knowing the exact rules and the exact heuristics that indicate those. The model figures that out for us.

Demo: Machine Learning in 2 Minutes

I want to give a quick demo of this in action. It’s a live demo. I’m going to use a tool that we have released recently at Google called Teachable Machine. You can try this yourself. It’s totally free. Teachable Machine is basically a studio for training your own machine learning models very easily, for prototyping experiences super fast. I’m going to do an image-based project. What I’m going to do is build a rock, paper, scissors recognition model. Each of the activities that I’m trying to classify, either rock, paper, or scissors, is represented here. I’m going to make those. I got rock, paper, and scissors. Then I’m going to capture some camera data of myself doing the rock, paper, and scissors signs. Does everybody know what rock, paper, scissors is? I’ll do that via the webcam. I just need to capture a bunch of photos of myself doing this rock sign. I’m going to try that now. Here’s my rock. I’m turning it all around. You can see it from a bunch of different angles. It understands not just one image of a hand, but generally, a hand rotated all around can still represent rock. I’m going to do the same for paper.

I don’t really need that much data here. Let’s do the same for scissors now. I’ve got less than 100 samples for some of them. The rock has a few more because I was talking while I was doing it. It doesn’t really matter. What we’re going to do now is train a model that uses these images and the labels to understand what is meant by rock, paper, and scissors gesture. What happens during training is very complicated. I’m not going to go into it now. There’s a lot of literature and a lot of interesting stuff online that you can read about how this works. Essentially, what we’re doing here is we’re taking a model that was already trained on vision. It understands how to break apart a visual scene into lots of different shapes, colors, and objects, and things. Then we take that pre-trained model and we can customize it a little bit so that it specifically understands what the rock, paper, and scissors’ gestures mean. Right now I’m not doing anything, so it’s oscillating wildly between the three. Let’s see if I can do rock. We got really high confidence there. If I did paper, also works. Then scissors, scissors is a little bit harder to discern from paper, but yes, there we go, it’s working pretty well.

This is how machine learning works: you capture data, label it, and feed it into a model during training; the model gets adjusted so that it can do this stuff in the future. Then you get something pretty robust, potentially pretty quickly. The basic concepts behind this technology have been around a while, but being able to do this reliably, easily, and quickly has only been figured out in the last five years or so. It’s pretty exciting.

Key Terms

I want to cover some key terms just so that we’re able to talk about this stuff fluently. The first thing that I’ll define is a dataset. A dataset is the data that we’re going to be feeding into the model during training. That includes, in the case I just showed, the photos of me doing the gesture, and also the label.

Training is the process of taking a model. A model is basically a bunch of data structures that are woven together in a certain way, and gradually adjusting that model so that it is able to make predictions based on that dataset.

The model itself, at the end of training, can be represented either as arrays in memory, or as data on disk. You can think of it as a file. A package of information that contains a representation of how to make the predictions that we train the model for. That’s portable. You can take it from device to device and run it in different places.

The process of running the model is called inference. When you take a thing the model hasn’t seen before, and you run it through the model and get a prediction. That process is called inference. That’s separate from training, which is where you take some labeled data and teach the model how to understand it. Those are the two main parts of machine learning: training and inference.

What I’m going to talk about today mostly falls into the bucket of inference, because inference is the thing that’s most useful to do on edge devices. Training usually takes quite a lot of power, quite a lot of memory, and quite a lot of time. Those are generally three things that edge devices don’t have. We’re really mostly talking about inference here. There are some cool technologies for doing training on edge devices. We’ll talk about those later if anyone has any questions.

What inference looks like in an application is this. First of all, we have our model and we load it into memory. We then take our input data, and we transform it into a way that fits the model. Every model has different input parameters. For example, the model we just trained here, the model would have had a fixed input size. It takes an image with a certain number of pixels. If we’ve got data from a camera that has a different resolution, we want to transform that so it fits the size of the model. We have to do that generally too. We then run inference, which is done by an interpreter, which takes the model, takes the data, runs the data through the model, and gives us the results. Then we figure out how to use the resulting output. Sometimes that’s very easy. It’s just some category scores in an array. Sometimes it’s something that’s a little bit more complicated, and we need to write some application code to make it clear what’s going on.

Application Code

To show you all the parts of this application, a typical ML application, first of all, we have our input data that could be captured from a sensor, or a device, or it could be just data that exists in memory somewhere. We then do some pre-processing. We’re getting it ready to feed into the model. Every model has a different format that it expects. That’s just defined by whoever created the model. We then load the model and use an interpreter to run inference using the model. Then we do some post-processing that interprets the model’s output and helps us make sense of it in context of our application. Then we can use that to do something cool for the user. TensorFlow Lite has tooling to do all of this stuff. It has components that cover every aspect of this process, and that you can use to easily build mobile and embedded applications that use machine learning.
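The four steps above can be sketched schematically. This is a toy stand-in "model", not the real TensorFlow Lite interpreter API, so the pipeline shape is visible without any ML framework; all names and the 2x2 input size are made up for illustration.

```python
# Schematic of the four inference steps: load, pre-process, run, post-process.
# The "model" here is a trivial stand-in that classifies a 2x2 grayscale
# image by its mean brightness.

def load_model():
    # 1. Load the model into memory. Real models are files on disk; this
    #    stand-in just records its fixed input size and output labels.
    return {"input_shape": (2, 2), "labels": ["dark", "bright"]}

def preprocess(raw_pixels, model):
    # 2. Transform the input so it fits the model's expected parameters
    #    (here: a naive crop down to the model's fixed input size).
    h, w = model["input_shape"]
    return [row[:w] for row in raw_pixels[:h]]

def run_inference(model, tensor):
    # 3. The interpreter runs the data through the model and
    #    returns category scores.
    mean = sum(sum(row) for row in tensor) / (len(tensor) * len(tensor[0]))
    return [1 - mean / 255, mean / 255]

def postprocess(scores, model):
    # 4. Make sense of the output in the context of the application:
    #    pick the highest-scoring label.
    return model["labels"][scores.index(max(scores))]

model = load_model()
tensor = preprocess([[200, 210, 0], [190, 220, 0], [0, 0, 0]], model)
print(postprocess(run_inference(model, tensor), model))  # bright
```

In a real TensorFlow Lite app, step 3 is handled by the framework's interpreter, and steps 2 and 4 are the pre- and post-processing code the talk describes next.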

In our exploration of TF Lite, we’re going to go through an intro. We’ll talk about how to get started with TensorFlow Lite, how to make the most of it once you start using it seriously. Then we’re also going to talk about running TensorFlow Lite on microcontrollers or MCUs, which are the tiny devices that power all of our gadgetry that is smaller than a mobile phone or embedded Linux device.

Edge ML Explosion

The case for doing ML on edge devices is threefold. First of all, if you do ML on a device, you have lower latency. The original model of ML, from a couple of years ago, is that you have some big, crazy ML model running on a big powerful server somewhere. If you want to do machine learning inference, no matter what it’s for, you send your data up to that server, and the server does some computation and sends you the result back. That results in pretty high latency. You’re not going to be able to do nice, real-time video or audio stuff in that case. All your interactions are going to have some latency and you’re going to have to worry about things like bandwidth. Network connectivity is an issue if you’re trying to do inference that is not on-device. This comes down to bandwidth and latency.

The other thing is, if we’re able to do ML on a device, then none of the data needs to go to the cloud. That’s much better for the user and it’s much better for you as a developer, because you don’t have to deal with the hairy issues surrounding user data.

If you can do away with some of these problems, you’re able to build a whole generation of new products that weren’t possible before. If we’re thinking about medical devices that can operate without ingesting loads of user data, or devices that are doing video modification in real-time. This is an example of something which you’d really struggle to do with server-side ML. On the device here, we have a model that’s doing facial landmark detection. It can pick out where your eyes, ears, mouth, nose are. That’s allowing the app developer to add some animations and add some features onto a photo. If we’re trying to do this over a network connection, it would be really laggy and slow, and you wouldn’t have that great an experience. Whereas here, running on a phone, it works really nicely.

Another example of this is we’re doing pose estimation. This is a type of ML model that takes an image of a person as an input. It’s able to figure out what their different limbs and body parts are, and give you coordinates for those. In this case, the kid’s able to get a reward in the app for having their dancing matched up with the dancing in the little video, this in-set. This is another thing where you need super low latency. It has to happen on-device. Also, you probably, if you have built a game or a toy, don’t want to be streaming loads of data. It’s bad from a privacy perspective. It also means you have to spend a lot on bandwidth. In this case, it’s super suited to Edge ML.

Here’s another use case. Imagine you’re on vacation somewhere, or you’re reading a book in a foreign language, and you want to look at definitions for words, and you maybe don’t have good connectivity. This is a really good example of another place where an edge model makes sense, because you can do all stuff on-device that maybe you wouldn’t be able to do if you didn’t have an internet connection.

Thousands of Production Apps Use It Globally

There are thousands of apps using ML and using TensorFlow Lite for Edge ML at the moment. Google uses it across pretty much all of its experiences. Then there are a bunch of big international companies that are also doing really cool stuff. There are 3 billion-plus mobile devices globally, that are running TensorFlow Lite in production. This is a really good platform, a really good tool to learn if you’re interested in doing this type of thing, because it’s already out there. There are a bunch of guides and a bunch of examples of how to use things. It’s battle tested by some of the biggest companies.

TensorFlow Lite – Beyond Mobile Devices

Beyond mobile devices, TensorFlow Lite works in a bunch of different places. Android and iOS are obviously big targets. Another place is embedded Linux. If you’re building stuff on things like Raspberry Pi, and similar platforms, you can use TensorFlow Lite to run inference on-device in maybe an industrial setting. Or we’ve seen people doing things on wildlife monitoring stations that are set up in the jungle somewhere, or places that are disconnected from a network, but you want to have some degree of intelligence. We also have support for hardware acceleration. This category of devices that are basically small embedded Linux boards with a chip on board that is dedicated to running ML inference really fast. There are chips from Google. We have this thing called the Edge TPU, which lets you run accelerated inference on these devices. NVIDIA has some similar products. There are a ton of them that are on the way.

Our final target is microcontrollers. Microcontrollers are a little bit in a class of their own here because they have access to far fewer resources. In terms of memory and processing power, they’re vastly smaller. They might have a couple hundred kilobytes of RAM and maybe a 48 MHz processor. They’re designed for very low power consumption. We’re able to run TensorFlow Lite on those. We can only run much smaller models, but you can do some really exciting stuff still. We’re pretty much running the gamut from really powerful little supercomputers like mobile phones, all the way down to tiny, little microcontrollers that cost a couple of cents each.

I want to talk a little bit more about on-device ML. The previous model is that we have an internet connection to a big server that’s powerful and running ML. The device is located in an environment where it’s collecting data, and in order to run inference on that data and do anything intelligent, it has to send that data back. With ML on the edge, the only part of this system that exists is just the connection between the environment and the device. You don’t have to worry about other connectivity. That helps you with bandwidth. You’re not sending lots of data everywhere. It helps you with latency for things like music or video: you can actually build applications whose latency is below what humans are able to perceive. It’s much better from a privacy and security perspective, because you’re not sending people’s data anywhere. It rules out a lot of complexity. You don’t have to maintain this back-end ML infrastructure that you might not have any experience with. Instead, you can just do everything on-device.

There are some challenges that come with this. One of them I mentioned a few times are you might not have access to much compute power. The extreme case is these tiny, little microcontrollers, with very little memory and processing ability. Even a mobile phone, you have to think about how much computation you want to do in order to preserve battery life and things like that. You might not have a lot of memory no matter where this thing is running. Battery is always really important. Whether it’s on a smartwatch or an embedded device, you’re always going to think about power.

TensorFlow Lite is designed to make it easier to address some of these issues. The other big thing it allows you to do is take an existing ML model, convert it for use with TensorFlow Lite, and then deploy that same model to any platform. Whether you want to be running on iOS, or Android, or on an embedded Linux device, the same model will be supported in multiple places.

Getting Started with TensorFlow Lite

I want to talk a little bit about how to actually use TensorFlow Lite. I’ll probably go through this at a fairly high level. There’s a lot of documentation available. I just want to give you a taste of the level of complexity here for an application developer. The first thing I want to do is show you an example of TF Lite in action in a bigger experience. This year for Google I/O, we built an app called Dance Like. Basically, it’s a fun experience built on TensorFlow Lite that uses a bunch of chained together ML models to help you learn to be a better dancer. I will show you a quick video about how it works.

Dance Like

Davis: Dance Like enables you to learn how to dance on a mobile phone.

McClanahan: TensorFlow can take our smartphone camera and turn it into a powerful tool for analyzing body pose.

Selle: We have a team at Google that had developed an advanced model for doing pose segmentation. We were able to take their implementation, convert it into TensorFlow Lite. Once we had it there, we could use it directly.

Agarwal: To run all the AI and machine learning models. To detect body parts. It’s a very computationally expensive process, where we need to use the on-device GPU. TensorFlow library made it possible so that we can leverage all these resources, the compute on the device and give a great user experience.

Selle: Teaching people to dance is just the tip of the iceberg. Anything that involves movement would be a great candidate.

Davis: That means people who have skills can teach other people those skills. AOA is just this layer that really interfaces between the two things. When you empower people to teach people, I think that’s really when you have something that is game-changing.

Situnayake: To give you a sense of what this looks like in action, the way it works is that you dance alongside a professional dancer who’s dancing at full speed. You dance at half speed, then we use machine learning to take your video and speed it up and beat match you with the dancer’s actual movements, and give you a score for how well you did.

We’ve made it Easy to Deploy ML on-device

The whole idea of TensorFlow Lite is to try and make it easier to deploy these types of applications. You can focus on building an amazing user experience without having to focus on all of the crazy detail of managing ML models and designing a runtime to run them.

There are four parts to TensorFlow Lite. One part is that we offer a whole bunch of models that you can pick up and use, or customize to your own end. Some of the models we’ve seen already, like the pose detection model, they’re available. You can just grab them and drop them into your app and start using them right away. We also let you convert models. You can take a model that you’ve found somewhere else online, or that your company’s data science team has developed, or that you created personally, in your own work with ML, and basically translate that into a form that works better on mobile. You then have the ability to take that file, deploy it to mobile devices. We have a bunch of different language bindings and support for a bunch of different types of devices. We also have tools for optimizing models so you can actually do stuff to them that make them run faster and take up less space on-device.

Workflow

The workflow for doing this is pretty simple. First, you get a model. Secondly, you deploy it and run it on devices. There’s not much more to it than that. I’m going to show how that works. First of all, you don’t have a model. You don’t even know what one really is. You can still get started really easily. On our site, we have this index of model types. You can go in, learn about each one. Learn about how it might solve the problems that you’re trying to solve. We have example apps for each of those so that you can actually see them in iOS and Android apps running. That covers everything from image classification, where you’re figuring out what’s in an image, all the way through to text classification using BERT.

An example is image segmentation: that’s when you’re able to separate the foreground and background of an image, or basically figure out which pixels in an image belong to which objects. In this case, we’re figuring out that some parts belong to a person. In the left-hand part, we’re figuring out what the background is and blurring it so it looks like a pro photo. In the second one, we’re letting the user replace the background entirely so they just look cool.

The second model is the PoseNet model for figuring out where your limbs are. You can use that data as a developer to do loads of stuff. Maybe you’re drawing stuff onto the screen on top of people’s bodies. You could also take this data as input to another Deep Learning network that you develop, one that is able to figure out what gestures people are doing, and maybe what dance they’re doing, or what moves in a fighting game.

MobileBERT

A really cool thing that we just launched is MobileBERT. This is a really small version of BERT, which has almost as good accuracy and works on mobile devices. BERT is a cutting edge model for doing various different classes of text understanding problems. In this case, we can put a corpus of text. You can see there’s a paragraph of text here about TensorFlow. The user can ask questions about it. The model picks out parts of the text that answer the question. You can paste in any text you want. It could be FAQs for a product. It could be a story. It could be a biography. The model is able to answer questions based on it. You can just weave that into your application for whatever thing you want to do with it.

Beyond the models that we’re giving away (and we’re adding more all the time; we have a whole team devoted to just that), we also support all different types of models that you can run through the TensorFlow Lite converter and use on your mobile device. These are some of the ones that we’ve identified as the most exciting from a mobile application developer’s perspective. You can convert pretty much any model.

We’ll talk a little bit about how that works. Imagine you’ve built a model with TensorFlow. If you’ve never used it before, basically, there are some high-level APIs that let you join layers of computation together to build a machine learning model, and then train it. It’s actually pretty easy to use. You can get up and running really quickly. There are some really good guides online. Once you’ve done that, you can just convert your model to run on mobile with a couple lines of Python.
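Those "couple lines of Python" look roughly like the following, using the TensorFlow 2.x `tf.lite.TFLiteConverter` API. The function is a sketch and is not executed here, since it requires TensorFlow installed and a trained SavedModel on disk; the paths are hypothetical.

```python
# Sketch: converting a trained TensorFlow SavedModel to a .tflite file.
# Not run here; requires TensorFlow and a trained model on disk.

def convert_to_tflite(saved_model_dir, output_path="model.tflite"):
    import tensorflow as tf  # deferred so this sketch loads without TF

    # Build a converter from the SavedModel directory and convert.
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    tflite_model = converter.convert()

    # The result is a flat binary you can ship to mobile/embedded devices.
    with open(output_path, "wb") as f:
        f.write(tflite_model)
    return output_path
```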

Once you’ve got your model ready, we want to run it. Running it is also super easy. In this example, first of all, we’re loading a model file, and instantiating an interpreter with that model. We’re then processing our input to get ready to feed into the model. Then we run the interpreter with that pre-processed input.

The New TF Lite Support Library Makes Development Easier

We also have this support library, which actually can provide you with high-level APIs, and eventually, auto-generated code to be able to pre-process your data and feed it into whatever type of model that you want. You’ll be able to find a model online and run it through the support library. The support library will generate some classes for you that you can drop into your application that will do all of this pre-processing work for you. You really just think of it as an API to get the results of the inference.

This is what the pre-processing code looks like, without using the support library, for transforming an image into the form that it needs to be for the model to understand. With the support library, it turns into a few lines of code. This is pretty awesome. It makes it a lot easier to use random models that you’ve found without having to really deeply understand the input format that they require.

We talked about the converter and the interpreter, and these make use of another couple of high-level things called Op Kernels and Delegates. We’ll talk about those a little bit more later.

Language Bindings

We have language bindings for a ton of different targets. You might be working on iOS, or Android, or on embedded Linux. We’ve got you covered. We have Swift, Objective-C, C, C#, Rust, Go, and Flutter. You basically have libraries, supported either by us or by the community, for pretty much anything you can think of.

Running TensorFlow Lite on Microcontrollers

I want to also talk about microcontrollers, but a little bit separately because we have two interpreters. There’s an interpreter that runs on mobile devices. Then there’s a super efficient, super handcrafted interpreter that runs on microcontrollers, because they need such efficient code.

Microcontrollers are these tiny computers that are on a single piece of silicon. They don’t have an OS. They have very little RAM. They have very little code space as well for storing your program. You can’t put really big models on there. You can’t do loads of computationally intensive stuff quickly. They are built into everything. They’re really cheap. There are actually, I think, 3 billion microcontrollers produced every year in all types of devices. By being able to add Deep Learning based intelligence to all things, we’re talking about having stuff like microwave ovens, and kitchen appliances, and components inside of vehicles, and even smart sensors that can just take an arbitrary input and give you a very simple output that you can then build into other products.

An example of how you might use microcontrollers for inference is, maybe you’ve got a product that is figuring out what a person is saying. Imagine you’re building a smart home device that can understand speech. You want it to use as little power as possible. You might have a Deep Learning network running on a microcontroller that, first of all, figures out if there’s any sound that seems worth listening to. When that sound happens, the output of that model is used to wake up a secondary model, which is actually looking to figure out whether the sound is human speech or not. That’s something that would be difficult to do without Deep Learning. Once you’ve figured out that, yes, this is human speech we’re hearing, you can wake up the application processor that actually has a deeper network that does speech recognition. By cascading models in this manner, we’re able to make sure we’re saving energy by not waking up the application processor for every little noise that happens. This is a really common use case for this type of technology. With TensorFlow Lite for microcontrollers, you use the same model, but there’s a different interpreter, and the interpreter is optimized very heavily for these tiny devices.
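The cascade described above can be sketched as follows. The detector functions are trivial stand-ins for the real models, and the thresholds are made up; the point is the control flow, where cheap always-on models gate progressively more expensive ones.

```python
# Sketch of a cascading wake-up pipeline: cheap always-on models gate
# more expensive ones so the application processor stays asleep most
# of the time. The "models" are trivial stand-ins with made-up thresholds.

def sound_detector(audio):
    # Tiny, always-on MCU model: "is this sound worth listening to?"
    return max(audio) > 0.1

def speech_detector(audio):
    # Second, slightly larger MCU model: "does this sound like speech?"
    return sum(audio) / len(audio) > 0.2

def wake_application_processor(audio):
    # Only reached for likely speech; this is the expensive step.
    print("app processor awake: running full speech recognition")

def on_audio_frame(audio):
    if not sound_detector(audio):
        return "asleep"       # nothing interesting; stay low-power
    if not speech_detector(audio):
        return "listening"    # a noise, but not speech
    wake_application_processor(audio)
    return "recognizing"

print(on_audio_frame([0.0, 0.01, 0.02]))  # asleep
print(on_audio_frame([0.0, 0.15, 0.05]))  # listening
print(on_audio_frame([0.3, 0.4, 0.5]))    # recognizing
```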

This is a tiny little microcontroller using very little power. The whole cost of the actual microcontroller itself would be a couple of dollars. The camera is also very low power and cheap. It’s able to detect whether a person is in the frame or not. The way we’ve got this set up, there’s a display. This can actually be boiled down into a tiny device that’s the size of your fingernail, which you could put in any product. It can give you a Boolean output of, if there’s a person visible near the device, it gives you a 1, and if there’s no person visible, it gives you a 0. It uses barely any power. You don’t have to know anything about machine learning to be able to use this thing. You just put that in your hardware product. You can have a TV that shuts off when no one’s watching it automatically for the cost of a couple of extra dollars of the manufacturing cost. These smart sensors are going to absolutely transform the world around us. This type of technology has only existed for a matter of months. TensorFlow Lite for microcontrollers was announced in February of this year. We’ve not even remotely started to see the applications people are developing. If you have any interest in embedded development, you should definitely start playing with this stuff because it’s really fun and surprisingly easy.

Here is another video from our partners at Arduino. We’re able to run TensorFlow Lite for microcontrollers on the most recent Arduino devices. They’ve got some tutorials for doing all cool stuff like recognizing gestures or recognizing objects from the sensors on-device. You can actually just grab the examples for Arduino from within the Arduino IDE because we’ve published a library that’s really easy to use.

On an MCU, we can do everything from speech recognition through to interpreting camera data, we can do gesture recognition using accelerometers. We can do stuff with predictive maintenance where you’re looking at the vibration of an industrial component to figure out when it’s going to break so that your whole factory doesn’t explode. This is all stuff that is really exciting because you can push intelligence down close to these sensors, and do this type of inference really cheaply.

Speech recognition, we have an example for. There’s a 20 KB model that can discern between the words yes and no. We also have scripts you can use to retrain it for other stuff. This is really exciting to just play with.

Person detection is my favorite, really, because it's just so mind-blowing. You have a tiny, little camera. The model is 250 KB. It won't fit on every embedded device, but it will fit on some tiny devices. You can run scripts to retrain this easily to recognize other objects. If you want to build a smart sensor that, on your bicycle, will tell you when there's a car coming up close behind you, you can do that super easily.

We have an example where you have a device with an accelerometer. You can use it as a magic wand. You can do different gestures and cast different spells. We built a game that lets you do that. Obviously, there are some practical implications for this too in activity trackers. The model for that is also really small, 20 KB, and it’s trained with data captured from 5 people. It probably took an hour to capture all the training data. It’s super easy to really be able to build something powerful.

Improving your Model Performance

Beyond microcontrollers, and across all of this Edge ML stuff, you have to think about how you make models that perform well on tiny devices. We have all the tooling to do that, too. The big thing for TensorFlow Lite is performance across different types of devices. We've got really good performance across a bunch of different types of accelerators. If you're just running on CPU, it runs pretty quick. If your device, like most mobile phones, has access to a GPU, you can run models super fast because they involve calculations that are highly parallelizable. If you have a hardware accelerator like the Edge TPU, you can run inference ridiculously fast.

There are also a bunch of techniques you can use to improve the performance of your model. The TensorFlow Lite converter can help with all of these. It can do things that make your model smaller and make it run better on different types of devices. If you’re on CPU, you can do something called quantization, which basically involves reducing the precision of the numbers in the model while taking that into account during inference. By default, the numbers in a TensorFlow model are represented as 32-bit floating points. If we reduce them down to 8-bit integers, we can make the model a fourth the size but still keep most of the same performance.
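To make the quantization arithmetic concrete, here is a minimal pure-Python sketch of the affine 8-bit scheme, where a real value is approximated as scale × (quantized − zero_point). This is illustrative only; the TensorFlow Lite converter handles this (and much more) for you, and the example values are made up.

```python
# Minimal sketch of affine 8-bit quantization: real ≈ scale * (q - zero_point).
# Illustrative only -- the TFLite converter does this for you in practice.

def quantize(values, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    lo = min(min(values), 0.0)  # the representable range must include 0.0
    hi = max(max(values), 0.0)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = int(round(qmin - lo / scale))
    q = [min(qmax, max(qmin, int(round(v / scale)) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.75, 1.5]     # stand-in for 32-bit float weights
q, scale, zp = quantize(weights)            # stored as 8-bit integers instead
recovered = dequantize(q, scale, zp)
# Every recovered weight is within one quantization step of the original,
# which is why the model keeps most of its accuracy at a fourth the size.
```

Note that the range is forced to include zero so that 0.0 is exactly representable, which matters for things like zero-padding in convolutions.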

Pruning

Pruning is another really cool technique. The model is basically a network of neurons, and some of the connections between the neurons are very important, while others are not so important. If you figure out which ones are not that important and cut them and just ignore them, you don't have to represent that data anywhere, and you don't have to do that computation. We have tooling that lets you do that, so the model can basically run more efficiently without any reduction in accuracy.
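A rough sketch of the idea, assuming magnitude-based pruning (one common criterion for "not important"): zero out the connections with the smallest absolute weight. The real TensorFlow model-optimization tooling prunes gradually during training, which is what preserves accuracy; this toy version just shows the selection step on made-up numbers.

```python
# Sketch of magnitude pruning: zero out the smallest-magnitude connections.
# Illustrative only; real tooling prunes gradually during training.

def prune(weights, sparsity=0.5):
    k = int(len(weights) * sparsity)  # how many connections to cut
    # indices of the k weights with the smallest absolute value
    drop = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

layer = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]  # hypothetical layer weights
pruned = prune(layer, sparsity=0.5)
# The three weakest connections become zeros, which need not be stored
# or multiplied at inference time.
```

A sparse model like this can then be stored compressed and skipped over at compute time, which is where the efficiency win comes from.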

We have a bunch of really low-level stuff that you can get into the weeds around to figure out how to do this stuff more efficiently. I won't go into a lot of detail here. We have mechanisms for making use of the types of accelerators that are in all devices, from mobile phones through to these specialized accelerators. GPU delegation is one of those. You can run a model on the GPU of your device. You can also make use of DSPs, which are purpose-built chips inside a lot of devices that allow you to do really fast calculations on this type of data. We make use of Android's NNAPI, and also Metal on iOS. It's really easy to do this. You can add an option to your interpreter to tell it to use acceleration.

Here’s an example of how you can basically optimize models for different types of uses. Inception by itself is a 95 MB model for image classification. Google developed MobileNet, which is a model that does exactly the same thing, almost as good accuracy but much faster. When you’re thinking about deploying models to your device, there are often mobile-optimized versions of different types of popular models. You should look for those and use them if available.

We have tools for profiling how long it takes to do various stuff. You can even do that down to the operator level. ML models are built out of these different operations and you can identify which ones are taking the longest. If you’re trying to run a model on a device, you can figure out which parts of the model are not running fast, and you can work with your ML engineers to optimize that.

There are also ways to use models that are not fully supported by TF Lite. TensorFlow Lite has a few hundred ops; TensorFlow has 1,000 or so ops that you can use. You can import those ops from TensorFlow to use in your mobile applications. You can also selectively build the TensorFlow Lite runtime so it only includes the ops that your model uses. This is another way to get your binary size as small as possible. You can set some stuff up in your code, and then use the Android build tools to tell it to only grab the stuff it needs.

How to get started

Hopefully, I’ve covered some very high-level ML stuff. I’ve talked about on-device ML and why it’s called, and why it should exist, and how it’s going to make a big difference in the future. Also, I’ve given you an idea of what you will start to think about when you start working in this stuff as an engineer. Let’s point to some resources that you can use to get started.

We actually just launched a course with Udacity that covers TensorFlow Lite, end-to-end. If you’re interested in getting started with TF Lite, definitely search for this, check it out. We cover inference on Android, iOS, and Raspberry Pi. If one of those platforms interests you, you can ignore the other stuff and you can pick and choose what you want to learn.

If you’re interested in the microcontroller side, I’ve just co-authored this book with Pete Warden who is the guy on our team who pretty much helped invent this space. This book is going to be available in mid-December. If you are an O’Reilly subscriber, you can read the early release version already. We basically give you an introduction to how embedded ML works and how you can use TensorFlow Lite to work with that. It’s written so that even if you’re not an embedded developer or if you’re not an ML developer, you can still build all the projects in the book.

If you’re especially interested in these embedded stuff, we run monthly meetups on embedded ML. There are two right now. One of them is in Santa Clara, and one of them is in Austin. We are actually launching more all the time. We get an inbound request from someone who’s interested in starting up a group every couple weeks. There are going to be these all over the world. If there isn’t one in your local community now, there will be soon. It’s really cool to go and meet people, and see presentations from people who are doing cool stuff in this space.

The main place to go for info on TensorFlow Lite and all the stuff we talked about is our TensorFlow Lite doc site. We’ve got information on everything that I’ve talked about today. We’re actually going to be revamping the docs over the next month or so, just so that they are even more inclusive of all the information that you need.




Audi Releases Autonomous Driving Dataset

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

Researchers at Audi have released the Audi Autonomous Driving Dataset (A2D2) for developing self-driving cars. The dataset includes camera images, LiDAR point clouds, and vehicle control information, and over 40,000 frames have been segmented and labelled for use in supervised learning. The dataset can be used for commercial purposes.

The research team described the dataset and compared it with other similar datasets in a paper published on arXiv. A2D2 was captured using six cameras, five LiDAR units, and vehicle bus data, which includes steering and throttle control state as well as speed and acceleration data. 41,277 frames contain image and point cloud labels from semantic segmentation, with every pixel assigned one of 38 labels such as “pedestrian” or “truck.” 12,497 of those frames also contain 3D bounding boxes of the objects. The dataset is published under the CC BY-ND 4.0 license, which allows commercial use subject to the terms of the license. The research team says,

We release A2D2 to foster research, in keeping with our ethos of promoting innovation and actively participating in the research community.

The data collection platform was an Audi Q7 e-tron, with the six cameras and five LiDAR units rack-mounted on the roof. Three cameras faced front, one to the rear, and one to each side, providing 360° coverage. Three LiDAR units faced front and two to the rear. The LiDAR scan patterns were set up to provide maximum overlap with the camera images, which allows them to scan large areas above the vehicle and identify tall buildings, making the dataset "particularly relevant for SLAM and 3D map generation." The dataset includes "extensive" vehicle bus data; according to the team, "to the best of our knowledge other multimodal datasets do not provide such data." The data was collected in a variety of urban and rural locations. Besides the labelled data for use in supervised learning, there are an additional 390,000 unlabeled sequential frames "suitable for self-supervised approaches."

Audi’s paper compares A2D2 to several other publicly-available autonomous driving datasets, including Waymo Open Dataset (WOD) and Lyft Level 5 (LL5), which were released last year. While all three datasets were collected from comparable numbers of cameras and LiDARs, the Lyft and Waymo datasets were captured exclusively in urban sites. Lyft’s dataset contains no vehicle data, while Waymo’s contains only vehicle velocity, in contrast to A2D2’s extensive vehicle data.

Commenters on Twitter and Reddit have also noted that A2D2’s license, unlike that of most other publicly-available autonomous driving datasets, allows commercial use:

The good thing about this dataset is that, unlike KITTI, Waymo, etc, you can use this for commercial works. This is because it is licensed under CC BY-ND 4.0.

A2D2, WOD, and LL5 join the growing list of datasets from both commercial and academic sources. Udacity’s dataset recently made headlines when its self-driving car project’s dataset was found to contain images with “thousands of unlabeled vehicles, hundreds of unlabeled pedestrians, and dozens of unlabeled cyclists.” Udacity has since updated its repository to note that the data is “intended for educational purposes only,” urging users to “explore newer, more complete datasets.”

Audi’s A2D2 dataset can be downloaded from the project site, which also contains a Jupyter notebook tutorial.
 



Podcast: Collective Sensemaking and Deliberately Developmental Conversations

MMS Founder
MMS Antoinette Coetzee Jason Knight

Article originally posted on InfoQ. Visit InfoQ

In this podcast recorded at Agile 2019, Shane Hastie, Lead Editor for Culture & Methods, spoke to Antoinette Coetzee and Jason Knight about Collective Sensemaking and Deliberately Developmental Conversations.

Key Takeaways

  • We are generally unaware of our own developmental stage in building relationships
  • Raising awareness and exploring our own perceptions is possible and a powerful tool for building relationships with others
  • Psychological safety is a precondition for developmental conversations, and it needs to be paired with psychological challenge
  • You can’t have psychological challenge without psychological safety and you won’t have any growth unless there is psychological challenge as well
  • The participants have to be mutually committed to each other’s development and to their own development in order to help each other grow in areas that they need

Subscribe on:

Show Notes

  • 00:00 Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. I’m at Agile 2019 and I’m sitting down with Antoinette Coetzee and Jason Knight. Antoinette. Jason, welcome. Thanks for taking the time to talk to us.
  • 00:18 Antoinette: Thanks for asking us. It’s a pleasure.
  • 00:20 Shane: Antoinette, you and I have known each other for a number of years and communicated for quite a long time, but it's nice to get you behind the microphone for a change.
  • 00:29 Jason, this is the first time you and I have met, but I’m reasonably sure most of our audience have probably not come across either of you. Would you mind briefly introducing yourself, please?
  • 00:38 Jason: My name is Jason Knight. My life has consisted mainly of growing up in a small town in Oklahoma called Pryor Creek, going to school and college for computer science, becoming a software developer, and then one day discovering Scrum and agility and realizing that there was this larger system that I could care for, that I was quite good at doing so, and that it was more fulfilling to do so. That pivot led me to meet many cool folks, some at Agile 2017 and even before that; I got to know Antoinette here through Evolve Agility and Michael Hamman and that group. And today I'm here with you talking about leadership, another subject of passion.
  • 01:18 Antoinette: I am a South African. I live in lovely Cape Town, but I love traveling. So, it was actually in South Africa that I came across agile. We were very fortunate in that some of the original manifesto signatories were out in Cape Town for a conference, and somehow we secured them to come and work with us. And it changed my life forever.
  • 01:38 So after that, I went to the US. I thought, you know, it's the big US versus the tiny little country at the bottom end of Africa; I thought that in the US I'm going to learn lots of stuff. And to my surprise, I ended up introducing a lot of people that I worked with in the US, it was kind of the early 2000s, to this concept of, well, it wasn't even called agility at that point.
  • 02:01 And I realized after a couple of years that I enjoyed that more than doing software development, and I became a coach and had the fortune of working with quite a few of the big names in the agile coaching world. I'm part of the ACI faculty, and also, through the work that we've done with Michael Hamman, working with him as well. My passion is taking all of that back to South Africa and sharing it with people in South Africa.
  • 02:29 Shane: At the conference, the two of you gave a talk on collective sense-making. What does that mean?
  • 02:35 Jason: Well, specifically, it's this idea of a conversation where we together examine, deliberately, slowly, and with a sort of meta-skillfulness, our sensations, what we're telling ourselves about what we're sensing, the sense that we're making, so that, in a richer way, we decide what to do and we avoid the trap of doing that instinctively, by rote. There's more to it, but that's a start.
  • 03:02 Antoinette: Yes. So the big advantage for us, when we have these kinds of relationships with one another that make these conversations possible, is the fact that we can help examine one another's thinking, which very often stands in the way of real answers.
  • 03:18 So we are oblivious to our own thinking and our own developmental level that we access at the time we try to make sense of stuff. When there's more than one of us, when we create the right conditions in terms of relationship, psychological safety as well as psychological challenge, we can break through those boundaries, and not only does that provide us with better solutions, but it also enables us to grow one another.
  • 03:44 So that’s where the deliberately developmental part of the whole conversation and way of interacting coming.
  • 03:50 Jason: Yes. And so it may be, we’ll say, advantageous to think more deliberately about things, and to say, be much more aware of our own biases or limitations of our perspective. That’s a pragmatic use to an organization.
  • 04:04 However, the thing that she just mentioned is the real thing that gets me excited about this. So the idea that you and I, in a certain kind of mutually beneficial relationship can develop one another. In ways that would be very difficult or even impossible to do if we were by ourselves.
  • 04:21 Shane: I heard a buzzword: psychological safety. Every team needs psychological safety; Google's done it. Then there's the real deep work from Amy Edmondson and others that does show very clearly the value of that. But then you brought in something we don't hear often: psychological challenge. Tell me more.
  • 04:43 Antoinette: The idea of psychological safety in its basic form is that we should be able to say whatever needs saying without fear of any difference in the way people see us or regard us, or any,
  • 04:56 Jason: any harm that might come to us.
  • 04:58 Antoinette: That’s right. So that quickly creates a kind of, I want to say mealy-mouth, the kind of “meh” in a team unless there’s some psychological challenge as well. So you know, if we are always just making it okay for everybody to say what needs to be said without challenging one another.  So, for instance, in our regular collective sense-making group, we give one another very strong feedback. We call one another on stuff. So very often we see Jason as the youngest member of the group, and very often when we call him on stuff, he gets a lovely beetroot color.
  • 05:34 Jason: I know you can’t see this on the podcast, but it happens.
  • 05:38 Antoinette: The growth is not going to happen if we only provide psychological safety.
  • 05:43 They are equally important. You can't have psychological challenge without psychological safety, and you won't have any growth unless there is psychological challenge as well.
  • 05:51 Jason: A way that I think and talk about this is you’ve got a group of people and they are a high-performance sports car. Going faster on hairpin turns is not safe necessarily, and it’s even more unsafe unless you have high-performance brakes.
  • 06:05 To me, I think of the psychological safety that is necessary inside of a group like those high carbon brakes, then you can really take your group and your team and push and find those really difficult conversations in the safety of knowing that if we need to slow down, we will. It won’t be held against me if I say something that could be edgy.
  • 06:24 Antoinette: Our growth lies on the other side of our current worldview. Growth happens when we cannot solve the problems that we are currently experiencing, given what our current worldview is. And it's your relational buddies in collective sense-making who will call out to you when you are staring yourself blind against that worldview.
  • 06:48 Jason: That’s right. The word that we use, one of the conditions, says the topic of discussion and collective sense-making conversation, it’s a word that most people don’t use. I’ve found: disequilibrating something that has that quality of edginess or off-balance or causes you to challenge your deeply held belief, but that is necessary for you or the group to encounter, suss out the boundaries, the limits of it. It’s very difficult to do that by yourself or in a group that you don’t trust who might be out for themselves.
  • 07:19 Antoinette: Very often we see that showing up in conversations as well. When we are collectively making sense about something in one of our situations, somebody might propose something to you that you've thought about a hundred times already, something you've thought about and realized, well, this is not going to work. It's a psychological challenge to really consider what they've just told you and see whether you can find any truth in it, and maybe look for what is the 2% of truth that could be there.
  • 07:51 Shane: Collective sense-making, even sense-making. How do we make sense collectively? What’s involved in making sense collectively in a safe, high performing team that has psychological safety and the ability for psychological challenge?
  • 08:13 Jason: What’s involved? Well, at the very basic end of that is more than one person. The topic, for example, is interpersonal, not something that say you could take with you with some analysis and go into a room and simply see through yourself. So there’s something that would take multiple minds, multiple perspectives, multiple ways of making meaning.
  • 08:37 We come together; we bring our multiple perspectives. We share them in a way where each of them can have, sort of, I wouldn’t say equal consideration, but equal opportunity for consideration. We used the example in our talk yesterday of the ancient story of several blind men who encounter an elephant and each touches part of it and reports I found a tree or I found a spear or a rope or a boulder or you know, whatever they might think the ear is.
  • 09:06 Each one of them has encountered one aspect of reality or the situation. But this situation is big and it’s multifaceted, and the collective aspect of this conversation is necessary if we are to get our hands around what’s actually happening.
  • 09:24 Maybe in our organization there’s this general feeling of disengagement or malaise and we don’t know what it is or why it’s here, that’s an elephant. So the collective part would be us sitting down together and saying, what does that feel like? What are we sensing about that? And being very careful not to jump too quickly to make sense of it. We generate as much of that sense from different perspectives, like blind men feeling along the thing that they’ve encountered.
  • 09:55 Until finally someone says something about, you know, this makes me think of this, or the story I tell myself about this thing is this, and the next person does the same. And finally enough of that sort of coalesces until one person says, I bet you it’s an elephant and it’s named. And after that occurs, we decide this is the step we’ll take now that we know that.
  • 10:20 Antoinette: Not only are we bringing multiple perspectives, multiple lenses, but there's also what happens inside of us. So sensing is not only an observational thing, with our five external senses; it's also what happens on the inside. And beyond that, even: from what basis of development are we thinking about something?
  • 10:40 So that’s where we go into the adult developmental phases. We’ve specifically used the work of William Torbert and Suzanne Cook-Reuters later on called action logic, the action logic developmental stage model. So action logic is developed from when we’re kids.
  • 10:57 So if we look at the expert action logic phase, that happens from when we're about six to when we're about 26, and in a lot of cases leaders don't tend to go much further than that. If we look at leaders, 80% of them sit between the expert and achiever logics, and that brings a certain flavor to conversations. Now, as we need different perspectives, we also need those different action logics to really get to a, I want to say superior, I use that word lightly, but a more collective solution of things. So part of what we do, just by being who we are, is bring different action logics into a conversation. And the other thing, like we said earlier, is the fact that as the observer, you can sense what action logic that person is coming from, and you can offer the perspective of a later developmental stage to see what that problem would look like from there.
  • 11:53 Shane: You’ve mentioned expert action logic. What are the others? What does this framework look like, briefly?
  • 12:00 Antoinette: What we did for the talk because there are actually multiple books written about this, there’s a deep amount of knowledge.
  • 12:06 What we looked at is the three different kinds of worldviews of the different action logics. So if we just start with the opportunist and the diplomat, they tend to have a traditionalist worldview. There's a truth, and it's generally provided by somebody outside of the individual. In the traditionalist worldview, there tends to be an us against them, or an I against you.
  • 12:32 The next two levels are the expert and the achiever, and that's really where a large part of leaders sit. That tends to be the modernist view, the scientific view. Science tells us we can prove this, we can… it's not given to us. Truth is not given to us; truth is something that's proven, and we can logically work it out. And a very strong aspect of that is that there is an answer.
  • 12:58 So, if we talk about the waterfall life cycle, for instance, the waterfall life cycle is based on predict and plan: if we analyze enough, we can predict how things should happen, we can plan it, and we can execute it. So that is very strongly tied to the modernist view.
  • 13:12 The postmodernist view is very strongly tied to complex problems, from Cynefin for instance: we can no longer predict and plan, the world's become too complex. We have to sense and respond. Unfortunately, there's only a very small percentage of leaders that develop into the postmodernist worldview, and that's what we need right now, that's what we need in the world: leaders who realize that we can't analyze, predict, and plan; we actually need to sense and respond. Which really ties in very nicely with our agile experimentation mindset, inspect and adapt, all of those good things.
  • 13:49 Shane: You make the point that not many leaders have this inspect-and-adapt way of responding to complex states. How do we help?
  • 14:00 Jason: I think it's important to point out that every individual has these different action logics in them, so to speak, and access to them, and we have a sort of center of gravity where we tend to hang out most of the time.
  • 14:16 And there are good tools for helping you, I think it's called the Leadership Development Profile, which is a good tool for helping you understand where your tendencies lie; but with awareness and with some skillfulness and discipline, you can learn to shift.
  • 14:31 This is the meta skill that we talked about before and helping people can look something like learning to have a deliberately developmental relationship with the person where you’re committed to mutually pushing and pulling and developing one another. Maybe that’s in a collective sense making conversation where we bring in the meta skill of what is it about our thinking that is causing us to think or assume that; what are the particular action logics that we may be using? Are those appropriate? And that’s the point at which this awareness is brought into the group and we can say, you know what, I think maybe, this is simple enough that we can use the expert action logic. Let’s start talking like that.
  • 15:14 Or we might say, I think this is too complex to be using the expert action logic. Let’s try using this new vocabulary that’s a little bit along this developmental scale. Maybe the individualist action logic would make sense, or perhaps the strategist as we move along. So it’s becoming aware of our thinking, thinking about our thinking, which is a really powerful aspect of these things that we’re describing.
  • 15:39 And I think that’s what can help leaders, especially those for whom what they’ve been doing has not been working well.
  • 15:47 Antoinette: There’s a saying that leadership cannot be taught, it can only be developed. Now, coaches are leaders too, and first of all, we need to develop ourselves. It’s very hard if your center of gravity sits in the expert or the achiever to then work with a leader where their centre of gravity sits in the postmodernist,  so our first job , be the change that you want to see in the world. Our first job is to develop ourselves and, like we know, leadership development is individual development.
  • 16:19 So we all have to do that hard work to grow our own leadership capabilities.
  • 16:24 Certainly collective sense-making conversations. I mean, the reason in the first place why Jason and I decided to do this talk is because we wanted to plough back a little bit of the benefit that we got from doing collective sense-making. In a really gentle way, we keep on talking about the fluidity of the sense-making conversations, and that fluidity has somehow changed something within me. Doing collective sense-making conversations has developed me. Now, when I'm in conversations with others, I realize how rigid the structures are that we generally use when we converse with one another; I realize how limited we are by doing, I want to say, kind of yes/no conversations.
  • 17:08 Okay. So the pure act of being in a deliberately developmental relationship with somebody and having those types of conversations will help leaders evolve their own action logic and shift their center of gravity.
  • 17:23 Jason: I’d like to add that at one point for me, as she mentions what has been developing for her, I had this moment of thinking the way I act now is not necessarily the best that I can do. There’s this idea of, maybe it’s just me, that I’m always developing and I’m always developing for the better or to the right place, but what I realized was that a place I had come from the  “I know the right way. I have memorized the scrum guide. I could tell you exactly why it should be a daily scrum and not a standup”. Was not working. And so I sort of moved away from that into something that we would call more of an individualist action logic, able to see many different perspectives and appreciate them and realize it’s all part of the truth that can be put together to construct this collective reality.  Until I realized I was paralyzed and. Was afraid to act. 
  • 18:12 And there was a moment in a conversation, one Saturday morning at my parents' house, when it became clear to me that I needed to act. Either that was out of a sort of need to apply a strategy and catalyze a response in the people I was working with, or perhaps that was actually me needing to go backwards a bit into the achiever, and that word backwards is probably not the right one, to move along it, in another direction. But I was not achieving the results that I felt were appropriate or that the team wanted. And so I realized I needed to shift actively.
  • 18:46 Antoinette: Exactly what he's describing, in terms of understanding that there is this kind of a palette to paint from, just having that awareness is also something that can help leaders. So generally, once we learn that there is a developmental path, I'm not doomed to be where I am forever, because there's a natural progression that's available to me and accessible to me. Just that makes things possible for leaders.
  • 19:12 Shane: One of the things we were talking about earlier, before we started the recording, was this concept of crafting relationships and the conditions for the right relationship. Tell us more.
  • 19:26 Antoinette: When we do collective sense-making, one of the things that’s really important is blurting. It’s a concept called blurting.
  • 19:33 So as we move up our developmental spectrum, there’s a strong need to start incorporating things like intuition into our makeup, I want to call it that. So blurting is when there’s just a couple of connections that come together in your brain, and when you want to do it, it feels like people are gonna think, what on earth is she on about, that’s not related to what we’re talking about. It doesn’t get us any closer to solving the problem. Yet, very often, that’s kind of somebody being in tune enough with what’s happening in the space between the individuals taking part in the conversation to be able to grasp onto something that’s not yet expressed.
  • 20:17 Jason: I’ve heard that called an intermediate impossible. When you’re going from one point where we are now to this other point that we either want to go or sort of playfully going towards. There are a lot of little steps that could be in between that are somehow related and positionally going that way, but not quite the thing.
  • 20:37 And the blurting can sometimes get you to the next step, and if you continue like an improvisational comedian, you might eventually get to that beautiful scene-ending joke.
  • 20:47 But to get there, blurting is a bit strange, and it’s a bit disconcerting to watch if you’re not ready to accept that the weirdness can come out, or if you can’t trust those around you with what you’re about to blurt. Which means that the relational system has to have a few qualities to bear that, and I’ve found them. So here we go:
  • 21:07 The participants have to be mutually committed to each other’s development, and to their own development. So I must be ready, with appropriate restraint, to stretch myself and grow myself, and I expect that you want me to do the same.
  • 21:22 Beyond that, when you act in this relational system, you act with integrity. So with a sort of wholeness and consistency and truthfulness and honor, that sort of thing.
  • 21:33 You have to demonstrate respect for one another. That you’re ready and willing to give them and their beliefs, their traditions, their backgrounds, a full hearing without this sort of harsh, judgmental “that’s wrong”.
  • 21:47 Antoinette: For instance, in our group we did deep work, and we’ve got quite an interesting spectrum of spiritual beliefs, from some devout Christians all the way through to new-agey to Buddhist. And there’s an unconditional acceptance and a valuing of perspective that comes in, regardless of where it’s grounded.
  • 22:10 The other thing that’s also there is a lot of playfulness. I mean, I love Jason’s way of saying there are still rules: if you play in the sandpit, you don’t throw sand. It’s a playfulness and an unconditional acceptance of whatever somebody brings in a full scene.
  • 22:24 Shane: I don’t have to agree with you, but I have to respect your opinion. 
  • 22:27 Antoinette: Yes. And even if you come with an opinion based in something that I don’t agree with, I will consider it. I undertake to consider that I might have to translate it into what it means in my own spiritual beliefs, for instance.
  • 22:44 But I will consider the essence of what you are bringing and see what I can make of it. So it’s very much, and Jason mentioned improv, it’s very much a yes-and conversation. It’s always a yes. Let’s build on what the previous person said, build on the previous person, build on the previous person.
  • 23:01 Jason: A commitment to mutual growth and development to me is quite key. We mentioned how with psychological safety, there must also be psychological challenge. It’s almost like, if we go through the difficulty, with reward and with challenge, to build this relational system where we can trust each other and can say things that we otherwise might not, what do we do with it but talk about the difficult things, the things that will stretch us and make our capacity to lead or to enact change in the world around us more possible.
  • 23:35 Shane: If I’m a team lead, an architect, maybe in a technical team, somebody in a position of influence in a team, what concrete advice would you give? How do I make it real for my team?
  • 23:47 Antoinette: Certainly, a precondition for this is to have a real team, to be able to create spaces for people to have these kinds of conversations. To have a space where anything goes and there’s openness towards whoever says what. So there’s work for you to do to build your real team.
  • 24:07 And we don’t have to go into that today because there’s more than enough work on that. So that’s definitely a precondition. There need to be strong relationships,  there needs to be a real appreciation and seeing of one another and a real appreciation and respect for what each of us brings.
  • 24:22 Jason: Yes. I would say the sort of advice I might give to a team lead who had come to me with a problem, and the problem might sound something like: my team-mates and I aren’t working on the problems together, we’re sort of off on our own, and we realize that’s not working to solve the problems in our product. I would say something like, well, there needs to be a sort of cohesiveness and mutual reason to do this work together, and if so, now we’re really getting into complex problems that are interpersonal. So if the problem we need to work on needs to involve us all, then we need to start actually focusing on the quality of our relationships. Now, if you’re off in your corner working on your own little project, and that’s all fine, I would say that although this would be very good for interpersonal development, that team lead might not sense the immediate need to do this.
  • 25:17 But it is when you’re working on something as a focused group of people that you will encounter these topics that are salient to you and interpersonal and so on. And if you do, spending time to strengthen the relationships between the individuals is of the utmost importance.
  • 25:35 And then there are skills that we could talk about, about how to hold the actual conversations that are collectively sense-making.
  • 25:42 Antoinette: Somebody asked me yesterday afternoon after the talk, she came to me and she said, so I’ve got a group of people and everybody, they go, go, go. People want to get stuff done, and we just have all these unproductive meetings where everybody has their own idea and they want to go from there, defend their own idea, et cetera. There needs to be a need for a better way of doing things as well. If you’re in that position, there needs to be some kind of acknowledgement of the fact that we are all pulling in different directions and that that’s not getting us anywhere.
  • 26:12 We’re all so keen on getting somewhere and we’re not getting anywhere. And that’s a really good starting point to then introduce the concept of: maybe we should look, if we put all our heads together, instead of all our heads on our own with our own solutions, at what would become possible for us.
  • 26:29 Shane: Coming back to psychological safety and bringing your whole self. That’s got to be a really important part of this invitation, isn’t it? If we’re asking people to maybe be a bit vulnerable, but also be able to challenge, how does that play out?
  • 26:45 Antoinette: It took us quite a while before we got to the point where we became adept at collective sense-making. So we spent quite a bit of time relationship building. There’s a very, very strong emergent part in all of this, in terms of not making rules, in terms of not having preconditions, but seeing how the collective likes to work, what everybody requires, and also having somebody who is quietly able to be an observer, very easily stepping out into the observer role or being part of the conversation. So there was a kind of learning-through-doing phase, quite a lot.
  • 27:25 But for all of us, we kept on practicing. If I think about the very basic rule for us, it was the rule of respect. There was regard, there was unconditional positive regard always, not only positive intent, but unconditional positive regard.
  • 27:41 And then, really practicing, practicing, practicing and calling one another out, you know, that psychological challenge. When we were breaking that…
  • 27:48 Jason: There was a point at which one of the members of our group said that we were being too nice, too polite. So it was maybe him sensing that while we did have that psychological safety, we also weren’t bringing our full selves to the conversation. We weren’t saying what we might want to say, that should be provocative for the purpose of getting somewhere. I think particularly of one of our group members who had a really, really deep skillset that she wasn’t bringing to the table because of this sort of over-regard for another member, and then we experienced a failure because that skill wasn’t present in the work being done. And then she realized, I’m not bringing my whole self to this, for whatever reason. And then she decided, I’m going to do it. And it was critical to the success of that particular action that she did. So bringing your whole self, I may not want to bring my whole self in this interview. I know a lot of really corny jokes that I love to tell.
  • 28:48 Antoinette: He does,
  • 28:50 Jason: And maybe it’s for the better that I do, but I feel a bit self-conscious with you. I don’t feel self-conscious with Antoinette, she’s heard a lot of them. I may not tell those, and perhaps that would have made this the best podcast ever if I had ended with a corny one-liner.
  • 29:09 Shane: And on that cheerful note, if the audience wants to continue the conversation, where do they find you?
  • 29:15 Jason: I live and work out of Tulsa. You can find me at my website, jasontknight.com or I write blog posts from time to time and there’s contact information they can get to from there.
  • 29:26 Antoinette: My website in South Africa is justplainagile.co.za, so you are welcome to contact me there.
  • 29:33 We should also tell you that a lot of this work is described very well in Michael Hamman’s book, Evolvagility. So if you want to read more about the conditions, et cetera, that would be a good place to go. The other book, if you’re interested in action logics, would be Action Inquiry by Bill Torbert.
  • 29:51 Shane: Thank you both so much.
  • 29:53 Antoinette: Thank you. Thank you. Thank you. Thank you.




Presentation: Panel: Microservices – Are They Still Worth It?

MMS Founder
MMS Luke Blaney Alexandra Noonan Manuel Pais Matt Heath

Article originally posted on InfoQ. Visit InfoQ

Transcript

Moderator: First, I’ll get the panelists to say who they are, what they do. What’s the one thing they wish they had known before starting their journey in microservices? Matt, we’ll start with you.

Introduction and Lessons Learned before Microservices

Heath: My name is Matt Heath. I work at Monzo. I work within our platform team as a back-end engineer. I’ve been working on microservices for about seven years now. The one thing I wish I’d known? How things can spiral, not out of control, but how things can rapidly get more complex as your business develops.

Noonan: I’m Alex. I’m a Software Engineer for Segment. I’ve been working with microservices for about three years now. I think the thing that I’d wish I’d known in the beginning was that you can solve some of the trade-offs that come with moving to microservices without actually moving to microservices. Sometimes people get a little bit blinded and they see that they’re having issues where they don’t have this good fault isolation and microservice gives you that. You can actually address that without making the switch to microservices. I wish we’d known that it was possible to address a lot of the issues we were seeing without going to microservices.

Pais: I’m Manuel. I’m a consultant and I’m co-author of the book, “Team Topologies.” From the people here, I’m probably the one who has least direct experience with microservices. As a consultant, especially in continuous delivery, what I’ve felt is a lot of the pain from the clients that went for microservices too early, trying to optimize for something that was not really the problem they had and not thinking about other problems in terms of their engineering practices or dependencies between teams. Just thinking it’s all around the architecture when there were other problems. As Sam Newman was saying this morning, identify first what problem you’re trying to solve and then seek out the solution, which might be microservices or not.

Blaney: I’m Luke. I’m a Principal Engineer at the “Financial Times.” I think the one thing I wish I’d known is how important tooling is when it comes to microservices. We already knew to some extent that for things like orchestration you needed some level of tooling. Stuff that is useful without microservices becomes essential when you do have microservices.

Moderator: Luke, as we’re talking, this panel is supposed to be, Microservices – Are they still worth it? Can you see real benefit to your organization or your customers from this microservice approach at the FT?

Blaney: We can see real benefit to our organization and our customers from this microservice approach, definitely, in terms of speeding up delivery. Like Alex was talking about, isolating things. It’s really beneficial in that way. It’s brought us a long way forward. It does have its pitfalls, but I think on the whole it has been beneficial.

Moderator: Matt, how about Monzo? How’s Monzo benefited from using microservices?

Heath: Monzo’s benefitted from using microservices, I’d say, in quite a similar way. By using microservices, our teams can work quite independently. For context, Monzo is a bank. It’s quite easy to say that banks are complicated, but we have lots of different systems. That means that we can work on those very independently. It’s helped us deal with the complexity.

Moderator: Alex, your company has done a bit of a U-turn and gone back to a more monolithic approach. What do you think the key benefit of that U-turn was?

Noonan: The driving force behind that U-turn to a more monolithic approach, for us, was two things. One, the operational overhead that came with microservices was too great, and we didn’t have good tooling around testing and deploying and maintaining these microservices. Then one of the other driving factors was we finally understood what the root issue was in our system that we were trying to address with microservices, and microservices didn’t actually address it. Once we finally had this understanding of what the core issue was, we were actually able to address it, and moving back to a monolith was actually what helped us address it more easily.

Moderator: Who’s seen the architectural diagram for the microservices at Monzo on Twitter? We’re all quite aware of the sheer scale of the microservices. There is a lot going on there. Are you seeing cracks at this monumental scale? Is it the right approach, though?

Heath: Are we seeing cracks at this monumental scale of microservices? It’s quite easy to think so when you look at a diagram. With that many moving parts, it’s quite hard to represent it on a single diagram. I don’t think that’d be any different if our organization had 50 applications. We have 1600-and-something. That means that there are a lot of moving parts, but it doesn’t mean much more than that everything’s quite small. The main problems we have are making sure people have the right information. People don’t have to understand how everything works. They can just know how that cluster of things works. Tooling is the main thing. We’ve had to build out a lot of fairly complex tooling. Those are the things we’ve had to focus our efforts on.

Moderator: Manuel, you look at microservices in organizations from a more team angle. Surely, there are a lot of beneficial things from team organization that this technology can provide.

Pais: Are there beneficial things for team organization that microservices can provide? Yes and no. I think it’s all about when is the right time to think about it, whether microservices fix the problem you have. Often it’s premature, trying to optimize something because microservices are all around, there’s a lot of talk about it, and we think that’s going to fix our problems. Organizations often don’t realize that to actually benefit from microservices, you need to make a lot more changes. It’s not just the technical architecture. If you go one step back, you need to think about Conway’s Law, and how the team structures are mapped to the services that you have. They should be aligned to a large extent, not one-to-one, but they should be aligned. Otherwise, you’re fighting against that.

Then also, have you thought about things like decoupling teams, but also decoupling environments if you have static environments for testing, and decoupling the release process? All these other aspects you need to consider to actually take full benefit of microservices are not always thought out. Once you have that done, then I think microservices are really good at making sure we’re aligning the teams and the services with the business areas, or the streams of value in our business. That can be very powerful.

Promoting Faster Change

Then if you want to promote faster cadence of change, then microservices can help you on that with separation at multiple levels, the data store level, and at the releasability, the deployability level. It’s a journey that you need to go through until it’s actually going to bring the benefits that you expect. It’s often done too early. We should be thinking then at other areas that we might be overlooking, for example, Conway’s Law. Also, cognitive load is something we talk about in “Team Topologies,” which is almost every team, or in almost every organization, some teams feel they’re overloaded. If you ask them to own the services and be able to build, test, operate, support, and understand enough to do that, they probably have too much on their plate. We need to think about the right size of software for teams, whether that’s microservices or not, is the next question.

Moderator: Coming back to Conway’s Law. How do you recommend changes in that organization? Business functionality evolves, and so does the team structure. How do you marry that with the technology and the ownership alone?

Pais: How do we marry business evolution and team structure with the technology and the ownership? What we see happening, which is probably not what we want, is sometimes decisions are being made, especially around the team structures in the organization, by HR people or senior management. To some extent, because of the mirroring effect of Conway’s Law, they are deciding on our software architecture to some extent. We don’t want that. What we’re saying is we actually need to bring those two worlds together. We need to make technology and design decisions together with the team structure decisions. Otherwise, one is restricting what the other is able to do.

Moderator: Matt, you must have come across a very similar thing, because Monzo’s growth has been stratospheric for quite a while. You must have learned to change your organization, but you don’t want to chuck out all the tech at the same time.

Heath: Monzo’s growth has been stratospheric for quite a while. Our company has probably changed into a new company every 6 to 12 months for the last 5 years. That means that we have some areas that are a bit more stable, but other areas that are in quite rapid flux. I think that’s one of the reasons that such a granular set of services works for us. All of our services are very similar. We have a box that you put the code in, and you get lots of nice properties. Netflix calls that the Paved Road. Because the differences are not big, it’s quite easy for people to pick up a new application. When we form teams, or when we refocus areas, we can divide the boundaries a lot more granularly. Otherwise, we would have, essentially, a whole ton of payment systems and we wouldn’t be able to move people. People wouldn’t be able to move off if they wanted to go and work on a different area, because they have specific expertise there. Yes, we can change that quite a lot.

Continuity of Knowledge

Moderator: How do you keep continuity of knowledge when people are moving around?

Heath: How do we keep continuity of knowledge when people are moving around? That’s something where, honestly, you have two problems. I think documentation is one, and the real question is what you’re documenting. In the case of Monzo, we have debit cards. We used to use an outsourced card provider. We brought that in-house. We’ve built a MasterCard processor. There’s quite a lot of domain knowledge there. That’s one of 10 or 20 different equivalent-scale projects.

You have a load of people who have gone and read 10,000 or 15,000 pages of MasterCard documentation to understand how those systems work and have deep domain knowledge. That’s not something you can easily transfer. I think we have a mixture of people who want to stay in an area and become domain experts there. That works really well for them. Then in other areas, by having the services, each service being quite consistent in how it works, it means people can pick up how our system models that quite quickly, then build on the domain knowledge there.

Moderator: Alex, as you’ve moved back towards a monolithic approach, what now is your biggest operational challenge?

Noonan: As we’ve moved back towards a monolithic approach, I think the biggest operational challenge that we deal with now is, at any point in time in our monolithic worker, we’re running about 2000 to 4000 workers. This means in-memory caching is not very efficient across these thousands of workers.

Something that we’ve introduced to help us with this is Redis. Redis has now become a point that we have to consider every time we need to scale. We’ve had issues in the past where the Redis becomes unavailable and the workers are crashing. That has now added an additional point of scaling for us. It’s something that’s been consistently coming up that we have to deal with. We’ve done different things like moving to Redis Cluster, sharding it, but it’s still something in the back of our minds that we wish wasn’t there. It’s not enough for us to move back to microservices just to get the benefit of caching. It’s a trade-off that we were willing and are comfortable to take.
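The trade-off Noonan describes, per-worker in-memory caches that lose their value across thousands of workers versus a shared Redis that becomes its own scaling point, is essentially a cache-aside setup. A minimal sketch of that pattern follows; all names are hypothetical, and a dict-backed stub stands in for the real Redis client so the example is self-contained:

```python
import json
import time


def cache_aside_get(cache, key, load_fn, ttl_s=60):
    """Check the shared cache first; on a miss, load from the
    source of truth and populate the cache for every other worker."""
    raw = cache.get(key)
    if raw is not None:
        value, expires = json.loads(raw)
        if expires > time.time():
            return value  # cache hit, shared across all workers
    value = load_fn(key)  # miss (or expired): hit the backing store
    cache.set(key, json.dumps([value, time.time() + ttl_s]))
    return value


class DictCache:
    """Stand-in for a redis.Redis client exposing get/set."""
    def __init__(self):
        self._d = {}

    def get(self, k):
        return self._d.get(k)

    def set(self, k, v):
        self._d[k] = v


if __name__ == "__main__":
    calls = []

    def load(k):
        calls.append(k)  # record trips to the backing store
        return k.upper()

    cache = DictCache()
    print(cache_aside_get(cache, "user:1", load))  # miss -> loads
    print(cache_aside_get(cache, "user:1", load))  # hit -> no load
    print(len(calls))  # backing store hit only once
```

In production the `DictCache` stub would be a `redis.Redis` (or Redis Cluster) client, and expiry would normally be delegated to Redis itself via the `EX` option on `SET` rather than encoded in the payload; the failure mode Noonan mentions, workers crashing when Redis is unavailable, is exactly why that shared dependency becomes a point to consider at every scaling step.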

Microservices Architecture

Moderator: Luke, on the opposite end of the scale at the FT, what would you see? Is a widely adopted microservices architecture your biggest operational challenge?

Blaney: Is a widely adopted microservices architecture our biggest operational challenge? I think it’s where there are team boundaries that I find things often the hardest. If there’s an incident or something, it’s tracking it across multiple teams. It’s understanding where the fault lies. Teams are very good at understanding their own little pocket of things.

We have a publishing pipeline. It goes from the tech team that works with the editorial staff. Then there’s a publishing platform team. Then there’s a team that works on the front-end website for the users. Each of these teams is very mature in their thinking, but whenever it goes from one to the next bit, things get missed. It’s hard to trace exact problems.

Moderator: Manuel, do you have any advice for this?

Pais: Do I have any advice for this? Yes. On one hand, I think that there’s a key term there, team boundaries. With Conway’s Law, we want the service boundaries, team boundaries to be aligned. Then, how do they interact when there are problems and we need to diagnose what’s going on?

A couple of things. One is having good platform teams. I get a feeling that at Monzo, you have a pretty good platform team to allow that scale and that repeatability, in a way, between different services. Having a good platform team provides you the tools and the mechanisms to do that fast diagnosing on the tooling side. Then there are things like swarming: different teams work on different services, and people from those different teams are on a rotation, where this person is going to be part of this swarm to identify, to diagnose this problem, where exactly is it coming from? Or possibly there are multiple reasons. Then address that more quickly than with hierarchical support, where someone identifies a problem the customer has, and then we have to try to sort out who is responsible. Having that swarm can be helpful.

Moderator: Matt, does Monzo address it in a similar way because it must be very hard to diagnose where one capability is actually being affected by a single service?

Heath: Does Monzo address it in a similar way where one capability is actually being affected by a single service? Sometimes. A lot of the time, it’s fairly straightforward. I think the way that we think about that, we have lots of different services and we have very strong consistency between the ways they work. That means we have standardized dashboards for every single service. We can use tools like tracing to track that back.

The on-call Team

What we usually see is some particular business metrics, say, cards are declining, or payments of a particular type aren’t working, or a particular API doesn’t work. In that case, it’s relatively straightforward to pin that through into the particular root cause. Each of those are then owned by different teams. Up until relatively recently, we had one group for on-call, who had a rotation. We have a specialist on-call. That’s being pushed out into teams. At some point, we’ll probably switch that around. The initial on-call team will be the owning team, which can then escalate it to a global team. I think, yes. The boundaries between those teams are definitely the hardest things. It’s because everything is so similar in the same language, literally, the same service structure. Many of the people who are on-call can look at a particular thing and work out why, while we’re waiting for other people to get online and go through that problem.

Moderator: Alex, you must implicitly get the benefits of that going back towards a monolith because you’re not going to have a monolith made up of five languages?

Noonan: We implicitly get the benefits of going back to a monolith. Our monolith is in one language. That was one of the benefits that just came naturally with moving to a monolith. Luckily, when we did move to microservices for a bit, we still kept everything in one language. We were good about defining certain standards of how these microservices were going to work, because we knew the complexity that was coming with it. One benefit of moving back to a monolith was getting that stuff for free, and forcing us all to use some of the same things.

Participant 1: As Monzo has more than 1500 microservices, how do you define the bounded contexts when you build so many microservices?

Heath: How do we define bounded contexts when we build so many microservices? I think, if you look at it, if we had a relatively simple product, it would be very difficult to take an image website, for example, and slice and dice that into 1500 pieces. That would be pure insanity. Don’t do that. But then you add 10 or 15 different payment networks, and you start operating in multiple countries, and you have various product features that are quite different, and we integrate with, I’m honestly not sure how many, different partners for a variety of different things. We launched a credit scoring product a week or so ago, which means that’s a whole set of services that interact with a couple of particular third parties. That allows the user to give consent, and we can track that. Those things are very isolated from how your card works.

Bounded Context

The bounded context in those things is pretty much: take a problem. For credit scoring, we clearly have something that needs to interact with a third party. There’s probably some bounded thing there, their API. We also have our API to the app. By the time you divide that up, you’ve probably got five or six different services. They all have a very tightly defined responsibility. They all have their own API. We use HTTP internally. One of the things that we’re thinking about is, that’s a lot of APIs for someone to think about. Some of those aren’t really used outside of teams. Maybe they’re more of a private API versus a public API. That’s something we’re thinking a bit about. We essentially have lots of different teams who are providing a service, almost like an S3 API, or some other API, to a number of other teams. By the time you add those things up, there’s a lot of services.

Participant 2: Speaking of microservices and team boundaries, how do you suggest adopting ownership of services, and changing of teams? You have a growing organization, team responsibilities change, and then suddenly we have a bunch of microservices from the organizational structure 6 months ago, and they become tech debt. How would you cope with all this stuff then?

Adopting Service Ownership and Team Changes

Pais: I think one of the keys is to have stable teams. The other key is that all the services need to be owned by a team. Once you have those two things, the organization can evolve. You can have people changing teams. You don’t recreate teams or create a team just for this new microservice, and then, some time later, there’s another team that’s doing another microservice. Once you have stable teams and you have ownership, the alignment between team structures and services is much clearer.

Also, you can try to promote that all the teams, or most of the teams, might be working on some new service but also retain ownership of an older one that doesn’t change much. Perhaps with microservices it’s not so clear yet because they’re newer. For older types of architecture, if you have an older service and you’re also responsible for the new service, then you bring the good tooling, the good telemetry, centralized logging, whatever it might be, into the old service even if it’s not changing that often. You keep parity and you evolve both at the same time. To me, that’s useful.

Moderator: Luke, do you have anything to add, because this is definitely an area that you’ve suffered with, good and bad?

Team Ownership

Blaney: In the FT, it’s quite hard because different teams use completely different programming languages, different tech stacks. If there’s something that could be different, someone will be doing it differently. It can be really hard to move things from one team to another. Although one benefit I do see of microservices is that it becomes easier to rewrite small bits of your stack. If something is moving from one team to another that is radically different, and they’re like, “That’s a new programming language we’re not comfortable with. We don’t have the skills,” it’s a lot smaller undertaking to go, “That’s one or two microservices that need to change with this org structure.” It’s a smaller undertaking for them to go, “We’re going to rewrite two microservices,” than if they needed to replace an entire monolith. It’s always about making that judgment call of, is this worth the rewrite if a team feels much more ownership of things? I’ve seen many a case where there was a perfectly good system sitting there and a team just didn’t feel comfortable with it. There’s not really a good technical reason to rewrite it. For the sake of team ownership and that stuff, it actually pays dividends in the long term to just be like, “We’re going to replace it. We’ll build a new one.” Then everyone feels comfortable with it.

Participant 3: I've seen some scenarios where, within teams, multiple services were created, and the teams did not consider, for example, structuring the code within the same code base to make it more modular. I'd like to hear the panel's opinion about that. When should you decide to take the step to microservices, instead of just structuring your code to make it more modular within the same unit of deployment?

Moderator: Matt, what’s your opinion on this one?

Steps to Microservices

Heath: I think it depends. I'm not going to be as extreme as you might think I'd be. There needs to be a good reason to pull things out of an application into another application, whether that's because you've added functionality and now it's too complicated, or it's sharing responsibilities and now potentially two teams are involved. We talked about features a second ago: is that feature being maintained, is it in this code base, and what's the life cycle of that thing? There are certain points where it's useful to pull those out, if you already have an architecture that supports that and you already have lots of tooling in place. In our case, we do. We have a service, maybe we've added something that's made it a bit more complex, and there may be a point where we refactor to pull that out. Normally, we'll make it backwards compatible: we'll move the service out, proxy it through to this one, then at some point update the clients to use the new one. I think it only makes sense if you have that tooling already. Otherwise, you'd be much better off tidying up the modules and getting consistency within your code base, so that it's simpler and easier to understand within one application.

Moderator: Alex, you’re the natural opposite to this one.

Breaking out Repos

Noonan: We did actually break apart our repos at one point. It didn't turn out to be as much of an advantage as we thought it was going to be. Our primary motivation for breaking them out was that failing tests were impacting different things: you'd go in to change one thing, tests were breaking somewhere else, and now you have to spend time fixing those tests. Breaking them out into separate repos only made that worse, because now I'd go in and touch something that hadn't been touched in six months, and those tests were completely broken because we weren't forced to spend time fixing them. I think one of the only cases I've really seen for breaking stuff out into separate repos, and not services, is maybe if you have a chunk of code that is shared between multiple services and you want to make it a shared library. Other than that, we've found that even when we've moved to microservices, we still prefer stuff in a monorepo.

Participant 4: You said earlier that you were facing some issues, so you took a microservices approach, and later on you realized it wasn't working and came back to a monolithic one. What were the problems you thought microservices would solve, and how did you end up solving them with the monolithic approach?

Problems Solved by Monolithic Architecture

Noonan: The original problems that we were facing were with our monolithic setup. We were using a queue that the monolithic worker consumed from, and if there were issues consuming from this queue, it would back up, and that would impact everything in our monolithic worker. We wanted to be able to separate them from one another: we didn't want one piece of the worker to be able to cause issues with the rest of the stuff that was in the worker. It was really the queue that we were using; you could only consume what was first in line. One of the benefits microservices provide is environmental isolation. We thought, "If we break out into separate queues and separate workers, that isolates them nicely from one another, so now they're not impacting each other." What we learned over time was that it did isolate them from one another, but now if one worker was having an issue, all customers using it were impacted, even if only one customer was the cause. It isolated the issue and decreased the blast radius, but it didn't solve the root problem: our queuing system was really what was crippling us, because of this first-in, first-out way of consuming.

The Queuing System

How we solved it was we actually got rid of all of that queuing system and built something internally to isolate them better from one another. I can see how, at the time we decided to move to microservices, we thought, "You get great environmental isolation out of the box. It solves our problem." We didn't yet understand that it wasn't actually everything being in a monolith that was the issue; it was the queue. Financially, it wasn't the best decision, but you never know. Once we finally had that understanding and realized what the problem actually was, we were able to fix it, and a monolith worked better in that situation. That was the main reason for this.
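The head-of-line blocking problem Noonan describes can be sketched in a few lines of JavaScript. Everything below is illustrative; the function and field names are invented and are not Segment's actual system:

```javascript
// One shared FIFO queue: a single stuck job delays every customer behind it,
// because you can only consume what is first in line.
function drainSharedQueue(jobs, isStuck) {
  const processed = [];
  for (const job of jobs) {
    if (isStuck(job)) break; // head-of-line blocking: everyone behind waits
    processed.push(job.customer);
  }
  return processed;
}

// Per-customer partitions: a stuck partition only blocks that one customer,
// which shrinks the blast radius without a shared head of line.
function drainPartitionedQueues(jobs, isStuck) {
  const partitions = new Map();
  for (const job of jobs) {
    if (!partitions.has(job.customer)) partitions.set(job.customer, []);
    partitions.get(job.customer).push(job);
  }
  const processed = [];
  for (const [customer, queue] of partitions) {
    for (const job of queue) {
      if (isStuck(job)) break; // only this customer's queue stalls
      processed.push(customer);
    }
  }
  return processed;
}

const jobs = [
  { customer: 'a', ok: true },
  { customer: 'b', ok: false }, // customer b's job is stuck
  { customer: 'c', ok: true },
];
const isStuck = (job) => !job.ok;

console.log(drainSharedQueue(jobs, isStuck));       // → [ 'a' ]
console.log(drainPartitionedQueues(jobs, isStuck)); // → [ 'a', 'c' ]
```

In the shared queue, customer c never gets served because b's job sits at the head; partitioning isolates the failure, which is the property the team wanted, though, as Noonan notes, it still doesn't fix a broken queuing model for the affected customer.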

Moderator: If you had one bit of advice for the audience, either on adoption or maturing their microservices architecture, what would it be?

Blaney: One bit of advice for the audience on adoption or maturing their microservices architecture? I think a lot of it’s about keeping track of what you’ve got. It can easily run away from you. You start with a couple and you’re like, “These are easy,” but having to link data, monitoring all these things, keeping track of everything. Do it from the get-go. Don’t wait until you’ve got 100 microservices, and you then go, “What was the first one we built?” Start at the start. Make sure you’re documenting these things as you go along. Because once you’ve got a lot of them, it’s really easy to lose one in the mix.

Pais: One bit of advice for the audience on adoption or maturing their microservices architecture? First, start assessing your teams' cognitive load. This is a common problem. Hopefully, we have end-to-end ownership in the teams, but whatever their responsibilities are, make sure the size of the software and the other responsibilities they have matches their cognitive capacity. There is a specific definition of what cognitive load is, which you can look up. Essentially, ask the teams, because there's a psychological aspect here too: do people understand their part of the system well enough, whether it's microservices or something else? If they feel anxious, thinking, "If I have to be on-call, I really hope nothing happens in this part of the software, otherwise I'm going to be in trouble," then you need to address that.

The second thing is, remember Conway’s Law. I would suggest, print it out, put it up in your office, just so everyone remembers that we can’t design software in isolation from thinking about the team structures.

Align Microservices with Business Value Stream

The third thing is, microservices are really beneficial if they're aligned with the business streams of value. This happens all the time: just go on Twitter and you'll see people complaining, "I have microservices, but for any kind of feature or business change, I need coordination between three, four, or five different microservices teams." That's not what you wanted to achieve in the first place with this independent deployability. Independent business streams are the goal, not just the technical side by itself.

Noonan: One bit of advice for the audience on adoption or maturing their microservices architecture? When you're considering a move to microservices, make sure it's actually addressing the problems that you're having. For example, we thought we were going to get this great environmental isolation, but it didn't actually fix our problem. I would say, make sure you're addressing the root problems with microservices. Then from there, make sure you fully understand the trade-offs that are going to come with it. What we got really bitten by was the operational overhead when we moved to microservices. We only had 10 engineers at the time, and you almost need a dedicated DevOps team to help build tooling to maintain this infrastructure. We didn't have that. That's what ended up biting us in the end, and why we moved back: we just didn't have the resources to build out all the tooling and automation that come with microservices. Make sure you're extremely familiar with the trade-offs, that you have a good story around each of them, and that you're comfortable with those trade-offs when you make the switch.

Heath: One bit of advice for the audience on adoption or maturing their microservices architecture? It might sound weird, but keep things simple. The simpler and more consistent your services are, and the more consistent your monitoring tools, all of your tools, and your development processes, the easier it is for people to pick stuff up and move around. The amount of code in one of those units is smaller, so it's easier to understand. For us, that changed my mental model from what's happening in the box to how the boxes tie together. Keeping those things simple pushes a lot of the complexity out into, essentially, the platform. You need lots of tooling, and potentially investment in teams to work on that tooling. Although, if we'd started now instead of five years ago, most of that tooling exists in much better shape than it did, and it will continue to improve.




Presentation: Panel: JavaScript – Is the Insanity Over?

MMS Founder
MMS Miguel Ramalho Eoin Shanaghy Katelyn Sills Gordon Williams

Article originally posted on InfoQ. Visit InfoQ

Transcript

Moderator: We have a great selection of panelists here to talk about the future of JavaScript. Gordon, whose talk was just on, that you might have seen, talking about fantastic devices and JavaScript. Eoin, who's going to be talking about serverless later. Katelyn, who's going to be talking about running third-party JavaScript. Miguel, who's going to be doing his quantum computing act later on.

I want to get started with the developer experience around JavaScript. One of the reasons I moved to JavaScript from Java 10 years ago was the developer experience: you got immediate feedback. You didn't have to wait for your code to compile, and if you remember, 10 years ago Java didn't exactly compile all that quickly. I find myself today, especially with front-end code, back in the same situation, waiting for the transpiler, or Babel, or even NPM to download all the modules. I ask myself, how did we end up in a situation where this wonderful developer experience of immediate feedback that you used to get with JavaScript seems to have degraded?

Gordon, because you’ve built stuff all the way up from assembly, do you think that the nightmare of transpiling is a phase that we have to go through until we get better support? Or are we stuck with it now as the mode of using JavaScript as a developer?

Williams: Do I think that the nightmare of transpiling is a phase that we have to go through until we get better support, or are we stuck with it now as the mode of using JavaScript as a developer? I would hope that it's just a phase. I think one of the issues is that the benefits you get from running through a compiler and being able to actually type-check your code, for bigger projects, far outweigh a lot of the other trouble you get.

Debugging

Actually, it does seem an odd phase, in that debugging has got more or less completely lost in lots of cases and for a lot of people. Having a debugger adds so much productivity to your programming. It's not only a way to avoid errors, but to avoid making the same error over again, and to learn when things aren't working the way you hoped. It also matters for performance: if you don't really know what your code is turning into in the browser, that's a really bad sign.

Moderator: We’ve all got bitten by that for sure. Eoin, your background is actually similar to mine. You started off in the Java world, or you actually have a computer science degree unlike me. Where do you think this is going? Are we stuck with this transpiling now?

Shanaghy: Are we stuck with transpiling now? Yes, definitely, I'd agree. I think we're definitely stuck with it, but I do see it as a positive. Ultimately, we're at the early stages of the evolution of the tooling, which is why transpilation time is slow. That's something that's going to improve.

esbuild

I tried a recent development called esbuild, which is a native transpiler written in Go and significantly faster than any of the alternatives. I think that's ultimately what's going to happen, because the benefits are there. You need it: you want to ensure that code sizes, particularly for browser applications, are going to be small. Not just for browser applications, actually, but for serverless applications too. You're always trying to reduce the code size and do bundling. You need to have a solution for it.

Moderator: You think we’re stuck with it and you think the solution is, swallow our pride, and go with Rust or Golang as the tooling?

Shanaghy: Do I think we’re stuck with it and the solution is go with Rust or Golang as the tooling? Yes, absolutely. You don’t have to build every JavaScript tool in JavaScript. Those benefits are enormous.

Front-end User Experience

Front-end development has always been hard, actually. The user demand on front-end experience is just getting higher and higher. Users demand better user experiences: they have to be fast and very responsive, and load time is important. The bar is constantly getting higher, so we just need to improve the tooling. I suppose there was a period of catch-up when mobile development came along: users got used to very fast experiences, and the web was caught without the tooling to deliver very fast responses. That's where webpack, and Babel, and everything that you're complaining about today come from.

Moderator: Katelyn, how do you feel about this fact that the code you see in front of you, we’re just getting further away from that code?

Sills: How do I feel about this fact that the code I see, we’re just getting further away from it? I know from some of my co-workers’ experiences in TC39, the JavaScript Standards Committee, that there is actually a lot of work being done to make the plain old JavaScript actually work better.

Node

We've seen ECMAScript modules let us use import rather than require in Node, with the experimental mode there allowing us to use that, and being able to use module-type scripts in the webpage, things like that. I think what we're seeing is that instead of transpiling these things and squishing them down into minimized JS, we're moving toward actually being able to use the real modules in the webpage and in Node. I may differ a bit here, but I think that's a really good thing.

Moderator: You think the browser ultimately will support us as developers so that the browser is essentially doing this work in the background for us?

Sills: Do I think the browser ultimately will support us as developers, so that the browser is essentially doing this work in the background for us? What we're going to see is that standardization is going to help us make this transition. All of the hosts, all the browsers, are going to be supporting these standard module types, so it's not just that Node has its own weird way of doing things; there are standardized JavaScript ways of doing it.

Moderator: Miguel, you have the benefit of coming to this world fresh, and not having suffered all the terrible ways of developing going back to the '90s. I remember when Netscape came out with server-side JavaScript in the late '90s; I looked at it and thought, "No way am I ever going to do this." Fifteen years later, I'm doing Node.

Without having the burden of history, what is your opinion? How do you feel about working with JavaScript, coming at it from a modern perspective?

Ramalho: How do I feel about working with JavaScript, coming at it from a modern perspective? I'll start off by saying that I've been using JavaScript for five years, so I've also got some bad experience with it.

As a matter of fact, I feel that today things are getting much better. I was very used to using libraries like jQuery for development; nowadays, things like query selectors are natively supported by JavaScript. Everything seems to be coming to a point where JavaScript is going to be much better than it used to be. It's better than it was 5 years ago for me; maybe for you, it's 100 times better than it used to be. I do feel that we are building a lot of things on top of other things, and losing focus on the lower-level stuff. It's a trade-off between the functionality we get out of the box and the things we can really understand. If you really want to, I think you can still develop things in JavaScript that you understand. Of course, it's a trade-off, and it's not going to be easy to get away from all the hassle of languages like CoffeeScript and TypeScript; they appeared for a reason. We'll see. I'm hopeful about continuing to use JavaScript. I'm not feeling it's a burden at the moment.

Moderator: I think this is the message from the panel is that things are getting better. The question is whether we’ll stick with transpiling or not. Can I call for a vote, actually? Do you want to see the transpiling tools, Babel, all that, stay as part of the ecosystem, or would you prefer for them to die away?

You can’t deny that there are certain advantages to the tool. You are dealing with a language that was written in two weeks. There’s always going to be gaps. Perhaps the way around this problem is to not use JavaScript at all. Perhaps the way around it is to start using WebAssembly and start using other languages. What do you think Gordon? As someone who can code in assembly?

Williams: As someone who can code in assembly, I think, for some things, it will be very useful. I would hope that people wouldn’t all start using WebAssembly.

WebAssembly

My thing is that if every webpage was written in C, the web would be a very different place and would be much more difficult to use, I feel. It depends why you're going to WebAssembly. If it's part of a bundling process where you develop in JavaScript and then effectively run it as an optimization step before you ship, as a form of minification, then maybe that's ok. It's another matter if it's a tool to get something running in a web browser that you wouldn't normally have done there.

I’ve been at companies where you have a big black box of code that some guy wrote 50 years ago before he retired, in Fortran. No one dares change it. In an ideal world, you would change it. You’d rewrite it. You’d make it better. That’s not what happens. You’ve got this BLOB. If it means that you can then run that BLOB in the browser, make everyone’s life nice and easy, and have it running very quickly and in a user friendly way, rather than running on the server somewhere with seconds of delay between requests, then it’s a big bonus.

Moderator: Eoin, you’re focused more on the serverless end of things. Where does WebAssembly fit into that? It feels redundant in that context.

Shanaghy: Where does WebAssembly fit into the serverless end of things, and does it feel redundant in that context? Yes, I would say so. If you're going to write serverless back-ends in Rust, you're going to do them in Rust and compile them natively.

JavaScript

WebAssembly is really an opportunity for people who aren't comfortable with JavaScript on the client to have an alternative. I think that's a good thing, actually, but I'd not like to see it become too popular, in that JavaScript is a very productive language that gives you a lot of freedom and a lot of developer creativity. I'd like to see WebAssembly succeed, but only in areas where it's really necessary to optimize hotspots within the code base: maybe within frameworks like React or Vue, where they have to do virtual DOM diffing and therefore have hotspots that necessitate some level of optimization to make applications run faster.

Moderator: That’s a browser specific use case as well. Katelyn, in terms of running third-party JavaScript code, if that code has been compiled down to WebAssembly, and that’s what you’re given to run, surely that makes your job more difficult to be able to make sure that code is safe. Or is it easier, perhaps?

Sills: Is it more difficult to run third-party JavaScript code that has been compiled down to WebAssembly and ensure the code is safe? Yes, it does make it more difficult.

There are certain security guarantees that you get at the JavaScript language level. For instance, you don’t have access to an object, unless you’re given access to it. JavaScript has unforgeable references, unlike C. In C, you could try to guess a pointer, and then maybe you’ve actually gotten it and you can get the data at that pointer. In JavaScript, you can’t do that. For Wasm, this is an area that I’m not that familiar with, but some of my co-workers are. I know there are certain JavaScript language guarantees that actually are not met at the Wasm level, if you try to bring everything down to that level. There’s a proposal, I think it’s called Wasm GC. Somehow this is connected to the garbage collection. I don’t quite understand that. There are certain proposals that are trying to fix that. I think we’re going to see if people do Wasm right, hopefully, we’ll be able to maintain the same security guarantees that we have at the JavaScript level at the Wasm level. That remains to be seen.

Moderator: Miguel, the interesting thing about quantum computing, and if anybody has actually looked into it, I believe there are some really interesting tutorials that take you through the mathematics behind as well, which is something that you have to learn. The binary nature of the whole basis of our ordinary programming languages is not really appropriate. You end up with this world where you have to deal with things like Qubits. Does having a compilation target like WebAssembly make your life easier?

Ramalho: Does having a compilation target like WebAssembly make my life easier? What I will focus on is that when you're handling things like qubits, usually you want to focus on using them for computation: you're writing an algorithm on top of them directly. What you can have is a phase before getting to that code that would resemble some assembly code, and you already have a few things like that. In terms of trying to put both things in the same bag, I think it will be complicated, although it might be interesting in a few years. A lot of these concerns about getting good standards and good performance are a bit like JavaScript: you build something that works and then you start building on top of it. I think we should wait a few years before I can answer that.

Moderator: You mentioned something interesting there, which is that JavaScript is all about building stuff on top of other stuff that people have done. One of the most amazing things about the JavaScript ecosystem, and about the Node ecosystem especially if you were there in the early days, was the phenomenal growth of NPM and how it completely took off. Before that, you had CPAN and various other artifact repositories, but the Node one just took off and went completely crazy. You got hundreds of thousands of modules, most of them quite bad, most of them with security flaws. We've had all sorts of interesting things happen, with developers removing themselves from the ecosystem and completely deleting all their code, or code being maliciously taken over.

We now have this really difficult dynamic where it’s very easy to participate as an open-source developer directly in the Node and JavaScript ecosystem. At the same time, you end up depending on hundreds of other people. Hundreds of other people end up depending on you, which raises not only security issues, but also raises issues around, as an open-source maintainer, how do you participate effectively in that ecosystem? I think it’s a problem that we haven’t yet solved, and which is really important to the future productivity of JavaScript developers.

This is a two-part question for each member of the panel. First of all, how do we deal with these huge dependency trees? Have we gone down the wrong path? Should we try and pull back and in some way reverse what’s happened? Or should we embrace it?

The second is, if you participate in this ecosystem, do you implicitly create for yourself a burden or a responsibility to everybody else in the ecosystem. If so, if you’re just doing this in your spare time at the weekend as a hobby, we’ve seen lots of people get burnt out from trying to be a good citizen. You have people who just open up their entire code bases, which lead to all sorts of security problems, or you have people who tightly control it, but then are working 80-hour weeks on a hobby. That’s not a hobby anymore. Gordon?

Williams: How do we deal with these dependency trees? In terms of being part of this dependency tree, and depending on a huge number of modules, I wonder whether, again, it's a sign that everything's moved so quickly and the tools haven't quite caught up.

NPM

Already, we've got ways for NPM to try to check for security risks. Perhaps there are other metrics it could work on. When you're searching for a package to do a specific task, you could actually look at the number of dependencies and whether those dependencies can effectively be merged with other packages that you're using. Because, yes, at the moment, if you want to solve a specific task, you're basically Googling for it, you find a few different packages, and then either you try them all out or it's hit and miss. If you had a good way of finding an efficient solution to your problem, in line with all the other stuff you're using, that would probably help.

Moderator: People have tried to do this, where you have some curation around the packages. One piece of pushback from the community around that has been that it excludes people: if you were there in the early days, your package is going to win. We see some interesting things happening to try and address that. Mikeal Rogers, who is one of the original Node people, wrote a module called Request, which was the first professional-grade HTTP request client. He has recently completely put that module to bed: it is now totally deprecated, and he says don't use it anymore. If you read his explanation, it is explicitly so that new modules that are better can start to become the prevalent ones.

It’s not so easy I think to say, there needs to be curation and there needs to be somebody who decides what the good modules are.

Williams: No. I wasn't necessarily saying curation, but some way of figuring out programmatically, when you're looking through a bunch of modules, what their dependency tree is like. Because often you just install one thing that you think should provide a very simple task, and if you actually look at it, it's installed 400 modules. If this goes across a whole project, you end up with this massive tree.

Moderator: I’m not going to let you answer just yet. That is fair enough. The problem is, that’s going to be true of almost every module. You still end up with analyzing a tree. This module might have a different tree to that module, but it’s still a very deep tree. I don’t know if that solves the problem.

Dependency Tree Security

Williams: People obviously do care about dependency trees, especially now that tools are checking them for security issues; you're more likely to have security issues the more modules you depend on. If you make that something people care about when they're selecting packages, then if you want your package to be more popular, you will try to reduce your own dependency tree as well. If you give people tools so they can actually see what's going on, then hopefully, over a period of time, people will start to change the way packages are made.
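The kind of programmatic dependency-tree inspection Williams suggests could look something like the following sketch. The data shape is hypothetical, loosely modeled on the JSON that `npm ls --json` prints, and the package names are invented:

```javascript
// Count the unique transitive dependencies a single install would pull in,
// by walking a lock-file-like tree of nested `dependencies` objects.
function countTransitiveDeps(pkg, seen = new Set()) {
  for (const [name, dep] of Object.entries(pkg.dependencies ?? {})) {
    if (seen.has(name)) continue; // already counted via another path
    seen.add(name);
    countTransitiveDeps(dep, seen);
  }
  return seen.size;
}

// A toy tree: installing two direct dependencies quietly drags in two more.
const tree = {
  dependencies: {
    'left-pad-ish': { dependencies: {} },
    'http-client-ish': {
      dependencies: {
        'tiny-util-a': { dependencies: {} },
        'tiny-util-b': {
          dependencies: { 'tiny-util-a': { dependencies: {} } },
        },
      },
    },
  },
};

console.log(countTransitiveDeps(tree)); // → 4 unique packages
```

A metric like this, surfaced at package-selection time, is exactly the feedback loop Williams describes: if the number were visible when choosing between alternatives, authors would have an incentive to keep their trees shallow.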

Moderator: Eoin, to bring us back to the code’s question. As an open-source maintainer yourself, and as someone who has to use a large number of Node modules for enterprise projects, how do you address this issue of having such a big dependency tree, and yet having to maintain a level of quality for your clients?

Shanaghy: How do I address this issue of having such a big dependency tree, and yet having to maintain a level of quality for my clients? The benefit of it outweighs the disadvantage. As you rightly say, the success of Node just ballooned with the proliferation of all these tiny modules, and small components in software are really effective. It makes it very easy to be agile in how you develop, swap components in and out, experiment, and innovate. The benefits are there. Ultimately, what I'd like to see is some content security policy where I can install a module and know what it has access to, in the same way that when you install an app on your phone, you have to explicitly grant it permission to access the file system, or the network, or whatever native resource. If I deploy a Node.js serverless function, I can give it access to very specific minimum privileges in a cloud environment. At the moment, it's actually quite difficult to say, don't give it access to the disk, or don't give it access to the internet, so it can't steal my environment variables, take secrets, and post them to somebody's email account.
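A toy object-capability sketch of the permission idea Shanaghy describes: instead of a module ambiently importing `fs` or the network, the host hands it only the capabilities it was granted. This is not a real Node mechanism (Node's own experimental permission model is configured with CLI flags), just an illustration; every name here is invented:

```javascript
// The host decides up front what this module may do. Only a config reader is
// granted; no file-system write or network capability is included.
const capabilities = {
  readConfig: () => '{"debug":false}',
};

// An "untrusted" module receives only the granted capabilities as an
// argument, rather than reaching for globals or require() on its own.
function untrustedModule(caps) {
  const config = caps.readConfig();
  if (caps.fetch) {
    // Never reached: no network capability was granted, so the module has
    // no way to exfiltrate environment variables or secrets.
    return 'could exfiltrate secrets';
  }
  return `loaded config: ${config}`;
}

console.log(untrustedModule(capabilities));
// → loaded config: {"debug":false}
```

The phone-app analogy maps directly: the capability object is the permission grant, and anything not on it simply does not exist from the module's point of view.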

Moderator: I like this idea. In this way, you’re effectively whitelisting what modules can do. At least one source of frustration I have, and I don’t know, Katelyn, perhaps you know if this is still the case. There is a way to create an internal VM in Node itself. If I remember correctly, if you read the documentation it says, “Don’t use this for security purposes.” Does it still say that?

Sills: Is that the built-in Node module VM?

Moderator: Yes.

Sills: If I read the documentation, does it still say, "Don't use this for security purposes"? I don't have any particular expertise in that module. I will say that in JavaScript there is already the concept of a realm: each webpage that you run JavaScript in is a realm, and JavaScript that operates in one webpage can't affect JavaScript that operates in another. There are a number of proposals going through TC39 right now that are trying to take this idea of a realm and make it actionable for security. You might be able to create a new realm, or an even lighter realm, that allows you to execute third-party code safely. People are taking that concept, bringing it to real life, and allowing people to use it.

Getting Security Right

To go back to your earlier question about the dependencies, I think the number of JavaScript dependencies that we have, and the fact that NPM has over 800,000 different packages that you can use, that’s a fantastic thing. We should be proud of that. I think the fact that we can’t see the entire dependency tree, that’s a problem security wise. If we look at how real life works, I don’t know how the bread that I had at breakfast was made. I don’t know who made it. I don’t know where it came from. If we get the security right, that’s fine. I shouldn’t have to know those things. What we have to do is we have to create the environments that allow us to get the security right.

Moderator: I think both of you actually touched on an interesting point. Because it strikes me that perhaps we should invert the problem. Perhaps it’s not so much a consequence of having used JavaScript as a consequence of software development, moving towards the idea of finally having a composable architecture where you can pull together lots of small modules. Perhaps JavaScript is just the canary in the coal mine because it’s so easy for us to have small modules.

If you think about the wider question, surely, this applies to all languages really, at the end of the day. In the Java world, which is the only other world which I have a great deal of experience in, that problem is solved through curation. You’ve had Apache, and people like that, and IBM. Those were the guys that you trusted. Speaking for myself, I still apply that mentality in the world of Node. I don’t know if any of you are familiar with the Hapi.js ecosystem, and that web server? I just trust that guy and all of his stuff, and I use as many of those packages as I can. That’s the way I reduce the problem space for myself. That’s not optimal. I think we should perhaps think about this in a little bit of a wider sphere. Maybe this isn’t just JavaScript’s problem. What do you think, Eoin?

Shanaghy: This isn’t just JavaScript’s problem. I think we should perhaps think about this in a little bit of a wider sphere. In theory, the same problem applies to Python modules, but you don’t have as many small modules in Python. It hasn’t really manifested itself in the same way. I agree. It is a general problem.

Publishing Public Open-Source JARs

I think Java has solved it with Maven, but Maven also made it very difficult. I've tried to publish public open-source JARs in the last couple of years, and it's amazing how much friction there is. People will just be discouraged from publishing. I think we have to be permissive in allowing people to publish, and then encourage people to be restrictive in how they consume. That's really the way forward.

Moderator: Katelyn, I just want to get your take on this.

Sills: What part specifically?

Moderator: In terms of the small module approach not being specific to JavaScript. This is actually a wider trend in the software world, and in particular, how does that apply to security?

Sills: How does that apply to security in terms of the small module approach not being specific to JavaScript? We’ve definitely seen what they call supply chain attacks, I think in other languages as well. There might have been a Ruby gem attack. Python, we mentioned.

JavaScript – The Canary in the Coal Mine

I think you're right that we're seeing it from the JavaScript side, because JavaScript is so widely used and has so many packages. It is the canary in the coal mine, I think. One of the great things about JavaScript is that it already has this clear separation between the pure computation part and the outside world. A lot of languages just don't have that separation. Anything that you get from the host, like window, or document, or something like that, you could potentially just cut off all of the access to those things. Then your JavaScript doesn't have access to the network, or it doesn't have access to the file system, or the things that are really dangerous. I've heard that in Java, for instance, it's really hard to make that separation because things are too embedded. We actually have something in our favor in terms of JavaScript and security.
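The "cut off the access" idea can be sketched with a deliberately naive trick: evaluate third-party code inside a function whose parameters shadow the dangerous globals, so references to them resolve to undefined. Real proposals such as SES and compartments are far more thorough; shadowing alone is trivially escaped via `globalThis`, so treat this only as an illustration of the principle:

```javascript
// Naive sketch: shadow a few host capabilities so the untrusted source
// cannot reach them by name. A real solution must cover the full global
// object, prototypes, and `globalThis` itself.
function confine(src) {
  const fn = new Function('fetch', 'process', 'require',
    `"use strict"; ${src}`);
  return fn(undefined, undefined, undefined);
}

const sum = confine('return 1 + 2;');              // pure computation still works
const fetchType = confine('return typeof fetch;'); // the capability is cut off

console.log(sum);       // 3
console.log(fetchType); // 'undefined'
```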

Moderator: Miguel, after having heard everybody. What’s your take on this?

Ramalho: Are we over the edge on using modules? I think we are. I don't think that's a bad thing. What I think happened is that JavaScript became something so worked on, with so many people behind it, that it became very tempting for big companies to use. If I use it on my personal projects, I don't think a security flaw is that problematic, or at least not as much as for a company deploying something in production.

I think it’s good, but we do need some way of curating this. We need to do it in a scalable way. Because we cannot have someone go through 8,000 packages trying to find if they have security flaws, and so on. There’s a need for something that’s scalable, and that’s a way of curating it.

Open-Source Developer

As an open-source developer, I find that, at this point, open-source developers can only do it for the community. They cannot do it to make money. You can get some incentives. GitHub, for instance, has a sponsorship program now, which is supposed to provide some backing. I don't think it's working as they intended, or at least not at this point. One other thing that GitHub does very well, I think, and this happens with Node modules, is that for each repository they say how many times it was used, or how many other repositories reference it, which is also a good way of curating things. You get a good sense of how many people use it, and how trustworthy it might be, even without a standardized way of auditing for security flaws and so on.

Moderator: I think the message for me coming out of this discussion is there’s a tooling gap here. It’s an interesting space for somebody to step in and start helping us solve this problem.

Last question, what is going to happen to JavaScript in the next 10 years, Gordon?

Williams: What is going to happen to JavaScript in the next 10 years? I am slightly biased about it, because I was brought to JavaScript by the fact that it had a very clear grammar, almost like Java at the time, where you could pretty much have an A4 cheat sheet of everything you needed to write code: not how to use the built-in APIs and stuff, but how to actually write an algorithm in it. It feels like every year there's a new version with more features. Early on, those features really filled a big gap; they let you do something that you couldn't previously do. Recently, I feel the features are just adding extra fluff.

Moderator: [inaudible 00:31:11] parser, don’t you?

Williams: Yes. I have to deal with all of this. Every year, I have to deal with all the new stuff, which is often written without much of an eye to the people who are writing the parsers. Often, it's not very efficient to parse, which every year makes the compiler writer's job harder and slows down the interpreter as it tries to parse all these potential different grammars. For me, that's a slight concern going forward: the bloat of the language past what was needed, especially as it gets to the point where a lot of people are transpiling. They're not really using all these extra features; they're just using the base that's needed, which is basically the output.

Moderator: So you don't predict, you rather hope, that people could just leave it alone. It's all there now, stop.

Williams: Yes. I predict it will keep getting bloated, but I really hope it stops.

Moderator: Eoin, where do you think JavaScript is going to go?

Shanaghy: Where do I think JavaScript is going to go? I don’t know about 10 years. I feel we’ve peaked in terms of JavaScript’s ascendancy. I don’t think it’s necessarily a bad thing. It’ll continue for the next 10 years in browsers for sure, but I think more people will be writing TypeScript than JavaScript ultimately.

TypeScript

It’s not something I do myself, actually, very often. I like dynamic languages. I like the frictionless nature of it, as long as you’re writing small components. TypeScript is useful when you’ve got large projects, again, which is a little bit of a red flag. Sometimes you have to do it. There are good use cases for it. I think TypeScript has been very well designed, and it’s an unstoppable force at this point. The tooling is fantastic for it. I really see, just as you had the dominance of Java in the early 2000s, ultimately, other languages emerge, and it becomes more of a level playing field. JavaScript won’t dominate as it has in the last 10 years.

Moderator: History doesn’t repeat but it does rhyme.

Katelyn, where’s JavaScript going to go?

Sills: Where’s JavaScript going to go? I think we were right when we were saying earlier that JavaScript had this attitude of, you put it out there, and then you clean it up afterwards. I think we’re going to see an increase in the amount of cleanup, and refinement, and learning about best practices.

Don’t Break the Web

If you look at what the JavaScript Standards Committee was trying to do, their main imperative was: don't break the web. It's really hard to clean things up after the fact when you don't want to break what people are doing right now. They were able to introduce JavaScript's strict mode. You could choose to put the "use strict" directive at the top of your file, and that would enforce certain restrictions on what your code could do, rather than the sloppy mode that was the default. Doug Crockford has this great book, "JavaScript: The Good Parts." You can take this very messy language, and you can extract the parts of it that are really good. You can enforce certain restrictions on your own code and the code that you use. I think we're going to see certain parts of the JavaScript ecosystem become more refined and really embrace better practices.
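Strict mode's restrictions are easy to demonstrate: the same undeclared assignment that sloppy mode silently accepts (creating an accidental global) becomes an error once the "use strict" directive is in force.

```javascript
function strictAssign() {
  'use strict';
  try {
    // Assigning to an undeclared variable throws in strict mode;
    // sloppy mode would silently create a global instead.
    undeclaredVariable = 1;
    return 'assigned';
  } catch (e) {
    return e.name;
  }
}

const outcome = strictAssign(); // 'ReferenceError'
```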

Ramalho: What is going to happen to JavaScript in the next 10 years? I’m going to have a positive view now. I think from what I’ve seen in the past years, JavaScript is really taking over things.

Libraries for Back-end Development

You see libraries, for instance NestJS, and other libraries for back-end development, that are completely replacing what we used to do in PHP and so on. Then you have a lot of libraries that are open-source from Google and Facebook, like React, and Angular, and so on. Those are really taking a good share of the market. Then you have multi-platform development, which you can do in JavaScript. Although it's not as fast as native code at the moment, I think it's getting very close to closing that gap. That's the point. You also have libraries for doing machine learning with JavaScript, and quantum computing with JavaScript. The question is, what can't you do with JavaScript? If the answer is nothing, why should it stop, or why would it? I'm confident that it will stick around for the next 10 years.

Moderator: Full speed ahead, 100 miles an hour.

Would anyone like to make some comments on the future of JavaScript, or indeed any of the other points that the panelists made.

Participant 1: I'm actually from outside the JavaScript ecosystem. One thing I'm always wondering is: why don't you guys have a standard library? Wouldn't that solve a number of issues: proliferation of packages, [inaudible 00:36:17], and curation?

Moderator: Yes, I’d love to have a standard library. What do you think guys? Gordon?

Standard Library

Williams: I think to a certain extent there is a bit of a standard library, like all the functionality in strings and regular expressions.

Moderator: It’s rubbish. We need more.

Williams: I don’t know.

Shanaghy: How do you respond to that? I go the other way. Part of the success of it was that we didn't have a standard library, and that you had this composable set of modules around it. I've never really felt the lack of a standard library to be a problem. You have a lightweight core, and then you pick and choose around it. That's my opinion on it. As a counter-example, Golang came out with a very complete and robust standard library, and that drew a huge response. It's interesting to see how that dynamic works.

Participant 1: Looking at Left-pad, though, that's not what happened.

Sills: Left-pad?

Shanaghy: Left-pad? Yes, that’s true. It might be something else. I don’t know.

Sills: Yes. I think the example of Left-pad is a really good example. For those of you that don’t know, this very widely used package just got taken out of the ecosystem, and then everything that used that was just broken. I think that means that we need some redundancy. We’re missing the infrastructure that you would actually want to have in an ecosystem like that. I think the fact that it’s not centralized is actually a good thing. Ideally, what you would want is a competitive landscape of different packages that you could choose from. Maybe if Left-pad wasn’t there anymore, you automatically already had a backup, or something like that.

I think it goes back to what Gordon was saying about having more information, or some way of sorting through the packages, so that your choice in the competitive landscape is actually an informed decision and not just, "This package looks like it'll do what I think I want. Let me install it, and let's see if anything bad happens." I think we have a long way to go in terms of actually making the package infrastructure work correctly. I don't think centralizing all of the functionality is the way to go.
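One footnote to the Left-pad story: the utility itself later landed in the language as `String.prototype.padStart` (ES2017), one way the ecosystem gains the redundancy discussed here, with widely needed helpers migrating into the standard built-ins.

```javascript
// What the left-pad package did is now a standard string method.
const padded = '5'.padStart(3, '0');        // '005'
const untouched = '12345'.padStart(3, '0'); // already long enough: '12345'

console.log(padded, untouched);
```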

Ramalho: I would actually add that I think it makes sense to have something of a standard library. For instance, what makes Python very successful is that it already has a lot of things you can import without installing anything. I don't think you can have everything in it; you will still need these packages, even with a standard library. I'm not opposed to it. I actually think it would be a good thing. You still need to leave some space for people who build things that are redundant with what's already in the library, for instance because they want to focus more on performance, or something like that. It's not something bad from my point of view, even if it's not something that's going to make JavaScript much better than it is at the moment.

Participant 2: I think it's not a problem of the modules or the ecosystem. You don't always have to take an extra dependency; you could write everything in vanilla JavaScript and write your own modules. It's just that you're lazy and you just want to take a module. That's a choice you make first. Then it's your responsibility to evaluate those modules, for safety and so on. I believe that's the tendency we're hearing.

Moderator: It comes down to a trade-off between getting the stuff done, but taking personal responsibility for modules that you import. That is a valid point.

Do you feel it's actually practical? As a JavaScript developer, I often feel under pressure to deliver, and it feels very hard to assess the dependency tree.

Williams: Do I feel it's actually practical to get stuff done and take personal responsibility for modules I import? I think your point about JavaScript giving you flexibility is really good. I feel it's a little bit like C, in a way, or C++. There are a million ways to shoot yourself in the foot with C++. If you are a company that's developing a project with it, you are almost certainly going to have coding standards, otherwise you're going to end up with a completely unmaintainable tree. I think that's probably something that just has to happen with JavaScript as well. You have to have standards about which features of JavaScript you use and how you choose your modules, even if right now that's only because of licensing concerns. It's very powerful. As long as you don't use everything that's available to you, you can actually produce some really good maintainable code in a very productive way.

Shanaghy: Do I feel it’s actually practical to get stuff done and take personal responsibility for modules I import? I think the free lunch is over in terms of consuming modules and forgetting about them, and thinking it’s the open-source maintainer’s responsibility to give you perfect software. The tooling is there, actually. You got NPM audit, you got Snyk, security audits are available. You also have to do all the sensible things. Security problems are just going to increase. Multifactor Authentication on your NPM repo, using a proxy for NPM so that you’ve got your own artifact repository as well, they’re responsibilities that we’re just stuck with now. We just have to adopt them.

Sills: Do I feel it’s actually practical to get stuff done and take personal responsibility for modules I import? Part of my background is actually in food safety. There is an interesting analogy here. Imagine if someone said, “You can eat the food that someone else sells you but if you really want to be smart you got to grow the wheat yourself and then make the bread. Then you got to control the whole supply chain.” People would say, “That’s so much work. You must be insane”.

Ecosystem of Code

I think we’re building up these packages. We have this rich ecosystem of code. What we need to do is add the layer of civilization on top of it. Part of that is maybe more curation, more auditing. I also think, because it is code, because it is not physical items, we have a unique situation. We can actually start out with perfect security. We can start out by putting things in the realms and the compartments that I was talking about earlier, and not allowing any authority to get in except what we explicitly grant. Because it’s code, we have this unique opportunity to try to do things differently, and to actually be able to use other people’s code safely.

Ramalho: Do I feel it’s actually practical to get stuff done and take personal responsibility for modules I import? If you want to do an MVP, if you want to do something really fast, or if you want to compete with a startup and so on, and you’re using JavaScript, it’s not going to be feasible to develop things from the bottom up. I agree with you, this is the way to go. Although there are some trade-offs. There are some things you can do. There are some things as well that are going to help the situation.

Summary

Moderator: One of the really interesting takeaways for me was this idea that perhaps we have entered a brave, new world of small modules. It doesn’t just apply to JavaScript. It’s a universal problem that I think is going to come to all software engineering. We have to grow up and be civilized.




Presentation: My Team Is High Performing But Everyone Hates Us

MMS Founder
MMS Stephen Janaway

Article originally posted on InfoQ. Visit InfoQ

Transcript

Janaway: I'm going to talk to you today about "My Team Is High Performing… But Everyone Hates Us." As the slide says, I'm the VP of Engineering at Bloom & Wild. We are a flowering plant gifting company. What we do that's a little bit different is that a lot of our range is what we call letterbox flowers. If you look at the picture in the middle, that's how you get your flowers from us: in a specially designed box, and you can then arrange them however you like. You can make them look like that beautiful bouquet there. The great thing about that box is it fits through a letterbox, which means if you want to send flowers to somebody, they don't need to be in their house; you don't need to phone them up and go, "Surprise. Please be in for something which might be flowers." I run the tech team there. We are 24 people in a company of 100. We are a very technology-focused company, and I love that.

I’m not here to tell you about flowers. I’m not here to tell you about Bloom & Wild per se, either. I’m here to tell you about a short period in my life, because conferences are littered with stories of perfection. Whether that’s teams that delivered on time, challenges that were overcome, products that were perfect. I often wonder why that is. Is it because, as humans, we naturally want to tell stories that reflect positively upon ourselves? Is it this idea of kind of culture inflation a little bit, particularly in startups where, yes, you’re selling your product, particularly when you’re pitching, say, to VCs, but you’re also selling your people and you’re selling your culture, and you want that to appear positively? I thought I would redress the balance. I’m going to tell you a story about a high-performing team, a short period of my life, because high-performing teams don’t last forever. They make mistakes, just like every team does. Sometimes those mistakes can threaten the life of the product or the life of the team.

Let me tell you a little story about a high-performing team, a short period of my life, and a team where we felt like we were going to change the world. This is a story of luxury fashion. It's a story of catwalks, and fashionistas, and influencers, and celebrities, but above all, it's a story about what makes a team high performing, and what could derail them. When I was putting this together, I thought, "Should I name the company that it's about? Should I not?" I tried not to. Maybe that's not the done thing. I failed completely when putting this together, so this is about the Net Set, powered by Net-a-Porter.

This is what the Net Set is, or was. It was a social network. The team I ran, we built, from scratch, a social network. It did all the things that social network would do. You had a profile, you could upload content in an Instagram-like feed. You could group people together, you could comment on things, you could love things. We were the first team in the company to write an Apple Watch app when the first Apple Watch came out. This is kind of 2015. More importantly, this was a brand new team. We were a brand new codebase, we were starting from scratch, we could decide what we worked on, how we worked on it. We were even in a different floor of the office, away from everybody else, getting on with launching our social platform.

There was a really cool bit of tech in here, which is, let’s say you saw me and you thought, “Steve, that’s a wonderful blue polo shirt you’re wearing.” You could take a picture of it, upload it to the Net Set, we would image match it against Net-a-Porter’s catalog. We would show you all the things you could buy from Net-a-Porter that look like my blue polo shirt. You could then buy them from within the app. It was right at the beginning of the kind of age of social commerce before the big players were really doing this. We thought it was pretty fantastic. Frankly, being in that sort of situation is great. We loved being a part of that team.

Think about the best team you’ve ever worked in. What made that team great? Let me tell you what was great about being part of the Net Set. We were sponsored by the company founder. This made getting things done very easy. If we needed a floor of the office to do our Net Set thing on, it made getting that floor of the office very easy. We had a very clear deadline and a very clear focus. We had to launch this platform with a big party on our founder’s 50th birthday, which is like the hardest of hard deadlines I ever worked towards in my life. That doesn’t move. It’s kind of important, very important to her, very important to us, but great when you’re trying to build a high-performing team, because you’ve got a very clear deadline and focus to build around.

We were starting from scratch. We had no dependencies. We had no debt at all. We could work however we liked. This meant we could build a great, safe team spirit within our team. We could cluster around something we all really believed in. This meant we shared, we supported, and we self-organized. This is one of the easiest teams I’ve ever managed, because effectively it kind of ran itself with a little bit of kind of tweaking behind the scenes. All of this helped us have a great culture and really perform.

This is the sort of culture we were trying to build within the team. One small, entrepreneurial team against the world. We were sat on our own floor of the office, building our brand new product that the founder loved, no technical debt, no dependencies, working how we wanted. We involved everybody in all the decision making. We were a classic case of autonomy, mastery, and purpose, as Dan Pink would say. It was fun. I mean, it was a heck of a lot of hard work, but it was really fun. We met our deadline. Life was pretty great. It’s not often you get to put an update like this on your social network platform of choice. I don’t know whether talking about social network on another social network is a bit meta, but I’ve been up since 5:15 a.m. Who can blame me?

We went big when we launched. We launched with a massive party. There were Premiership footballers, fashion designers. Matthew Williamson stood on the end. For anyone who don’t know who Matthew Williamson is, he’s a very famous British fashion designer. Somebody made us a cake with the dev team on it. That was a good cake, actually. I should have put things to scale in that picture.

Ultimately, we launched. The press loved us. “Has Net-a-Porter found the holy grail of 21st-century fashion?” “It gives Instagram a run for its money.” We thought we were fantastic. We were going places. This was going to be the future of luxury fashion. It was all good, right? Product was great. Team was awesome. We launched on time. I reckon that’s probably it, right? I’m joking. I bet you’re probably thinking I got you in here with a clickbait-y style title. I’ve told you about how it was all super successful, and we changed the world. The history of conferences is littered with stories of greatness.

So What Happened?

What actually happened? You might be wondering why we’re not all Net Set users now, perhaps. You might be wondering why the flowers don’t look quite so happy. There was a merger, and our company founder left as a result. It was at this point we realized that we had all of our eggs in one basket, which was not a good thing. Autonomy is great, but total detachment from a technology, from a business, and also from a geographical perspective is not. We built one small team against the world, and that made everybody else jealous. Effectively, what we’ve done is built ourselves one big silo. This was not good. It felt like a bit of a cult of the Net Set to other people.

We found maintaining that entrepreneurial spirit we built once we launched a bit of a challenge. Our attempts at reintegrating back into the company, a company where codebases could be 10 years old, for example, were hard. It was a company with things like change-advisory boards, for example. Whatever you think about change-advisory boards, I'm with Steve Smith on this: they're risk management theater, so we just didn't bother going because, "Don't need them. We're the Net Set team."

Six months after we launched, the thrill of shipping to market had faded, and the sort of people who really got that thrill out of it were starting to get bored. We started to find they didn't quite fit so well when we needed to scale up our processes, and scale up documenting things, and generally work in a slightly more process-driven way. We stumbled on for a bit. Some people left, including me. The Net Set got rolled up, merged in with another team. Then eventually, a couple of years down the line, the whole thing got axed. This is what you get now when you google the Net Set. I'm not sure about the sports shop, though. What is The Dog's Meow? I'm going to Salt Lake City just for The Dog's Meow.

As one might imagine, you’d be thinking, “What could I have done differently?” It’s human nature, right? When something doesn’t go right, you look, and you analyze, and you think, “How could I have made it so that thing didn’t hurt? If only I’d done this, that would have happened.” Hindsight bias kicks in massively. The problem with this is you look at the negative stuff. I really firmly believe the first thing you should do when something goes wrong is look at the things you did that went well. Think about how you can amplify those, because it’s human nature, in times of failure, just to look at that negative.

There’s a lot of things we did in the Net Set team, which I’ve taken and I’ve used in other companies, I use in Bloom & Wild, that’s helped make the teams that I run successful. Don’t forget, in times of hardship, when things go wrong, your teams do some really awesome stuff, and you shouldn’t forget that. Remind yourself once in a while.

We Were High Performing – How?

Why were we high-performing as a team? Firstly, there was room for people to grow. This was a team that started small and then grew. We tried to bring people along with us. We involved everyone in decision making, and we set clear, shared expectations of people within the team. We recognized that this team, like any high-performing team, was greater than the sum of its parts. There are some people in your team who, when measured on output alone, will not look as strong as others, but they're there gluing the team together. They're often doing this without you actually realizing.

It was hard to join this team, we set our hiring bar high. We protected our culture, whether the culture that we built was the right one or not, let’s say, but it was hard to join. We ensured the team was diverse, whether that’s diversity of gender, of sexual orientation, of ethnicity, of age. Diversity brings diversity of ideas. We had feedback loops, we built feedback loops into our culture. We measured our health regularly, and we course corrected. This was really important. From this comes a little bit of, I guess you could call it my playbook for happy people in happy teams. Set clear expectations in your team. Make your hiring and your joining great. As much as you can, make your own tech choices. Share your learning. Measure your health. Then check in regularly.

How do we do this? Firstly, setting clear shared expectations is all about charters for us. A team charter effectively sets the rules of the game for a team. It’s effectively the rules the team live by, written by the team for the team, and shared outside of the team. So this is an example up here of some of the stuff we had in a charter from the Net Set team. We made this together. We held ourselves accountable to this. It’s sometimes hard if I say to you, “Give me the characteristics of a high-performing team, and tell me how you’re going to hold yourself to them,” you’d probably go, “That’s a bit hard.” If I go to you and I say, “Tell me about the worst team. Tell me some things about the most low-performing team,” you’d probably find that a little bit easier because you’re all humans.

We’ve put this together using a technique called reversal. We got the team to tell us all the bad stuff. Like we never turn up on time, for example, or we get nothing done. Then we got them to reverse those statements. Those formed the basis of our charter. We put this charter together, we stuck it on the wall. All the teams knew what our expectations were of them and theirs of us. As a team grew, we revisited this.

Secondly, I mentioned it was hard to join the team. If you want a high-performing team, make your hiring and your joining great. Make sure you spread your net wide. Make sure you're not consciously or unconsciously discriminating, whether that's, for example, through the gender coding of the words in your job descriptions, or whether your job descriptions are full of bullet points, which we know appeal to men and don't appeal quite so much to women. Whether you've got blind reviewing of, say, take-home tests, or whether you've got take-home tests at all; some people have commitments and can't do them. Make sure you raise your brand first. Always be raising your brand, because then when you need to hire, people know who you are. Make sure you take the people in your team through that idea too: teach them how to sell you, ultimately.

Then when people do join, make it easy for them. What we've started doing at Bloom & Wild is our onboarding board. This is an idea I got from Melinda Seckington; I saw a great talk she did at LeadDev last year. Everyone who joins gets an onboarding board. It sets their expectations, and it gives them a sense of progress as they move things through the board and become more and more experienced. When somebody joins, do that charter exercise again, because your team has changed and that new person's ideas are super important.

We were lucky in the Net Set: we had no technical debt and no organizational debt. We were a totally fresh start, a new team, and we could choose our technology. I know we were very lucky there, but as much developer agency as you can give will pay back. It helps people's engagement; there's a direct link between that agency and engagement itself. It helps your hiring, it helps your retention. It can also go horribly wrong if you allow everybody to pick any technology they like. We started writing the Net Set in Rails, and then decided three months down the line that wasn't the right thing to do, and the back end should have been in Scala. We started again writing it in Scala, and wasted three and a half months when we had a very hard deadline. In hindsight, probably not a good idea.

What Net-a-Porter did really well was this idea of a golden path: one technology path, supported particularly by the shared teams in the business. You can follow that if you want to, and it's easier to follow it, but if you want to go off on your own, you can go off on your own. [inaudible 00:19:37] you own production, ultimately, for your apps or your technology solutions. This gave enough agency.

There’s downside sometimes to giving too much agency and that is you lock up information in people’s heads. We have one guy in the team who was like the expert on our back end platform. That caused us problems when he wasn’t there. It caused us problems when he decided to leave. Since then, I’ve always made sure teams that I run have shared learning built into them. The first step being share what you want to learn. This is another example from Bloom & Wild. We have a shared Trello board of ideas and areas we want to learn so that we can pair together and get people learning together.

We make sure that we allow space to learn. We have something called Tech & Share, which is regular time with no agenda: anyone can present anything, and anyone can run any sort of sharing or training that they want. I'm very much with Richard Feynman, the very famous physicist, on this: you do not know something unless you try to teach it to somebody else. I think he said teach it to a child. We don't have that opportunity at Bloom & Wild, or Net-a-Porter. I've tried to teach my kids about what I do, but they don't seem to be very interested, so there we go. Allowing those spaces for training and Tech & Share is super important, whether it's through communities of practice, through persuading people to come and do stuff like this, or through talking at meetups and sharing that way.

I mentioned feedback loops before. Measuring health as a team is super important. We started this in the Net Set, and we do it now. Who's heard of the model that Spotify use for health checks? We use a slightly modified version. This is the Bloom & Wild version; as you can see, it contains flowers, obviously. This is effectively an activity you can run that really helps focus coaching efforts. It's not a maturity model or some sort of competition between teams, or a way of measuring one team against another. It allows the team to tell you how they are and where they're failing, and it gives you the opportunity then to work with them to figure out what they could improve.

We run this every three months. We have a deck of cards, and I read out what's on each card. Each member of the team has a deck consisting of a green card, a yellow card, and a red card. If they agree with the positive statement, and they think we're doing that most of the time, they hold the green one up. If they agree with the negative statement, they hold the red card up. If they can't make their mind up, and they want to sit on the fence, they hold the yellow one up.

Because we do this every three months, we can look at the deviation from the previous time we've done this health check. Then we can say, "Let's look at the two that have got better, because there must be a reason why they've got better. Let's look at the two that have got worse, because there must be a reason why they've got worse, and we should do something about it. Then let's look at one in the middle, because why is everyone still ambivalent about it?" Then we form little groups, which come together around each of those five areas, and they come up with some actions on how we can improve. It's empowering to the team.
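That selection step (two biggest improvements, two biggest declines, one ambivalent item in the middle) is simple enough to sketch in a few lines. A minimal illustration; the question names, the green/yellow/red weighting, and all the scores here are invented for the example, not taken from the talk:

```python
# Hypothetical health-check results: per question, the team's average vote,
# scoring green=1.0, yellow=0.5, red=0.0, for two consecutive quarters.
previous = {"teamwork": 0.8, "delivery": 0.6, "fun": 0.4, "learning": 0.7, "support": 0.5}
current = {"teamwork": 0.5, "delivery": 0.9, "fun": 0.6, "learning": 0.3, "support": 0.5}

# Deviation from the previous health check, per question.
delta = {q: current[q] - previous[q] for q in current}

# Rank questions from most-declined to most-improved.
ranked = sorted(delta, key=delta.get)

declined = ranked[:2]                  # got worse: find out why, do something about it
improved = ranked[-2:]                 # got better: find out why, learn from it
ambivalent = ranked[len(ranked) // 2]  # stuck in the middle: why the fence-sitting?
```

Each of those areas then gets a little group and some actions, as described above.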

Here’s an example. “We are a totally gelled super-team with awesome collaboration!” This one, right down the middle. We thought, “Better have a look at that one.” We did. We got the team together. This particular team loves doing things in Lego, if you’ve heard of Lego Serious Play. We got them to build their perfect team out of Lego. Then we got them to effectively come up with some actions on next time that particular health check question about teamwork is asked, how it might score more positively.

The final part of the happy teams playbook. That was about measuring team health; don't forget about measuring individuals' health too. When I say measuring individuals' health, I mean how people are feeling. At Net-a-Porter we had a review timeline a bit like this. Does it look familiar? A review every year, or a review every six months if you're lucky. We'd set some objectives at the beginning of those six months, then we'd have a review at the end, and the notion is that stuff happens in the middle, probably.

Here's what actually happens in the middle: we get excited about setting goals, and then we forget about them because life gets in the way. Everybody gets asked to give 360 feedback on everybody else at exactly the same time, which, when you've got a team of about 16, gets super disruptive. The manager then ends up having to do 16 performance reviews all at the same time, which again is super disruptive. We don't do this very often, so we get anxious about it. It's natural, and it's really demotivating. I have seen this as a factor in people leaving high-performing teams as well.

In the spirit of the Net Set, in the spirit of do it however the hell you like, even though your company works in this way, we decided to make a bit of a change. We did it because of this. Feedback is always better when it’s timely, when it’s given in the moment, when people have an opportunity to course correct. Slightly selfishly, because I was managing 16 people at the time, I also did it because of this. It’s always better when people prepare for their catch-ups with you.

We took a lot of inspiration from Atlassian. We started to run monthly, aligned, themed sessions. Everybody in the team would have the same themed session every month. Their goals would be shorter, so they would only last from one of those sessions to the next. We got people to fill in a form telling us how they were before the session, so we could spend the 30 minutes of the session talking about them, rather than going, "How are you?" "I'm all right," and then afterwards you suddenly think, "No, actually, I wasn't all right, but he put me on the spot and I hadn't really prepared. I haven't got the most out of this." Effectively, what we were trying to do here was take a performance review process and chunk it up into timely, bite-sized pieces of feedback. We still operated within the six-month cycle, because HR told us we had to, and everyone in the team absolutely hated that. We asked questions in these sessions.

We have one called removing barriers. We say, "What barriers have you encountered to your work over the past few months?" For example, at one stage in the Net Set, everyone was telling us the build pipeline was too slow, so we knew we needed to invest in the pipeline. We have one called love and loathe: "What have you loved over the past few months? How can we do more of it? What have you loathed? How can we do less of it?" One looks at career: "Where do you want to head in your career? Has anything changed since last time?" Sometimes it's just nice to get feedback.

The timeline looks a little like this. We do this in Bloom & Wild as well. I’ve actually written about this. These happen at different times throughout the year, but everyone gets the same one. Effectively, what we’re doing is we’re helping engage the team. By doing this often, we’re helping to remove the anxiety and the demotivation. We’re introducing short, small feedback loops.

Above all, frankly, keep it fun. If there's one thing you do for a team, keep it fun. This is a Christmas retro we did with beards and Santa hats and dodgy Christmas jumpers. Doesn't quite go with the really nice-looking office, does it?

Effectively, this is what made the Net Set high performing. These are the things that I’ve taken on to other teams. Don’t forget about the good things, especially in times of failure. Meanwhile, let’s get back to our story.

Recap

A little recap. We were this team sponsored by the company founder. Starting from scratch, with no dependencies and no debt, we could do whatever the heck we liked. That meant we could build a really great team spirit. We built a "one small team against the world" culture within a much larger company. Then the founder left, and some of our people got a bit bored. Everyone hated us because of that one-small-team-against-the-rest-of-the-company approach. That meant maintaining that spirit was challenging. Our product ultimately got axed.

We could, and we should, have recognized that there would be change. We were bringing something to market quickly, with a team that was working very hard, and we were a bit blind to change. We didn't open up to the team about the fact that there would be change. We didn't work with the team to understand what that change would look like once this thing launched, and that's a big difference.

I’m sure everyone’s seen this before. It’s kind of the standard change curve thing. We believe, as humans, we go through change in roughly this way. There’s a few subtleties to this that I wish I had explained to particularly people in the Net Set team. You can move backwards on this, as well as forwards, this isn’t linear. You don’t just start in denial and then suddenly move to commitment after a while. Something may change that may move you backwards. Maybe you might work really hard and then launch a social network, for example. It’s all about making this okay, and making this clear to people that it’s okay to feel anxious about change.

Because as any product ages, the team changes. We let people sometimes get a little bit frustrated in their roles, for example. I can’t remember who told me this quote, but it’s a really great one. Your role as a manager is ultimately to get your team member ready for their next role, whether that next role is in your team, in your company, or elsewhere. We didn’t do this in the Net Set team. We tried to hold on to everybody, one small team against the world. Why wouldn’t you want to be a part of this one small team against the world? Ultimately, we lost people as a result.

I really like this quote as well. This is from Heidi Helfand. She's written a book called "Dynamic Reteaming," which is really good; there's a picture of it on the last slide, and I'd really recommend it. You should do all these things as your team changes, and your team will change, because your product is changing. If one person joins, or one person leaves, you've fundamentally changed that team. Anything you do, whether it's team building activities, charter activities, or resetting how and when you measure health in that team, you should do again when one person joins or when one person leaves.

This applies to leaders too. As I touched upon earlier, I left the Net Set team, and I found my own replacement. The activities I'd done around charters, around health, around check-ins and measuring individual health made it very easy to hand over to somebody new. Cake can play a big part in all of that, of course. But the cake has a dark side. We had this cake, ultimately, and everybody wanted a slice of it, but we were one small entrepreneurial team against the world, and we weren't going to let them have a slice. Ultimately, we were arrogant, we were hidden away, and we cared about ourselves. We were missing a lot of this as a result.

Trust

This is super important. Trust is the most important thing to build, in my opinion, in a high-performing team, both inside and outside of it. We had a problem with trust: other people didn't trust us. When the company founder left, we realized that none of the senior business leaders really trusted us either. They didn't get what we were about. I've thought a lot about trust since the Net Set team.

I have come to this: watch out for the signs. Your role as a leader of a team is to watch out for these signs, whether it's frustration, "I can't believe the Net Set team are doing this," or indifference, "What are the Net Set team doing? I don't care what the Net Set team are doing." That should be a good heuristic for you as a leader to say, "Hang on a minute, something's not right here. There's a trust problem." Or, "No, the Net Set team, why would I want to use something they've built? I didn't invent it." Or the worst one, "Well, of course, you can do it like that, because you're in the team that's special." I wish in those situations I'd talked to that person: "How can I get you involved? Whether directly or indirectly, how can I engage with you?" Not, "Well, of course, you can do it like that." "Yeah, I can. Thanks very much. Bye."

I wish I’d done more of this. Sharing. Much more sharing. Without visibility in sharing, there is no trust. As an aside, this is a really, really good blog post to read. I would very much recommend it. Trust itself is made up of many different things. I really like this. This is from a book called “The Trusted Advisor.” It’s called the trust equation. And it goes something like this. Credibility, i.e., how well do you actually know what you’re talking about? Plus reliability. Do you do what you say? Plus intimacy. Ultimately, how safe do people feel sharing with you? All divided by your apparent self-interests. How much do you care about yourself?

Let’s use an example. Let’s say you’re getting a builder into your house to quote on something, you want an extension to your house. You might do some research on them. You might get some referrals from somebody else to establish whether they’re credible. Do they turn up on time to give you the quote? Maybe they’re reliable. As they’re quoting, and as you’re talking through with them what you want done to your house, do you feel like they really get what you want, and they really understand it, and they really want to build you the best extension to your house that they can? Or do you get interrupted three times, they take other calls for other quotes, and they appear massively self-interested? I found this is a really good way of just sense checking trust in different situations.

This applies in a couple of cases. Let's look at trust within the Net Set team. Credibility: high. We talked a good game, we knew what we were doing, and we knew we knew what we were doing. Reliability to our founder: high. "Please launch it on my 50th birthday." "Click, there you go." Intimacy within the team: very high. We looked after ourselves and we understood each other. Our apparent self-interest was low because we cared about our product and we cared about each other. Our trust in that situation, I would say, was pretty darn high.

What about outside the team? Let's say to the top dogs [inaudible 00:37:17] in the company. Credibility, I'd say, was around about medium. They knew we had some good devs, so we must be doing something good. Our reliability wasn't very good, because we weren't delivering anything for them and we weren't really making any money for the company. Yes, we launched, but I'm not sure whether they even got an invite to the party. Intimacy was low: we were arrogant, we were hidden away. Our self-interest was massively high. Where we relied, for example, on APIs written by other parts of the company, we'd spend more of our time complaining that the responses were slow than working with the team to figure out how they could be made faster. Ultimately, trust outside of our team was pretty low. Always focus in both directions: focus out and focus in when you're thinking about trust.
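The trust equation applied in both directions here is simple arithmetic, so it can be written down directly. A minimal sketch; the 1-to-10 scores below are invented purely to echo the inside-versus-outside assessments above, not measurements from the talk:

```python
def trust(credibility, reliability, intimacy, self_interest):
    # The trust equation from "The Trusted Advisor":
    # (credibility + reliability + intimacy) / apparent self-interest.
    # The three numerator terms build trust; self-interest divides it down.
    return (credibility + reliability + intimacy) / self_interest

# Invented 1-10 scores echoing the two assessments: high on everything
# inside the team, but self-interested and distant seen from outside it.
inside_team = trust(credibility=9, reliability=9, intimacy=9, self_interest=2)
outside_team = trust(credibility=5, reliability=3, intimacy=2, self_interest=9)
```

However you score it, a high apparent self-interest drags the whole quotient down, which is exactly the outside-the-team story.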

We focused in far too much, and that got us to our launch, but after that we realized we'd made a mistake; by that point, I'd argue, it was too late. What could we have done to focus out more? What could you do in these situations? Firstly, we had an API. Why not open up that API to the rest of the company? Why not have some sort of shared hack day, for example, where you can work on solutions built on top of that API, and talk in a language that everyone wants to talk in and understands? We could have opened up our team more; we could have shared more through blogs, through newsletters, through all-hands meetings, perhaps a little less arrogantly. We could have helped others to discover and engage better. Shared user testing and beta testing groups are a great example of this: if you've got something new and you want people to use it, get people within the company excited about it. That will build trust too. It's all about alignment rather than ignoring. I mentioned the change-advisory boards earlier; I should have gone to the change-advisory boards. That was important, and I didn't. It's all about engendering trust, rather than jealousy.

If you do one thing, do these things: build credibility, reliability, and intimacy, and reduce apparent self-interest. Ultimately, this will increase trust, and trust is what you want.

And If All Else Fails, Keep Building High Performing Teams Anyway

You may be in a situation right now where you're saying, "Yes, I'd love to do that, but the rest of the company is actually a bit rubbish. I've got this high-performing team and it's really important. What do I do?" Firstly, don't give up. You've got a high-performing team; that's an awesome thing. Figure out how you can work around this. Maybe focus your team inwards, and as a manager, do the part of your job which is shielding them a bit. But be careful how much you shield. Don't build one small entrepreneurial team against the world, where the world is the rest of the company; it will come back and bite you. Ensure that alignment yourself, whether it's through goal setting or through how you communicate with the people within your team.

If people aren’t getting the support within your company, find them an external network, whether it’s through reciprocal brown bag sessions, for example, and lunch and learns, whether it’s through meetups, whether it’s through coming to stuff like this and promising not to talk to each other for two or three days, and meeting new people. All of that is really, really important. If all else fails, leave. Sometimes it’s better to go somewhere else and build a high-performing team somewhere else. I was part of a high-performing team. It was a short period of my life, it didn’t last forever. We made mistakes, just like other teams do. Those mistakes threaten the life of the team and the life of the product.

What would I do if I could do this all again? Firstly, I'd keep doing the good stuff, because the good stuff is really important. I'd recognize, and be more open about, the need for change. I'd reboot the team regularly. I'd watch out for those signs from other teams: the frustration, the ignorance, the not-invented-here. Trust: I'd build that trust by focusing out as much as I focused in, whether that's opening up the technology or opening up the team. I'd build trust, not jealousy. Then finally, if all else failed, I'd just get on with it anyway, frankly. What did I do? I left and joined a company where I can do all the good stuff that I've talked to you about, and I can do it across an entire organization. Maybe you could join us. If not, maybe you'd like lovely flowers, and you'd like to get them a little bit cheaper.

These are the two books that I mentioned as we went through. The slides, I’m sure, will get shared out. They’ve got the links to the different blog posts and stuff.

Questions and Answers

Participant 1: I run a team of about eight people [inaudible 00:43:31] locations and people are one of the things. I was quite pleased to see that it gelled quite a lot with what I do. Where does empowerment sit here? One of the things that I try to do with my teams is empower everyone. Did you come across any situations where you felt that if the team was empowered, then they wouldn't really need you to do all that shielding and everything? Where does that sit?

Janaway: If the team’s empowered, then great, frankly. You need to do effectively less of that. What you need to do if you’ve got a much empowered team and a disinterested everybody else is much more of that focusing out. I found that empowerment was needed. I needed to do a better job of empowering the engineers in my team to want to focus out. If anything, I did too good a job of shielding them, and I let them be shielded. If I could do it again, I wouldn’t do that. I would double down on it, making sure that they, as much as myself, were building that trust outside of the team.

Participant 2: You’ve talked about how you looked after your team or your teams. I was wondering if there were any similar pieces of advice you’ve either tried, or not tried, or advised for maybe dealing with your peer group, or managing up? Have you tried some of those techniques? Have any of them worked?

Janaway: Peer-group-wise, it's a lot of those focusing-out activities. It's about actually getting people interested, and trying to figure out what the hook is. For some of my peer group in this particular scenario, it was quite easy, because they were technology leaders in another part of the business and we were using some of their technology. It was about doing a decent job of showing that we cared about their solution. For example, we were pulling a bunch of products through a product feed through an API. When we were about to launch, I went out to that team and said, "Look, we're going to launch and this is the impact. We've done some load testing. Can we work with you on that?" It was about effectively building that trust.

The managing up piece is interesting. I think it depends very much on the organizational structure you're in. In this example, I had a boss who managed a number of disparate groups of technology solutions, rather than managing one product. He was very, very interested when we were about to launch, because it was the founder's pet project and the founder's 50th birthday: if I didn't do my job right and we didn't launch right, it wouldn't reflect well on him. A lot of the rest of the time, he wasn't quite so interested, which I guess is because we were doing things right and didn't need help. It was about keeping what we were doing on his radar, so he wouldn't completely forget about us.
