
Presentation: Records and Sealed Types – Coming Soon to a JVM Near You!


Article originally posted on InfoQ.

Transcript

Evans: My name is Ben Evans, I use he/him pronouns. Because I work for a public company, I have to show you this slide. What it means is that despite the fact that I don’t work for Oracle and don’t even hold a committer role on OpenJDK, you shouldn’t take anything I’m about to say about the future of Java at all seriously, especially if you’re going to make financial decisions based upon it, which I’m sure many of you will.

For those of you who have not met me before, I’m principal engineer and architect for JVM technologies at New Relic based in sunny Barcelona, which is lovely. Before that, I co-founded a company called jClarity with the aforementioned Martijn Verburg, which was based out of the London Java Community originally, and last year, we sold it to Microsoft. That was quite an eventful few months. Before jClarity, I was chief architect for Listed Derivatives at Deutsche Bank, and before that, at Morgan Stanley, where I did a number of things, including the Google IPO.

I’m also known for some of my work in the community. I’m a Java Champion, if you know what that is. I’m a JavaOne Rock Star speaker, which I probably should rename given that JavaOne hasn’t existed for a number of years now, and younger people in the audience might not even know what it is, so I should probably change that part of my intro slide. I also served on the Java Community Process Executive Committee, which is the body that makes all your Java standards, for six years. Then, during my time living in London, I was part of the organizing team for the London Java Community and co-founded a project that you might have heard of called AdoptOpenJDK.

Why Enums?

I want to talk about something which I know is very dear to everyone’s hearts. I’m talking, of course, about enums. Now everyone’s looking at me, and they’re thinking, “Why on earth has Ben [Evans] just said the word enums six times in a row? We’re talking about records and sealed types, aren’t we?” Well, we are. Why enums? There’s a general principle and there’s a specific principle, and I’m going to talk about the general principle first.

The general principle, which is part of language evolution, is that patterns in one language become language features in the languages that follow. Think about that. A pattern, as we all know, especially in object-oriented languages, is the idea that you have a small group of classes that somehow operate to provide a reusable and recognizable language construct. My thesis is that, over time, languages, which obviously belong to a community of languages that existed beforehand, are able to take things which were encoded as patterns and turn them into part of the language core.

What do I mean? That seems like a pretty bold statement that I might have to justify. I mean things like vtables in C becoming the virtual keyword in C++. Hands up if we’ve got any C++ programmers in the room, anyone know some C++? A few people here know what I’m talking about, the idea of a table of function pointers turning into the virtual keyword. Of course, in Java, we’re taking this one step further. We don’t have a virtual keyword anymore. Virtual has become so much part of the language landscape that it actually disappears from view, which is the stage of evolution beyond the one that I’m talking about here. From a pattern to a language feature to being just so much part of the woodwork and so much part of the background that you cease to even see it as a feature, never mind the pattern anymore.

The iterator pattern in C++ becomes the Iterator interface in Java. If you have read the old Gang of Four patterns book, you’ll find patterns like iterator in there, and for younger people, you look at it with fresh eyes and you think, “Why are we talking about this? Why is this a pattern? It’s just a thing. It’s just a part of the library. It’s just become part of the landscape.” Of course, you knew I’d get back to enums. The enum approach in C++ is, of course, just a horrible hack over integers; in Java, there’s quite a bit more to it than that. That’s the general principle. I could list out a bunch of others, but I thought that those were the closest to the surface to try to, at least, provide some sort of anecdotal justification for the point that I’m trying to impress upon you here.

Then, of course, there’s actually the specifics about Java’s enums individually. Java enums are a restricted form of class. They have semantics that are defined by a pattern. That pattern we might call finitely many instances. Notice the way that I’ve framed this. The semantics are defined by a pattern. From the pattern comes compact syntax, and it’s that way around. The semantics are defined by the pattern, the syntax comes from the semantics.

Demo – Decompile an Enum

Now, let’s actually go to my first demo of the day. I have a simple Color enum, pretty much the simplest thing you can think of when doing an enum, and because I have to work with multiple JDKs, let’s just use Java 14. Now, let’s just run javac on it, and then use the javap tool to disassemble it.

What have we got? We can see that each of the individual names has been turned into a public static final field, each of the correct type. We have a thing called values, which hands back this field $VALUES, which has an array of Color in it. That’s actually a private field, so it doesn’t show up in this particular view, but we clone it to return it from the values method. There’s an ancient reason for why you do it that way, to do with serialization, but I shan’t trouble you with that. You also have a Color valueOf which you use to look these up. Notice that there’s a parent called java.lang.Enum up here, which every enum extends. Now, if you go and try and extend this class directly, the compiler won’t let you. Then we’ve got this static initializer block down here, which sets up and basically initializes each of the enum constants, which now exist as public static fields.
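
A minimal reconstruction of the sort of enum used in the demo; the exact source wasn’t shown in the transcript, so the constants here are assumptions:

```java
// Compiling with `javac Color.java` and disassembling with `javap -p Color`
// reveals the generated members described above: a public static final field
// per constant, values(), valueOf(String), the private $VALUES array, a
// static initializer, and java.lang.Enum as the superclass.
public enum Color {
    RED, GREEN, BLUE
}
```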

That’s a lot of generated code, a lot of stuff which has been provided from just that many characters of source. Decompiling it actually shows us that we already have language constructs in Java where stuff is being auto-generated for us by the compiler. When I come on to show you some things with records, in particular, this is not a new idea that we have the compiler doing significant work for us. Ok, just going to pause there. That probably is enough about enums for right now, although, spoiler, I’ll be coming back to talk about them in a bit.

Project Amber

Now, I’ll introduce something you might not have been aware of in OpenJDK, which is this thing called Project Amber. Project Amber is one of the projects which has been happening in OpenJDK to try to explore research directions, and there are several of these things out there. The one which you might have heard of more than Amber is Project Valhalla. Everyone’s heard about inline types and inline classes, and that’s kind of the big, bombastic, boil-the-ocean, change-the-entire-world project.

The goals of Project Amber are rather more modest, or so it, at first, appears. Project Amber is about trying to find smaller, in some sense, features which are about productivity and which are core to the Java language itself. There’s very little VM-level change necessary for these things. There’s a bit of stuff in the compiler, there’s a bit of stuff in the class file format, but very little in the type system and almost nothing in the VM. That means that you can try to deliver your large goal by breaking it apart into smaller pieces. If you build one feature on top of the other in the right way, you might get to some quite surprising places. There’s the old saying that you might grow a mighty oak from a tiny acorn, or in this case, a sequence of delivered tiny acorns which are part of a concerted set of deliveries to build up some much larger ideas. That’s Project Amber.

Why Records?

Let’s talk about records. What are they? This is by analogy with enums, and you’ll hopefully start to see now why I spent the first 10 to 15 minutes talking about enums. We want first-class support for modeling a data-only aggregate. There’s a pattern here. The pattern is defining some semantics for us. The pattern is called the state, the whole state, and nothing but the state, which sounds remarkably totalitarian, so maybe we should find a better name for it. We could call it the data carrier pattern. What it means is that there is a subset of classes, a reduced form of the class, where we want to make it clear that nothing really happens except that the instances of the class are totally and completely defined by the state that they carry, and they have no real further semantics or behavior other than that.

It turns out that this also closes a rather annoying gap in Java’s type system, which we’ve had since the very beginning. Then, notice that our next goal is to provide a language-level syntax for the pattern that we started from up here. Finally, notice the bit at the bottom of this list, and this is in descending order of importance: it reduces the class boilerplate. This is actually a Java example of a very important principle in programming language design.

Wadler’s Law

Wadler’s law captures this idea: the emotional intensity of debate on a language feature increases, so the debate gets more intense, more vicious, more unpleasant, the further down this list you go: semantics, then syntax, then lexical syntax, then comments. Given that we’re Java people, I might also put in a new entry here between lexical syntax and comments, multiline strings, a feature which is effectively trivial but which has attracted more debate than anything else that I can think of recently. Because I was around for the Java 7 days, I would also put strings in switch here as well.

Wadler’s point here is that the reason why this happens, the reason why people debate these things more and more as you go down the scale, is because there are more and more people who feel competent enough to have an opinion about them. At the very highest levels of the language, there are very few people who can coherently argue about what major language semantics should look like. Everybody has an opinion about what comments should look like. The other term for this is bikeshedding. The idea of bikeshedding really came from the operating system community, whereas, in the programming language community, we like to call it Wadler’s law instead.

Boilerplate

Because we want to talk about boilerplate, let’s talk about boilerplate. It’s nice and accessible, all the stuff you hate to type, all the things you don’t want to: the toStrings, the hashCode, the equals, the getters, the public constructors, the list goes on. What do you do? There are really two things you can do. You either get your IDE to generate them. Hands up if you do that. Keep your hand up if you have had a bug in production which caused an outage because you didn’t keep these things up to date by regenerating them when you knew you should have. Yes, I thought so. There always are. Then, you might think, “Let’s solve this problem in another way. Let’s have Lombok.” The laughs in the back of the room tell me that there are also people in here who’ve been bitten by Lombok. Lombok is really not a good solution for this, because it does some really terrible things in the way that it’s implemented. It’s extremely clever, but the older you get, and the more senior you hopefully get as a programmer, the less in love you are with clever solutions.

We need something new. If we’re going to get rid of the boilerplate, how are we going to do it? Let’s start with the Java Cashflow class, like this. We set it up, and notice how we’ve got this big public constructor. Notice that we’ve got an awful lot of repetition here. We have a currency parameter and a field called currency, so parameter, field, method, all of which are doing the same thing. They are referring to the same fundamental thing. What we have is a pattern. We are doing a lot of typing here to represent the pattern that this thing is just these things, and they are accessed like this, and there is no other way of doing it, and there are no more semantics other than those fields.
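
For readers following along without the slide, here is a hedged sketch of what a pre-records Cashflow class might look like; the transcript doesn’t reproduce the actual code, so these three components (currency, amount, date) are assumptions chosen to show the repetition:

```java
import java.time.LocalDate;
import java.util.Currency;
import java.util.Objects;

public final class Cashflow {
    private final Currency currency;
    private final double amount;
    private final LocalDate date;

    // Parameter names, field names, and accessor names all repeat each other
    public Cashflow(Currency currency, double amount, LocalDate date) {
        this.currency = currency;
        this.amount = amount;
        this.date = date;
    }

    public Currency currency() { return currency; }
    public double amount() { return amount; }
    public LocalDate date() { return date; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Cashflow)) return false;
        Cashflow other = (Cashflow) o;
        return Double.compare(amount, other.amount) == 0
                && currency.equals(other.currency)
                && date.equals(other.date);
    }

    @Override
    public int hashCode() {
        return Objects.hash(currency, amount, date);
    }

    @Override
    public String toString() {
        return "Cashflow[currency=" + currency
                + ", amount=" + amount + ", date=" + date + "]";
    }
}
```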

Fortunately, I found some help. What we get is this, legal Java 14 syntax in preview mode, and we have got rid of all of the boilerplate, but that’s not the important thing. Remember Wadler’s law: the important thing is what we’re saying, that those three things are everything which matters about one of these. It is nothing more than the sum of these parts, or maybe I should say the product of these parts. What are they?
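
The record form, using the same assumed components as the sketch above; in Java 14 this compiles only with preview features enabled (--enable-preview):

```java
import java.time.LocalDate;
import java.util.Currency;

// The canonical constructor, accessors, equals, hashCode, and toString
// are all generated by the compiler.
public record Cashflow(Currency currency, double amount, LocalDate date) { }
```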

What Could Records Be?

If we’re concerned with semantics, what are our semantics here? This is kind of backwards. I’m showing you the answer before I’ve really shown you the question. When Brian and the others on the expert group were designing this, this was the question they started with. In the design space, what are these things? What could a record be? They came up with really four possible alternatives. They could be boilerplate reduction. Yes, they could be, and for some people, that might have been the right answer, but it does kind of violate Wadler’s law principle. They could be Java Beans. Hands up if you were really hoping these were going to be Java Beans. Only a few people, or at least only a few people prepared to admit it in front of the room. Obviously, as you can probably guess from the way that I’ve set this up, they’re not Java Beans. They don’t have setters, and the naming convention for the getter methods on the previous slides, you may well have noticed, was not a Java Bean convention.

They could be these things called product types, as a form of algebraic data type. Or they could be something in the middle, a named tuple, which is kind of similar to a product type but it’s got a name. If you’ve been around Java for a while, you will know that, actually, we tend to like names in Java. We’re very sure that everything in our language, every type, has a name. Even if we can elide it or get rid of it or not have to talk about it explicitly, as we do with lambda expressions, buried deep in the heart of it, there actually is a name involved. You probably won’t be surprised if I tell you that, actually, records are named tuples. They have names. They are not structural typing. They emphasize the semantics above everything else. They emphasize the fact that it is simply a collection of fields and nothing more elaborate than that.

There is a tight binding between what you call the parameters versus what you call the fields versus what you call the accessor methods, and that’s just the way that we do it. This is not meant to be a complete replacement for everything that classes do. If this pattern doesn’t fit with what you’re doing, don’t use it. It’s there for those cases, which people believe are very common, where this is what you do want. One of the things that I found as I’ve been working with them is that it’s actually very natural. You get a sense quite early on when working and building with records, “This thing has other semantics. It’s not just a plain collection of fields. It, therefore, needs to be a fully-fledged class,” and that’s ok.

Demo – Working With Records

Let’s show the second demo. I started my career in finance, as you saw from my opening slide. At Deutsche Bank, before I was involved in Listed Derivatives, I was actually in foreign exchange. I secretly have a bit of a soft spot for FX applications. This is not to be taken seriously as an FX application. I would not put this into production. I’m not claiming that I would. It’s a soon-to-be open-sourced application which basically is just designed to show off Java 14 features as a reference application. Once it’s open-sourced, if you actually send bug requests and bug reports about how terrible my matching engine code is, I’ll be very happy to accept them.

The way it’s laid out: the project is called foxy, and under the foxy packages, we have a couple of things. We have the domain model, we have an engine, and then we have a very simple Jetty Handler, which is just there for status checking and making sure that Kubernetes doesn’t kill it and stuff like that. I’m going to focus, first of all, on the domain. I’ve got a bunch of enums: currency pairs, which are going to be what I’m going to trade. We’ve got some currency pair enums. This is actually a fully-fledged record. If you look just very carefully at the top left of the screen up here, this is the early access preview of IntelliJ. This is actually able to cope with previews and 14 and so forth.

I define my record to be an FXOrder, carrying the number of units, the currency pair I’m trading, the side, a price, sentAt, and a time to live, and because I want to do everything immutably, because records are immutable, I want to chain the order to say, “This one came from another order originally,” so that if I trade and something partly matches one order, I can link it back to the original order that was originally sent.

That’s the structure of my data. You’ll notice something quite interesting here, which is, down here, I’ve got some static factory methods, and I’ve found that this is a pattern for working with records that works really well: rather than having multiple constructors, and this is very much an emerging pattern, I’m quite happy to be shut down about this once we’ve actually got a bit of experience working with records, I find static factories and a single canonical constructor are actually a better way to work with these. In these, all that happens is they just basically fill in some parameters, because, of course, we don’t have default parameters in Java, and then they create new objects.

The thing I want to draw your attention to is this thing down here. Look, it just fits on one slide. In a class declaration, you say public class, name of class, curly braces, then you say constructor, and the parameters live on the constructor declaration. With records, our parameter declarations live on the record declaration itself. In the duality of syntax, the constructor does not need to repeat that list of parameters. This is why I think the canonical constructor approach works well, because I know what the parameters are; they’re the parameters of the record itself. If I want to do anything special in the constructor, I don’t have to repeat myself. I can actually just say, “This thing is the constructor, and I do all these things in it.” Basically, it’s very straightforward. It’s simply some state checking. I want to make sure that any time I create one of these order objects, it’s completely set up the way I want, and for any misbehaving input, like trying to pass nulls in for the enum types, we’ll just throw an IllegalArgumentException.

This is actually called a compact constructor, because it doesn’t have the full declaration and because I don’t actually have to say how to set all the parameters. I don’t need to. It’s taken as read. In these constructors, at the very end, if you’ve passed all the checking, all of these fields are set. That will be automatically generated by the compiler as well. The units, the currency pair, the side, all of the other parameters will be set invisibly down here. This, by the way, shows us one way in which these are better than structural or shape-based tuples. I can do this kind of error checking, because I actually have a constructor to hang it off. If all I was doing was taking things and putting them together into a round-bracketed tuple of multiple values, there would be nowhere for this to actually run. One of the key advantages of having the named approach is being able to do it this way.
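
Putting those pieces together, here is a hedged sketch of the FXOrder record as described; the component names, the stand-in enums, and the defaults in the factory are assumptions based on the talk, not the application’s actual code:

```java
import java.time.LocalDateTime;

enum CurrencyPair { EURUSD, GBPUSD }   // assumed stand-ins for the demo's enums
enum Side { BID, ASK }

public record FXOrder(int units,
                      CurrencyPair pair,
                      Side side,
                      double price,
                      LocalDateTime sentAt,
                      int ttl) {

    // Compact constructor: no parameter list and no field assignments.
    // The compiler assigns every component after these checks pass.
    public FXOrder {
        if (units < 1) {
            throw new IllegalArgumentException("units must be positive");
        }
        if (pair == null || side == null || sentAt == null) {
            throw new IllegalArgumentException("pair, side and sentAt must be provided");
        }
        if (ttl < 0) {
            throw new IllegalArgumentException("ttl must be non-negative");
        }
    }

    // Static factory instead of extra constructors: it fills in defaults
    // (Java has no default parameters) and delegates to the canonical constructor.
    public static FXOrder of(CurrencyPair pair, Side side, double price) {
        return new FXOrder(1, pair, side, price, LocalDateTime.now(), 1000);
    }
}
```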

That’s the FXOrder object. We’ve got a couple of others as well. I’ve got an FXResponse. I’ve got this sad little code comment up here which says, “This interface really wants to be a sealed interface, but there are no sealed types yet.” Instead, what I’ve had to do is have an interface called FXResponse. You can have records implement interfaces, but there is a design tension here: how much further down this route you want to go before this thing is genuinely a class is a separate question. In this case, I’m using FXResponse purely as extra type information, and if you notice, it’s actually a marker interface. I think that this pattern would be more common if we did have sealed types in play as well.

I’m going to show you one other thing which happens, which is actually not in the domain package; eventually, we have to get back to this class here. This class is what I’ve called a ClientsideManager. This basically is managing all the connections for an incoming protocol used in the financial industry called FIX, and what we’re going to do is connect to the main matching engine by a pair of blocking queues. We break open the message, we send it down as an FXOrder, we get a response back. Basically, the communication to the matching engine is done with queues for a bunch of different reasons, not least of which is that it helps with testing.

The thing I want to draw your attention to is this line here, because I’ve snuck in another new feature. This is also in Java 14 in preview mode. Do you see the syntax? if (response instanceof FXReject reject). This is combining two things at once. It’s combining an instanceof test and a variable declaration. I could change the code here. I could write something like this. That is what we would expect. As Java programmers, I think we’ve been steered away over the years from doing much with instanceof. It’s considered [inaudible 00:28:39]. Now, we can do this. It’s clean, there’s no unsightly cast, and it looks like a small language feature. It is a very small language feature, but sometimes small language features are where this stuff starts from. This is called an instanceof pattern. For those of you who speak other languages, maybe Scala, maybe Haskell, maybe some others, the word pattern has a different connotation here to how it’s typically used in Java. I don’t mean a software engineering pattern and I don’t mean a regular expression. I mean the other type of pattern. This is the first example we see of it. Just a little look to tease you there.
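
Side by side, the old idiom and the new pattern look roughly like this; FXResponse and FXReject are the demo’s types, while handleReject is a hypothetical handler standing in for the real logic:

```java
void onResponse(FXResponse response) {
    // Old style: test, then cast
    if (response instanceof FXReject) {
        FXReject reject = (FXReject) response;
        handleReject(reject);
    }

    // Java 14 preview: test and bind in one step, no cast
    if (response instanceof FXReject reject) {
        handleReject(reject);
    }
}
```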

Equals Invariant

A couple of other things I want to tell you about records, and then we should move on and talk about sealed types. The first of them is that records have an additional equals invariant. This is in addition to your standard equals/hashCode contract, but for records, you must also obey the following: if you take the individual components of the record and do a copy constructor based on them, basically, to say, new record made up of the same components as the old one, then the copy must be equal to the original in a .equals sense. If you think about it, that follows directly from the semantics: the state, the whole state, and nothing but the state. If we mean that, this must be true.
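
Stated as code, reusing the assumed component names from the FXOrder sketch earlier, the invariant looks like this:

```java
static void checkCopyInvariant(FXOrder order) {
    // Rebuild the record from its own components...
    FXOrder copy = new FXOrder(order.units(), order.pair(), order.side(),
                               order.price(), order.sentAt(), order.ttl());
    // ...and the copy must be equal to the original
    assert copy.equals(order);
}
```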

Record Serialization

There’s a second point which seems incredibly minor and trivial but is going to turn out to have a deep connection to the instanceof pattern we just met: serialization. There is a constraint on how records must be serialized. Serialization, as Brian [Goetz] tells us, is a second mechanism for construction: an invisible but public constructor and an invisible but public set of accessors for your internal state. The next line is “but not for records.” Records must be serialized and deserialized using the idea that they are simply the composition of their state components. You must obey that rule, and if you do anything else, bad things are going to happen to you.

Java Switch Expressions

One other minor point that I want to make before we start to look at sealed types, and again, this will become obvious as to why: let’s talk about enums. Specifically, let’s talk about Java switch expressions. Hands up if you’ve seen a Java switch expression before. Lots of people have. It’s quite nice. Personally, I would have much preferred it if we’d actually been able to call it something other than switch, but oh well, lost that one as well. The idea here is that now switch comes not only as a C-like statement form but also as an expression form that must return a value. You can do some nice things with it. You can have multiple labels which correspond to the same thing. Notice that, in here, what we’re using is an enum, DayOfWeek, from java.time. If it’s Saturday or Sunday, it’s false, and if it’s Monday to Friday, it’s true. I’m going to handwave bank holidays and anything like that away from this question.
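
A sketch of the sort of switch expression on the slide, using DayOfWeek from java.time (the method name is assumed):

```java
import java.time.DayOfWeek;

static boolean isWorkingDay(DayOfWeek day) {
    return switch (day) {
        case SATURDAY, SUNDAY -> false;
        case MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY -> true;
    };
}
```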

There’s something quite interesting about this. Can you see what’s not present here that you might expect in other kinds of switches? The default case is missing. Why is the default case missing? Because I’ve covered the entire space. The enums are implementing the pattern. The pattern is called finitely many instances. This means that the compiler can check this code and know that it is impossible for it not to go down one of the branches. Quick teaser question: what happens if, for some reason, I pass null into this?

Participant 1: Null pointer exception.

Evans: Null pointer exception, absolutely. This either throws an exception, or, if it doesn’t throw an exception, all cases are covered, and the compiler can check that and verify it at compile time.

Why Sealed Types?

Now, let’s talk about sealed types. Let’s remember one more time: enums are instances of classes. Enums are exhaustive. Two questions arise. First of all, if I have a Pet as an enum and I have two instances, cat and dog, well, what happens if I want multiple cats or multiple dogs? How can I have a way of saying that a Pet object either is a cat object or is a dog object? That’s a new type of OO construct. It’s not straightforward “has a” or “is a.” It’s IS-A A or B. That’s what the concept is. You may have seen this concept in other languages, in C#, in Scala, etc. It’s actually quite an old idea. Probably little of what I’m talking about is actually theoretically new, even if it’s mostly been surfaced in modern JVM languages like Scala and Kotlin.

Option 1 – The State Field

Returning to this, how do we do this? How do we represent this? How do we model this in existing Java syntax? Two options. Option one, the state field. You have a single class called Pet, which has a field of enum type that holds the “real” type. Effectively, we’re breaking the OO model for this. We are making the programmer keep track of the “type bits” by examining the field to say, “Is this a cat or is this a dog?” We’re moving something which is in the proper domain of the type system down into the programmer’s bookkeeping code, and that’s horrible. The second problem that you have is that you can’t have any type-specific functionality. The cat can’t purr. Your choice would be either cat doesn’t purr or dog does purr. You either superset all the functionality into the base class or you disallow it altogether. If that starts to sound like a nasty ORM mismatch problem, that’s because it essentially is the same thing. This is an ORM anti-pattern rewritten into pure Java code.
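
A sketch of the state-field approach, with assumed names; the type system no longer knows about cats and dogs, so the programmer has to:

```java
public class Pet {
    public enum Kind { CAT, DOG }

    private final Kind kind;   // the "type bits" live in an ordinary field

    public Pet(Kind kind) {
        this.kind = kind;
    }

    public void purr() {
        // Either every Pet can "purr", or we police it by hand:
        if (kind != Kind.CAT) {
            throw new UnsupportedOperationException("only cats purr");
        }
        // ...
    }
}
```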

Option 2 – Abstract Base

Option two, the abstract base. We could start with an abstract base, a Pet class, with a package-private constructor and two separate concrete subclasses within the same package, and only they can call the package-private constructor, so everything is fine, apart from the fact that this abstraction leaks. Outside of the JDK’s own packages, there’s no protection. Maybe modules help, but what do you do if you need this construct in one of the API packages of your module? It can still be defeated there. What about reflection? That doesn’t help either. Let’s see how we actually do solve this.
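
A sketch of the abstract-base approach, again with assumed names; the package-private constructor keeps subclassing inside the package, but anyone able to place a class in the same package defeats it:

```java
public abstract class Pet {
    Pet() { }   // package-private: only same-package classes can call this
}

final class Cat extends Pet { }
final class Dog extends Pet { }
```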

Solution – New OO Construct

We actually introduce officially supported sealed types. A sealed type is one that can be extended by a known list of types but no others. That’s enforced not just at the compiler level, but at the class file format and runtime level as well. There are different ways of thinking about them. In Java’s case, the way that we think about them, we believe that we should treat them properly as almost-final classes. They are a class which admits a known list of subtypes, but they’re really part of the finality mechanism rather than anything else. In Java as it stands, we have two options, open and closed. You’re final or you’re open for extension by basically anyone. This is a halfway house, a middle ground between the two.

We’ve got two new keywords, sealed and permits. I teased those in a comment in the records code earlier. That’s what they are. Notice that, just to build some bridges with other languages, these are also sometimes known as “union types” in other languages. Particularly, people talk about disjoint unions. Curiously, it’s actually tied into a Java language feature that already exists. Sometimes big things grow from small places. Does anyone know the one place in the Java language where we already have something that looks a bit like this or something that looks like a union type?

Participant 2: [inaudible 00:38:06]

Evans: That would technically be an intersection type, I think, so I see where you’re going with that. It’s related, but it’s not quite right.

Participant 3: Objects can be null.

Evans: Objects being null. No, the null type is special, so that’s not really an answer either. That’s a very interesting question, but it relates to things like how Kotlin handles nullability. I’ve run short of time, so I’m going to call it here. Multicatch. The multicatch expression: “this is one of those, or one of those, or one of those.” For those of you that speak other languages, notably Haskell, you can probably see why I’m shying away from calling these union types. It’s to do with the fact that the Java type system is single-rooted, at least for references.
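
Multicatch looks like this; riskyOperation and log are hypothetical placeholders, not anything from the talk:

```java
import java.io.IOException;
import java.sql.SQLException;

class MulticatchExample {
    void attempt() {
        try {
            riskyOperation();
        } catch (IOException | SQLException e) {
            // e is exactly one of the listed types; its static type is
            // their nearest common supertype
            log(e);
        }
    }

    void riskyOperation() throws IOException, SQLException { /* ... */ }

    void log(Exception e) { /* ... */ }
}
```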

There’s something else we need to make all of this work. Hands up if you’ve heard of these things. It’s not quite as well-known as I’d hoped. Java 11 Nestmates. Hands up if you’ve heard of Nestmates. Nobody, wow. Inner classes done correctly, fixing a design bug that goes all the way back to Java 1.1, actually making inner classes work the way that they were always intended to. With that, it’s demo time.

Demo – Sealed Types

Unfortunately, sealed types aren’t actually in Java 14 yet. Here’s one I made earlier: my own personal build of OpenJDK, in order to allow me to play with sealed types. Here’s what they look like: public abstract sealed class, permits Cat and Dog. This is a standard pattern, to have the base be abstract so that you don’t actually have to deal with the supertype case. You are simply dealing with the possible disjoint subcases. We have a name, we have an abstract speak method, and then we just have a very simple constructor. Now, we have a public final class. Yes, you do need to say final for all of these. I would much prefer it if this was final by default, but there are some particularly bizarre use cases where you want not all of the classes which extend this to be final. Why you would ever want that backdoor to break the sealed-ness, I don’t know, but there we go.

We have a constructor, and now we have a couple of things. We’ve got the implementation of speak, which is part of the base functionality, but now, crucially, I’ve got my own functionality. I can go and hunt mice, which is handy if you’re a cat. This is how it’s laid out. Let’s just show some compilation, javac. Something I realize: I didn’t actually show you the bytecodes of the compiled record earlier either, so let’s just show that too.
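
A hedged reconstruction of that demo, using the in-progress syntax as shown in the talk (the feature had not shipped yet, so details may differ in later previews, and each public class would live in its own file):

```java
public abstract sealed class Pet permits Cat, Dog {
    protected final String name;

    protected Pet(String name) { this.name = name; }

    public abstract String speak();
}

public final class Cat extends Pet {
    public Cat(String name) { super(name); }

    @Override
    public String speak() { return name + " says meow"; }

    // Subtype-specific behavior that the state-field approach couldn't allow
    public void huntMice() { /* ... */ }
}

public final class Dog extends Pet {
    public Dog(String name) { super(name); }

    @Override
    public String speak() { return name + " says woof"; }
}
```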

Quite an interesting thing to notice here is that the Cat class really shows no real sign of having been the subtype of the sealed class at all, and it really doesn’t need to. The actual sealing magic happens in the supertype. Now, you might notice something a bit weird down here with some invokedynamic, but don’t be confused by that. That’s simply the way that, in modern Java, things such as string concatenation and various other methods are now being built by invokedynamic factories. That basically is just the same as we do for lambdas, and you’re going to see it more and more. That dynamic stuff will be done using invokedynamic magic.

Let’s actually look at this sealed type case. Again, nothing really to see here. We actually need to dig in. See, right down at the bottom, there’s a new thing which says permitted subtypes. That means that for this class, when you try to compile anything new, whether you have the original Java file for this or whether you have a class-file compiled version of this, if you are not in this list, you will not get compiled. You will be rejected by the compiler. This solves the problems that we saw earlier, where with the package-private constructor, someone could still get around it. This also will prevent reflection as well. This really is a completely watertight mechanism. Not so much else to show in there.

Let’s just, very quickly, switch to Java 14 again. It’s going to look like this: Cashflow.class. Look, there’s a new class called java.lang.Record. Just as for enums, if you try to directly extend it, the compiler won’t let you. We have the public constructor, which has been automatically created and just basically does all of the things that you would expect a simple constructor to do. You have toString, hashCode, and equals, which are all provided for you by these invokedynamic factories that I was just talking about. Then you have the accessor methods, and that’s it. All of that boilerplate creation has been done in exactly the same way as it would be for enums.

The Path So Far

Here’s the path so far. These are some of the pieces we needed to put together. Nestmates: inner classes done right. Switch expressions, which are now actually standardized in Java 14. Records. Instanceof patterns. At some point, maybe 16, we’ll get sealed types as well. There are also some other pieces which are coming into play behind the scenes. After the instanceof pattern, a deconstruction pattern. We can now do things like think about how the instanceof pattern was written, where you tested something and then you declared a variable. Imagine doing that to a record. What do we know about records? Records are just the product of their parts. What about declaring a new variable of record type but destructuring it, in just the same way as we do in many other languages, back into its components?
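
A speculative sketch of where that leads, reusing the assumed FXOrder components from earlier; this syntax was still being designed at the time of the talk, so treat it as a direction rather than a shipped feature:

```java
static void describe(Object o) {
    // Test against the record type and bind its components in one step
    if (o instanceof FXOrder(int units, CurrencyPair pair, Side side,
                             double price, java.time.LocalDateTime sentAt, int ttl)) {
        System.out.println(units + " units of " + pair + " at " + price);
    }
}
```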

Conclusion

Those start to build towards a feature called pattern matching, not patterns as regular expressions, not patterns as language ideas, but the idea of being able to compose and decompose the structures of our objects on the fly. Hands up, have we got any Scala users? Think match expressions, with similar power, but implemented down in the runtime, down at VM level.

Records are about semantics. They implement a pattern. They’re not a general replacement. They’re very useful though. Sealed types are a new OO construct, and together, they make up this idea called algebraic data types. For people that come from pure functional languages like Haskell, there are restrictions. The Java type system can’t be modified to work exactly the same way that it does elsewhere, but this is our version of how algebraic data types will work. The big hope, and again, I work for a public company, nothing I say should be taken as gospel, and this is forward-looking: hopefully, I would think that all of this would be final by Java 17. One thing I should also make clear is that records have nothing to do with inline types. Inline classes and records are completely orthogonal and completely independent concepts. Records, sealed types, pattern matching by 17? Yes, I think that’s reasonable. I don’t know about inline classes.

You might want to grab a couple of URLs here. Brian [Goetz] was kind enough to do an article for us about records on InfoQ. There are also a couple of my pieces for Oracle’s “Java Magazine” about records and sealed types on the JVM.

What can you do to help? Try out the new Java 14, and I suppose 15 betas now, because 14 will be released in a couple of weeks’ time. Keep an eye out for the sealed types betas and deconstruction patterns. Please try and write some code using the new features, even if it’s just in research or innovation time, as we call it at New Relic, and give feedback. The sooner the better. I actually found that records were a fantastic feature to start using in code, and I hope to actually release that demo application I showed you as open source pretty soon.

Questions and Answers

Participant 4: Two questions. One, do records allow validation annotations on the parameters so that Java Beans validation, for example, can be triggered? Second, records are not Java Beans, but it is Java Beans that carry a lot of this boilerplate today, and this is why we use Lombok. Will records really be so beneficial to the current applications and the current libraries that we use for all the Java Beans stuff?

Evans: Ok, so two good questions. The first of which is, yes, you can use validation annotations on them. There’s some work to do to bridge this, because you want to think carefully about what that means and make sure you don’t bring in additional semantics. Secondly, it’s unfortunate that they can’t be easily retrofitted onto Java Beans’ capabilities, but the problem is you have other guarantees, like the copy constructor and the serialization, which you can’t definitely rule in or out. People do some messed up things with Java Beans, and people also use the Java Beans conventions when they have additional semantics beyond what is meant in Java Beans. Unfortunately, I think this is an example of a language feature where people are being cautious and just doing what they can with the first cut. Will someone come up with a clever bridging thing? I’m sure they will. For all that Lombok has surface-level advantages, the more I’ve used it, the more I’ve come to realize that, actually, it’s a bad solution. That’s not to say anything bad about the technical ability of the people that wrote it. They did the best they could with the design space they had, but it doesn’t make a good solution. The boilerplate, although it’s bad, in my opinion, is the lesser of two evils compared to what Lombok does to you.

Participant 5: I also have two questions. First, with regards to sealed type, do the permitted types have to be in the same package as in your example or is it just for brevity?

Evans: I’m pretty sure it was just for brevity. It’s a while since I wrote that code, but I think that as long as it’s a correct fully qualified name, you’re fine.

Participant 5: The second question, can records be generic?

Evans: No, they can’t. There are good reasons for that. I’ll tell you about it later if you want to know.

Participant 6: You gave a bit of a hint there, about permitted types not being final by default because there were good reasons for it. Could you tell us what those good reasons might be? That seems like quite a strong anti-pattern.

Evans: Not being final by default. I would have to point you to the appropriate place on the mailing list where Brian [Goetz] comes up with an example where it definitely would be problematic if they were final by default. This was part of the discussion about whether we should also have a keyword called non-final. The other proposal was to make them final by default, introduce a new hyphenated keyword saying non-final, and then basically allow it the other way around, so that they were final by default but you could specify non-final. I’d need to find the appropriate reference.

Participant 7: I’ll follow that pattern of two questions. How do records differ from Scala case classes? Can you implement interfaces?

Evans: Yes, I showed an example of implementing interfaces. The Scala case classes are a good mental model, I think. They’re similar in some ways. What will happen, in my opinion, given the way that we’ve seen this happen with other features in the Java language, is, I wouldn’t be at all surprised if in future versions, Scala 3.3 or 3.5, Scala case classes get retrofitted on top of records. That’s what happened with traits. Stateless traits just became interfaces with default methods. I think the same thing will happen here.



Podcast: Johanna Rothman & Mark Kilby on Their Book From Chaos to Successful Distributed Agile Teams


Article originally posted on InfoQ.

In this podcast recorded at Agile 2019, Shane Hastie, Lead Editor for Culture & Methods, spoke to Johanna Rothman & Mark Kilby about their book From Chaos to Successful Distributed Agile Teams.

Key Takeaways

  • There are important mindset shifts that are needed to help enable distributed teams to be effective
  • You can’t take practices and approaches that are designed for co-located teams and apply them to distributed teams without adapting them to the new context
  • Distributed teams need to identify and align on their hours of overlap
  • Transparency and experimentation are important for a distributed team to build their culture
  • Communication needs to include personal context, not just focusing on the work but get to know the people
  • Let the teams identify and evolve their own ways of working, do not impose it from above


Show Notes

  • 00:00 Shane: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. I’m at Agile 2019 and I’ve got the privilege of sitting down with Johanna Rothman and Mark Kilby. Johanna, Mark, welcome. Good to see you both.
  • 00:18 Johanna: It’s so nice to see you. Thank you.
  • 00:20 Mark: Wonderful. Thanks for having us here, Shane.
  • 00:22 Shane: Now, you two recently released a book From Chaos to Successful Distributed Agile Teams. Before we get deep into the book though, let’s take two steps back and, who are you and why are you talking about distributed teams?
  • 00:38 Mark: I’ll jump into that one first. So, I have been using agile practices since some of the first books came out, but I always had this problem that I could never be in the same place as my teams, which really caused me angst for many years.
  • 00:54 But then I realized there were some things I could do to try to make connections and try to help the teams inspect and adapt, and really to try to get the teams to connect with each other and their stakeholders, even if they were dispersed. I thought I was the ugly stepchild of agile for a long time, but then I realized, well, there are many people that are put in this situation where they cannot have a collocated team, and that’s when I started presenting about it and then bumped into my future, or my now current, co-author of the book, as we started comparing notes on these.
  • 01:28 Johanna: And you and I had actually collaborated on several workshops and presentations about geographically distributed agile teams in the past.
  • 01:38 And more and more of my clients were trying to collaborate over very long distances and very few hours. And I said, there has to be a better way to do this. So, at Agile 2017, was that when it was?
  • 01:55 Mark: The very last day of that conference. 
  • 01:57 Johanna: Well, we were walking to the last session and I said to Mark, “you see anything useful?” And he said, “not really.”
  • 02:07 I said, “you want to write a book?” He said, “yeah”.
  • 02:12 Mark: Actually I paused for half a millisecond because I’d heard so many terrible things about going through the book writing process, but with the opportunity to pair with Johanna and having worked with her, I took one of her writing classes and had a chance to collaborate on that.
  • 02:29 I went: this will be fun. And it has been. It’s been an absolute joy.
  • 02:33 Johanna: Yes. We have laughed.
  • 02:34 Mark: We laughed a lot while writing.
  • 02:37 Johanna: I think that we had perturbations, but I don’t think we ever cried during anything.
  • 02:42 Mark: No. Nothing to cry over.
  • 02:44 Johanna: Yes. We didn’t lose anything, so it was fine.
  • 02:47 Shane: And you did this as a distributed team.
  • 02:49 Johanna: We pair-wrote the entire book.
  • 02:51 Mark: Every line in that book was written together synchronously.
  • 02:54 Johanna: Yes.
  • 02:55 Shane: From different locations.
  • 02:56 Mark: Different locations and not always in the same time zone. So, we are both in the US East coast time zone normally, but sometimes Johanna travels, sometimes I travel. We basically used the principles of our book to write the book, right?
  • 03:12 Shane: What makes a successful distributed team?
  • 03:17 Mark: There’s essentially three mindsets that we first talk about. So, one is just being willing to experiment, and keeping to simple experiments. Not a huge one, not a bunch at the same time, but doing one simple experiment at a time to see: if we change this, would it make our collaboration a little easier? Can we connect with our stakeholders? Things like that.
  • 03:42 Johanna: And the next one is about communication and collaboration; that if you focus on how can we use our communication modalities, is that the right word, to collaborate better, we are much more likely to succeed.
  • 03:58 Mark: And then the last, and this is where we see many people get tripped up, is they try to take those collocated practices and apply them in distributed settings. Which is why some people are up at 2:00 AM or 3:00 AM for a standup call or planning, and it’s just ridiculous. What we did is we went back to the principles and said, okay, what are the principles telling us we need to have in place to deliver value, to collaborate effectively? To make sure that we are having a sustainable pace as a distributed team?
  • 04:30 And that’s why we went back to those principles and then evolved them into our eight principles for distributed teams.
  • 04:39 Shane: So can we just run through what are those eight principles.
  • 04:43 Johanna: Sure, the first one is the whole notion of establishing acceptable hours of overlap. If you do not have at least four hours of overlap, we don’t see how you can really use an agile approach; it’s just insane.
  • 04:58 You can be a team, you can work on a project, you can work on a product, but an agile approach, with all of the collaboration and the culture changes? We just don’t see it.
  • 05:09 Mark: And then the next one is transparency. So one of the traps that teams fall into is if you have distributed team members, everyone gets their little piece of work, their story, and then hides in their micro silo and does their thing.
  • 05:24 So instead it’s more about, are you being transparent about what’s happening within the team, but also are you being transparent about what’s happening across the teams? If you have a larger initiative or even more so across your organization, because if you understand more about what your customers are experiencing and that’s getting shared throughout the organization, each team can adapt appropriately.
  • 05:48 Johanna: And the third one is that business of continuous improvement, preferably with experiments. I am very big on experiments as opposed to “let’s try something”, though even “let’s try something” might have some value in it. But if you’re not experimenting and inspecting and adapting on a regular basis, we actually don’t think that you’re really an agile team; you’re not living up to the principles, the agile principles and the lean principles, because you’re not seeing the whole, per the lean principles.
  • 06:22 Mark: And so another principle is, can you create a resilient culture through taking a more holistic approach in how you communicate? So, in that case, it’s not just communicating about the work, but do your teams feel comfortable talking a little bit about their personal context and even their personal goals?
  • 06:43 Because then you have a sense of when they might be in or out or when they might be available. If there’s a family member that might be ill or something. Can the rest of the team adjust for that? Can they be resilient? But also in understanding each other’s personal goals, can we shift the work so that people can learn and achieve some of those goals?
  • 07:04 So maybe a backend developer wants to pick up some front end techniques?
  • 07:08 Johanna: Not okay. No, no,
  • 07:10 Mark: no, no, no, no, no. Well, we do that. We do that, we do that. And I know you disagree with that, but some of ours do, not necessarily to try to be a front end developer, but just to say, okay, I want to better understand what the front end developers are dealing with.
  • 07:25 Johanna: See, those are really nice people
  • 07:29 Mark: I work with very nice people
  • 07:30 Johanna: I know you do as an old back end developer. Yes. Okay.
  • 07:38 Mark: And we don’t agree on everything, as you see
  • 07:40 Johanna: It just kills me every time you say that. I know it’s true.
  • 07:43 Mark: I know. I get the eye roll, which your listeners are not going to see that.
  • 07:47 Johanna: Yes, but it’s true. I am eye rolling. One of the things I really like is assuming good intention as a principle, because I find that it’s so easy to misunderstand each other when we only use asynchronous communication, and that leads to all kinds of issues in the team and in the work. Do you want to do communication?
  • 08:08 Mark: Pervasive communication.
  • 08:10 So in that one, especially with distributed teams, you’ll get into situations where something important is transmitted once and then lost in the ether. So has there been an important pivot in some of the work of a product and did the product owner, product manager, whoever the leader was, just send it out one way and expect that everyone immediately understands it.
  • 08:33 Or in a distributed environment, you might have to send it out a few different ways. So in the organization I’m in, sometimes some of that important information starts off with a blog post from that senior executive. He will then, a couple of days later, have our normal all-hands meeting where he talks about it and allows for some questions, and then, realizing that in a distributed environment we do have a few introverts,
  • 08:58 he provides some easier channels through some chat channels, and he has some ask-me-anything sessions later in smaller groups where people can come back and ask him, okay, what is it about this change that we need to understand? So it’s providing those multiple ways for people to understand what is this important change and how does it impact our work.
  • 09:18 The flip side of that is taking it too far, where the important message is drowned out by thousands of pull requests and everything else. So watching for that as well.
  • 09:28 Johanna: Yes. The next principle I find so important is to create a real project rhythm, and this is not about iterations specifically. I often find that a distributed team might need a cadence of demos that might be different, not every two weeks, and if you don’t have enough hours of overlap, I’m not sure when you can say where the iteration starts and ends.
  • 09:53 Right? When does it really start and end? But if you have a cadence and you say, at 10:00 AM Eastern time, we will do this for the people who can be there and we’ll record it. And then at 5:00 PM Eastern time, sorry, I’m in Eastern, so I think in Eastern time, either we’ll do the same thing or we will do something different for the people who can then participate that way. And that allows you to have a cadence of retrospectives, of demos, of planning, however you do it, of whatever you need. And that’s what will help a team actually collaborate as a team.
  • 10:32 Mark: And then the last one is defaulting to collaboration because we see so many distributed team members, again, kind of getting in their micro silos.
  • 10:42 And how do you encourage more continuous collaboration, especially if you have those hours of overlap. So as an example, we recently formed a team where they’re all in the same time zone. So they decided to go with scrum, because that works well when you’re all in the same time zone, and they connect so well that they will do their five-to-ten-minute standup and then they’ll usually spend about 60 to 90 minutes after that basically doing mob programming.
  • 11:12 So they enjoy each other’s company. They enjoy solving problems together, and so they’ll warn any visitors that come into their standup: the standup will be short, but if you stay on for the whole time, you might be here for two hours. They have become known as the unplanned team, because product management has realized they collaborate so well this way that anything that pops up on the portfolio that was unexpected, they can usually route to this team, and they’ll take on the work and get it done quickly.
  • 11:43 Shane: Why is it the distributed teams seem to be becoming the norm for organizations today?
  • 11:50 Johanna: So there’s a couple of reasons. Years ago, a senior executive said to me, I want to be able to hire smart people anywhere they are in the world, and he’s right up to a point. The problem is if you hire one smart person in China, one smart person in India, one smart person in Israel, one smart person in Germany, one smart person in France, I could continue, but this is an actual team. Well, they call themselves a team, and then of course, all of the US time zones. So having people collaborate where they have expertise is really good, right? People might not want to move to where you are. You have plenty of stories of somebody who was really valuable whose spouse got moved somewhere, so they moved with the spouse. There are a lot of really good reasons. There are some bad reasons, and that is to save money on salary, because it almost never saves you money on salary. But people are smart all over the world. And why would we not want them as a part of our organizations?
  • 13:02 Mark: Part of that being smart is, if you are hiring smart people, then allow them to apply that in multiple ways.
  • 13:10 So my teams have a lot of choice in how they set up their meetings, set up their process. They have myself and a couple of other coaches in my organization to help them with that, but we tell them upfront, we’re going to get you the point where you own your process and that means you figure out your hours of overlap.
  • 13:31 So that you can tell when each other can be the most effective and be the most responsive to collaboration. And so with that, we tend to form smaller teams. So we go on the smaller side of the seven plus or minus two, because it’s much easier for a team of five to figure out those overlaps than a team of 12.
  • 13:52 Shane: We know in the agile environment the importance of individuals and interactions over processes and tools, but for distributed teams, tools become important.
  • 14:06 Mark: Yes, they become important, but they still should not drive our interactions. And there’s some tools that are out there that can be quite flexible in how they’re used and others that are very structured in how the collaboration should proceed.
  • 14:23 I tend to shy away from those highly structured ones because, as we have all seen, especially like in a retrospective, sometimes something will come up and you might need to take the retrospective on a 90 degree turn and you might have to go somewhere else. And a tool that kind of forces you down a path makes it very difficult.
  • 14:43 So as I mentioned in my talk yesterday, every team should have a tool box and they should be able to choose what tools they pull out of there to accomplish their work. Now, certainly there would probably be some standardization, like do you standardize on, you know, JIRA, Rally or things like that, but I try to have our teams have a collection of tools, especially for collaboration.
  • 15:09 Johanna: And by that you mean audio, video, right? Any meeting that does not include video as a default is probably not sufficient for a distributed team. And one more point about the boards: it’s fine if the organization decides this is our tool vendor. It’s not fine if they decide this is what every team’s board needs to look like.
  • 15:34 Mark: Yes, the teams have control of the boards.
  • 15:37 Johanna: They need to control the boards. Yes.
  • 15:39 Shane: You mentioned the importance of video. You mentioned the importance of flexibility in terms of the control of the boards. What are some of the other important tooling characteristics that people need to consider?
  • 15:53 Mark: So for many of our meeting tools, there is chat built into those tools. I don’t use those. Instead, I find every distributed team usually has some sort of dedicated team chat channel, so why not use that in a meeting to allow people to ask questions if they can’t break in on the conversation, or to let the rest of the team know, oh, I’m having issues, I’m trying to reconnect. But really, where I find the value is that there’s a lot of conversation that happens in the chat, and if that happens in a meeting tool, as soon as the meeting gets shut down, that chat goes away. Even if there’s a chat log, nobody ever saves it in a useful place, or nobody ever looks at it again.
  • 16:43 But if it’s in the normal team chat channel, if there was important dialogue that happened in chat, the team has that history where they normally talk. And so I always encourage teams: use that as your back channel. Another important thing, and this goes back to the toolbox concept, is that we try to provide a couple of different collaboration tools.
  • 17:05 So each team might have Zoom and Google Meet, so if one service goes down, they’re already in their back channel and can say, okay, we’re going to switch to this other meeting. Most of my teams do that in under 30 seconds, so they lose almost no time in a meeting if one tool goes down, because they can pull something else out of the toolbox and go right back into their meeting.
  • 17:25 It’s almost like getting bumped out of the conference room, you know? How do you find a new space to meet?
  • 17:30 Johanna: Yes. But you’re not wandering the halls,
  • 17:32 Mark: You’re not wandering the halls, looking for an admin.
  • 17:36 Shane: So, we’ve looked at the principles, the importance of tooling. What are some of the other tips and hints for success?
  • 17:43 Johanna: So, one of the things that a lot of new-to-agile teams don’t realize is that the more they collaborate as a team, the more they pair or swarm or mob, and the more they keep their work-in-progress limits very low, the faster their throughput. So one of the things we did, we have a whole chapter about avoiding the chaos, and we show a whole bunch of value stream maps.
  • 18:09 If a team could somehow look at their value stream and say, where does the work go, first of all, how much is above the line in the work part? How much is below the line in the wait part? And then think about how much flow efficiency we have. Are we working as resources, and I hate that word, or are we working as a team, in flow efficiency?
  • 18:33 Every organization is different. I’m not going to say you need to aim for this kind of a thing or that kind of thing. I will say that as an agile team, the more collaborative we are, the more we work in flow of some variety, where we keep our WIP low and we understand our value stream, the more likely we are to succeed.
  • 18:56 So the tip is: make a value stream map of your team’s last couple of features or stories, or whatever it was, and see what your cycle time is.
  • 19:06 Mark: And then how can you reduce those delays, which usually plague a distributed team.
  • 19:12 Johanna: Yes, so one of the things that I started to promote several years ago is that instead of starting from the product owner handing stories to the team, which is a horrible thing anyway, the team should collaborate with the product owner on what the stories are. So you actually have the conversation there. Once you understand what you’re going to do for the next bit of time, start the work from the East, right? The easternmost person should be the first person to touch that work, not the last person.
  • 19:44 So what I often see, at least in the US, is that most of the team is somewhere in the US, there might be a European person, and then there’s the lone tester in India. I feel for that person, because they almost always feel as if the work gets dumped on them. But instead, the tester can say, what do we need to do for ATDD?
  • 20:07 What do we need to do for possibly TDD or BDD, right, all of the DDs, the driven developments? If that person actually approaches the work first, we might actually have a really good idea of what to do for the development, what to do for the design, what to do for the user interface, all of this stuff, based on what that tester has done.
  • 20:33 Shane: And the inverse: what are the common mistakes that organizations make, and how do they avoid them?
  • 20:41 Mark: Well, we have a long list of traps in the book. Every chapter in the book has some traps and suggestions on how to either avoid or get out of those traps. There are many; maybe some from the leadership chapter will be good.
  • 21:00 Johanna: So one of the interesting things about an agile team, especially a distributed team, is that the leaders need to model the same behaviours as the team. So if you have leaders who do not have sufficient hours of overlap with their team, or with all the teams, how can they actually lead the teams? We don’t quite see that.
  • 21:23 And I see this a lot where the base of the corporation is in one country. I’m based in the US, so I see this a lot in US-based organizations: the leaders are supposedly responsible for serving all these teams all over the world, but they don’t have any hours of overlap. One thing we actually said in the book is that it’s important to get people together, face to face, at regular intervals, and the fewer hours of overlap, the more important it is to get people together. And that would be an ideal time, right? If you want to, as a leader, not just go to all of your teams, but bring them all together. You call those meetups.
  • 22:08 Mark: So we actually do several things here. We have our annual meetups where everyone in the product group comes together for essentially a week, especially with our international travellers, and this is run as an open space, so whoever shows up are definitely the right people, and they build the agenda each day around what are the problems we’re facing and what are the ideas. You’ll usually see a divergence of conversations on the first day, and then by the second day the ideas start to converge and a sort of prioritization emerges. This last time we had over 200 people, because it wasn’t just engineers; it was also support, and it was sales engineers.
  • 22:50 So we tried to get business representatives and marketing in there as well. Also, when we have larger initiatives that span multiple teams, we’ll bring them together. It might be that we rent a space, pull in some whiteboards or flip charts, and say, okay, we need to map out, you know, what are the ideas and how are we going to put this together. And they’ll put together a rough roadmap of that.
  • 23:14 And there’s actually a third concept, and this came from our senior VP, who said: if you are traveling in any area where you know people from the company are, you are welcome to take them out to lunch or dinner and send me the bill, so that no matter where you are, you can build connections with other people in the company.
  • 23:36 Johanna: So one of the things we talked about in our paper and in our workshops is that notion of breaking bread. It is so important to eat with people, because it’s such a social opportunity. I believe there’s the same thing in Fearless Change, right? If you want people to collaborate, having a meal together is a really good thing.
  • 23:59 Mark: Yes. Well, and for our meetups, it’s not only having a meal, but having social activities driven by the individuals. So, for instance, we have a big board game community within our organization, and so this year we provided them lots of board game tables, and the hotel staff actually got a little irritated because people didn’t want to quit until one in the morning, and it’s like, okay guys, we have an early start, we’ve got to shut this one down. But it strengthened the connections among all those people across multiple teams.
  • 24:29 And there’s business value in that, because those special interest groups, which are now completely run by individual team members, not leaders, build connections across the company, so that when you form new initiatives and new teams, people already know each other. They already know side interests and a little bit about family. So there’s a connection there already, and it’s been much easier to form new teams that way.
  • 24:53 Shane: Bring the people together and let them share the space and share food. Weird, it’s a very hippie kind of thing.
  • 25:03 Johanna: Especially since we’re here at the Agile 2019 conference, where we are sharing space and sharing food and building relationships that will then last throughout the year. Yes. Or hopefully for more than the year.
  • 25:19 Shane: Any advice for leaders or team members?
  • 25:23 Johanna: So I’ll take the leader part of it, at least for now, and feel free to chime in. I think it’s really important not to put too much structure on the teams. If you really want to make distributed agile teams work, you have to trust the people on the team.
  • 25:39 You have to tell them the results that you want, and the results are not velocity or any nonsense like that. Results are product results, or something that affects the customer in some way, right? Those kinds of results. And then provide enough tooling for the team to create their own agile environment and team environment, right?
  • 26:00 Don’t skimp on the tooling. Give everybody licenses, this is still a pet peeve of mine, and encourage people, especially if they have long commute times, to see if you can’t create an environment for them at home where they feel comfortable working, so that they have the option of extending their hours to something that feels reasonable for them without spending an hour every day commuting. We have seen team members where, if they didn’t have to commute for an hour or an hour and a half every day, they could actually have a lot more overlap with the rest of the team.
  • 26:39 Some of them live in areas where they do not have adequate Wi-Fi, right? Internet is just not going to be there; phone is not there. It’s only in the business centre that they have that. But anything you can do to help, even if you as a leader might think it’s a very significant capital cost, turns out to save you an enormous amount of money on the cost of delay and on the cycle time for everything that the team does.
  • 27:08 Mark: And for the team members, it’s actually not terribly different from the co-located teams. I often say, I realize you have expertise in this technical area or several technical areas, but if you don’t try to understand a little bit about what the business is trying to accomplish, what you build may be completely useless.
  • 27:27 Don’t fall into that trap, right? And even more so with distributed teams, if they don’t have those connections. So, if you’re a team member that doesn’t see your product owner very often, you might want to say something about that: hey, we’d love for you to join us in this meeting, or just come sit with us in the standup, and that way, if we have a couple of questions that day, we can bounce them off you. Or maybe that product owner can offer an ask-me-anything session every once in a while, which might be useful for the teams. We’ve done that as well.
  • 27:57 Shane: Really interesting stuff. What else is going on?
  • 28:00 Mark: Oh my. So, the book is available in print and electronic forms everywhere, but the next project will be an audio book, which a few people asked us about here at the conference, and Johanna has been poking me on that one.
  • 28:18 Johanna: Poor Mark. I’ve poked him a lot.
  • 28:21 Mark: But also, we realize we can’t fly everywhere to teach the material, so we’re putting together some online courses. There’s one available now; you can get to it in the resources section of Johanna’s site or my site. That first course is essentially the three mindset shifts, some of the basic principles, and team types. The second course, which we’ll be working on right after the conference, is outlined, but we haven’t started recording. It will focus more on the team and the team leaders, going deeper, and that’s not just video content and worksheets, but also some lab time with us.
  • 28:59 And then the third one will be for those leaders who are setting up the environments that support these distributed teams. And we’re considering a fourth, but we’ll see how the first couple do.
  • 29:10 Shane: And if our listeners want to continue the conversation, where do they find you?
  • 29:15 Mark: So they can certainly visit my website, markkilby.com, M-A-R-K-K-I-L-B-Y. There are blog posts there, articles, and pointers to all the talks and courses and everything. I’m on Twitter as mkilby, K-I-L-B-Y, and you can also find me on LinkedIn.
  • 29:35 Johanna: And I’m jrothman.com, because back when I got a URL, I did not need my first name. I’m on LinkedIn, I’m on Twitter, and everything is on my website. I have a ton of stuff.
  • 29:49 Shane: Johanna, Mark – thanks very much. It’s been great to catch up.
  • 29:51 Johanna: Thank you.
  • 29:52 Mark: Thanks, Shane.



NativeScript 6.3, 6.4, 6.5 Releases Add Svelte, WebAssembly, KotlinJS and Performance Improvements

MMS Founder
MMS Dylan Schiemann

Article originally posted on InfoQ. Visit InfoQ

The recent NativeScript 6.3, 6.4, and 6.5 releases add a wide range of new features to the framework for building native mobile apps with TypeScript or JavaScript. Highlights in these releases include performance improvements to CSS parsing and CLI commands, support for WebAssembly on Android, Svelte support, 3D view transformations, and experimental KotlinJS support.

By Dylan Schiemann



Data Science Community Reacts to COVID-19 Pandemic

MMS Founder
MMS Shay Palachy

Article originally posted on InfoQ. Visit InfoQ

The data-science, AI and machine-learning communities are publishing numerous data-oriented articles and blog posts on the COVID-19 pandemic in both industry and mainstream publications.

Content includes suggestions and grassroots efforts to provide access to data and utilize ML techniques to help deal with the crisis, and several Kaggle competitions have been organized to pose challenges based on COVID-19 data.

The situation has also drawn attention to AI and ML-powered companies developing products or services in the space of pandemic predictions, medical diagnosis and treatment discovery, with some hoping to see these companies play an impactful role in the ongoing crisis. Naturally, research teams and labs in academia are also reacting to the crisis, with rapid publication of research papers on the topic.

It is not only tech companies and academia, however, that attempt to apply data science techniques to the fight against the coronavirus. The US Centers for Disease Control and Prevention (CDC) is working with researchers at the machine-learning department of Carnegie Mellon University to forecast the spread of coronavirus. The team built a machine-learning model that processes data from several sources, such as flu-related Google searches, Twitter activity, and web traffic, to predict the spread of the virus.

The significant efforts made by the scientific community as a whole also offer a unique opportunity to the data science community. One such example is the effort to create the COVID-19 Open Research Dataset (CORD-19), an extensive machine-readable collection of coronavirus literature available for data and text mining, with over 29,000 articles. Requested by the White House Office of Science and Technology Policy, the dataset was created by researchers from the Allen Institute for AI, Chan Zuckerberg Initiative (CZI), Georgetown University’s Center for Security and Emerging Technology (CSET), Microsoft, and the National Library of Medicine (NLM) at the National Institutes of Health. Such a dataset enables, among other opportunities, the use of various data mining, automated knowledge discovery, and insight extraction techniques that might help the science community answer high-priority scientific questions related to COVID-19.

Amid the considerable coverage these efforts received in the tech media, some critical voices expressed concern over the scope of expectations and hype generated about the role the AI and data science communities can play in the COVID-19 crisis. It was pointed out that daily predictions by companies like BlueDot and Metabiota, which specialise in infectious disease outbreak prediction, did not surpass those made by human experts, and became significantly less accurate after the first two weeks. Furthermore, AI-powered efforts to automate and improve diagnosis and treatment discovery are months or even years away from playing a significant role in these processes, due to the small amount of data and the partial understanding of the disease. Finally, the astounding number of opinionated articles analysing the outbreak and the response to it, written by data scientists and ML practitioners with little to no background in epidemiology, has generated a significant backlash from professionals advocating for more responsible writing that avoids baseless speculation and conclusions.



DNSSEC Signing Potentially Interrupted by Coronavirus

MMS Founder
MMS Alex Blewitt

Article originally posted on InfoQ. Visit InfoQ

The internet is underpinned by DNS, which converts textual names like www.infoq.com to IP addresses that computers use for routing. Served over unencrypted and unverified communications, DNS is an easy target for network infiltrators to be able to silently redirect traffic to other hosts. To combat this, DNSSEC was created in RFCs 4033, 4034, and 4035.

DNSSEC still serves DNS over unencrypted communications, but adds a level of security by signing the content of the DNS zone, so that modifications can be detected. When a DNS zone is transferred between name servers, its signature can be checked to verify that the domain zone hasn’t been compromised.
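
For readers who want to see these signatures in practice, here is a minimal sketch (an illustration, not from the article): a Node.js script that shells out to the standard dig CLI, assuming dig is installed and that the configured resolver validates DNSSEC.

    // check-dnssec.ts -- query a record with DNSSEC data via the `dig` CLI
    import { execFile } from 'child_process';

    execFile('dig', ['+dnssec', 'www.infoq.com', 'A'], (err, stdout) => {
      if (err) throw err;
      const signed = /\bRRSIG\b/.test(stdout);            // an RRSIG record accompanies signed zone data
      const validated = /flags:[^;]*\bad\b/.test(stdout); // the `ad` flag means the resolver validated the answer
      console.log({ signed, validated });
    });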

These signatures have a root of trust, like HTTPS sites do; for DNSSEC, that root of trust is managed by ICANN. Like built-in browser roots, these are self-signed roots that underpin the content of the signatures used by the root name servers, which in turn provide signatures for the zones that they delegate.

As with all good cryptographic procedures, these root-level keys are regularly rotated, similar to the way that Let’s Encrypt has popularised auto-renewing HTTPS certificates. Since these keys are critical to the infrastructure of the internet, there is a ceremony involved in regenerating them, involving multiple people and key material held in locked safes, live-streamed to ensure that there is no compromise of data. These ceremonies happen every three months and thus require regular meetings at the key signing sites with people from different countries to ensure that DNSSEC continues to operate.

Unfortunately, travel has been heavily curtailed due to Coronavirus, and many countries, including the United States of America, have closed their borders to non-citizens. This means that the next signing ceremony, due before the end of June, is almost certainly not able to happen as planned. Although the keys generated at the last meeting in February are valid through to the end of June, keeping DNSSEC operating after June will require a change to the normal procedures.

The current plan is to use California-based ICANN staff, who would break into the security boxes holding the key material in order to allow DNSSEC to continue, as described on the APNIC blog:

Several options are on the table and input is being sought. The least desirable but, simultaneously, the most likely given the current situation, will be a ceremony using the part of the disaster recovery process where only California-based ICANN staff, and possibly a locksmith, go into the facility in Los Angeles and force their way into the security deposit boxes containing the necessary credentials (no safe drilling this time, as the set of ICANN staff that can forcibly open the safes are presumed to be on site) and perform the signing while everyone else watches attentively.

This is a variation on the standard practices, but in these ever-changing times, different processes may be required. If the event is live-streamed, then it may be acceptable to use this as a change to the standard procedure until such time as travel restrictions have lifted and the necessary quorum can assemble again. Although disaster preparation has been part of the standard practice, the idea that the United States of America would close its borders and prevent travel was not something that had been considered likely.

As we move closer to the end of June, it is likely that an agreement will be reached to allow DNSSEC to continue securely. InfoQ will follow up and cover the outcome of this change when it happens.



Ionic's Stencil Component Compiler Design Considerations — Adam Bradley at DotJS2019

MMS Founder
MMS Bruno Couriol

Article originally posted on InfoQ. Visit InfoQ

Adam Bradley, creator of StencilJS and co-creator of Ionic Framework, reviewed at dotJS 2019 the design and architecture that went into Stencil, a component compiler which generates framework-agnostic components.

Bradley first explained the rationale behind a component compiler. Within the same organization, a technology team could use Angular for a purchasing application, while the internal support team may use Vue for an expense report application, and yet another team may create a marketing PWA with React. While this may be locally satisfying, at the organization level, the duplication of effort resulting from implementing the same components in different frameworks is a source of inefficiency. Even companies using a design system benefit from the reusability of components only within the framework the design system was implemented for.

While this occurs less in small companies, large organizations with many teams face such costs and challenges deriving from the multiplicity of frameworks. The alternative of imposing a single framework on the whole organization is often not a possibility, and poses its own set of problems.

Ionic, as a mobile-focused UI library for the web featuring hundreds of components, faced similar challenges. Ionic first adopted the AngularJS component model when it started in 2013. As other frameworks appeared, including an entirely new Angular, the developer community consistently asked the Ionic team for flavors of Ionic catering to specific frameworks. However, adopting a given component model for Ionic meant that any bug fix or design improvement would only be available for that component model’s framework, thus forcing the team to duplicate work across frameworks. The situation would likely repeat itself in the future with the new frameworks inevitably poised to appear.

Ionic then decided on a common component model, the web components standard, which reuses existing, standard browser APIs that are not poised to change at any point in the future. Ionic also crafted a component compiler called Stencil, which operates purely at build time, is complementary to existing frameworks, and can compile a component description to several framework component models. Bradley emphasized:

Stencil is a build-time tool. It is not a framework. The problem Stencil is solving for Ionic is allowing our components to work in many of today’s frameworks and hopefully tomorrow’s too. It is a tool that helps us generate and maintain reusable components.

Bradley then went into more detail about the design goals and technological choices made for the compiler. Stencil uses TypeScript custom transformers to statically analyze Stencil component source code and generate the compiled code according to the chosen target component model.
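
For illustration, a minimal Stencil component source might look like the sketch below; the tag name and prop are invented for this example, but the decorators and JSX follow the documented @stencil/core authoring model.

    // hello-world.tsx -- a minimal Stencil component sketch
    import { Component, Prop, h } from '@stencil/core';

    @Component({
      tag: 'hello-world', // the custom element name the compiler registers
      shadow: true,       // render into shadow DOM
    })
    export class HelloWorld {
      @Prop() name = 'World'; // exposed as an element property/attribute

      render() {
        return <p>Hello, {this.name}!</p>;
      }
    }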

The compiler applies many optimizations in order to generate lean, performant code. Bradley gave one example of minification in which functions are converted to arrow functions when possible, removing the function and return keywords. The output is then reduced to a constant being assigned to an arrow function, which significantly reduces the size of the code.
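
A sketch of that transformation, with illustrative input and output rather than actual compiler output:

    // before: the function and return keywords add bytes
    function greet(name: string) {
      return 'Hello, ' + name;
    }

    // after: an arrow function with an implicit return
    const greet = (name: string) => 'Hello, ' + name;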

With such aggressive optimizations, the generated code for a <hello-world /> component is only 87 bytes. The TodoMVC application, which is commonly used for benchmarking front-end frameworks, amounts to 2.2KB. For additional technical details on optimizations performed by the compiler (including native module preloading), readers can consult the video of the talk.

The Stencil compiler can output, from the same source, any of the following and more: lazy-loading components, legacy ES5 and SystemJS components for IE11 support, a bundled custom-elements library, framework bindings, and a NodeJS hydrating script supporting server-side rendering and pre-rendering. As a primary example, Ionic provides genuine Angular components to Angular developers and genuine React components to React developers, from the same source code.

The full talk is available on the dotJS 2019 website and contains additional source code examples and technical details of interest. dotJS is a JavaScript conference and one of the seven conferences in the dotConferences series. dotJS 2019 took place in Paris in December 2019.



Adobe Open-Sources Adaptive, Accessible Color Palettes Generator

MMS Founder
MMS Bruno Couriol

Article originally posted on InfoQ. Visit InfoQ

Nate Baldwin, designer for Adobe’s design system Spectrum, released the first major version of Leonardo, an open-source color generator. Leonardo strives to enhance designer productivity and end-user experience by automating the creation of accessible, adaptive color systems using contrast-ratio based generated colors. Leonardo also supports full theme generation and is intended for both designers and engineers.

Leonardo consists of a JavaScript module (@adobe/leonardo-contrast-colors) and a web interface that aids in creating color palette configurations, which can be shared with both designers and engineers.

Leonardo creates adaptive color palettes, based on target contrast ratios. The following image from the web interface shows a color scale being generated according to a list of contrast ratios ranging from 2 to 6, together with the generating code (bottom right):

Browser window showing Leonardo web app. Color and target contrast ratio inputs showing a generated color scale

By default, Leonardo proposes two contrast ratios (3 and 4.5), which are those recommended by the Web Content Accessibility Guidelines 2.0 (Level AA) for large text (3:1) and normal text (4.5:1). The interface allows users to view the generated colors against a combination of background contexts.
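
For reference, WCAG 2.0 derives the contrast ratio from the relative luminance of the two colors. The helper below is an illustrative TypeScript sketch of that formula, not part of Leonardo’s API.

    // WCAG 2.0 contrast ratio: (L1 + 0.05) / (L2 + 0.05), where L1 >= L2
    function luminance(hex: string): number {
      const linear = (c: number) => {
        const s = c / 255;
        return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
      };
      const n = parseInt(hex.slice(1), 16);
      return 0.2126 * linear(n >> 16) + 0.7152 * linear((n >> 8) & 0xff) + 0.0722 * linear(n & 0xff);
    }

    function contrastRatio(a: string, b: string): number {
      const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
      return (hi + 0.05) / (lo + 0.05);
    }

    console.log(contrastRatio('#767676', '#ffffff').toFixed(2)); // ~4.54, just above the AA floor for normal text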

With Leonardo 1.0, designers may also build multiple contrast-based color palettes at once (e.g. themes), with each output color being based on its contrast with a shared background. Adobe provides the following example of an adaptive theme, based on Adobe Spectrum’s colors:

Leonardo interface for themes

The interface shown above allows designers to adjust a theme’s brightness, contrast, and base color. Designers may simulate what a theme would look like under a variety of color vision deficiencies (CVD). Designers may also work in the CIECAM02 or Lightness-Hue-Chroma (LCH) color spaces, which are designed to align with human perception of color.

Leonardo’s JavaScript module exposes an API to developers with three main functions: generateContrastColors, generateBaseScale, generateAdaptiveTheme. The API allows developers to build applications that are both inclusive and adaptive by letting the user adjust key perceptual parameters (like brightness), and having the application’s palette change automatically while respecting the contrast ratios. Leonardo web interface provides an example of an accessible, adaptive calendar application, in which the user can set or remove a dark mode, and adjust the brightness of the interface:

Leonardo calendar application demo
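
A minimal usage sketch of the JavaScript module, using the option names documented in the project README; the key color and the logged output are illustrative.

    import { generateContrastColors } from '@adobe/leonardo-contrast-colors';

    // One output color per requested ratio, measured against the shared background.
    const colors = generateContrastColors({
      colorKeys: ['#0066cc'], // key color(s) the scale interpolates through
      base: '#ffffff',        // background the contrast ratios are computed against
      ratios: [3, 4.5],       // WCAG AA targets for large and normal text
      colorspace: 'CAM02',    // perceptual interpolation space
    });
    console.log(colors); // an array of hex colors, one per ratio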

Baldwin emphasized the importance of including accessibility concerns in design and how open-sourcing Leonardo fits that purpose:

After years of creating and maintaining accessible color systems, I’ve put my mind to solving this problem in a flexible, scalable, and rational way.
(…) Inclusive design affects us all, which is why it’s a high priority to make Leonardo an open-source project. We want it to be easier for everyone to create accessible color palettes, and enable products to put accessibility and inclusive design in the hands of their end-users.

As a matter of fact, the working update to the Web Content Accessibility Guidelines’ contrast standards (Project Silver) stresses the importance of personalization. Impairments to vision are complex, and empowering users to adjust their visual experience themselves, to their environment and constraints, makes for a better user experience.

Leonardo automates the back-and-forth audit-and-refine process involved in selecting colors. Designers and developers have enthusiastically received Leonardo. One designer reacted to the release on Twitter:

This looks fantastic! CIECAM02, target contrast levels, and manual key colors! Can’t wait to play with it more.

A web developer emphasized the possible integration with other tools and libraries:

Works well to generate @TailwindCSS / ChakraUI palettes !

Leonardo is an Adobe open source project, and is used to generate the color system for Spectrum, Adobe’s design system. Leonardo is available under the Apache 2.0 license. Contributions and feedback are welcome and may be provided via the GitHub project.



Google Announces Cloud AI Platform Pipelines to Simplify Machine Learning Development

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

In a recent blog post, Google announced the beta of Cloud AI Platform Pipelines, which provides users with a way to deploy robust, repeatable machine learning pipelines along with monitoring, auditing, version tracking, and reproducibility. 

With Cloud AI Pipelines, Google can help organizations adopt the practice of Machine Learning Operations, also known as MLOps – a term for applying DevOps practices to help users automate, manage, and audit ML workflows. Typically, these practices involve data preparation and analysis, training, evaluation, deployment, and more. 

Google product manager Anusha Ramesh and staff developer advocate Amy Unruh wrote in the blog post: 

When you’re just prototyping a machine learning (ML) model in a notebook, it can seem fairly straightforward. But when you need to start paying attention to the other pieces required to make an ML workflow sustainable and scalable, things become more complex.

Moreover, when complexity grows, building a repeatable and auditable process becomes more laborious.

Cloud AI Platform Pipelines – which runs on a Google Kubernetes Engine (GKE) Cluster and is accessible via the Cloud AI Platform dashboard – has two major parts: 

  • The infrastructure for deploying and running structured AI workflows integrated with GCP services such as BigQuery, Dataflow, AI Platform Training and Serving, Cloud Functions, and
  • The pipeline tools for building, debugging and sharing pipelines and components.

With Cloud AI Platform Pipelines, users can specify a pipeline using either the Kubeflow Pipelines (KFP) software development kit (SDK) or by customizing the TensorFlow Extended (TFX) pipeline template with the TFX SDK. The latter currently consists of libraries, components, and some binaries, and it is up to the developer to pick the right level of abstraction for the task at hand. Furthermore, the TFX SDK includes the ML Metadata (MLMD) library for recording and retrieving metadata associated with the workflows; this library can also run independently.

Google recommends using the KFP SDK for fully custom pipelines or pipelines that use prebuilt KFP components, and the TFX SDK and its templates for end-to-end ML pipelines based on TensorFlow. Google stated in the blog post that these two SDK experiences would merge over time. The SDK, in the end, compiles the pipeline and submits it to the Pipelines REST API; the AI Pipelines REST API server stores and schedules the pipeline for execution.

Argo, an open-source container-native workflow engine for orchestrating parallel jobs on Kubernetes, runs the pipelines; the platform includes additional microservices to record metadata, handle component IO, and schedule pipeline runs. The Argo workflow engine executes each pipeline on individual isolated pods in a GKE cluster, allowing each pipeline component to leverage Google Cloud services such as Dataflow, AI Platform Training and Prediction, BigQuery, and others. Furthermore, pipelines can contain steps that perform sizeable GPU and TPU computation in the cluster, directly leveraging features like autoscaling and node auto-provisioning.
 
Source: https://cloud.google.com/blog/products/ai-machine-learning/introducing-cloud-ai-platform-pipelines

AI Platform Pipelines runs include automatic metadata tracking using MLMD: the service logs the artifacts used in each pipeline step, the pipeline parameters, and the linkage across the input/output artifacts, as well as the pipeline steps that created and consumed them.

With Cloud AI Platform Pipelines, according to the blog post, customers will get:

  • Push-button installation via the Google Cloud Console
  • Enterprise features for running ML workloads, including pipeline versioning, automatic metadata tracking of artifacts and executions, Cloud Logging, visualization tools, and more 
  • Seamless integration with Google Cloud managed services like BigQuery, Dataflow, AI Platform Training and Serving, Cloud Functions, and many others 
  • Many prebuilt pipeline components (pipeline steps) for ML workflows, with easy construction of your own custom components

The support for Kubeflow will allow a straightforward migration to other cloud platforms, as a respondent on a Hacker News thread about Cloud AI Platform Pipelines stated:

Cloud AI Platform Pipelines appear to use Kubeflow Pipelines on the backend, which is open-source and runs on Kubernetes. The Kubeflow team has invested a lot of time on making it simple to deploy across a variety of public clouds, such as AWS, and Azure. If Google were to kill it, you could easily run it on any other hosted Kubernetes service.

The release of Cloud AI Platform Pipelines shows Google further expanding its Machine-Learning-as-a-Service (MLaaS) portfolio, which consists of several other ML-centric services such as Cloud AutoML, Kubeflow, and AI Platform Prediction. The expansion allows Google to further capitalize on the growing demand for ML-based cloud services, in a market which analysts expect to reach USD 8.48 billion by 2025, and to compete with other large public cloud vendors such as Amazon, which offers similar services like SageMaker, and Microsoft, with Azure Machine Learning.

Currently, Google plans to add more features to Cloud AI Platform Pipelines, including:

  • Easy cluster upgrades 
  • More templates for authoring ML workflows
  • More straightforward UI-based setup of off-cluster storage of backend data
  • Workload identity, to support transparent access to GCP services, and 
  • Multi-user isolation – allowing each person accessing the Pipelines cluster to control who can access their pipelines and other resources.

Lastly, more information on Google’s Cloud AI Platform Pipelines is available in the getting started documentation.



WebDriverIO Version 6 Release Adds Native Chrome DevTools Automation Protocol Support

MMS Founder
MMS Dylan Schiemann

Article originally posted on InfoQ. Visit InfoQ

The recent release of WebDriverIO version 6, a browser test automation framework for Node.js, adds Chrome DevTools protocol testing to its existing support for WebDriver and makes it easier to leverage tools like Puppeteer and Cypress.io.

When running tests via a local test script, developers no longer need to download a browser driver. WebdriverIO checks whether a browser driver is running and accessible, and uses Puppeteer as a fallback if not. The WebDriverIO API remains the same whether leveraging WebDriver or Puppeteer. Note, however, that Puppeteer support only works for running tests locally, when the browser exists on the same machine as the tests.
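
As a standalone-script sketch (capabilities abbreviated, target URL arbitrary), the same calls work whether the session is backed by a WebDriver-compliant driver or by Puppeteer over the DevTools protocol:

    import { remote } from 'webdriverio';

    (async () => {
      const browser = await remote({
        // automationProtocol: 'devtools', // uncomment to force the Puppeteer backend
        capabilities: { browserName: 'chrome' },
      });
      await browser.url('https://www.infoq.com');
      console.log(await browser.getTitle()); // the API is identical either way
      await browser.deleteSession();
    })();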

While WebDriver provides true cross-browser testing, Puppeteer currently supports Chromium-based browsers and Firefox nightly builds. The WebDriverIO team is following work on Playwright, which offers true cross-browser support. Currently, Playwright remains an integration challenge for projects like WebDriverIO, as Playwright requires custom browser builds for Firefox and Safari support.

And though WebDriverIO has added support for an alternative automation protocol, the project remains confident that eventually, a new WebDriver standard will emerge to reunify efforts in browser automation protocols.

WebDriverIO version 6 should be a straightforward upgrade for users of WebDriver IO version 5. A key breaking change is that Node.js version 8 is no longer supported, and users are encouraged to upgrade to Node.js version 12 at this time. The project also introduces a few breaking command-line interface changes. Users of TypeScript will receive notifications of which APIs have changed.

WebDriverIO adds several performance improvements with the version 6 release. Beyond the benefits of running Puppeteer locally, the project removed its dependency on request, switching to got, which shrinks the bundle size of WebDriverIO by 75%. Other internal improvements to the WebDriverIO 6 codebase result in faster test execution and lower CPU and memory consumption.

The WebDriverIO 6 release also introduces a new assertion library inspired by Jest’s expect package, with features including waiting for an expectation to succeed and built-in types for TypeScript and JS autocompletion.
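
A sketch of a testrunner spec using the new assertions; $, browser, and expect are globals provided by the WebDriverIO testrunner, and the selector and expected title are invented for this example:

    describe('home page', () => {
      it('shows the main navigation', async () => {
        await browser.url('/');
        // Both assertions retry until the expectation succeeds or a timeout elapses.
        await expect($('nav')).toBeDisplayed();
        await expect(browser).toHaveTitle('InfoQ');
      });
    });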

The WebDriverIO community has created a series of services and reporters that provide more test-integration and reporting flexibility.

Read the complete WebDriverIO 6 release article for more information on additional changes and improvements to the project.

The JavaScript ecosystem offers many options for testing in a variety of different approaches, though relatively few feature-complete packages for cross-browser functional testing. Another actively maintained alternative to WebDriverIO is Intern. Both Intern and WebDriverIO are open-source projects that are part of the OpenJS Foundation. Beyond these testing frameworks, many developers today choose to leverage Jest and Cypress.io and minimize their cross-browser testing. Many Angular developers continue to prefer Protractor + Karma for testing Angular applications. More coverage of the testing ecosystem is available in the JavaScript and Web Development InfoQ Trends Report.

WebDriverIO is open-source software available under the MIT license. Contributions are welcome via the WebDriverIO contribution guidelines and code of conduct.



OVHcloud’s Harbor Kubernetes Operator Becomes Part of CNCF’s goharbor Project

MMS Founder
MMS Hrishikesh Barua

Article originally posted on InfoQ. Visit InfoQ

OVHcloud released their Kubernetes operator for the Harbor container registry as open source under the CNCF’s goharbor project.

By Hrishikesh Barua
