Zack Jackson
Article originally posted on InfoQ.
Transcript
Jackson: I'm Zack Jackson. I'm here to talk to you about Module Federation, a mechanism for distributing code at runtime. What is the motivation behind Module Federation? Why did we even build it? It mostly came from personal experience. Sharing code is not easy. Depending on your scale, it can even be unprofitable. The feedback loop for engineering is often quite laborious and slow. What I'm really looking for here is some system that allows parts of an application to be built and deployed independently. I'd like to make orchestration and self-formation simple and intuitive. I'd like the ability to share vendor code without the loss of individual flexibility. I don't really want to introduce performance hits, page reloads, or a lot of bandwidth overhead that would generally have a drawback on the user experience. Let's set the stage here. Module Federation is ideal for developing one or more applications involving multiple teams. This could be UI components, logical functionality, middleware, or even just server code or sideEffects. These parts can be shared and developed by independent teams. I want little to no coordination between different teams or domains.
The Problem
Let's understand what the problem is with the technologies we have today. Sharing code is a painful and laborious process. Existing options often have limitations that we do end up hitting. As an example, we have native ESM, but that requires preloading. It only works with ESM. It has a pretty high round trip time. The sharing mechanism is pretty inflexible. Native ESM usually depends on the ability to share code based on the asset's path. If you have multiple applications with different folder structures and asset path structures, the chances are you might not be able to leverage the reusability that ESM has. Overall, ESM is close, but I believe that it still needs some form of an orchestration layer. This is where the webpack runtime really comes into play. If we look at a single build, there are challenges there as well. It's slow. Any change that you make requires a full rebuild. That feedback loop and monolithic structure really does get in the way at scale. We do have something from previous versions of webpack, which was also a little bit hit and miss. That was the DLL and externals plugins. DLLs or externals also have a few drawbacks. There's a single point of failure. Everything has to be synchronously loaded upfront, regardless of whether you use it or not. It requires a lot of coordination and manual labor just to deal with. It also does not support something like multiple versions, which makes it very challenging to depend on a centralized, single-point-of-failure system, where you have to be very tactical about how you upgrade your dependency sets across multiple applications.
We Need a Scalable Tradeoff
What we are really looking for here is good build performance, something that has good web performance, and a solution for sharing dependencies. We need something that's a little easier and more intuitive in general. What we're looking for is something similar to npm meets Edge Side Includes, but without the overhead that comes with those approaches. The existing tradeoffs that we make without Module Federation usually reveal themselves in the form of operational overhead, complexity, or slowed development and release cycles, not to mention all the additional infrastructure you would need outside of your code base to stitch the application together. This is where Module Federation comes into focus. I believe that Module Federation, or at least the technology behind it, is an inevitability for the future of web technologies.
What Is Module Federation?
What exactly is Module Federation? The easiest way to describe it would be a similar concept to Apollo’s GraphQL Federation, but it’s applied to webpack modules for an environment-agnostic universal software distribution pattern. There’s a few terms that we often use when we talk about federated applications. The first one is something that we call the host. The host is considered the consuming webpack build. Usually, it’s the first one that’s initialized during an application load. We also have something called a remote. A remote refers to a separate build, where part of it is being consumed by a host. We have something called bidirectional hosts, which is a combination of a host and a remote application, where it can consume or be consumed, which would allow it to work as a standalone application for individual development environments, or allow it to work as a remote where parts of it can be consumed into other standalone web applications. The last one that we have is a newer term, and we usually refer to it as omnidirectional hosts. The idea behind an omnidirectional host is it’s a combination of all of the above. It’s a host that behaves both like a host and a remote at the same time, meaning when an omnidirectional host first boots, it is not aware if it is the host application or not. Omnidirectionality allows webpack to negotiate the dependency supply chain at runtime between everything connected to the federated network. Then determine which dependency is the best one to vend to the host itself, as well as share across the other remote applications too.
The Goals – What Are We Trying to Achieve?
What exactly are the goals of Module Federation? One, I would like dynamic sharing of the node modules at runtime with version support. I really want team autonomy, deploy independence, and I want to avoid something that I refer to as the double deploy. A double deploy usually is that process of if this was an npm package, you would have to apply the changes to the npm package. Publish that package. Go to the consuming repo, install the package update. Then open a pull request, and push or deploy that to some ephemeral environment to see your change. Hence why we call it a double deploy. You have to release two things in order to see one change. If you have more than just one application, this double deploy convention starts to get really out of hand. Imagine you had something like a header, and you had eight micro-frontends or independent applications, independent experiences but they all generally use the same navigation UI. I would first have to release a copy of the nav. I would have to open pull requests to each individual code base, and then create a merge train to merge each one independently. This is not a very scalable solution, especially if you’re trying to have a consistent experience across the applications. Synchronizing a package update everywhere all at once, is not easy to do. Another goal that I want is the ability to import feature code which is exposed from another team’s application. I want to be able to coordinate efficiently at runtime and not at build time. That is really where Module Federation stands out from most of the other approaches. With things such as DLLs or externals, it’s all coordinating this at build time. What I really want is the ability to dynamically coordinate dependency trees and code sharing at runtime.
In addition to those first set of goals, what do I actually need to make Module Federation something that’s viable? One is redundancy. I need to make sure that I have multiple backup copies that can vend any of their code to anyone else connected to the network. I would like the capability to create self-healing applications, where webpack has mechanisms that would allow me to automatically roll back to previous parts of the graph in the event of a failure. When designed well, it should be extremely hard to knock one of these applications offline. I also still want the ability to have versioned code on the dependencies and on the remotes themselves from another build. While versioning is great, there are also going to be times where I want the opposite, and I would like to always have the evergreen code where it’s always up to date, always have the latest copy on the next execution or invocation of that environment. Lastly, I’m really looking for a good developer experience. I want it to be easy to share code, and work in isolation without impacting performance, page reloads, or degrading the user’s experience.
In summary, what I’m really trying to build here is something that just works. With several approaches in the past, we often find any code sharing options that we come across to usually be limited to a specific environment, such as user land. If we want to try and apply some code sharing technique to another environment, we usually would have to have a separate mechanism in order to achieve some similar solution. What I’m really looking for here is distribution of software that works everywhere, works across any compute primitive in any environment, such as node, the browser, Electron, React Native. I’m looking for simplicity with little to no learning curve. I don’t want to have to learn a whole framework or be locked into framework specific patterns. I really want to leverage the known ways of working with code today.
How Simple Is It? (Configuring and Consuming a Federated Module)
The real question is, how simple is it? Let's take a look at configuring an app that's going to utilize and consume a federated module. These are two separate repositories, two separate applications. We have application A, and we have application B. Inside of there, I can see that application A has a remote referenced as application B. Application B is going to expose button and dropdown. I'm also going to opt in to sharing some additional dependencies, where some of them can support multiple versions, while others, such as react or react-dom, need to be singletons in order to ensure that we don't have any tearing of state or context between these individual applications.
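As a sketch, the configuration described above could look something like the following. The names, file paths, and port are assumptions for illustration, not taken from the talk; in a real `webpack.config.js` each options object would be passed to `new ModuleFederationPlugin(...)` from `require('webpack').container`.

```javascript
// Sketch of the federation options for the two builds described above.
// Application B (the remote): exposes Button and Dropdown, and opts in
// to sharing dependencies.
const appBFederationConfig = {
  name: 'app_b',
  filename: 'remoteEntry.js',
  exposes: {
    './Button': './src/Button',
    './Dropdown': './src/Dropdown',
  },
  shared: {
    lodash: {},                       // can tolerate multiple versions
    react: { singleton: true },       // singletons, to avoid tearing of
    'react-dom': { singleton: true }, // state or context between apps
  },
};

// Application A (the host): references application B as a remote.
const appAFederationConfig = {
  name: 'app_a',
  // The host resolves application B's remote entry at runtime.
  remotes: { app_b: 'app_b@http://localhost:3002/remoteEntry.js' },
  shared: {
    react: { singleton: true },
    'react-dom': { singleton: true },
  },
};
```

At runtime, webpack negotiates the `shared` sections of both builds and vends a single copy of each singleton to the whole federated graph.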
Now that you’ve seen what the configuration looks like, let’s see what the consumption of federated code would look like. What you’re going to see here is a code snippet from application A. To demonstrate how flexible Module Federation is, I’m consuming the button as a dynamic import from application B, but I’m going to consume the dropdown statically and synchronously from application B as well. As you can see in the JSX here, my dynamic import is using React.lazy wrapped around a suspense boundary, but my dropdown is just standard JSX. What’s really cool about this is you can require asynchronous code distributed and coordinated at runtime. I can do so in a synchronous or asynchronous manner.
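A sketch of what that consumption might look like in application A is below. The remote name and module paths are assumptions; note also that consuming a remote with a static, synchronous-looking import generally requires the host's entry point to be asynchronous (for example, a small bootstrap file loaded via `import('./bootstrap')`) so webpack can negotiate the shared dependencies first.

```jsx
// application A — consuming federated code from application B (sketch).
import React, { lazy, Suspense } from 'react';

// Static, synchronous-looking consumption: reads like any other import,
// but webpack resolves and loads it from application B at runtime.
import Dropdown from 'app_b/Dropdown';

// Dynamic, asynchronous consumption: code-split and lazily loaded.
const Button = lazy(() => import('app_b/Button'));

const App = () => (
  <div>
    <Suspense fallback={<span>Loading button...</span>}>
      <Button />
    </Suspense>
    <Dropdown options={['Small', 'Medium', 'Large']} />
  </div>
);

export default App;
```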
When to Use Federated Modules
Now that you know a little bit about Module Federation, when should we actually utilize the technology? There are a couple categories that it fits quite nicely into, one of them being global components. These could be your headers, footers, the general chrome or shell of your application. They're typically global, and they're a very good first candidate if you're looking to federate something. It's also very good for features that are owned by another team, authored by another team, or consumed by other teams' applications that are not strictly owned by the team providing that feature in the first place. It's also very useful for horizontal enablement. If we think about the platform team, your analytics, personalization, A/B tests, all of those types of things usually would require sticking JavaScript outside the scope of the internal application, which causes a lot of limitations, especially if we think about analytics or A/B tests. If I can't integrate as a first-class piece of software inside of the React tree, I'm usually having to overwrite the DOM with a very primitive A/B test that can't really hook into state, context, or any of the other React lifecycle hooks.
The last one would be systems migration. There's a couple different ways where you could see this show up based on the patterns that you're trying to use. One of them is the LOSA architecture, which stands for Lots of Small Applications. LOSA systems generally depend on mounting several small parts of an application onto independent DOM nodes, under their own React render trees, or whatever other framework you might be using. Module Federation is not specific to React. It's also very useful for standard micro-frontends in whatever way, shape, or form that you choose to design them. It also offers the opportunity for a polylithic architecture versus a monolithic architecture. A polylith is really a modular monolith, where pieces can be interchanged easily, but the application as a whole still behaves in a monolithic manner. It's just not deployed in a monolithic manner.
There’s also the good old-fashioned strangler pattern to get rid of a legacy system. There’s a couple ways where federation could be very handy. One way is you could federate your updated application code into the legacy system, and slowly strangle it out that way. The other approach could be that you make the legacy system federated, and you start building in the new modern platform that you’ve got. What you do is you import pieces of the legacy monolith as needed into your new development environment, which completely reverses the strangler pattern. Instead of taking new and putting it into old, we could take old and pull it into new, and strangle it out slowly that way. There’s also a very unique advantage here for multi-threaded computing, especially in Node.js. We have also been able to make this work with web workers in the browser. Since you have a federated application that’s exposing functionality inside of a worker on the browser or server, I could hand off any specific part of my application to another thread to handle processing seamlessly.
Component Level Ownership
In order to get the most out of Module Federation, we do need to design software in a slightly different way. Considering that this is a very different paradigm to the traditional ways we've always had to build and deploy software, it also requires some new thinking about how we actually build code that's designed to work in a runtime-orchestrated environment. Component level ownership really tries to establish a pattern of software that shifts as much responsibility to the component as possible. How this would usually show up is as something I refer to as a smart component, where instead of having a page container that does all the data fetching and all the business logic, and passes all of this data to dumb components, we reverse this process a little bit. We say, let's have smart components, and dumb pages. These smart components should be able to work in a near standalone manner, and they should also remain self-sustaining.
We also want to really focus on colocation. We want the code to be well organized, easy to understand, maintainable, and in general, reduce the fragility. When you have a smart page and dumb component, what generally ends up happening is, all of the page logic starts to get bunched up in that page container. This can introduce risks and challenges to scale. Because if you need something like let’s say, inventory, what exactly is driving the API call and the data transformations to retrieve inventory and supply it to a specific component? If it’s inside of the page, and the page is feeding data to several components, this could end up being pretty hard to untangle and actually understand what feeds what data? If somebody doesn’t fully understand the data flow, and they alter some piece of your query, or the shape of data, you could risk impacting the stability of your code base.
We really want loosely coupled code. This is even before Module Federation came along, loose coupling and modularity has always been something that’s been encouraged. With Module Federation, though, it really gives us a strong reason to use loosely coupled code and to build things more in that pattern. If things are loosely coupled, you could almost independently mount it. When this component is mounted or rendered, it would more or less just work, making it very portable, whether it’s Module Federation, npm, or just a monorepo with some symlinks. The loose coupling of the code means that it could fetch its own data, be self-sustaining, regardless of distribution patterns.
The last area that we really want to think about with component level ownership is the ownership portion. What are the ownership boundaries, and when do they apply? That is something you really want to choose wisely, because you need to understand where the scope of what one team owns ends, and where the scope of what another team owns begins. With clear ownership boundaries in place, it makes it a lot easier to maintain an application and split it up so that the responsibilities of components owned by certain teams are resilient, easily known, and don't have a lot of bindings or data dependencies associated with the page itself. The one thing that I would caution is, beware of granularity. Component level ownership is very nice, but you don't need to make it super granular. This is again where understanding the boundary of ownership is important. An example of being too granular would be making a title use component level ownership. There's no need for it to do that. Let's take something like PurchaseAttributes on a product page, where it's a decently sized feature, and it handles several responsibilities that may be owned by a different work stream. It might need to get inventory, sizes, colors, price, anything like that. It's very easy to draw those ownership boundaries or boxes around who owns what. That's really what I would suggest: try to break it up into what is a complete feature or zone on the page, and whose responsibility is that? Who owns that? Who works on it? That would be the primary place where you would want to implement something such as component level ownership.
Difference between MFE (Micro-Frontend), and Components
With component level ownership, a question starts to emerge about what's the difference between a micro-frontend and something that uses component level ownership, especially in a Module Federation world. A micro-frontend is a pretty loose term these days. It has meant several things as the years have gone by. A micro-frontend can be small, it can be large. It could be whole user flows. There's not really a good boundary on what exactly the scope of a micro-frontend is. With component level ownership, what we're looking for is almost a hybrid between a normal React component and a micro-frontend. The granularity here can be friend and foe. You don't want to federate everything or make everything a micro-frontend, it just doesn't make sense. A micro-frontend is usually mounted utilizing some form of serialized communication bus, or browser events, or network stitching to assemble these independent parts of an application. Federated components, on the other hand, can coexist in a single application tree. They could also be used for a traditional micro-frontend where you mount several pieces of an application onto independent DOM nodes. The key here is really that level of flexibility. If I want to hook into React context, and I want micro-frontend-like resilience, independence, and autonomy, component level ownership and Module Federation really marry the two together in ways that just haven't been possible in the past.
To go into a little bit more detail here, what federated components offer us in comparison is that you can pass functions. You can share context. You can inherit and compose from class-based components. It's designed to behave in a self-sustaining manner. It's modeled loosely on micro-frontend concepts with an effort to remove the drawbacks that would usually come with micro-frontends as we think about them today. What we do want from the micro-frontend concept in general is, if something breaks, we don't want it to crash the entire application. Component level ownership and Module Federation give us these types of capabilities. It can all exist in a single app tree, go through a single render pass, but it can avoid crashing the entire application in the event that one of these components fails for whatever reason. The components themselves can be self-sustaining, as I said, but we also don't want to lock other teams into something that they can't really plug into, or recompose in ways that might make sense. While we have this concept of a self-sustaining smart component, we also want to expose the base primitives that make up the smart component, which allow teams to utilize it through Module Federation in multiple different ways. The primitives that I tend to expose most often are the data element, whether it's a GraphQL fragment, a fetch call, or any other data fetching system, which I would treat as an independent export inside of the file. I would also still want access to the dumb component, where it doesn't fetch any data on its own. It just expects props to be sent to it, and it will render based on those props. Then, of course, I still want to export a smart component, because in the majority of use cases, teams aren't really going to be passing a whole lot of data to smart components that are owned by other teams.
Where to Use It
Where exactly should we use Module Federation? Knowing where to use it is just as important as knowing when, with some slight nuances between the two. The one big thing that always stands out to me is that exposing arbitrary code can lead to brittle systems. Leveraging federated modules along well-defined ownership boundaries is generally the safest bet. Federating modules should be strategic, and have patterns or contracts that are relatively standardized across an organization. I'm a big fan of Conway's Law. Conway's Law essentially states that a code base will more or less mirror the organizational structure of the company that produces it. That works fine most of the time, but if everything operates under a Conway's Law type structure, it starts to break down when you have shared components, horizontal components, or horizontal enablement teams, where the code base is unable to mimic the organizational structure of the business, because some of that software is used across multiple different teams yet still owned by an individual delivery team. Module Federation is extremely useful for being able to break out of Conway's Law when needed.
Example – Component Level Ownership (The Before)
Let’s see an example of component level ownership just to tie everything together here. This would be the before. What we would have is a product display page. It accepts some props that would come from a data fetch, or a container. We would pass some of that information into a component that we’re going to call PurchaseAttributes. This structure is very dependent on data supplied by the host system, which means that it could introduce fragility if PurchaseAttributes was, say, federated. If the data changes in any way, it could break the component, or if the component changes in any way and the data pipeline has not been updated inside of the host system, it could also break the component. What we want to really do here is try to limit the blast radius and surface area of what API we expose to consuming teams.
Example – Component Level Ownership (The After)
The after, when we’ve implemented something like component level ownership, would look more like this. The product page would get less information. Really, what we would want PurchaseAttributes to know is a very small API scope, such as, what’s the product ID, and maybe what’s the selected color of the product that was chosen. With something like that, it’s a lot harder to actually break a component, because that surface area and all of those data bindings are not really depending on the parent page. The parent page is dumb. All it does is provide some very simple hints to a smart component. The smart component can use those hints as instructions on what it should do.
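As a sketch, the shift in the component's contract might look like this (the prop names are assumptions for illustration):

```jsx
// Before: a smart page feeds a dumb PurchaseAttributes everything it
// needs. The host owns the data pipeline, so a change on either side
// can break the other.
<PurchaseAttributes
  sizes={product.sizes}
  colors={product.colors}
  inventory={inventory.byId[product.id]}
  price={pricing.price}
/>

// After: a dumb page passes only hints. The smart component fetches
// and owns its own data behind this tiny surface area.
<PurchaseAttributes productId={product.id} selectedColor={selectedColor} />
```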
What Does a CLO (Component Level Ownership) Look Like?
If we drill in one level, let's see what component level ownership actually looks like on a component itself. In here, you can see I've got the three export rule in place. The first thing that I expose here is the dataUtility. It accepts one argument, which is the product ID, and it can go out, fetch the product data, and return it as JSON. The second thing that I'm going to expose here is the actual PurchaseAttributes component itself, but the dumb component. What that one expects is that you give it props, and it will render. The nice thing about having these split up is that maintainability becomes quite easy. If you want to know how PurchaseAttributes works, all you need to do is go to the PurchaseAttributes component, and you can easily find what feeds it data, how it works, and what it accepts. The last thing that I would export out of here would be the smart component, which is really just a combination of the dataUtility and the dumb component together. Now what I end up with is the smart PurchaseAttributes component. The only contract that I have with the host system is that I expect it to receive a prop that has the product ID in it. If they send me that prop, which I would consider a hint, this component, server or client side, will fetch its own data and pass that data into the dumb component.
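To make the three export rule concrete, here is a minimal, framework-agnostic sketch. The endpoint, prop names, and string render output are assumptions; in a real application the two components would be React components, and all three functions would be the module's exports.

```javascript
// PurchaseAttributes.js — sketch of the "three export rule".

// 1. The data utility: given a product ID, fetch the product data and
//    return it as JSON. The fetcher is injectable so tests can stub it.
async function getPurchaseData(productId, fetcher = globalThis.fetch) {
  const res = await fetcher(`/api/products/${productId}`);
  return res.json();
}

// 2. The "dumb" component: no data fetching of its own; it simply
//    renders whatever props it is handed.
function PurchaseAttributesView({ sizes, colors, price }) {
  return `sizes=${sizes.join('|')} colors=${colors.join('|')} price=${price}`;
}

// 3. The smart component: composes the two. Its only contract with the
//    host is the productId "hint"; it fetches its own data, server or
//    client side, and feeds it to the dumb component.
async function PurchaseAttributes({ productId, fetcher }) {
  const data = await getPurchaseData(productId, fetcher);
  return PurchaseAttributesView(data);
}
```

A host that only knows the product ID can render the smart export; a host that already has the data in hand can reach for the dumb export instead.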
Federation + CLO
I want to take it one step further here. We've got component level ownership, which is really useful for any organizational structure, especially if we look forward at the future of React 18, where we're going to be getting native async data support out of the box. Component level ownership is geared toward the future, while still offering a solution for what we have today. If we combine Module Federation and component level ownership together, there's still one additional step that I usually prefer to put in place. I call this extra step the proxy module. The idea is that I don't want to expose arbitrary pieces of the code base for other applications to consume. That could become risky. It would also be harder to find what an application or team exposes that we can consume. With a proxy object or proxy module, what we're really going to do is import the actual base feature from some code base inside of that team's application, wrap it in a function, then export it back out, wrapped in this proxy function. What this lets me do is create very easy, intuitive ways to create contracts and tests, and to guarantee that the component adjustments or rewrite that I've done to PurchaseAttributes internally do not break the contract expected under the proxy export that would be exposed out of that team via Module Federation. If it does break that contract, because it is proxied out, we have the opportunity to take the existing contract and transform the data into the new shape that our PurchaseAttributes component might anticipate under the hood. It gives us a little bit more future proofing. It's also really useful, because now we have a very simple way to create local unit tests, where you can still test that PurchaseAttributes works, but you also have another file where you can test that if I supply the agreed upon contract to this proxy module, it will still render and return the functionality we are anticipating.
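A minimal sketch of such a proxy module might look like the following. The internal component is inlined here as a stand-in so the sketch is self-contained, and the legacy `color` to `selectedColor` adaptation is a hypothetical example of absorbing an internal contract change.

```javascript
// proxy-modules/PurchaseAttributes.js — sketch of a "proxy module".

// The real implementation, normally imported from inside the team's
// code base (inlined here as a stand-in).
function internalPurchaseAttributes({ productId, selectedColor }) {
  return { productId, selectedColor, rendered: true };
}

// The proxy: the only thing exposed through Module Federation. It pins
// the public contract, and is the one place to adapt an old contract to
// a new internal prop shape if the implementation changes underneath.
function PurchaseAttributesProxy(props) {
  // Hypothetical adaptation: accept a legacy `color` prop and map it to
  // the `selectedColor` the internal component now expects.
  const { color, selectedColor, ...rest } = props;
  return internalPurchaseAttributes({
    ...rest,
    selectedColor: selectedColor != null ? selectedColor : color,
  });
}
```

The unit tests for the contract then target `PurchaseAttributesProxy` directly: supply the agreed-upon props and assert that it still renders, independently of how the internals evolve.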
SSR and Hot Reloading Prod
Now that we've gone over Module Federation and component level ownership, let's tie this all back together. Next.js is a very popular framework. It's also not really good at micro-frontends. I think, in general, any server-side rendered application tends to be quite tricky to create a micro-frontend pattern out of. You either have to depend on stitching layers, or infrastructure, or ingresses to bounce the traffic to the right application. If you're trying to just change a component or something like that, you pretty much have to npm install it, because when you go granular enough, there's not really a good way to stitch an individual component into another application and server-side render it. The first thing that we're going to see is the consumer. I'm importing other components from an independent remote. This is the independent remote. Currently, it says, Hello QCon London. We're going to change this component. We're going to commit it. We're going to deploy this repo out to production independently. We can now see that it is in the process of building. I come back to the page, I still see my old content. I refresh it. I now have the new content in here. If you check it out, it was last deployed 13 minutes ago versus the independent remote, which was deployed one minute ago. If I look in the HTML of this checkout application, you can see the HTML is there with the updated code that got server-side and client-side rendered.
Why this is really powerful, and what you might not have really seen or noticed in that video, is that the homepage or landing page that you saw is not just a page with a header and homepage body content. That page doesn't actually exist in the checkout application itself. The header part comes from an independently deployed repo, and uses Module Federation to pull the navigation into this checkout application. The body content, or the homepage that was there, doesn't exist in the checkout app either. It's actually using Module Federation as well, and it's pulling that in at runtime through a software stream into this checkout application, performing the data fetch and the server or client side render and hydration as well. The content that we just changed, our little Hello QCon London component, is federated inside of the homepage content itself. Which means that the checkout application is using Module Federation to pull in the homepage body. The homepage body is using Module Federation to pull in another application's hello component. This also works with circular imports and nested remotes without sacrificing round trip time, causing hydration mismatches, or anything else.
In a traditional world, I would usually have to take the Hello component, package it on npm. Publish it to npm. Go to the home application. I would have to install the Hello component into the home application. Publish that to npm as well. Then go over to my checkout application or shell and install the latest copy of the homepage. It would require at least 20 minutes of work to actually get the change propagated through all the applications that we need to in order for it to show up the way that we just saw it. In a Module Federation world, however, it took 56 seconds to propagate across all applications at all layers without any other coordination required from anybody else. In short, it really gives us that just works feel.
Summary
This is really just the beginning. You can imagine what's going to come next, now that we have server-side rendering, client-side rendering, and React Native support. We even have WebAssembly support for polyglot orchestration at runtime. Some of the things that we're going to look at doing in the future is making the entire npm registry available via Module Federation. My ultimate goal is to try and create a world where we still have a resilient system that can be versioned, but does not require redeployment or software installations in order to retrieve updates.
Questions and Answers
Mezzalira: There are different ways that you can use Module Federation. What is the craziest implementation that you have seen for Module Federation? Conversely, which one is the most useful from your perspective?
Jackson: The craziest one that I've seen is using federation inside of cars to actually control the microcontrollers, so that when teams need to deploy updates to the cars themselves, to control the various onboard systems of the vehicles, they can do it at runtime. Whenever the car next starts the application, instead of having to do firmware updates, you just boot the system, and it automatically has the latest stuff. I've seen a similar thing done on popular game consoles that most of us probably own, where the Bluetooth controllers and all the services for running the game console use the same approach to avoid having to do firmware updates to your game consoles. The most useful scenario that I've come across is probably on the backend side, for something like hexagonal architectures. With federation working on Node.js, it offers a very different design pattern where we can do something like automatic rollbacks, or hot reloading of production servers without having to tear them down or have multiple instances of something out there. For hexagonal architecture on the backend, it's super helpful because I can create high density computing models. If I have a multi-threaded Lambda or container, I don't need to deploy each service out to another container; I could use federation to stream the code from these individual deploys into a single container. Once that container is under too much load, I can still shard them back out and just go over the network to another container, which can repeat the same process. You can dynamically expand and contract these systems. They're usually very fast because everything that you need would be in memory. If I can just pull the API into process and use in-memory communication, it's typically more resilient and faster than going over the network.
Mezzalira: Towards the end, you were sharing what are the opportunities and the future of Module Federation. Do you reckon that Module Federation will be implemented not only in the JavaScript world, like Java or other solution, maybe HTML even, or CSS? That is using the same logic, at least.
Jackson: One thing I didn't really cover well was that federation works with anything webpack can understand. It's not limited to just JavaScript. It can be images. It could be CSS, sideEffects, WebAssembly; pretty much anything that can be processed during the webpack build can be distributed and consumed in any way. What I have seen on the polyglot side is having something like a hexagonal backend that's Node orchestrated, so it's powered by federation, but through the port flow, we'll be pulling in the Java payments processing. As we need to process a payment, I can stream the Java application in there, execute Java code, then pass that to something in Python. It's definitely not limited to Java. It's only limited by what webpack can compile as a whole. Anything that you can import and successfully build, you can distribute however you want from federation to any other thing that can consume it.