
Presentation: Secure, Performant Platform Extensibility through WebAssembly

Saúl Cabrera

Article originally posted on InfoQ. Visit InfoQ

Transcript

Cabrera: My name is Saúl. I work on WebAssembly at Shopify. The title of this talk is Secure, Performant Platform Extensibility through WebAssembly. This talk is not a deep dive into a particular area. Instead, the objective is to highlight, in a more general way, the importance of server-side WebAssembly for executing untrusted code and achieving extensibility at a platform or application level. Shopify is an e-commerce platform. We want to make commerce better for everyone.

Extensibility

You might be asking yourself, where's the connection between an e-commerce platform and server-side WebAssembly? The answer to that question lies in a principle that is not directly related to e-commerce, but to software in general. That principle is extensibility. Extensibility is critical because no platform can provide all features out of the box, at least not in a way that suits everyone. Custom needs must be met through some form of extensibility. In e-commerce, this is especially relevant given how complex and broad the domain is. A platform might need many types of extensibility, UI extensibility, for example. In this talk, when we refer to extensibility, we mean server-side business logic extensibility.

Extensibility is not a new thing. It has been around for quite some time, basically since software has existed. What's interesting about it is not the concept itself, but how, in practical terms, platforms and applications achieve extensibility. There are many ways, but in most cases they depend on the needs of the business, the capabilities of the platform, and the type of value that each extensibility type brings to the platform or software being extended. In general, we can divide extensibility into two types: sync and async extensibility. Let's talk about async extensibility first. Two of the most common and well-known approaches to extensibility are APIs and webhooks, and both of them can be considered async extensibility. This type of extensibility is generally best suited for cases in which there isn't a strict constraint on execution time, or there isn't a business dependency on real-time execution of code. Then we have the other type, sync extensibility. This type is better suited when customization must be synchronous and extremely fast to bring any value to the platform. This is the type of extensibility that we're currently trying to solve at Shopify through WebAssembly.

Example – Checkout Flow

This might be a bit abstract, so let's analyze a more concrete example, the checkout flow. Checkout is one of Shopify's fundamental pieces. It basically encapsulates the notion of commerce: it's where the exchange of money for goods and services happens. Just from this definition, we can conclude that checkout is a very performance-sensitive context. Let's assume the following scenario: we want to apply a particular discount to a checkout when the current buyer is tagged as VIP. One important thing to consider about this scenario is that the conditions that determine whether the current buyer is a VIP are opaque to a platform like Shopify. Here's where synchronous extensibility comes in. The synchronous extension that we see in the diagram is responsible for determining if the current buyer is a VIP and calculating the discount for that buyer. This means that it needs to be secure and extremely fast. The responsibilities of this extension are critical to the buyer experience. If the execution is too slow, the buyer might just abandon the checkout. If the execution is erroneous, the buyer might get no discount at all, or a wrong discount.
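
To make the scenario concrete, here is a minimal sketch of what such an extension might look like as a pure function from checkout input to a discount result. The field names (buyer_tags, percentage) are illustrative, not Shopify's actual API.

```python
# Hypothetical synchronous discount extension: a pure function with no
# external calls, taking checkout data in and returning a discount out.
def discount_extension(checkout: dict) -> dict:
    """Return a discount decision if the current buyer is tagged as VIP."""
    if "VIP" in checkout.get("buyer_tags", []):
        return {"apply": True, "percentage": 10}
    return {"apply": False, "percentage": 0}
```

Because the function is pure, the platform can run it deterministically and within a strict time budget.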

WebAssembly

Before looking at how this process relates to WebAssembly, let's do a very quick recap of WebAssembly. WebAssembly was designed as a new binary format for the web, making it possible to bring other programming languages to browsers alongside JavaScript, so that makes it polyglot. More than that, WebAssembly was designed with very clear goals in mind. It must be fast: its execution should be near native. It must be secure: execution should take place in a memory-safe, sandboxed environment. WebAssembly's goals are generally applicable to most environments in which programs can be executed, and it quickly became clear that WebAssembly's potential was big outside the browser. I think, in general, WebAssembly is suited for most general computing purposes.

If we combine WebAssembly with the requirements of synchronous extensibility, we end up with the possibility of a polyglot, fast, and secure extensibility platform. The overall experience of Shopify's platform built on top of WebAssembly looks like this. Developers writing extensibility code compile their code ahead of time to WebAssembly. Then that code is deployed to our platform ahead of time. Once that code has been deployed, the execution of WebAssembly takes place at specific touchpoints, depending on what the extensibility code is doing. In the example before, that touchpoint is checkout. At checkout, that WebAssembly gets fetched and executed, and the results, if correct, are applied to the current checkout. It's important to note that the execution here is synchronous, and with very tight SLOs. In fact, most complex extension code should run in 5 milliseconds or less for our current use case.
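
As an illustration of that synchronous step, here is a toy sketch of a host invoking an extension under a 5 ms budget. A real host would enforce the limit inside the Wasm runtime (for example with fuel or epoch interruption), not with wall-clock checks like this; the names here are made up.

```python
import time

DEADLINE_SECONDS = 0.005  # the 5 ms SLO mentioned in the talk

def run_extension(extension, checkout):
    """Run an extension synchronously and reject results that miss the budget."""
    start = time.perf_counter()
    result = extension(checkout)
    elapsed = time.perf_counter() - start
    if elapsed > DEADLINE_SECONDS:
        raise TimeoutError("extension exceeded the 5 ms budget")
    return result
```

The key property is that the caller blocks on the result, so the budget directly bounds the latency added to checkout.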

High Level Programs Compiling to WebAssembly

We're going to dig into all of these touchpoints and go deeper on what's going on here: where WebAssembly comes in, what features of WebAssembly we're using, and what features we could be using given the future state of WebAssembly. The first obvious section is digging deeper into how the high-level programs that developers write compile to WebAssembly. There are several options to consider here. Depending on your target audience and the needs of the platform that you're working on, there are trade-offs to make. Generally, the options to compile to WebAssembly can be divided into three main categories, or at least that's how I divide them. The first one is languages that offer tier-1 support for WebAssembly, languages that have very good support for WebAssembly. You can think of C, C++, Rust, Go, Zig. These languages have very good support, they have very good tooling, and they're normally mature. The flip side of these languages is that they have a big learning curve. Again, this depends on who your target audience is. If you're using this internally, you can probably ramp up all your developers. If you're distributing this to third-party developers, you might want to consider how this is going to affect their productivity in shipping extensibility code to your platform.

The second category is languages that are designed exclusively for WebAssembly. In this category, you can think of languages like AssemblyScript or Grain. These languages offer good support for WebAssembly, and they are performant. One of the downsides here is that they're young, in the same way that WebAssembly is a young technology. This means that they lack things like editor support, debugging support, and ecosystem support compared to a language that has been around for many years.

The third category is languages whose runtime needs to be compiled to WebAssembly. The con here is that these languages are often associated with slow execution. That was largely true until recently, when research emerged on how to improve the cold start of these languages. What I mean by cold start is that these languages need to do some preparation before actually executing any code that you want to run. They need to prepare an engine. They need to prepare a context to be able to fully execute the code that you have given them. If that can be improved, then the general performance of that language on top of WebAssembly can probably be improved.

The WebAssembly Pre-initializer (Wizer)

At Shopify, we have experimented with all three of these categories. Lately, we've been investing in the third category, thanks to a tool called Wizer, which stands for the WebAssembly Pre-initializer. This tool is built by the Bytecode Alliance. Wizer allows the ahead-of-time pre-initialization of WebAssembly modules in such a way that at runtime there is no need to perform any initialization work. This is a bit abstract, but if we apply the concept to a concrete use case, it hopefully becomes clearer. One of those concrete use cases is a language runtime. In this scenario, we pre-initialize a JavaScript engine, which gets embedded into a WebAssembly module. Then that module is ready to be executed without incurring a cold start. The approach potentially applies to other languages with runtimes, like Ruby and Python. We have built an open source tool that uses Wizer and QuickJS. It's available in the Shopify/javy repository. What's important to share here are the results of using Wizer. Using QuickJS, we observed a 50% improvement on cold starts versus WebAssembly modules that were not using Wizer.
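
A toy illustration of what Wizer-style pre-initialization buys you, independent of WebAssembly itself: do the expensive setup once, snapshot the resulting state, and have every later instance start from the snapshot instead of re-running initialization. The data here is a stand-in for an engine's real startup state.

```python
import copy

def expensive_init():
    # Stand-in for a JS engine's startup work (building globals, parsing
    # and compiling application code to bytecode, etc.).
    return {"engine": "ready", "globals": {"console": "native-binding"}}

# "Build time": run initialization once and snapshot the state. Wizer does
# the analogous thing by writing the initialized state into the data
# segments of a new Wasm module.
SNAPSHOT = expensive_init()

def instantiate():
    # "Run time": each instance starts from the snapshot, skipping init.
    return copy.deepcopy(SNAPSHOT)
```

The per-request cost drops from "full initialization" to "copy a snapshot", which is where the cold-start win comes from.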

Module Execution

The next important section of our pipeline is executing modules. This is arguably the most important part. To dig a bit deeper, we can ask questions like, how do we get input and output from the WebAssembly modules? To answer that question, we need to recap the data types that are supported by WebAssembly. WebAssembly right now supports i32, i64, f32, and f64: basically just numbers to pass data around and to define programs. Given the current limitation in data types, we take advantage of WebAssembly's memory model to pass complex data in and out of our programs. More concretely, we rely on MessagePack to encode data in and data out. MessagePack is just an implementation detail. The reason I'm mentioning it here is that as long as you can pass pointers in and out, you can point the WebAssembly module toward the data store so that it can read from there, decode or encode in a specific format, use that as the driving mechanism for passing data in and out, and then execute the logic of the program. In the hopefully not-so-far-away future, interface types should be available, a proposal that solves exactly this problem. Interface types is a proposal for WebAssembly that allows defining a set of interface types that describe high-level values. It describes how more complex data types would look and interact with WebAssembly modules.
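
The pointer-passing idea can be sketched like this: the host encodes a value into the module's linear memory and hands the guest only a (pointer, length) pair of numbers. A bytearray stands in for linear memory, and JSON from the standard library stands in for MessagePack; both are illustrative choices.

```python
import json

MEMORY = bytearray(64 * 1024)  # stand-in for a module's linear memory

def host_write(value, offset=0):
    """Encode a complex value into 'linear memory'; return (ptr, len)."""
    data = json.dumps(value).encode("utf-8")
    MEMORY[offset:offset + len(data)] = data
    # These two integers are all that actually crosses the Wasm boundary.
    return offset, len(data)

def guest_read(ptr, length):
    """What the guest does with the two i32s it received."""
    return json.loads(MEMORY[ptr:ptr + length].decode("utf-8"))
```

The same mechanism works in reverse for output: the guest writes its result into memory and returns a pointer and length to the host.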

Witx-bindgen

For now, if you want a taste of what that future might look like, you can use tools like witx-bindgen. Witx stands for WebAssembly Interface Types Extended, though this naming might change in the future. Witx allows you to define interfaces that describe imports and exports for WebAssembly modules and for their hosts. You can define imports and exports for the WebAssembly module, which have a different meaning than they have for a host. You can define interfaces for a host; in this case, let's assume the host is Wasmtime. That's going to create some glue code that deals with how data is passed into the WebAssembly module. The same concept applies when defining interfaces for the guests, the WebAssembly modules: some glue gets generated to make that interaction possible. The advantage of this approach is that it liberates the programmer from dealing with pointers and ABI details.
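
Here is a hand-written sketch of the kind of glue a bindings generator produces, so you can see what it saves you from writing. The lift/lower/export names are illustrative (lift and lower echo the terminology used in interface-types discussions), and JSON stands in for the real encoding.

```python
import json

MEMORY = bytearray(64 * 1024)  # stand-in for linear memory

def lift(ptr, length):
    """Glue: turn bytes in memory back into a high-level value."""
    return json.loads(MEMORY[ptr:ptr + length].decode("utf-8"))

def lower(value, offset=0):
    """Glue: encode a high-level value into memory; return (ptr, len)."""
    data = json.dumps(value).encode("utf-8")
    MEMORY[offset:offset + len(data)] = data
    return offset, len(data)

def export(func):
    """Wrap a guest function so the ABI only ever sees (ptr, len) pairs,
    while the author's code works with plain values."""
    def abi_entry(ptr, length):
        result = func(lift(ptr, length))
        return lower(result, offset=32 * 1024)  # results live past 32 KiB
    return abi_entry
```

With generated glue like this, the extension author writes only the inner function and never touches pointers.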

Module Size

The other important topic when executing modules is module size. There is a belief that on the server, module size is less important than in the browser. That could be true, but I think it's not an absolute truth. If you're trying to execute WebAssembly modules at massive scale with very tight SLOs, like in our case, and depending on your network configuration, fetching a big module from storage or cache might have a negative impact on performance. Modules might get big for various reasons. For example, if you're embedding a JavaScript engine or another language engine, most of the module size is probably going to be code for the runtime itself, rather than code for the extensibility that a developer is trying to achieve on your platform. Assuming that the language runtime is the same per request, it doesn't change, which should be the case, you can take advantage of a new WebAssembly proposal called module linking. Module linking will allow you to keep a single copy of the runtime on your servers, and then link against that WebAssembly engine to create the final executable that is going to run the code. This lets you ship only a small Wasm module that contains the user code and some constructs needed to link it to the base WebAssembly engine, and that forms the final WebAssembly that you would execute. This allows you to avoid shipping the language runtime in every module, drastically reducing the module size.
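
A toy model of the idea: keep one copy of the large runtime module on the server, and resolve each small user module's imports against its exports at instantiation time. The sizes and names are made up purely to show the shape of the win.

```python
# One shared copy of the big language-runtime module, kept server-side.
RUNTIME_MODULE = {"name": "js-engine", "size_bytes": 3_000_000,
                  "exports": {"eval"}}

def link(user_module):
    """Resolve a user module's imports against the shared runtime."""
    missing = user_module["imports"] - RUNTIME_MODULE["exports"]
    if missing:
        raise ValueError(f"unresolved imports: {missing}")
    # The linked instance behaves like one big module, but only the user
    # module's bytes had to be shipped per deployment.
    return {"instance": user_module["name"],
            "shipped_bytes": user_module["size_bytes"]}
```

A few kilobytes of user code per deployment replaces megabytes of runtime per deployment, which is the size reduction described above.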

Recap

The intention of this talk is not to do a deep dive on a particular subject, but instead to highlight the most important pieces when trying to solve synchronous extensibility through WebAssembly, as well as to take a peek at Wasm's future features like module linking and interface types.

Questions and Answers

Schuster: Which platform service does it actually extend? It’s your shopping platform, I presume.

Cabrera: Yes, that's Shopify, we have this built in. This is not publicly available yet, but it's something that we've been working on for quite some time now. I think the concepts here don't depend on which platform you are trying to extend. They are generic enough that they can be extracted and applied to any platform, if the requirements of that platform allow it to be done this way.

Schuster: Did WebAssembly allow you to basically build some new capability that you couldn’t have done without it, or did you have another solution previously to do the same thing?

Cabrera: We had something similar, but not built on top of WebAssembly. WebAssembly brings new features, like better execution and better security, which are the ones we're leveraging to do this internally at Shopify.

Schuster: Five milliseconds for the extensions seems slow, was it querying a database or something?

Cabrera: There is some context that I can add here. Five milliseconds is the max that we allow; we will likely kill anything running longer than that. Normally, extensions run between 1 millisecond and 3 milliseconds, and they don't have access to external calls like a DB or HTTP requests. They are considered pure functions in our case, even though you could add this functionality with WebAssembly. We don't, to avoid adding new variables to the execution time. One thing to take into account is that inputs to those functions may vary. If the input is bigger, serialization time is going to increase with the size of the input. Obviously, this is something that can be tuned so that you only pay for the data that you use. This is a much more complex problem that we're trying to tackle right now.

Schuster: I ship a WebAssembly module to you and say, this is my extension. When the thing is run, do you run it in a JIT’ed WebAssembly runtime, or do we do something with ahead-of-time compilation for linking things in?

Cabrera: There are a couple of things there. WebAssembly doesn't allow any type of JITing, it doesn't allow dynamic generation of machine code, so we can't do that. Everything has to be done ahead of time. The WebAssembly that developers ship is totally ready to run; it only expects an input and will create an output. One of the optimizations that I talked about is when you're embedding a language runtime like JavaScript. In that case, to avoid shipping the runtime every time, you can make use of a new WebAssembly feature that is not totally standardized, but supported by some runtimes like Wasmtime, called module linking. It's in the realm of dynamic linking: the runtime module lives on the server where you're trying to execute code, and the other module contains only the code that you're trying to execute. That way, your module becomes smaller and easier to ship, which reduces latency.

Schuster: Is this a WebAssembly runtime feature, or do we merge the two modules together and create a new one that’s linked?

Cabrera: No, this is a WebAssembly runtime feature. You can tell the WebAssembly runtime, I'm loading two modules, and depending on how you load them and on the interface the runtime provides for this, you can tell the runtime how one depends on the other. Then it's going to link the exports of one into the other so that functions can be called between modules.

Schuster: When I ship my WebAssembly module to you, what runtime do you use, is it Wasmtime or something?

Cabrera: Yes, Wasmtime. Wasmtime is developed and maintained by the Bytecode Alliance. We have a wrapper around that, which is an external service that receives an execution request. That wrapper defers the execution to Wasmtime; we copy the input in, and the execution takes place. Then we get outputs or any errors, because errors can happen if developers ship bad code.

Schuster: Wasmtime, does that load the Wasm bytecode and interprets it or compiles it at runtime to native code, or do you have some optimizations for not doing that all the time?

Cabrera: Wasmtime can execute a Wasm binary and will compile it internally. There is also a feature to compile it ahead of time into machine code; that way, when you load the compiled file, it's machine code already. What we do is that when a module gets deployed, it gets compiled to machine code ahead of time, and that compiled artifact is fed into Wasmtime at runtime and executed. Compiling at request time was making us miss our SLOs, so we had to move that work to deploy time.
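
The compile-at-deploy pattern can be sketched as caching the compiled artifact keyed by the module bytes. `compile_to_native` here is a placeholder, not Wasmtime's actual API, and an in-process cache stands in for a real artifact store.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def compile_to_native(wasm_bytes: bytes) -> bytes:
    # Placeholder for an expensive ahead-of-time compile (as Wasmtime's
    # precompilation support does). Here we just tag the bytes.
    return b"native:" + wasm_bytes

def execute(wasm_bytes: bytes) -> bytes:
    # At request time this is a cache hit: the module was already
    # compiled when it was deployed, so no compile cost is paid here.
    return compile_to_native(wasm_bytes)
```

Deploy-time compilation moves the expensive step off the request path, which is what keeps the 5 ms SLO achievable.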

Schuster: That's one of the nice advantages of Wasm, because you don't have to pay for the JIT, essentially.

Module linking is a new feature. One thing I find slightly confusing about Wasm nowadays is that we had the Wasm MVP, which was the first version. How do you keep track of features that get added to Wasm? There isn't going to be a Wasm 2.0, or next-gen, or something like that? How do you keep track of that?

Cabrera: It's a bit tricky. What I normally do is attend the WebAssembly working group meetings, or the WASI working group meetings, to keep track of those. Also, these proposals are described very well in the WebAssembly spec on GitHub. If you go there, you can keep track of the latest development in each of those, depending on what you're interested in. It's especially difficult to get a good sense of what's happening, because they are in flux most of the time. I think a combination of attending those meetings, reading through the repos, and seeing how some runtimes support each of these features is a good set of things to look at if you want to stay on top of this.

Schuster: Is there any idea for building Wasm feature profiles, so you could say, here’s a bundle of stuff, like module linking, and threading, and stuff like that? We have a name for some of these features, or is that all just in flux?

Cabrera: There is this idea of a component model, in which you could have many components linked together, forming some sort of self-contained box. We can go back to the example of the runtime. In this case, I was talking about a runtime, but it could be something else. Say you have a WebAssembly module that is compiled, and you want to link it with something else; you can link those pieces together, and the resulting box can be considered a component. We can talk about features like threading and the others you just mentioned, but I think some of them might also go into core WebAssembly. Depending on the feature, we can discuss where it goes.

Schuster: Are there any features that you particularly look forward to that are maybe being worked on or that haven’t been worked on?

Cabrera: I think interface types is definitely one that is particularly important for people doing the type of stuff that we're doing, mostly because data passing is not standardized. Everyone can do whatever is most convenient for their use case, but that adds friction when you want to build extra tooling for developers to test their code, for example. In our case, we have been using MessagePack, and MessagePack is not as friendly as JSON. That adds some friction. We can talk about workarounds, but I think it would be good if all the toolchains could automatically create those interfaces for you, so that data passing becomes somewhat transparent in that regard.

Schuster: Interface types allow you to define struct-type things and automatically deserialize them?

Cabrera: Yes. Right now there is a tool, witx-bindgen, which was renamed to wit-bindgen, WebAssembly Interface Types Bindgen. Depending on where you use it from, like if you use it from Rust, and on which side of the WebAssembly boundary, like the runtime side, it will create Rust traits for you to implement, and it will know how to glue those traits into copying data in and out of the WebAssembly runtime. That's a peek into what the future might look like. I think there is much more work to be done in that area for it to be fully adopted and standardized.

Schuster: Are there any other limitations of WebAssembly that you had to work around besides the linking and stuff like that?

Cabrera: There's one important one that I didn't mention, but it's worth mentioning here: exception handling. The exception handling proposal, I think, is reaching phase 4, which means it's almost standardized. Right now there is no clear way to tell the developer that something went wrong when you're running a platform that executes untrusted code. If bad code is executed, the WebAssembly runtime will return a trap. That trap might contain some information, but not the stack traces that we're used to getting on other platforms when things go wrong. That's one of the challenges we've been thinking about, because observability is important, especially in performance-sensitive contexts like the ones we're running in.

Schuster: It's a problem for all languages with exception capability, basically.

Cabrera: Yes, exactly.

Schuster: I presume that’s going to need some Wasm bytecode changes or runtime changes to work?

Cabrera: The proposal describes exactly that. I think the proposal is already being implemented in a couple of engines, SpiderMonkey and V8, which already have it, maybe in an experimental state. I'm not sure if it's a formal proposal, but there is something called DWARF for WebAssembly. Toolchains can emit debug symbols for WebAssembly, but the challenge with that is module size. When you keep debug information in the modules, modules become bigger. Depending on the runtime, and depending on what you're compiling in, they might become very big. Like [inaudible 00:32:04], if you remove debug information, you can cut module size by half, which is considerable. The other challenge is, how do you report, or transform, the debug information into something that's useful for the developer? You might have it, but you need to inspect it and transform it into something that points to the actual source code and not to the WebAssembly code, because WebAssembly code is not meant for developers to read. It's a bit challenging.

Schuster: It's like a source map in JavaScript that translates bytecode back to the original source. Doesn't DWARF handle that?

Cabrera: We haven’t explored the DWARF part very much. That’s just something that I know, but I haven’t got into the details.

Schuster: One thing that was interesting is you mentioned that you ship a JavaScript runtime with WebAssembly. That seems interesting. What JavaScript runtime are you shipping? How does that work?

Cabrera: We're using QuickJS. QuickJS is an engine that doesn't have a JIT, whereas all the other performant JavaScript engines, like V8, SpiderMonkey, and JavaScriptCore, the one that powers Safari, have JITs. JITing, in the JavaScript engine sense, doesn't work within WebAssembly. We started looking for something lightweight that didn't require a JIT; QuickJS only has a baseline compiler that is optimized for what it does, executing code in smaller places. That's why we started with QuickJS. We compiled it to WebAssembly, wasm32-wasi is the actual target. Then we embed the code that the user gives us, and then we execute that code. One of the important parts is that we pre-initialize the engine, because JavaScript engines normally need to do some work before actually executing the code: preparing the engine and then preparing the application code, parsing your JavaScript and potentially converting it into bytecode before actually executing it. We pre-initialize that with a tool called Wizer, which snapshots that phase into the data segments of a new WebAssembly module. Then at runtime, those segments are already loaded and ready to be executed. That's, at a high level, our process.

Schuster: What type of actions can developers take in your WebAssembly runtime? Can they call out to any services? What APIs do you offer them, what capabilities?

Cabrera: We have a set of APIs right now that developers can use, like a Payments API, for example, to deal with how payments are ordered. One use case is: if you are in Europe, there might be different payment options than in North America, or the ordering of those payment options might be important in Europe and not relevant in North America. Ordering those payment methods is one. Ordering the shipping methods is another. Then, applying discounts. Those are the three main APIs. Each API has a specific concept of what can be done, like reordering, hiding, sorting, and things like that. And in terms of discounts, applying discounts. No, we don't have the ability to do arbitrary remote calls to other services, because we want to keep the execution time deterministic. For each API, we give it the information it needs to perform the actions it's intended to.
