
Presentation: Living on the Edge: Boosting Your Site’s Performance with Edge Computing

Erica Pisani

Article originally posted on InfoQ.


Pisani: Welcome to my talk, Living on the Edge: Boosting Your Site’s Performance with Edge Computing. My name is Erica. I am a senior software engineer at Netlify, working on the integrations team.

We’re going to talk about what the edge is. We’re going to look at hosting web application functionality on the edge, looking specifically at Edge Functions. We’re going to look at accessing and caching data on the edge to help boost site performance through that mechanism. There’s a little bit of a surprise toward the end, what I’m calling the edgiest part of the edge. Then a wrap-up.

What Is the Edge?

What is the edge? To understand that, we need to understand a little bit about how cloud providers like AWS, Google Cloud Platform, and Microsoft Azure organize their servers, so we’re going to touch on that very briefly. At the broadest possible scope, things are organized by regions, think your Canada Central, your us-east-1, your EU Central, and availability zones. There are a number of these within regions, and they contain one or more data centers. When we talk about an origin server, we’re commonly referring to a server that is hosted in one of these availability zones. The edge is just data centers that live outside of an availability zone. An Edge Function is a function that’s run on one of these data centers outside of an availability zone. Data on the edge is data that’s cached, stored, or even just accessed on one of these data centers. You might also see the terms points of presence (POPs) or edge locations used to refer to these; it just depends on the cloud provider that you’re working with. To give a sense of just how many more edge locations there are in the world relative to availability zones, I grabbed this from AWS’s documentation. This is a map of all the availability zones in their global network. Then if I pop to this next slide, these are all the edge locations. Those orangey-yellowish larger circles on the map are regional edge caches; they’re slightly larger caches than what you would find at an edge location, but maybe not as big as what you would find on an origin server.

You can see immediately at a glance that if you’re able to handle a user’s request through the edge network, you can improve site performance significantly by lowering the latency incurred from receiving and fulfilling the request. In particular, I’m looking at users that are based in the South America, Africa, or Australia regions, where there’s only one availability zone but multiple edge locations. If for some reason you have to host a user’s data in that availability zone, having some of the requests, at least the most popular ones, handled at the edge means that users located at the other end of those regions will benefit from faster responses due to the server being much closer to them. Let’s take a quick look at where the edge fits into a request lifecycle. In this diagram, I have a user that’s based in South America and they’re making a request. That little red diamond, that’s our edge location, and it’ll be the first to pick up the request. The best possible case scenario is that the edge location is able to fulfill the request in its entirety, and it sends a response back to the user. Let’s say that it can’t for whatever reason: maybe your site was just deployed and so the cache has been invalidated, or maybe this is a request that’s not made very often so you just don’t cache it, or the cache has expired if you do. What’ll happen is the edge location will make a request to the origin server. The origin server will handle it normally, and then send a response back to the edge location. At this point, you have the option to cache the origin response, and then the edge location will send the response back to the user. If it’s something that’s requested often, I highly encourage caching at this location, because it means that other users in proximity to that edge location who make the same request will benefit from that cached response. You remove the need for the edge location to make a request to the origin server.
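The cache-then-fall-back-to-origin lifecycle described above can be sketched in a few lines. This is a toy model, not any provider’s real API: the class name, the in-memory `Map` cache, and the injected `fetchFromOrigin` function are all illustrative.

```typescript
// Minimal sketch of the request lifecycle: the edge location serves from
// its cache when it can, and only falls back to the (distant) origin on a
// miss, caching the origin's response for the next nearby user.
type Fetcher = (path: string) => Promise<string>;

class EdgeLocation {
  private cache = new Map<string, string>();

  constructor(private fetchFromOrigin: Fetcher) {}

  async handle(path: string): Promise<string> {
    const cached = this.cache.get(path);
    if (cached !== undefined) {
      return cached; // best case: no round trip to the origin at all
    }
    const body = await this.fetchFromOrigin(path); // slow, long-distance hop
    this.cache.set(path, body); // nearby users benefit from this next time
    return body;
  }
}
```

The first request for a path pays the full origin latency; every subsequent request for the same path from users near that edge location is answered locally.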

Web Application Functionality on the Edge Using Edge Functions

Let’s talk about web application functionality on the edge using Edge Functions, now that we’ve done an overview of what the edge is and where it fits in the lifecycle of a request. For me, it’s best to demonstrate how useful the edge can be by working through some kinds of problems. One of these is handling a high-traffic page that serves localized content. Imagine for a moment that I have a thriving global e-commerce business, maybe a pet store, and I’m looking for easy wins to remove load from the server and return cached responses. The challenge is that because I’m serving localized content, let’s say it’s the homepage and I have a banner that promotes sales in certain cities, welcome messages, things like that, I’ve been relying on the server to inject that dynamic content before sending the page back to the user. How can I use Edge Functions, or the edge in general, to help address this? The way that I can do that is by transforming the server-side rendered page into a static one and injecting that localized content using an Edge Function. By transitioning to a static page, you’re able to cache that page on the content delivery network, or CDN, for faster access going forward. You’re removing the load on the origin server that existed previously, because rather than having to render the page every time it received a request, that would just be handled by an Edge Function. Then you have the option to cache that generated page on the Edge Function.

Let’s take a look at some code. My hypothetical e-commerce website is a Next.js app. Both the middleware function on the left-hand side and the page that is server-side rendered (based on the presence of getServerSideProps) are running on the origin server. To go through the code here, starting with the middleware: I’m getting the geo object off of the request object, getting the country off the geo object, and then adding that value onto the URL before rewriting the request. Going to the right-hand side, I’m getting that query parameter off of the URL. Then based on its value, if the user is based in Canada, and I want to have a bit of a hometown sale going on for my store, I’m giving a promo code of 50% off their order for my Canadian customers, but a very friendly Hello world for the rest of the world. The server will render the HTML based on that value and then send the response back to the user. To recap, the server has to render this every time; it’s not cacheable on the CDN. You could create different pages to route to in order to make the pages cacheable, but that will result in duplicated logic, and it can get out of hand really quickly for anything even slightly bigger than a small hobby site. We want to avoid that as much as possible.
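The slide code isn’t reproduced in the transcript, but the rewrite-and-branch logic described above can be sketched with the standard URL API. In a real Next.js app the middleware would read the platform-provided geo data off the request and call NextResponse.rewrite; here the country code, the default fallback, and the banner strings are all illustrative.

```typescript
// Middleware step: attach the visitor's country to the URL so the page
// can render the right localized banner. `country` stands in for the
// geo data the edge platform attaches to the request.
function rewriteWithCountry(requestUrl: string, country: string | undefined): URL {
  const url = new URL(requestUrl);
  url.searchParams.set("country", country ?? "US"); // assumed default
  return url;
}

// Page step: the server-rendered page branches on that query parameter.
function bannerFor(country: string): string {
  return country === "CA"
    ? "Hello Canada! Here's a promo code for 50% off your order."
    : "Hello world";
}
```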

That brings us to our static and Edge Functions example. You’ll notice that the middleware function’s a little bit bigger now, and the page that we have on the right-hand side is now a static one, based on the presence of getStaticProps. What’s not obvious from the code here is that in this example, the middleware is running on the edge. Just keep that in mind as we’re going through this. The first two lines of the middleware function are the same as before: getting the geo object off the request, and getting the country off the geo object. On the next line, const request = new MiddlewareRequest(...), I’m leveraging some functionality in the Netlify Next.js package that does some cool stuff in the next few lines. On the line after that, const response = await ..., the request is heading toward the origin. Rather than reaching the origin server, though, it’s getting the page that’s cached on the CDN, so that’ll be a faster request to fulfill. At that point, the value of response is the rendered HTML with the banner that says Hello world. As we talked about before, the use case here is that I need to show different banner messages depending on where the user is located. How I’m doing that is by determining if the user is based in Canada, and this is where the fancy middleware functionality comes in: I’m updating the text of the HTML with response.replaceText and the message that I want to show, and then updating the Next.js page prop before returning that response back to the client. Where before, a little further up at the const response = await ... line, the banner said Hello world, with that if conditional I will have updated it so the client sees the Hello Canada promo code banner. You can cache this on the edge, which means that other requests picked up by that edge location are likely going to want that same response. You’re saving that extra trip to the CDN in this case. The request starts getting handled sooner and the user gets the response much sooner, so there are small performance gains here compared to our original setup, where we had to go to a potentially distant origin server, render the page, and then return the response.
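As a very rough model of that transformation: the real @netlify/next middleware rewrites the streamed response and updates the Next.js page props; the plain string replacement over the cached HTML below is only to make the idea concrete, and the banner strings are the same illustrative ones as before.

```typescript
// Simplified model of the edge middleware's job: take the prerendered
// page that came back from the CDN and swap the banner text for Canadian
// visitors before returning it. Everyone else gets the cached page as-is.
function localizeCachedPage(cachedHtml: string, country: string): string {
  if (country !== "CA") {
    return cachedHtml; // cached "Hello world" page, untouched
  }
  return cachedHtml.replace(
    "Hello world",
    "Hello Canada! Here's a promo code for 50% off your order."
  );
}
```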

Another great use case for Edge Functions and boosting site performance is user session validation. Let’s say that on this pet store site of mine, there are some high-traffic pages that require a user to have an account. I notice that the user session validation is taking a little more time than I’d like, especially for users that are physically further away from the origin server. I also know that it’s often the case that users interacting with these specific pages are not initially signed in: they’re making a request, it’s reaching the origin server, and then it’s coming back with, “You’re not authorized to look at this page. You need to sign in,” and they’re redirected to the login page. How can we get some performance gains here? The way that we can do that is by doing the session validation in an Edge Function, using Auth0, because that’s one of the more popular providers out there for this functionality. What I’m doing here is, with about two lines of code, I have made it so that you need to have an account to log in to the website. Right now, this is just guarding the whole website that I have. What you could do is modify this to look at something that’s part of a cookie, or, if you’re using a JSON web token in an OAuth application, you can look at roles: if someone’s an admin, they get access to this page, or if someone’s a paying, subscribing member, they get access to this content on your website.
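A hedged sketch of that idea: if the request carries no session cookie, redirect to the login page right at the edge instead of paying a round trip to the origin. The cookie name and login path are made up, and a real setup (with Auth0 or otherwise) would also verify the token’s signature and expiry rather than just checking for the cookie’s presence.

```typescript
// Edge-side gate: returns a redirect Response for anonymous visitors,
// or null to let the request continue on to the protected page.
function validateSession(request: Request): Response | null {
  const cookies = request.headers.get("cookie") ?? "";
  const hasSession = cookies
    .split(";")
    .some((c) => c.trim().startsWith("session=")); // assumed cookie name
  if (!hasSession) {
    // Bounce to login without ever touching the origin server.
    return Response.redirect(new URL("/login", request.url).toString(), 302);
  }
  return null; // session present: fall through to the page
}
```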

The last one I want to talk about is routing a third-party integration request to the correct region. This particular problem is very near and dear to my heart, because I ran into it a few years ago. I was working at a company where we had two origin servers, one in North America and one in the EU. Users and their data would live in one of these, but not both. Because we were looking to comply with certain privacy laws and customer requests that involved hosting data in a particular region, we had to get a little bit funky at times with trying to figure out where the user was located in a performant way. We also had third-party integrations that we needed to support that would want to access or modify our users’ data. That took out the option of using geolocation, because the request coming in on behalf of the user might be coming from a place where the user isn’t located. The user might be in Europe, but the integration has its servers based in Australia, or South America, or North America.

Let’s go through the authorization flow for an integration, just to really paint a picture of how this was such a problem for us. An integration is being enabled by a user on a third-party integration site, let’s say in Australia. The request would go from Australia to my company’s origin server in North America first. We would check to see if the user existed within that region. If not, we needed to check and confirm whether the user existed in the other region. Then, if they did, we’d return a response back to the integration as part of the authorization data flow. One option we considered to optimize the subsequent API requests after that initial authorization was to return a URL, pointing at either North America or the EU, that integrations would then make future requests against, as part of the authorization response. You can already get a sense that this leaks an implementation detail, which is not ideal, and it’s easy to get wrong. It’s more data that the integration has to store. If something changed in the future, we’d be stuck with having to support all these integrations that now have these URLs as part of their data and their implementation against our site. We also looked at encoding the region in the JSON web token as part of an OAuth authorization flow, and it was ultimately the preferred approach. What this meant is that we had to run a proxy server in both regions to route the request to the correct region if a decoded JSON web token indicated that the user’s data belonged in the other one. You’re still potentially having to make a transatlantic request as part of this. That’s a lot of traveling for a request, and a lot of latency introduced just by where the request is being redirected to. As mentioned before, the URL approach meant that the integration needed to know an implementation detail in order to send the most optimal requests to our servers. The encoded region approach, even though it took out the need for the integration to know which URL to go to, meant that a request could still be bounced around the world, and we now had a service that was potentially a bottleneck for performance or a failure point in the request. It could be hard to debug what the issue was if requests started failing.

Now that I’ve been looking at Edge Functions, I’m realizing that the edge would have been really handy here; just a simple Edge Function would have made our lives a lot easier. We couldn’t get around needing to encode the region in the JSON web token, but it’s a small piece of data to add. Then what we could do is relocate that proxy code to an Edge Function, and that Edge Function would do the routing for us. What that would look like is, going back to our integration based in Australia: they make a request and the edge location picks it up. We decode the token, and then based on the region that’s there, we go directly to the EU instance or directly to the North America instance. Much faster, very straightforward. You don’t need to spin up a distinct service for this, and architecturally it’s quite simple. It results in reliable, better performance. If the request is one where the response is something you’d want to cache, it’s now cached physically closer to the integration, and it’ll be much faster to access going forward, rather than caching at the origin server and still having to make that very long-distance request.
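The routing step can be sketched like this: decode the region claim from the JWT payload and pick the matching origin. The claim name, origin URLs, and fallback are all made up for illustration, and a real edge function must verify the token’s signature before trusting it; only the base64url decode of the payload is shown.

```typescript
import { Buffer } from "node:buffer";

// Hypothetical regional origins for the two-origin setup described above.
const ORIGINS: Record<string, string> = {
  na: "https://na.api.example.com",
  eu: "https://eu.api.example.com",
};

// Pick the origin for a request based on the region claim encoded in the
// JWT. NOTE: no signature verification here; a real edge function must
// verify the token before routing on its claims.
function originForToken(jwt: string): string {
  const payloadSegment = jwt.split(".")[1];
  const payload = JSON.parse(
    Buffer.from(payloadSegment, "base64url").toString("utf8")
  );
  // Fall back to North America if the claim is missing or unknown (assumed).
  return ORIGINS[payload.region] ?? ORIGINS.na;
}
```

The edge function would then forward the request to `originForToken(token)` directly, instead of bouncing it through a proxy server in the wrong region.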

Data on the Edge

Let’s talk about data on the edge. For those who’ve been developing in the JavaScript ecosystem for a while, you’re probably familiar with the debates over the years about how to boost website performance through how a website is architected. I’m talking specifically about whether to build a single-page application, leaning more into an islands architecture, favoring the server with server-side rendering, things like that. Regardless of which camp you gravitate towards in how you build your site, having the ability to load data on a server physically closer to the user can be an enormous help in boosting your site’s performance. Before we get into a couple of providers that do this, it’s worth noting that historically there have been challenges with hosting and accessing data on the edge. They’re the same challenges that databases in general have: managing the limited number of connections that databases have available when you’re experiencing a high amount of traffic, and ensuring that data remains consistent, especially if you’re caching it in various places. In a serverless and Edge Function environment, how do you make sure that you don’t run out of connections on your database when you’re suddenly spinning up hundreds of thousands of serverless functions or Edge Functions to handle a spike in traffic? If you’re caching things on the edge, how do you ensure that data remains consistent, or is promptly invalidated if it’s modified somewhere else in the world?

We’re going to talk first about the limited number of connections. Much like with a traditional monolithic database, this is tackled using connection pooling: a collection of open connections that can be passed from operation to operation as needed. It gives you a bit of a performance bump when handling database queries because it removes the overhead of needing to open and close a connection on every operation. A tool that uses this in a serverless and Edge Function environment is Prisma Data Proxy. What it does is create an external connection pool, and requests for the database need to go through this proxy before reaching the database itself. Another database provider that leverages this connection pooling approach is PlanetScale. They use Vitess, an open source database clustering system for MySQL, under the hood, and leverage its connection pooling at the VTTablet level. By doing this, they can scale the connection pooling with the database cluster. If you’re interested in learning more about how they’ve managed this, they have a blog post detailing how they were able to handle a million requests a second with no issues, which is just mind-boggling to me.
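To make the pooling idea concrete, here is a toy pool: a fixed set of open connections handed out per operation and returned afterwards, so a burst of function invocations shares connections instead of each opening its own. Real poolers (Prisma Data Proxy, Vitess’s VTTablet layer) are far more sophisticated; everything here is illustrative.

```typescript
// Toy connection pool: hands out a fixed set of connections, queueing
// callers when the pool is exhausted instead of opening new connections.
class ConnectionPool<T> {
  private idle: T[];
  private waiters: ((conn: T) => void)[] = [];

  constructor(connections: T[]) {
    this.idle = [...connections];
  }

  private acquire(): Promise<T> {
    const conn = this.idle.pop();
    if (conn !== undefined) return Promise.resolve(conn);
    // Pool exhausted: wait until someone releases a connection.
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  private release(conn: T): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(conn);
    else this.idle.push(conn);
  }

  // Run an operation with a pooled connection, always returning it after.
  async withConnection<R>(op: (conn: T) => Promise<R>): Promise<R> {
    const conn = await this.acquire();
    try {
      return await op(conn);
    } finally {
      this.release(conn);
    }
  }
}
```

With a pool of, say, two connections, five concurrent invocations never hold more than two database connections at once; the database sees a bounded load no matter how many functions spin up.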

Going over to the data consistency side of things, Cloudflare’s Durable Objects are one approach that’s been taken. What Cloudflare has done here with Durable Objects is basically take the monolithic database and break it down into small logical units of data. When you’re creating a durable object, Cloudflare automatically determines for you where that object will live. It’s usually on an edge location closest to the user making the request that initially created it, but it can change regions if you need it to: in the future, let’s say you find that it’s more performant to have it somewhere else, you can do that quite easily. They’re globally unique, and they can only see and modify their own data. If you need to access or modify data across multiple durable objects, it usually has to happen at your application level. They’re accessed through Cloudflare’s internal network using Cloudflare Workers. The use of the internal network ensures that accessing them is performant, and much faster than accessing them over the public internet. Going back to PlanetScale, they also use an internal network approach, which they say operates similarly to a CDN. When someone is trying to modify or access data, a client will connect to the closest geographic edge on PlanetScale’s network, and the request will be backhauled over long-held connection pools in their internal network to reach the actual destination. Initially, I was a little bit skeptical about the performance gains when I was reading this, since the request could be traveling quite far for the data, but with how they’ve set up their internal network, it’s actually able to reduce the latency of requests compared to doing it over the public internet.

We’ll take a quick look at what fetching data on the edge looks like. Using PlanetScale, it’s pretty straightforward. This is a function that’s running on the edge on Netlify’s services, so that’s why it’s using the Deno runtime here. On lines 12 and 13 of the slide, that’s where we’re connecting to the database and making a request to it, doing a query for the five most popular items where there’s still stock available in my inventory, and then continuing on with the request. At the moment, this will query the database every single time, which is not necessarily the most efficient. Given that this query is the kind that would be on, let’s say, the homepage, or some other high-traffic page, because it’s showing a list of the most purchased items that we still have available, caching would likely be useful here. Other users will likely benefit from having that cached response. Let’s take a look at what that looks like. I have to add the caveat that this is very experimental code on Netlify’s side; it’s still a bit of a work in progress and not widely available. I wanted to demonstrate it just to give a sense of how easy it can be to cache on the edge. Lines 12 and 13 are the same. Then in the response that I’m returning, I’m returning the results from the database query on line 13, and adding a cache-control header to indicate how long I want this to live for. This will be on an endpoint called getPopularProducts, which is what you can see on line 23 in the config object. Someone makes the initial request, and it’ll invoke the function, query the database, and send the response back. For other users, rather than having the Edge Function invoked, the result will actually be served from the cache. You’re saving a little bit of money because you’re not invoking the Edge Function, and this result can be reused across multiple clients, especially because it’s a generic, not user-specific, result.
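The slide code isn’t reproduced in the transcript, but the shape of that experimental example can be sketched as a standard-Response edge handler. The database call is stubbed out here (in the talk it’s a PlanetScale query for the five most popular in-stock items), and the endpoint path, max-age, and sample row are all illustrative.

```typescript
// Stand-in for the PlanetScale query, e.g.
// conn.execute("SELECT ... ORDER BY purchases DESC LIMIT 5")
async function fetchPopularProducts(): Promise<unknown[]> {
  return [{ name: "catnip", purchases: 512 }];
}

// Edge handler: return the query results with a Cache-Control header so
// later requests are served from the edge cache without invoking the
// function (or the database) again.
export default async function handler(_request: Request): Promise<Response> {
  const rows = await fetchPopularProducts();
  return new Response(JSON.stringify(rows), {
    headers: {
      "content-type": "application/json",
      // Let the edge cache reuse this response for an hour (assumed TTL).
      "cache-control": "public, max-age=3600",
    },
  });
}

// Route the function under a dedicated path, as in the talk's config object.
export const config = { path: "/getPopularProducts" };
```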

The Edgiest of the Edge

I want to mention a product that is part of the edge, but not in the way that we usually think of it. The main assumption in everything we’ve talked about so far is that there’s always reliable internet access. What if that isn’t the case? A lot of the world still has intermittent or non-existent internet. Where can the edge come in to help serve these areas? That brings me to AWS Snowball Edge. This one blew my mind when I first heard about it. I heard about it from a colleague who works at AWS; he hadn’t heard about it himself until he worked there. This is an edge offering from Amazon as a way of making cloud computing available in places that have unreliable or non-existent internet, or as a way of migrating data to the cloud if bandwidth is an issue, or if connecting to the internet reliably for a long period of time is a challenge. Locations that would benefit from using this are ships, windmills, and remote factories, which is just wild to me. On the right-hand side is a picture of what Snowball Edge looks like; if you’ve seen a computer in the ’90s, it looks like a computer in the ’90s. You order it online, it gets mailed to you, and you can run Lambda functions on it. It sounds a little bit like on-prem, but it is a way of making cloud computing available to places that just can’t reliably connect to the internet. I really want to highlight this because, given there are a lot of places in the world where reliable internet access doesn’t exist or just isn’t available, we may need to consider, depending on the products we’re building, delivering them similar to how Snowball Edge delivers its edge and cloud computing to these areas: so on the edge that they’re literally on our users’ doorsteps.

Limitations of the Edge

We’ve talked a lot about where the edge can help your site’s performance, in terms of web application functionality or serving data closer to our users. It all sounds nice, but what are the limitations? I think they’re not too bad, all things considered, given the benefits of using the edge. There’s lower CPU time on these Edge Functions, which means that you can do fewer operations within that function compared to an origin function. Generally, it’s not too bad, but it varies wildly depending on the vendor that you’re using. The advantage of the edge’s physical proximity to users can be lost when a network request is made from there to an origin server. You ideally want to reduce the need to make origin server requests as much as possible, because the origin could be very distant, and that has the potential to negate the benefits of performing the operation on the edge in the first place. There’s also limited integration with other cloud services. Let’s say for a second that you’re using an AWS Lambda, you’re using some native integration with another AWS service, and you want that same thing to exist on the edge. AWS only has maybe a dozen or so of their general service offerings that can integrate closely with the edge, whereas there are hundreds of integration possibilities between a standard AWS Lambda and their other services. That might be a reason for you not to move some functionality onto the edge. It can also be very expensive to run. If cost is a concern for you, or you don’t want to make your chief financial officer too angry at you, choose what you’re moving to the edge wisely. In most sites, there are small pieces of functionality that would get a bit of a performance boost and not cost too much to move to the edge. Moving your entire site and all its functionality might be a little too expensive for where you’re at in your company’s life.


Wrap-Up

We’ve covered a lot on what edge capabilities are out there that can help boost our website performance. We’ve looked at a simple use case of serving localized content that went from being served by the origin server to being served on the CDN and at the edge using Edge Functions. We’ve also taken a look at speeding up our user authorization flow by moving a common operation, validating user sessions, to the edge rather than handling it at a distant origin server. We’ve also looked at a future where data is hosted closer to our users distributed all over the globe, either through something like Cloudflare Durable Objects, which mean you no longer have to worry about which region to host the data in, or AWS Snowball Edge, where storage and compute capabilities can literally be shipped to your doorstep and can operate in areas with non-existent or unreliable internet. While we’ve looked at some of their limitations, we can also see some of the use cases where we should feel comfortable handling requests at the edge as the preferred default, rather than at a potentially very distant origin server.

Entering an Edge-First Future

What you can do in the near future at your place of work is investigate some of your high-traffic functions and functionality and see if they can become Edge Functions. Some other use cases where Edge Functions can be used to great effect are setting cookies and setting custom headers: things that are really small and focused and aren’t too tangled up in the rest of the site’s architecture. You can maybe get a very quick, easy performance boost with minimal changes on your part, just by changing where that code runs. Also take a look at some of the user locations with the highest latencies in your metrics, and try handling some of the more popular requests at the edge rather than at the origin server. You might end up seeing higher rates of user satisfaction as a result of changing where some of these more popular requests are handled. All that is to say, we’re beginning to enter an edge-first future. For me, this is so exciting, because the more requests and data that you can serve closer to your users, the better the experience of your site will be, regardless of where the user is based in the world.
