Presentation: Borderless Cloud: Designing, Implementing, and Securing Apps Across Multiple Clouds

Adora Nwodo

Transcript
Nwodo: I’m going to be talking about borderless cloud, basically how to design, implement, and secure your apps across different clouds, multiple clouds.
I’m going to tell you a short story. Once upon a time there was a global company called Moota, a company that offered translation services for different languages all across the world. They had millions of daily active users, from students trying to decipher their homework to international corporations bridging communication gaps. They ran everything on a single cloud platform. This was a reliable solution at first, but eventually some cracks began to appear, and the first issue that they had was a lack of redundancy. I know that this doesn’t really happen a lot, but bear with me for this story.
One summer a heatwave ravaged the data center that Moota’s servers were in, and that obviously destroyed a bunch of things for them, and their translations had to halt. Their users were frustrated, businesses were stalled, there was downtime, and obviously they lost revenue as a result of this. They also had to figure out a way to fix their damaged reputation. They desperately needed a backup plan. Their costs were also spiraling. Their single cloud provider offered a one-size-fits-all solution and it wasn’t the most cost-effective. They were paying a premium for features they didn’t necessarily need for all their workloads.
The surge in user traffic pushed them into expensive higher tiers, and obviously that was affecting their profit margins and squeezing them even though they were making revenue. They also had regulatory compliance issues, because as they expanded into new markets with strict data privacy laws, they had to figure out how to navigate compliance in each of those markets, and the cloud provider they used didn’t have data centers across the different places that they wanted to launch in. Their growth was stifled because of the limited service offerings. Their cloud provider excelled in basic storage and computing but lacked advanced features like robust AI for translations or analytics.
They had a competitor called Polyglot that was offering real-time translation powered by AI, but they were stuck, and they obviously were not able to leverage the latest cloud innovations. They decided to go multi-cloud and it solved a bunch of problems for them. It solved the redundancy problem which I talked about earlier, because now they replicated their infrastructure across different cloud platforms, and a single outage wouldn’t necessarily cripple their business anymore, because you could always just route the traffic somewhere else, which makes everything easier.
They were also able to optimize their costs in some way, because the different cloud providers offer competitive pricing for specific workloads. They could leverage on-demand pricing models and spot instances. They could pay only for resources that they used. They were able to achieve the flexibility that they needed for optimizing their cloud spending. They were also able to achieve the compliance that they needed across the different regions and the new markets that they were trying to get deployed into. They also had different cloud services, so they were not locked into a particular vendor. The move to multi-cloud obviously wasn’t without challenges for them. In this talk, I’m going to show you how they did it and how you can do it too if you ever want to do it.
Professional Background
My name is Adora. I’m a Senior Platform Engineer currently working at SurrealDB. I’m a non-profit founder. I’m the founder of NexaScale, a non-profit helping people and connecting them to simulated work experiences. I used to work at Microsoft. I spent 4-plus years there right before joining SurrealDB. I’ve won different awards across different countries. I’m the author of three cloud engineering books. I’ve spoken at 150-plus conferences. I’m a DJ on the weekends, and in December. I’m also a content creator featured across major media. I’m very interested in tech, education, and music.
Challenges When Adopting Multi-Cloud
I talked about some of the benefits of multi-cloud when I was telling Moota’s story. Those benefits include regulatory compliance, depending on the regions that you’re deployed in; enhanced redundancy and reduced downtime; getting the best of each of the different clouds that you use; and avoiding vendor lock-in, because now you’re not forced to use everything a particular vendor offers just because you are tethered to that vendor.
The truth is, there are challenges when you adopt multi-cloud, and those challenges are around architecture, CI/CD, and security as well. One of the architectural challenges when you adopt multi-cloud is increased complexity. When you have one cloud provider it’s relatively easy, although even then, if you don’t do things the right way, things might be tightly coupled and it might be very hard to figure out how you should use that provider in a way that makes sense. When you have two, it becomes a bit more complicated. When you have three, more complicated. When you have more than three, let’s say five different providers that you’re trying to use, it’s just scary.
You don’t even want to know what that looks like. When you’re thinking of architecture, there’s also the vendor lock-in challenge, which is quite interesting, because vendor lock-in is supposed to be something that you’re running away from when moving into multi-cloud. The truth is, over time, an organization could actually become overly dependent on a specific service that one cloud provider is offering, and they get locked into that service, and into that provider, as well. It’s always important to do your architecture in such a way that that doesn’t become a problem. There’s also the data portability issue as well.
If you have to move data between different cloud providers, it can be cumbersome and sometimes annoying if the architectural foundation of that service wasn’t laid properly. When we think about GDPR’s right to data portability as well, it can clash with multi-cloud environments. That’s because there are data silos across different clouds. You could have technical challenges with data extraction and transfer. Cloud providers have different ways that they interpret different things. Data portability presupposes that your data is readily available and transferable.
But with multi-cloud, your data might be scattered across different cloud providers, and each provider has its own data storage formats and its own access control mechanisms as well. If you want to actually gather and deliver data in a structured format while abstracting the fact that you have a multi-cloud architecture from your customers, it could get a bit complicated if you don’t lay the foundations right. There are also the challenges of technical data extraction and transfer, like I talked about before.
Even if you can locate your data and extract it from the different providers’ systems, you might have to convert it to a usable format for another provider, which can be a technical hurdle as well, and is something to think about. Like I said, the right to data portability is part of the GDPR, which is an EU thing, but not every provider operates only in the EU, so different cloud providers may interpret that regulation differently, and that could also be a problem.
Like I said, there are also challenges with security. First of all, monitoring security across one environment can be chaotic enough. When you think of multi-cloud, where you have several other cloud providers, it gets more complicated. That’s also something to think about. There are compliance challenges to deal with as well, because if you have to be compliant, let’s say you have to do SOC 2 compliance, or GDPR, or all the other kinds of compliance that you normally have to do, and you’re doing it for one cloud service across different regions, that’s one thing to think about.
Then, when you now have to do it across multiple regions for multiple cloud services, that’s another layer of complexity added to what you have to deal with. In programming or tech generally, the more data centers you are deployed to, the more VMs that are running your services, and the more accessible your service is, the easier it is for bad actors to gain access to your system, because you have an increased attack surface.
Basically, there are more entry points for them to try and get access to you. That’s also something that you should think about. The final challenge when you try to adopt multi-cloud, in the context of CI/CD, is: how do we integrate our tooling to be able to deploy to these multiple clouds in a way that doesn’t affect developer experience on our team? How do we do version control? How do we make sure that these things are consistent? How do we test our application across these different clouds before we deploy to these different environments? Because the fact that something runs perfectly on Azure doesn’t necessarily mean that if I deploy that same thing to GCP it will run as I expect it to.
When we’re thinking about doing DevOps and CI/CD, and we’re creating environments that mirror production so that we can test a bunch of things before we get into production, this is something that we have to think about when we’re now also dealing with multiple providers as well.
Fixing Issues – Part A: Architecture
There are many ways to fix these issues. I’m going to start with part 1, which is the architecture. As we can see here, imagine we had a multi-cloud scenario where we had different user requests coming in and some kind of geo-routing mechanism that routes the requests coming from U.S. regions to AWS and routes the EU regions to Azure. We have to think of a way to create that standardization. In the context of architecture, the first thing that we can do is use cloud-agnostic IaC. We can use tools like Terraform, we can use tools like Pulumi, to define our infrastructure configuration as code.
The great thing about these tools is that they support the major providers that we use every day. We don’t have to use something like Azure Bicep, which is Azure’s way of doing infrastructure automation nowadays, or Google’s Deployment Manager, or AWS CloudFormation. Imagine having to manage all these different things at the same time; it could get chaotic. If you are dealing with Terraform, one Terraform project is enough. If you are dealing with Pulumi, one Pulumi project is enough. Then you will be able to configure your AWS resources, your Azure resources, and the resources for the other clouds that you want to use as well.
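To make that concrete, here is a minimal, hedged sketch of a single Pulumi program in C# that provisions resources on two clouds at once. The resource names, regions, and SKUs are illustrative only, not taken from the talk.

```csharp
// Hypothetical sketch: one Pulumi (C#) program managing resources on two clouds.
using System.Collections.Generic;
using Pulumi;
using Aws = Pulumi.Aws;
using AzureNative = Pulumi.AzureNative;

return await Deployment.RunAsync(() =>
{
    // AWS side: an S3 bucket for the U.S. workloads.
    var usAssets = new Aws.S3.Bucket("moota-us-assets");

    // Azure side: a resource group and storage account for the EU workloads.
    var euGroup = new AzureNative.Resources.ResourceGroup("moota-eu-rg");
    var euStorage = new AzureNative.Storage.StorageAccount("mootaeu", new()
    {
        ResourceGroupName = euGroup.Name,
        Sku = new AzureNative.Storage.Inputs.SkuArgs
        {
            Name = AzureNative.Storage.SkuName.Standard_LRS,
        },
        Kind = AzureNative.Storage.Kind.StorageV2,
        EnableHttpsTrafficOnly = true, // ties into the policy-as-code discussion later
    });

    // Stack outputs that other tooling (CI/CD, app config) can consume.
    return new Dictionary<string, object?>
    {
        ["usBucket"] = usAssets.Id,
        ["euStorageAccount"] = euStorage.Name,
    };
});
```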
You also want to use cloud-agnostic tools and cloud-agnostic libraries, so programming languages that you know can be deployed across different clouds. I don’t know many programming languages nowadays that are not cloud-agnostic, but just in case there is one, you probably want to avoid it. You also want to use a tool like Knative for your serverless workloads, as opposed to maybe using something like Azure Functions or AWS Lambda. If you are running a serverless application, you probably want to use Knative so that you can run those workloads on Kubernetes and deploy them across your different clouds.
You want to use queuing systems that are cloud-agnostic as well, that you can integrate anywhere, so things like Kafka, RabbitMQ. You want to make sure that you have an open-ended architecture basically, with a bunch of cloud-agnostic tools that you are able to bring in and plug and play whenever you want. You can also use containers to run your application so that you can run those apps across different clouds regardless of who is hosting them. You want to use databases like Surreal, MongoDB, Postgres as well that, like I said, don’t belong to a particular cloud, so you are not restricted to that.
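As a small sketch of that plug-and-play idea, here is what publishing an event with the Confluent.Kafka client in C# could look like; the broker address, topic, and payload are made up for illustration.

```csharp
// Hypothetical sketch: producing an event to Kafka, which works the same way
// regardless of which cloud hosts your brokers.
using Confluent.Kafka;

var config = new ProducerConfig { BootstrapServers = "kafka.internal.example.com:9092" };

using var producer = new ProducerBuilder<string, string>(config).Build();

var result = await producer.ProduceAsync("translation-requests", new Message<string, string>
{
    Key = "user-123",
    Value = "{\"sourceLang\":\"en\",\"targetLang\":\"fr\",\"text\":\"Hello\"}",
});

Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
```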
You also want to implement a central API gateway to manage that external access to your application’s functionality. There are different things that you could use to route traffic, whether it’s a geo-routing mechanism, load balancing, or a traffic manager; there are different mechanisms for doing that. You also want to make sure that that API gateway routes to either your U.S. regions or your EU regions, depending on where the customer calling your application is at that time.
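In practice most teams would do this with a managed gateway or DNS-level geo-routing, but as a rough sketch of the idea, here is a tiny ASP.NET Core gateway that forwards requests to a regional deployment; the hostnames and the region header are hypothetical.

```csharp
// Conceptual sketch only: a naive geo-routing gateway. A real setup would use
// edge routing or a managed gateway rather than a hand-rolled proxy like this.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();
var app = builder.Build();

// Region -> regional deployment (hypothetical URLs).
var backends = new Dictionary<string, string>
{
    ["US"] = "https://us.api.example.com", // runs on AWS
    ["EU"] = "https://eu.api.example.com", // runs on Azure
};

// Forward GET requests to the backend for the caller's region.
app.MapGet("/{**path}", async (string path, HttpContext ctx, IHttpClientFactory http) =>
{
    var region = ctx.Request.Headers["X-User-Region"].FirstOrDefault() ?? "US";
    var baseUrl = backends.GetValueOrDefault(region, backends["US"]);

    var upstream = await http.CreateClient().GetAsync($"{baseUrl}/{path}{ctx.Request.QueryString}");
    return Results.Stream(await upstream.Content.ReadAsStreamAsync(),
        upstream.Content.Headers.ContentType?.ToString());
});

app.Run();
```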
You could also think about leveraging microservices here, just so that you can break down your application into independent reusable services that perform a particular function. When you have microservices, then you could always just deploy things in containers and run those workloads across different clouds, and you would be good as well. You could also consider architectures like the event-driven architecture. Because when you’re doing service-to-service communication, services aren’t directly communicating with each other. There is a queue somewhere.
There is a Pub/Sub mechanism, something somewhere that handles those events, and all the different services are just subscribed to those different things and can listen and pick up what they need in order to get done what needs to be done. It helps when you have different services, and if you are trying to do asynchronous programming as well, because not everything has to be real-time. One thing I want to note is that these different options I’m mentioning are different things that you can try. You can pick up some of them, and which ones matter will depend on your use case and the kind of application that you’re building. You could also try to use the hexagonal architecture.
This is something that people also call the ports and adapters architecture. It promotes flexibility and testability because it decouples the core business logic from the external dependencies, so the application itself would have a port and adapters can fit into that port. It creates that kind of abstraction where, let’s say, I am supposed to be dealing with a database-like structure, but I don’t know what the external dependencies are. I just create a read and write method, and whoever is supposed to give me the functionality provides the functionality for me to read or write without me thinking about whether it is DynamoDB for AWS or Cosmos DB for Azure, or all these kinds of things.
The different hexagons in this diagram show different services using the hexagonal architecture that just fit together because their ports and adapters basically match. Hexagonal architecture can be beneficial for multi-cloud for a couple of reasons. The first one is decoupling. Like I said, it separates the core application logic from the infrastructure, so your application doesn’t really care where the data is coming from, how it is processed, or where it’s stored. You can implement different adapters for the different cloud providers that you have. It’s just plug and play, essentially.
There’s also the loose coupling thing which is the fact that when you use ports and adapters, your core logic remains independent of this specific cloud provider, going back to what I talked about before. It creates a way for you to do cloud-agnostic design, cloud-agnostic architecture if you are maybe drawing your architectural diagrams or thinking of a way to build out that system.
You are not forced to build your application in such a way that it is very Azure dependent or AWS dependent, because you are using ports and adapters, and whichever provider is giving you whatever you need, you’ve created that contract for the way your application is supposed to interact with those providers. You’ve made it in a way that is cloud-agnostic. This obviously is a modular design, so it makes it easy for you to maintain and update your different configurations and components within your application.
Now let’s talk about the data management strategy as well. It’s also important to design your data layer to be cloud-agnostic. This code is an example. Let’s say I have an application that is supposed to write to a database, but because I’m using multi-cloud I have different kinds of databases, and let’s say that for the different regions I operate in, I’m not supposed to cross-pollinate data for whatever reason. I’m using Azure in the EU, I’m using AWS in the U.S., I’m using GCP in Asia, or whatever, so I don’t even have to think about data replication mechanisms across clouds because I don’t need them.
One thing I need to do is make sure that I’m able to provide the functionality for my different applications to read from whatever database, depending on where the user’s call is coming from. I can create a data access interface and have my different providers implement that interface. For Azure I could have a Cosmos DB implementation, and I can implement all my get users, get user by ID, save user, all the different functions that I need. Let’s say I use Postgres in the U.S., for example: I can do the same thing for Postgres.
As for how you know which database to call: depending on how you deploy the application, whether you deploy it through Kubernetes or as a regular web API, you can have configurations that your application runs with. In one of those configurations, you could set that the cloud provider is Azure, or AWS, or GCP, or Cloudflare, or whatever the provider is.
Now, in the startup of your code, and this is C# code, so depending on whatever programming language you’re using it will differ a little bit, but the concept is still pretty much the same. When you are doing all your dependency injection and thinking about what service to add to your DI container, you can see that I’ve said that if the cloud provider is Azure, then I want to add the Cosmos DB user data adapter as the implementation of that interface.
That’s what I’m going to use throughout every instance for that application. If it’s not Azure then I want to use Postgres instead. This is a way to just make sure that you have a dynamic data management strategy as well. You have different implementations because you have different providers, and you’ve written your code in such a way that your data layer is not tightly coupled to any data provider.
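Here is a hedged sketch of the kind of code being described, assuming ASP.NET Core’s built-in dependency injection. The interface, adapter, and configuration key names are illustrative, not the actual names from the talk’s slides, and the adapter bodies are stubbed out.

```csharp
// Program.cs: pick the data adapter from configuration at startup.
var builder = WebApplication.CreateBuilder(args);
var cloudProvider = builder.Configuration["CloudProvider"]; // e.g. "Azure", "AWS", "GCP"

if (cloudProvider == "Azure")
    builder.Services.AddSingleton<IUserDataAccess, CosmosDbUserDataAdapter>();
else
    builder.Services.AddSingleton<IUserDataAccess, PostgresUserDataAdapter>();

var app = builder.Build();
app.Run();

// The cloud-agnostic "port" the rest of the application codes against.
public interface IUserDataAccess
{
    Task<User?> GetUserByIdAsync(string id);
    Task SaveUserAsync(User user);
}

public record User(string Id, string Name);

// Adapter used where Azure hosts the data (Cosmos DB behind the scenes).
public class CosmosDbUserDataAdapter : IUserDataAccess
{
    public Task<User?> GetUserByIdAsync(string id) =>
        throw new NotImplementedException("Query Cosmos DB here.");
    public Task SaveUserAsync(User user) =>
        throw new NotImplementedException("Upsert into Cosmos DB here.");
}

// Adapter used where AWS hosts the data (Postgres behind the scenes).
public class PostgresUserDataAdapter : IUserDataAccess
{
    public Task<User?> GetUserByIdAsync(string id) =>
        throw new NotImplementedException("Query Postgres here.");
    public Task SaveUserAsync(User user) =>
        throw new NotImplementedException("Insert into Postgres here.");
}
```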
Part B: Security
The next thing is security. It’s important to have centralized logging and monitoring. You want to make sure that you have security information and event management (SIEM) systems so that you can aggregate logs and security events from all your different providers, and monitor them. You also want to make sure that you use cloud native security tools. As much as it’s important to use cloud-agnostic tools, I would always advocate for using cloud native security tools, even though it requires that you do a lot more work. The reason for that is that these tools have been designed to work seamlessly with their particular platform.
Obviously, Azure security tools will work great on Azure, as opposed to a third-party tool from somewhere else. Same thing with the AWS security tools as well. You want to have those things also integrated into your systems. You also want to make sure that your application’s network traffic is segmented, and you’ve isolated sensitive components so that there is no lateral movement by malicious actors, like I said. Because you now have different places that your application is being deployed to, there’s a possibility that you will have a larger attack surface. You want to make sure that you have VNets and subnets, and you’ve done your network segmentation in a way that it’s not going to be easy for traffic to hop from one provider to another, because sometimes that traffic could be malicious traffic.
Obviously, it’s important for your security team to follow security best practices for network configurations generally. I would always advocate for policy as code, because I’m an infrastructure engineer; right now, I’m a platform engineer. Policy as code helps you have consistent security policies, because you can now define your security policies as code, even things like naming conventions or anything you want to enforce within your application, and you get automated enforcement as well. You want to make sure that you’ve enforced certain things, and you can easily audit things.
For policy as code, let me just give a random example: let’s say you want your Azure storage accounts to only permit HTTPS traffic. With policy as code, when you are doing your infrastructure declarations, you’ve set it up in a way that before your infrastructure gets to the point where it’s being provisioned, it passes through the policy engine. The policy engine checks that everything is right with whatever infrastructure you’ve created before it goes ahead to do the whole deployment. Maybe engineers on the team have worked with the security team, and they’ve said, we want to make sure that storage accounts on Azure only permit HTTPS traffic.
If I create a new storage account in my infrastructure as code, and I accidentally forget to enable HTTPS-only traffic, my infrastructure will not get created. That’s a way to make sure that when the security team gives the platform team security requirements, the policies they have to enforce, you’ve written code that can automate the enforcement of those things for you, as opposed to going and manually checking these things yourself.
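As a purely conceptual sketch of that gate, in C#: a real policy engine (Azure Policy, Open Policy Agent, Pulumi CrossGuard, and so on) would evaluate rules like this against the planned resources before anything is provisioned. The types and the sample data here are made up.

```csharp
// Conceptual sketch only, not a real policy engine. The idea: evaluate the planned
// infrastructure against the agreed rules and fail CI before anything is provisioned.
var plan = new[]
{
    new PlannedStorageAccount("mootaeulogs", HttpsOnly: true),
    new PlannedStorageAccount("mootaeudata", HttpsOnly: false), // this one gets blocked
};

var violations = plan
    .Where(a => !a.HttpsOnly)
    .Select(a => $"Violation: storage account '{a.Name}' must permit HTTPS traffic only.")
    .ToList();

foreach (var violation in violations)
    Console.Error.WriteLine(violation);

// A non-zero exit code fails the pipeline, so the deployment never runs.
return violations.Count == 0 ? 0 : 1;

// A planned resource, as the policy check would see it.
record PlannedStorageAccount(string Name, bool HttpsOnly);
```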
For general security as well, you want to have security automation. You want to make sure that you train people on your team about these things. You want to run regular security audits as well so that you can be proactive as well as reactive to security incidents when they happen.
Part C: CI/CD
The final part is the CI/CD part. You also want to make sure that you’re using standard CI/CD tools. You want to choose a CI/CD tool that supports multi-cloud deployments, because you don’t want to have different CI/CD pipelines to monitor, or different providers for doing CI/CD; the goal is to move to multi-cloud in a way that makes it easy for everyone else on your team. You want to use things like Jenkins. I know that with Azure DevOps, by integrating multi-cloud extensions into Azure DevOps, you can actually deploy to other providers. Same thing with AWS as well. You want to have these things in your CI/CD tool so that it’s easy for you to do these deployments.
For your CI/CD, again, you want to make sure that you’re using cloud-agnostic infrastructure as code tools. I’ve said this before, and I’m saying it again. It’s better to use things like Terraform or Pulumi as opposed to using AWS CDK and Azure Bicep, because managing a separate tool per cloud is just harder. Another important reason why I would say use cloud-agnostic tools is that you’d only have to train your team once, essentially. If you train them on how to use Terraform, or you train them on how to use Pulumi, everyone is fine, as opposed to training them on how to use CDK, then training them on how to use the Google version, and then training them on how to use Azure Bicep.
You are wasting engineering effort as you do that. This makes sense from a technical point of view, but also from a team-optimization point of view, especially if you’re a manager. Containerize your applications as well. You want to use container technologies like Docker. I don’t know if there’s any other one, because it’s only Docker that I’ve known all my career, so that you can have a standard way to package your application and its dependencies and deploy it across the different clouds that you’re using.
I was talking about CI/CD pipelines just now. This is a sample CI/CD pipeline. This is an Azure DevOps pipeline, actually, that you can use to deploy to Azure and deploy to AWS. What’s happening here is that in the first image, it’s installing all the npm dependencies, building the application, and publishing the artifacts. It’s a function, supposed to be a TypeScript function. It’s deploying that function to AWS Lambda and then it’s going to do the same thing on Azure Functions as well.
For pipelines, there are different things to consider. Do you want to have environment-specific pipelines? Do you want to separate your CI/CD pipelines for different environments? Maybe your development environment, your staging environment, and if you have a pre-production or testing-in-production environment; people call it different things, testing in production, integration, depending on the company that you’re at. If you want to split your pipelines into environment-specific pipelines so that you can manage those things, I would always advocate for that, because it allows you to actually control your deployments.
It minimizes the risk of introducing bugs into production because you have it split and it’s easier to manage. You also want to use cloud provider-specific tools. Whatever tool you end up using, you want to make sure that the specific tools, features, and integrations for your cloud providers are available in your chosen CI/CD tool, so that you have the functionality to do your deployments in a better way and have a better developer experience generally. You could also consider unifying your build and test phase, because that shouldn’t necessarily change.
How you build the application is how the application is supposed to be built, and the same goes for how you do unit testing, so this should be about unifying your build and unit test phase. How you do that unit testing, where you’re just testing the different components in your application as individual units, and how you build the app, don’t necessarily change regardless of the platform that you’re on, until you have to start deploying to different platforms and running end-to-end tests or other kinds of tests before you do your deployment. You want to think of a way to unify that, and then handle the other multi-cloud things separately, so that you have fewer things to worry about.
This is my recommendation, and you could think about things differently if you’d like, but I would always recommend a hybrid pipeline because it reduces complexity, so you can easily set up and manage your pipeline at the very beginning if you go the hybrid way. It’s flexible, so you can customize and tailor your deployments with phased rollouts and manage that entire process. It also offers easier maintenance compared to a complex single pipeline, while enabling some code reuse through shared build stages. This is an example of what a hybrid pipeline looks like. Let’s say I’ve done my deploy to Azure in a different YAML file. I’ve created that template.
Then I have a deploy to AWS as a different file as well. I’ve created that template. This is my main template. Now I can have the jobs that deploy to AWS in development, the jobs that deploy to AWS in staging, Azure to dev, Azure to staging, and a lot more. I can break it down further if I feel like I also want to split it more, maybe doing all my dev things only in a dev place and having those templates only address those things.
Typically, this is what it would look like: your build and test phase is the initial stage, like I said when I was recommending that your build and test phase should be unified, and then it branches out depending on the cloud providers that you are deploying to. You can manage each step across the different branches.
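Here is a hedged sketch of what that hybrid Azure DevOps pipeline could look like: one shared build-and-test stage, then per-environment stages that pull in per-cloud job templates. The file names, stage names, and commands are illustrative, not the actual YAML from the talk.

```yaml
# Hypothetical azure-pipelines.yml for the hybrid layout described above.
trigger:
  - main

pool:
  vmImage: ubuntu-latest

stages:
  - stage: BuildAndTest            # unified build and unit test phase
    jobs:
      - job: Build
        steps:
          - script: npm ci && npm run build && npm test
            displayName: Build and run unit tests
          - publish: dist
            artifact: app

  - stage: DeployDev
    dependsOn: BuildAndTest
    jobs:
      - template: templates/deploy-aws.yml     # jobs that deploy to AWS
        parameters:
          environment: dev
      - template: templates/deploy-azure.yml   # jobs that deploy to Azure
        parameters:
          environment: dev

  - stage: DeployStaging
    dependsOn: DeployDev
    jobs:
      - template: templates/deploy-aws.yml
        parameters:
          environment: staging
      - template: templates/deploy-azure.yml
        parameters:
          environment: staging
```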
The Phases for Multi-Cloud Migration
Finally, I’m just going to be ending with the phases for multi-cloud migration because I’ve talked about what you need to do, but I should also maybe talk about how you should do it. There are four things here. You should plan, prototype, pilot, and then go to production.
When you’re planning, the first thing you need to do is define your goals. What are your objectives for adopting this multi-cloud strategy? What business problems are you trying to solve? Are you aiming for cost optimization? Are you aiming for redundancy? Do you want to tap into new markets, and are you thinking of a way to do that because you know that some providers are a better fit in some regions? What exactly is your goal? You want to evaluate the different cloud providers that fit the goals you’ve set and consider the factors that go into deciding which cloud providers you want to choose.
Then, based on what we talked about earlier, you want to do the whole architecture part of that. Design your application architecture, come up with a well-defined architecture for your service, while thinking about security as well, because that’s very important. Once you’ve done that plan, what follows is a proof of concept: a small-scale prototype for that multi-cloud deployment, so that you can test the new architecture that you have, the deployment methods that you’ve chosen, and how you plan to simulate security monitoring and a bunch of other things as well. A small proof of concept that you can test and just understand what needs to be done.
Then, when you are prototyping, you will obviously encounter challenges. Most times things are not perfect at the first step, which is fine. This is the time for you to use the opportunity to identify the potential issues with performance, integration, security, whatever issues you have, and address them before you roll out to wider audiences, or to everybody in fact. Then, based on that, you want to refine your plans and document your learnings from the prototype, because you will need to.
Sometimes it might mean that you want to adjust your cloud provider choices. It might mean that you want to change your architecture. It might mean that you want to think of a new strategy for doing security. It might mean that your CI/CD is not as robust as you thought it was and you want to try something else. What you identify, refine, and go from there.
Then the next thing is piloting. You want to do some limited deployment. You want to deploy a pilot project to some users within a specific area and test your multi-cloud setup, beyond the simulated thing that we did before in the proof of concept; test it in a more realistic setting and figure out what happens.
You might run into issues again, but at least this time, because you are not doing it on a large scale that you can’t control, you are able to minimize whatever disruption happens. You also want to monitor performance. You want to check metrics like the latency and the reliability. When I deploy to a new cloud service, if I have 1,000 API calls in 10 minutes, in terms of success or reliability, what’s the 90th percentile, what’s the 95th percentile? How many of these API calls actually succeed? How many of them actually fail? Maybe the SLA that you’ve promised your customers, in the grand scheme of things, is two nines, so you’re supposed to be 99% reliable.
Somehow, when you are testing for performance, you realize that 50% of your API calls are failing because maybe there are some things you’ve not set up correctly. This is a chance for you to fix that. Gather feedback from the users as well that are involved in that pilot. Whether it’s your Canary customers or some private preview users that you have or different kinds of things, you want to get that feedback from them because this rollout obviously is a phased rollout. You don’t just wake up and say you want to go multi-cloud.
It might take a very long time, months to probably even a year or more, depending on your staff strength, the scale you are going for, and things like that. You also want to collect feedback from them and improve on that before you say, I want to be generally available for everybody.
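Going back to the performance numbers mentioned above, here is a rough sketch in C# of the kind of summary you might compute during a pilot; the sample data and thresholds are illustrative.

```csharp
// Conceptual sketch: summarizing pilot traffic, e.g. 1,000 API calls over 10 minutes.
// With a "two nines" (99%) target, at most 10 of 1,000 calls may fail.
var calls = Enumerable.Range(0, 1_000)
    .Select(i => (Succeeded: i % 100 != 0, LatencyMs: 80.0 + i % 40)) // made-up sample data
    .ToList();

var successRate = calls.Count(c => c.Succeeded) / (double)calls.Count;

var latencies = calls.Select(c => c.LatencyMs).OrderBy(x => x).ToArray();
double Percentile(double p) =>
    latencies[Math.Min(latencies.Length - 1, (int)Math.Ceiling(p * latencies.Length) - 1)];

Console.WriteLine($"Success rate: {successRate:P1}"); // target: at least 99.0%
Console.WriteLine($"p90 latency: {Percentile(0.90)} ms");
Console.WriteLine($"p95 latency: {Percentile(0.95)} ms");
```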
Then, that’s when you get to the production stage. Now, you are still doing your phased rollout, but you are gradually migrating your workloads to a bigger audience, because you obviously can’t do everything in a day. You still do it in phases, but now with the goal of hitting that larger or global market like you’ve planned. You want to make sure that you consistently and continuously monitor your multi-cloud environment for performance, for security, for availability, and for cost optimization opportunities as well. Because you don’t want to start doing multi-cloud and then realize that you could have probably just stayed single cloud because you didn’t get the cost optimization you were hoping for. You want to make sure that happens.
Then, you want to improve things as you go, because nothing is perfect the first time. I think if things were perfect the first time, maybe we wouldn’t need the app store, because we’d just go to the website once, download the original version of the app, and that would be it. There’s always room for improvement. It’s something to think about. Having documentation on the team for new engineers that join, so they know how things are running. Having clear processes laid out. These are things that happen in these improvement steps as well. Documenting things, learning things, and just further refining things as you go is something that is very important.
When you follow these 4Ps, and you adopt a methodical approach, you can increase your chances of successfully integrating and managing a multi-cloud environment that meets the needs of your business. I just want to leave you with this final thing, which is that a well-planned and well-executed multi-cloud strategy can unlock benefits for you such as increased agility, stability, redundancy, and cost efficiency for your teams and for your organization.
Questions and Answers
Participant: Your background is in infrastructure as code, in Pulumi, in Microsoft, and platform engineering. From that time, what do you think is the hardest part when somebody is considering going through this journey to multi-cloud? Are there any aspects around, whether policy, security, identity?
Nwodo: I think for me, the hardest part is actually the security part, especially when you’re dealing with identity and access management, and you have to do things like JIT access, for example. JIT access means just-in-time access: I want to make sure that not everybody has access to production resources, or any resources at all. They get access when they need it, for the specific thing they need it for, and for a stipulated time. When you have to do this across different cloud providers, it means you have to manage different kinds of permissions. You need to write different kinds of rules. It gets very complicated.
If you don’t manage it properly, it could be problematic as well. The thing for me that would probably be the hardest is the security side. This is just my opinion, because at the end of the day, somebody else might have a different experience. That might be because the part of security that I engage in mostly is the compliance side of things. Maybe that’s why I would find it the most challenging. Somebody else, for them, it could be the fact that they have to think about managing different branches in their CI/CD pipeline.
For me, it’s the security thing mostly, because you have to think of all these things, like managing access. Because, like I said, it’s important to have the cloud provider’s specific security services, because they were built for that platform, so it’s better. When you have to manage different kinds of security services for different kinds of providers, it gets challenging.