Article originally posted on InfoQ.
Transcript
Thomas Betts: Hello and welcome to the InfoQ podcast. I’m Thomas Betts, lead editor for architecture and design at InfoQ, and an application architect at Blackbaud. Let’s say you’ve committed to the journey of running in the cloud; however, you soon realize it’s not quite as simple as moving your code or writing a new application on a cloud offering. Between a diverse service catalog, greater developer autonomy, rapid provisioning, surprise billing and changing security requirements, you might find that hosting applications on the cloud does not make the most of the cloud.
Today, I’m joined by Rosemary Wang to talk about the patterns and practices that help you move from cloud-hosted to cloud-native architecture and maximize the benefit and use of the cloud. We’ll talk about the essential application and infrastructure considerations as well as cost and security concerns you need to think about when approaching a cloud-native architecture.
As the author of Infrastructure as Code, Patterns and Practices, Rosemary works to bridge the technical and cultural barriers between infrastructure, security and application development. She has a fascination for solving intractable problems as a contributor, public speaker, writer and advocate of open source infrastructure tools. When she’s not drawing on whiteboards, Rosemary debugs stacks of various infrastructure systems on her laptop while watering her house plants. Rosemary, welcome to the InfoQ podcast.
Rosemary Wang: Thank you for having me.
Cloud-Hosted vs. Cloud-Native [01:28]
Thomas Betts: So you spoke at QCon London and your talk was titled From Cloud-Hosted to Cloud-Native. Can you start us off by explaining what the difference is between those two terms and what does it mean to make the transition from cloud-hosted to cloud-native?
Rosemary Wang: Sure. So most applications that run on cloud are cloud-hosted. You host them on the cloud, and they’re able to run on any of the cloud providers, so long as you’ve configured them to do so. Whether that means interacting with cloud services, databases that run there, or databases on-prem, there’s a lot of variety in what goes into a cloud-hosted application and what’s important about it. But not many applications are cloud-native. And the key differentiation between an application that’s cloud-hosted and one that’s cloud-native is whether or not that application can scale in a very dynamic environment.
And I think that there is a very broad definition to what dynamic means. But in order for an application to be truly cloud-native, it has a couple of considerations that I think are important. Some of these considerations include its adaptability, because whenever you have a cloud environment, you expect it to change. You expect everything to grow. You expect it to evolve. It’s important that this application that you’re putting on the cloud is adapting to all of these changes. Then there’s observability: the application has to be observable. You need information, telemetry, about how that application is running, because that will inform decisions about how you change that application or service for the future.
Immutability is something that’s really important for cloud-native. It’s this idea that you have to create something new to implement changes. It’s not changing in place. And the reason why implementing something completely new with the changes built in becomes important in cloud-native is that it’s the only way you keep up with changes to the environment and do it in a stable, consistent manner. So it’s a lot easier to put together new components than it is to change existing ones in place.
Elasticity is a really subtle consideration. Most of the time, we think about cloud being scalable. Can I increase all of my application instances to handle increased workload, increased traffic or increased requests from my customers? But elasticity is a lot more subtle than that. Elasticity is scaling down as well, and I think we fail to think about that sometimes from a cloud-native perspective.
And finally, changeability. We always want to take advantage of the latest offerings, the latest ecosystems, and it’s important that we have this ability to change not just our services but also their underlying systems.
Designing for Adaptability [03:56]
Thomas Betts: And I think that changeability goes back to your first point when you said adaptability was we want to be able to adapt to changes both from within our application and our needs, but also changes in the cloud-hosted and the cloud-native environment. Can you talk about any tips for accomplishing that good separation that what my app has to do and what the environment around me is doing and how do I design for better adaptability?
Rosemary Wang: We’ve depended heavily on abstraction in order to accomplish this, and I think it’s important in a cloud-native environment to adopt the right abstraction. There’s just enough abstraction for you to do something effectively and to be able to change your services without necessarily affecting the underlying dependencies. And so that’s why you see Kubernetes gaining a lot of traction in the cloud-native space. That’s partly why you see service mesh gaining a lot of traction as an abstraction. So we’re slowly pushing what we consider infrastructure functions, as well as coexisting with service functions into different layers of abstraction. And as we build on those, it becomes important that we maintain the abstraction, but it becomes a lot easier to maintain that abstraction while independently evolving the services that are running on top of the infrastructure as well.
You’ll also find a lot of abstractions in the open source space as well. That’s becoming more prevalent. Open standards are becoming the de facto way that people build these abstractions. They don’t have to maintain them themselves in an organization. Instead, what they’re doing is they’re relying on these open standards as a contract of sorts between services and their infrastructure dependencies.
Thomas Betts: And you said something like Kubernetes. I know there’s some approaches that say, again, I want to move to a cloud-hosted. Instead of just taking my app and deploying it, I’m going to first package it up in Kubernetes and then I’m going to put it on the cloud. Does that mean I’m cloud-native because it’s in Kubernetes?
Rosemary Wang: It’s closer. I would say it’s closer. It doesn’t necessarily fulfill all the other four requirements or considerations that I outlined. Just because you put something on Kubernetes, it doesn’t necessarily mean it’s elastic. I think that’s the one argument that I get the most. It’s like, “Oh yeah, I packaged it. I put it on Kubernetes.” But is it truly elastic? Are you taking advantage of the ability for those services to scale up? But are you also scaling them down dynamically? And that’s usually the place where it goes from cloud-hosted closer to cloud-native but still not perfectly what we would consider cloud-native. And I think that there’s a misconception here where if you manage to lift and shift an application from one runtime to another, that magically means that you’re more cloud-native, and that’s not necessarily the case either.
Testing in the Cloud [06:25]
Thomas Betts: And how does it affect our plans for testing our application? So I used to run all my tests locally, but now they’re in the cloud and I might not control everything, so I’m not running my web server on my machine and getting tested there. I need to say it’s inside this container or there’s other dependencies. What do I have to consider for making those tests work properly in the cloud?
Rosemary Wang: That’s a good question, and I get a lot of interest in local testing because no one wants to spin up all of these cloud-native components. Right now, if you wanted to test interactions between microservices, now you might have to implement a service mesh locally for some reason and test all integration tests locally with a service mesh in place. It’s a lot of overhead.
And so testing now looks a little bit different. I think that we’re getting better about either mocking some of these abstractions, whether it be if you’re interfacing with a cloud provider’s API, there are some community libraries available for you to mock those APIs and you get an understanding of how those infrastructure components will return important values and important information for your services.
The other option that’s interesting is that there’s more and more focus on implementing some of these open source tools locally. So you can use a Docker container and spin up a Redis cache if you really wanted to. You can similarly create a Kubernetes cluster on your own local machine and test your deployments that way. So I think we’re looking at testing the abstractions. And even if the abstractions are mocked or they’re able to be implemented locally, that is much better for us from a development standpoint. But overall, there are some abstractions that cannot be mocked, unfortunately, or cannot be run locally, and those will still have to rely on either the remote cloud instance or sometimes the on-prem instance.
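As a rough sketch of the local stand-ins described here, a cloud dependency can be replaced with a hand-rolled, in-memory fake that mimics just the interface the service uses. Every name below (FakeObjectStore, its methods, the bucket names) is illustrative, not any real cloud SDK:

```python
# An in-memory fake of an object-store client, standing in for a real
# cloud SDK during local integration tests.
class FakeObjectStore:
    def __init__(self):
        self._buckets = {}

    def create_bucket(self, name):
        self._buckets.setdefault(name, {})

    def put_object(self, bucket, key, body):
        self._buckets[bucket][key] = body

    def list_keys(self, bucket):
        return sorted(self._buckets[bucket])


# The service under test depends only on this small interface, so the
# fake can replace the remote cloud instance on a developer laptop.
store = FakeObjectStore()
store.create_bucket("reports")
store.put_object("reports", "q1.txt", b"revenue")
print(store.list_keys("reports"))  # -> ['q1.txt']
```

The trade-off is the one noted above: a fake like this covers the interface, but behaviors that only the real managed service exhibits still need a remote test.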
Thomas Betts: And then that’s when we’re getting into the subject of doing a full end-to-end test that’s saying that my application, I did my unit tests, but I need to make sure that it will work in this cloud-native environment because I have these other dependencies. And so you have to consider that some of those now have to be tests you have to write that you didn’t have to before.
Rosemary Wang: Exactly. There’s also, I think, merit to considering the contract test. We talk about contract tests between microservices predominantly, but we don’t necessarily talk about contract testing in the form of infrastructure because it’s complicated. Infrastructure schemas are not uniform across the board. What information you need to log into a cache is going to be different than what you log into a database and many other components. How you interface with Kafka is going to be very, very different than how you interface with Amazon’s SQS service. There’s a lot of different nuances there.
But one of the things that I’ve noticed that has worked really well with services is the ability to say, okay, I know these are my contract tests, point in time, for certain infrastructure. I’m going to run these contract tests and make sure that I understand what my application needs to do to interface with these components, and that I’m able to articulate and communicate that knowledge. And if it’s a little bit different, maybe the cloud provider’s API changes, then I’m able to accommodate those changes very dynamically because they don’t match what I’ve done in my contract tests.
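A point-in-time infrastructure contract test might be sketched like this: pin the shape of the response your application expects from an infrastructure component, and fail loudly when the provider drifts. The field names and payloads are illustrative assumptions:

```python
# A tiny contract checker: the expected schema is our point-in-time
# understanding of what an infrastructure API returns.
EXPECTED_FIELDS = {"host": str, "port": int, "tls": bool}

def check_contract(payload):
    """Return a list of mismatches between the payload and the contract."""
    problems = []
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            problems.append(f"wrong type for {field}")
    return problems

# Today's provider response satisfies the contract...
assert check_contract({"host": "db.internal", "port": 5432, "tls": True}) == []
# ...and a schema change surfaces immediately as a contract failure.
assert check_contract({"host": "db.internal", "port": "5432"}) == [
    "wrong type for port", "missing field: tls"]
```

Running checks like this on a schedule is one way to notice a provider API change before it breaks the services that depend on it.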
Changing Observability Needs [09:19]
Thomas Betts: I think that’s a good segue into your next capability, which was observability, and that’s about being able to understand what the system is doing, both when it’s working but also when you’re trying to troubleshoot it. And you look at the spectrum from my application was on-prem or maybe in a colo server and then it was cloud-hosted and then it’s cloud-native. When you move across that spectrum, how do our observability needs change?
Rosemary Wang: It’s very easy to start with, I guess, our traditional monitoring system where we would push all of the information. We would say, okay, here’s an agent that’s gathering all this information and we push it to a monitoring server somewhere. What we’re seeing in cloud-native environments now is more of a push-pull model or a pull model. There’s some server and it’s extracting the information dynamically, but the application itself has to give this information, this telemetry, freely, and there of course has to be some standard in place. So that’s where Prometheus-formatted metrics are now particularly of interest for folks. You’ll see OpenTelemetry, the standardization of libraries that emit traces or emit metrics. We’re starting to get a little bit more consistent in the protocols and the formats that we’re using to support better observability in our systems.
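The pull model described here can be sketched in a few lines: rather than pushing to a monitoring server, the service renders its own counters in the Prometheus text exposition format and lets a scraper fetch them. The metric name and counter are illustrative:

```python
# The service's own telemetry, exposed for a scraper to pull.
counters = {"app_requests_total": 0}

def handle_request():
    # The application does its work and freely updates its telemetry.
    counters["app_requests_total"] += 1
    return "ok"

def render_metrics():
    # Prometheus text exposition format: HELP and TYPE lines, then samples.
    lines = []
    for name, value in counters.items():
        lines.append(f"# HELP {name} Requests handled by this instance.")
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

handle_request()
handle_request()
print(render_metrics())  # includes the sample line "app_requests_total 2"
```

In practice a library such as a Prometheus client would serve this text over an HTTP endpoint; the point of the sketch is only that the format, not the transport agent, is the shared standard.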
But unfortunately, there is not one tool to rule it all. So you’re not going to get the same monitoring tool or the same observability tool that you’re using in cloud as on-prem. I think there’s just too much specialization at times as well from some of these tools. So you might be on-prem using application performance monitoring, something that’s a little bit more agent based, and then when you go to your cloud environment, you might decide to go with something like a third-party managed service, and it’s up to you to decide how you’re going to aggregate all that information. And so we’re seeing organizations building their own observability systems, particularly on the aggregation side of the house where they’re responsible for aggregating, adding metadata, the proper metadata to services and then indexing in this big pool of telemetry. They’re not necessarily depending on the different vendored monitoring systems anymore. They’re instead aggregating in one place.
Thomas Betts: Yes, I think it’s another place where people sometimes make the assumption that just like going to cloud-hosted, oh, I’m now in the cloud and I can scale and I can get all these benefits. Sometimes there’s a perception if I just add OpenTelemetry, now I’m going to have all these benefits. You still have to do some work and you have to do your own customization and find out what works for you. Right?
Rosemary Wang: Exactly. And I think we’ve also got a lot of complexity now too with all these different libraries and different protocols. Let’s say you have a cloud-hosted application and you’ve invested in distributed tracing with Zipkin. Well, Zipkin traces aren’t necessarily compatible with other instrumentation libraries, with other trace aggregation tools, etc. So you go down this rabbit hole of trying to figure out, okay, even though it’s cloud-hosted, or what we think is cloud-native, there’s a lot of disparity or inconsistency in the compatibility across all of these tools, as well as the protocols or instrumentation that we’ve already put in place in these services.
I think that complexity in going from cloud-hosted to cloud-native comes really from this historical implementation of us depending on what was available at the time, and now we’re responsible for re-platforming or refactoring to something that we recognize as the open standard for a cloud-native service.
Thomas Betts: Where it used to be traditional, I owned everything that was on the server that I built and I deployed, now I’m depending on all these other services, and if I really want to get the benefit of, well, what’s happening in all of those dependencies, you might have to adapt to understand what they’re producing. Are they using the same OpenTelemetry that you should then figure out how to get that integrated? That’s that aggregation you were talking about, right?
Rosemary Wang: Yes, exactly. And I think there’s plenty of abstractions out there now. We depended on the abstraction of the monitoring agent. Now I think we’ve got different kinds of abstractions. We’ve got the code libraries or SDKs, the instrumentation you put directly into the service, and then you also now have sidecars or sidecar processes. So it’s not just in the case of containers, but also if you’re doing a virtual machine that’s on a cloud, you might consider a sidecar process that’s retrieving some of this information. So there’s a lot of patterns now, and we’re pushing a lot of these into abstractions. Service mesh also being one of those as well saying, okay, I’m going to turn on telemetry on the proxy level and not necessarily need to think about instrumentation on the application side.
Thomas Betts: Those abstraction layers can be a benefit but also a curse. You’re giving up some control and you don’t have to think about all those things. It’s the traditional buy versus build model. I don’t have to build a service mesh because I can buy one off the shelf and it can do that type of work for me, but it means I do have to understand what am I giving up and what is that abstraction boundary.
Rosemary Wang: Yes, exactly.
The Importance of Immutability [14:07]
Thomas Betts: I want to move on to immutability. And immutability I think is great because it applies to a lot of different things. I can go down to I’m doing functional programming and my data is immutable, but I can also have immutable infrastructure, which is usually what’s talked about for infrastructure as code. You want to build once and deploy many times so that it’s repeatable. Why is it important to have all those things that we cannot change when we’re saying we’re cloud-native?
Rosemary Wang: Well, part of the problem is everything changes so quickly when it comes to a cloud-native environment. I think the best example of this is actually in secrets. Oftentimes when we’re starting to think about cloud-native, we’re using a lot of different services. We’re using many different approaches to authentication, API authorization, and there’s a secret sprawl of sorts. So now you’ve got tokens everywhere. You’ve got database credentials everywhere. You’ve got certificates. Who knows where the certificates are and when they’re going to expire?
And whenever you have so many of these components floating around, and it’s very natural in the cloud-native environment to have a lot of credentials across multiple providers, multiple systems, you start to wonder, well, what are we doing with them and what happens if one of them is compromised? Do we have the ability to (a) observe that something has been compromised? So that’s where observability is important. But also (b) make the changes with least disruption to our systems, and that’s where immutability becomes really key to surviving in a cloud-native environment.
Part of this is that it’s easier to say I will completely create something net new with the changes rather than go in and change or tweak one or two different configurations and lovingly go in and identify where all of these credentials are, where all these passwords are being used, when they’ve been used, who’s using them?
And so immutability becomes really powerful because when you say, okay, I don’t need to go in and make a change in place, instead I can just create something completely new and fail over to that new instance or new secret or new server or new set of applications without necessarily removing the old one. You can still use it, still debug, you can put them in isolation, and you generally mitigate the risk of how your changes may impact the system. And so immutability is all about mitigating risk at the end of the day and mitigating the impact of a potential change failure.
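The fail-over move described here, create something net new and point traffic at it while keeping the old copy around, can be sketched as a tiny routing table. The router shape and version names are illustrative assumptions:

```python
# An immutable rollout sketch: never patch the running instance;
# build a fresh one and fail over to it.
def deploy_immutably(router, build_new_instance):
    old = router.get("active")
    new = build_new_instance()   # something net new, with the change built in
    router["active"] = new       # fail over to the new instance
    router["previous"] = old     # keep the old copy in isolation for debugging
    return router

router = {"active": "app-v1"}
deploy_immutably(router, lambda: "app-v2")
print(router)  # -> {'active': 'app-v2', 'previous': 'app-v1'}
```

Because the old instance is never mutated, rolling back is just pointing "active" at "previous" again, which is exactly the risk mitigation being described.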
And whenever we talk about immutability, it’s scary. People don’t necessarily want to think about creating something new. They think about, oh, the resources are really expensive for me, but in a true cloud-native environment, resources are not as expensive as you expect, and it’s a decision that you make from the cost of failure versus the cost of creating that resource and making sure that you’re doing your due diligence and implementing the change correctly.
Thomas Betts: Yes, I think that speaks to a lot of how you have to design to be cloud-native, that you want smaller units, and that makes the immutability easier because you think about your whole system can’t change. Well, it’s got all these different pieces that have to connect, but if you break it down, it’s easier to say, well, that isn’t going to change. And how do you adapt to having the ability to deploy each of those small pieces individually or being able to change the one immutable thing without having to change all of it?
Rosemary Wang: There’s a balance to it. There’s also a science. It’s almost like dependency injection but for cloud-native environments. And we think about dependency injection oftentimes in software where we say, okay, we’ll not only have the abstraction but we’ll also make sure that higher level resources will call lower level resources. That’s a way that we make sure we decouple those dependencies. In the case of larger systems, modularity is a science. You have to identify the groups of functions that make the most sense, and so there is a fine line when it comes to making too many small modules in your system.
Generally, I joke like if you find yourself putting one or two resources or one or two components into a bucket and saying that’s like one part of the system that I’m always going to change, maybe you should question if there are maybe more pieces to that that you should be adding to that group. On the other hand, you shouldn’t necessarily have a hundred different resources into let’s say one group of a change. It’s a little bit harder to describe at times, but I would certainly say that the rule of immutability that I have is if it takes more than two or three lines of code to make the change, then I should probably do it immutably. And that goes for infrastructure. That also goes for application code as well. But it’s a very difficult thing.
It’s a good question. I wonder if there is a scientific graph for evaluating it.
Thomas Betts: The answer is always it depends. You have to evaluate your own situation. I think you mentioned optimizing for the provisioning, that this only works if you can deploy those things quickly. If it takes half an hour to deploy or longer, you’re going to let that thing live longer and you might design it differently. But if you say, oh, I can change this and turn it off and back on again, the classic IT joke, if that’s going to happen in seconds or a few minutes, then that’s just the easier answer.
Rosemary Wang: And there’s also the other thing where if a change has gone so drastically wrong that in-place recovery is just not looking very promising, and this happens sometimes in the case of, let’s say, data corruption or in a system where it’s been compromised from a security standpoint, then immutability works really well. It is worth, let’s say, taking two days to stand up the new environment when you might take a week trying to restore everything in place when all has gone wrong. So there’s still a power to immutability. And I think it will depend on, as you’ve pointed out, the scenario, the day-to-day of what you’re doing, what change it is, and what state that system is in at that point in time.
Thomas Betts: And you said state, and I was just thinking that you don’t want to have a whole lot of state being managed in that service. You want to think about the service is just doing the operation and the data is stored somewhere else. It’s another big design consideration.
Rosemary Wang: It is. We don’t talk enough about data in I think cloud-native. Most of the time when we talk about cloud-native, we often mix it with the idea of statelessness. The application does not contain any of the data. It does not track any of the data. It is merely processing it, but we don’t talk about state enough, and I think that what makes cloud-native state important tends to be whether or not you’ve handled the distribution of that data correctly. It’s no longer enough for us to say, okay, we’ve got one database and we’re just going to store it somewhere. That data, oftentimes, we have to think carefully about where it’s going to be replicated to, as well as how we’re going to manage it. I think we have to be more careful at times as well about how we do disaster recovery or data recovery for cloud-native components. We oftentimes just funnel that and ignore that consideration when we talk about environments in the cloud. But I think it becomes more important to think about backup, think about recovery whenever you have those components.
Elasticity: The Hallmark of Cloud-Native [20:57]
Thomas Betts: Yes. Again, nothing is for free. Nothing happens automatically. You have to design for it and plan for it. I think we’ll move on to elasticity. You talked about this being the hallmark of cloud-native, and that’s where we say: go to the cloud and you can just expand and shrink to meet your demand, almost automagically. And we talk about that usually as horizontal scaling, that we want these small units to just expand out and work in parallel. But is that always the right advice? And for cloud-native, do we sometimes need to have vertical scaling as well?
Rosemary Wang: So vertical scaling is what we traditionally thought about in terms of resource efficiency. Are we using our memory? Are we using our CPU to the greatest extent? And if not, and we need more, well, we would say, okay, we’ll need a server or we’ll need a resource with more memory, more CPU. As you pointed out with horizontal scaling, instead, we’re focused on workload density, so how many of these instances in the smallest unit of resource usage can we schedule? And most of the time, cloud-native tends to be thought in the context of horizontal scaling. How many of these containers can we schedule or how many of these small, little instances can we use to distribute this amount of data?
And the reality is that vertical scaling is actually incredibly important in cloud-native as well, because not everything can be horizontally scaled and not everything should be horizontally scaled either. And it’s all dependent on the idle versus active resources you do have. Sometimes, especially if you have something that is running with great frequency, you don’t necessarily want horizontal scaling. It’s more efficient to allocate one resource, and that resource is allocated to running a job or some process very frequently. And then other times, you might have a need for volume, very adjustable volume over time, so volume adjusted based on demand. That could be requests for an application. That could be how much information or how many functions you’re running or processing at one time. So it’s a pretty nuanced assessment.
With elasticity, what I will say is that part of our problem is that we think a lot about scaling up and we don’t think necessarily about scaling down. And even whether you’re doing vertical scaling or horizontal scaling, there is a possible scale down for both scenarios, and that’s where we have to be a little bit more careful about how we optimize and where we schedule some of these workloads.
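A scaling rule that works in both directions can be sketched with the kind of proportional formula horizontal autoscalers use (loosely modeled on the Kubernetes HPA algorithm; the thresholds and names are illustrative):

```python
import math

def desired_replicas(current, cpu_percent, target_percent=60,
                     min_replicas=1, max_replicas=10):
    """Proportional autoscaling rule: the replica count tracks measured
    load in BOTH directions, up under pressure and down when idle."""
    desired = math.ceil(current * cpu_percent / target_percent)
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(current=4, cpu_percent=90))  # hot  -> 6 (scale up)
print(desired_replicas(current=6, cpu_percent=20))  # idle -> 2 (scale DOWN)
```

The second call is the part that is easy to forget: the same rule that adds replicas under load should reclaim them when traffic drops, or the fleet quietly stays at its high-water mark.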
Thomas Betts: That scaling down I think was another talk at QCon about cloud zombies that we have all these instances that are running around and they either got too big and they never got smaller or we left them running. I think Holly Cummins gave that talk, talking about #lightswitchops. She just wants people to go around and start turning things off. You think, oh, it’s in the cloud, it will just automatically turn off, and nothing is as automatic as you think.
Rosemary Wang: No, it is not. One thing that I see is that elasticity tends to affect cost. And the thing that people don’t realize is networking components often are probably the culprit of the zombie resource that’s just hanging out there and not really doing anything. So in the case of AWS, Elastic IPs. Data transfer, there’s a surprising amount that you get charged for transfer out of your network and across regions as well. We don’t think of those as zombies per se because we’re transmitting data out. That’s got to have some use. And the reality is sometimes we don’t need to be doing what I call the traffic trombone. We don’t need to send that traffic out. Really we should be just keeping it within our network. So there’s a lot of considerations there. I like the zombie resources analogy.
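Hunting the networking zombies mentioned here can be as simple as filtering address records for ones attached to nothing. The record shape below is loosely modeled on what AWS returns for Elastic IPs (where an associated address carries an association ID), but treat it as an assumption for illustration:

```python
# Flag allocated addresses that are attached to nothing: they still
# incur charges while doing no work -- classic zombie resources.
def unattached_addresses(addresses):
    return [a["PublicIp"] for a in addresses if "AssociationId" not in a]

addresses = [
    {"PublicIp": "203.0.113.7", "AssociationId": "eipassoc-1"},
    {"PublicIp": "203.0.113.9"},  # allocated and billed, attached to nothing
]
print(unattached_addresses(addresses))  # -> ['203.0.113.9']
```

A periodic sweep like this, fed from the provider's real inventory API, is a lightweight way to practice the "go around and turn things off" habit discussed above.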
Thomas Betts: That was part of the green software track or design for sustainability and just that’s another idea about going cloud-native, is you have all this power but you also have to be cognizant of how much carbon are you using? And it’s harder to measure. It’s getting better. A lot of times, we just look at the cost of the resources and think, oh, that must be how efficient it is. And your example of you’re having a lot of small things, sometimes it’d be useful to have a few larger instances running. I think a lot of examples of people went to a serverless offering because they thought it would save money and they went back to a full VM or Kubernetes or whatever because it turned out having one thing provisioned all the time was cheaper than spinning up all these little instances all the time. So you can’t just assume that, oh, we can go smaller and it’ll be cheaper or be more efficient.
Rosemary Wang: Yes, exactly. I see a lot in data processing, specifically. People will spin up a lot of functions, and the cost of the functions, unfortunately it gets quite expensive and then they end up going to a dedicated data processing tool, and that’s something that they host themselves. And over time, they’ve made this discovery that it’s no longer feasible for them to maintain the volume of functions and the frequency of functions that they’re spinning up.
Changeability Often Involves Paradigm Shifts [25:27]
Thomas Betts: Well, let’s wrap up with changeability. So this is another promise of the cloud, is the ability to be flexible and we get to try new things. That might be a new resource capability that was just offered or maybe it’s a new way of working among the teams. What are some of the pitfalls that we’re running into if we just assume that we’re going to the cloud and we get all this flexibility to do whatever we want?
Rosemary Wang: Most of the time, these latest offerings involve paradigm shifts, and they’re not paradigm shifts about the technology themselves. They’re paradigm shifts about how we work. It’s about process and about people. A good example of this is GitOps. People have been really interested in GitOps recently because it’s this promise of continuous deployment of services with a metrics-based approach. Automated deployment, you get a sense of what these metrics are. You get automated rollback. It handles the operational aspect of rollout, but it’s a paradigm shift. It’s not that easy to say I’m going to defer all decision-making automatically to a controller that’s going to assess whether or not these metrics are good or bad and then handle the rollout for me, and then I’ll trust that it will roll back. It’s a really difficult thing to change your mindset on.
And so when we talk about changeability in the context of cloud-native, it’s this idea that you may have to convince someone to take on a new perspective for these latest offerings or latest technologies, and that in some ways is a more difficult prospect than some technical implementation. But what I generally say is that in the context of changeability, there’s always an intermediate step. You don’t have to take advantage of the latest offering immediately. In order to get buy-in and to really think carefully about whether or not this new offering is for you from a cultural standpoint for your organization, choose the intermediate step. And that may mean that you do something manually or you do something that is not ideal in the beginning, but then you eventually shift toward that latest approach.
So in the example of GitOps, maybe you do some kind of CI automation that allows you to do some deployment that is partially manual, partially automated, and that gets you a little bit closer to the GitOps model, a little bit more comfortable with it. I think it’s important to differentiate cloud-native changeability from cloud-hosted changeability. Cloud-hosted changeability is this idea that you’re hosting it and you may not necessarily re-platform it on the latest offering or you may not want to take advantage of all these latest technologies. With cloud-native, you do because to be honest, they’ll probably deprecate the service offering you’re using in a couple of years, so you always need to rethink about what is the latest offering that you’re going to have to move to in a couple of years’ time.
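The metrics-based decision that GitOps controllers automate can be sketched as a simple gate, which is also a reasonable intermediate step: run it in CI before trusting a controller to act on it unattended. The threshold and labels are illustrative assumptions:

```python
# A metrics-gated rollout decision: promote the new version only while
# its observed error rate stays under a healthy threshold.
def evaluate_rollout(error_rates, threshold=0.05):
    """Return 'rollback' at the first unhealthy sample, else 'promote'."""
    for rate in error_rates:
        if rate > threshold:
            return "rollback"
    return "promote"

print(evaluate_rollout([0.01, 0.02, 0.01]))  # -> promote
print(evaluate_rollout([0.01, 0.09]))        # -> rollback
```

Starting with this logic in a partially manual pipeline, where a human confirms the verdict, builds the trust needed before deferring the decision fully to a controller.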
Thomas Betts: I like the idea of having those intermediate steps. You don’t have to jump to the end and assume you can get all the benefit and the overlap between the, “I want this technical feature, but I have to think about the socio-technical impacts and that for us to change our process or change our application, we have to get the people to change as well.”
Rosemary Wang: It’s something we forget about sometimes. I think it’s easy to say, okay, I’m going to move my application, and that’s where this journey starts, saying, I’m going to put this application on the cloud. It’s easy enough to say I hosted it on the cloud and therefore I’m running it. But cloud-native, it changes a lot of perspective, and I think that’s where we go wrong. We forget that we ourselves have to change to accommodate for a very dynamic environment because we’re no longer able to control the next version of Kubernetes or the next version of a monitoring tool anymore. If we’re using open standards, this will be driven by the community. This will be driven by outside forces that are not directly in our control, and we have to make sure that we can adapt to all those things.
It’s Okay If You’re Cloud-Hosted [28:57]
Thomas Betts: Any final advice for the people who are thinking about how to get either onto the cloud-hosted, onto the cloud-native or what they’re doing and how to make any little incremental changes for their process?
Rosemary Wang: It’s okay if you’re cloud-hosted. I think people are like, oh, we must be cloud-native. And the thing is, it’s okay if you’re cloud-hosted. It’s working for you. You’ve got the cost optimization you’re looking for. Really, it’s already incremental progress for you. You don’t have to wait around for procurement of infrastructure, for example. Maybe that’s the value you’re seeing right now. And it’s okay, you’re cloud-hosted.
If you do want to go to cloud-native and you want to scale the way you are offering infrastructure as well as how people are developing services because you’ve reached that point, again, take those incremental steps. You don’t have to jump immediately to a new greenfield offering and commit to changing everything. Change one or two things that are already going to improve your workflow. So think about what your workflow is today. Think about small things that might make it a little bit better, and then eventually, you’ll get to a point where you can say I’m fully cloud-native.
Thomas Betts: Well, that’s a good place to end. So I want to thank you, Rosemary Wang, for joining us on the InfoQ Podcast.
Rosemary Wang: Thank you for having me.
Thomas Betts: And we hope you’ll join us again soon for another episode.