Abby Bangser, Helen Beal, Matt Campbell, Steef-Jan Wiggers
Daniel Bryant: Hello, and welcome to the InfoQ Podcast. I'm your host Daniel Bryant, and today we have members of the editorial staff and friends of InfoQ discussing the current trends within the cloud and DevOps space. We'll cover topics ranging from AI and low-code/no-code, to the changing face of serverless, the impact of shift-left security, and sustainability within the cloud. I'll start by letting everyone introduce themselves.
Helen Beal: Hi, I'm Helen Beal. I am chief ambassador at DevOps Institute, now part of the PeopleCert family. I'm also chair of the Value Stream Management Consortium, co-chair of the Value Stream Management Interoperability Technical Committee at OASIS Open, and a strategic advisor on ways of working. Lovely to be here.
Daniel Bryant: Thank you very much, Helen. Abby, over to you.
Abby Bangser: Hi, yes, I’m Abby Bangser. I’m based in London as a principal engineer at Syntasso working on platform engineering solutions. I’ve also worked with the InfoQ group before, including just recently at QCon London with the debugging production track. Glad to be here.
Daniel Bryant: Fantastic. Matt, over to you.
Matt Campbell: Thank you. Matt Campbell, I'm based in Canada, lead editor for the DevOps queue at InfoQ, and also VP of Cloud Platform at an education company called D2L, where we do a lot of platform engineering work.
Daniel Bryant: Fantastic. And finally, last but not least, Steef-Jan, over to you.
Steef-Jan Wiggers: Yes, Steef-Jan Wiggers. I'm based in the Netherlands, and I've been a lead cloud editor at InfoQ for the last couple of years. I work as an Azure integration architect for i8c in the Netherlands. And yes, I'm happy to be here.
Is the cloud domain moving from revolution to evolution? And is DevOps dead? [01:25]
Daniel Bryant: Fantastic. Thank you so much for taking time out of your busy days to join us. We'll launch straight in, and try not to be too controversial on the first question, right? But I want to set the high-level piece. Two things in the cloud and DevOps space: first, I'm hearing cloud innovation is slowing down. Our InfoQ colleague Renato did a fantastic piece covering AWS re:Invent, and even he said that it's more evolution rather than revolution. Second, we've been seeing the question "is DevOps dead?", and we've had some interesting conversations around that. So I'd love to get everyone's thoughts: what do you see as the big picture with cloud and DevOps? How are things moving, and where do you think they're going?
Helen Beal: I think it's probably true that the innovation has slowed a little bit around cloud. I mean, it's gone really, really fast for a long time and now a lot of people are trying to adopt it. I think we've got over things like lift and shift and learned a lot about not doing things like that, but there's still a lot of work for people to move workloads, re-architect workloads, and adopt the things that have been invented for them.
As for DevOps, definitely not dead, don't be ridiculous. However, I would say that it has some problems some 13 years into its journey. It's reached some stagnancy, maybe dormancy; some organizations are struggling to really get it to the next level. That's personally one of the reasons I'm particularly interested in value stream management: I see it as an option to help people unlock the flow and value realization parts. But definitely not dead. Absolutely in the mainstream, with lots of organizations still working on getting it right. So I'm going to ask Abby to respond and let me know how much of what I've said she agrees with.
Abby Bangser: Yes, just to go for the easy one first, I agree on the cloud space. I think there's been a lot of evolution towards the original goal of cloud computing, which is access to the resources that you need, on demand and with easy scalability. We can see that now moving across more and more types of resources and becoming more ubiquitous, which looks like evolution even though in those spaces it's revolutionary. I think the DevOps conversation's really interesting 'cause, as you say, the intention of DevOps, just like the intention of cloud, is I think still alive and well.
The intention of giving people access and autonomy in their teams to create business value without blockers is still something that we are all striving for and working hard towards. I think the concept of a developer person and an operations person sitting together on a team was never quite the intention, but was the attempted implementation, and we're now seeing cracks in it after years of trying. I think that's giving rise to some of the other changes we're going to be talking about in platforms, internal platforms, and other things. So absolutely.
Daniel Bryant: Matt, you’re nodding along there a few times.
Matt Campbell: Yes, I would agree. One of my favorite topics of late is the sort of cognitive pressure that people are put under. I think Abby and Helen were hitting on that: in the approach of localizing development to try to deliver business value faster, we also overloaded teams. Even with all the cloud innovation that's happening, every time something new comes out you have to figure out how it's going to fit in your worldview, how you're going to integrate it into what you're doing, and whether it adds value. Everyone has a lot on their plate. The tech landscape changes very quickly, and businesses want to evolve and adapt quickly as well. So I think it's all somewhat interconnected with the slowdown, in that we're now in a phase where we're trying to figure out how to sustainably leverage all of the cool stuff we've invented, and all these ways of interacting with each other, and move to a place where we can innovate comfortably going forward.
Daniel Bryant: Love it. Steef-Jan, so you’re nodding along there too.
Steef-Jan Wiggers: Well, I did see massive adoption of the cloud, predominantly because of COVID. Yes, I agree that a lot of that puts a burden on development; I have to deal with the cloud a lot myself. One evolution here, which also ties into DevOps, is automated environment setup. Recently some of the cloud providers like Azure have introduced an automated way of setting up not just a dev box but complete environments. So I see some evolution in that area as well: a complete, automated development and test street, basically, so that we can say, "Yes, we're trying to get dev and ops sitting together."
Because I usually still see a little bit of a wall between them, and it comes down to identity and access management, that kind of stuff. You still need access to certain resources, or you need to access certain tiers, and then you don't get them because someone on the ops side says, "Well, I haven't got the time for it yet." So some of that stuff is not yet automated. I still see a little bit of a gap there, just from personal experience. There's always this little boundary between dev and ops, and it predominantly has to do with identity and access management.
Daniel Bryant: That's very interesting, Steef-Jan. I liked everyone's riff on the whole DevOps question. Joking in a presentation I did a few years ago, I said it should really be at least BizDevQaOps, right? There's a whole bunch of things that should be in there, but you know what developers are like. Well, engineers and architects, I should say: we just like a snappy title. So I think that was a great call out, which I'm sure we'll dive into many times throughout the podcast.
What is the current impact of AI and LLMs on the domains of cloud and DevOps? [06:19]
I did like the focus on evolution versus revolution. And Matt, you mentioned cognitive overload, popularized by Team Topologies, which I think we'll cover as well. But on that note, we can't get away from the impact AI, and LLMs in particular, have had over the last six months, and definitely since the last trend report we did. I'd love to get folks' thoughts on what value AI adds for someone who's listening to this podcast as an engineer, an architect, or a technical leader. What do you think the immediate impact of these large language models is, and where do you think it's going?
Helen Beal: Well, I can start again if you like. Funnily enough, yesterday I was doing a similar one of these about observability, but we ended up talking about AI quite a lot, initially in the context of AIOps, which of course isn't so much about large language models but more about algorithms and machine learning. And absolutely, what Matt said about cognitive pressure: cognitive load limits in humans are a big problem when we are dealing with lots and lots of data coming out of lots and lots of monitoring systems, creating huge amounts of alert noise. So AIOps has a very strong use case there in incident management, which is where we started the conversation. But we of course got onto ChatGPT and generative AI as well. One of the particularly interesting use cases I heard from the team yesterday was around, again, incident management, but this time ticketing.
So developers probably remember things like the rubber duck. You'd have a rubber duck on your desk and you'd have to talk to it if you had a problem before you were allowed to bother somebody else, because, generally, you'd find that if you had a conversation, you'd come to a solution. They're doing a similar thing at the insurance company that was on the call yesterday, actually using it in their ticketing system. So instead of raising a ticket, it encourages you to chat with the AI, and only once you get to a certain point does it say, "Actually, no, we are going to have to raise a ticket for this thing." There were a few other ones too; of course, the developer Copilot-type things for getting code snippets. I think there's just a huge opportunity here for some leapfrogging.
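As an aside for readers who want to picture that ticket-deflection flow, here is a minimal hypothetical sketch in Python. The `ask_assistant` function is a stub standing in for a real LLM chat service; none of these names come from the tool Helen describes.

```python
from typing import Optional

def ask_assistant(question: str) -> Optional[str]:
    """Stubbed assistant: return a suggested fix, or None if it can't help."""
    known_fixes = {
        "disk full": "Rotate the logs in /var/log and re-run the job.",
    }
    for symptom, fix in known_fixes.items():
        if symptom in question.lower():
            return fix
    return None

def handle_issue(question: str) -> dict:
    """Chat-first flow: only raise a ticket if the assistant can't resolve it."""
    suggestion = ask_assistant(question)
    if suggestion is not None:
        return {"resolved_by": "assistant", "suggestion": suggestion}
    return {"resolved_by": "ticket", "ticket": {"summary": question}}

print(handle_issue("Job failing, disk full on worker-3")["resolved_by"])  # assistant
print(handle_issue("Intermittent 502s from the gateway")["resolved_by"])  # ticket
```

The point of the pattern is the ordering: the ticket is the fallback, not the entry point, so routine issues never reach a human queue.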
I see it in some of my daily work. For example, recently we had a problem where we were writing a new course, and the guy who was writing it was running out of time and we needed some instructor notes written. Usually, in the past, that would've meant I'd Google the topic, spend 15 minutes digesting the information and then regurgitate something into an instructor note, but I could leapfrog that 15 minutes of work by asking ChatGPT the question, validating the information, and shuffling it around, taking the right bits and leaving out the bits I didn't want. Another slightly off-topic but interesting ChatGPT story: I was doing some work this morning on a novel that I'm writing, and the scene I'm on is set on a celebrity cooking show where somebody's about to sabotage the cooking competition.
First of all, I did it in Sudowrite, which is a large language model tool for writers, and it came out with all sorts of suggestions about fiddling with utensils and ingredients. Then I stuck it in Bard, and Bard said, "I'm a large language model and I can't help you." Then I stuck it in ChatGPT, and ChatGPT said, basically, "I'm morally not allowed to help you with that. You're trying to sabotage somebody." I thought that was great though; seeing the ethics at work in the large-scale open tools was fun. Anyway, probably a little off-topic, apologies for that.
Daniel Bryant: Who would’ve thought cooking in the InfoQ Cloud DevOps podcast? Fantastic. Anyone else got any thoughts on that?
Steef-Jan Wiggers: Well, yes, I do. On the Microsoft side, I see a huge push through OpenAI initiatives. Like Helen mentioned, Copilot: many of the Azure and even broader Microsoft cloud products or services now have a Copilot. You see it in tooling like Visual Studio and VS Code, you see it on the Microsoft and Dynamics side, and in a lot of other Microsoft services. Even something recent like Microsoft Fabric, the complete SaaS data lake, or lakehouse, solution they have, is completely infused with AI. So yes, you definitely see a big push and investment on the Microsoft side.
Abby Bangser: Yes, to your question of what listeners would care about in this space, I think the big thing is just to start getting involved and exploring what is possible and not possible, because it's very easy to poke fun at the prompts that show up on social media: "Look at how silly this AI is, the robots are definitely not coming for us." Because it's true, there are things that you do need to validate. Helen talked about getting information back and then validating it before posting it to a project or a course. But at the same time, having an idea of what can be done for you, even if it's not yet in the language models today, and where you will be able to add value, is I think a really big piece of what we're trying to figure out right now.
We're also just trying to figure out what that evolution will look like. There's that joke about being a prompt engineer instead of a software engineer. It's a joke today to many people, but actually, that ability to be creative and to think critically about a problem in order to generate useful suggestions is effective in a meeting, and it's going to be even more effective when you start applying things like a large language model to it. So I think these are all things that are coming down the pipe at us because of this new technology.
Daniel Bryant: Fantastic. Matt, any thoughts on that one?
Matt Campbell: Well, it sounds like DevOps is going to get replaced by DevPromptOps then.
Daniel Bryant: Yes, love it.
Matt Campbell: That's what you're saying there, anyway. So yes, I love the idea of leveraging tools like this to make our lives easier. That's always been the dream of technology: that we would have flying cars and not have to work as much. In some cases, circling back to the first question, a lot of the innovation has actually maybe made us work harder to achieve some of our outcomes. And I love the idea, going back to what you were talking about, Helen, of AIOps and using it for monitoring. Especially with distributed systems, once you get observability in there, it's really hard to trace down where something's actually broken, so having a system take the first stab at that and report back "maybe look over here" would help. Related to that, something recent at work: somebody wanted to learn how to use ANTLR to help process a large amount of data that we had, and used ChatGPT to help construct some of the initial scripts as a way to guide their learning.
So using it to help prompt you and fill in gaps and give you a jumpstart. I think that as an initial jump-off point is pretty awesome usage of it. I’d love to see more developments in that area.
How do AI-based and ChatGPT-like products impact the low-code and no-code domains? [12:12]
Daniel Bryant: Yes, I love all the great comments about the validation and the ethics involved. And I think we're all saying the Copilot, the pair programmer, the summarization is super useful. I was doing a lot of work in the low-code space a while ago, and I wonder: what do you think the impact of AI is on low-code? Is it disrupting the whole low-code, no-code space, or is it going to augment it, in that the tools to make things simpler were already there and are now perhaps going to get even better? So I'd love to get people's thoughts: where do you think these low-code, no-code tools available in the cloud are going under the impact of AI?
Helen Beal: Funnily enough, that came up in the conversation yesterday as well when we were doing takeaways. The same chap from the insurance company that I was talking about, when I asked him for his vision of the future, that's exactly what he wanted. As a head of DevOps in the technical team, what he actually wanted was for his business teams to be able to use AI so that they could drive things like self-healing and remediation around their core business systems. So people are definitely looking for that connection. I haven't seen anybody who has it yet, but it's something I've definitely seen people have an appetite for. Has anybody seen it in action in the real world yet?
Daniel Bryant: So listeners to the podcast, there’s a business opportunity there, right?
Helen Beal: A huge business opportunity. And I found it really interesting because traditionally, when I've been a consultant and we've had conversations about businesspeople doing stuff with systems, people have started shouting about shadow IT and how undesirable it is to have businesspeople with their hands in the systems to that extent. So I found it quite interesting that we seem to be turning a corner, because AI is actually supporting the businesspeople, giving them an amount of knowledge that is probably safe. A small amount of knowledge can be dangerous, but it feels like AI is lifting us out of that space so that we can be trusted. And the point I made in response was that it would be great to stop having this division between product management and software engineering, "the business" as people call it in technology, and to actually collaborate as single value streams. That would be fabulous.
Abby Bangser: I think it's really interesting, that point about how dangerous it is to have someone who doesn't particularly know about code use these low-code platforms. I think that's been the biggest debate around their uptake. And I've been hearing a lot of increased conversation about ClickOps, which used to be talked about in the sense of clicking around a browser to solve your problem. But today's products and leaders in that space are talking about how you enable people to click around and do things, if that is the right interface for them, but end up with code, or something that is version controlled, is declarative in nature, and can be understood and evolved.
And I think the exciting bit about the LLM and AI side of things is that we're getting better and better code generation. We're seeing that with Copilot, with Codeium, with lots and lots of other tools. Can we get even better code generation: code that the people who don't want to click around the UI can actually read, actually evolve, and that can actually meet the standards of our organization's code? Coming from that no-code, low-code spot, I think that will be an interesting evolution.
Steef-Jan Wiggers: Yes, I think so too, if you can prompt, or talk, and then have it visualized through low-code. I've worked a lot with low-code platforms, at least in the Microsoft space with Logic Apps, so if it can be generated, you have your business process right in front of you. You could click it around, that's correct, but usually that's left up to us instead of left up to them. But if you can just have it generated, with your business process right in your face, it runs, and there's always the kind of code-behind stuff that is generated for you. So that can definitely work.
The interesting thing, though, at the end of the day, is governance with regard to accessing certain parts of the data. That's always been the case with low-code. Even if you look at the low-code platforms on the Microsoft cloud side, which I believe is called Power Automate and that kind of stuff, it's easy for businesspeople to use, and they call them citizen developers. But at the end of the day it's also about the data and the governance around access to that data, and sometimes it gets a bit worrying that they get too much power or too much access to certain data.
Matt Campbell: I think that's the sort of merger between low-code and platform engineering: we want to provide a safe platform that complies with our governance needs but also empowers its users to get their job done. Some of the low-code systems can even be challenging for end users, so getting some of the prompt engineering and ChatGPT stuff into them could be helpful, almost telling the tool what you want and having it build a skeleton for you to flesh out. My guess is we're going to see a lot more evolution in this area as we look to drive more business value quickly, and also, to Steef-Jan's point, improvement in the governance layer, the platform engineering, DevOpsy layer, so that platform teams can take care of that part for the low-code users without them needing to worry about crossing any boundaries you don't want crossed.
How will Platform Engineering evolve? [16:59]
Daniel Bryant: Fantastic summary, everyone. I've heard platforms mentioned implicitly several times now. Most of us here have been building platforms for 10 or 20 years, but platform engineering has become the thing du jour. People have really locked onto it, with good reason; like DevOps, many of us were doing it before it became a thing, but giving it a label does help build a community and drive the principles of that space forward. So I'd love to get folks' thoughts on platform engineering. In particular, I was thinking of things like Kubernetes and service mesh, all the rage for the last five years: are they getting pushed down, to everyone's points about hiding some of that complexity and reducing cognitive load? And Abby, you pointed out in our chat before the podcast started that Team Topologies is going through a new evolution as well, that everyone I seem to chat to is adopting some form of Team Topologies, and that platform as a service, not as in PaaS, but as in offering a platform as a service, is seemingly everywhere at the moment.
Abby Bangser: Yes, I think it's really interesting to see something like Team Topologies in the early majority and something like self-service platforms in the late majority, when I look around the industry and at the people that I work with. And I think one of the big things is what self-service actually means in these organizations, and what application of Team Topologies is being applied: how much of it is agreed in theory, and how much of it is really driving how you organize your teams and their interaction models, not just how you organize the teams but how they interact.
And so what I'm seeing is that, with self-service, a lot of teams are saying they have self-service. A couple of years ago I was saying I had a self-service platform, and the way people would self-serve from it is they would make a pull request into a repo with some code, because they would have had to learn how to use a Terraform module or how to extend a YAML values file, or something else that was not core to their business value delivery, and it still required some level of a ticketing system.
It's different from ServiceNow or Jira or something else; it's a pull request in GitHub, GitLab or another repository type. But it's still a ticketing system that relies on a human. Today we're talking a lot more about what an API is: something where we can actually separate the concerns of the consumers, being the application developers or other members of the organization, from the producers, being the platform team. And I think that's where the revolution is happening right now with people, because of the Team Topologies conversations.
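To make that contrast concrete, here is a small illustrative sketch (all names are hypothetical, not from any particular product) of a declarative claim going through a platform API with policy checks, rather than through a human-reviewed pull request:

```python
# Hypothetical self-service platform API: the consumer submits a small
# declarative claim; the platform validates it against guardrails and
# fulfils it without a human reviewing a pull request.

ALLOWED_SIZES = {"small", "medium", "large"}  # guardrails set by the platform team

def submit_claim(claim: dict) -> dict:
    """Validate a declarative resource claim and accept it for provisioning."""
    if claim.get("kind") != "PostgresDatabase":
        raise ValueError("unsupported resource kind")
    if claim.get("size") not in ALLOWED_SIZES:
        raise ValueError(f"size must be one of {sorted(ALLOWED_SIZES)}")
    # A real platform would now reconcile the claim asynchronously
    # (e.g. via a controller); here we simply acknowledge it.
    return {"status": "accepted", "name": claim["name"], "size": claim["size"]}

result = submit_claim({"kind": "PostgresDatabase", "name": "orders-db", "size": "small"})
print(result["status"])  # accepted
```

The consumer only states what they need; the platform team's standards live in the validation, not in a reviewer's head.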
Helen Beal: I'm particularly interested in its application within value stream management as well, when we look at the stream-aligned team as being what I would call the core team doing the work that's going to make the difference to the ultimate customer, and then the supporting-type teams, like the platform teams, that help the stream-aligned teams do that work. My favorite pattern for the platform team isn't in the cloud, actually, it's in the DevOps space. Historically, when we've seen adoption of DevOps, and specifically CI/CD pipelines, we've seen them mushroom up in little development teams all through an enterprise until there's a critical mass where people start saying, "Well, we've got all these different tools, shall we standardize?" Then there's absolute uproar, because people get very emotionally attached to their tools, quite understandably. But then you get a whole other group of people in the organization who are desperate for the DevOps toolchains and don't have the skills and the knowledge.
So actually, having a platform engineering team, or a platform team, that provides a DevOps toolchain as a service solves a few problems. It makes the whole technology stack much more available to the people who haven't yet got there. And then it enables the conversation around standardization, which is much easier to have if you talk about the architectural configuration of a toolchain rather than the specific flavors of tools within it. The central team can then talk about what they've got the skills to support, and what might be exceptions that people will need to support themselves if they want to continue using them.
Abby Bangser: I'll just quickly add to that, 'cause I think it's a really good description: what's really driving all this change is the change in the perception of the platform engineers, from the keeper of the complex bottom-layer information to a service team for the rest of the organization. Instead of being like, "You will use our tools 'cause we told you you will. Even if you're really attached to the things you used to use, they're no longer compliant, no longer cost-efficient or sustainable." The engineering teams don't care about those things. They want to use the things they want to use, and they want to go to production quickly. Today, platform engineering teams are learning what developer relations and marketing look like, what it looks like to actually engage with customers, get feedback, and have a roadmap that meets their needs and the business constraints the platform engineering team is managing and working under.
Matt Campbell: Those are all fantastic points. And Daniel, you mentioned that platform engineering is not a new thing; like many things in our industry, it's an old thing that maybe some of us forgot about, and now we're remembering it and it feels new. But there are new parts to it, and Abby, you just commented on those: as opposed to treating platform engineering as a thing we build that people must use, we're trying to find ways to build things that delight our end users and actually drive value, so that they want to use it because of the value it adds.
And in that regard, Daniel, to your initial question, we are seeing the technologies pushed down a little bit, and, to the earlier point, that API layer, that interface layer, being added to simplify the interaction, to help drive better business value and to make it easier for teams to get in there without having to learn all the intricacies and all the fun stuff that YAML brings to the table, as you mentioned, Abby. So yes, it's a really exciting time and a really interesting evolution that we're seeing. I really love the platform-as-a-product framing that we have this time around.
Daniel Bryant: Yes, love it, Matt. Just to double down on that: when I was at KubeCon, the Kubernetes conference, I was chatting to folks, and the overwhelming anecdotal data was that people were pushing Kubernetes down the stack. I found this kind of bizarre, right? "My engineers just want to go into some interface and define something or call some API", to Abby's point. And I was like, "Wow, this is KubeCon. It's really interesting seeing the evolution of trends at a conference that's really about the technology, and now it's pushing more towards platforms." I also saw a lot of push towards observability, which we've covered a few times, but not just observability in terms of, say, SLIs, service level indicators, but also things around KPIs, key performance indicators. A big one was financial: particularly in this non-zero interest rate environment we're going through now, folks are really concerned about how much they're spending on a platform. No more spinning up some huge database, poking around, shutting it down, whatever; you've got to justify that cost.
Is FinOps moving to the early majority of adoption? [23:20]
And I think, Steef-Jan, you mentioned offline that FinOps is moving into the early majority. I'd love to get your take on that, because I'm hearing more about FinOps. I initially learned about it from Simon Wardley, who has fantastic blogs out there, but now I'm seeing it go much more into the mainstream, as you were pitching, Steef-Jan. So I'd love to get your thoughts on that.
Steef-Jan Wiggers: Yes, that's correct. There are more companies joining the FinOps Foundation, which provides the support and processes around FinOps. I do see a lot of tools popping up that can support your FinOps and give you a look into what you're actually spending, but I feel it's also about the process: what are you spending versus what value are you getting out of it, rather than just looking at tools. The tools give you an idea, "Okay, I'm spending", but is the spending justified or not? That's what I think is great about the FinOps Foundation coming in, with more and more companies, like Microsoft and other cloud companies, adopting it and being part of that journey. There are ways of getting to know the processes; I think you can even do certifications. In my line of work, I see the discussion more and more.
We have this running, but are we actually using it? If we're not using it, we just get rid of it or turn it off. I see that in our discussions as well: why do we have things running? Even on the personal side, I've got an MVP subscription which allows me to basically burn money, but even I'm starting to look at demos I'm not using and thinking, you know what, I'm just shutting them down. The credits are free, but it doesn't make sense. So even I'm becoming more aware, because FinOps also has to do with sustainability, which I learned about during our latest QCon in London from Holly Cummins, with her LightSwitchOps idea making us aware that we have stuff running that costs money. I think it was even something like 21 billion, a year or two ago, when they figured out all the wasted cloud costs combined. It's insane. So it makes us aware that we should only spend when we need to, like the flick of a light switch with energy.
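The "turn it off" discipline Steef-Jan describes can be pictured with a tiny sketch; the resource names, utilisation numbers, and costs below are invented, and a real version would pull them from a cloud billing or monitoring API:

```python
# Invented sample inventory: name, average CPU utilisation (%), monthly cost ($).
resources = [
    {"name": "demo-vm-1",   "avg_cpu": 1.2,  "monthly_cost": 140.0},
    {"name": "prod-api",    "avg_cpu": 46.0, "monthly_cost": 610.0},
    {"name": "old-db-copy", "avg_cpu": 0.3,  "monthly_cost": 95.0},
]

def idle_resources(inventory, cpu_threshold=5.0):
    """Flag resources whose average utilisation is below the threshold."""
    return [r for r in inventory if r["avg_cpu"] < cpu_threshold]

idle = idle_resources(resources)
savings = sum(r["monthly_cost"] for r in idle)
print([r["name"] for r in idle])                 # ['demo-vm-1', 'old-db-copy']
print(f"Potential monthly saving: ${savings:.2f}")  # Potential monthly saving: $235.00
```

Even this crude threshold makes the spend-versus-value question concrete: the candidates for shutdown come with a number attached.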
Helen Beal: I think it's related to some of the sustainability and GreenOps things as well. In the AIOps space, we've been having quite a lot of conversations about data and the fact that it seems like you might need a lot of data, and therefore a lot of storage, to get the information that you want. Making the differentiation between data at rest and data in motion is quite important in that context. Additionally, AI steps in again; obviously that's part of AIOps, but another way AI can help us in your classic big data scenarios is that, increasingly, it can identify where you've got data that's not being used, where there are no APIs or anything calling it, and it's not being used as an asset by any of your applications. So increasingly AI is becoming like a data housekeeper that can tidy up and save us space. And that's good from a planetary perspective, as well as from a personal or company financial perspective.
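A data housekeeper of the kind Helen describes could start with something as simple as flagging datasets with no recent reads. This sketch uses invented sample data and a hypothetical one-year retention window:

```python
from datetime import datetime, timedelta

# Invented catalog entries: dataset name, last read timestamp, size in GB.
now = datetime(2023, 6, 1)
datasets = [
    {"name": "clickstream-2019", "last_read": datetime(2021, 3, 1), "gb": 800},
    {"name": "orders-current",   "last_read": datetime(2023, 5, 30), "gb": 120},
]

def unused(datasets, now, max_age_days=365):
    """Return names of datasets not read within the retention window."""
    cutoff = now - timedelta(days=max_age_days)
    return [d["name"] for d in datasets if d["last_read"] < cutoff]

print(unused(datasets, now))  # ['clickstream-2019']
```

Last-read metadata is the simplest proxy for "no application uses this as an asset"; richer versions would also check lineage and downstream consumers before anything is archived or deleted.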
Are architects and developers being overloaded with security concerns when building cloud-based applications or adopting DevOps practices? [25:55]
Daniel Bryant: So, changing gears: at InfoQ, we’ve definitely seen a lot of attention being put on security now, with software supply chains and SBOMs, and lots of great technology going on in that space as well. When I go to conferences, I’m hearing anecdotally from developers that they’re getting a bit overwhelmed with this whole shift left. Some folks are even saying it’s more of a “dump left”, as in all the responsibilities are just being dumped left, rather than actually thinking about how developers can think more about security: I need the tools, I need the mental models. So I’d love to hear folks’ thoughts. Are you seeing folks in the wild caring more about this? Are they implementing more of the solutions, and what’s the impact on developers and architects?
Steef-Jan Wiggers: Yes, I see a lot of this in my line of work. Security is becoming more important. You’re right about the shift left: developers are given the burden of making things more secure when it comes to cloud platforms. They have to deal with the operational side of things too, with identity and access management, but also with what their functions, or whatever code they’re running, are accessing: what type of system, what type of security model is in place, predominantly or most commonly OAuth. And yes, that’s what you need to do: you go to the identity provider to get the token and then access the service. Some of the platforms even have something like managed identity, where the cloud platform takes that away, but then you have to be aware of it, and it comes back to giving the managed identity access to things like key vaults. So it all migrates onto that developer, who needs to be aware of all these nitty-gritty details. It can definitely be overwhelming if you’re, let’s say, less security aware.
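The token flow Steef-Jan describes, go to the identity provider, get a token, then call the service, can be sketched in a few lines. This is a conceptual shape only: `FakeIdentityProvider`, `TokenCache`, and the token format are invented for illustration, not any real cloud SDK’s API (real managed-identity client libraries handle this caching for you):

```python
import time

class FakeIdentityProvider:
    """Stand-in for a cloud identity endpoint, roughly what a managed
    identity calls on your behalf. Entirely hypothetical."""
    def issue_token(self, resource: str) -> dict:
        return {"token": f"tok-for-{resource}", "expires_at": time.time() + 3600}

class TokenCache:
    """Fetch one token per target resource and reuse it until it expires."""
    def __init__(self, provider):
        self.provider = provider
        self._cache = {}

    def get(self, resource: str) -> str:
        entry = self._cache.get(resource)
        if entry is None or entry["expires_at"] <= time.time():
            entry = self.provider.issue_token(resource)
            self._cache[resource] = entry
        return entry["token"]

cache = TokenCache(FakeIdentityProvider())
t1 = cache.get("key-vault")
t2 = cache.get("key-vault")  # served from cache, no second round trip
print(t1 == t2)              # -> True
```

The point of managed identity is that the platform performs the `issue_token` step for you, so the developer’s remaining job is the authorization part: granting that identity access to, say, the key vault.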
Abby Bangser: It’s definitely important. And I’m seeing more of a push towards it by leadership, which is usually the thing that pushes security onto the developers, right? Because developers, not every developer, but many, their first instinct is not to go straight to the SBOMs, right? If they’re being pushed for new features, that’s what they’re focusing on. But when the company has decided that the security risk is important enough, they will take on that responsibility and do that work. I think the interesting thing about security is that it’s going through a similar evolution and growth pattern that a lot of technologies go through, which is that the first solutions are made by experts and often need to be used by experts. Eventually, those start to get smoothed out and become easier for other people in the industry to use. And I think that’s maybe the phase we’re getting into now with security tooling: people are realizing that some of the early-days security tooling isn’t quite user-friendly, and the current tooling is trying to evolve into more user-friendly ways of working.
Matt Campbell: Yes, I would agree. There’s definitely a lot of research into SBOMs specifically, about questionable implementations and their value, and whether they’re actually delivering on what they say they’re going to deliver. And then, to your point, Abby, there are new frameworks coming out, like SLSA, that as just a product developer might be too in-depth for me to actually get into and understand how to do anything with. I think the gap we need to fill, circling back to Team Topologies, is treating security as an enablement function and finding ways to have that team build platforms that can help with some of these things, but then also enable through education, to make sure teams understand the important parts of what they need to do as they’re working through the code.
I would agree it definitely feels a lot like a dump left at this point, with all these things going on. The big push from leadership, even at the government level, to help shore up the supply chain is leading to a lot of, to your point, Abby, expert-driven implementations, which don’t necessarily translate well when you’re at ground level trying to get it into your code.
Is WebAssembly a final realisation of “write once, run anywhere” in the cloud? [29:22]
Daniel Bryant: Now I know we’re mainly focused on high-level techniques and approaches in our trend report, but I did want to dive a little deeper for a moment on a couple of cloud technologies I think are having an outsized impact. The first is WebAssembly, Wasm, and the second is eBPF, which we’re seeing a whole bunch of too. I’ve just got back from a fantastic event, QCon New York, where there was a really good presentation by Bailey Hayes from Cosmonic on WebAssembly components. I’ve also been reading a whole bunch from Matt Butcher and the Fermyon team about what they’re doing with WebAssembly. I’m really seeing this promise of write once, run anywhere, which I appreciate is a bit ironic coming from a Java developer. I’ve worked in Java for 20-plus years, and this was one of that platform’s original promises, but I think this is the next evolution of that promise, if you like.
Now, both Bailey and Matt and others have been making this case for reusability and interoperability. The idea with the WebAssembly components Bailey was talking about is that you could effectively build libraries in, say, Go and then call them from a Rust application; for any language that can compile down to Wasm, this promise stands, which I thought was really exciting for that true vision of a component model within the cloud. I also thought it was super interesting that you can build your application for multiple platform targets. Given the rise of ARM-based CPUs across all the cloud hyperscalers, and the performance and price points, this is really quite attractive. I know Jessica Kerr has talked to InfoQ multiple times about the performance increases and cost savings the Honeycomb team saw when they migrated some workloads from Intel processors to AWS Graviton processors.
So very interesting thinking points there. I’m also seeing Wasm adopted a lot as a cloud platform extension format. For example, I grew up writing Lua code to extend NGINX and OpenResty, and now we’re increasingly seeing Wasm take that space, being used to extend cloud-native proxies like Envoy and the API gateways and service meshes that this technology powers. In regard to eBPF, I’m definitely seeing this more as a platform component developer’s use case, so maybe we as application engineers won’t be using it quite so much. But I’m seeing it in implementations of the CNCF’s Container Network Interface, such as Cilium; a shout-out to my friend Liz Rice and the Isovalent folks doing a lot of great content on this. I’m also seeing a lot in the security space: the Falco project and the Sysdig folks are embracing eBPF for really nice general-purpose security use cases. Matt, have you got any thoughts on this too?
Matt Campbell: Yes, definitely. We’re seeing a lot of really interesting usages of this technology within observability, and then also, circling back, within security, because it allows you to get closer to the kernel, closer to what’s actually happening on your systems and within your containers. So you’re getting data about how your system is operating that is quicker, in some ways more accurate, and in some ways safer, less likely to be tampered with in the event of something malicious going on. That’s been the major usage I’ve seen. I’ll admit this is not an area I’m super well versed in, so I’m not going to make any predictions about where it’s going to go, but I think those current usages have definitely been really interesting.
How widely adopted is OpenTelemetry for collecting metrics and event-based observability data? [32:21]
Daniel Bryant: Fantastic, Matt. Fantastic. Should we move on to OpenTelemetry? A couple of folks highlighted OpenTelemetry, and we’ve mentioned observability several times in this podcast. I keep hearing “OpenTelemetry all the things”, because it is great work, and I’m lucky enough to work with folks in that space. I’d love to hear your thoughts on where you’re seeing the adoption and what you think the future is in this space.
Helen Beal: For me, in my AIOps world, it’s gone really rapidly from “this is coming soon” to “this is the de facto standard you should be using, everyone; it works really well”. Faster, I think, than I’ve seen anything else be adopted. Abby’s nodding, she agrees with me.
Abby Bangser: Yes, I just think it’s something that took a long time to get to version one in a lot of languages, but once it got there, it had been built by so many amazing people around the industry and was cross-vendor from the beginning. There were many vendors actively putting their engineering time into building this the right way, so I think it became quite easy to call it the de facto standard. And it’s so core to your applications that it needs to be that cross-language friendly in order to be used across your platform. So absolutely.
Steef-Jan Wiggers: Yes, I’ve seen it too. Like Helen said, I agree, I see rapid adoption too, because last year I’d never heard of it, but now I have, and even Microsoft is embracing it and putting it into Azure Monitor and such. At the last QCon London, there was a great presentation on OpenTelemetry, and I came across people like Honeycomb.io that have adopted it, so it’s cross-vendor. There’s a lot of adoption there too. I wish something similar happened in the IoT space, getting something standardized. Around eventing and event-driven architecture, you’ve got something like CloudEvents, with pretty rapid adoption; you see services left and right in the cloud, and even in the open-source world, with tooling around event-driven architecture built on CloudEvents. And similarly with OpenTelemetry, right? You’ve got this standard that they agreed upon, and then it becomes vendor-agnostic, and that’s super.
So you create telemetry, and then you export it, and then you can use any number of tools that all have their own little thing they can do with the telemetry. That’s what I’ve learned from people during QCon. Some of the vendors are supporting it: yes, we have our own little thing where we do something with memory consumption and execution and that kind of stuff. It’s pretty interesting. So yes, even I am excited.
Daniel Bryant: That’s high praise, Steef-Jan. High praise.
Abby Bangser: And on the vendor-agnostic bit, you can think of it as, “Oh, there’s no vendor lock-in; we get away from the vendor.” But as you say, what it actually creates is a requirement that the vendors optimize and provide something more interesting. It’s no longer that they’re competing at getting to the baseline of “I can collect your metrics, your events, your traces, and visualize them”. Now they have to be doing the next level of interesting things to gain customers and gain market share. That’s the really big power, in my mind, of something like OpenTelemetry or other open standards: the movement of the industry that happens once standards come about.
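The vendor-agnostic shape the panel describes, instrument once, export to any backend, comes down to a pluggable exporter interface. Here is a deliberately simplified pure-Python sketch of that fan-out pattern; the class names and span dictionaries are invented for illustration and are not the real OpenTelemetry SDK API:

```python
class ConsoleExporter:
    """Toy backend: just collects spans it receives."""
    def __init__(self):
        self.received = []
    def export(self, spans):
        self.received.extend(spans)

class VendorExporter:
    """Stand-in for any vendor backend; the app code never changes."""
    def __init__(self):
        self.received = []
    def export(self, spans):
        self.received.extend(spans)

class TracerPipeline:
    """One instrumentation pipeline fanning out to N exporters --
    the shape that makes the standard vendor-agnostic."""
    def __init__(self, exporters):
        self.exporters = exporters
    def emit(self, span):
        for exporter in self.exporters:
            exporter.export([span])

console, vendor = ConsoleExporter(), VendorExporter()
pipeline = TracerPipeline([console, vendor])
pipeline.emit({"name": "checkout", "duration_ms": 42})
print(len(console.received), len(vendor.received))  # -> 1 1
```

Swapping vendors means swapping one exporter in the list; nothing upstream of the pipeline, meaning all your instrumented application code, has to change. That is the lock-in escape hatch, and the competitive pressure, Abby is pointing at.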
Daniel Bryant: One more technology I’d love to quickly look at is Serverless. Last year we said “Serverless as a baseline”, and we see Serverless popping up all over the place. Steef-Jan, I was reading one of your news items recently, I think it was about Serverless for migrations, things like that. I don’t hear the word Serverless that much anymore, but is that because it’s becoming just part of the normal choices? We push the technology away, but we embrace the actual core products that use it. So what are your thoughts on adoption levels of Serverless?
Steef-Jan Wiggers: Well, I think Serverless just becomes “we have a managed service for you”. Usually a lot of these services have something like auto-scale in them, and you have units that you need to pay for, so there’s the micro-billing: it’s what you consume. It has all migrated into what we call a managed service. Sometimes you see it pop up: yes, we have a tier in our product that is now called Serverless something, a Serverless database or Serverless whatever type of resource. Whether it’s AWS, Google, or Microsoft, a lot of their services now have some Serverless component too, which means the auto-scale, the micro-billing, and all the infrastructure abstracted away so you don’t see it, making it easier for the end user, basically. And sometimes even dropping the name Serverless.
Daniel Bryant: Nicely said.
Matt Campbell: Yes, it’s that move from Serverless as an architectural choice, say building on Lambdas versus building on virtual machines, to, to Steef-Jan’s point, moving to managed services. I think that’s the transition we’re maybe seeing here. Serverless became that proverbial hammer and everything became a nail, and we’ve seen a few cases recently, some very overblown in the news, of companies moving from Serverless back to monolithic applications. But the move towards managed services really does fit the mold of the platform engineering approach: trying to reduce cognitive overload, shifting who is working on what. For most of our businesses, our customers are not buying from us because we built the best Serverless architecture; they’re buying from us because we built the best product. So having somebody who’s an expert in running those managed services run them for you is just good business sense in a lot of ways. I think that might be the shift we’re seeing here.
Abby Bangser: Absolutely, and I think one of the things we might be seeing at this stage is the value of Serverless coming out in other ways. Again, moving on from the architectural decision and asking, “What does Serverless give us?” It gives us scaling to zero. It gives us the ability to cost per request. We talked about FinOps already: your ability to understand where to cut costs has to depend on what your cost of customer acquisition is, what your cost of customer support is, and the cost of a request. That’s where Serverless really shined, and now people know that’s possible and they’re asking for it. Does it have to be done through a Serverless architecture? Absolutely not. But it’s something the organization now asks of their engineering team, and that’s coming out in other architectures as well.
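The scale-to-zero and cost-per-request properties Abby mentions are easy to make concrete with a back-of-the-envelope model. This sketch uses made-up but roughly Lambda-shaped prices; every number and parameter here is an assumption for illustration, not a quote of any provider’s actual pricing:

```python
def serverless_cost(requests, price_per_million=0.20, gb_seconds_per_req=0.1,
                    price_per_gb_second=0.0000166667):
    """Scale-to-zero billing: zero requests means a zero bill."""
    return (requests / 1_000_000 * price_per_million
            + requests * gb_seconds_per_req * price_per_gb_second)

def always_on_cost(hours=730, price_per_hour=0.05):
    """A small VM bills every hour whether it serves traffic or not."""
    return hours * price_per_hour

# An idle month: the serverless bill collapses, the VM bill does not.
print(f"idle month, serverless: ${serverless_cost(0):.2f}")   # -> $0.00
print(f"idle month, always-on:  ${always_on_cost():.2f}")     # -> $36.50
```

At high, steady traffic the comparison flips, which is exactly why Abby frames it as knowing your cost per request rather than Serverless always winning: the per-request model makes the break-even point something you can actually calculate.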
How is the focus on sustainability and green computing impacting cloud and DevOps?
Daniel Bryant: So we’ve talked about the changes in pricing several times there, and I’d like to wrap this together. Steef-Jan, you touched earlier on pricing relating to impact and sustainability, things like being charged on how much you are consuming, or arguably polluting. I’d love to get people’s thoughts on the adoption level of thinking about prices in architectures and choosing managed services, as Matt said there, and where you think that’s going in the future.
Helen Beal: I’m interested to hear the others’ views on how much of this falls into the SRE camp and into the SLXs, because I can’t be bothered to say them all, but it feels to me like this is naturally where the technology team figures this stuff out. What do other people think?
Steef-Jan Wiggers: Yes, I agree. It’s my day-to-day. You start thinking about the architecture, and then about the services, how you want to componentize, how you want to create the architecture, and then it boils down to: okay, what type of isolation do we or do we not need, what security, in the end? Then you come into the price ranges of the different types of products. If it’s, as I’ve recently learned the term, “air-gapped”, so the environment needs to be air-gapped, that means we end up in the highest SKU of almost all our services: networking, firewalls, everything.
And that comes with quite a cost. You’re talking about TCOs that run over 10, 20, maybe 30,000 a month. And do you really need that? Is it really a requirement? If you’re a big enterprise, presumably, but if you’re a small or medium business, I’m not so sure, and there are other ways to secure things. Even some of the cloud vendors are providing services that hit the middle ground: not the high-end enterprise feature set, not the low end, but something in the middle that is secured just enough.
What are our predictions for the future of the cloud and DevOps spaces?
Daniel Bryant: I’d like to take a moment now for some final thoughts from each of you. It could be a reflection on where we’ve come from, maybe looking forward to the future, something you’re excited about, or something you’d like the listeners to take away. So, shall we go around the room? Matt, I’ll start with you if that’s all right.
Matt Campbell: Yes, of course. Excellent conversation, some really cool things covered here. For me, the thing I’m most excited to see, and it feels like I’m harping on the same point, is anything we can do in the industry to reduce how many things we need to think about while we’re working, so that teams can focus on their area and have the biggest impact. I think that’s going to start to drive more innovation, and we get happier people. People are excited to come to work; they’re focused on a thing. They’re not trying to juggle ten different things and now figure out what an SBOM is: why are they throwing more acronyms at me?
So those kinds of innovations, merging AIOps and ChatGPT into platform engineering, and even the sustainability and FinOps work, finding ways to reduce costs and reduce the number of things we’re working with, are all good shifts in that direction. So if there’s a call to action, it’s: try to build things that move us in that direction. I think that’s a healthy way for our industry to go.
Daniel Bryant: Love it. Abby, do you want to go next?
Abby Bangser: Sure. I think anytime we have these kinds of conversations about trends, it’s hard not to note that a different word for them might be hype cycles. A lot of things can feel like hype cycles, but the power behind what creates that hype is that there is a nugget of opportunity there. What ends up happening is that it’s easy to dismiss the nuance and the opportunity because of the marketing jargon and hype around it, which may be overselling something, or particularly overselling the application of it: not just what it can do, but also how widely applicable it can be.
And what’s been really interesting about the trends on this call, which I’ve really enjoyed, is that a lot of them are not new. We talked from the get-go about how we’ve been building platforms for decades across the team here. At the same time, we are doing new things, we are learning from our experiences, and we are actively seeing evolution to create more value for our users and for our industry. I think that’s really exciting right now.
Daniel Bryant: Fantastic stuff. Helen, do you want to go next?
Helen Beal: I’m going to be quite quick and just say that I think I’m going to go and have a chat with Bard about ClickOps ’cause that sounds very interesting.
Daniel Bryant: Fantastic. Steef-Jan, over to you.
Steef-Jan Wiggers: Yes, I’m really happy about all the developments happening in the cloud, the sustainability, the FinOps, but also seeing open source being adopted everywhere. I’m really enthusiastic about OpenTelemetry, and I’m interested in seeing CloudEvents adopted. Overall, what happens in open source helps us standardize and maybe even makes our lives a little bit better than having to juggle all kinds of standards and things you need to do. I hope the AI-infused services, like the Copilots and ChatGPT, can make us more productive and help us do our jobs better, so I’m excited about that too. There’s just a lot going on. I enjoyed talking to everyone; we learned a lot of things today, so that’s also cool.
Daniel Bryant: Amazing. I’ll say thank you so much all of you for taking time out to join me today. We’ll wrap it up there. Thanks a lot.