Month: February 2023
MMS • Sarah Hawley
Article originally posted on InfoQ.
In this podcast Shane Hastie, Lead Editor for Culture & Methods spoke to Sarah Hawley of Growmotely about employing and retaining people in the remote and hybrid working world.
Transcript
Shane Hastie: Hey folks, QCon London is just around the corner. We’ll be back in person in London from March 27 to 29. Join senior software leaders at early adopter companies as they share how they’ve implemented emerging trends and best practices. You’ll learn from their experiences, practical techniques and pitfalls to avoid, so you get assurance you’re adopting the right patterns and practices. Learn more at qconlondon.com. We hope to see you there.
Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today I’m sitting down across the miles with Sarah Hawley. Sarah is the founder of Growmotely and this week is in Austin, Texas. Sarah, welcome. Thanks for taking the time to talk to us today.
Sarah Hawley: Thanks, Shane. Thanks for having me. Yes, I am in Austin. People can probably hear, I’m an Aussie, so it’s fun to be chatting to another, what do they call us, antipodeans or something? But yes, I live in Austin, Texas, so I’m here this weekend mostly.
Shane Hastie: Cool. Okay, so probably a good start. Who’s Sarah?
Introductions
Sarah Hawley: Well, I am an Aussie entrepreneur, serial entrepreneur. I moved to the US in 2016 after I had turned all of my companies remote back in 2014 and started living the dream out here, which is where I wanted to live, and really experienced a lot of growth as a founder and a leader specifically when it came to culture and leadership after turning my companies remote. And so when I had my last exit in 2018, I was thinking about what did I want to do next? And I wanted to do something in technology and something that was scalable and the thing that I felt most passionate about was remote work.
So in 2019, I started working on Growmotely, which is a remote work platform designed to help any company anywhere in the world find and hire talent anywhere in the world with a focus on culture first hiring and the future of work as I see it, which I’d love to dive into with you here today. But yeah, that’s me. I’m an entrepreneur. I love to ski as well. I actually first moved to the mountains of Colorado, but now I live in Austin, as we said earlier. I’m a mom, I’ve got a baby boy, Luca, he’s almost two, so he is growing up. That’s a little bit about me.
Shane Hastie: Cool. So tell us a bit about Growmotely, culture first hiring for the future of work from anywhere.
Sarah Hawley: It is an end-to-end platform, but I think what we’re really strong at is bringing together people and companies. So basically from a company perspective, you set up a company profile, there’s a job board where you can post jobs. We have an applicant tracking system that’s been designed specifically for remote hiring, which is different to hiring in an office location based environment. So that’s really kind of one of the core strengths of our product. And then we do have a payroll product designed to simplify that kind of global payroll aspect. And we’ve built some culture tools and other things in there. But the real focus of our product, especially after being in market for 18ish months, since we went live in April last year, what we’re really focusing on is continuing to develop that kind of first stage of what we’ve built of helping companies and professionals meet each other on the platform and move through a process that’s designed specifically to get to know each other in a remote context with culture first being at the heart of what we believe successful hiring, successful work looks like in the future.
I think skills and experience are important, but the most important thing is that we feel aligned in terms of the vision and the mission of the organization and the culture, the values, the ethos, how we want to show up and work together. So provided we have the skills and the experience and the ability to do a job, we feel like we’re competent and all of that. What really matters is how are we working together and how are we feeling fulfilled in the work that we’re doing in the world.
Shane Hastie: So what is different about hiring for remote?
What’s different about hiring for remote work?
Sarah Hawley: I think that the first aspect is how do we get to know each other in this new context where we’re not coming into an office and sitting across from each other? And so what we’ve built is a multi-dimensional way to get to know each other through this process. So it starts with pre-screen questions, which are written questions that you answer when you submit an application. So instead of just sort of, you can hit apply to a million jobs, you actually have to answer questions in order to apply. So the company has been thoughtful about three to five questions they would like to know alongside your online resume slash profile about why you’re interested in this company, why you’re interested in this role. So that’s the first kind of starting point. The second phase is a video Q&A. So the company has also been thoughtful about a couple of questions they’d like candidates to answer via video, a couple of minutes per question.
And this now is starting to build like, okay, we’ve seen someone’s written responses, we’ve seen the video Q&A before we get to the point where each of us are going to invest an hour or whatever it might be in a Zoom or some kind of online interview. We also built instant messaging into the platform. So along the hiring journey, you can also just be sending what’s more like a text, which is that informal aspect that I think is really important because email can get very formal. And I think in this future of work, what we want to do is meet each other on a human level. So what we’re trying to do in this process is give people these different ways that they’re experiencing each other, video, informal instant messaging, more formal written content, the profile, the online interview. And we’ve found that through the hiring journey and using these different methodologies of communication, we’re starting to get a real sense of who each other are and build that relationship.
Shane Hastie: As a potential candidate, how do I vet the company culture, that it’s for me?
As a candidate how do I learn about a company’s culture?
Sarah Hawley: Yeah, that’s great. And that’s the other aspect of what we built, so I’m glad you asked. But we have the companies set up their vision, their values and their purpose as a part of their profile along with an about the company so we have an idea of what the company actually does. And the candidates also add their values in. And we have an auto matching algorithm that we will be developing over time into a culture matching algorithm. So we want to suggest people to companies and companies to people based on, yes, skills and experience and job openings, but also on these values matching attributes. So that’s a longer term project that we’re building as we have more data in our system. But that’s the kind of vision is that ultimately you go on Growmotely and you see a list of companies where you would actually thrive based on your profile and the things that you’ve put in.
And companies see candidates that would thrive in their organization. I think really the values, the ethos that forms the culture is the most important thing that we should be considering when we’re looking at where we’re going to go and work, or on the other side, when we’re looking at people we’re bringing into our team because team cohesion and having people engaged and happy in their work, and as an individual being engaged and happy in your work and feeling resonant with the way that the culture is in this organization is the thing that leads to more easy productivity is what I would call it. So I think there’s two different ways to create high performance. I mean, there’s probably more than two, but generally speaking, we can have a high performance culture that’s driven by pressure, lots of goals and metrics and driving, driving, driving. And that ultimately tends to lead to burnout and high staff turnover.
So while we may get that performance short-term, we’re probably going to have a higher turnover of staff and really going to have to work hard to find people that can actually thrive in that kind of organization long term. The other way that I think, which is a much nicer way to get high performance is having that alignment around the vision and the values and the skills and experience to be able to do the role because when we feel happy and fulfilled and engaged and competent, the work can just flow out a lot more easily. And so that leads to a higher level of performance in a way that’s really enjoyable and doesn’t see people burning out and wanting to leave the organization.
Shane Hastie: For an employer looking to bring people on remotely, how do I keep that level of engagement? So they’ve come to me because we have a values alignment, this looks good on paper, we’re starting to work together. How do I keep that engagement when we might never talk to each other?
Keeping people engaged and involved
Sarah Hawley: The first thing I would say is hopefully they do talk to each other. So creating some kind of environment where there is consistent communication and opportunity to meet each other and engage. And I think there’s many ways to do that. I don’t believe in meetings all day long and being on Zoom all day long. I don’t think that’s the solution at all. I think for smaller companies, one weekly, all company meeting is nice, for larger companies, maybe it’s a monthly where you’re kind of updating everyone, but maybe then the smaller teams are getting together once a week. So I think this kind of weekly face-to-face touchpoint where everyone has their cameras on, normalizing it for it to be okay that they might be dialling in from the yoga studio or the park or from bed if it’s late at night or whatever it might be, not necessarily having to be at your desk and all of that.
Because also when we’re working remotely and globally, we could have people in all different time zones and hopefully we’re embracing a more fluid, flexible way for people to work. But bringing people together, I think at least once a week, is a really nice way to create that relationship. And secondly, the most important thing is having good online easy ways to communicate. So in our company, we use Basecamp for example. Obviously Slack is another really popular one. Tools like this where you can easily be chatting. We also use Telegram so that we can send voice notes. Voice notes is a really incredible way of brainstorming and having ongoing collaborative conversations either in a group or even just with one of your peers where you can unpack things, convey things, spend some time with a concept or an idea, but not have to be doing it at the same time, not constantly having to be finding time where we can meet.
So I think in terms of communication and coming together, I think that’s really important. But to get back to the core of your question around the values and making sure that we’re actually embodying them so that we are attracting the right people in and then we’re maintaining that culture. So the first thing is to be honest with ourselves about the culture and the values of the organization and communicate them clearly and truthfully to the market when we’re looking for talent. Don’t put a value of flexibility if you’re ex-military and have created a pretty structured type of organization. There’s nothing wrong with being a more structured organization. So it’s important to communicate clearly, not going, oh, flexibility’s a buzzword, let’s put that out there, because you’re just going to attract people who will struggle in your environment if they’re truly much more flexible and flowy and you have a much more structured approach.
Trust that there are people who really thrive and desire structure. So when we’re doing a values exercise, and I think it’s important for companies to re-look at their values every year or so to really, see is this true to who we are? Are these the core most important things about how we are and are we communicating them authentically? And are we living up to them? Because we all stray from them from time to time. And what they should be is our kind of moral code or something to bring us back to center when we maybe have strayed off course a little bit. So our vision is the north star. It’s like where we are trying to go, what we’re looking out to the horizon for. Our mission is why? Why do we even care about this thing? And I think that’s really important because that’s the fire that fuels us every day to get to where we’re going.
And then the values or the ethos, which essentially becomes our culture, is how we show up together, how we like to work together, how we’re going to actually achieve this vision. And so being really honest with that and learning how to communicate it clearly is what’s going to attract the right people in. It also means you have an anchor point for conversations when people maybe are not thriving. And you can start to say, well, of these five values, what are they embodying and what are they not? And how do we have a conversation around where they’re not embodying some or one of our values and ask them openly, does this feel like something you want to be a part of and be in, or is it just feeling really uncomfortable for you? Because it’s also okay if it’s not the right fit, and how can we help navigate out to whatever’s next for you?
And what I’ve learned over the years of running organizations this way is it actually removes this idea of firing people or having people quit unexpectedly. Because what you’re doing is having ongoing open conversations around who we are as an organization and how we all want to show up together and making it really okay if that doesn’t feel right. Because there is no right or wrong in the world. There’s no right way to be necessarily. I mean, I know some schools of thought do, and we do have laws and things like that, but a much more open way I think of living and experiencing this life is to say there’s no right or wrong, there’s just different. And what works for me maybe doesn’t work for someone else, and it doesn’t make it good or bad, it just means let’s not try to work together because it’s going to be painful. Why not just find somewhere where you’re actually going to thrive and really love it? So that’s kind of I think the ideal scenario that we can strive for within an organization.
Shane Hastie: Another concept that we are seeing become more visible today is this concept of not just hiring for culture fit but hiring for culture add. How do I as a leader in an organization build a self-awareness of what’s next in terms of our culture? What is the add-on that we want to our culture rather than what we are today?
Hiring for culture add
Sarah Hawley: Yeah, I think that’s really beautiful. And I think it’s still important to be sure that we want that and that it’s not a trend that we are forcing ourself down. And I think that’s really critical because if we get into a conflict, like an inner conflict where the market says they want this, but we don’t actually want it, but we’re hiring someone to bring it in, but there’s a resistance, it’s just not going to work. So there’s still always that aspect of can we trust that who we are, there are people out there that are like us and that want to be with us? Where I see culture add working really well, and where I see it to be really important is like, wow, we really desire this. This feels really resonant, but we don’t yet know how to do it. We don’t necessarily have the tools.
And I think that’s very relevant right now. We’ve just taken this huge leap forward into a whole new way of working together. We have a really big opportunity to redefine how we’re working together as humanity, to redefine what work even means to us. It’s very exciting, it’s very empowering, it’s very enlivening, but we have basically no past conditioning or evidence or experience to kind of support this new model. So it makes a lot of sense for those leaders and organizations that are looking out and saying, yeah, I mean I really want to lead a company with more transparency, but be really scared about what does that actually look like, what does that actually feel like to be really transparent? I see that it’s a trend. I also do actually desire that for our organization, but I don’t know how to do it. I don’t know how to have really transparent conversations.
I don’t know how to share all the financials of the organization with the team and then take the team on a journey where they actually understand what those financials mean, whatever it might be. So to use that as an example, looking for someone who has that experience, who is an expert in that area, to join the team or to be an advisor or whatever it might be, to actually guide us and be intentional about transitioning the culture into something new, something with this added piece to it. So we’re not necessarily flipping the entire thing on its head, but we want to bring in more of whatever it might be. So I think that’s very, very powerful. Still requires that honesty and integrity with self, the organization of like, is this an actual desire for us to move in this direction? Because if we have that buy-in and that intention, we can communicate that out to the team, we can communicate out why we want to do it. It will resonate with people, it will hit, and then people will start to invest in like, okay, let’s do it.
Let’s be more transparent. Let’s see how we can start to have these conversations. And I do feel that it’s leadership’s responsibility to embrace something new first and as wholly and fully as possible. So if we bring that expert in, having the leaders actually really learning and really embodying and really going to that person and saying, how am I doing with this? Are we embodying it? Are we embracing it? Because we can’t expect that people who are further out from that core, like I don’t really like the hierarchy type system. I don’t like to say people who are at the bottom compared to those at the top. But if we think of the leadership maybe as the core, the nucleus of what’s happening and what’s evolving, we want people out at every layer and level to be able to see and be led by that nucleus into the future.
Shane Hastie: You mentioned the new way of working, the new world of work. What is it? Actually, I want to say it’s not it. What are the many possibilities that are in front of us today?
Many possibilities for new ways of working
Sarah Hawley: My personal why, the reason I exist is to inspire possibility. So I like the phrasing that you use there of that question. Yeah. So what I am excited about and what I’m leaning into is a world of work where work becomes much more deeply integrated into our life in a way that’s joyful because it doesn’t feel like work. The traditional definition, I think, when we hear the word work, what we think about is something I don’t really love doing, but I do because it gives me money so that I can live. And then outside of my work, I can do the things that I enjoy. The world of work that I see that we now have a greater potential of achieving is a world of work where I am doing something that I absolutely love and I’m really, really good at, with a group of people that I thrive amongst.
And so now it’s just amazing that I get to do that every day and I get paid for it. And because of remote work, I get to live where I want to live, build a lifestyle that I want to build, so work is actually now being very much integrated into the life versus also before we lived in a city where there were job opportunities, we probably lived somewhere that was fairly convenient for our commute. So there were a lot of decisions around our life that work was dictating and wasn’t necessarily integrated. In a lot of cases, it was overtaking our life, but on this false belief that, well, I have to do this thing here so that either one day when I retire or on my weekends or whatever, I can do the things that I actually love. The world of work I see is I love what I do, I do it from wherever I want.
I have this amazing lifestyle. That feels pretty good. The other aspect of it is sovereignty and empowerment as an individual. So am I trusted as an individual? Do I trust the people I work with to do what they are responsible for, accountable for in whatever way works best for them? And do I treat them and treat each other with respect and this kind of equality in terms of just like we’re all just human beings? There’s no one that’s above anyone else, there’s no one that’s below anyone else. So it’s really moving away from the hierarchical structures and moving more toward, okay, we are a company that’s made up of all these different people doing all these different things so that we can get to our North Star because of the things that we care about side by side. And as a leader, I’m not above anyone.
My area of accountability is to look out to the vision and to communicate that vision to the world so that everybody’s engaged with it, whether it’s our customers, the media, our team, whatever it is, that’s my role as the leader and to kind of navigate us toward that direction we’re going. But I can’t actually do that without all the different people kind of doing their bit and telling me what’s important so that I can course correct and do whatever I might need to do. And so I’m not above anyone. I’m not like the boss in the typical sense, in the traditional sense. I’m just a person in the team that happens to be the one responsible for the vision and driving the ship toward where, maybe not even driving it, but kind of making sure we’re on course for where we’re going. I’m speaking as myself as the CEO, founder of my company.
But a world where in that typical traditional model, let’s say a receptionist or a customer service person, they were seen as lower in the organization with less say and less power and less whatever. How about we’re all empowered? It’s not a power game. The finite power game is like, I have power, which means you don’t. Or if you have the power, then I don’t. To be empowered is we’re both embodied, we’re both empowered, we’re coming together. And my value as the customer service person is that I’m talking to our customers every day. I know what they want. I know what they care about. So my voice needs to be heard in our organization because I’m close to them. And hopefully I’m amazing at customer service and it lights me up and I love dealing with people and I’m highly valued within the organization, no more or less than the CTO or the whatever it might be.
So it’s really just about saying yes to the areas of accountability and responsibility and then trusting and empowering ourselves and each other to work in ways that work best for us. And I think that also comes down to the daily. If someone’s a morning person and their brain works really good at four, five, 6:00 AM like why would we not let them embrace that and do three or four hours work early in the morning, then maybe finish off with some of their more mundane tasks later in the day after they’ve gone to yoga and taken the dog for a walk? Why does that matter to us when they’re doing things? So I see that as a big part of the new world of work as well, where individuals are truly empowered to create their lifestyle from the bigger picture of where they live and all of that, but also down to how they’re living out each day.
Shane Hastie: That’s a long way from where many organizations are today.
Sarah Hawley: Yes, I know. Some days I’m like, man, I got my work cut out for me.
Shane Hastie: How do we get there? What’s the kaizen steps, the small steps? Thinking of our audience, the technical leaders, the influencers who are working in the technology teams and organizations, what can they do to move towards this?
Make change little by little and bring people along with you
Sarah Hawley: It is little by little. I think that’s important. Tackling things bit by bit, not trying to do all of it at once, where on Friday we leave the office at five and from Monday it’s a free-for-all, do whatever you want. I mean, it’s probably part of the challenge is nobody knows how to do it. So even if people are inspired by the idea of all of this freedom, when they find themselves in it, often they’re like, wait, I don’t know what to do. How do I even start? How do I know that I’m doing what is expected of me? I’m so used to being in an environment where I’m told what time to turn up, where to sit, what time I can have my lunch break, where the bathroom is, like everything’s planned out for me. Now I need to think about all of those things as well as my actual job.
And it can be overwhelming as well. So I definitely encourage just baby steps and also looking at taking the risk off the table. Because I think that’s one of the things that people struggle with and can be scared by is like, okay, that all sounds like some kind of idealist utopia that probably won’t work in reality and it’s too freaking scary, so I don’t even want to do it. There’s such a big risk. What if people don’t do what they say they’re going to do? And I can see all of those risks and all of those objections and all of those fears. So if we approach it little by little, and then we also take some risk off the table by seeing it as an experiment. What’s one thing we want to experiment out of all of that new vision that I painted? What’s one thing we want to experiment on and why don’t we just do it for a month?
And why don’t we talk amongst ourselves about the experience every week, for example, with a plan to circle up at the end of the month and say, what did we learn? What do we want to take forward, knowing that we can just roll back to what we were doing at any point in time? So the biggest risk you’re taking is a month long experiment or a week or whatever feels good for your organization. So let’s say you started with allowing people to work, if you’re already remote, allowing people to work whatever hours they want. So we’re going from all agreeing to work nine to five to all agreeing to work roughly eight hours, but whatever eight hours of the day we see fit. Do it for a week. Let’s just do it for a week and just let’s talk about it. Let’s get to Friday and talk about how everybody ended up just working nine to five because they were too scared to actually do something different or they didn’t know.
Or let’s hear from the one person who took the risk and actually worked through midnight each night because that’s when their brain wakes up. And let’s learn from them about how much more productive they actually were. And then next week, let’s all try again. Let’s see if we can break out of our pattern and see what’s possible. So just I think little by little and keeping the dialogue open, which even that could be scary, right? Wow. Actually talking to each other about changes and what a risk if we say we’re going to do this and we’re going to talk to people about it, but we want to roll back, that feels like risky in itself. So just little by little, whatever experiment you decide to take, just know that it is an experiment and you can roll it back. So maybe starting with that vision and going, okay, if we want to transform our organization over the next five years, maybe, if it’s a really big organization, or one year, or six months if you’re smaller.
Or it could just be, I want to transform my team. And I mean, that could be an approach in a bigger company as well is letting a couple of teams try something and then, I don’t know, that could be a way to do it. But looking at all of this stuff as experimentation and saying, where would we really love to be in X time? And then what do we just want to tackle one by one? Let’s try this. Let’s try that. And being really compassionate with ourselves that we’re going to mess it up. We’re not going to be able to do it really well right up front. I wrote the book on this stuff and I still mess it up. I still find myself off course, micromanaging because I’m stressed and I want something to happen.
So all of a sudden I start telling people what to do and when to do it. And then thankfully, because we have a really open culture, they’re like, “Hey, Sarah, back off. Remember empowerment and ownership?” And I’m like, “Oh, gosh. You’re right. I’m sorry. I’m sorry. I’m stressed and this is what’s going on.” But I think this new world of work as well is about not being perfect. It’s about having these ideals that we want to hold ourselves to, but knowing that it’s okay that we’re not going to be perfect. We’re going to go off course, we’re trying things, we’re learning, we’re growing, and then coming back to centre.
Shane Hastie: Some really interesting ideas in there. If people want to continue the conversation, where do they find you?
Sarah Hawley: LinkedIn is a good place. Yeah, I’m pretty active on there and I do share a lot of my thoughts and ideas, and I’m down to connect with anyone. And from there, yeah, feel free to reach out to me or what have you. But you can also find me on Growmotely and all of our social medias. I love this stuff and I’m very aware that we are absolutely pioneering this space. We’re very out there compared to where the standard is. For me, that’s important though that some of us are being that change and we’re being that light and we’re showing the world what is possible. Because the scariest thing about making these changes is not knowing if it’s possible.
And having companies, organizations, leaders, teams, embodying some new way of being that looks good to us and talking about it openly and talking about where they’re doing well and where they’re not is really, really helpful. So it’s a whole new world. Like you said, it does overwhelm me sometimes thinking about how far some organizations need to come, but I’m bullish on this future and it’s what matters to me. This is my life’s work.
Shane Hastie: Sarah, thank you so very much.
Sarah Hawley: My pleasure, Shane. Thanks for having me. It was great to be here.
MMS • Zack Jackson
Article originally posted on InfoQ.
Transcript
Jackson: I’m Zack Jackson. I’m here to talk to you about Module Federation, a mechanism for code distribution at runtime. What is the motivation behind Module Federation? Why did we even build it? It mostly came from personal experience. Sharing code is not easy. Depending on your scale, it can even be unprofitable. The feedback loop for engineering is often quite laborious and slow. What I’m really looking for here is some system that allows parts of an application to be built and deployed independently. I’d like to make orchestration and self-formation simple and intuitive. I’d like the ability to share vendor code without the loss of individual flexibility. I don’t really want to introduce performance hits, page reloads, or a lot of bandwidth overhead, that would generally have a drawback on the user experience. Let’s set the stage here. Module Federation is ideal for developing one or more applications involving multiple teams. This could be UI components, logical functionality, middleware, or even just server code or side effects. These parts can be shared and developed by independent teams. I want little to no coordination between different teams or domains.
The Problem
Let’s understand what the problem is with the technologies we have today. Sharing code is painful and a laborious process. Existing options often have limitations that we do end up hitting. As an example, we have native ESM, but that requires preloading. It only works with ESM. It has a pretty high round-trip time. The sharing mechanism is pretty inflexible. Native ESM usually depends on the ability to share code based on the asset’s path. If you have multiple applications with different folder structures and asset path structures, the chances are you might not be able to leverage the reusability that ESM has. Overall, ESM is close, but I believe that it still needs some form of an orchestration layer. This is where the webpack runtime really comes into play. If we look at a single build, there are challenges there as well. It’s slow. Any change that you make requires a full rebuild. That feedback loop and monolithic structure really does get in the way at scale. We do have something from previous versions of webpack, which was also a little bit hit and miss. That was the DLL and externals plugins. DLLs or externals also have a few drawbacks. There’s a single point of failure. Everything has to be synchronously loaded upfront, regardless of if you use it or not. It requires a lot of coordination and manual labor just to deal with. It also does not support something like multiple versions, which makes it very challenging to depend on a centralized single point of failure system, where you have to be very tactical about how you can upgrade your dependency sets across multiple applications.
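The single-version limitation of externals can be seen in a minimal webpack configuration sketch (the package names here are illustrative, not from the talk). With `externals`, the dependency is cut out of the bundle and every consuming build must agree on one globally provided copy:

```javascript
// webpack.config.js (sketch): sharing dependencies via externals.
// React is excluded from this bundle and must already exist as a global
// (e.g. loaded via a <script> tag) before any consuming build runs.
module.exports = {
  // ...entry/output omitted for brevity...
  externals: {
    react: "React",          // import "react" resolves to window.React
    "react-dom": "ReactDOM", // import "react-dom" resolves to window.ReactDOM
  },
};
```

Because a single global can only hold one copy, two versions cannot be served side by side, and every application consuming that global has to upgrade in lockstep — exactly the coordination and single-point-of-failure problem described above.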
We Need a Scalable Tradeoff
What we really are looking for here is good build performance, something that has good web performance. A solution for sharing dependencies. We need something that’s a little easier and more intuitive in general. What we’re looking for is something similar to npm meets Edge Side Includes, but without the overhead that comes with those approaches. The tradeoffs that we make without Module Federation usually reveal themselves in the form of operational overhead, complexity, or slowing development and release cycles, not to mention all the additional infrastructure you would need outside of your code base to stitch the application together. This is where Module Federation comes into focus. I believe that Module Federation, or at least the technology behind it, is an inevitability for the future of web technologies.
What Is Module Federation?
What exactly is Module Federation? The easiest way to describe it would be a similar concept to Apollo’s GraphQL Federation, but it’s applied to webpack modules for an environment-agnostic universal software distribution pattern. There’s a few terms that we often use when we talk about federated applications. The first one is something that we call the host. The host is considered the consuming webpack build. Usually, it’s the first one that’s initialized during an application load. We also have something called a remote. A remote refers to a separate build, where part of it is being consumed by a host. We have something called bidirectional hosts, which is a combination of a host and a remote application, where it can consume or be consumed, which would allow it to work as a standalone application for individual development environments, or allow it to work as a remote where parts of it can be consumed into other standalone web applications. The last one that we have is a newer term, and we usually refer to it as omnidirectional hosts. The idea behind an omnidirectional host is it’s a combination of all of the above. It’s a host that behaves both like a host and a remote at the same time, meaning when an omnidirectional host first boots, it is not aware if it is the host application or not. Omnidirectionality allows webpack to negotiate the dependency supply chain at runtime between everything connected to the federated network. Then determine which dependency is the best one to vend to the host itself, as well as share across the other remote applications too.
The Goals – What Are We Trying to Achieve?
What exactly are the goals of Module Federation? One, I would like dynamic sharing of the node modules at runtime with version support. I really want team autonomy, deploy independence, and I want to avoid something that I refer to as the double deploy. A double deploy usually is that process of if this was an npm package, you would have to apply the changes to the npm package. Publish that package. Go to the consuming repo, install the package update. Then open a pull request, and push or deploy that to some ephemeral environment to see your change. Hence why we call it a double deploy. You have to release two things in order to see one change. If you have more than just one application, this double deploy convention starts to get really out of hand. Imagine you had something like a header, and you had eight micro-frontends or independent applications, independent experiences but they all generally use the same navigation UI. I would first have to release a copy of the nav. I would have to open pull requests to each individual code base, and then create a merge train to merge each one independently. This is not a very scalable solution, especially if you’re trying to have a consistent experience across the applications. Synchronizing a package update everywhere all at once, is not easy to do. Another goal that I want is the ability to import feature code which is exposed from another team’s application. I want to be able to coordinate efficiently at runtime and not at build time. That is really where Module Federation stands out from most of the other approaches. With things such as DLLs or externals, it’s all coordinating this at build time. What I really want is the ability to dynamically coordinate dependency trees and code sharing at runtime.
In addition to those first set of goals, what do I actually need to make Module Federation something that’s viable? One is redundancy. I need to make sure that I have multiple backup copies that can vend any of their code to anyone else connected to the network. I would like the capability to create self-healing applications, where webpack has mechanisms that would allow me to automatically roll back to previous parts of the graph in the event of a failure. When designed well, it should be extremely hard to knock one of these applications offline. I also still want the ability to have versioned code on the dependencies and on the remotes themselves from another build. While versioning is great, there are also going to be times where I want the opposite, and I would like to always have the evergreen code where it’s always up to date, always have the latest copy on the next execution or invocation of that environment. Lastly, I’m really looking for a good developer experience. I want it to be easy to share code, and work in isolation without impacting performance, page reloads, or degrading the user’s experience.
In summary, what I’m really trying to build here is something that just works. With several approaches in the past, we often find any code sharing options that we come across to usually be limited to a specific environment, such as user land. If we want to try and apply some code sharing technique to another environment, we usually would have to have a separate mechanism in order to achieve some similar solution. What I’m really looking for here is distribution of software that works everywhere, works across any compute primitive in any environment, such as node, the browser, Electron, React Native. I’m looking for simplicity with little to no learning curve. I don’t want to have to learn a whole framework or be locked into framework specific patterns. I really want to leverage the known ways of working with code today.
How Simple Is It? (Configuring and Consuming a Federated Module)
The real question is, how simple is it? Let’s take a look at configuring an app that’s going to utilize and consume a federated module. These are two separate repositories, two separate applications. We have application A, and we have application B. Inside of there, I can see that application A has a remote referenced as application B. Application B is going to expose button and dropdown. I’m also going to opt in to sharing some additional dependencies, where some of them can support multiple versions, and other ones, such as React or react-dom, I really need to be singletons in order to ensure that we don’t have any tearing of state or context between these individual applications.
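As a rough sketch of what that configuration might look like (the app names, port, and exposed module paths here are assumptions for illustration, not the actual slide contents):

```javascript
// Hypothetical webpack configs for the two applications.
const { ModuleFederationPlugin } = require("webpack").container;

// --- application B (the remote): webpack.config.js ---
const appB = {
  plugins: [
    new ModuleFederationPlugin({
      name: "app_b",
      filename: "remoteEntry.js",
      exposes: {
        "./Button": "./src/Button",
        "./Dropdown": "./src/Dropdown",
      },
      shared: {
        // singletons: never duplicated, so React state/context can't tear
        react: { singleton: true },
        "react-dom": { singleton: true },
        // a dependency like lodash can tolerate multiple versions
        lodash: {},
      },
    }),
  ],
};

// --- application A (the host): webpack.config.js ---
const appA = {
  plugins: [
    new ModuleFederationPlugin({
      name: "app_a",
      remotes: {
        // "name@url" points at B's runtime entry file
        app_b: "app_b@http://localhost:3002/remoteEntry.js",
      },
      shared: {
        react: { singleton: true },
        "react-dom": { singleton: true },
      },
    }),
  ],
};

module.exports = appA; // each repo would export its own config
```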
Now that you’ve seen what the configuration looks like, let’s see what the consumption of federated code would look like. What you’re going to see here is a code snippet from application A. To demonstrate how flexible Module Federation is, I’m consuming the button as a dynamic import from application B, but I’m going to consume the dropdown statically and synchronously from application B as well. As you can see in the JSX here, my dynamic import is using React.lazy wrapped around a suspense boundary, but my dropdown is just standard JSX. What’s really cool about this is you can require asynchronous code distributed and coordinated at runtime. I can do so in a synchronous or asynchronous manner.
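A minimal sketch of what that consuming code could look like in application A, assuming application B exposes `./Button` and `./Dropdown` under the remote name `app_b` (all names here are illustrative, and this requires a federation-aware webpack build to run):

```jsx
import React, { lazy, Suspense } from "react";
// Static, synchronous-looking import of federated code: webpack resolves
// it from the remote container at runtime.
import Dropdown from "app_b/Dropdown";

// Dynamic import: the button's code is streamed in from application B
// only when this chunk is needed.
const Button = lazy(() => import("app_b/Button"));

export default function App() {
  return (
    <div>
      <Dropdown options={["a", "b"]} />
      <Suspense fallback={<span>Loading…</span>}>
        <Button label="Buy now" />
      </Suspense>
    </div>
  );
}
```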
When to Use Federated Modules
Now that you know a little bit about Module Federation, when should we actually utilize the technology? There are a couple categories that it fits quite nicely into, one of them being global components. These could be your headers, footers, the general chrome or shell of your application. They’re typically global, and they’re a very good first candidate if you’re looking to federate something. It’s also very good for features that are owned or authored by another team, or consumed by other teams’ applications that are not strictly owned by the team providing that feature in the first place. It’s also very useful for horizontal enablement. If we think about the platform team, your analytics, personalization, A/B tests, all of those types of things usually require sticking JavaScript outside the scope of the internal application, which causes a lot of limitations, especially if we think about analytics or A/B tests. If I can’t integrate as a first-class piece of software inside of the React tree, I’m usually having to overwrite the DOM with a very primitive A/B test that can’t really hook into state, context, or any of the other React lifecycle hooks.
The last one would be systems migration. There’s a couple different ways where you could see this show up based on the patterns that you’re trying to use. One of them is the LOSA architecture, which stands for Lots of Small Applications. LOSA systems generally depend on mounting several small parts of an application onto independent DOM nodes, under their own React render trees, or whatever other framework you might be using. Module Federation is not specific to React. It’s also very useful for standard micro-frontends in whatever way, shape, or form that you choose to design them. It also offers the opportunity for a polylithic architecture versus a monolithic architecture. A polylith is really a modular monolith, where pieces can be interchanged easily, but the application as a whole still behaves in a monolithic manner. It’s just not deployed in a monolithic manner.
There’s also the good old-fashioned strangler pattern to get rid of a legacy system. There’s a couple ways where federation could be very handy. One way is you could federate your updated application code into the legacy system, and slowly strangle it out that way. The other approach could be that you make the legacy system federated, and you start building in the new modern platform that you’ve got. What you do is you import pieces of the legacy monolith as needed into your new development environment, which completely reverses the strangler pattern. Instead of taking new and putting it into old, we could take old and pull it into new, and strangle it out slowly that way. There’s also a very unique advantage here for multi-threaded computing, especially in Node.js. We have also been able to make this work with web workers in the browser. Since you have a federated application that’s exposing functionality inside of a worker on the browser or server, I could hand off any specific part of my application to another thread to handle processing seamlessly.
Component Level Ownership
In order to get the most out of Module Federation, we do need to design software in a slightly different way. Considering that this is a very different paradigm from the traditional ways we’ve always built and deployed software, it also requires some new paradigms for how we actually build code that’s designed to work in an at-runtime orchestrated environment. Component level ownership really tries to establish a pattern of software that shifts as much responsibility to the component as possible. How this would usually show up is as something I refer to as a smart component, where instead of having a page container that does all the data fetching and all the business logic, and passes all of this data to dumb components, we reverse this process a little bit. We say, let’s have smart components, and dumb pages. These smart components should be able to work in a near standalone manner, and they should also remain self-sustaining.
We also want to really focus on colocation. We want the code to be well organized, easy to understand, maintainable, and in general, reduce the fragility. When you have a smart page and dumb component, what generally ends up happening is, all of the page logic starts to get bunched up in that page container. This can introduce risks and challenges to scale. Because if you need something like let’s say, inventory, what exactly is driving the API call and the data transformations to retrieve inventory and supply it to a specific component? If it’s inside of the page, and the page is feeding data to several components, this could end up being pretty hard to untangle and actually understand what feeds what data? If somebody doesn’t fully understand the data flow, and they alter some piece of your query, or the shape of data, you could risk impacting the stability of your code base.
We really want loosely coupled code. This is even before Module Federation came along, loose coupling and modularity has always been something that’s been encouraged. With Module Federation, though, it really gives us a strong reason to use loosely coupled code and to build things more in that pattern. If things are loosely coupled, you could almost independently mount it. When this component is mounted or rendered, it would more or less just work, making it very portable, whether it’s Module Federation, npm, or just a monorepo with some symlinks. The loose coupling of the code means that it could fetch its own data, be self-sustaining, regardless of distribution patterns.
The last area that we really want to think about with component level ownership is the ownership portion. What are the ownership boundaries, and when do they apply? That is something you really want to choose wisely. Because you need to understand, where does the scope of what one team owns end, and where does the scope of what another team owns begin? With clear ownership boundaries in place, it becomes a lot easier to maintain an application and split it up so that the responsibilities of components owned by certain teams are resilient, easily known, and don’t have a lot of bindings or data dependencies associated with the page itself. The one thing that I would caution is, beware of granularity. Component level ownership is very nice, but you don’t need to make it super granular. This is again where understanding the boundary of ownership is important. An example of being too granular could be making a title use component level ownership. There’s no need for it to do that. Let’s take something like PurchaseAttributes of a product page, where it’s a decently sized feature, and it handles several responsibilities that may be owned by a different work stream. It might need to get inventory, get sizes, colors, price, anything like that. It’s very easy to draw those ownership boundaries or boxes around who owns what. What I would suggest is trying to break it up into what is a complete feature or zone on the page, and whose responsibility is that? Who owns that? Who works on it? That would be the primary place where you would want to implement something such as component level ownership.
Difference between MFE (Micro-Frontend), and Components
With component level ownership, a question starts to emerge about what’s the difference between a micro-frontend, and something that uses component level ownership, especially in a Module Federation world. A micro-frontend is a pretty loose term these days. It’s meant several things as the years have gone by. A micro-frontend can be small, it can be large. It could be whole user flows. There’s not really a good boundary on, what exactly is the scope of a micro-frontend? With component level ownership, what we’re looking for is almost a hybrid between a normal React component and a micro-frontend. The granularity here can be friend and foe. You don’t want to federate everything or make everything a micro-frontend, it just doesn’t make sense. A micro-frontend is usually mounted utilizing some form of serialized communication bus, or browser events, or network stitching to assemble these independent parts of an application. Federated components, on the other hand, can coexist in a single application tree. They could also be used for a traditional micro-frontend where you mount several pieces of an application onto independent DOM nodes. The key here is really that level of flexibility. If I want to hook into React context, and I want micro-frontend-like resilience, independence, and autonomy, component level ownership and Module Federation really marry the two together in ways that just haven’t been possible in the past.
To go into a little bit more detail here. What federated components offer us in comparison is you can pass functions. You could share context. You can inherit and compose from class-based components. It’s designed to behave in a self-sustaining manner. It’s modeled loosely on micro-frontend concepts with an effort to remove the drawbacks that would usually come with micro-frontends as we think about them today. What we do want from the micro-frontend concept in general is, if something breaks, we don’t want it to crash the entire application. Component level ownership and Module Federation give us these types of capabilities. It can all exist in a single app tree, go through a single render pass, but it can avoid crashing the entire application in the event that one of these components fails for whatever reason. The components themselves can be self-sustaining, as I said, but we also don’t want to lock other teams into something that they can’t really plug into, or recompose in ways that might make sense. While we have this concept of a self-sustaining smart component, we also want to expose the base primitives that make up the smart component, which allow teams to utilize it through Module Federation in multiple different ways. The primitives that I tend to expose most often are the data element, whether it’s a GraphQL fragment, a fetch call, or any other data fetching system, which I would treat as an independent export inside of the file. I would also still want access to the dumb component, where it doesn’t fetch any data on its own. It just expects props to be sent to it, and it will render based on those props. Then, of course, I still want to export out a smart component, because in the majority of use cases, teams aren’t really going to be passing a whole lot of data to smart components that are owned by other teams.
Where to Use It
Where exactly should we use Module Federation? Knowing where to use it is just as important as knowing when, with some slight nuances between the two. The one big thing that always stands out to me is that exposing arbitrary code can lead to brittle systems. Leveraging federated modules along well-defined ownership boundaries is generally the safest bet. Federating modules should be strategic, and have patterns or contracts that are relatively standardized across an organization. I’m a big fan of Conway’s Law. Conway’s Law essentially states that a code base will more or less represent the organizational structure at a company. That works fine most of the time, but if everything operates under a Conway’s Law type structure, it starts to break down when you have shared components, horizontal components, or horizontal enablement teams, where the code base is unable to mimic the organizational structure of the business, because some of that software is used across multiple different teams yet still owned by an individual delivery team. Module Federation is extremely useful for being able to break out of Conway’s Law when needed.
Example – Component Level Ownership (The Before)
Let’s see an example of component level ownership just to tie everything together here. This would be the before. What we would have is a product display page. It accepts some props that would come from a data fetch, or a container. We would pass some of that information into a component that we’re going to call PurchaseAttributes. This structure is very dependent on data supplied by the host system, which means that it could introduce fragility if PurchaseAttributes was, say, federated. If the data changes in any way, it could break the component, or if the component changes in any way and the data pipeline has not been updated inside of the host system, it could also break the component. What we want to really do here is try to limit the blast radius and surface area of what API we expose to consuming teams.
Example – Component Level Ownership (The After)
The after, when we’ve implemented something like component level ownership, would look more like this. The product page would get less information. Really, what we would want PurchaseAttributes to know is a very small API scope, such as, what’s the product ID, and maybe what’s the selected color of the product that was chosen. With something like that, it’s a lot harder to actually break a component, because that surface area and all of those data bindings are not really depending on the parent page. The parent page is dumb. All it does is provide some very simple hints to a smart component. The smart component can use those hints as instructions on what it should do.
What Does a CLO (Component Level Ownership) Look Like?
If we drill in one level, let’s see what component level ownership actually looks like on a component itself. In here, you can see I’ve got the three export rule in place. The first thing that I expose here is the dataUtility. It accepts one argument, which is the product ID, and it can go out, fetch the product data and return it as JSON. The second thing that I’m going to expose here is the actual PurchaseAttributes component itself, but the dumb component. What that one expects is you give it props, and it will render. The nice thing about having these split up is that maintainability becomes quite easy. If you want to know how PurchaseAttributes works, all you need to do is go to the PurchaseAttributes component and you can easily find, what feeds it data? How does it work? What does it accept? The last thing that I would export out of here would be the smart component, which is really just a combination of the dataUtility and the dumb component together. Now what I end up with is the smart PurchaseAttributes component. The only contract that I have with the host system is I expect it to receive a prop that has the product ID in it. If they send me that prop, which I would consider a hint, this component, server or client side, will fetch its own data and pass that data into the dumb component.
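The three export rule can be sketched in a framework-agnostic way. The names, the render-to-string stand-in for a React component, and the injectable `fetchImpl` parameter are all illustrative assumptions, not the talk's actual code:

```javascript
// 1. The data utility: one argument (the product ID), returns JSON.
//    fetchImpl is injectable so tests don't need a network.
async function getPurchaseData(productId, fetchImpl = fetch) {
  const res = await fetchImpl(`/api/products/${productId}`);
  return res.json();
}

// 2. The dumb component: props in, rendered output out. No fetching.
function PurchaseAttributesView({ name, price, sizes }) {
  return `${name}: $${price} (sizes: ${sizes.join(", ")})`;
}

// 3. The smart component: the data utility and the dumb component
//    composed. The only contract with the host is the productId "hint".
async function PurchaseAttributes({ productId }, fetchImpl = fetch) {
  const data = await getPurchaseData(productId, fetchImpl);
  return PurchaseAttributesView(data);
}
```

Because the data utility is its own export, a consumer can take just the dumb view and feed it their own data, or take the smart component and only supply the hint.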
Federation + CLO
I want to take it one step further here. We’ve got component level ownership, which is really useful for any organizational structure, especially if we look forward at the future of React 18, where we’re going to be getting native async data support out of the box. Component level ownership is geared toward the future, while still offering some solution for what we have today. If we combine Module Federation and component level ownership together, there’s still one additional step that I usually prefer to put in place. I call this extra step the proxy module. The idea being, I don’t want to expose arbitrary pieces of the code base to other applications to consume. That could become risky. It would also be harder to find, what does an application or team expose that we can consume? With a proxy object or proxy module, what we’re really going to do is import the actual base feature from some code base inside of that team’s application, wrap it in a function. Then export it back out, wrapped around this proxy function. What this lets me do is create very easy, intuitive ways to create contracts, to create tests. To guarantee that the component adjustments or rewrite that I’ve done to PurchaseAttributes internally does not break the contract expected under the proxy export that would be exposed out of that team via Module Federation. If it does break that contract, because it is proxied out, we have the opportunity where we could take the existing contract and transform the data into the new shape that our PurchaseAttributes component might anticipate under the hood. It gives us a little bit more future proofing. It also is really useful, because now we have a very simple way to create local unit tests where you can still test that PurchaseAttributes works, but you have another file, where you can go and actually test that if I supply the agreed upon contract to this proxy module, it will still render and return the desired functionality we are already anticipating.
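A minimal sketch of the proxy module idea, with hypothetical names: the internal component has been rewritten to a new prop shape, and the proxy adapts the agreed external contract to it, so federated consumers keep working:

```javascript
// Internal component, recently rewritten: it now expects `id`, not the
// `productId` that external consumers were promised.
function InternalPurchaseAttributes({ id, currency }) {
  return `attributes for ${id} in ${currency}`;
}

// The proxy module is what actually gets exposed via Module Federation.
// It accepts the agreed contract ({ productId }) and transforms it into
// the new internal shape, keeping the external contract stable.
function PurchaseAttributesProxy({ productId }) {
  return InternalPurchaseAttributes({ id: productId, currency: "USD" });
}
```

A unit test against the proxy can then assert the agreed-upon contract directly, independent of internal rewrites.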
SSR and Hot Reloading Prod
Now that we’ve gone over Module Federation, and component level ownership, let’s tie this all back together. Next.js is a very popular framework. It’s also not really good at micro-frontends. I think, in general, any server-side rendered application tends to be quite tricky to create a micro-frontend pattern out of. You either have to depend on stitching layers, or infrastructure, or ingresses to bounce the traffic to the right application. If you’re trying to just change a component or something like that, you pretty much have to npm install it, because when you go granular enough, there’s not really a good way to stitch an individual component into another application and server-side render it. The first thing that we’re going to see is the consumer. I’m importing other components from an independent remote. This is the independent remote. Currently, it says, Hello QCon London. We’re going to change this component. We’re going to commit it. We’re going to deploy this repo out to production independently. We can now see that it is in the process of building. I come back to the page, I still see my old content. I refresh it. I now have the new content in here. If you check it out, it was last deployed 13 minutes ago versus the independent remote, which was deployed one minute ago. If I look in the HTML of this checkout application, you can see the HTML is there with the updated code that got server-side and client-side rendered.
Why this is really powerful, and what you might not have really seen or noticed in that video is that homepage or landing page that you saw, it’s not just a page with a header and a homepage body content. That page doesn’t actually exist in the checkout application itself. The header part comes from an independently deployed repo, and uses Module Federation to pull the navigation in to this checkout application. The body content or the homepage that was there doesn’t exist in the checkout app either. It’s actually using Module Federation as well, and it’s pulling that runtime through a software stream into this checkout application, performing the data fetch and the server or client side render and hydration as well. The content that we just changed, our little Hello QCon London component is federated inside of the homepage content itself. Which means that the checkout application is using Module Federation to pull in the homepage body. The homepage body is using Module Federation to pull in another application’s hello component. This also works with circular imports and nested remotes without sacrificing round trip time, hydration mismatches, or anything else.
In a traditional world, I would usually have to take the Hello component, package it on npm. Publish it to npm. Go to the home application. I would have to install the Hello component into the home application. Publish that to npm as well. Then go over to my checkout application or shell and install the latest copy of the homepage. It would require at least 20 minutes of work to actually get the change propagated through all the applications that we need to in order for it to show up the way that we just saw it. In a Module Federation world, however, it took 56 seconds to propagate across all applications at all layers without any other coordination required from anybody else. In short, it really gives us that just works feel.
Summary
This is really just the beginning. You can imagine what’s going to come next, now that we have server-side rendering, client-side rendering, and React Native support. We even have WebAssembly support for polyglot, at-runtime orchestration. Some of the things that we’re going to look at doing in the future are making the entire npm registry available via Module Federation. My ultimate goal is to try and create a world where we still have a resilient system that can be versioned, but does not require redeployment or software installations in order to retrieve updates.
Questions and Answers
Mezzalira: There are different ways that you can use Module Federation. What is the craziest implementation that you would see for Module Federation? Instead, which one is the most useful one from your perspective?
Jackson: The craziest one that I’ve seen is using federation inside of cars to actually control the microcontrollers, so that when teams need to deploy updates to the cars themselves, to control the various onboard systems of the vehicles, they can do it at runtime. Whenever the car next starts the application, instead of having to do firmware updates, you just boot the system, and it automatically has the latest stuff. I’ve seen a similar thing done on popular game consoles that most of us probably own, where the Bluetooth controllers and all the services for running the game console are using the same thing to avoid having to do firmware updates to your game consoles. The most useful scenario that I’ve come across is probably on the backend side for something like hexagonal architectures. With federation working on Node.js, it offers a very different design pattern where we can do something like automatic rollbacks, or hot reloading of the production servers without having to tear them down or have multiple instances of something out there. For hexagonal architecture on the backend, it’s super helpful because I can create high density computing models. If I have a multi-threaded Lambda or container, I don’t need to deploy each service out to another container, but I could use federation, stream the code from these individual deploys into a single container. Once that container is under too much load, I can still shard them back out and just go over the network to another container which can repeat the same process. You can dynamically expand and contract these systems. They’re usually very fast because everything that you need would be in memory. If I can just pull the API into process and use in-memory communication, it’s typically more resilient and faster than going over the network.
Mezzalira: Towards the end, you were sharing what are the opportunities and the future of Module Federation. Do you reckon that Module Federation will be implemented not only in the JavaScript world, like Java or other solution, maybe HTML even, or CSS? That is using the same logic, at least.
Jackson: One thing I didn’t really cover well was that federation works with anything webpack can understand. It’s not just JavaScript limited. It can be images. It could be CSS, sideEffects, WebAssembly, pretty much anything that can be processed during the webpack build can be distributed and consumed in any way. What I have seen on the polyglot side is having something like a hexagonal backend where it’s Node orchestrated, so it’s powered by federation, but through the port flow, we’ll be pulling in the Java payments processing. As we need to process a payment, I can stream the Java application in there. Execute Java code, pass that then to something in Python. It’s definitely not limited to Java. It’s limited to what can webpack compile as a whole. Anything that you can import and successfully build, you can distribute however you want from federation to any other thing that can consume it.
Making the Right Strategic Cloud Modernization Decisions for Your Commercial Databases
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
Commercial RDBMS databases such as SQL Server and Oracle are not always the right fit for developing modern applications, as these legacy systems were originally designed around relational data models and monolithic architecture. While performing a lift and shift cloud migration of SQL Server or Oracle databases offers some benefits, it’s only the start of your cloud journey – modernization is next.
MMS • Nick Tune
Article originally posted on InfoQ. Visit InfoQ
Transcript
Tune: My name is Nick. I’m going to talk to you about sustaining fast flow with socio-technical thinking. I think I know what all of those words mean, but let’s find out. I’d like to start by asking you a question. Does this sound familiar to you about where you work now, or any companies you’ve worked at before? This is a quote from a recent conversation I had with a marketing leader. Something has gone horribly wrong in this company. All we’re asking for is 2 textboxes to be put on a webpage, and it’s going to take 3 months. Why is everything taking so long here now? You see, I feel like this phenomenon is so common. Every time I see this, I think of how we go from being a Formula One car to an old banger. When we start something new, we’ve got a brand-new product or startup project with a fresh canvas, and everything’s new, and there’s no problem, there’s nothing existing in the way. We can deliver a fast flow of changes very quickly. Almost any company can deliver a fast flow of changes very quickly, at the start of something. Over time, some weird, mysterious phenomenon means that everything starts taking so long. At the start, it’s a couple of hours to put a textbox on a webpage; one year later, it’s taking three months. What happens in that period of time?
When Flow Drops Off, the Whole Company Is Affected
I think this is such an important phenomenon. I’m going to share with you one of my experiences. Back in 2017, I worked for quite a large company. I was based in London. The majority of the other teams were based in Indianapolis. We got invited over there for a big engineering offsite for a large majority, or a big number of the company’s engineers working in this part of the business. Overall, the company was huge, tens of billions of dollars revenue and stuff. This was just one part of the business with probably about a couple of million dollars revenue, so still fairly big. The purpose was to get the engineers together to talk about what’s working, what can be improved, to learn from each other. To pull out all the stops, trying to build this feeling and culture of how can we become a high performing engineering company. For me, it was also my first visit to the States, so I had big expectations, had never been there before. For many reasons, I was super looking forward to this trip to Indianapolis.
For the first day, we spent some time in the office, met some colleagues in Indianapolis face to face for the first time. It was cool. Then we went to the offsite, it was in a big hotel. There was a games room. There was like some VR stuff, Xboxes and PlayStations, lots of nice food. Then in the big room where the event was taking place, there was a space in the middle, a main floor. It wasn’t really a stage, it was ground level. There was seating all around it, I think, probably 200 to 300 people. This event was all about engineering. How can we become better engineers? How can we share ideas? The keynote was being held by the CEO of that part of the business, that business unit. He didn’t waste any time getting his point across about what he wanted to say that day. He just said, “We’re facing a lot of problems in the company right now. We’ve grown, which is successful, but the products are unreliable. We’ve got lots of bugs. Customers are complaining. Customers are leaving. It’s taking us so long to get any work done. What’s going wrong here?” Then, in no uncertain terms, he called out the engineers. He’s like, “You engineers, you’re not doing a good-enough job. This is all your fault. All of these problems, it’s your fault.” I was pretty shocked by that. Then he went on to say software engineering is not a 9:00 to 5:00 job. You got to put in the hours and the effort to make sure work is done on time to a high quality with no bugs. That’s how bad things can get when the work slows down, everything turns into an old banger, pressure across everyone in the company.
Sustainable Fast Flow at 7digital (2012)
It doesn’t have to be that way. In 2012, before that experience with a big company, I worked for a smaller company called 7digital. We didn’t have any fancy offices, no big engineering offsites. We worked in a basement in London in this building called Zetland House, in between Old Street and Moorgate. We didn’t even have any daylight, yet I got to feel, in this environment, a real high performing team. No frills, just smart, passionate people wanting to do good work. The key thing is the work and the speed was sustained over the course of years. There wasn’t this big drop-off. When I started, 6 teams, deploying to production up to 25 times per day. I think on average between 15 and 25 per day. In fact, on my first day, I paired up with a senior engineer. We did some pair programming. We did some TDD. He wrote a test, I wrote some code to make it work. Did that a few times. Then, with one button click, we deployed the work we’d done to production. It took about 5 or 10 minutes to get that code in production. No long testing processes. For comparison, at a big company, it took two hours just to build the code base. In that two hours, we’d implemented a whole new feature, deployed it all the way to production, and customers were using it. It doesn’t have to be that way. Fast flow doesn’t have to drop off, it can be sustained.
Sustainable Fast Flow at Scale
I was curious. After 7digital, I wondered, this was quite a small company. It was going from startup to scaleup. Two years later, they were building more teams, but still, in general, quite a small company. I wondered, can this work in larger companies? Since then, I’ve worked on a government project in 2015, 2016 at HMRC, a very large organization. At that time, there were 50-plus teams, all deploying to production on a daily or near-daily basis. Since then, I’ve had the opportunity to work with supermarkets and travel companies, and I’ve seen companies in a variety of different industries who’ve all managed to have this high rate of flow and sustain it over time.
How to Achieve Sustainable Fast Flow
I feel that this wasn’t a fluke, or by chance, I feel that there are consistent things that these companies have done that’s allowed them to have this sustainable fast flow. Firstly, incentives. I’m not talking about money or bonuses. I’m talking about the way people are encouraged to build products. The kind of messages leadership sets down around quality, learning, and not just working to get work done as quickly as possible. The second thing is decoupling, splitting different areas of the business, different parts of the software, different teams, so that work can be done in parallel. When the coordination costs get too high, when changes start interfering with each other, that’s a huge blocker to flow, and it doesn’t scale as your company grows either. Thirdly, I think platforming is one of the key things that enables sustainable fast flow: by moving all of the things that block engineers and slow them down into a platform, taking them away from teams, you completely preclude those things from blocking your teams in the first place. I think platforms are especially important as your organization scales. Once you get beyond six or seven teams, I think platforms really start to shine. When you’ve got 50-plus teams, it is a huge differentiator. I think also underpinning all of these things has been socio-technical thinking, from engineers to leadership. There’s a focus on how we work, how we treat people, how we think about the social impacts of all the things we’re doing, from incentives to architecture, to culture, and balancing that with the technical concerns of building software systems and building organizations. I think, really, the socio-technical thinking has been present in all of those companies.
Part 1: Incentivizing Sustainable Fast Flow
The first part, I’m going to talk about incentivizing sustainable fast flow. I’m going to talk about this first, because I think this is the most important thing. This touches on the culture, behavior, and leadership. Without those things, I don’t believe it’s possible to create sustainable fast flow. Here’s a quote from a company I worked at previously. What do you think about this? If you started a new job, and you spoke to a senior person in the company, and they said to you, “People who go home on time here don’t become managers in this company.” How would you feel about that? How about this quote? If you’re setting up a new team, or a few teams, and the CTO of the company, who works in a different country, comes over to visit your teams in person and introduces himself. He says, “I’ve got a reputation for shouting at people, but don’t take it personally. Anyway, if I don’t shout at you, the CEO will shout at you louder.” How would you feel if you worked in a company where you heard something like this? Maybe you do work in a company where you hear things like this. It won’t surprise you to hear that the same company was also going through a situation where the investors weren’t happy. In fact, they’d said to the CEO, “We’re getting very frustrated that this company never delivers anything. You’ve got six months to deliver this project that hasn’t been going anywhere for two years. Six months, and it has to be done or we’re going to implement some big changes, and you, Mr. CEO, won’t be part of those plans for the future.”
I think those behaviors are a big sign that your company has a flow-destructive culture, where you’re not emphasizing or addressing the social aspects needed to sustain fast flow over a long period of time. Always a rush to hit some deadline, pressure to deliver, engineers being stolen from one project to move to another. Those engineers are called resources or development resources. They’re just numbers in a spreadsheet. We’ll take naught point five of this developer, put him on that project for three days a week, and then the other naught point five of him can work on that project for two days a week. If you see a lot of that, I think you’ve got a flow-destructive culture. Also, in this company, there was a single Jira workflow that the entire company had to use. We’re talking tens of engineering teams building stuff in the cloud, stuff on-premises, legacy, APIs, embedded software. They thought they could have this one Jira workflow that every team had to use, and, of course, the CTO who enjoys shouting at people. These behaviors, in my experience, will not allow you to have sustainable fast flow in your organization, because they totally neglect the social aspects of building high performing teams.
Also, that company had this situation where one of the principal engineers in my team was contacted by a sales rep. He had a customer who was very angry. The customer was saying this API is not working. The sales rep couldn’t find out the team that owned the API to ask them to fix it. It turns out, no one owned the API anymore. It had just been abandoned. Some team was part of a project to build it, then they all got moved to another project. No one was looking after the API anymore. It just became a mess over time, not being cared for. In my experience, if you neglect the social aspects, and you’ve got these flow-destructive behaviors, like a single Jira workflow, hitting arbitrary deadlines, rotating developers around like resources, numbers in spreadsheets, and a CTO who likes shouting at people, you are incentivizing software that is not maintainable. Of course, if that software is not maintainable, over time, you’re going to have a huge drop-off. It will be harder to make changes. The compile times will take longer. Testing will take up more of your time. You’ll have more bugs in production, and flow in your company will be catastrophically blocked. If you’re wondering, why is a webpage so difficult to update with a textbox? Look for these behaviors in your company. They probably caused a whole buildup of legacy.
It takes a change in the mindset of leadership to introduce good behaviors and practices and a culture that supports sustainable fast flow. I recently heard a quote from a CEO. He told me this. We were talking about the challenges in his company, and he said, “The engineers here have got OCD. They’re always waffling on about technical debt and rewriting stuff. I wish they would just focus more on delivery.” That there is an example of flow-destructive leadership. You’re not incentivizing good engineering practices that make your systems easier to sustain and evolve. Very often in these companies, the leadership team who like shouting at people, putting in place arbitrary deadlines, not wanting to address technical debt, they’re the same people who are always looking for some quick technical fix. “We need to modernize our systems, but let’s try some agile framework. We’ll hire some consultants and they’ll give us all the answers that we need.” Unfortunately, those things are not going to pave over your cracks, or if they do, it will only be a small fix. It really won’t address the deep fundamental problems. I think the only quick fix is to go back in time and incentivize good behaviors that create sustainable software systems.
At 7digital, it was a completely different mindset. We were encouraged to do TDD and pair programming. We weren’t forced to rush and hit any deadlines. We had a lot of time during working hours to learn, two days every month to learn something new, in addition to a bunch of other time for team get-togethers, retrospectives. I just felt so incentivized to work in a sustainable way. We were even told, go home on time every day, which is not something you hear at many tech companies. Those were the things that incentivized well-designed systems, one-click deployment, few bugs in production, high test coverage, and little downtime. It was no surprise that every team in the company had this same incentivization from leadership. They were all deploying multiple times per day, over a long period of time with no drop-off.
I think there are two things that I really want to call out here, which just do not work. If you see these behaviors in your company, these leadership behaviors, you need to stamp them out immediately. The first one is this idea that we can move people around teams. An engineer, 20% here, 50% here, it doesn’t work. It doesn’t make sense. It doesn’t inspire motivation and purpose in people, who just feel like cogs in a machine. They’re constantly context switching, with no sense of ownership. The second one is the idea that if we have this standard process, or this standard Jira workflow, that will make us all effective. If someone moves from one team to another, we’ve got the same process, so they’ll quickly be up to speed. It does not make any sense. When you’re an engineer, and you move from one team to another, it takes a few hours to learn the Jira workflow. It takes months to build relationships with people, to understand the code base, to understand the domain you’re working in. A Jira workflow is not going to make moving people around any faster. I think that good companies don’t just maintain F1 car speed, they get better over time. These incentives encourage people to build quality systems, to learn, to improve how they work, to improve their knowledge, and that F1 car can turn into a spaceship.
Part 2: Decoupling for Sustainable Fast Flow
Part two is decoupling. This is where we try and identify independent parts of our business so we can make changes and teams can work in parallel without tripping each other up, or blocking each other. Sometimes it’s not easy to decouple parts of our business, especially if you’re an established company with lots of systems and existing teams. I was having a conversation recently, and a CTO had a different perspective on what this involves. He basically said, we know our teams are organized in the wrong way and we’re not working efficiently. Can you just tell us what our value streams are, then we can re-org all of our teams, and we’ll work much more effectively. Unfortunately, no, I’m not proposing any quick fixes here. It’s not that simple. The way we organize our teams, and align teams and software, definitely has a big impact on flow. I agree with that. Big re-orgs overnight don’t really work, because you can’t re-org your software as easily as you can re-org your teams. Also, identifying the value streams which your teams are aligned to, that’s not a simple easy thing either. You need to understand how your company works, and get into the details to be able to make those decisions.
One of the key reasons for that is the relationship between how teams are organized, our team topology, and our software architecture. There’s a concept called Conway’s Law. Basically, the communication in your organization will be mirrored in the architecture of your software system. If you organize your teams in a certain way, your architecture will start to mirror your teams. That’s a natural thing. Teams will organize their software in a way which makes it easiest for them to get their work done. Easiest is often not having to be blocked or depend on other people or other parts of the system. Over time, the way we shape the architecture then shapes us. If we want to re-org our teams in the future, we can’t easily re-org our software architecture and refactor it, because refactoring code and moving responsibilities across network boundaries is very difficult and expensive to do. The way we shape our software systems impacts how we can change our teams in the future. It’s important to try and get the boundaries right. It’s also important to have flexibility. Again, that’s where the incentives play a big part. If teams are incentivized to architect software systems well and maintain high quality, when you do want to change your teams, it’s going to be much easier to do that. You’ll get much more flexibility from your systems.
The approach I recommend is to understand your business domains, the different parts of your business that each focus on a specific topic, subject, or area. Basically, in each business domain, we try to understand, what are the user needs in this business domain? What expertise can we build for developing capabilities in this domain? If we organize our teams around true business domains, we’ll be organizing teams around cohesive and related business concepts, parts of the product that change together, and that will naturally mean fewer dependencies. We want both our teams and their architecture to be aligned with our business domains. Business domains are things like tax calculation, journey planning, discovering new treatments, booking appointments. These are different areas of different businesses, and they’re conceptually domains.
I’ve got an example here, of where identifying better domain boundaries had a big impact on the company. This is a company in the experiences industry. The problem they were having was a common problem. When one team makes a change, they’re having to coordinate those changes with other teams. This was a frequent phenomenon. A lot of the work they were doing required multiple teams to all make changes to their parts of the system. They also had a manual process which involved 30 people, lots of Excel spreadsheets, emails, handing things over. It was taking up a lot of people’s time. The company wasn’t huge, so 30 people is a decent chunk of the company all involved in this manual process, which was slowing them down. After a bit of time, we realized there was a hidden domain here. There were different parts of this domain which had been scattered around, owned by different teams in different parts of different IT systems, and that was causing this problem. By decoupling those parts of the domain from the systems they were currently coupled to, consolidating them into a single domain with a single code base, and organizing a team around that, changes would be much easier. That manual process could easily be automated in a centralized place.
One of the tools I recommend for this is core domain charts, to map out the different domains at your company, and identify which domains allow your company to differentiate itself, to get some advantage in the market. In this example, we realized that, step one, consolidating that hidden domain and reducing all of that manual complexity only addressed a supporting domain. It wasn’t a big differentiator, but it was a big cost, a big blocker to changes. Consolidating the domain, simplifying all of that complexity and centralizing it, would make it much easier to make big improvements in their core domains, which would come after that, maybe three to six months later. The timelines weren’t particularly clear, but it was on the order of months. Thinking strategically about your core domains, and optimizing for flow in your core domains, that’s definitely something I recommend you do.
There were other benefits. When you align your teams and your architecture with business domains, teams have a sense of purpose. They’re working in a specific area. It’s their job to identify value, develop capabilities, and own the technical solution. It’s very motivating when you’re solving the whole problem, and not just taking requirements. The team can also gain expertise in that part of the system. They can start to gain lots of domain knowledge and become more valuable to the business as they understand more clearly the problem that they’re solving, and they can propose solutions and ideas. That’s helped by working closely with domain experts. Ownership also incentivizes teams: if they’re going to own something and be responsible for the choices they make, that encourages good long-term behaviors to keep the code sustainable and aligned to the business. On the technical side, in addition to the sustainability, the code will be much more closely aligned to the domain. When your business talks about features and concepts in your business, the code will match the conversations they’re having, and it will be much easier to translate requirements into code because it’s all using the same language. Also, there are the low coupling benefits. Parts of your business concepts that change together will live together in the same code base owned by a single team, so fewer dependencies, and a lot of those coordination problems are reduced or even go away.
I talked about empowering teams and purpose, and teams having ideas themselves about the product itself. There’s a survey from Alpha UX product manager insights, and they found that the best product ideas come from the whole team brainstorming. If you want to learn more about this, check out “Inspired” by Marty Cagan and the work of Melissa Perri. These are people who are experts in product management, they’re not engineers. They’re saying that giving teams ownership and empowerment actually builds better products. If you want to know how to identify true domain boundaries, event storming is a great technique for collaboratively mapping out your business. I use this all the time. I highly recommend it. Domain message flow modeling is also useful to design end-to-end business processes, how your different domains collaborate to fulfill those user journeys. Then from “Team Topologies,” Matthew and Manuel have created independent service heuristics, and some workshop formats, and a bunch of tips and ideas and clues for identifying true value stream boundaries. Definitely recommend all of those techniques. There are some links here to find out more.
Part 3: Platforming for Sustainable Fast Flow
Part three, I want to talk about leveraging platforms to enable sustainable fast flow. I think platforms are super important because you can take responsibilities away from engineering teams, put them in platforms, and make it so teams don’t even have to think about these things. The platform just prevents blockers from even existing in the first place. It’s not easy to build good platforms. The platform itself can become a cause of blocked flow in an organization if it’s not done really well, so it’s a big effort, but also a big risk. One of the examples I’ve got where a platform mindset wasn’t in place is a financial services company in the UK. It was a very difficult experience being a developer in this company. I went through the experience myself. I was given the company laptop. It took about three or four minutes to load up every day, just logging in. It was very slow, just generally, because it had so much stuff on there. I don’t think anyone really thought of making it productive. It just seemed to be, lock everything down as much as possible. I know security matters, but there were better ways to do that. It doesn’t need to be that extreme. It took me three tickets with the system operations team to get Docker installed. Developers weren’t allowed to download packages from npm. Developers were forbidden from getting access to production logs. They had to create a support ticket, and an ops team would extract the logs and give them a file on a file share which they could download. Very difficult to diagnose production issues. It’s not surprising that after one year of a new project, nothing got delivered. They didn’t even get one line of code to production. The crazy thing about this is developers on these teams probably spent 50% of their cognitive load, 50% of their time, just dealing with all of this accidental complexity of local development machines, trying to get pipelines set up and code to production.
That’s 50% of more than 10 engineers’ time spent adding no value to the product itself. That’s crazy.
On a positive note, when I worked at HMRC, I got to see a very different experience. HMRC had a platform called MDTP, the Multi-channel Digital Tax Platform. The platform had a very slick paved road. Any team could get a new microservice pretty much all the way to production in a day or two. There was a security approval required to go to production, but, technically, you went into a file, added some configs, triggered some job, and it would spin up a whole new template for your application, with development, QA, and production environments. You got metrics, monitoring, logging, testing. You got everything you needed to build code, put it in production, and support it in production. The platform gave it to you. Developers spent nearly all of their time working on product and domain capabilities, and not fighting infrastructure and red tape. I’ve put some links here to resources for learning more about the MDTP and their experiences of building platforms.
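To make the “add some configs, trigger some job” idea concrete, a paved-road platform might accept a small declarative descriptor per service, something like the following sketch (every field name here is purely illustrative and not MDTP’s actual format):

```javascript
// Hypothetical service descriptor for a paved-road platform.
// Every field name here is illustrative, not MDTP's real format.
// A team commits a small file like this; platform automation reads it
// and provisions the pipeline, environments, and observability.
const serviceDescriptor = {
  name: 'tax-calculation-frontend',
  template: 'frontend-microservice', // platform-provided starter template
  team: 'tax-calc',
  environments: ['development', 'qa', 'production'],
  observability: {
    metrics: true,
    logging: true,
    alerts: ['tax-calc-on-call'],
  },
};

module.exports = { serviceDescriptor };
```

The point of the design is that everything downstream of committing this file is the platform’s job: for the common case, the team never writes pipeline or infrastructure code at all.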
Platforms are also a key concept in “Team Topologies.” I think there are a couple of highlights here which get the point across. The goal of a platform is to enable stream-aligned teams to deliver work with autonomy. It’s about giving them ownership and empowerment to support their applications in production. It’s about taking cognitive load away from teams for things that they shouldn’t be caring about anyway. If you build platforms that do all three of these things, you will have a greater chance of achieving sustainable fast flow, especially if you’ve got good decoupling, and teams that are incentivized to maintain levels of quality. It’s not easy to build good platforms, and it’s not just a technical endeavor. This quote here is something I heard on a project a few years ago. I was meeting a director in the company, and we were talking about the platform. He said, “You have to work on their terms. You don’t go to them with a solution, you go to them with the problem, and they’ll figure everything out for you. Don’t try and do their job for them. If they don’t want to talk to you on a certain day, you go away, and you come back when they’re ready.” That is not a good attitude for a platform team. If you’re supposed to be helping developers, that’s not how you want people to see you. That’s not a good relationship.
Likewise, this one: in a traditional company with traditional operations, they were learning about team topologies and platforms, this idea that the platform team is there to enable development teams to be successful, to empower them, to support them. That just did not land well with the operations team. In this company, operations had always been responsible for delivering outcomes. They thought they were above the developers. This idea of building a platform to support the developers and make developers successful, that’s like the king helping the peasants do their job. They could not comprehend the idea. If you’re going to do platforms, and you want to do platforms well, and you want to create sustainable fast flow, those social aspects of building platforms are equally as important as all of the shiny technical stuff that enables you to spin up microservices in 10 minutes. Things like, firstly, the mindset. Platform teams have to see developers as their customers. How can we make engineering teams successful? That’s got to be their mindset.
The platform team also has to reduce cognitive load. I worked with one company doing platforms, and each team had to create a huge Kubernetes file to get a new service set up. They had to learn all this stuff about Kubernetes, and how the company used it. That was slowing them down, not speeding them up. Good platforms minimize that. With MDTP at HMRC, a few basic configurations got you a new application, and that was it: off to production with your code in just an hour or so. Developer experience again: every interaction with the platform needs to be slick. Every suboptimal interaction slows down your teams and slows down your whole company. I think it’s important to have a platform-as-a-product mindset, things like surveys, the platform team going out to engineering teams. Are we serving you well? How can we improve the platform? What’s your experience at the moment? In one company, we started doing joint retrospectives between the platform team and the engineering teams to improve the social relationship between them. I think it worked really well.
In terms of sustainable fast flow, I think there are some developer experience metrics to keep an eye on, which will give you a clue if you are working in a sustainable way. The first one: for a new service, how quickly can you put some new code in production? A couple of hours is good, one hour is good. Any longer than a week, that’s a very bad experience. Likewise, how quickly does code go from a developer’s laptop to production? We’re talking minutes here. If it takes more than 10 minutes to put some code in production, you’re not at the high level companies are operating at these days. Also, think about the onboarding experience: how quickly until a new team member is productive? At 7digital, I deployed to production on my first day. I think that’s the standard you should be aiming for in your company. Also, watch the number of tickets. If engineering teams are constantly having to create tickets with the platform team, that’s not self-service, that’s a potentially big blocker to flow. Ideally, we don’t want any tickets, but we can accept one or two per month per team. Any more than that, and it looks like the platform team is becoming a bottleneck. Don’t tie your engineers up in red tape and bureaucracy. Roll the red carpet out for them, and this will give you sustainable fast flow.
Wrap-up: Socio-Technical Symbiosis
To wrap up, I’d like to propose looking to nature as a way to help think about balancing socio and technical needs. In nature, we have the concept of symbiosis, where two species live together and there’s an association between them. For example, the gut microbes living inside us: we give them a home in which to live and flourish, and they break down foods and help us extract energy from that food. There are different kinds of symbiosis. On one hand, there’s mutualism, where two species coexist and help each other. On the other hand, there’s parasitism, where one species lives off and harms the other. When we try to create organizations with sustainable fast flow, we need a mutualism approach, where we’re not constantly incentivizing bad behaviors that cause our software systems to become more unsustainable.
I presented three things which I think can help build sustainable fast flow if you apply socio-technical thinking. Firstly, incentivize good long-term behaviors, like going home on time every day; I think that’s very important, along with technical practices. Decoupling is important: identify business domains, and align your architecture and teams with those business domains. When you implement new features and concepts, you will have minimal dependencies and coupling between those things, and you’ll also get more empowered teams who build better products. Finally, especially in large companies, it’s important to build good platforms; it can also be very dangerous to get this wrong. It’s important to have a socio-technical mindset, building the platform as a product and platform teams having the right mindset.
Summary
Socio-technical thinking leads to socio-technical mutualism, which leads to sustainable fast flow, which leads to happier employees and better products. Here are three questions to ask yourself to help you take the next steps in applying these ideas. What’s stopping you from applying socio-technical thinking in your company? What could you start doing differently tomorrow? Finally, what happens if you do nothing? Are things going to get better, or will they continue to get worse?
MMS • Robert Krzaczynski
Microsoft recently released .NET Community Toolkit 8.1. This new release contains performance improvements to the MVVM Toolkit source generators. There are also new features such as custom attributes for ObservableProperty, MVVM Toolkit analyzers, IObservable messenger extensions, and support for .NET 7 and C# 11.
The previous version (8.0.0) was a collection of helpers and APIs that facilitate using patterns like MVVM independently of the underlying platform. Version 8.1 includes changes in areas such as performance and code readability. All of the features introduced were already present in the preview version.
The community had requested support for custom attributes on ObservableProperty. Previously, such attributes could only be added manually to the code emitted by the MVVM Toolkit source generator. To implement this feature in the new version, the Microsoft team decided to use the existing property: target, a C# syntax that allows developers to mark attributes for propagation to the generated properties. This solution offers certain advantages: by using built-in C# syntax, the feature does not require additional toolkit attributes. In addition, it solves the problem of annotating with attributes that can only be applied to properties and not to fields.
The new version of the .NET Community Toolkit provides developers with more targeted support in optimising the use of the MVVM Toolkit. The source generators now target Roslyn 4.3, so they can enable some of the more optimised APIs when the host supports them. This is enabled automatically when referencing the MVVM Toolkit. The generators have also been switched to the new high-level Roslyn API for matching attributes, which improves the performance of generators that run based on specific attributes.
The team also moved almost all diagnostics to diagnostic analyzers, which reduces overhead while typing, and improved all incremental models and pipelines to reduce overall memory allocation.
Another new feature brings together the functionality provided by the messenger APIs in the MVVM Toolkit with the observable pattern. This is now supported with the new IObservable extensions for the IMessenger interface. The extensions can be used as follows:
IObservable<MyMessage> observableMessage = Messenger.CreateObservable<MyMessage>(); // generic arguments restored; 'MyMessage' is a placeholder message type
This extension will create an IObservable object that can be used to subscribe to messages and dynamically respond to them.
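For readers outside .NET, the idea of exposing a messenger as something you can subscribe to can be sketched in a few lines of Python; this is a language-agnostic illustration of the pattern, not the toolkit’s actual implementation:

```python
from collections import defaultdict
from typing import Callable

class Messenger:
    """Minimal messenger: handlers are registered per message type."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def create_observable(self, message_type: type) -> Callable:
        """Return a subscribe function for messages of the given type,
        loosely mirroring the CreateObservable extension described above."""
        def subscribe(handler: Callable) -> None:
            self._handlers[message_type].append(handler)
        return subscribe

    def send(self, message) -> None:
        # Dispatch the message to every handler registered for its type.
        for handler in self._handlers[type(message)]:
            handler(message)

class UserLoggedIn:
    def __init__(self, name: str):
        self.name = name

messenger = Messenger()
received = []
messenger.create_observable(UserLoggedIn)(received.append)
messenger.send(UserLoggedIn("ada"))
print(received[0].name)  # ada
```

The point of the real extension is the same: consumers subscribe to a typed message stream instead of wiring up handlers against the messenger directly.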
.NET Community Toolkit 8.1 also adds the .NET 7 TFM to the HighPerformance package, along with several changes to take advantage of new C# 11 language features, particularly ref fields. It is now possible to declare ref fields inside a ref struct. Additionally, the runtime is able to fully define Span<T> using the C# type system.
In addition to the official release post, Sergio Pedri, .NET engineer at Microsoft, also wrote a comment on Reddit about adding the Roslyn multi-targeting:
Hopefully, the experience in Rider is better now – I added Roslyn multi-targeting in this new release and it’s possible that wasn’t playing too well with Rider. But now there’s also another custom .targets in the MVVM Toolkit that manually handles multi-targeting if the target host doesn’t support it, so maybe that helps? Anyway hopefully the experience outside of Visual Studio is smoother now.
Aathif Mahir, a .NET developer, commented on Twitter in the thread about the .NET Community Toolkit 8.1 announcement:
Those who are outside of Windows or .NET ecosystem don’t know how powerful .NET Community Toolkit is and how much time it saves for developers, really appreciate the whole community that contributed to the toolkit and @SergioPedri for super awesome Source Generators.
The entire changelog of this release is available on GitHub.
MMS • Mostafa Radwan
Cloud Native SecurityCon North America 2023 kicked off this week in Seattle. Organized by the Cloud Native Computing Foundation (CNCF), it is the first dedicated event focused on cloud native security, with over 800 attendees, 70 sessions, and 50 sponsors and vendors.
Priyanka Sharma, executive director of the CNCF, kicked off the event and announced a new Kubernetes and Cloud Security Associate (KCSA) certification that will be available later this year. The purpose of the certification is to fill the need for technical expertise in cloud native security by equipping practitioners, including beginners, with the knowledge and skills to secure their organizations’ cloud native infrastructure.
Currently, the certification is being developed by community experts and professionals. Practitioners who want to participate can apply to be considered as beta testers by submitting an online form. The certification is expected to be generally available before KubeCon+CloudNativeCon North America 2023 in Chicago, November 6 – 9.
Also during the keynote, Sharma underscored that security within the cloud native ecosystem is deeply complex due to the nature of cloud native environments, with their fast deployment cycles, modern infrastructure, and scale. She outlined CNCF’s approach to cloud native security: a people-powered approach focused on the cloud native community collaborating, educating, learning, and sharing knowledge and expertise.
InfoQ sat with Chris Aniszczyk, CTO of CNCF, at Cloud Native SecurityCon NA 2023 and talked about the event, its relevance to developers, and cloud native security.
InfoQ: Why is there a need for a standalone conference for cloud native security?
Chris Aniszczyk: We talked to many of our members, maintainers, and end users. The feedback we received about KubeCon is that it’s a great event, but it’s too big to focus on one particular area.
We felt this could be a good idea: we looked for similar events that are developer-led, open source, vendor-neutral, and focused on cloud native security, and we couldn’t find any.
We will see the feedback after the event. We will probably do this again next year and make it a bigger and better event.
InfoQ: As more organizations go cloud native, we are seeing more containers and Kubernetes vulnerabilities, threats, and ransomware. How can we address those challenges and how can CNCF help?
Aniszczyk: All this crazy stuff that’s happening is going to continue. There’s no way to completely avoid it. What we can do is de-risk the threat by ensuring that developers, security teams, and IT leaders have a good idea of what tools can protect and secure their environments. There is no silver bullet here.
The role of CNCF is to provide educational resources and training to the next generation of developers regarding security. As we announced today, the KCSA certification can help with that.
Our role is mostly educational through training and sponsoring promising projects, and partnering with vendors to address those challenges.
InfoQ: What are some of the technologies and/or projects that you think are going to play an important role in cloud native security?
Aniszczyk: There are a couple that come to mind. First, Cilium and the modern eBPF stack, including projects such as Falco and Pixie. I think the future of cloud native security will be based on eBPF technology, because it’s a new and powerful way to get visibility into the kernel, which was very difficult before.
The other is what’s happening at the intersection of application and infrastructure monitoring and security monitoring. This can provide a holistic approach for teams to detect, mitigate, and resolve issues faster. For example, SBOMs can help both application developers and security practitioners better understand what their software is made of and detect anomalies in production environments.
Cloud Native SecurityCon used to be a co-located event alongside KubeCon+CloudNativeCon. Based on feedback from the community and the focus on cloud native security, CNCF decided to make Cloud Native SecurityCon a standalone event starting this year.
Some of the conference sessions and keynotes are available on the CNCF YouTube channel.
The next CNCF conference is KubeCon+CloudNativeCon EU 2023. It is a hybrid event in Amsterdam, April 18 – 21.
MMS • Ben Linders
Having a good strategy for test automation can make it easier to implement test automation and reduce test maintenance costs. The test automation pyramid and automation test wheel can be of help when formulating a test automation strategy and plan. Test automation code should be clean code, and treated similarly to production code.
Christian Baumann spoke about test automation at Agile Testing Days 2022.
One of the challenges in automated testing is that too many people still see test automation as the holy grail, as the silver bullet that will solve all their quality and testing issues, Baumann said. In fact, it can be a great tool to support your testing, to create a safety net for your product, but it is a lot of effort to implement test automation properly, and it also comes with some maintenance cost, he added.
A test strategy and plan for test automation helps to achieve the mission for testing and describes how that mission can be accomplished; it provides guidance throughout the project, especially when things are unclear, and as a consequence, reduces the chance for failure, Baumann explained:
A strategy and plan defines what is in scope for the automation, and – of equal importance – what is not. It should define what to test, when to test, and how to test. Furthermore, it should clarify who does the automation and what shall be automated. And – crucial for automation – what tools, framework and programming language to use, and on what level to automate.
The strategy and plan for automation should be aligned with the overall test strategy and plan; e.g., tests that will in future be executed by automation no longer need to be considered for non-automated testing, Baumann said.
According to Baumann, test automation code should be valued and treated as the production code of the applications. He suggested using rules and principles from clean code also for test automation code:
Clean code can be read, understood and enhanced by a developer other than its original author. With understandability comes readability, changeability, extensibility and maintainability.
InfoQ interviewed Christian Baumann about test automation and clean test code.
InfoQ: How can we apply the test automation pyramid and automation test wheel to automate testing?
Christian Baumann: I would like to quote Michael Bolton, who says “Good models are a set of guidelines that help us solve a problem.” These models can be of great help when formulating a test automation strategy and plan.
The pyramid for example gives an idea of how much test automation effort to put in each level of the application:
- At the bottom, there should be a lot of unit tests. They are relatively cheap, in terms of efforts to create, but also in terms of execution time and maintenance effort.
- On the top, there should be very few tests on the GUI level, because they are expensive. It takes a lot of time and effort to create them, they are slow in execution, and also debugging and changing them is a lot of effort.
This is represented by the form of the pyramid: a wide base or foundation, but only a small top.
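One hypothetical way to keep an eye on that shape is to compare the counts of tests at each level; the check below is my own illustration of the pyramid heuristic, not something from the talk:

```python
def pyramid_shape_ok(unit: int, service: int, gui: int) -> bool:
    """A healthy pyramid has at least as many unit tests as service
    tests, and at least as many service tests as GUI tests. An inverted
    shape (the 'ice-cream cone') signals slow, expensive feedback."""
    return unit >= service >= gui

print(pyramid_shape_ok(unit=500, service=80, gui=10))   # True
print(pyramid_shape_ok(unit=20, service=50, gui=200))   # False
```

Running a check like this in CI can catch a suite drifting toward the inverted shape before maintenance costs pile up.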
InfoQ: What can be done to create clean test code?
Baumann: The idea of clean code comes with a set of rules and principles to follow and apply when writing code. Most of them are relatively easy to understand and follow, for example the KISS (“Keep it simple, stupid”) principle: it originates from the US Navy and goes back to the 1960s. It states that most systems should be kept as simple as possible; unnecessary complexity should be avoided, simply because less complex systems are less prone to errors. This principle can and should also be applied to code.
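As an illustration of applying these rules to test code (my own example, not Baumann’s), compare a terse test with one written to be read:

```python
def apply_discount(price: float, percent: float) -> float:
    """Production code under test: reduce a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

# Unclear: magic numbers, and the name says nothing about intent.
def test_1():
    assert apply_discount(80, 25) == 60.0

# Clean: the name states the behavior, and arrange-act-assert is explicit.
def test_quarter_discount_reduces_price_by_a_quarter():
    original_price = 80.00                                   # arrange
    discounted = apply_discount(original_price, percent=25)  # act
    assert discounted == 60.00                               # assert

test_quarter_discount_reduces_price_by_a_quarter()
```

Both tests check the same thing, but only the second can be read, understood, and changed by a developer other than its original author, which is the clean-code standard quoted above.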
MMS • Steef-Jan Wiggers
Amazon recently announced the general availability of OpenSearch Serverless, a new serverless option for Amazon OpenSearch service, which automatically provisions and scales the underlying resources for faster data ingestion and query responses.
Amazon OpenSearch is an open-source, distributed search and analytics suite derived from Elasticsearch. It supports the latest versions of OpenSearch, previous versions of Elasticsearch, and OpenSearch Dashboards and Kibana.
The key with OpenSearch Serverless is that it decouples the storage and compute components, allowing each layer to scale independently based on workload demands. It also decouples the indexing (ingest) components from the search (query) components, with Amazon Simple Storage Service (Amazon S3) as the primary data storage for indexes.
Source: https://aws.amazon.com/blogs/big-data/amazon-opensearch-serverless-is-now-generally-available/
The GA release of the serverless option is a quick follow-up from the preview launch at re:Invent 2022 last December, and includes a few enhancements with scale-in support and availability in three additional regions.
Regarding scaling, OpenSearch Serverless will now scale out and scale in to the minimum resources required to support customers’ workloads. The compute capacity used for data ingestion and for search and query is measured in OpenSearch Compute Units (OCUs). The OCU limit per account has been increased from 20 to 50 for both indexing and query. Finally, customers can now use the high-level OpenSearch clients to ingest and query their data, and migrate data from their OpenSearch clusters using Logstash.
Khawaja Shams, co-founder and CEO of Momento, tweeted:
#OpenSearch “#Serverless” is now GA – but it isn’t Serverless.
“One OCU comprises 6 GB of RAM, corresponding vCPU, 120 GB of GP3 storage (used to provide fast access to the most frequently accessed data).”
An OCU is code for $8.4K/yr if you are idle! 2x if you have a dev env
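The ~$8.4K figure in the tweet is straightforward to reconstruct. Assuming the launch price of roughly $0.24 per OCU-hour and the then-minimum of 4 OCUs per account (2 for indexing, 2 for search), both figures are assumptions taken from the launch-era pricing rather than this article, the idle cost per year works out as follows:

```python
OCU_PRICE_PER_HOUR = 0.24   # assumed launch price, USD per OCU-hour
HOURS_PER_YEAR = 24 * 365
MIN_OCUS = 4                # assumed minimum: 2 indexing + 2 search

annual_idle_cost = OCU_PRICE_PER_HOUR * HOURS_PER_YEAR * MIN_OCUS
print(round(annual_idle_cost))      # 8410 -- the ~$8.4K/yr in the tweet
print(round(annual_idle_cost * 2))  # 16819 -- doubled with a dev environment
```

Check the pricing page for current numbers; the arithmetic only shows where the tweet’s estimate comes from.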
In addition, Carl Meadows, product manager for Amazon OpenSearch Service and the OpenSearch project, responded to a question via a Twitter thread if the service is serverless:
Hi Ben – As of today, there is a min. compute required per account to ensure fast and hot query and indexing performance – though scale to zero and pause/resume are top requests in our list -thanks.
Currently, OpenSearch Serverless is available in eight AWS Regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). Pricing details of the OpenSearch Serverless are available on the pricing page under serverless.
MMS • RSS
Posted on nosqlgooglealerts. Visit nosqlgooglealerts
Data modeling is an essential aspect of application modernization. As organizations migrate data and applications to the cloud and other modern forms of infrastructure, they need data modeling solutions that can facilitate the overall modernization lifecycle. This paper reviews the top 10 considerations for choosing a data modeling solution, based on real user experiences with erwin® Data Modeler by Quest®. Taken from reviews on IT Central Station, it explores the importance of support for NoSQL, collaboration, visualization, and version management, among many other factors.
Download PDF
MMS • RSS
The digital world is moving faster than ever before, so the enterprise needs to keep pace by becoming data-driven.
That means using Big Data – much of it unstructured – to effectively respond to customers, partners, suppliers and other parties in real time – and profit from those efforts.
NoSQL, and in particular the Couchbase and MongoDB platforms, is growing in adoption by organizations that are focused on supporting modern cloud applications and agile development.
If you’re considering the transition from traditional relational databases to NoSQL – or if you’re already using the technology – then you need to read our e-book: Taking Control of NoSQL Databases.
We cover the rise of NoSQL, its vital role to data-driven organizations, and how it is changing the data modeling game.