Presentation: How to Lead Effectively in Hybrid and Remote Environments

MMS Founder
MMS Erica Farmer

Article originally posted on InfoQ.

Transcript

Farmer: I want to start by asking you, who was the best leader you ever worked with, and why? I want you to have a think about what that means. For me, the best leader I ever worked with was an operations director at a financial services company who gave me the right balance of guidance and support, as well as opportunity and autonomy. I felt trusted, respected. I could spread my wings. I could achieve. I had some stabilizers in place when I needed them. We lived in different parts of the country, and we got to see each other face-to-face maybe every quarter. I was doing a lot of traveling, she was doing a lot of traveling, but we spoke quite often. That trust that we had in place was the core vehicle for me to feel confident in my role and what I could achieve. Who was the best leader you’ve ever worked with, and why? I bet it’s something to do with how they behaved, their morals and their values, and the relationship that you had with that person.

Background

My name is Erica Farmer. I’m a leadership and management expert. I am the co-founder and business director of Quantum Rise Talent Group. We work with organizations to support digital and hybrid remote working for leaders and managers. I’ve got over 20 years’ experience with some of the UK’s largest brands, such as Centrica British Gas, LV, Specsavers, and Virgin. Some of the roles that I’ve held are head of leadership development, head of apprenticeships, and head of learning and development. My expertise spans working with a very large number of leaders and managers across various sectors. Long-term partnerships at Quantum Rise include clients like Fujitsu, the NHS, Bostik, and various apprenticeship providers. We work very much in that remote hybrid digital tech and data space. I’m also a TEDx speaker.

Why?

Let’s think about why. The why is so important. Why are we thinking about leadership and management in this hybrid remote setting, post-pandemic more importantly? According to Forbes, following the pandemic, 97% of employees did not want to return to the office full-time. That might include you as a leader or a manager in a technical role. This means that it’s critical for employers to adopt a hybrid working model to retain talent in the future. That’s critical for managers and leaders, but also for employers, so the organizations who you’re working with, because we know that’s what people are looking for now. This transition has had an impact on the skills and mindsets that leaders and managers have to have to manage productivity and performance. For you as a leader or a manager of people, that might be a direct leader. You might have people directly hardlined into you, or you might be a project manager where you’ve got a matrix organization or a dotted line into you. In any leadership and management role where you are influencing people and managing performance, these are the things that we need to start thinking about. This skill set and mindset from a leadership perspective, that’s what we’re going to be focusing on.

Employee Retention

Let’s look at a little bit of context. According to the 2023 Workplace Learning Report from LinkedIn, 92% of organizations in the UK are concerned about employee retention. We know that people are interested in portfolio careers. There’s more interest in setting up businesses, side hustles, and doing multiple career jobs, or a career portfolio as it were. No longer is it the case that generations are coming into the workforce and looking for an employer for 20, 30, 40, 50 years. That’s changed. Why has it changed? Because people’s expectations have changed. People want a better work-life balance. People want to be able to pick up the kids, or work in the evenings, or have a four-day week. People want to measure their employer value proposition, and the experience that they have as an employee, against their morals and their values. We know a lot of that is to do with an expectation of development, and skills, and career, and portfolio. As a technical leader in this space, you have an opportunity to revolutionize digital and future skills as a hybrid and remote leader. You have that opportunity. This is exactly a core part of your role as a people manager or a people leader, to understand what people need and build that digital capability. Let me ask you, have you started yet? Have you started thinking about what that means for you and your teams, and your team’s careers? What they’re looking for, what’s important to them.

Just a little exercise for you right now. On the screen, there’s a spectrum, with 1 being: at the moment, right now, we haven’t thought about what our digital skills strategy means for us as leaders in a hybrid or remote environment. What do I need to do as a hybrid or remote leader? On the opposite side, you’ve got a 10, which is: we’re all over this. As a leader, I’ve been working in a hybrid and remote environment. I’ve been working on my skills, and I’m thinking about what our future digital skills strategy means for our organization, particularly in these new ways of working. I just want you to think about where you would score yourself or your organization on that right now. Just do a little bit of thinking, a little bit of self-analysis. Give yourself a score. Then I want you to think about, if you score yourself a 6 or a 7, for example, what would it take to make that an 8 or a 9? As a hybrid or remote leader or manager, what would you need to do to support your team, support your workforce, and build that new future skills strategy? Because it’s not an HR responsibility, it’s not an L&D responsibility, it’s everybody’s. As leaders and managers, it’s our accountability to develop our people in this space.

Challenges of Remote and Hybrid Working

We know we’ve all had challenges when it comes to hybrid and remote working. Some of that has been tech. Some of that has been trust. Some of that has been, how do we manage if we’re not looking over the shoulder of our teams? How are they going to be productive if I’m not seeing them every day in the office? We’ve seen a lot of micromanagement from managers. People might think there’s less productivity happening because people aren’t sat in the office. Workers think they might have to be present, even if they’re not productive, or they’re not feeling well. There are challenges around sickness and well-being, for example. We also need to think about, is someone sat at their desk at home being productive, or are they just wiggling a mouse, and why is that? Often, it’s because people aren’t feeling motivated or psychologically safe. There can be an increased risk of isolation when it comes to hybrid and remote working, we know that. Let’s start thinking about the things that we need to do as hybrid and remote leaders to reduce that, to make sure everybody feels included and not isolated.

Initially, at the beginning of the pandemic, with people working at home, people thought they had to be online more. People were getting burned out from being in front of screens constantly. We know more now. We know better. We know that we need breaks. We need to be flexible. People’s autonomy with remote and hybrid working is motivational: people making their own decisions on how, where, and when they work. I appreciate there’s often a need for a client to have contact with employees. There’s a balance to be struck, but think about people’s motivation, because that’s what drives productivity and engagement. You could perceive there might be a reduction in opportunities. Again, what can we do as leaders and managers to make sure that that’s not the case, and to manage any internal politics that might sit around hybrid and remote working? As a manager, you might have a preference for being on-site and therefore have an expectation of, or a favoritism for, people on-site, or the same with remote. How does your proximity bias play into this? Is everybody getting the same opportunities, or are you favoring people in different areas? All of this can play into our leadership and management capability.

Top 5 Reasons in EMEA to Seek a New Job

Let’s have a look at a little bit more context, again coming from the LinkedIn 2023 Workplace Learning Report: the top five reasons people in Europe, the Middle East, and Africa have given for seeking a new job. Number one, flexibility to work when and where I want. Hybrid working isn’t two days at home, three days on-site, being told where and when. Pure hybrid working is the employee making those decisions. A little bit later on, we’ll talk about a little framework that’s going to help you make those decisions. Number two, compensation and benefits. That’s important for all of us. Notice that isn’t number one. Number three, challenging and impactful work. No surprises there. What’s challenging and impactful to you, though, might not be to your team members, so have you had that conversation? What does that mean? What does purpose mean to your team? Opportunities for career growth within the company comes in at number four, important to everybody. That doesn’t necessarily mean promotion. That could be secondments, leading on different types of projects, stepping sideways. Squiggly careers, as you might have heard it called; it’s a mindset. Number five, opportunities to learn and develop new skills. Again, no surprises there. People love development that’s particularly relevant to them, and where they want to go. How, as a hybrid and remote leader or manager, are you facilitating these things? Because that is your role as a manager; it’s your job.

Reasons Leaders and Managers May Avoid Remote and Hybrid Working

Let’s put this back now to hybrid and remote leading and managing in particular. I want you to have a think about the reasons that leaders and managers may avoid remote and hybrid working. What’s going on for managers sometimes? What are some of the challenges you think managers and leaders might face? What’s really going on? Sometimes it’s a matter of productivity. Sometimes it’s a matter of people perhaps not doing what they should be doing, or not being at their desk, not getting stuff done, not being available. Actually, when we come down to it, when we really look at it and at what the research tells us, it is trust and fear, from a manager’s perspective. Does the manager trust their team to get on with the outputs that are being delivered? What’s the fear that sits behind that if they’re not doing that, if the manager isn’t present with the team constantly? This is what we see it come down to.

If we want to break that down just a little bit further, trust, again, comes in at number one. The one thing that you have to hugely dial up is trust. Something that’s perhaps built organically in a face-to-face office space, you have to prescribe and work hard at, because you don’t have those corridor conversations, or those quips, or those personal, how-was-the-weekend type conversations in the tea and coffee area. You have to prescribe it. We have to make sure that the length of time online for people isn’t excessive, because this is how we start to drive burnout and lose psychological safety, where people don’t feel like they can challenge, or innovate, or question. Flexibility is absolutely key, as we’ve seen from the LinkedIn data as well, so how do you enable a flexible working environment where people feel that they can balance their home life and their work life? How do you drive engagement in your team, making sure that people feel engaged and want to turn up to work, wherever that might be, every day?

This next one is super important: being outcome driven. If people are set outcomes, so what the end result is for that project, for that client, for that piece of software build, whatever that might be, that’s what you should be managing on. You shouldn’t be checking in every single day asking for progress reports. That’s what we call micromanagement. Agree with your team: what works for them? What are the key milestones? What needs to happen? What do they do if they do need help or support to get there? Manage on outcome, not on day-to-day. That doesn’t mean that you can’t interact with people. Interaction is absolutely key. Asking people how they’re feeling today, or what they did at the weekend, or what’s important to them: that’s what great interaction looks like. Getting to know people as people, and not just as employees all the time. Again, there’s a little bit of a checklist there to audit yourself against, to think: what am I doing? Do I really trust my team? Am I really providing flexibility as a hybrid or remote leader or manager? Am I really managing on outcomes, or am I micromanaging? We all do these things, and we all pull back and forth based on how we’re feeling about things. We might feel under pressure and start to micromanage a little bit more, potentially.

The Growth Mindset

We know that no matter the context or the environment, great leaders put their people first. If you’ve ever seen anything from Simon Sinek, he has a great talk called Leaders Eat Last. I would definitely go and watch that, because it’s about enabling, facilitating, and creating the opportunities for your people to deliver. That’s quite a tricky shift, particularly if you’ve been a technical manager or a technical expert for a long period of time, if you’ve been a teacher, the SME as it were. A step into leadership and management is very different with regard to the capabilities and requirements, the skill set, and the mindset we need to deliver through others, because that’s what we’re doing. What I’m talking about here is a growth mindset.

Carol Dweck, a psychologist and lecturer at Stanford in the States, noticed that people who have a growth mindset, people who approach challenges and solutions with “I can achieve,” or “I can learn to do something; my intelligence isn’t fixed, and I can go away and develop, and try and fail, and fail fast, and pick up and learn,” are generally more successful in life, whether in an educational setting or a workplace setting. You might know it as an agile mindset in the sector. If you’ve got people in the team who automatically go to problem rather than solution, a more fixed mindset, this is where your coaching and support as leaders and managers, particularly in a hybrid and remote environment, kicks in. It’s understanding what’s going on for that person, but you need to check in with your own mindset first. What do you need to do to make sure that you can be solution focused? What do you need to do to know that you can grow and develop and learn? A fixed mindset might be, “I’m not very good at math,” or, “I’m not very good at singing,” or, “I’m not very creative.” We’ve all heard phrases like this, but if you put the word “yet” on the end, it completely changes the meaning of that sentence. I’m not very good at math yet. I can learn. I can develop. It’s not fixed. It’s not stuck in terms of my intelligence, but I can learn and grow. This growth mindset, particularly in the hybrid and remote environment, is fundamental.

Skills & Behaviors Needed as a Leader in a Hybrid or Remote Environment

I want you to think about now, what are the skills and behaviors you need as a leader in a hybrid or remote environment? We’ve talked about trust. We’ve talked about coaching. We’ve talked about growth mindset. What others? What would be the six skills and behaviors that you would say are imperative for a manager or a leader in a remote or hybrid working environment? We know great leaders embrace agility. The ability to pick stuff up, put it down, turn left, turn right, apply things quickly, fail fast, learn from mistakes, all ties into that growth mindset piece that we talked about earlier. Great leaders are compassionate. They really understand the needs of their people, and they spend time understanding what those are, rather than just moving on to the task constantly. Great leaders have empathy. You can put yourself in the shoes of others. We all have a challenging year ahead of us. We’ve all had quite a few challenging years behind us, whether that’s the pandemic, whether that’s the cost-of-living crisis, whether that’s anything else: health, wealth, family, whatever it might be. We’re all people first, at the end of the day. Empathy is one of the number one requirements of a great leader.

Autonomy: allowing people to make decisions through purposeful work. Allowing people to feel that they’ve got autonomy. Allowing people to feel that they’re not being controlled or commanded. That ties into the micromanagement stuff we talked about earlier. Flexibility, which we’ve talked about a little bit so far, and growth mindset: enabling people, within a structure or within a set of boundaries, to be able to make mistakes, to achieve, to do things their way rather than your way, because who knows, your way might not always be the best way for that person. Think about those skills and behaviors. Again, I just want you to do a bit of a self-audit against these. Where are you right now? If you want to score the 0 to 10 again, go ahead and do that. Again, if you sit at about a 5 or a 6, what does it take to get to a 7 or an 8? If you scored yourself a 9 or a 10, that’s fabulous. What can you do to support other people and other leaders, or people that you might be mentoring in that space? I want us to think about what we’ve learned over the last couple of years as well. Keep thinking about that great leader, the best leader that you’ve worked with. Keep thinking about the experiences that you’ve had growing up in your career and working with different leaders and managers.

Personalization (Post-Pandemic)

I’m just going to introduce you now to some data from Dave Ulrich. Dave Ulrich is an HR thought leader who pretty much invented the HR business partner structure. The question that he put to the HR practice worldwide, post-pandemic, was: what do you think or hope will be the intended and lasting legacy of the 2021 people organizational crises? He means the pandemic, and everything else that happened around it that just won’t fade. What do we think is the one thing that we can really hold on to as leaders and managers and HR, that people really embraced, and that changed from a working conditions and environment perspective, which felt very different compared to anything before? He goes on to say: advancing the digital revolution, redefining work boundaries to include virtual work, increasing social citizenship with an emphasis on diversity, harnessing uncertainty through agility, renewing relationships with family and friends, managing emotional resilience. For me, I hope the lasting legacy is personalization. What he’s saying there is that he hopes organizations, and managers, and leaders all take what our teams need, whether that’s the home work-life balance, whether that’s a way of working, whether that’s diversity, whether that’s special needs, whether that’s being neurodivergent, whether that’s anything else in between, being able to bring your whole self to work with your different preferences, and for organizations to be able to offer that up and provide that personalization. That’s easy to say in itself. It can almost just start with a conversation with each of your team members to understand what personalization means to them. What can you do as a leader or a manager to support that?

The Choice Framework

Here’s a little framework that I’m offering up to you called the choice framework. It’s about having a conversation with your team members and enabling them to make choices around how, where, and when they work. Going back to hybrid and remote, particularly hybrid, there might be times where, from a customer need or an employee need perspective, you might need to be together on-site, you might need to be face-to-face. It might be that a customer needs an on-site meeting. It might be that one of your employees needs you face-to-face. That is a core reason why you would spend the time and effort being face-to-face in the office. Same with coaching and feedback sessions. You might choose to make the effort to spend some time in a room together, building that relationship and trust in-person. That’s not to say that you can’t do coaching and feedback over Zoom, or Teams, or digitally, because you absolutely can. Let’s think about trust, connection, and body language with that stuff. If you’re co-creating or curating perhaps a piece of software, or the outcomes for a project, or kicking off a new project, for example, you might want to be in a room together collaborating and connecting. That could also be networking or having development sessions. Your colleagues and your team, from a preference or well-being perspective, might want to be in a room with you as their manager, needing that connection and that face-to-face. Those are just some ideas to think about. Use these words and this framework as a prompt for conversation with your team members. You could score it. You could use it as an opportunity just to prompt conversation. Think about what that means. It gives you some specifics to be able to hook that conversation on to. I hope that helps.

Technology, and Digital Collaboration

How does all this show up? Let’s get specific. Let’s think about hybrid meetings. Let’s think about when you need to connect the team: you might have some people in different parts of the country, or different parts of the world in different time zones. You might have some people in the office. You might have some people in a different office together. Think about things like development and hybrid meetings, connecting, having those conversations, and using the technology that we have right now to be able to do that. It’s not just a case of us all getting on Teams and having a conversation. Let’s get back to the why. According to a global report from a leading software company, more than half of respondents describe digital collaboration as useful, but rarely engaging, impactful, or crucial, which indicates that the technology used now is far from helping teams reach their full potential. We think some of this is because it’s a different skill set and mindset for managers to be able to use technology in a way which speaks to everybody. Before we go into the detail of that, I just want to ask you to ask yourself three questions when it comes to your online meetings or hybrid meetings with your team or with your customers. Question number one, how do you know this meeting is engaging for everyone, for the people who like to talk and the people who don’t like to talk? How do you know? Question number two, what impact will this meeting have? Is it just switching people off while they’re doing their emails in the background, or are you genuinely inspiring the change and landing the message that you’re looking to land? Question three, how crucial is it that you can use the technology as part of the meeting? We’re going to deep dive into that a little bit now.

As a leader or a manager, you’ve got an opportunity to be able to speak to all of your people through great digital technology. We know that tech can offer less dominant and more reflective people opportunities. For those who don’t like to just take up the air time by opening up their mics and answering all your verbal questions, there are other opportunities to engage more introverted team members. What can we do in this space when it comes to meetings and technology? I love this quote about empowering the quieter half of our teams: “One of the most unremarked advances of the online revolution,” so that’s hybrid working, remote working, “is that we now hear loudly from the quieter half of the population.” I think that’s fantastic, because you know what it’s like. We’ve all been in rooms where the extroverts who like to take up the air time are the people who answer all the questions and perhaps contribute the most. The introverts need to go away and reflect a little bit more, and perhaps we don’t always follow up with them. We miss 50% of the thinking, and the engagement, and the contribution sometimes if we’re not conscious about our practice. What digital tools can do is start to support us as hybrid and remote leaders and managers in this space. What digital tools can you use, or do you use, to collaborate effectively online as a leader and a manager? Have a think about it. How do you use those tools, for example?

Here are a couple of examples that you might use. We’ve got lots coming around the metaverse. We’ve already got lots of practice in Microsoft Teams, Zoom, Google, Oculus, Miro, Mentimeter, Poll Everywhere, and Slack channels; there’s a huge amount of technology. A lot of this is free, and it lets us engage and interact with our teams, because that’s what we talked about: engagement and interaction. That’s what you need to dial up in terms of your remote leadership and your hybrid management, that communication, that checking in with people, making sure people are ok. I’m not talking about micromanagement. I’m talking about people skills here. A really easy example that you could use for some of your team’s meetings, if you use Microsoft Teams (other platforms such as Zoom and Google also have this capability), is online polling. You can set up polls to ask any question that you want. Here are just a couple of examples on the slides right now. Rather than just having verbal conversations with your teams over online platforms, make it interactive. Enable the quieter half of your team to think about their contribution and submit it in a different way, that nonverbal contribution. Again, word clouds, emojis, yes or no questions, open questions, list your ideas. These are already available in Microsoft Teams and in websites like Poll Everywhere. You could use whiteboards or Jamboards; Google Jamboard is absolutely free, for example. With Mentimeter or Slido, you can put questions in. There are so many different pieces of software we can now use. This isn’t just learning and development stuff, this is great stuff for your team to engage with, and for it to feel different and interactive and new. This is what’s going to drive energy and motivation and productivity in your teams.

Practical Tips to Support Hybrid and Remote Teams

As we start to wrap up, I want you to start thinking about what you will do next or differently to support your hybrid and remote teams. We’re going to move on to some practical tips and tricks. I want you to think about what we’ve covered so far, and what are the things that you’re going to take away and implement straightaway? Let’s look at our practical tips. Make sure everybody feels comfortable using the tech in these situations, and use digital tools to your advantage. It might be that you’re all in-person, or you’re hybrid, or you’re all remote, but make sure people have got the time and the investment to use whatever platforms you’re using. Coach people, support them, give them training. Don’t just assume that people can use this stuff, because you don’t get the benefits realization out of a full system implementation unless you train people and give them skills. That goes for you as a remote and hybrid leader as well. Make time for social connection, share stories, and keep going with those check-ins. You might have a Monday morning huddle, for example, or a weekly huddle; just make that conversation personal. Don’t go into task. Don’t go into work. Don’t go into the day-to-day. Maybe ask one of your team members to chair, and alternate every Monday. Get them to pose a question that enables you to learn something different about each of your team members. Have that conversation, have laughter, have fun, rather than just going straight into the task as you might do normally.

Challenge yourself around your team dynamic and about staying visible, whether that’s virtually or in-person. It could be that you have a preference for being on-site, or you think that people should spend more time in the office. Or your preference might be more remote, and you might not always put your camera on, or you might not travel because there’s a travel ban in your organization, or whatever it might be. Visibility isn’t just being in-person or just being on the screen; it’s phone calls. It’s WhatsApp. It’s check-ins. It’s making sure people are ok, and asking them what they need from you. That’s what we mean by visibility. These check-ins are super important. You need to make sure people feel included. You need to prescribe this, not just assume the organic is going to happen, because we’re not having those corridor conversations or those coffee conversations that we used to have. Ask for feedback: what’s working well, and what could be even better? That WWW/EBI (what went well, even better if) model works really nicely, because people can give you a positive slant and then a suggestion for something even better, something different next time. Maybe ask what’s going well and what could be even better next time, because people might not always feel that they can offer up suggestions and feedback, so actively ask for that.

Demystify terminology and make information accessible. Don’t just assume that everybody knows exactly what you’re talking about all the time. This is core in very technical roles. If we’re working with other technical people, we will assume people get it. You just need to check in to make sure people are feeling ok. How do you access information? Is it easy to access? Is it in the right format? Do people have different needs to be able to access information? Are they neurodivergent? How do you know? Have you had the conversation? Again, this is where we talk about getting to know people as people. As a hybrid and remote leader, dialing up that conversation, dialing up that trust and support is even more important. What I’m generally talking about here is a digital-first mindset. When I say that, I’m not saying we’re just going technical, we’re just going digital technology. What I’m talking about is using digital to support your leadership and management skills, and to support your team’s engagement, motivation, and productivity, because that’s what you’re here to do.

Summary

What have we learnt? I want you just to think about, what have you learned? Again, what is it that you’re going to be implementing? Let’s just summarize. According to Forbes, following the pandemic, 97% of employees didn’t want to go back to the office full time. This means it’s critical for employers and leaders and managers to dial up those hybrid, modern ways of working, to hire and retain talent in the future. This is exactly what we’re talking about. This transition has had an impact on skills and mindset for managers and leaders. You need to be thinking about how you maximize productivity and performance in these working environments. Everything we’ve talked about: those skills, that mindset, using technology, trust and fear, micromanagement, support, leading by choice, managing through outcomes; think about all of that. I want you to really assess where you are and what your biggest learning has been. What is the one thing that you’ve learned that you’re going to take away as a hybrid or remote leader or manager and put into place?




Article: Has Your Architectural Decision Record Lost Its Purpose?

MMS Founder
MMS Pierre Pureur Kurt Bittner

Article originally posted on InfoQ.

Key Takeaways

  • When software architecture evolves through a series of decisions, development teams need a way to keep track of the architectural decisions they have made. They usually record each decision as an Architectural Decision Record (ADR).
  • The boundary between architectural and other significant decisions is often ill-defined.  
  • All architectural decisions are significant (using the cost of change to measure significance) but not all significant decisions are architectural.
  • Just because something is time-consuming to change does not make it architectural.
  • Architectural decisions involve the fundamental concepts the system uses because the code implications of the choices are scattered throughout the software rather than being localized. 
  • Teams who need to record significant decisions should create a separate Significant Decision Record to avoid overburdening their ADR with other decisions. 
     

Architectural Decision Records (ADRs) are important vehicles for communicating the architectural decisions a development team makes about a system. Lacking a clear definition of what is architectural, and also lacking anywhere else to record important decisions, they can start to drift from their original purpose and lose focus and effectiveness.

ADRs are intended to expose architectural decisions, and in so doing improve transparency and accountability. But when an ADR becomes bloated with every decision a team makes, it becomes the antithesis of that transparency, because the architectural decisions can’t easily be seen amidst everything else that’s been thrown in.

Why are ADRs needed?

In a previous article, we observed that in a dynamic software development approach, in which solutions evolve over time (e.g. agile), software architecture is defined by a set of decisions about how the system will handle quality attribute requirements. This is in contrast to an up-front architectural approach in which the architecture is defined primarily in a software architecture document.

An ADR makes architectural decisions transparent, helping the development team to clarify what it is doing and why, and to preserve that reasoning for people who will support and enhance the system in the future.

When software architecture evolves through a series of decisions, themselves born of hypotheses and experiments that test those hypotheses, development teams need a way to keep track of the architectural decisions they have made.

Over time, some of these decisions may change, and the development team needs a way to easily examine the decisions. Even when they don’t change, future teams will need to understand the choices and trade-offs that were considered so they can make better decisions about the evolution of the system.  
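
For readers who have not worked with one, here is a minimal sketch of an ADR in the lightweight format popularized by Michael Nygard; the project details are invented purely for illustration:

ADR 012: Use an event log as the system of record for orders

Status: Accepted
Context: Orders must be auditable and replayable, and several services need a consistent view of order history (quality attribute requirements: auditability, recoverability).
Decision: Represent each order as an immutable, append-only stream of events rather than as mutable rows updated in place.
Consequences: Current state must be derived by replaying or projecting events; reporting requires read models; storage grows monotonically and needs a compaction strategy.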

What is the purpose of an ADR?

There isn’t a simple answer to this. Authors writing on the subject agree that it should document significant decisions, and some go further to say architecturally significant decisions. This sounds reasonable, but it’s hard to come to a consensus on what is significant, and even harder to decide what is architectural.

Many teams, lacking a place to record any sort of significant decision, put any decision they consider significant into an ADR, diluting the architectural aspects and turning an ADR into an “Any Decision Record”. Doing so overloads the ADR with lots of decisions that should not be there, and whose presence merely makes the really significant architectural decisions harder to see. So to decide what to put in an ADR, we have to decide what is architectural, and that’s not as easy as it might seem.

What is “architectural”?

Recently, we’ve caught ourselves in one of those “Princess Bride” moments in which Iñigo Montoya, played by Mandy Patinkin, says to Vizzini, played by Wallace Shawn, “You keep using that word … I do not think it means what you think it means.”

We throw around the word “architecture”, at least in the software context, and we act like we know exactly what it means. But when we more closely examine the concept, we have a hard time pinning down exactly what this means. In this article, we try to more clearly define what we think is architectural and what is not, and provide more explicit criteria for making the decision.

The term “architecture”, as it is applied to software development, is actually part of the problem; it’s the wrong metaphor. Architecture in the physical world is concerned largely with usability and aesthetics. What we call “software architecture” is more similar to structural engineering, which is mostly concerned with how a physical system resiliently handles loads.

Similarly, the art of “software architecture” is to anticipate the loads of a software system and to design for them. A key difference is that structural engineering is based on a vast body of knowledge based on thousands of years of experience, reinforced by scientifically derived laws of physics and mathematical models of those laws. Software is nothing like this. It is encoded thought, and there are few standard approaches to solving problems once we look beyond certain kinds of algorithms.

A good starting place for understanding software architecture is this observation by Grady Booch that contrasts architecture with design:

“All architecture is design, but not all design is architecture. Architecture represents the set of significant design decisions that shape the form and the function of a system, where significance is measured by cost of change.” (Grady Booch on Twitter).

The important parts of this observation are:

  1. Architectural decisions are costly to reverse, and
  2. Architectural decisions define the fundamental character or “shape” of the solution, which we interpret as the fundamental approach to solving the problems defined by the set of the system’s quality attribute requirements (QARs) – see Chapter 2 in “Software Architecture in Practice” for a deeper discussion.

If a decision does not involve both of these aspects, it’s our view that it should not be in the ADR. Let’s examine that assertion more closely.

What kinds of change are costly?

Some decisions are costly to change, but they aren’t necessarily complex, and by complex we mean “intellectually challenging” or “likely to really mess things up if you make the wrong decision.” In other words, rewriting code isn’t complex; the real complexity arises in rethinking the concepts behind the code.

To illustrate, some decisions are expensive to change but not very complex, such as:

  • Redesigning the user interface for an application. Even when using a UI framework, changing visual metaphors can be time-consuming and expensive to modify, but it is rarely complex so long as the changes don’t affect the fundamental concepts the system deals with.  
  • Exchanging one major component or subsystem with another of equivalent functionality. An example of this is switching from one vendor’s SQL database to another vendor’s SQL database. These changes can take work, but conversion tools help, as does staying away from proprietary features. So long as the new component/subsystem supports the same fundamental concepts as the old one, the change doesn’t alter the architecture of the system.
  • Changing programming languages may not even be architecturally significant so long as the languages support the same abstractions and programming language concepts. In other words, syntax changes aren’t architecturally significant, but changes to fundamental concepts or metaphors are.

With the appropriate conversion tools, these kinds of decisions might not actually be very costly. It used to be that rewriting a user interface was expensive but modern UI design tools and frameworks have made this sort of work relatively inexpensive. Deciding what UI framework, SQL database, or programming language to use is an implementation detail, not an architectural decision. Those are all significant decisions but they do not rise to the level of architectural decisions.

Even the cost criterion of “architecturally significant” boils down to “the shape of the solution”.

What does “the shape of the solution” mean?

The “shape of the solution”, for us, means the fundamental data structures and algorithms the system uses to solve its problem. Extending the observation about SQL databases above to provide an example, while the choice of a specific SQL database may not be architecturally significant, changing from using rows and columns to represent fundamental concepts to using tree structures or unstructured data is. The algorithms to search, sort, and update these different kinds of representations are very different, with different strengths and weaknesses, so the choice will dramatically affect the system’s ability to satisfy its QARs.

Speaking more generally, architectural decisions, for us, have the following characteristics:

  • They involve the fundamental concepts the system uses, and its key abstractions as represented in the data structures (e.g. classes, types, …)  it uses to share information across the entire system, and even between systems.
  • They also involve the way these data structures are used, i.e. the fundamental algorithms that access and manipulate the data structures.
  • Any change to data structures used to represent the fundamental concepts of the system affects the algorithms that use those data structures, and any changes to algorithms change the data structures that they use, as the sketch after this list illustrates.
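
To make that coupling concrete, here is a tiny Java sketch; the types, names, and representations are purely illustrative and not drawn from any particular system:

import java.util.List;

public class ShapeSketch {
    // Flat, tabular representation: the "find" algorithm scans rows.
    record Row(String key, String value) {}

    static String findFlat(List<Row> rows, String key) {
        for (Row row : rows) {
            if (row.key().equals(key)) return row.value();
        }
        return null;
    }

    // Tree representation of the same concept: "find" becomes a different
    // algorithm, and every caller that assumed row semantics is affected.
    record Node(String key, String value, Node left, Node right) {}

    static String findTree(Node node, String key) {
        if (node == null) return null;
        int cmp = key.compareTo(node.key());
        if (cmp == 0) return node.value();
        return findTree(cmp < 0 ? node.left() : node.right(), key);
    }
}

Changing from the row representation to the tree representation forces findFlat to be rewritten as findTree, and that kind of ripple, multiplied across a whole codebase, is what makes the decision architectural.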

Architecture, then, establishes limits on the kinds of problems a system can solve, and even, sometimes, on the ability of developers to see different kinds of solutions by establishing a kind of hammer-nail blindness to alternatives. Changing architectural decisions means changing the fundamental concepts the system deals with, and the way that system works with those concepts.

In addition to algorithms and data structures that represent key concepts, other choices play critical roles in shaping the architecture, including, for example:

  • Changes to the messaging paradigm – e.g. synchronous to asynchronous
  • Changes to response time commitments – e.g. non-real-time to real-time
  • Changes to concurrency/consistency strategies, e.g. optimistic versus pessimistic resource locking
  • Changes to transaction control algorithms – e.g. fail/retry strategies
  • Changes to data distribution that affect latency
  • Changes to cache coherency strategies, especially for federated data
  • Changes to security models, especially the granularity of security access when it extends to individual objects or elements.

Ultimately all of these choices turn into code that isn’t simple to change because the code implications of the choices are scattered throughout the software rather than being localized. If something can be localized and encapsulated, it’s typically not architectural because it can be changed without the impacts of the change rippling throughout the code.

Architecture and Decision Longevity

Sometimes the expected longevity of a decision causes a team to believe that a decision is architectural. Most decisions become long-term decisions because the funding model for most systems only considers the initial cost of development, not the long-term evolution of the system. When this is the case, every decision becomes a long-term decision. This does not make these decisions architectural, however; they need to have high cost and complexity to undo/redo in order for them to be architecturally significant.

To illustrate, a decision to select a database management system is usually regarded as architectural because many systems will use it for their lifetime, but if this decision is easily reversed without having to change code throughout the system, it’s generally not architecturally significant. Modern RDBMS technology is quite stable and relatively interchangeable between vendor products, so replacing a commercial product with an open-source product, and vice versa, is relatively easy so long as the interfaces with the database have been isolated.  The architectural decision is the one to localize database dependencies and abstract vendor-specific interfaces, not the choice of the database itself.
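
As a minimal Java sketch of that localization decision (all names here are hypothetical), callers depend only on an abstraction, and vendor-specific SQL is confined to a single class:

import javax.sql.DataSource;

// The architectural decision: callers depend only on this abstraction.
interface OrderRepository {
    String findCustomerId(String orderId);
}

// Vendor-specific details live only here, so swapping databases means
// changing this one class rather than code scattered across the system.
final class JdbcOrderRepository implements OrderRepository {
    private final DataSource dataSource;

    JdbcOrderRepository(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public String findCustomerId(String orderId) {
        String sql = "SELECT customer_id FROM orders WHERE id = ?";
        try (var connection = dataSource.getConnection();
             var statement = connection.prepareStatement(sql)) {
            statement.setString(1, orderId);
            try (var results = statement.executeQuery()) {
                return results.next() ? results.getString(1) : null;
            }
        } catch (java.sql.SQLException e) {
            throw new IllegalStateException("Order lookup failed", e);
        }
    }
}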

Where decision longevity does come into play in an ADR is in addressing sustainability and resiliency. Sustainability involves a system being able to respond to an unknown set of future events of unknown probability and unknown impact. Resiliency is the ability of that system to resist failure when those events occur. What people mean when they say something is sustainable is that they believe that the system will be able to handle everything they can conceive of, and even things they cannot.

When do architectural decisions need to be made?

In the old days, a team would create a Software Architecture Document early in the development of a system, and that document would guide the development of the system throughout its lifecycle.

When a team uses an agile approach to the development of a system, the ADRs collectively take the place of the Software Architecture Document as they incrementally document the architecture to support the incremental development of the system. We have described in previous articles how a Minimum Viable Architecture (MVA) evolves in parallel with Minimum Viable Product (MVP) increments. In practical terms, this means that teams will make architectural decisions over time as they evolve the solution. Unlike the decisions in a Software Architecture Document, the decisions documented by the ADRs are not made up-front and all at once.

What’s the harm in using ADRs for all major decisions?

In brief, it muddies the waters, making the real architectural issues harder to see. Doing so makes discussions about fundamental decisions harder because the issue isn’t clear, especially if the implications of a decision are not fully spelled out.

Teams still have a need to record non-architectural decisions for a variety of reasons, many of which boil down to having a record of what was decided and why, in case someone needs to explain or justify it later. Sometimes a decision is classified as “architectural” in order to record it in an ADR, for the lack of a better place to record it.

Things that ADRs should not be used for:

  • Promoting reuse. Some people, managers especially, see the ADRs as a means to enforce reusability. They want to see common components and subsystems reused because they think this will lower cost and simplify development. This is only true, however, if the reused components and subsystems are fit for purpose and result in a better solution.

    When they are not, they convolute designs and make the architecture worse. We’ve all probably had the experience of struggling to make the “company standard” work when it’s not the best solution for the problem at hand. The result is usually increased cost and reduced resilience.

    Promoting reusability is one aspect of knowledge sharing across a development organization, but there are better ways to promote this knowledge sharing than gumming up the ADRs with lots of information about the potential reuse of designs and code. We think it is better to make an ADR a clear record of the problem the team is trying to solve and their reasons for choosing their approach.

  • Responsibility deflection (CYA). Some teams believe that by putting a decision in an ADR they can absolve themselves from the consequences of that decision should it prove wrong. The more people who see and explicitly or tacitly approve an ADR, the more the responsibility for making a poor decision is diluted. There is, they think, safety in numbers.

    Punishing people for bad decisions is a sign of a toxic management culture. Development teams make the best decisions they can with the information available to them at the time. When they learn more, often through building and deploying the system, some of these decisions will change. The key to reducing the cost of decisions that change is to build the system in small increments and to frequently test hypotheses. Criticizing past decisions is demoralizing and unproductive. If teams don’t have to fear being blamed for their decisions, they can focus on building better solutions by experimenting instead of using an ADR to inoculate themselves from blame.

  • Recording non-architectural product decisions. This often happens because teams make important decisions all the time, but if they lack a place to record them they are going to put them in an ADR. Misusing the ADRs in this way makes the architecture harder to perceive: if every decision is architectural, no decision is architectural. Put another way, an ADR that turns into the “Any Decision Record” has lost its purpose.

    There is an easy remedy to this: simply keep a log of important decisions that are not related to architecture. Architectural decisions are usually only understandable to developers, while records of most other important decisions have a much wider audience. Keeping them separate often makes everyone happier.

Conclusion

Even though all architectural decisions are important, not every important decision is architectural. Creating separate records of architectural decisions and other important decisions helps to improve communication across organizations. ADRs contain technical discussions that are not usually of broad interest, and keeping them separate makes the architecture of a system easier to understand.

ADRs, if kept focused on architecture, provide an understanding of the evolution of a team’s thought processes as they balance choices and trade-offs. Decisions are never wrong; they are just an indication of the team’s thinking at a point in time. Yes, the team may choose different approaches as they learn more over time, but having a record of this evolution is useful to preserve. Seeing how the team’s thinking has evolved provides insights into current and future trade-offs.

In software architecture, there are often no perfect solutions, only “less than perfect” alternatives that need to be balanced. Being able to see these choices more clearly helps current and future teams better understand the trade-offs they may have to make.




NGINX Modules Can Now Be Written in Rust

MMS Founder
MMS Claudio Masolo

Article originally posted on InfoQ.

NGINX announced the availability of the ngx-rust project, which allows developers to write NGINX modules in Rust. The Rust programming language has emerged as a powerful and popular choice due to its stability, security features, rich ecosystem, and strong community support.

NGINX is a high-performance, open-source web server and reverse proxy server software that powers a significant portion of the internet’s websites. Originally created by Igor Sysoev in 2002, NGINX has since evolved and gained widespread popularity in web hosting, content delivery, and application deployment. It is known for its performance, scalability, and versatility, making it a crucial component for serving web content and managing internet traffic efficiently.

The three principal functions of NGINX are:

  • Web Server: NGINX primarily operates as a web server, handling HTTP and HTTPS requests. It can serve static web content such as HTML files, images, and JavaScript, making it an essential component for hosting websites and web applications.
  • Reverse Proxy Server: NGINX can work as a reverse proxy server, serving as an intermediary between client requests and backend servers. It is often deployed to distribute incoming requests across multiple backend servers, ensuring load balancing and fault tolerance. This is particularly valuable in high-traffic environments.
  • Load Balancer: NGINX can act as a load balancer, distributing incoming network traffic across multiple servers. This ensures that servers don’t get overloaded, optimizing the use of resources and providing a seamless experience to users.

Originally, ngx-rust was created to expedite the development of an Istio-compatible service mesh product with NGINX. However, the project remained dormant for some time, during which the community actively engaged with it, forking the repository and creating their own projects based on the Rust bindings examples provided by ngx-rust.

More recently, F5’s Distributed Cloud Bot Defense team required the integration of NGINX proxies into its protection services. This necessitated the development of a new module. At the same time, F5 aimed to expand its Rust portfolio and improve the developer experience to meet evolving customer needs. With internal innovation sponsorships and collaboration with the original ngx-rust author, F5 revitalized the ngx-rust project. This revival involved publishing ngx-rust crates with enhanced documentation and improved build ergonomics for community usage.

NGINX relies on modules as the fundamental building blocks that implement most of its functionality. Modules also empower NGINX users to customize its features and support specific use cases. Traditionally, NGINX supported modules written in C, but advancements in computer science and programming language theory have opened the door for languages like Rust to be used for NGINX module development.

To get started with ngx-rust, you can choose to build from source locally, contribute to the ngx-rust project, or simply obtain the crate from crates.io. The ngx-rust README provides guidelines for contributing and local build requirements. While ngx-rust is still in its early stages of development, F5 plans to enhance its quality and features with community support.

The ngx-rust project comprises two key crates:

  • nginx-sys: This crate generates bindings from the NGINX source code, automatically creating the foreign function interface (FFI) bindings with bindgen.
  • ngx: The main crate implements Rust glue code, APIs, and re-exports nginx-sys. Module writers interact with NGINX through ngx symbols, and the re-export of nginx-sys eliminates the need for explicit import.

The process of initializing a workspace for an ngx-rust project involves creating a working directory, initializing a Rust project, and setting up dependencies:

cd $YOUR_DEV_FOLDER
mkdir ngx-rust-howto
cd ngx-rust-howto
cargo init --lib
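
From there, you would add the ngx crate from crates.io as a dependency; a minimal sketch (choosing and pinning an appropriate version is left to you):

cargo add ngx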

Creating a Rust module involves implementing the HTTPModule trait, which defines the NGINX entry points, including postconfiguration, preconfiguration, create_main_conf, and more. A new module only needs to implement the functions necessary for its specific task. The following code is an example of a postconfiguration method implementation:

struct Module;

impl http::HTTPModule for Module { 
    type MainConf = (); 
    type SrvConf = (); 
    type LocConf = ModuleConfig; 

    unsafe extern "C" fn postconfiguration(cf: *mut ngx_conf_t) -> ngx_int_t { 
        let htcf = http::ngx_http_conf_get_module_main_conf(cf, &ngx_http_core_module); 

        let h = ngx_array_push( 
            &mut (*htcf).phases[ngx_http_phases_NGX_HTTP_ACCESS_PHASE as usize].handlers, 
        ) as *mut ngx_http_handler_pt; 
        if h.is_null() { 
            return core::Status::NGX_ERROR.into(); 
        } 

        // Set an Access phase handler; howto_access_handler is defined in the
        // full example in the ngx-rust-howto repository referenced below
        *h = Some(howto_access_handler); 
        core::Status::NGX_OK.into() 
    } 
} 

More example code and implementations are available in the ngx-rust-howto repository.

With the introduction of the ngx-rust project, NGINX is embracing the Rust programming language, providing developers with a new way to write NGINX modules. This initiative aims to enhance NGINX’s capabilities and offer developers a safer and more ergonomic way to work with the web server. Cloudflare has also started to use Rust for NGINX module development, as reported in a detailed blog post.




Foreign Function & Memory API to Bridge the Gap Between Java and Native Libraries

MMS Founder
MMS A N M Bazlur Rahman

Article originally posted on InfoQ.

After its review has concluded, JEP 454, Foreign Function & Memory API, has been promoted from Targeted to Integrated for JDK 22. This JEP proposes to finalize this feature after two rounds of incubation and three rounds of preview: JEP 412, Foreign Function & Memory API (Incubator), delivered in JDK 17; JEP 419, Foreign Function & Memory API (Second Incubator), delivered in JDK 18; JEP 424, Foreign Function & Memory API (Preview), delivered in JDK 19; JEP 434, Foreign Function & Memory API (Second Preview), delivered in JDK 20; and JEP 442, Foreign Function & Memory API (Third Preview), delivered in the release of JDK 21.

Improvements since the last release include: a new Enable-Native-Access manifest attribute that allows code in executable JARs to call restricted methods without the use of the --enable-native-access flag; the ability for clients to programmatically build C function descriptors, avoiding platform-specific constants; improved support for variable-length arrays in native memory; and support for multiple charsets in native strings.

This API is designed to improve the interaction between Java and native code, offering a more efficient and safer way to access native libraries and manage native memory. The API consists of two main components:

Foreign Function Interface (FFI): This part of the API allows Java programs to call functions written in native languages like C and C++. It abstracts away much of the boilerplate code required in the Java Native Interface (JNI), making it easier to write and maintain code that interacts with native libraries.

Memory Access API: This component provides a set of tools for interacting with native memory. It includes features for memory allocation, deallocation, and manipulation of native data structures. The API also provides safety checks to prevent issues like buffer overflows and common pitfalls when dealing with native code.

The Foreign Function & Memory (FFM) API, part of the java.lang.foreign package, generally consists of a few core classes:

Linker: This interface provides mechanisms to link Java code with foreign functions in libraries that conform to a specific Application Binary Interface (ABI). It supports both downcalls to foreign functions and upcalls from foreign functions to Java code.

SymbolLookup: This interface is used for retrieving the address of a symbol, such as a function or global variable, in a specific library. It supports various types of lookups, including library lookups, loader lookups, and default lookups provided by a Linker.

MemorySegment: This interface provides access to a contiguous region of memory, either on the Java heap (“heap segment”) or outside it (“native segment”). It offers various access operations for reading and writing data while ensuring spatial and temporal bounds.

MethodHandle: This class serves as a strongly typed, directly executable reference to an underlying method, constructor, or field. It provides two special invoker methods, invokeExact and invoke, and is immutable with no visible state. A MethodHandle for a foreign function can be obtained via Linker::downcallHandle() and then invoked like any other method handle.

FunctionDescriptor: A function descriptor models the signature of a foreign function. A function descriptor is made up of zero or more argument layouts and zero or one return layout. A function descriptor is used to create downcall method handles and upcall stubs.

Arena: This interface in Java controls the lifecycle of native memory segments, providing methods for their allocation and deallocation within specified scopes. It comes in different types—global, automatic, confined, and shared—each with unique characteristics regarding lifetime, thread accessibility, and manual control.
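
Before the fuller radixsort example below, here is a minimal sketch of how these classes compose for a single downcall, here to the standard C library function strlen. The class name is illustrative only, and the sketch keeps the preview-era method names used elsewhere in this article (such as allocateUtf8String); method names shifted between preview rounds, so check the JDK 22 Javadoc for the finalized spellings.

import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

public class StrlenExample {
    public static void main(String[] args) throws Throwable {
        // Use the native linker and its default SymbolLookup to locate strlen
        Linker linker = Linker.nativeLinker();
        MemorySegment strlenAddress = linker.defaultLookup().find("strlen").orElseThrow();

        // Describe the C signature: long strlen(const char*)
        MethodHandle strlen = linker.downcallHandle(
                strlenAddress,
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

        // Allocate a C string off-heap; the confined arena frees it on close
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment cString = arena.allocateUtf8String("Hello");
            long length = (long) strlen.invoke(cString);
            System.out.println(length); // prints 5
        }
    }
}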

For example, here is Java code that obtains a method handle for a C library function radixsort and then uses it to sort four strings which start life in a Java array.

import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;
import java.util.Arrays;

public class RadixSortExample {
    public static void main(String[] args) {
        RadixSortExample radixSorter = new RadixSortExample();
        String[] javaStrings = {"mouse", "cat", "dog", "car"};

        System.out.println("radixsort input: " + Arrays.toString(javaStrings));

        // Perform radix sort on input array of strings
        javaStrings = radixSorter.sort(javaStrings);

        System.out.println("radixsort output: " + Arrays.toString(javaStrings));
    }

    private String[] sort(String[] strings) {
        // Find foreign function on the C library path
        Linker linker = Linker.nativeLinker();
        SymbolLookup stdlib = linker.defaultLookup();
        MemorySegment radixSort = stdlib.find("radixsort").orElseThrow();
        MethodHandle methodHandle = linker.downcallHandle(radixSort, FunctionDescriptor.ofVoid(
                ValueLayout.ADDRESS, ValueLayout.JAVA_INT, ValueLayout.ADDRESS, ValueLayout.JAVA_CHAR
        ));

        // Use try-with-resources to manage the lifetime of off-heap memory
        try (Arena arena = Arena.ofConfined()) {
            // Allocate a region of off-heap memory to store pointers
            MemorySegment pointers = arena.allocateArray(ValueLayout.ADDRESS, strings.length);

            // Copy the strings from on-heap to off-heap
            for (int i = 0; i < strings.length; i++) {
                MemorySegment cString = arena.allocateUtf8String(strings[i]);
                pointers.setAtIndex(ValueLayout.ADDRESS, i, cString);
            }

            // Sort the off-heap data by calling the foreign function
            methodHandle.invoke(pointers, strings.length, MemorySegment.NULL, '\0');

            // Copy the (reordered) strings from off-heap to on-heap
            for (int i = 0; i < strings.length; i++) {
                MemorySegment cString = pointers.getAtIndex(ValueLayout.ADDRESS, i);
                cString = cString.reinterpret(Long.MAX_VALUE);
                strings[i] = cString.getUtf8String(0);
            }
        } catch (Throwable e) {
            throw new RuntimeException(e);
        }

        return strings;
    }
}

To run the above code, developers need to install JDK 22, which can easily be downloaded through SDKMAN.

Traditionally, handling off-heap memory in Java has been a challenge. Before, Java developers were confined to using ByteBuffer objects for off-heap memory operations. However, the FFM API introduces MemorySegment objects, permitting more control over the allocation and deallocation of off-heap memory. Moreover, MemorySegment::asByteBuffer and MemorySegment::ofBuffer methods further strengthen the bridge between traditional byte buffers and the new memory segment objects.
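
As a minimal sketch of that bridge (the class name and the value written are arbitrary, for illustration only):

import java.lang.foreign.*;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BufferBridgeExample {
    public static void main(String[] args) {
        try (Arena arena = Arena.ofConfined()) {
            // Allocate one int's worth of off-heap memory, freed when the arena closes
            MemorySegment segment = arena.allocate(ValueLayout.JAVA_INT);

            // View the native segment as a ByteBuffer; both share the same memory
            ByteBuffer buffer = segment.asByteBuffer().order(ByteOrder.nativeOrder());
            buffer.putInt(0, 42);

            // A byte buffer can also be wrapped back into a memory segment
            MemorySegment wrapped = MemorySegment.ofBuffer(buffer);
            System.out.println(wrapped.get(ValueLayout.JAVA_INT, 0)); // prints 42
        }
    }
}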

The FFM API aligns with the java.nio.channels API, providing a deterministic way to deallocate off-heap byte buffers. This negates the need to rely on non-standard, non-deterministic techniques, such as invoking sun.misc.Unsafe::invokeCleaner, paving the way for a more reliable and standardized approach to memory management.

The enhancements in the FFM API are a step towards making the Java platform safer, more efficient, and interoperable. The focus is not only on facilitating Java-native interactions but also on safeguarding them. The API provides a more seamless, Java-idiomatic approach to working with native libraries, offering a solid alternative to JNI’s complexities and safety issues.

The API is expected to revolutionize how Java interacts with native libraries, and it aligns with the broader Java roadmap that aims to make the platform safer and more efficient out-of-the-box. For developers working with Java and native libraries, this is an exciting development that promises to simplify complexities while ensuring safer and more efficient code.


JAX London 2023 Discusses Java Trends and AI Impact

MMS Founder
MMS Karsten Silz

Article originally posted on InfoQ. Visit InfoQ

For the tenth time, Java fans attended JAX London in the first week of October. The 2023 edition featured 43 sessions and keynotes by 37 speakers, organized in four tracks over two days, and surrounded by six full-day workshops on two additional days. This was slightly fewer than the 46 sessions from JAX London 2022. An Artificial Intelligence track was prominently featured with four talks and a keynote. JAX London was a hybrid conference that streamed all talks to remote participants.

Apart from AI, other tracks included: Core Java & Languages; Cloud, Kubernetes & Serverless; Microservices & Modularization; Software Architecture & Design; Agile, People & Culture; DevOps & CI/CD; and Serverside Java. There were also just four exhibitors this year, fewer than in 2022. Devoxx Belgium ran in parallel this year, which may have explained a lower-than-usual attendance at JAX London.

The session videos were available to conference participants for free during the conference week. With a four-day conference ticket, or the additional video recording package, or full-stack access to the conference organizer’s devm.io learning platform, the session videos will be available for six months.

The first keynote was “The Team is The Real Product” by Jason Gorman. He said that businesses see teams as a means to deliver software by a deadline and an expensive liability afterward. However, a Harvard Business study states that 95% of all software projects fail in the market. So, Jason argued, teams are the real product then – and delivering software builds and grows them. Or, as Jim McCarthy wrote in the book “The Dynamics of Software Development“: “The end of software development is software developers.” Keeping and developing stable teams benefits the business: Stable teams can learn faster. That means their organizations can change directions quicker and test more new products to compete better.

Ted Neward delivered the second keynote: “What International Relations Can Teach You About Development“. He studied international relations, which included political science, geography, history, economics, law, sociology, psychology, and philosophy. This diverse background helped him work with legacy projects, understand the startup community, talk to users, and generally work with a team. According to Ted, “soft” skills are anything but “soft” – they are difficult, nondeterministic and fuzzy. And hard skills aren’t. That makes soft skills hard to grasp for developers who prefer binary logic. He also argued that it’s not enough to know the history, one also has to apply it. Finally, he discussed the OODA loop (Observe, Orient, Decide, Act) in projects and businesses: Applying it is not enough, but the speed of going through the OODA loop is important.

The third keynote, “Paving the Road to Effective Software Development” by Sarah Wells, discussed the tension between developer autonomy and standardization. Having autonomy is a key motivational factor for developers. Yet, it carries higher risks and costs for organizations because it increases the number of technologies. Guardrails are one countermeasure: They mandate specific outcomes and features, such as security and performance. But, verifying compliance with guardrails is a challenge. That’s why Sarah proposed an internally developed platform that developers actually want to use. She built such a platform while heading operations at the “Financial Times.” Most developers prefer such a “paved road” platform, though they can still seek their own way through the woods if they wish. Sarah suggested these principles for building such a paved road: Build what people need, own and support things long term, don’t make people wait, make things easy to use, allow people to extend and adapt, and help people do the right thing.

The last keynote by Kevin Goldsmith, “The Inspiring Synergy between Software Developers and Emerging Technologies“, addressed the timely question of whether AI will put developers out of work. Kevin put this question into historical context with thirteen “Things That Were Supposed to be the End of Software Development,” starting with Automatic Programming in the 1940s through IDEs in 1991 to AI (twice: 2005 and 2020). So far, all increases in software development productivity have only led to more software, more sophisticated software, and more developers. Kevin didn’t know if it would be the same this time around. But he urged developers to adopt and use AI: “AI won’t take your job. It’s somebody using AI that will take your job,” Richard Baldwin stated at the World Economic Forum 2023. He suggested using AI to automate the easy stuff, creating documentation and explanation, replacing Stack Overflow, and adding ML-based features to applications. He also suggested AI for Low code/No code: prototyping, developing simple applications, and empowering non-technical teams.

Sebastian Meyen, Chief Content Officer at S&S Media Group, was happy to answer some questions from InfoQ.

InfoQ: S&S Media organizes the Java conferences JAX London, W-JAX Munich, JAX Mainz, and many other conferences. What’s your view on the popularity of on-site conferences today compared to 2019?

Sebastian Meyen: We are organizing 25 conferences a year in Germany, London, Singapore, New York, and, from next year, also in San Diego. For the JAX universe, we added the JAX Software architecture in NY this year. Plus, of course, there are around 100 training events in Germany.

Since the middle of 2022, the popularity of the onsite conferences has been high again. We can see a decrease in the number of remote participants. Also, the participants prefer to stay longer in the conference and enjoy the full scale of the conference. Our workshops have high demand.

InfoQ: You continue to organize hybrid conferences that stream the on-site talks to remote viewers. How has remote participation changed over the last couple of years?

Meyen: We consolidated our conference apps into our platforms, Entwickler.de and devm.io, over the past two years. We provide optimized conference experiences for software developers and extend the learning experience. So besides unlimited knowledge, online live events, etc., the participants find all relevant information on the Platform. We offer the remote participants almost the same experience as the onsite participants. The stream quality is high, and we have a lot of great features.

We have seen a few significant changes. There are fewer remote speakers from the conference location which means more international participants. Teams have been split between onsite and remote. We have also seen that many onsite participants use the system during the conference.

InfoQ: How can you mimic the planned meetings and random conversations of on-site conferences for remote participants?

Meyen: Some things you cannot transfer into the hybrid world. We do have chat functions, Zoom workshops during the sessions, and let participants ask the speaker through the app. But what happens near the coffee table is the only thing we can not mimic.

InfoQ: The JAX conferences now have 30-minute breaks between sessions, more than the usual 20 minutes. What was the inspiration for this change?

Meyen: We see that participants need some time to recap and reflect. The session breaks are designed to have a short pause, chat and network. There has been a lot of positive feedback on the conference page. People go fresh into the next session.

InfoQ: Besides organizing conferences, S&S Media also sells training, publications, and books. Developers already have abundant free technical content on the Internet. Now, generative AI can answer technical questions quickly and even write code. How does S&S Media plan to stay relevant in such a world?

Meyen: That is a good and important question. We offer people the opportunity to become better in their field – our goal and drive over more than 20 years. We are the training and education partner for teams. Our Platform curates articles and technical content written by international experts. We offer more than 80 different topics in one place. It’s all reachable and accessible, easy to find. The user does not need to search long on various websites, forums, or anywhere. It’s all in one place. Additionally, we have live online sessions with experts once or twice a week.

Generative AI is great for solving problems. You can be faster and improve your work quality. Also, our developers use it a lot, and we use machine learning tools in our Platform. Still, it can’t replace your knowledge. Using AI correctly or evaluating the quality of the result is still a matter of personal skills.

JAX London will return in the first week of October 2024 to London’s Business Design Center, which also hosts the Devoxx UK conference.


Google Open-Sources AI Fine-Tuning Method Distilling Step-by-Step

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

A team from the University of Washington and Google Research recently open-sourced Distilling Step-by-Step, a technique for fine-tuning smaller language models. Distilling Step-by-Step requires less training data than standard fine-tuning and results in smaller models that can outperform few-shot prompted large language models (LLMs) that have 700x the parameters.

Although LLMs can often perform well on a wide range of tasks with few-shot prompting, hosting the models is challenging due to their memory and compute requirements. Smaller models can also perform well when fine-tuned, but that requires a manually created task-specific dataset. The key idea of Distilling Step-by-Step is to use an LLM to automatically generate a small fine-tuning dataset that contains an input and an output label, as well as a “rationale” for why that label was chosen. The fine-tuning process trains the small model both to predict the output label and to generate the rationale. When evaluated on NLP benchmarks, the small fine-tuned models outperformed the 540B PaLM model while requiring only 80% of the benchmark’s fine-tuning data. According to Google:

We show that distilling step-by-step reduces both the training dataset required to curate task-specific smaller models and the model size required to achieve, and even surpass, a few-shot prompted LLM’s performance. Overall, distilling step-by-step presents a resource-efficient paradigm that tackles the trade-off between model size and training data required.

Research has shown that increasing the number of parameters in an LLM can improve its performance, with current state-of-the-art models such as PaLM having hundreds of billions of parameters. However, these large models are expensive and difficult to use at inference time, as they require multiple parallel GPUs simply to hold the parameters in memory. Recent efforts have produced slightly smaller models, such as Meta’s Llama 2, that can perform nearly as well with an order of magnitude fewer parameters; however, these models are still quite large and compute-intensive.

One way to get a smaller model that performs well on a certain task is to fine-tune a smaller language model with a task-specific dataset. While this dataset might be relatively small—on the order of thousands of examples—it may still be costly and time-consuming to collect. Another option is knowledge distillation, where a large model is used as a teacher for a smaller model. InfoQ recently covered such a technique developed by Google that uses a PaLM LLM to create training datasets, producing fine-tuned models that performed comparably to LLMs that were 10x larger.

Distilling Step-by-Step does require a fine-tuning dataset, but it reduces the amount of data needed to create a high-performing model. The source dataset is fed to a PaLM LLM via a chain-of-thought prompt that asks the model to give the rationale for its answer. The result is a modified fine-tuning dataset that contains the original input and answer as well as the rationale. The smaller target model is fine-tuned to perform two tasks: answer the original question and generate a rationale.

Google evaluated their technique using four NLP benchmarks, each of which contains a fine-tuning dataset. They used Distilling Step-by-Step to modify these datasets and fine-tune T5 models with fewer than 1B parameters. They found that their models could outperform baseline fine-tuned models while using only a fraction of the dataset; as little as 12.5% in some cases. They also found that their 770M parameter model outperformed the 700x larger 540B parameter PaLM on the ANLI benchmark, while needing only 80% of the fine-tuning dataset.

In a discussion about the work on X (formerly Twitter), AI entrepreneur Otto von Zastrow wrote:

These results are very strong. I would call it synthetic data generation, not distillation, and I am really curious to see what happens if you train the original LLM on this synthetic rationale per sample question.

The Distilling Step-by-Step source code and training dataset are available on GitHub. Google Cloud’s Vertex AI platform also offers a private preview of the algorithm.


Nvidia Introduces Eureka, an AI Agent Powered by GPT-4 That Can Train Robots

MMS Founder
MMS Daniel Dominguez

Article originally posted on InfoQ. Visit InfoQ

Nvidia Research revealed that it has created a brand-new AI agent named Eureka that is driven by OpenAI’s GPT-4 and is capable of teaching robots sophisticated abilities on its own.

With the help of this new AI agent, robots can be taught intricate feats, such as spinning pens, in a manner akin to how people learn. Robots can learn through trial-and-error reinforcement learning thanks to Eureka’s reward algorithms, which are created using generative AI and large language models like OpenAI’s GPT-4.

This method has been shown to be almost 50% more effective than conventional human-authored programs, according to a paper written by Nvidia. According to a post on Nvidia’s official blog, Eureka has been effective in teaching robots to perform a variety of activities, such as opening drawers, using scissors, catching balls, and more.

Anima Anandkumar, senior director of AI research at NVIDIA, said:

Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process.

According to the research, Eureka-generated reward programs outperform expert human-written ones on more than 80% of tasks because they allow robots to learn through trial and error. The robots’ performance improves by an average of more than 50% as a result.

The AI agent uses the GPT-4 LLM and generative AI to write the code that rewards robots during reinforcement learning. It doesn’t require task-specific prompts or predefined reward templates, and it readily accepts human feedback to adjust its rewards toward outcomes that are more closely in line with a developer’s goal.

Eureka’s major innovation is the fusion of language models’ capacity for pattern detection with simulation technologies such as Isaac Gym. Eureka effectively “learns to learn” by fine-tuning its own reward algorithms over a number of training cycles, even taking human input into account.

This research complements recent innovations from Nvidia Research, such as Voyager – an AI agent powered by GPT-4, capable of independently engaging in Minecraft gameplay.


MongoDB (NASDAQ:MDB) PT Lowered to $440.00 – Defense World

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB (NASDAQ:MDB) had its target price trimmed by KeyCorp from $495.00 to $440.00 in a research report released on Monday, Benzinga reports. The firm currently has an overweight rating on the stock.

Several other research firms have also recently issued reports on MDB. Citigroup increased their price target on MongoDB from $430.00 to $455.00 and gave the stock a buy rating in a research note on Monday, August 28th. Morgan Stanley raised their target price on MongoDB from $440.00 to $480.00 and gave the company an overweight rating in a research note on Friday, September 1st. VNET Group reiterated a “maintains” rating on shares of MongoDB in a research note on Monday, June 26th. Barclays raised their target price on MongoDB from $421.00 to $450.00 and gave the company an overweight rating in a research note on Friday, September 1st. Finally, Needham & Company LLC raised their target price on MongoDB from $430.00 to $445.00 and gave the company a buy rating in a research note on Friday, September 1st. One research analyst has rated the stock with a sell rating, three have assigned a hold rating and twenty-two have assigned a buy rating to the stock. Based on data from MarketBeat, the stock currently has a consensus rating of Moderate Buy and an average price target of $415.46.

MongoDB Price Performance

Shares of MDB opened at $342.28 on Monday. MongoDB has a 12 month low of $135.15 and a 12 month high of $439.00. The company has a market cap of $24.42 billion, a P/E ratio of -98.92 and a beta of 1.13. The stock has a fifty day moving average price of $359.19 and a 200 day moving average price of $341.13. The company has a debt-to-equity ratio of 1.29, a current ratio of 4.48 and a quick ratio of 4.48.

MongoDB (NASDAQ:MDB) last released its quarterly earnings results on Thursday, August 31st. The company reported ($0.63) earnings per share (EPS) for the quarter, topping the consensus estimate of ($0.70) by $0.07. MongoDB had a negative return on equity of 29.69% and a negative net margin of 16.21%. The company had revenue of $423.79 million during the quarter, compared to analysts’ expectations of $389.93 million. On average, research analysts forecast that MongoDB will post -2.17 earnings per share for the current year.

Insider Activity

In other news, CAO Thomas Bull sold 518 shares of the stock in a transaction dated Monday, October 2nd. The stock was sold at an average price of $342.41, for a total transaction of $177,368.38. Following the transaction, the chief accounting officer now owns 16,672 shares of the company’s stock, valued at approximately $5,708,659.52. The sale was disclosed in a legal filing with the SEC. Also, Director Dwight A. Merriman sold 2,000 shares of the firm’s stock in a transaction that occurred on Tuesday, October 10th. The shares were sold at an average price of $365.00, for a total value of $730,000.00. Following the completion of the sale, the director now directly owns 1,195,159 shares in the company, valued at approximately $436,233,035. In the last quarter, insiders have sold 187,984 shares of company stock valued at $63,945,297. Insiders own 4.80% of the company’s stock.

Institutional Trading of MongoDB

A number of hedge funds and other institutional investors have recently added to or reduced their stakes in MDB. GPS Wealth Strategies Group LLC bought a new stake in shares of MongoDB in the 2nd quarter worth approximately $26,000. KB Financial Partners LLC bought a new stake in shares of MongoDB in the 2nd quarter worth approximately $27,000. Capital Advisors Ltd. LLC lifted its position in shares of MongoDB by 131.0% in the 2nd quarter. Capital Advisors Ltd. LLC now owns 67 shares of the company’s stock worth $28,000 after acquiring an additional 38 shares during the period. Parkside Financial Bank & Trust lifted its position in shares of MongoDB by 176.5% in the 2nd quarter. Parkside Financial Bank & Trust now owns 94 shares of the company’s stock worth $39,000 after acquiring an additional 60 shares during the period. Finally, Coppell Advisory Solutions LLC bought a new stake in shares of MongoDB in the 2nd quarter worth approximately $43,000. Institutional investors and hedge funds own 88.89% of the company’s stock.

MongoDB Company Profile

MongoDB, Inc. provides a general purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.


Presentation: What API Product Managers Need

MMS Founder
MMS Deepa Goyal

Article originally posted on InfoQ. Visit InfoQ

Transcript

Goyal: My name is Deepa Goyal. I’m a product management strategist at Postman. I’m going to be talking about what API product managers need. In order to start thinking about APIs as products, I’ll be sharing some frameworks to understand how do you identify the right stakeholders across API product lifecycle? How do you establish user research strategy, understanding developer journey to develop that empathy for your users? Also, we will be thinking about API maturity-based distribution model. Then going into what makes up for a great API experience and looking at all the components. Also, how do we develop effective feedback loops so that we can collect customer feedback and improve and iterate upon our APIs.

Mapping the API Producer and Consumer Lifecycle

Let’s talk about the API producer and consumer lifecycle. The API producer is the team that is producing an API. The consumer, as the name suggests, are the teams or customers who are integrating with your APIs and building their applications. API producers are generally teams, and not just teams of developers, but the lifecycle of the API producer actually spans over multiple teams. The first step being defined, which is where you’re defining what your API is going to be about, who is it for. This is where a product manager writes the requirements and user stories. Then you get to the design phase, where your architects and your team leads jump in to understand those requirements and convert them into an API design. Then you start the development process. Even before you develop, you have to think about what you’re trying to build, who you’re building it for. Those steps happen in the define and design phase before jumping into develop. Then you get into testing. You think about security, where security teams will help you evaluate your APIs in terms of making sure that they meet the security guidance of your organization. You should also be thinking about deployment generally taken care of by SRE teams. Then you get into implementing observability, and establishing monitors and analytics. It’s very important to measure. As you release any product, you should be measuring the results. You should think about observability before the release instead of later. Then, at that point is where you are ready to distribute your API, make it available to your audience.

It’s really important to understand that distribution is not just putting your API out there, as like it’s now available on your site. It’s actually a process that is handled across multiple teams, such as sales and marketing, developer relations being a big factor of it. In terms of API experience components that we’ll be talking about soon, in addition to documentation, there’s also video content, or blogs, and things like that, that go into making your APIs available, and known, and discoverable to your customers. Once your APIs are available and your consumers can actually discover them, then that’s where the consumer lifecycle begins, which is where they go from discover to evaluate. As a customer, you discover a set of APIs, let’s say you’re thinking about payment APIs, and you want to integrate with some payment APIs. You discover that there’s quite a few of them in the market. Then you come to evaluation phase where you try to explore. Your developers are going to maybe do prototyping with a few of those, and then try to identify which one’s right for you. That is the evaluation phase. As you go from evaluation, once you have identified that, ok, these are the set of APIs that’s right for me, right for my application and my use case, then you get into integration. During the integration phase is where your development happens. This is generally followed by testing. You want to make sure integration works. You go through use cases, different scenarios that your customers are going to walk through. For example, in case of payments, it could be that you process a payment, and then your customers end up asking for a refund. What happens? How does that integration work? In the scenarios where the customer does a stop payment, what happens? You try to test all these scenarios. Then you deploy.

Once you deploy, you want to make sure you’re observing your integration. In case of payments, for example, or payment integration, you want to make sure all your payments are going through. Throughout this experience, you want to make sure that you get the support you need from the producer of those APIs that you’re integrating with. That there is a developer community that has information you need in case you get stuck, and there is enough resources for you to debug your integration if you need to. In case you get stuck, you’ll be able to reach out and provide feedback to the company that’s producing the APIs. It’s very important that that feedback loop exists. From the producer standpoint, it’s very important that that feedback loop is designed in a way that helps the customer in a way that is time sensitive. Also, make sure that that feedback across many customers is available as an input to the next iteration of your APIs, so that you are able to empathize with the customers and your APIs will evolve over time to better address customer needs.

Stakeholders Across API Producer Lifecycle

One of the very fascinating things that I find about APIs is how it has evolved from engineering teams, and engineers have led the development of APIs for a long time. Most of the times, a majority of APIs tend to be internal and not public facing. At the same time, now that a lot of companies are starting to think about publishing APIs, monetizing APIs through public facing APIs or partner specific, customer specific APIs, the need for thinking of APIs as products has really come to the front. It’s become really important that we think about various aspects of what that means. One of the important things that really impacts a product is to make sure that the stakeholders across the product lifecycle, in this case, API lifecycle are identified, and that they are able to collaborate in effective production of the product. This is where the role of product comes in in terms of API development.

In this diagram, the blue items in the API producer lifecycle are business functions. When we traditionally did not have a lot of product management in terms of building APIs, now it’s wrapped in more business focused initiatives and thought process. In terms of when we think about APIs, just jumping into developing something, which is very developer-first way of developing APIs, we have to take a step back and approach it with a more design led way of developing APIs. That’s where you start with a product manager leading the definition and requirements of the API that needs to be built. You bring in software architects, and designers, and PMs, who collaborate on the design of the API. Then the actual development begins. This also ensures that in a large organization, you have done the due diligence of making sure you don’t have redundant APIs, that your APIs meet a certain quality criteria, and you have thought about the prerequisites of what is needed, and you’re being more mindful of your developer bandwidth and not just building things that you have to later deprecate. There’s more thought put in upfront before the APIs are actually built.

This is also an opportunity to shift security and testing left where you can think about what will the security needs be for the API. I recommend bringing in security teams as early as the design phase to actually get your feedback on, what are the criteria that a particular API needs to meet in terms of being more secure for your end users? Since your consumers are going to be very focused on security, and security is a very top of mind aspect of how customers evaluate different APIs to integrate with in applications. When we think about development, you have the developers for testing, you have test engineers. For security, you have information security teams. Deployment is generally a different team of release management. Then you come to observability, where you bring in the SRE, DevOps, your platform teams. You could probably even bring in data analysts to help you build dashboards, depending on what metrics you want to establish for measuring your APIs. When you start to think about go-to market, you need to start involving your product marketing manager, who will drive the distribution of your APIs. As you can see, these are different teams that are involved, who are owners of different steps of the API producer lifecycle. As your APIs go through different stages of the lifecycle, all these different owners have to work together and provide their input, set the standards. Generally, the product manager has to be the one who owns the progress of the API across all of these steps. Although they are the originator and they’re part of the definition, they are the ones coordinating and collaborating with all these owners to get the APIs to the finish line.

This is also the same framework you can apply to API governance, because governance is applicable at every step of the producer lifecycle. Because you need certain standards that an API must meet to be considered ready for the next phase. You need a certain level of quality, of definition, to actually jump into design. You need a certain level of design to be able to start development. Owners of each of these steps are the people who should be setting those standards. There should be some SLAs that you establish for those owners to approve and sign off on those steps. If the design doesn’t meet the security standard, for example, then it needs to be worked on further before development can begin, for example. You need to establish API governance with the lens of API producer lifecycle, so that you are making sure that you’re building high quality APIs for your end customers. Another thing that we should be thinking about is supportability. Supportability is another aspect of like, when you start to think about distributing your APIs, do your customers have a way of getting support from your team? How do you distribute? We will be looking at some distribution models.

Creating a User Research Strategy

First, let’s begin with trying to understand the customer journey and develop some customer empathy. The first thing that we need to do when we think about building any product is user research. Let’s say you’re building a new set of APIs, then you need to identify who your customers are, who is your target audience. There are a lot of different approaches you can take to understand your customers. Some of the most straightforward ways is to start thinking about customers in terms of customer segments. For example, are you thinking of making an API that’s for individual developers building applications, or are you thinking about small to medium businesses? You could be thinking about targeting enterprise audience. You could segment in terms of industries. Is your API specifically for FinTech payments, or is it more for healthcare? You can do user research to identify the right audience, and understand what are the use cases that you’re trying to serve, way before you actually start building your API. This user research although should be ongoing even after you build your APIs, but you should really think about what are the different use cases you’re trying to address. What is the customer segment you’re trying to address? Once you do that, you can go deeper to understand your customers better.

Once you’ve identified that, for example, I’m going to build these APIs to address an SMB market with payment APIs. Then, you try to understand that SMB market, who needs payment APIs a little better, by trying to understand maybe using customer interviews, surveys, and various other aspects to understand, for example, what do developer teams look like in SMBs? How big are they? Do they usually have three or four developers? What skills do they have? Do they mostly write Python, or JavaScript? What kind of tools they use. What does that tech stack look like? For example, a very large team can probably build an integration really quickly. Versus if your customer is a startup and the CEO is doing some coding, then tools like a lot of copy paste code might help them better. Understanding the team structure, the skills on the team, the tech stack, things like that, as you go deeper into the target audience will help you decide what tools you can provide your customers to help them integrate better with your APIs.

This leads to also a very important decision is, are your APIs going to be public, private, or internal? Internal being, it’s only available to your internal developers within your organization. If that’s the case, then your total addressable market is the size of your organization, the number of developers you have. If it is private APIs or limited to certain customers, then that is your TAM, that is your total addressable market. If it is going to be a public API, then it is going to be based on the target segment that is the potential size of your audience, which is your total addressable market. That would help you understand how big your audience is, how much potential use your APIs are going to get, which will help your design team and developer team gauge the scale that you would need for those APIs. It would also help you manage APIs in terms of what kind of experience you need to build around those APIs.

Customer Journey Map

Another important aspect once you have your customers identified, let’s say you have published your set of APIs, this is where you start to build a framework to map your customer developer journey. In terms of your consumer developer journey, the first thing that we talked about a couple of slides ago, is discovery. This is where your customer is asking questions like, how do I find your APIs? Discovery is very important, because how do you publish your APIs? Is it just on your site? Is it available on other tools? For example, on Postman’s public network, do you have a collection that allows people to discover your APIs? Are there blogs, YouTube videos that you have published about your APIs that allows people to discover them? There are different customer touchpoints that help the customer get to the next step in their journey. In terms of discovery, the customer is asking questions like, how do I find your APIs? You address it with touchpoints such as your landing pages that you SEO optimize, your newsletters, case studies, white papers, along with social media content. Then, when the customer goes into evaluation phase where they have looked at a couple of different API solutions out there, maybe your competitor products, this is where they start to look deeper into your documentation. They want to look at your FAQs, pricing pages, because pricing is a pretty important aspect of anybody’s decision-making in terms of if they want to go with the product or not. One of the interesting things about APIs is also how important the developer community is. Because a lot of times, developers, when they’re evaluating APIs, they would look at how strong the developer community is around those APIs. Are there developers who are already integrated with these APIs? If I had a question, is that something people have already run into and described? Are there enough developer generated content out there that can help me? For example, on Stack Overflow, people asking questions. That really helps developers think that, yes, these are tried and tested APIs, and really plays an important role in their evaluation process.

The other thing is also understanding technical dependencies. Making sure your documentation outlines the technical dependencies, or limitations of your APIs. It really helps consumers go from evaluation to actually deciding to go with your APIs and start building their integration. This is where your consumers start to actually do the development work and will probably start reaching out to support. Interestingly, if your support teams are not able to handle these support requests, or they’re not equipped, then this is where you would see a drop-off. Because if as a customer, I run into an issue that does not get resolved, then I cannot move forward. It’s an important aspect of conversion for your customer funnel, if you look at it as a funnel. Once the integration is complete, if your consumer and developers have received enough code samples, tutorials, they have learning resources and ability to get support from your support teams, or within the developer community, then they can get into testing, where they might need tools like testing tools or sandbox offerings that you can build either in-house or use third-party tooling. Then, they can go into deploying their application, at which point, you would start to see the spike in usage. This is where you start to see in your data, some API calls going up, and customers coming online and starting to make API calls as they grow and scale.

Customers are also very interested in observability. That is where as a producer, you can publish things like status pages where you list your recent incidents, or you establish SLAs and you publish change logs to make sure that customers are aware of any changes happening to your APIs. You can also create some ambassador programs where customers can participate and engage better with your teams. At every step of this journey, you should establish feedback loops and also measure customer sentiment. It’s very important to understand that every touchpoint is an opportunity for you to improve customer experience. If you provide better support in different aspects, you can get more customers through the journey smoother, faster, and more effectively. That will drive business value for your APIs.

API Maturity Based Distribution Model

Next, I want to share with you a distribution model. As you start to think about building your APIs and publishing them, this framework will help you think through how you publish them and how you think about distribution. Going through, on the left is the API development lifecycle that we talked about, where you go from define, design, develop, test, secure, deploy, observe, distribute. Once you do that, now in the distribution phase, the questions you should be asking is, are these APIs supposed to be for internal release only? Are they only for your internal developers? Then, once your team has developed them, you release it to your internal audience. Hopefully, you can do that with a limited set of teams. Let’s say you want to publish it to 20 teams across your company, then you do it one by one, to first publish it to maybe one team, get some feedback, make some iterations before you publish it to all 20. Then scale them little by little, so that you’re improving the experience over time. Then, when it is ready, and all teams that you were targeting are ready to use the API, then you’re done. Let’s say you’re publishing partner APIs specifically built for certain customers, then you still release these APIs as internal APIs. You get some internal teams to test it out. A great way to get early testing done is to use an internal audience, like support teams or your sales teams, as power users, because they are much more aware of the customer needs than your developer team. Making sure that they can test out these APIs for you, maybe create a hackathon. Get them tested as internal release before promoting them to a private beta for a partner API, where you get some external customers to start testing them in beta. Then based on feedback, you can iterate until you feel confident that they’re ready to promote to being published partner APIs.

If your APIs are supposed to be targeted for public use, then you go through the same cycle, but first you test it out with internal audience, then you test it out with limited set of customers. That could be a private beta. Once you’ve done the private beta, you can publish a public beta, where you invite or make your product available for external customers, but with the type of beta so that they understand that the APIs might be subject to change, to a point where you feel ready that your APIs are ready to be promoted to general availability. What this does is you are iteratively promoting your APIs through a maturity-based distribution model, so that customers at all times are very aware of what to expect from the APIs. The reason that I recommend you go through this framework of promotion in terms of a distribution model is because APIs are inherently designed for your consumers to build their applications on. This process builds the opportunity for you to learn from the customer feedback, iterate on your APIs before they have built dependency on your APIs. Because once your customers have developed with your APIs, you can’t really change them. It’s very important to build that time that allows you to iterate without impacting customer applications, so that they have transparency into understanding how mature your APIs are, and if they’re ready to be integrated into production applications or not, if they’re experimental. Because general availability is something they would expect that your APIs are going to be stable, meet a certain SLA, and not going to change. It’s very important that you be transparent with your customers, whoever, be it internal, partner, or general audience.

The other thing, also in terms of your stakeholders, like the sales, marketing, developer relations team, they should also be aware of how mature an API is, so that they can prioritize what efforts they are pulling in to actually improve customer discoverability of those APIs. For example, if it is a really awaited feature that customers are waiting for and really excited about, and it’s launched in beta, that might have a much higher set of maybe code samples, or blogs, or something that you publish, and get people on the beta experience. Maybe you want to put in more of your efforts on general availability products, trying to get more content out there for them to help customers who are already evaluating your APIs and integrating with them.

Components of API Experience

Once you have your producer lifecycle, we’ve looked at distribution. The other end is to think about customer experience, which is delivered through things like documentation, API reference, code samples, blogs, videos, and so on. All of these that you see on the left, the first orange box in this diagram should be mapped to maturity. Customers know that if it’s a beta product, it might have, for example, a little less extensive documentation, maybe fewer video blogs, and things like that. If it’s a general availability product, they can expect a certain level of documentation, and stability, and scalability of the product. Making sure that your developer experience components are mapped to an API maturity lifecycle, so that your support teams can prioritize the support tickets coming in from all these different API experience components, and establish SLAs for support, is very important in terms of effective API related operations. Your support operations need to be able to prioritize, establish, and meet those SLAs. Your product team should be part of that process in a way that you’re aware of any support escalations that they have reviewed on a regular basis, so that the role of the product team, or the product manager would be to identify when escalations happen. That, is this a feature gap, or is this a bug, and provide the necessary solution. Let’s say if it’s a feature gap, then is it something that’s on the roadmap? Is it not on the roadmap? Or if it’s a bug, when can it be fixed? Then making sure that in your roadmap, and as it happens, it again goes through the iteration of releases and distribution that we just talked about. All the developer experience components are updated with the new functionality and the new behavior of your APIs, so that that feedback loop of getting information from support back into the product and improve your product iteratively continues.

Creating Effective Feedback Loops

As you think about feedback loops, support team is a product manager’s best friend. It’s very important that we think about how our APIs are serving our customers best. Support teams, since they spend so much time talking to customers are our best way to get that insight. We can get both quantitative and qualitative data from this. You can run customer satisfaction surveys, and get CSAT surveys to give you insights in terms of how customers are feeling. You can run CSAT scores to understand maybe every quarter, or you can segment your population of customers depending on how big your audience is, to have more insight into overall customer experience and how it’s impacting your customers. You can also set metrics like ticket volume to active customer count, because, for example, if your APIs are being used by 100 customers, actively 100 active customers, but then you’re getting 60 tickets, then you’re probably doing something wrong. It’s very important to understand that that ratio is very important. You want to reduce that over time. You can also segment tickets to understand what kind of customers are generally getting issues. Is it like usually small teams or large teams or certain regions, perhaps. There might be some segmentation to understand what is the segment of population that generally runs into issues. You can also segment tickets to understand what issues your customers generally run into so that you can design a better roadmap from product perspective to help prioritize those particular type of issues.

Continuous Iteration Using Data

Ultimately, it’s very important from a product perspective to have metrics to measure every initiative that you have. For APIs, I like to think about metrics in terms of infrastructure metrics, such as uptime, requests per minute, average latency, errors per minute, because that’s your fundamental set of metrics that define how your API works. If your API doesn’t work, then there is no point of measuring anything else in the experience of product metrics. The founding block of your metric should always be the API in terms of the infrastructure itself. Then you have the experience layer on top, which is where you start to measure things like views, session duration, support volume by channel. This is just a small subset of examples. You can build a lot of different metrics across the user journey, do think about acquisition, retention, engagement, and overall experience. You start to measure different aspects of the customer journey to establish how well your API experience is, and also to identify opportunities for improving your experience.

In terms of product metrics, which is built on top of the infrastructure and API experience metrics, you start to think about things like API usage growth, your unique API consumers. It’s very common, where you have a handful of customers driving majority of the usage. In terms of APIs and API products, it’s not necessarily that you have a lot of scale to be successful using a set of APIs. A customer can potentially have just 50 API calls a week and still be very successful and very satisfied with your APIs. Versus a customer can have 5 million weekly API calls and still be very dissatisfied with your APIs. The spectrum is fairly large, and is not necessarily correlated with scale. It’s very important to understand who are the API users, who are the consumers who are driving majority of your usage. Another very important metric is Time to First Hello World, where how quickly can a developer discover and start using your APIs because that tends to be the first step. If that one is full of friction, and customers can never get started with your API, so it’s very important that you make that process the most frictionless so that customers can really check out your APIs quickly, and get started. That’s where it would really help them to get into evaluation and start building with your APIs.

Conclusion

I hope that the frameworks I provided help you build successful APIs and become successful as API product managers. A lot of this information is also available in my upcoming book, “API Analytics for Product Managers,” where I go a little deeper into the analytics and how to measure the success of your APIs.



Data Architect: What You Need to Succeed | Dice.com Career Advice

MMS Founder
MMS RSS

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Data architects are among the more senior members on a data team, with years of experience working with multiple types of technologies. They have a strong understanding of data warehousing, data management systems and data modeling, and they use those abilities to help organizations build out their respective data architectures.

Their technical skills may vary based on the types of systems their organizations use, but all data architects have the same fundamental skills. Let’s break down what you need to learn to become a data architect in today’s environment.

What do you need to know to become a data architect?

Pranabesh Sarkar, senior distinguished architect, data architecture, engineering and governance at Altimetrik, suggests that both technical and soft skills are key for designing an effective data platform.

“A data architect has many responsibilities, starting with understanding the business requirements and converting the same into a pragmatic and scalable data infrastructure,” he says.

That means a data architect needs to be well versed in different database solutions, including the many flavors of NoSQL.

“They need in depth understanding of various NoSQL types and skills to apply the right one for a specific business problem,” Sarkar adds. “Hands-on experience with a variety of data solutions along with data modeling is a must-have skill.”

Sunil Kalra, associate director of data engineering at LatentView Analytics, says it is highly recommended for data architects to have experience with at least one public cloud analytics service.

“These cloud platforms provide a wide array of analytics tools and services that can be leveraged to extract meaningful insights from data,” he says. “Overall, possessing these skills and knowledge is essential for data architects to effectively navigate the complexities of data architecture in today’s data-driven landscape.”

How can I train to become a data architect?

With new technologies and trends constantly emerging, data architects must be at the forefront of whatever is coming next. That means a life of continual education and training.

“Every year technology will change—last year it was moving to the cloud, this year it’s Snowflake,” says Jim Halpin Jr., technical recruiting leader for LaSalle Networks Chicago. “Data architects have to be able to take the initiative to learn the new technologies.”

From his perspective, the best data architects are also never too far from the code and regularly participate in code reviews and audits, keeping a pulse on the specific KPIs in their environment. “They enjoy the technical side, as well as the big picture strategic thinking, and so oftentimes they have an idea where the trends are moving and spend time researching, reading and discussing those topics within their network,” he says.

Beyond networking, conferences, meetups and publications, there are also structured programs including certifications, master’s programs and bootcamps data architects can participate in.

“Azure and AWS have certifications, there are master’s programs in predictive analytics and so many more,” Halpin says. “Most data architects choose what’s most relevant to their roles.”

A range of online training programs can help you learn the intricacies of the data architect career track. Keep in mind that some of these options are quite costly, while others (such as YouTube) are cheap or free, so evaluate any course carefully to make sure it meets your needs and timetable before beginning.

Do you need a degree to become a data architect? Or just skills?

Given the demand for skilled data architects (and the historically low unemployment rates throughout the tech industry), you don’t necessarily need a formal degree to become a data architect, so long as you can convince a hiring manager and/or recruiter that you have the necessary skills for the job.

Keep in mind that any data architect job interview will plunge deeply into your technical experience, with your interviewer asking several questions to gauge your aptitude and experience level with various tools and platforms. For example, you’ll be asked about:

  • Your experience with building out data models.
  • How you’ve ensured data security and integrity when building and managing databases.
  • How you manage external data sources in relation to a database.
  • How you’ve overcome challenges and secured buy-in from stakeholders when planning data architectures.
  • Whether you’ve transitioned datasets from on-premises to the cloud, and how you overcame challenges related to that.
  • Your methods for testing data architectures before release.
  • How you’ve applied strict data governance.

Those are just some of the questions you could be asked; the key is to stay flexible and come prepared with stories that put your skills and experience in the best possible light.

Data architects must also know how to talk business

A data architect must constantly work with different stakeholders in an organization, including the technology team, product management team, and business stakeholders. This means stakeholder management is a key aspect of the data architect profession.

Sarkar says every data platform in an organization is built to drive multiple business outcomes, noting that the data architecture needs to be designed to handle multiple personas and different use cases. “It is important to engage with business teams before the design is initiated to understand the various requirements and expectations,” he explains. “It is advised to approach the solution in an incremental way by incorporating the business use case as part of the data architecture.”

A data architect must also multitask, troubleshooting several complex issues across the data architecture at once. “To succeed, data architects must have a business-oriented mindset with a good understanding of the company objectives and goals,” Sarkar says. “The data architect is instrumental in using technical expertise to minimize platform costs while still delivering performance and scalability.”

Halpin adds that communication and collaboration are crucial skills for a data architect, as they often serve to bridge the gap between technical teams and business leaders. “Data architects must have strong business acumen and a solid understanding of the direction the leadership team wants to take the company,” he says. “They are included in larger management discussions and their input is highly valued.”

This means they know how to tactfully present obstacles and challenges, as well as the ramifications of decisions, both to leadership and to technical users.

As Halpin points out, sometimes data architects are aligned to a specific industry and have deep subject matter expertise in that space: “We see this more in highly regulated industries like healthcare, insurance or banking where there is a lot of compliance and nuances that come with those fields.”

Other key skills include project management, which enables data architects to plan, prioritize and execute ideas on time and on budget. “High levels of initiative to research emerging trends in technology, and strong communication skills to communicate ideas to leadership, as well as get in front of issues and manage expectations of both technical teams and leaders,” Halpin adds, are likewise critical.

Staying up to date with emerging technologies is crucial in a rapidly evolving technical landscape. “Continuously seek information online, follow industry-leading companies’ blogs and newsletters, and actively engage with new technologies through hands-on experiences,” says Kalra, who also recommends maintaining a habit of writing blogs to help stimulate critical thinking, encourage further research, and facilitate continuous learning. 
