Google Cloud Introduces Optimized Rocky Linux Images for Customers Moving Off CentOS

MMS Founder
MMS Renato Losio

Article originally posted on InfoQ. Visit InfoQ

Google recently announced the general availability of Rocky Linux optimized for Google Cloud. The new images are customized variants of Rocky Linux, the open-source enterprise distribution compatible with Red Hat Enterprise Linux.

Developed in collaboration with CIQ, the support and services partner of Rocky Linux, the new images are a direct replacement for CentOS workloads. Started by Gregory Kurtzer, the founder of the CentOS project and CEO of CIQ, Rocky Linux is a downstream, binary-compatible release built using the Red Hat Enterprise Linux (RHEL) source code. The distribution was born after Red Hat decided not to provide full updates and maintenance for CentOS 8 as initially announced.

Google Cloud builds and supports the Rocky Linux images for Compute Engine, with both a fully open source version and one optimized for Google Cloud: this version has the suffix “-optimized-gcp” and uses the latest version of the Google virtual network interface (gVNIC). Clark Kibler, senior product manager at Google, explains:

These new images contain customized variants of the Rocky Linux kernel and modules that optimize networking performance on Compute Engine infrastructure, while retaining bug-for-bug compatibility with Community Rocky Linux and Red Hat Enterprise Linux. The high bandwidth networking enabled by these customizations will be beneficial to virtually any workload, and are especially valuable for clustered workloads such as HPC (see this page for more details on configuring a VM with high bandwidth).
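
For readers who want to try the new images, here is a minimal sketch of launching a VM from the optimized image family with the gcloud CLI; the image family and project names follow Google's published naming for these images and should be verified against the current documentation:

# Create a Compute Engine VM from the Rocky Linux image optimized for
# Google Cloud (image family and project per Google's published naming).
gcloud compute instances create rocky-optimized-test \
    --image-family=rocky-linux-8-optimized-gcp \
    --image-project=rocky-linux-cloud \
    --machine-type=n2-standard-4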

A few months ago, Venkat Gattamneni, senior product manager at Google, announced the partnership with CIQ and promised more integrations with the new distribution:

In addition to CIQ-backed support for Rocky Linux, Google is also working with CIQ to provide a streamlined product experience – with plans to include performance-tuned Rocky Linux images, out-of-the-box support for specialized Google infrastructure, tools to help support easy migration, and more.

Google Cloud is not the only provider supporting Rocky Linux. AWS and Azure also sponsor the Rocky Linux project and offer Rocky Linux images in their respective marketplaces. Kibler adds:

Going forward, we’ll collaborate with CIQ to publish both the community and Optimized for Google Cloud editions of Rocky Linux for every major release, and both sets of images will receive the latest kernel and security updates provided by CIQ and the Rocky Linux community.

The Rocky Linux 8 image optimized for Google Cloud is available for all x86-based Compute Engine VM families. Versions for the new Arm-based Tau T2A VMs and for Rocky Linux 9, the latest generally available Rocky release, are expected soon. Google does not charge a license fee for using Rocky Linux with Compute Engine.



A New Service from the Microsoft and Oracle Partnership: Oracle Database Service for Microsoft Azure

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Recently, Microsoft and Oracle announced the general availability (GA) of Oracle Database Service for Microsoft Azure, a new service that allows Microsoft Azure customers to provision, access, and monitor enterprise-grade Oracle Database services in Oracle Cloud Infrastructure (OCI).

Microsoft and Oracle have partnered since 2019, first delivering the Oracle Interconnect for Microsoft Azure, which allows hundreds of organizations to use secure and private interconnections in 11 global regions. Now both companies have extended their partnership with the GA release of Oracle Database Service for Microsoft Azure, which builds upon the core capabilities of the Oracle Interconnect for Azure and enables any customer to more easily integrate workloads on Microsoft Azure with Oracle Database services on OCI.

Through the Azure Portal, customers can deploy Oracle Database running on OCI with the Oracle Database Service. The service automatically configures everything required to link the two cloud environments and federates Azure Active Directory identities, making it easy for Azure customers to use the service. Furthermore, OCI database logs and metrics are integrated with Azure services such as Azure Application Insights and Azure Log Analytics for simpler management and monitoring.

Source: https://www.oracle.com/cloud/azure/

Jane Zhu, senior vice president and chief information officer, Corporate Operations, at Veritas, said in a Microsoft press release:

Oracle Database Service for Microsoft Azure has simplified the use of a multi-cloud environment for data analytics. We were able to easily ingest large volumes of data hosted by Oracle Exadata Database Service on OCI to Azure Data Factory where we are using Azure Synapse for analysis. 

In addition, Holger Mueller, principal analyst and vice president at Constellation Research Inc., told InfoQ:

It is remarkable as customers brought competitors together – and now Oracle is even better integrated into Azure… practically making Oracle a first-grade citizen in Azure – operating the Oracle DB from an Azure console. This is how multi-cloud should be implemented – so customers win. And they must win…

Furthermore, he said: 

Tacitly it is also the admission by Microsoft that the Oracle DB is better than MS SQL Server and by Oracle that Microsoft PowerBI is better than Oracle Analytics – at least for some customers… and Larry J Ellison is right – it is all about giving customers choices.

Lastly, there are no charges for using the Oracle Database Service for Microsoft Azure, the Oracle Interconnect for Microsoft Azure, or data egress or ingress when moving data between OCI and Azure. Customers will pay only for the other Azure or Oracle services they consume, such as Azure Synapse or Oracle Autonomous Database.



Promoting Empathy and Inclusion in Technical Writing

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Empathy is the first step in practicing sustainable, genuine inclusion. If persons or groups of people feel unwelcome because of the language being used in a community, its products, or documentation, then the words can be changed. Identifying divisive language can help to make changes to the words that we use.

Eliane Pereira and Josip Vilicic, both technical writers, spoke about promoting inclusion in documentation at OOP 2022.

Empathy is a conscious practice where we listen to others and try to feel what they are feeling, Vilicic said. This allows a person to relate to someone else’s experience, even if it’s unfamiliar.

Pereira mentioned that empathy is the key to understanding how others feel in certain situations, even if the situation does not impact you. A word can be just a word for you, but the same word can impact your coworkers and make the work environment unsafe for them.

It takes empathy and willingness to make changes so that everyone feels included, as Pereira explained:

For example, in the engineering field, master/slave is a model of asymmetric communication or control where one device or process, “master”, controls one or more other devices or processes, the “slaves”. For a descendant of slaves who decides to contribute to a project, the perceived racial connotations associated with the terms, invoking slavery, is terrible.

Vilicic mentioned that the most important thing is having support from the top decision-makers in an organization in improving the language. Once there is ideological support, the difficulty of the journey is less important, because we know we have a good goal in mind: improving the language so it doesn’t harm our community, he said.

By using inclusive language, we give everyone the opportunity to be themselves, Pereira argued. We can ensure that we are not using expressions that can be punitive, or make people feel rejected or embarrassed for what they are, she said.

InfoQ interviewed Eliane Pereira and Josip Vilicic about promoting empathy and inclusion in documentation.

InfoQ: What role does empathy play when it comes to inclusion?

Josip Vilicic: If someone says that they are being illegitimately excluded from a community, and the community says this is unintentional, there is a conflict.

Conflict is not inherently bad… but we can respond to conflict in a destructive (defensive or aggressive) manner, or in a constructive (empathetic) manner. If the community actively listens, does not deny the experience that the excluded group shares, and does not want to perpetuate harm, then the only choice left is for the community to fix the inequality through inclusion.

InfoQ: What can be done to encourage and support people in changing the language?

Eliane Pereira: Speak up to show that changes are needed. Listen if you think that some changes are not needed. People are used to the harmful language being present in their daily lives to the point that, when they are asked to change, they will say, “This has been here forever, we don’t need to change it, it is just a word”.

That is why we need to explain why we need those words to be replaced, so people can be aware that those are not just words, but a way to communicate something, and sometimes this something can be derogatory to someone.

InfoQ: How can the use of inclusive language increase psychological safety?

Vilicic: Using inclusive language signals to the audience that we are moving forward in a way that is sensitive, intentional, and kind. This can make teammates feel like they are in a supportive environment, where they can speak freely and they won’t be rejected.

No matter what our professional efforts are, we’re a group of imperfect people working towards a shared goal. When we base our interactions around respecting each other’s humanity, we allow each other to collaborate towards amazing things.

Pereira: Some words have one connotation for us but, for others, can be a pain point in their lives. For example, saying you have ADHD to express that you are having difficulty concentrating on your work can make a colleague who actually has attention deficit hyperactivity disorder (ADHD) feel embarrassed or afraid of being labelled as a worker unable to perform their job. That is why we think it is important to avoid such metaphors when expressing yourself. Saying you are unable to concentrate on work that day is clear enough.



Ant Group Open Sources Privacy-Preserving Computation Framework

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Alibaba’s financial arm Ant Group has open-sourced SecretFlow, its privacy-preserving computation framework, with a specific focus on data analysis and machine learning.

SecretFlow includes a number of components, such as a secure processing unit, which provides secure computation capabilities that guarantee data privacy; a homomorphic encryption unit; a portable implementation of the Simplest Oblivious Transfer protocol; and SecretFlow, a higher-level unified framework integrating all of them. While the high-level SecretFlow module is written in Python, the lower-level modules are written in C, C++, and assembly.


SecretFlow aims to be complete, transparent, open, and interoperable with other technologies. According to Ant Group, the framework aims to make it easier for developers to create applications based on privacy-preserving computing and to contribute to the further growth of the market and technology maturity.

You can install SecretFlow by running pip install -U secretflow. The following snippet shows how you can generate a 3×4 array of random numbers on a specific user’s device in standalone mode:

>>> import secretflow as sf
>>> # Initialize SecretFlow in standalone (simulation) mode with three parties
>>> sf.init(['alice', 'bob', 'carol'], num_cpus=8, log_to_driver=True)
>>> # Create a PYU device bound to the party 'alice'
>>> dev = sf.PYU('alice')
>>> import numpy as np
>>> # Run np.random.rand(3, 4) on alice's device; the result stays with alice
>>> data = dev(np.random.rand)(3, 4)
>>> data

SecretFlow can also be deployed in cluster mode, which enables allocating nodes to specific users to increase privacy. SecretFlow cluster mode is based on Ray, an open source framework that provides a simple, universal API for building distributed applications.
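
As a point of reference, the snippet below is a minimal standalone Ray example, independent of SecretFlow's internals, showing the remote-execution model that cluster mode builds on:

# Minimal Ray example: a remote function dispatched to a (possibly
# distributed) cluster; ray.get collects the results.
import ray

ray.init()  # start a local Ray instance or connect to a cluster

@ray.remote
def square(x):
    return x * x

futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]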

For a quick start with SecretFlow, you can check the tutorials, which present a number of use cases, from data preprocessing to logistic regression and neural network training.

Privacy-preserving computation aims to protect sensitive data while it is processed. Using such techniques, e.g., homomorphic encryption, you can carry out computation over encrypted data, ensuring it cannot be collected or tampered with during processing.
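
To make the idea concrete, here is a toy, self-contained sketch of additive secret sharing, a classic building block of privacy-preserving computation; it illustrates the concept only and is not SecretFlow's implementation:

# Toy additive secret sharing: a secret is split into shares that sum to it
# modulo a prime, so no single share reveals anything about the secret.
import random

PRIME = 2**61 - 1  # field modulus

def share(secret, n_parties=3):
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

a_shares = share(42)
b_shares = share(100)

# Each party adds its local shares, yielding shares of a + b without
# anyone seeing the other party's input in the clear.
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142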



Podcast: Developer Satisfaction Is Key to Engineering Success

MMS Founder
MMS Lilac Mohr

Article originally posted on InfoQ. Visit InfoQ

Transcript

Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture Podcast. Today I’m sitting down with Lilac Mohr. Lilac is an engineering leader at Pluralsight. Lilac, welcome. Thank you for taking the time to talk to us today.

Lilac Mohr: Thanks for having me, Shane.

Shane Hastie: Let’s start a little bit and who’s Lilac, what’s your background?

Introductions [00:25]

Lilac Mohr: As you said, right now I lead software engineering teams that are working on the Flow product at Pluralsight. Flow’s the software delivery intelligence platform. So we look at development workflows holistically. We look at all the different activities that make up an engineer’s day, how engineering teams collaborate, how well they’re doing on predictability of being able to deliver on their commitments, and the general efficiency of the workflow.

And it’s really fun because we get to use our own product and continuously improve the product and the process. And prior to this job I led other engineering teams, but I was also a developer for a very long time. So I’m still very connected to the software engineering side.

Shane Hastie: What are some of the things that get in the way of engineers being productive today?

Challenges engineering teams are facing [01:20]

Lilac Mohr: We try to stay away from the word productivity, we don’t want to measure engineering productivity. We want to really have a holistic view. Make sure that we’re looking at engineers as people, at what makes their day good. At the end of the day ask them the question, “What were the blockers you had today? And then what were things you were really excited about today?”

And as an engineering leader, I do have to talk about deliverables, being able to deliver with excellence and the outcomes, but then it comes down to the people who are really behind the code. Every line of code comes from an engineer, and it comes from a collaborative process. And some of the things that I’ve found have been creating friction in that delivery. Right now I think with the Great Resignation it’s very difficult for me to keep a healthy team because people are leaving, so I think that’s where the culture comes into play and why it’s important. When someone leaves, it puts a big burden on everyone who’s left. So I want engineers to feel like they can keep moving forward and don’t have to take on a lot of extra work.

I think that’s a big challenge that all engineering teams are facing right now. And something we do with our Flow product is we’re always talking to our customers and trying to understand what it takes to have great engineering teams. We ask them the question, “Are your teams set up for success?” And we get a wide variety of answers. It’s actually a difficult question to answer. A lot of times engineering managers focus on the process and the tools and say, “If I have the right tools, if I have an agile process, then my teams are set up for success.”

And that’s not digging deep enough, in my opinion. I think that a lot of times that’s based on an old playbook. Everyone is still talking about agile transformation as they have been the last 20 years. And with all the attrition that we’re seeing, you need to really focus on people. You need to take a broader view of everything that goes into making teams healthy.

Shane Hastie: What does a healthy team look like and feel like today?

What makes a healthy team [03:39]

Lilac Mohr: I think it comes down to developer satisfaction. And actually, I’m really big into metrics, and I think part of that is looking at the data itself, for how they’re able to deliver what their day looks like. But then part of it is also being able to survey them and ask them, “How are you doing? How are you feeling? What’s your confidence level right now?”

So I think it’s a combination of those factors. And I think a healthy team is a team that genuinely likes everyone on the team. They like working together, they like solving problems. They feel empowered to be able to make decisions that are closest to them, the engineers are the ones that are the closest to the code. And they want to feel that they understand the value of the things that they’re producing, and that they’re empowered to be able to make those decisions.

Shane Hastie: As an engineering lead, how can you influence this? How do you create this space?

Leaders enable the culture [04:43]

Lilac Mohr: It all comes down to culture. And I think that visibility is really important. I think that when you have the right metrics and you have the visibility into those metrics, then you let individuals make decisions about their team, what’s best for their team. And I think that drives the culture.

I think that it’s also important to have gratitude and have a culture of constantly acknowledging people’s hard work and their contributions. And not just coming from leadership, but encouraging them to acknowledge each other and have that space where you can provide feedback across teams, up, down, all around. It just becomes part of the culture. An example of how we do this at Pluralsight and at Flow is we have Gratitude Fridays, where on Friday everyone posts the Co-worker of the Week and recognizes someone who just moved the needle a little bit on making their experience that week fantastic. And it’s really fun to see everyone just acknowledging each other.

And I know, doing a skip level review with one of the new engineers, I remember him telling me, “At first I was really sceptical and thought it was really cheesy to do that. But then after someone tagged me and acknowledged me for the help that I provided that week, it felt really good, and I get it now.” I love hearing that type of feedback and seeing people just appreciate each other and enjoy working together as a team.

Shane Hastie: The perspective that you just mentioned from that engineer, “It felt really tacky,” it does kind of sound like that. As an organization, as a leader, as a co-worker, how do you prevent that from becoming tacky?

An example I can give is one organization that I spent some time with used a weekly 15-5, where they asked five questions, 15 minutes, “How are you doing?” And if anyone seemed even vaguely dissatisfied, a very senior leader, out of genuine care and concern, would immediately get hold of that person and say, “What’s going on? How can I help?” But to the engineers this felt intrusive and creepy. So they stopped reporting that things were tough; everything was shiny, so they didn’t get the personal phone call.

Creating trust and enabling openness [07:18]

Lilac Mohr: Yeah, I think you have to create that trust in each other. And I think that a lot of those types of initiatives need to come from within, not from leaders saying, “Hey, this is the great new thing we’re going to do.” Maybe they can get it started and then make sure they have that feedback loop and that ear to the ground to ensure that it’s being carried on by the team members, that they appreciate it and that they share it with each other. And I think that’s important.

For example, like what you mentioned in one-on-ones. Everyone talks about how important it is to get that feedback in one-on-ones and how employees are doing, and making sure as a leader that you can address some of that discontent. But there’s ways to do it that are healthy, and I think there’s ways to do it that feel weaponizing. It’s about creating that safe environment. And being able to ask, if someone shares some information with you about how they’re doing, make sure that you ask them, “Is this okay if I share this with anyone else?” Sometimes you want to keep that relationship private.

I think as humans, when someone says that something’s wrong, you want to immediately react and say, “Well, I want to fix it.” Or you want to make them feel better. Sometimes all you need to do is just validate them and say, “What, if anything, would you like me to do about this? Or do you want me just to be a sounding board just to listen?” And sometimes people say, “Right now I just need you to listen.”

Shane Hastie: One of the things you mentioned earlier on, Lilac, was the importance of metrics and getting that feedback about what is actually happening in the team, the way work is flowing and so forth. Couple of things if I could explore with you there. One, what are some of those important metrics?

The importance of metrics [09:13]

Lilac Mohr: First I’d like to zoom out and just talk about the importance of metrics in general. I think a lot of times engineering leaders are very comfortable with using metrics to measure systems. We want to know how is the database operating, is the system up or down? Things that are less human. And then when they think about their humans they get a little bit uncomfortable about using metrics.

And I think that discomfort probably comes from focusing on a single metric. I think that the holistic approach is what you really need to take, where you’re not looking at just a single dimension. We all know that if you focus on a single metric, that’s what you’re going to get. And sometimes you are optimizing one little piece, but it actually creates tension somewhere else. Or maybe you’re focusing on the wrong thing. A lot of times that’s called the streetlight effect, where maybe you dropped your keys on the way to your car in the dark, but where you’re going to look is under the streetlight because that’s where you can see.

Examples of metrics that can be useful [10:20]

Lilac Mohr: At Flow, we really try to create this holistic view, where we’re looking at engineering metrics down to the primary level, which is your code commits. So making sure that engineers have good habits around code, and those habits need to come with a purpose. The reason that we want to commit frequently is because we don’t want to hold onto code until it gets stale. We want to have that safety to check in the code and then be able to quickly get some peer review on it and get the process moving. We want to make sure that we’re freed up to code frequently. How frequently you commit code is not about how fast you’re working; it gives you an outlet as an engineer to say, “I don’t have time to do my work because I keep getting pulled in these different directions,” or, “I have meetings all day.” And it allows you to have those types of conversations with your leader.

That’s at the primary level: just the things you should be doing, making the time to be able to code and doing frequent commits. And then the next level is collaboration, which I think is the most critical to code quality. A lot of times we think about code quality as “let’s have tools that analyze our code for that quality.” But the quality really comes from helping each other on a team. Doing that peer review and doing it effectively, and being able to get all that work in progress moved through the system.

We take a look at pull requests, at the type of comments you’re getting on pull requests. How long they’re sitting in queues before another pair of eyes takes a look at them. And then also making sure that we’re not just rubber-stamping them as they go through. It’s a great way to learn and to help each other with quality, so that’s the second level. And then the highest level that we track in Flow is the ticket level: these are the things that your team committed to doing in the sprint or in this chunk of time. Are you able to deliver on those, and what are the blockers to flow? Where is that work getting stuck in queues? Where are there unnecessary handoffs, and what else can we improve in that higher-level process?

I think all those things stack on top of each other and help you get an understanding of how, as a team, you’re able to deliver code. An important part of using those metrics is using them to guide discussions. They’re not supposed to be used to punish teams, to make teams feel bad about their process, or to blame anyone. It’s all about being able to identify friction that sometimes your intuition doesn’t realize is there. It gets conversations going. And I think that if they’re framed that way, then those metrics are enabling.

Shane Hastie: How do we ensure, as a manager, how do I make sure that we’re not bringing that blame culture in, that we’re not using those metrics in the ways that would be so easy to do, but as you say, are not right?

Empowering people to own their own improvements [13:49]

Lilac Mohr: I think visibility is the key, in having access to the metrics. We have an engineer who was using Flow at her previous company. And she said that her leader would come in and show the Flow metrics to the team and provide some insight, but they didn’t have access to that data themselves. And she was curious, she was like, “Well, if they’re tracking this information about me, what else does he know? What else is in there?”

I think that what we’re trying to do with our customers is encourage them to let individual contributors have access to their own metrics. And we found that by doing that internally in our own team, a lot of great insights come out. Sometimes what we do is we just say, “Okay, just explore the metrics and come back with one thing. One thing that you found that’s interesting, maybe something unexpected, or something that you want to improve on as an individual or as a team.” And we’ve found amazing things come out of that because it’s open, it’s not coming from engineering managers. It’s not something punishing like, “I found this thing we can improve upon.” It’s asking them to be empowered, to look at their own data and make improvements.

Shane Hastie: One of the things you mentioned earlier on that I’d like to just delve into a little bit more again, if I may. You spoke about how team performance does come when people like each other and work effectively together. How do we prevent that from becoming a monoculture?

Avoiding monoculture [15:28]

Lilac Mohr: I think that’s an important topic. And a lot of times we find our teams do have a subculture. The teams, we let them pick their own names and they’re really proud of who they are. And I think that probably needs to be balanced with the culture for the entire engineering organization, because one of the problems is a lot of times it creates problems with inclusion. For example, an engineer who maybe is older, who’s on a team with younger individuals, they talk about different things. And I’ve heard some of this feedback. They talk about things that he feels he can’t relate to. So how do we create that sense of inclusion on a team and still allow them to feel really close to each other [inaudible 00:16:18]?

Pluralsight does really well because we talk about inclusion and diversity a lot. We just make it … It’s not a special topic we discuss, it’s just part of how we think. So we’re constantly asking, “Am I being inclusive?” And being able to stand up for each other and be able to provide that feedback and say, “I feel that this person was left out of the conversation,” or, “Can we choose an activity that might be more inclusive to do as a group?” And by talking about it a lot it makes it safe. And it makes it something that, when it’s not calling people out, it’s just pointing out something that we all are aiming towards. We all want everyone to feel good.

And I know that personally, as a female software engineer in a very male-dominated industry, I’ve been doing this for about 25 years. So back in the late ’90s, a lot of times I was at startups where I was the only female. Sometimes the only female engineer, sometimes the only female in the entire small organization. And there were a lot of awkward situations where I didn’t feel like I belonged. And coming into Pluralsight and being able to say, “I belong here” was really powerful for me. Because looking back, I didn’t always have that feeling. And I think it comes from just creating the importance around belonging, where everyone really wants everyone else to feel like they should be here. And it’s huge.

Shane Hastie: One of the things that we see, and I’d love to know your experiences around this, becoming an engineering leader, we often take the best technologist and we throw them into a leadership position. And one statistic says 58% of new managers receive absolutely no training.

How do we help people who are transitioning into those leadership roles become leaders? Because the skillsets are very, very different from the individual contributor to the leader.

Helping new leaders [18:30]

Lilac Mohr: It’s definitely a challenge. I know in the Flow organization, we do have that culture of taking the best engineer and saying, “You’re going to be the leader,” and it doesn’t always work out. A lot of times we have seen those leaders go back to being individual contributors. And we make it okay, we make it okay even within the organization to do that because otherwise they would leave the organization to become ICs.

But part of it is making sure that they get the satisfaction they used to get from generating a lot of code by helping their team instead: being able to focus on other high-performing engineers and raise the level of the entire organization. I think it’s just that mind shift. It’s very difficult to let go of code because we enjoy writing code, but how can you be more impactful? Instead of just grabbing tickets and then becoming a bottleneck for your team, how can you coach others to be able to get to the next level with their own careers?

And then also, how can you keep your fingernails dirty in the code, because that’s what you love. Maybe doing some pairing with engineers to help them get their own levels up or be able to be involved in architectural discussions. Ways that you can still help at a technical level, but you’re helping not just in the code, but helping the team move up and improve their own career paths and being able to deliver.

Shane Hastie: In your position as an even more senior leader, what are you doing?

Transitioning into more senior leadership [20:14]

Lilac Mohr: For me, the transition from individual contributor to a leader of one team, to a leader of multiple teams, it was tricky because you’re responsible for more. You know that ultimately, I’m responsible for all these outcomes, but I have to resist the urge to micromanage. I have to trust people.

I try to take that effort, where I want to dig in and be able to know everything that’s going on, and focus it on coaching, on that higher level of enabling people, putting trust in people, and using data. We always come back to the data. The data helps me understand what’s going on with my teams, but it also helps those leaders do the same thing, where they’re not micromanaging, but they’re driving and understanding where they can be helpful in reducing friction on their teams as well.

Shane Hastie: And cycling back to another point that you made that’s hanging over us all at the moment, the Great Resignation. Keeping things flowing through it and retaining great people, how do we do that?

Advice on retaining people [21:27]

Lilac Mohr: The first thing is you need to value your people. It sounds really easy, but a lot of people don’t genuinely care about people at the level that they should. I remember I was interviewing for a leadership role for a very small company, and the owner of the company made some offhanded remark about engineers are a dime a dozen. And I remember having a very strong reaction to that comment and trying to think about it. Was my reaction due to me thinking that engineers need to be treated like special snowflakes? Is it because of my experience with hiring new engineers and how difficult that was?

And I think what it came down to is you can’t have a successful company at any level unless you really value all the individuals there. And you’re not going to have individuals put in their best work unless they know that they’re cared for, that they’re valued. I think it all comes down to that, putting people first. Not thinking about people as resources but as human beings, and I think that that helps with retention.

Shane Hastie: And when we do lose people, as is inevitable in these times, how do we let them go well?

Lilac Mohr: We need, as a group, to celebrate the opportunities that they have, and leaders can set the stage for that. I think that if a leader handles someone leaving well, then the rest of the organization feels like it’s okay; it’s not something that they need to panic about. A lot of times I question my own leadership abilities in different areas; I think everyone does. But some feedback I’ve heard that made me feel really good is that I can be a calming force amidst all the change. I remember talking to one engineer who admitted that they were thinking about leaving. It was about a year ago; they were seriously considering it and were not happy. And they decided to stay because of me, because of a discussion they had with me.

To me, just knowing that even if I saved one person and made them feel like they can have a successful career staying here at Pluralsight, that gives me validation that I did my job as a leader. You can’t save everyone, but if you can make a difference, even in just one engineer’s life, I think it’s worthwhile.

Shane Hastie: And then the flip side of that, how do we bring people into the organization well?

Onboarding people well [24:14]

Lilac Mohr: A lot of times we’re looking in the wrong places. We’re thinking, “Let’s bring in the top talent,” instead of, “How do we bring in really good people and then be able to grow them?” I think that mind shift helps us bring in people who feel like they’re going to belong. We know that they can have good problem-solving skills, that they’re going to work together with their team members, that they’re hungry to learn to solve these problems. And I think those sometimes make the best engineers, especially in the long-term, because then you get to cultivate them and be able to watch them grow and learn.

I think that’s definitely the first step, is thinking about who you’re hiring and why. We know that diverse teams are better performing teams. Bringing in that diversity and then making sure that it’s not checking a box, it’s bringing in the best people, and then making sure they feel like they belong, I think is really playing the game for the long-term. Making sure that you’re building a healthy team, not just for now, but into the future.

Shane Hastie: Lilac, thank you very much for taking the time to talk to us today. If people want to continue the conversation, where do they find you?

Lilac Mohr: They can find me on LinkedIn. And we’re constantly hiring, so don’t hesitate to reach out.

Shane Hastie: Wonderful. Thanks so much.

Lilac Mohr: Thank you.



BigScience Releases 176B Parameter AI Language Model BLOOM

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

The BigScience research workshop released BigScience Large Open-science Open-access Multilingual Language Model (BLOOM), an autoregressive language model based on the GPT-3 architecture. BLOOM is trained on data from 46 natural languages and 13 programming languages and is the largest publicly available open multilingual model.

The release was announced on the BigScience blog. The model was trained for nearly four months on a cluster of 416 A100 80GB GPUs. The training process was live-tweeted, with training logs publicly available throughout for viewing via TensorBoard. The model was trained with a 1.6TB multilingual dataset containing 350B tokens; for almost all of the languages in the dataset, BLOOM is the first AI language model with more than 100B parameters. BigScience is still performing evaluation experiments on the model, but preliminary results show that BLOOM’s zero-shot performance on a wide range of natural language processing (NLP) tasks is comparable to that of similar models. According to the BigScience team:

This is only the beginning. BLOOM’s capabilities will continue to improve as the workshop continues to experiment and tinker with the model….All of the experiments researchers and practitioners have always wanted to run, starting with the power of a 100+ billion parameter model, are now possible. BLOOM is the seed of a living family of models that we intend to grow, not just a one-and-done model, and we’re ready to support community efforts to expand it.

Large language models (LLMs), especially auto-regressive decoder-only models such as GPT-3 and PaLM, have been shown to perform as well as the average human on many NLP benchmarks. Although some research organizations, such as EleutherAI, have made their trained model weights available, most commercial models are either completely inaccessible to the public, or else gated by an API. This lack of access makes it difficult for researchers to gain insight into the cause of known model performance problem areas, such as toxicity and bias.

The BigScience workshop began in May of 2021, with over 1,000 researchers collaborating to build a large, multilingual deep-learning model. The collaboration included members of two key organizations: Institute for Development and Resources in Intensive Scientific Computing (IDRIS) and Grand Equipement National De Calcul Intensif (GENCI). These provided the workshop with access to the Jean Zay 28 PFLOPS supercomputer. The team created a fork of the Megatron-DeepSpeed codebase to train the model, which used three different dimensions of parallelism to achieve a training throughput of up to 150 TFLOPs. According to NVIDIA, this is “the highest throughput one can achieve with A100 80GB GPUs.” Training the final BLOOM model took 117 days.

Thomas Wolf, co-founder and CSO of HuggingFace, joined a Twitter thread discussing BLOOM and answered several users’ questions. When asked what compute resources were necessary to use the model locally, Wolf replied:

Right now, 8*80GB A100 or 16*40GB A100 [GPUs]. With the “accelerate” library you have offloading though so as long as you have enough RAM or even just disk for 300GB you’re good to go (but slower).

Although BLOOM is currently the largest open multilingual model, other research groups have released similar LLMs. InfoQ recently reported on Meta’s OPT-175B, a 175B parameter AI language model also trained using Megatron-LM. Earlier this year, EleutherAI open-sourced their 20B parameter model GPT-NeoX-20B. InfoQ also reported last year on BigScience’s 11B parameter T0 model.

The BLOOM model files and an online inference API are available on the HuggingFace site. BigScience also released their training code on GitHub.
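
As a rough sketch of the setup Wolf describes, the snippet below loads BLOOM through the transformers and accelerate libraries with automatic device placement and disk offloading; the model id is the one published on the Hugging Face Hub, and the hardware requirements are as Wolf notes above:

# Hedged sketch: load BLOOM with accelerate-style offloading. Requires the
# transformers and accelerate packages plus hundreds of GB of RAM or disk.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom",
    device_map="auto",         # let accelerate place layers on GPU/CPU/disk
    offload_folder="offload",  # spill weights to disk when memory runs out
)

inputs = tokenizer("InfoQ reports that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))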



Article: How to Spark a Consumer-Grade UX Revolution

MMS Founder
MMS Marcelo Wiermann

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • Consumer-grade UX is a game changer for enterprise SaaS.
  • Starting at the team-level means starting today, not someday.
  • Don’t work in a vacuum. Understand the business, the product and what customers and end-users like and don’t like about it today.
  • Turn end-users into advocates. Connect with them early in the process, check in with them regularly and track their satisfaction.
  • Keep iterations short and build momentum as data starts coming in.

Modern enterprise applications need to care about the end-user if they wish to thrive in 2022 and beyond. Many enterprise application companies are now using the same UX concepts once reserved for their consumer cousins – and for a good reason. Companies that implement this concept, commonly called “consumer-grade UX”, have consistently outperformed their more drab competitors in end-user adoption rates, end-user advocacy, workflow efficiency, market share, and revenue.

Why

In the distant past (i.e., early 2000’s), the UX of enterprise applications was not considered a significant adoption or sales driver. Managers looked at feature sheets, compared the cost of different services, carefully weighed the pros and cons, and, finally, made purchasing decisions based on which salesperson responded the fastest. Users would adopt because their boss told them to, and that was that.

But something changed around 2010. Slack launched and, fast forward to 2014, overtook the then-incumbent HipChat by Atlassian (yes, the JIRA folks). Many other companies – Dropbox, Asana, Google, etc. – started making a similar decision to Slack’s: applying the same principles used to provide consumers with a great user experience to their enterprise applications.

The user became the decision-maker. The brilliance of those companies in exchanging the UX equivalent of a gas station for something people like to use was twofold. First, because users are more efficient and effective when using systems that are well designed; second, because end-users become advocates within their companies to adopt solutions that they like, especially if they used them before. These factors, combined, transformed those brands into professional household names. In Slack’s case, it became a verb.

There are two cases where these UX overhaul projects tend to be required. First, established companies who wish to serve their customers better, have new leadership with new ideas or have modern, nimbler alternatives eating their market share. Second, fast-growing startups that have optimized for implementation speed at first to prove their product-market fit, but now have mounting design debt and increasingly less forgiving users as they move past the early adopter category.

How

Implementing consumer-grade UX in an existing product is resource-intensive, time-consuming, and presents considerable risk (“why change a winning team?”). Decision-makers may see such investments as a waste of time since they take time away from developing new features, improving software quality, and fixing bugs.

It is possible to execute a full-scale UX overhaul top-down, with extensive redesigns and long-term plans involving multiple teams. These projects, however, require a lot of organizational will, support from upper management, upfront investment, and other luxuries. We are not going to focus on those.

Instead, let’s focus on something that any team can start today: kickstarting a UX revolution, one iteration at a time. There are four main phases to making that happen:

  • Understand the status quo.
  • Design a workable solution and get buy-in.
  • Execute a pilot test.
  • Iterate and build upon the momentum.

Learn & Understand

The first step is to understand the business, the application, the current limitations, and the end-user before going to the drawing board. You need to know who your customer is, who your end-user is, what you are selling them, their key objectives and critical tasks, how much they are paying you, and what contributes to your cost.

Many good consumer-grade UX overhaul projects never get past the initial concept phase. As tragic as that may be, decision-makers are not necessarily wrong. I have seen many engineers, product managers, and designers develop idealistic proposals that get fast-tracked to the “maybe” pile and, then, into oblivion. The two major problems of such proposals are that they either don’t solve real problems or fail to speak the language of business.

Competent decision-makers are always thinking about serving their customers better – and getting more value from them in the process. If you want to grab their attention and obtain their buy-in, you need to show that you understand the status quo and discuss changes in terms of OKRs, KPIs, ROIs, roadmaps, and more.

Start by learning about the business, the product, the customers, the end-users, and their pain points. One way to do that is by talking with product managers, senior engineers, customer support managers, and, in startups, the founders. Ask questions like: What are you selling? Who is buying? Why are they buying it? What is the price structure? How much does it cost for you to provide those services? Who is the user? What are the major tasks that they need to do?

Another valuable source of information is your data. You want to analyze two types of key performance indicators (KPIs): lagging or output KPIs and leading or input KPIs. The first type tells you if the users got what they needed, and the second tells you if they are having difficulties getting there.

Examples of output KPIs are the Net Promoter Score (NPS), the System Usability Scale (SUS), how many key tasks are completed in a given time, and the cost per key task. Examples of leading KPIs are the time it takes for users to accomplish key tasks (and their error/rejection rate). You need both types of KPIs. If you lack some (or all) of them, start by working with teams to begin implementing tracking for them.
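
As a small illustration of how mechanical an output KPI can be, here is a sketch of the standard NPS calculation (percentage of promoters, scores 9-10, minus percentage of detractors, scores 0-6) over a hypothetical set of survey responses:

# Net Promoter Score from 0-10 survey answers: %promoters - %detractors.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 3, 10, 9]))  # 25.0 for this hypothetical sample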

Pro-tip: put a lot of thought into capturing human KPIs associated with how much users like using the application. Don’t underestimate the power of joy. One of the core differentiating factors in early Slack vs. Hipchat was how easy they made it for users to find and send animated GIFs.

Here’s a real-world example: I led the engineering team at a fast-growing AdTech company in the US that operated a DSP (Demand Side Platform). Our customers were marketing agencies and major brands with in-house ad teams. Our end-users were their ad operators and campaign managers, who used our product to set up digital campaigns, track their progress, make adjustments, and report on their outcomes. Main KPIs were the time to set up campaigns, the freshness of our analytics, time to generate reports, and on-target delivery rate.

The folks driving the frontend part of the project did such a phenomenal job and cared so much about great UX that we won the 2015 UX People’s Choice Award.

Design & Sell

The next step is to design a workable solution and get the buy-in of the necessary decision-makers. Start by using your business knowledge to identify the most significant challenges with the highest potential for impact. Those are usually connected with your lagging and leading KPIs and aligned with the main tasks the end-users have to perform.

Pick the highest-ROI challenge you can deliver and focus on it. The impact defines the return, and the implementation cost defines the investment. It’s essential to pick a meaningful problem so that fixing it matters, and to pick something manageable so that you can deliver. If this first iteration fails, the whole process could fizzle out. Choosing a specific problem to focus on allows you to start now and not after a lengthy planning and review process. Finally, make sure that this is a two-way door decision – i.e., one you can reverse without significant impact to the business in case things go awry.

Once you have chosen a challenge, identify its end-users, their pain points, the KPIs you need to improve and work backward from them to design a minimum lovable solution. The key to moving fast and adapting is the Build-Measure-Learn iterative cycle – build something, see how it works in the real world, learn from the experience, rinse and repeat. At the same time, since you are explicitly addressing UX, go the extra mile between viable and lovable.

At this point, you should have a workable design based on a concrete understanding of the underlying business mechanics and end-user needs. It should also have measurable targets on meaningful KPIs. The final step is to get the buy-in from stakeholders.

To successfully pitch the project to whoever you need to get the buy-in from – a product manager, engineering manager, teammates, etc. – you need to tell a good story. Start with the why: show what problems you are solving and why they matter in the business context and current objectives. Then follow on to the expected impact, what the solution will look like, and, finally, how much time/money/people it will take to make it happen. Many of the same rules of pitching a new business apply here.

Pro-tip: consider the timing of your change. Pitching a UX-centric change at the wrong time – for instance, while the team is dealing with a major technical problem or going through a security audit – can come across as tone-deaf and likely won’t get you the full attention of the people you need. Use your best judgment, but examples of good times to pitch are right after a successful release, during quarterly planning, or while having a coffee with a product manager (never underestimate the power of a 1:1 coffee).

In my previous example, we found that our initial approach to the setup of ads by operators, which was on par with the rest of the industry, was cumbersome, repetitive, and error-prone. Operators have to set up dozens – sometimes hundreds – of ads in a short time, and even minor mistakes at this stage can lead to a lot of time and money wasted. I’m pretty sure an LA restaurant doesn’t want to run their ads in Australia.

We overhauled this process by improving the UX of the forms involved and adding lots of automation. We made some sections optional, implemented a multi-stage process, and added features like restricting the search of cities based on the selected country/state and support for importing bulk ad configurations directly from customers. These changes resulted in a significantly lower time to set up ads and campaigns, fewer setup mistakes, less wasted marketing budget, and happier, less stressed operators.

Pilot Test

You have understood the business, designed an awesome solution, and gotten the buy-in you needed. The next step is to execute it.

Think of the first consumer-grade UX change as a pilot program. Ensure that you have all the necessary tracking in place, especially for the core metrics you are trying to improve. Automate as much tracking as possible using tools like Google Analytics, Hotjar, Kissmetrics, Datadog, and others. However, if automation is not possible, don’t let that stop you – find ways to manually track the numbers you need, even if you need to flex your Excel skills. Don’t run blind.

Pro-tip: get regular feedback from willing end-users to get their qualitative perspective. Numbers alone tell you only part of the story. You need quantitative and qualitative information to understand what’s going on.

If possible, run your pilot as an A/B test using the old version as a baseline. A/B tests ensure that you compare apples to apples and that any improvements – or problems – are coming from the new design.
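
If you want a quick sanity check that an observed lift is not just noise, a standard two-proportion z-test is enough for rate-style KPIs such as task completion; the counts below are hypothetical:

# Compare task-completion rates of the old (A) and new (B) designs with a
# two-proportion z-test; requires only the Python standard library.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)      # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided
    return z, p_value

z, p = two_proportion_z(success_a=120, n_a=400, success_b=150, n_b=400)
print(f"z={z:.2f}, p={p:.4f}")  # z=2.24, p=0.0249: likely a real improvement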

Prepare to iterate fast as data and end-user feedback starts coming in, and don’t let your biases cloud your analysis. Understand what the data is telling you and plan your next changes accordingly. These changes could be minor or significant, depending on your results. Either way, plan according to your time budget, keep stakeholders informed, act fast and keep a changelog of what you are doing and why you are doing it.

In my example, we had not originally planned on introducing bulk ad imports so early on. We had to move that feature up the schedule during the overhaul project once it became clear from operator feedback that improving the input process was great, but it was not enough. Operators had crunch times (e.g., when first setting up large campaigns), and, as it turned out, they already had a semi-standard file format for describing ads in use.

Iterate & Keep Going!

Consolidate your learnings once the pilot is over. Interview customers and end-users to get their holistic perspective on what you implemented. Collate the information from your changelog and understand what worked (and what didn’t), why it worked and why it failed. The full picture will help you determine if this change was successful and going in the right direction.

Independently of whether the pilot worked and you achieved your objectives, present the results, learnings, and conclusions to the team and key stakeholders. This will build credibility and establish a solid shared knowledge foundation to build upon.

Look at the big picture and think about more systemic changes. Your knowledge from the pilot taught you more than just how to do better in the next iteration – it showed you a glimpse of patterns that can scale out throughout the application. Start creating an overarching vision and toolset that can tie everything together and create consistency across your future iterations. This can include a common design language, consistent form elements, use of shortcuts, guidance on when to wait synchronously or run a background job, etc.

The pilot is not the end – it’s the first step in your revolution. Capitalize upon the momentum of this first iteration, factor in what you learned from it, and pick one or more high ROI challenges to tackle next. It will be easier to pitch them this time as there are fewer unknowns. Don’t stop now!

Conclusion

You can start a consumer-grade UX revolution at your company today. You don’t have to make a big deal out of it or let analysis paralysis shackle your team. Kick that off by understanding your business, designing a solution for a high-impact-but-still-deliverable case, pitching and getting the buy-in you need, running a data-driven pilot program, and, finally, using the momentum to build a vision and tackle the next challenge.

We were at a very technically focused moment in our DSP when we ran our first UX overhaul iteration. We had to process 100K+ transactions per second, run complex planning and ML algorithms in milliseconds, and create analytics data pipelines that processed multiple terabytes of data per day—all with a startup budget and a small (but plucky) team. UX was not our focus, and we were content with following the competition in that regard. Our first pilot test changed that for engineers and executives alike.

It was, quite literally, the spark that ignited a revolution.



Microsoft Introduces a New Way to Build Cloud Apps Faster with the Azure Developer CLI

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Recently Microsoft introduced the public preview of the Azure Developer CLI (azd) — a new, open-source tool that accelerates the time it takes to get started on Azure. It provides developer-friendly commands that map to essential stages in the developer workflow: code, build, deploy, monitor, and repeat.

The Azure Developer CLI is designed to set up the resources developers need to run their applications in Azure. According to the Microsoft documentation, the recommended workflow for the Azure Developer CLI is:

  • Template selection
  • Get and deploy workflow
  • Change code, commit and automatically deploy to running apps 

Source: https://docs.microsoft.com/en-us/azure/developer/azure-developer-cli/overview

Developers can use various commands such as azd init, azd provision, azd deploy, azd monitor, and azd pipeline config. Savannah Ostrowski, a senior product manager for Cloud Native Developer Tools & Experience at Microsoft, wrote in a developer blog post:

Better yet, you can also use azd up to create, provision, and deploy a new application in one step! For a list of supported commands, see the Developer CLI reference docs. Alternatively, run azd -h from your preferred terminal after installation. If you no longer want or need the resources you’ve created, you can run azd down.
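To make that workflow concrete, here is a minimal sketch that scripts the commands quoted above from Python, for example as part of a team onboarding script. It assumes azd is installed and on the PATH; the template name is illustrative, taken from Microsoft’s public starter templates.

```python
import subprocess

def azd(*args: str) -> None:
    """Run an azd command and fail loudly if it errors."""
    subprocess.run(["azd", *args], check=True)

# Scaffold a new app from a starter template (template name is illustrative).
azd("init", "--template", "todo-nodejs-mongo")

# Provision Azure resources and deploy the application in one step.
azd("up")

# Later, tear everything down when the resources are no longer needed.
azd("down")
```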

However, Dana Epp, a security engineer and researcher at Vulscan Digital Security, warned in a tweet:

What’s the worst thing for MORE shadow IT for cloud admins to fret about?
It’s sexy. Powerful. And puts potential company resources at risk. 
Friends don’t let friends ‘right-click deploy’. And they shouldn’t allow `azd up` without isolation.

Note that every template comes with the source code, infrastructure code, pipeline files, and configuration needed to run the entire solution on Azure, as well as to run and debug locally in VS Code and Visual Studio. Furthermore, guidance is available through the documentation landing page and a getting-started video.

A respondent in a Reddit thread on the Azure Developer CLI said:

Looks like another wrapper for something that was already solved. Deploying IaC and applications to PaaS is easy enough with CI/CD tasks. I guess the tool is nice if a developer needs to deploy a test cloud infrastructure and application from a local computer. Still going to test out in CI/CD because never know until used.

Currently, the Azure Developer CLI is in public preview and includes support for Container Apps, Functions, Static Web Apps, and App Services in Node, Python, and C#, with AKS and Java support coming soon. Microsoft uses Bicep in the current templates, while support for other IaC providers, like Terraform, is in the works.



Podcast: Principles of Green Software Engineering with Marco Valtas

MMS Founder
MMS Marco Valtas

Article originally posted on InfoQ. Visit InfoQ


Introduction [00:01]

Thomas Betts: Hi, everyone. Before we get to today’s episode with Marco Valtas, I wanted to let you know that Marco will be speaking at our upcoming software development conferences, QCon San Francisco and QCon Plus. Both QCon conferences focus on the people that develop and work with future technologies. You’ll gain practical inspiration from over 60 software leaders deep in the trenches, creating software, scaling architectures, and fine-tuning their technical leadership, to help you adopt the right patterns and practices. Marco will be there speaking about green tech and I’ll be there hosting the modern APIs track.

QCon San Francisco is in-person from October 24th to the 26th, and QCon Plus is online and runs from November 29th through to December 9th. Early bird pricing is currently available for both events, and you can learn more at qconsf.com and qconplus.com. We hope to see you there.

Hello, and welcome to another episode of The InfoQ Podcast. I’m Thomas Betts. And today, I’m joined by Marco Valtas. Marco is the Technical Lead for cleantech and sustainability at ThoughtWorks North America. He’s been with ThoughtWorks for about 12 years, and he’s here today to talk about green software. Marco, welcome to The InfoQ Podcast.

Marco Valtas: Thank you. Thank you for having me.

The Principles of Green Software Engineering [01:07]

Thomas Betts: I want to start off our discussion with the principles of green software engineering. Our listeners can find these at principles.green. There are eight listed. I don’t think we need to go into all of them, but can you give a high-level overview of why the principles were created and discuss some of the major issues they cover?

Marco Valtas: The principles were published around 2019 by the Green Software Foundation. They are very broad on purpose. And the need for the principles is basically that... well, that’s how principles work. Principles help us to make decisions. When you are facing a decision, you can rely on a principle to guide you: what is the trade-off that I am making? That’s basically what we have in the Green Software Principles.

They are generic in a sense. Like, be carbon efficient, be electricity efficient, measure your carbon intensity or be aware of your carbon intensity. I think the challenge they pose to all development is: okay, when I’m making a software development decision, what trade-offs am I making in terms of those principles? Am I making a trade-off of using a certain technology, or doing something a certain way, that will incur more carbon emissions or electricity consumption, and so on and so forth?

Thomas Betts: Who are these principles for? Are they just for engineers writing the code? Are they CTOs and CEOs making big monetary decisions? Operations coming in and saying, “We need to run bigger or smaller servers”?

Marco Valtas: I think they are aimed at folks that are making decisions at the application level. You can think about the operations and the software development itself. But in operations, usually you are going to look at the energy profile, like the data center that you are using, what region you are using. So operations can make big calls about what hardware you’re using.

Those are also in the principles, but the principles are also there to help you make decisions at a very low level. Like, what is the size of the assets that you are using on your website? How much data are you moving through the network, and how often are you doing that? If those principles apply to an operations decision, they will work. If they apply to a development decision, they will work. And the same can be said for a CTO position. If you’re making a decision that will have an impact on any of those dimensions, carbon emissions or electricity consumption, they can help you make it.

Thomas Betts: I know that on our InfoQ Trends Report this year, and I think last year was the first time, we included design for sustainability as one of the architecture trends that people are watching. And it’s something that people have been thinking about for a few years now, but I think you said these came out in 2019.

That’s about the same time, and these principles frame how you think about those decisions, the trade-offs you’re saying architects are always concerned with. It’s like, well, the answer is, it depends. What does it depend on?

Varying carbon impact across data center regions [04:03]

Thomas Betts: Can you go into some of the details? When you talk about a data center choice, what difference does it make which data center you use and where it’s located?

Marco Valtas: For data centers, you can think about the cloud usually, not your own data center. Each data center is located in a geographic region, and that geographic region has a grid that provides electricity to that data center. If that data center is located in a region where way more energy is generated from fossil fuels than from solar or wind, that data center has a larger carbon intensity profile. If you run your workloads in that region, you are usually going to incur more carbon intensity.

But if you run the same workload in another data center, let’s say you change from one cloud region to another whose energy grid has way more solar than fossil, you are using way more renewable energy. So it lowers the intensity of your workload. Considering where your data center is, is one of the factors, especially if you look at how the grids are distributed.

Thomas Betts: Is that information easily accessible? I think there’s a general consensus. I live in the United States. The Eastern regions are very coal oriented, so it’s very fossil-fuel heavy. But if you move to the US Central regions for Azure or AWS, whoever, you get a little bit more renewables. But is that something where I can go onto a website and say, “I’m running this in US-EAST-1, compare it to US-CENTRAL and tell me what the carbon offset is”? That doesn’t seem like a number I see on any of the dashboards I go to.

Marco Valtas: You won’t see it on any of the dashboards. At Thoughtworks, we created the Cloud Carbon Footprint tool, which helps you look at the footprint by region. But the data, unfortunately, is not easily accessible from the cloud providers. We could go off on a tangent about how cloud providers are actually handling this information, but the way we do it in our tool, and most other tools out there, in the case of the United States, is to use the EPA report on US grid regions.

If you go to Europe, there are the European agencies, and for other regions there are other public databases with reports on the carbon intensity per watt-hour in that region. You plug that in, and you make the assumption that that data center is at that carbon intensity level. It gets complicated, or complex, if you consider that maybe the data center is not using that grid’s power. Maybe the data center has solar itself, so it’s offsetting a little bit, using a little bit of renewables.

But that starts to be really hard because you don’t know, and the providers won’t make that information easily accessible. So you go for estimates.
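As a rough editorial illustration of the estimation approach Valtas describes, the sketch below multiplies a workload’s energy use by a published grid-intensity figure for each region. The region names and intensity numbers are placeholders, not real EPA values.

```python
# Approximate grid carbon intensity in gCO2e per kWh (illustrative
# placeholders -- real figures come from sources such as the EPA's data).
GRID_INTENSITY = {
    "us-east": 700.0,
    "us-central": 450.0,
    "eu-north": 50.0,
}

def estimate_emissions(energy_kwh: float, region: str) -> float:
    """Estimated operational emissions in grams of CO2e."""
    return energy_kwh * GRID_INTENSITY[region]

# The same 1,000 kWh workload on three grids, with very different footprints.
for region in GRID_INTENSITY:
    print(f"{region}: {estimate_emissions(1000.0, region) / 1000:.0f} kgCO2e")
```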

Cost has a limited correlation to carbon impact [07:01]

Thomas Betts: For all the data center usage, the number that people are usually familiar with is: what’s my monthly bill? Because that’s the number they get. And there are correlations. Well, if I’m spending more money, I’m probably using more resources. But all of these have a fudge factor built in. It is not a direct correlation.

And now we’re going to an even more indirect correlation: if I’m using more electricity here, and it’s in a region that has a higher carbon footprint, then I have a higher carbon footprint. But I’m at an 87 and I could be at a 43, whatever those numbers would be.

Marco Valtas: Cost is interesting. Cost is a fair proxy if you have nothing more to rely on, especially for cloud resources, because you pay by the resource. If you’re using more space on the hard drives, you’re going to pay more. If you’re using a lot of compute, you are paying more. If you’re using less, you’re going to pay less. It’s the easy assumption that, well, if I cut my costs, I will cut my emissions. But as you said, there’s a correlation, and there’s a limit to that correlation. You actually find that limit quite quickly, just by moving regions.

Recently, I recorded a talk for XConf, which will be published later this month, where I plotted the cost of running some resources on AWS against the carbon intensity of those resources. That correlation of cost and emissions is not a straight line at all. If you move from the United States to Europe, you can cut your emissions considerably if you look just at the intensity of those regions, but you are going to raise your cost. That actually breaks the argument of, oh, if I cut my cost, I cut my emissions. No, it doesn’t work like that.

If you have only cost, sure, use that. But don’t aim for that. Try to get the real carbon emission factor that you have in your application. That’s where cost stands.
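A small editorial sketch of the point: with made-up prices and grid intensities for two hypothetical regions, the cheaper region can easily be the dirtier one, so cutting cost is not the same as cutting carbon.

```python
# Illustrative only: hourly price and grid intensity for two hypothetical
# regions, showing that cost and carbon do not move together.
regions = {
    "us-east": {"usd_per_hour": 0.10, "g_co2e_per_kwh": 700.0},
    "eu-north": {"usd_per_hour": 0.12, "g_co2e_per_kwh": 50.0},
}

kwh_per_hour = 0.3  # assumed draw of the instance

for name, r in regions.items():
    cost = r["usd_per_hour"]
    emissions = kwh_per_hour * r["g_co2e_per_kwh"]
    print(f"{name}: ${cost:.2f}/h, {emissions:.0f} gCO2e/h")

# The pricier region here emits roughly 14x less: cheaper is not greener.
```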

Taking a pragmatic approach to optimizing applications [08:57]

Thomas Betts: How do I go about looking at my code and saying, “Well, I want it to be more performant”? Anyone who’s been around for a while has run into a process that is taking an hour, and if you change it a little bit, you can get it to run in a minute, because it’s just inefficient code. Well, there are those kinds of optimizations, but then there are all of the different scenarios. Like, how are my users using this system?

Am I sending too much data over the wire, and are they making too many requests? How far away are they from my data center? There are just so many factors that go into this. You said it’s a holistic view, but how do you take the holistic view? And then, do you have to look at every data point and say, okay, we can optimize it here, here, and here?

Marco Valtas: This is what makes this such a rich field to be in, and why I really enjoy being part of it. There are so many things that you can think of, so many actions that you can take. But in order to be pragmatic about it, you should treat it as an optimization loop, like the one you have when tuning the performance of your application. You try to find: what is my bottleneck? What is the thing that is most responsible for my emissions overall?

Let’s say that I have a very, very inefficient application that takes too long to answer. It sorts the data four times before delivering it back to the user, and obviously the computing side of things is the worst culprit in my emissions. So let’s tackle that. And then you can drill down over and over again, till you can make calls about things like what kind of data structure I’m using. Am I using a linked list or an array list? What kind of loop am I using? Am I using streams?

Those are decisions that will affect your CPU utilization, which translates to energy utilization that you can profile and make some calls about. But again, it gets complicated. It gets very distributed. The amount of savings varies, so you need to go for the low-hanging fruit first.

But yeah, measuring? There are some proposals from the Green Software Foundation, like the Carbon Aware SDK and the Software Carbon Intensity score, where you can take some variables and do a calculation to try to measure your application as a whole. Like, what is my score?
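For reference, the Software Carbon Intensity specification defines the score as a rate, SCI = ((E * I) + M) per R, where E is energy consumed, I is grid carbon intensity, M is embodied emissions, and R is a functional unit such as requests. A minimal sketch with made-up inputs:

```python
def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: float) -> float:
    """Software Carbon Intensity: ((E * I) + M) per functional unit R."""
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# Made-up numbers: compare the score before and after an optimization.
before = sci(energy_kwh=12.0, intensity_g_per_kwh=450.0,
             embodied_g=800.0, functional_units=1_000_000)
after = sci(energy_kwh=9.0, intensity_g_per_kwh=450.0,
            embodied_g=800.0, functional_units=1_000_000)
print(f"gCO2e per request: {before:.4f} vs {after:.4f}")
```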

The good thing about those scores is not just being able to see a number, but also being able to compare that number against your decisions. If I change something in my application, does it get better or worse relative to the previous state? Am I doing things that are improving my intensity or not? And of course, there are counterintuitive things.

There’s one concept called the static power draw of servers. Imagine that you have a server running at 10% utilization. It consumes energy just to be on. The counterintuitive idea here is that if you run your server at 90% utilization, that won’t be an 80-point increase in energy consumption, because you have a baseline just to keep the server up. Memory also needs to be powered, but it doesn’t draw more power whether it’s busy or not. Sometimes you need to make the decision to use one server more rather than spreading across servers. Those are trade-offs that you need to make.
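A simplified linear power model, with assumed idle and peak wattages, makes the counterintuitive part visible: because of the static draw, energy per unit of work falls as utilization rises.

```python
IDLE_WATTS = 100.0  # assumed static draw just to keep the server on
MAX_WATTS = 200.0   # assumed draw at full utilization

def power_watts(utilization: float) -> float:
    """Simplified linear power model between idle and max draw."""
    return IDLE_WATTS + (MAX_WATTS - IDLE_WATTS) * utilization

for u in (0.1, 0.5, 0.9):
    per_unit = power_watts(u) / u  # watts per unit of useful work
    print(f"{u:.0%} utilization: {power_watts(u):.0f} W total, "
          f"{per_unit:.0f} W per unit of work")

# At 10% utilization each unit of work costs 1100 W; at 90% it costs about
# 211 W, which is why consolidating load onto fewer servers can pay off.
```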

Thomas Betts: That’s one of those ideas that people have about moving from an on-premise data center to the cloud: you used to have to provision a server for your maximum possible capacity. Usually, that’s over-provisioned most of the time, because you’re waiting for the Black Friday event. And then these servers are sitting idle. The cloud gives you those capabilities, but you still have to design for them.

And that goes back to, I think, some of these trade-offs. We need to design our systems so that they scale differently. And assuming that running at 50 to 90% is a good thing, as opposed to, oh my gosh, my server’s under heavy load and that seems bad. You said it’s kind of counterintuitive. How do we get people to start thinking that way about their software designs?

Marco Valtas: That’s true. Moving to the cloud gave us the ability to use just the things we are actually using and not have servers sitting idle. I don’t think we got there, in the sense that there was a blog post calculating around $26 billion wasted on cloud resources in 2021, with servers that are up and not doing anything, or idle resources. We can do better in optimizing our use of the cloud.

The cloud is excellent too, because you can power off and provision as you like, unlike on-prem. Bringing that to the table, and asking how you can design your systems with that in mind, starts with measuring, however complex an application may be. It’s kind of unbounded. The way you design your application to make the best use of carbon resources is going to go through hoops. And you should measure: how much energy, how many resources, are you using?

Some things are given, in the sense that, well, if I use more compute, I’m probably producing more emissions. But then there are more complex decisions. Like, should I use an event-based architecture? How will microservices versus a monolith behave? I don’t have answers for that. At my company, we are researching and trying to run experiments to get more information around, well, how much does a microservice architecture actually impact your carbon emissions?

And then the big trade-off is that for the last, I don’t know, 20 years of software development, we have optimized for being ready to deploy, to minimize uncertainty in our releases, and to be fast in delivering value. That’s what continuous delivery is all about. But then you have to ask the question: how many of those practices, or which practices that you are using, will turn out to be less carbon efficient? I can think of an example.

There are several clients that I work with that have hundreds of CI servers, and they will run hundreds of pipelines because developers will be pushing code throughout the day. The pipeline will run the tests, sometimes the performance tests and everything. And the build will stop right at the gate of being deployed and never be deployed. There’s a trade-off to be made here between readiness and carbon emissions.

Should you run the pipeline on every push of code? Does that make sense for your project, for your company? How can you balance all this readiness and quality that we have developed throughout the years with the reality that our resources are not endless? They are not infinite. I think one of the impressions that we got from the cloud was, yeah, we can do everything. We can run with any amount of CPU, any amount of space.

I couldn’t tell you how many data scientists were happy about going to the cloud, because now they can run machine learning jobs that are huge. But now we have to go back to the question: is this good for the planet? Is this unbounded consumption of resources something that we need to do, or should do?
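One way to act on the CI trade-off Valtas describes is a gate that runs the cheap stages on every push but reserves the expensive suites for builds that will actually be deployed. The sketch below is hypothetical: the stage scripts and the environment variable are assumptions, not any particular CI product’s API.

```python
import os
import subprocess

def run(stage: str) -> None:
    # Hypothetical convention: each pipeline stage is a script under ./ci/.
    subprocess.run([f"./ci/{stage}.sh"], check=True)

# Cheap, fast feedback: worth running on every push.
run("unit-tests")

# Expensive stages (full integration and performance suites) only run when
# this build is a release candidate, e.g. a tagged commit on the main branch.
if os.environ.get("RELEASE_CANDIDATE") == "true":
    run("integration-tests")
    run("performance-tests")
```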

Carbon considerations for AI/ML [16:42]

Thomas Betts: You touched on one thing that I did want to get to, which is, I’ve heard various reports, and you can find different numbers online, of how much machine learning and AI models cost just to generate the model. And it’s the idea that once I generate the model, then I can use it, and using it is fairly efficient. But I think some of the reports are millions of dollars, or the same energy it takes to heat 100 homes for a year, to build GPT-3 and other very complex models. They run and they get calculated and then we use them, but you don’t see how much went into their creation.

Do we just take it for granted that those things have been done and someone’s going to run them, and we’ll all just absorb the costs of their creation? And does it trickle down to the people who say, “Oh, I can just run a new model on my workload”? Like you said, the data scientist who just wants to rerun it and say, “Oh, that wasn’t good. I’m going to run it again tomorrow.” Because it doesn’t make a difference; it runs really quickly because the cloud scales up automatically, uses whatever resources I need, and I don’t have to worry about it.

Marco Valtas: One of the philosophies is that everybody has a part to play in sustainability, and I think that still holds true independently of what you are doing. We could get into the philosophy of technology and ask, are you responsible for the technology that you’re producing, and what are the ethics behind that? I don’t want to dig into that, but we also don’t want to cut out the value that we’re trying to achieve.

Of course, GPT-3 and other models that are useful and important for other purposes might be a case where, well, we’re going to generate this amount of emissions, but then we can leverage that model to be way more efficient in other human endeavors. But can you absolve yourself of the responsibility based on the theoretical value they’re generating? I don’t think so.

I think it’s a judgment call every time when you don’t know your emissions. In time, this is going to turn out to be something that everybody needs to have at least some idea about. Take how we do our recycling today: 10 years ago, we never worried about recycling. Nowadays, we look at the packaging, and even when separating your trash, you go, why did this vendor design this package in a way that is impossible to recycle? Because this part is glued to that one, right?

We have that incorporated into our daily lives. And I think in the future, that will be incorporated into design decisions in software development, too.

Performance measurements and estimates [19:12]

Thomas Betts: I wanted to go back a little bit to when you talked about measurements, because I think this is one of those key things. You said there are scores available, and I hope we can provide some links to those in our show notes. People do performance tests, like you said, either as part of their daily build or on a regular basis, or just: I’m watching this code and I know I can make it better, so I’m going to instrument it. There are obviously lots of different scales.

This is one of those same things, but it’s still not a direct measurement, in that I can’t say my code has saved this much carbon. I can’t change my algorithm here and get back a carbon score the way I can get the number of milliseconds it took to run something. Again, is that a good analogy, that if I can get my code to be more efficient, then I can say, well, I’m doing it because it saves some carbon?

Marco Valtas: If it is something that you are targeting at that point, yes. As for how accurate the measurement is? That’s hard. The way that we set up our software development tooling, it doesn’t take that into consideration, so you’re going to have rough estimates. And you can say, well, I’m optimizing for carbon instead of optimizing for memory or something else. That’s definitely something you can do.

When we use the word performance, though, it has a broad meaning. Sometimes we talk about performance as, well, I want my code to run faster, or I want my code to handle more requests per second because I have this number of users arriving at my endpoint. It does not necessarily mean that if you’re gaining performance in those dimensions, you are also gaining performance in the carbon dimension.

What it boils down to is that carbon intensity will become, at least in the way you can incorporate it nowadays, something like a cross-functional or non-functional requirement. It’s something that you are aware of. You might not always optimize for it, because sometimes it doesn’t make business sense, or your application won’t run if you don’t use certain technologies, but it’s something that you are aware of.

And you are aware, during the lifetime of your software, of how that carbon intensity varies as you make certain changes. There’s a fair argument like, oh well, my carbon intensity is rising, but the number of users I’m handling is increasing too, because I’m a business and I’m growing my market. It’s not a fixed point. It’s something that you keep an eye on, and you try to do the best that you can in that dimension.

Corporate carbon-neutral goals [21:38]

Thomas Betts: And then, this is getting a little bit away from the software, but I know there are a lot of companies that have carbon-neutral goals. And usually that just focuses on our buildings getting electricity from green sources, which got a lot easier when a lot of places closed their offices and sent everybody home, and stopped measuring individuals in their houses. Because I don’t think I have fully green energy at my house for my internet.

But I don’t think a lot of companies, when they talk about their carbon-neutral goals as a company, are looking at their software usage and their cloud data center usage as part of that equation. Or are they? In the same way that companies will buy carbon offsets to say, “Well, we can’t reduce our emissions, so we’re going to do something else, like plant trees, to cancel out our usage,” is there something like that you can do for software?

Marco Valtas: Offsetting is something that you do as a company, not as software. You can definitely buy offsets to offset the software’s emissions, but that’s more of a corporate decision. In terms of software development itself, we just want to look at how the software is performing in terms of intensity. In general, we use the philosophy that reducing is better than offsetting. Offsetting is the last resort you have, for when you hit a wall in trying to reduce.

Two philosophies of green software engineering [22:53]

Thomas Betts: In the last part of the Green Software Principles, there are eight principles and then there are two philosophies. The first is, everyone has a part to play in the climate solution, and the second is that sustainability is enough, all by itself, to justify our work.

Can you talk to both of those points? I think you mentioned the first one already, that everyone has a part to play. Let’s go to the second one then. Why is sustainability enough? And how do we make that a motivation for people working on software?

Marco Valtas: If you are following climate news in general, it might resonate with you that we are in an urgent situation. Climate change is something that needs action, and needs action now. The changes we can make to reduce emissions, so that we limit how much hotter the Earth will get and how that impacts the environment in general, are enough to justify the work. That’s what is behind it. It’s the idea that this is a worthy enough goal to do what we need to do.

Of course, cases can get very complex, especially in large corporations: what is your part? But sustainability is just enough because it’s urgent, in a sense. This is where organizations might sometimes have a conflict, because there’s sustaining the business, making a profit, making the business grow, and then there’s sustainability, which is not always in alignment with those. Sometimes it will cost you more to emit less.

That’s an organizational, a corporate decision; that’s the environmental governance of the corporation itself. What is behind this principle is basically the idea that focusing on sustainability is enough of a goal. We don’t think other goals need to be attached in order to justify this work.

Thomas Betts: Well, it’s definitely given me a lot to think about, this whole conversation. I’m going to go and reread the principles. We’ll post a link in the show notes. I’m sure our listeners will have questions. If you want to join the discussion, I invite you to go to the episode page on infoq.com and leave a comment. Marco, where can people go if they want to know more about you?

Marco Valtas: You can Google my name, Marco Valtas. It’s easy. If you want to talk with me, I’m not on social networks. I gave up on those several years ago. You can reach me at marco.valtas@thoughtworks.com if you really want to ask a question directly.

Thomas Betts: Marco, thank you again for joining me today.

Marco Valtas: Thank you, Thomas.

Thomas Betts: Listeners, thank you for listening and subscribing to the show. I hope you’ll join us again soon for another episode of The InfoQ Podcast.



AWS Announced Synthetic Data Generation for SageMaker Ground Truth

MMS Founder
MMS Daniel Dominguez

Article originally posted on InfoQ. Visit InfoQ

AWS announced that users can now create labeled synthetic data with Amazon SageMaker Ground Truth. SageMaker Ground Truth is a data labeling service that makes it simple to label data and gives you the choice of using human annotators through third-party suppliers, Amazon Mechanical Turk, or your own private workforce. Alternatively, you can produce labeled synthetic data without actively gathering or labeling real-world data: SageMaker Ground Truth can generate millions of automatically labeled synthetic images on your behalf.

The process of creating machine-learning models is iterative: it begins with data gathering and preparation, then moves on to model training and model deployment. The initial stage, collecting extensive, varied, and precisely labeled datasets for model training, is frequently difficult and time-consuming.

Combining your real-world data with synthetic data is helpful for building more comprehensive training datasets for your machine-learning models.

Synthetic data itself is created by simple rules, statistical models, computer simulations, or other techniques. This makes it possible to generate vast amounts of synthetic data with extremely precise labels, spanning annotations over tens of thousands of images. Label accuracy can be set at a very fine granularity, such as the pixel or sub-object level, and across modalities; bounding boxes, polygons, depth, and segments are some examples.

Synthetic data is a powerful solution to two different problems: data limitations and privacy risks. When there is a lack of labeled data, training data can be supplemented with synthetic data to reduce overfitting. For privacy protection, data curators can provide generated data rather than actual data in a way that safeguards users’ privacy while preserving the original data’s usefulness.

By adding the data diversity that real-world data may lack, combining your real-world data with synthetic data lets you produce more complete and balanced datasets.
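As a toy illustration of the supplementation idea (not the Ground Truth API itself), the sketch below pads an under-represented class in a small real dataset with synthetic samples. All file names and the generator are hypothetical stand-ins.

```python
import random

# A tiny, imbalanced real dataset of (file, label) pairs (hypothetical names).
real = [("img_001.png", "cat"), ("img_002.png", "cat"), ("img_003.png", "dog")]

def synthesize(label: str, n: int) -> list[tuple[str, str]]:
    """Stand-in for a synthetic-data generator that emits labeled images."""
    return [(f"synthetic_{label}_{i:04d}.png", label) for i in range(n)]

# Count samples per label, then pad each class up to the largest class size.
counts: dict[str, int] = {}
for _, label in real:
    counts[label] = counts.get(label, 0) + 1
target = max(counts.values())

training_set = list(real)
for label, count in counts.items():
    training_set.extend(synthesize(label, target - count))

random.shuffle(training_set)
print(training_set)  # a balanced mix of real and synthetic samples
```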

With SageMaker Ground Truth, you are free to design any imaging scenario with synthetic data, including edge cases that could be challenging to identify and replicate in real-world data. Variations can be added to objects and surroundings to reflect changing lighting, colors, textures, poses, or backgrounds.

In other words, you can order synthetic data tailored to the precise use case for which your machine-learning model is being trained. Amazon SageMaker Ground Truth synthetic data is available in US East (N. Virginia) and is priced on a per-label basis.
