Month: January 2023
MMS • Melissa Daley, Bob Crews, Adam Sandman
Article originally posted on InfoQ.
Transcript
Hey, folks. QCon London is just around the corner. We’ll be back in person in London from March 27 to 29. Join senior software leaders at early adopter companies as they share how they’ve implemented emerging trends and best practices. You’ll learn from their experiences, practical techniques and pitfalls to avoid so you get assurance you’re adopting the right patterns and practices. Learn more at qconlondon.com. We hope to see you there.
This is Shane Hastie for the InfoQ Engineering Culture podcast. Today, I’m sitting down with a group of people from East Coast USA. We’ve got Melissa Daley, Adam Sandman, and Bob Crews who met at the state of testing conference, the InflectraCON state of testing conference. Am I correct with that, Adam?
Adam Sandman: Yeah. Well, it was the InflectraCON Agile, DevOps, Testing, and Quality event, so yes.
Introductions [01:00]
Shane Hastie: All right. And we want to explore the state of testing, what’s happening with quality in software development today. But before we get into that, probably useful idea is get to know each other a little bit. Can I start with you, Melissa? Who’s Melissa?
Melissa Daley: I’m Melissa Daley. I’m the founder and owner of Orca Intelligence. We are a product analysis and design organization. My background comes from over 20 years in IT, primarily in software engineering. I started out, as we were talking about before, as a tester, testing out systems, then went into development, then into management and all that other great stuff. Mostly around business analysis before management. But a lot of my career has been around business analysis, requirements engineering, and solutions architecture.
Shane Hastie: Welcome. Thank you. And Bob?
Bob Crews: Hi, Shane. Thank you. I’ve been in IT for about 35 years now. I started off as a COBOL developer and a DBA, and was into programming. I absolutely loved it. And then about 10 years later, I was on an FBI project and I fell in love, absolutely fell in love with test automation, quality assurance, software testing. And then almost 20 years ago, I co-founded and started a company, Check Point Technologies, where our primary focus is functional, performance, and application security testing, whether it’s manual testing, requirements gathering, test automation, you name it. But that is our passion. That’s what we love to do. And I am CEO and co-founder of Check Point Technologies.
Shane Hastie: Thank you very much. And Adam.
Adam Sandman: Oh, great. Well, thanks for having me on the show, Shane. And my background is interesting. I started programming at the age of 10, writing 8-bit video games using computers like the Acorn, which preceded the ARM processors which we all use today. And I also wrote other kinds of software as a child. In adulthood, I ended up in software. No surprise. I worked for an IT consulting firm called Sapient in Boston, LA, and the Washington, DC area. And it was during that time that actually, I shouldn’t say this on the podcast, but I really found that testing was horrible. And I was the project manager. And what I found was developers had all these really great gizmos and great tools and amazing products. They could use compilers, IDEs, debuggers, all this really cool kit. And they gave testers spreadsheets and whiteboards and Word documents and pieces of paper.
And so when I left Sapient, I founded Inflectra, the company I lead today and have been leading for 16 years. And the original mission was to provide great tools for testers, for both automated and manual testing. And then we found that if we don’t have developers on the same platform, there’s no communication. So we actually moved into the development space, providing project management and development and testing tools. And our whole mission as a company is to bring harmony to the software lifecycle by providing functionality and tools that enable developers, testers, and managers to all work together. That way, you don’t have that problem I had 20 years ago, where some people have amazing tools and can work really efficiently, while other people are left out in the cold.
Shane Hastie: And I heard from Melissa a strong background in analysis, and I almost want to say: there’s something to the left of those tools, Adam, isn’t there?
Quality is more than just testing [04:09]
Adam Sandman: Idea management and then trying to come up with the… well, ideation is the popular word now. Melissa’s company actually works in that space. And let’s talk more about this. This is one of the things that we find, and you mentioned it earlier on: quality is a bigger topic, and quality is more than just testing. And we found that at our event. And one of the challenges when you think about quality is: what are the requirements? People don’t know the requirements. And in many cases, if you’re building a system, it has to work in a context: government, whether state, local, or federal, or requirements that are set by law, like the FDA. And people don’t think of those things until they go live. So Melissa, can you tell us a bit more about some of the things that you’ve done in those areas, because they’re very, very smart?
Melissa Daley: Yes. And so I’m very excited about this conversation, because quality does start before you start testing. You always have to build a good requirements framework in order to get you to a great testing experience. So we created a software product called Swiftly that allows you to automate requirements generation. The intent is that you put information into the tool and it automatically generates your requirements for you. So then, guess what? That matches up with your testing. Basically, if you think of requirements, a requirement is an atomic view of a scenario. And then you need to permute that information in order to test it, because now you’re testing all the different scenarios in the different routes.
And so we found over the years, similar to what Adam has been saying, that going through the software development lifecycle, that experience was all about heavy documentation and heavy Excel spreadsheets and so forth. And then these complex systems that will allow you to put all this information in there, and it seemed like a lot of people had a learning curve around that. So having experience in those systems, I decided that we needed a simpler tool that would allow you to automate this.
And one of the things I was actually talking about earlier today with some colleagues was that there’s going to be a future where you can converge and diverge information. And that information is always going to be generated for software development because we’re always trying to basically create something on the fly, put it out there in the test environment, and then fail quickly. That’s what we hear in the agile environment. But if we don’t have the documentation just as quickly, going through that process just as rapidly as the development cycle, then we’re going to lose that quality as it gets down to the testing cycle. We’re going to always say we have bugs or we have low-quality testing results when we don’t have the right requirements and the most up-to-date requirements to provide you with that. That’s how we’re starting to solve the problem, so I would love to hear what everyone else has to share too.
Shane Hastie: It sounds to me very much like the behaviour-driven development, shift-left stuff.
Drawing on Behaviour-Driven Development [06:55]
Melissa Daley: Correct. Exactly, exactly. Behavior-driven development, which has always been around. It’s been around. It’s just now we’re evolving into something a little bit more unique.
Shane Hastie: Bob, your thoughts?
Bob Crews: The solutions that we’re all talking about and that Melissa’s talking about, it’s great because it helps organizations, as we’ve already mentioned, start earlier, but it also helps them look at the whole effort of testing more holistically, so that they’re not just targeting one small module. And that helps eliminate the gap between those who deliver the application or the system and those who end up using it. Because a lot of times, that gap gets ignored. I’m sure Melissa and Adam and I have all been in situations where the developers, the architects, and the business analysts designed everything as it was supposed to be designed. They developed it as it was supposed to be developed, tested it, got it to the point where there were perhaps just 1% defects and issues. And we deliver it to the end user and they say, “No, this isn’t what I wanted.”
So the solutions of Inflectra and Orca Intelligence, they help bridge that gap. And it also helps make testing and testers so much more efficient. And by being more efficient, they become more effective, which is key. Because one thing our solutions won’t do is make testers great. The testers have to know what they’re doing to begin with. They have to have a passion for quality. They have to understand what they’re doing. But the solutions that we’re discussing can absolutely make them more efficient and thus more effective, Shane.
Adam Sandman: Shane, your podcast is all about culture. And I think Bob, you hit the nail on the head. You need the culture of quality. If you haven’t got a culture of quality, you can put any tool you want, any magic silver bullet, and it won’t make any difference. You need a culture of quality to permeate the organization.
Agile is an interesting beast, because Agile has made a lot of things better but has made some things, in my mind, worse. Agile was trying to reduce the conceptual risk. Because when we talk about risks in software development (conceptual risk, schedule risk, and technical risk), Agile lets you do concepts earlier. You see it faster. You can do spike solutions, and with sprints, you should reduce schedule risk, which are all great things. However, you’ve also got to the point where you’ve made requirements into user stories. They’re very small. They’re very brittle. You can have a bucket of user stories with no context. You lose the big picture, the holistic view that you were talking about, Bob. That’s what I think is being lost in the shuffle in some ways, if you think about it.
Bob Crews: Absolutely.
Shane Hastie: What is a culture of quality?
Having a Culture of Quality [09:28]
Bob Crews: A culture of quality, in my mind, has got to start with a passion and with looking at the big picture. It reminds me of the story about the two stonemasons building Notre Dame. Somebody comes along and goes to the first one and says, “What are you doing?” And he says, “I’m laying stone.” And they go to the second one and say, “What are you doing?” And he says, “Well, I’m building a church.” And that’s how everybody should look at it. Testers should have that culture. It starts from the top down with leadership, and you’ve got to help make your team, your entire team, exceptionally proud of what they’re doing and what they’re creating, and truly understand that quality plays a key role in all of that.
Melissa Daley: The quality culture I think is exactly what you’re saying, Bob. You have a team that’s passionate about wanting to build good code for the customer. They want to build good development or good software for their end user. And with that, because of that passion, they’re going to look and pay attention to the details and build a framework that’s going to allow them to move through a software development life cycle that’s going to have quality throughout every step. Your project management life cycle should have quality. Your schedule should have some quality around there. And then your requirements of course, no one really thinks about the requirements framework. Everyone thinks about a development framework and they also think about a testing framework. But having a framework for the entire life cycle of your software development really increases the quality for the entire system itself. So I think that’s important.
Adam Sandman: And one last thing I’d like to add about the culture of quality is around the leadership itself. I was thinking back to InflectraCON, our event, where we had some people talking about these topics. And one of the things that people said is that in some companies, when you find a bug, the culture is about which developer coded the bug, whose fault it is. You have a blame or fault culture. The second kind of culture is one where you find the bug and instead you say, “Let’s find the root cause. Let’s dig down together as a team and find out why this bug got in there. Not to blame the person, but to fix the process to prevent it happening again.” A preventative culture.
Leaders Must Model Blameless Behaviour [11:35]
And the leadership of the team will determine which one you get. If they’re going to blame people, then everyone’s going to put their heads down and no one’s going to be a whistleblower and say, “There’s something that smells bad over there,” because they’re going to get blamed for it. If you’ve got a culture from the top down that’s like, “I want to hear everything that’s good and bad. No one gets blamed. Let’s just put it on the table,” then I think you’re going to get a very different outcome.
Bob Crews: At organizations where I’ve been, I’ve tried to get everybody to actually get excited, whether you’re a developer, a tester, whomever, when a defect is uncovered. Because you uncover the defect, you resolve it. And you know what? You are that much closer to perfect software, and that’s what it’s all about. It’s the same excitement I feel when I make a mistake for the first time. It’s like, “All right, I learned something. I won’t make that mistake again.” Well, I always do, and then I get frustrated. But that’s how it should look: finding a defect should bring that excitement.
Melissa Daley: Yeah. And I also think, to piggyback on that, the defect is not only in the software. I think we concentrate so much on defects being in the software. There are defects throughout the process. And being able to identify, like you said, if you made a mistake or you went through a learning experience, you have your aha moments, and those aha moments will come up throughout. Maybe it’s in unit testing, maybe it’s in your design; it was a defect in the design. There was a defect in the requirements. Even in the planning, and I don’t think people concentrate on that. There could be defects in the planning, hence why we have retrospectives. If you’re doing regular Agile, you want to figure out what those defects are in your planning process, and it then trickles all the way down throughout the entire life cycle.
Bob Crews: Right. That’s a great point, Melissa.
Adam Sandman: That is absolutely right. One of the things also around that is the communication when you’re a project manager. We don’t think about the quality of the communication, especially with remote working, when we’re on video calls and Zoom. That’s not bad, but it’s different. You lose and you gain. And also, we find you’re working with people in offshore development, different countries, different cultures. Our company has employees on almost every continent and we have meetings together. And there are times when what I explained to you is not what you heard. And imagine that I was describing a customer feature to you. So a support agent talks to someone, gets an idea for a feature. Maybe it’s an amazing feature. That can get lost in the cultural transition from a support person to the analyst to a developer, maybe with the different natural languages they speak, different cultures. And in that translation, the quality of communication is not perfect. But we assume that we’ve heard it correctly, and we write it down and write a story and build it. So quality of communication is really important.
Shane Hastie: Let’s dig into that remote work, because most organizations today are working remotely. The pandemic has changed this, and I personally feel that it’s a pretty permanent change. We’re seeing organizations able to take advantage of having people in many, many different places. And for those of us who are able to now work remotely, it means I don’t have to spend an hour and a half in traffic and all of that. So there are a lot of good things coming out of this remote work. There are also some challenges, but what’s the impact been on quality and testing?
Challenges in Remote Work [14:38]
Bob Crews: The big challenge that I found has to do with the processes that were in place relative to how communications were conducted. There was so much before where it was face-to-face. It was in meetings. It was easy to quickly whiteboard something. You could see somebody’s facial expressions, body language, know whether they were engaged or ignoring you. So now with everybody remote, somebody might have a valid reason for not being able to use their webcam or something, but you don’t have that personal connection, which adds a new, I don’t want to say challenge, but a new paradigm to communication. And communication is about verbally saying it, listening, making sure that all parties understand. I believe that’s become trickier now, and we’re all getting used to that, and we are learning solutions like Zoom and some of the other online meeting applications. So it’s interesting. It’s exciting. I’m certainly not worried about it, but it’s a process.
Adam Sandman: We’re lucky. In our office, we actually have both remote and in-person. We have people, obviously, in Argentina and other countries; they physically can’t get here. We do have them come once a year though, so we do make room for team building. We have a policy of trying to get everyone in the entire company together at least for a week, once a year. And around InflectraCON, we did it so we could co-locate it with the conference.
But what we found is that it’s interesting. There are some activities that now work better and some activities that work worse, and we’ve adjusted for those. One of the things we found very useful is that when we’re doing some infrastructure testing, where we’re using Amazon Web Services and we’re working in consoles, it’s actually much better remote. Because I can screen share with someone and we can literally do shadowing. Whereas in the old days, if I was sitting over someone’s shoulder for half an hour watching them do something, I mean God, I’d be coughing over them. It was unpleasant. I couldn’t sit for half an hour over someone’s shoulder, pair programming or pair infrastructure. But pair programming and pair infrastructure activities, where you’re teaching someone who’s an apprentice, are so comfortable. I can be doing all the work. They sit there watching, asking questions, and we can do it for an hour on end and there’s no discomfort. That’s where communication is actually much better.
I think definitely the brainstorming and conception, that’s the stuff I think is still the weakest on virtual. Where we can, we’ll bring people into a physical room, or we’ll do multiple rooms linked together by a screen so at least we have some person-to-person. I have found those activities work best in person. But definitely a lot of things can be better remote. Even when we’re actually in the office together, we’ll do screen sharing even though we’re in the same physical building, which proves the point.
Melissa Daley: Yeah. And I’ll say that it didn’t change drastically for us, because we were always remote, pretty much, unless we had to be on a client site. But I think quality had to improve. The thoroughness in communication did have to increase. Listening had to increase, because sometimes you get into an environment where no one wants to turn on their camera, so you have to be a little bit more intentional in listening. Those are the things where I think, for us, it was more about improving the quality through listening more, not really shifting the work environment too much, because, being in IT, we were always just using all of the different tools. But definitely improving the listening skills so that we can make sure that we’re getting what we needed out of everyone.
Adam Sandman: Melissa, how do you deal with what I call the curse of Alt+Tab, at least on Windows, where people are in a meeting, and I’m as guilty as anyone, Alt+Tab-ing, checking email?
Melissa Daley: Yeah, ugh. The reason why I cringe around that is because I’m one of those. I’m a multitasker. I’m like, “Okay, so I can hear what’s going on…” But I think I’ve gotten really good at it because I can always jump in and out of conversations. I’ve been like that since I was a child. My family makes fun of me because of that. I can be in two conversations at the same time. Call it a very weird skill, but sometimes it’s not good as I get older, because I end up missing certain things. But you’re right. Now, some of the technology will tell you when someone is not as attentive. It’s a very tricky situation, because you want people to be able to stay engaged, but sometimes there’s that lull in the conversation where you’re like, “I know what they’re about to say, so let me just go and check my email.”
Adam Sandman: “Well, I’ve heard it before. Oh, I bet I’ve heard that 10 times already from this person. They’re a blowhard.” That’s just… yeah.
Shane Hastie: What is happening in terms of the people in the industry, where we’re still towards the tail end, maybe, of this great resignation, great reassessment, whatever we want to call it. What’s happening in terms of the workforce in testing today?
Attracting People from Diverse Backgrounds [19:23]
Bob Crews: One of the things I find is that because people are so focused on working remotely, they’ve now started focusing on exactly what they want to do within quality and software testing. I’m finding that when candidates come to me for a position, they’re able to communicate much more precisely, and they’re more adamant than ever before about exactly what it is they want to do, the technologies they want to work on, what kind of specialty they want to develop. And if I can’t offer them that, then they look elsewhere.
So whether or not that’s a byproduct of the pandemic and all of that working remotely, I’m not sure. Or it could be a byproduct of, I believe that over the years, quality and software testing has started to get a lot more respect. I’m starting to see more universities offer software testing courses. The University of South Florida just 20 minutes from where I am offers one, and I speak there at least once a year. That’s the big thing I see, more of a focus on specialization in quality and software testing.
Adam Sandman: That must make it really hard with staffing, when you have different projects and different clients and you can’t control what they’re using.
Bob Crews: Exactly right. Very difficult, very difficult.
Melissa Daley: Yeah. I would have to agree with that. I think people during interviews are being very intentional in what they want to do, very intentional. They come with a meaning and a purpose, and it’s challenging because, like you said, Adam, if you can’t meet that, what can you do? Because the clients will have their own environments, or they may have their own selection of tools that they want to use, and you have no control over that. One of the things that we’ve done is really make our own internal work culture as conducive as possible to still attract those prime candidates.
Adam Sandman: That’s a really good point. I completely agree with Melissa there. Culture is a huge selling point. Obviously, we are a tool vendor, so we only have one project. We have the same system we’ve had for 15 years. Now obviously, we evolve with technologies; we’re constantly refactoring it, bringing in React and new technologies, and the testing is improving and changing. But ultimately, it is one system. And if you’re with us for 10 or 15 years, and many people, not just myself, have been there that long, you’ve been working on one system, maybe two systems. We’ve got a few products, but not hundreds, for their entire careers. So you have to stimulate innovation in different ways. You have to be able to make them feel this is a great place to work in other respects: the culture, flexibility, benefits, just a place they’d love to come to work, so it doesn’t feel like work.
The other thing we’re seeing around recruitment is that we are looking for non-traditional backgrounds. We find there’s a shortage in the industry of developers and testers, and anyone who can code is going to become either an automation engineer or a developer, and it’s very hard to get them and retain them. And so what we are doing is reaching out to people who’ve never done development before. We actually have a program called Second Acts where we look for people who either never went to college and have studied computer science on their own, to become a tester or a developer, and bring them in, or people who maybe were in testing or development 10 years ago, took time off to do family things, and have come back. And those have been very successful in effectively expanding the pool of people who can work for us.
We’re now taking it to the next level and actually working with some non-profits to try and do this in a more systemic way. And there are a couple of organizations we’re working with in different countries where they actually bring people in to use our tools, and we help teach them how to become a software tester. And these are people who are the road diggers, minimum wage workers in the informal economy in some cases, and we bring them into the formal economy. And I think we all have a duty to do that if we want to stay competitive and have people to hire, because everyone’s getting into software. Every industry’s becoming a software industry, and we’re competing with those people for our talent.
Shane Hastie: Where is testing and quality going? What is the crystal ball? We’ve heard and there’ve been some things I’ve seen certainly around integrating AI, for instance. What’s happening there? And what do you see as the trends in the future?
The Future of Testing [23:17]
Bob Crews: You just mentioned AI. Absolutely, Shane. It is absolutely going in that direction: machine learning, AI, and everything. And that’s key. That’s key. The other thing I see happening, a good thing from a process point of view, which is a subject I love, is more risk analysis. Because application systems, just because of the market, have to be delivered faster than ever before. That’s not going to change. If anything, our development time and deployment time are going to become shorter. And I do not believe we will ever be able to increase the speed of testing to match the speed of possible delivery. So what we’ve got to be able to do is make sure that we perform risk analysis, and at the very least, target that. Because we’re seeing in the news every day, whether it’s a security risk, a functional problem, or a website going down because of performance, these can cause irreparable harm to an organization: financially, life or death in many situations, or negative publicity.
So I love seeing risk analysis become part of the process, and I love seeing artificial intelligence growing to become part of the technology that we’re going to be able to use. And when it comes to artificial intelligence, Melissa knows more about that than I do.
Melissa Daley: Yeah. I agree with you. One part that I disagree with you on a little bit, just a little bit, is that testing won’t catch up with being able to deliver fast. I think that’s what it was.
Adam Sandman: Keeping up with deployment speed. Can you test as fast?
Melissa Daley: Yeah, keeping up with deployment speed.
Adam Sandman: Going in, yeah.
Melissa Daley: Yeah. I think machine learning and AI will get us there. Again, I was just having this conversation with colleagues, where it goes back to being able to generate all that information fast enough and regenerate it for anything new, but doing it literally at the speed of light. If you’re using quantum, and of course now I’m getting really deep, if you have quantum databases and so forth and you have all this data, you’re able to generate real documentation at any time, pulling it up, pushing it back down, putting it back up, pushing it back in, just like you would at deployment.
I actually think the more we get to the micro level of quality, because I think we’re at certain levels of quality. Before, we were at the very, very high level of quality, when we were using just basic documentation, all written down and so forth, and then we got into our spreadsheets. And if you get to the micro levels of quality that machine learning will get you to, we’ll be able to test just as fast as deployment. That is at least my goal.
Adam Sandman: I guess with quantum computers, they’re designed for such parallel activities that, in theory, could you traverse every single edge node, every edge case, simultaneously? Which is scary.
Melissa Daley: Exactly. And keep it going. You just keep it all parallel, keep them all going.
Adam Sandman: Except quantum computers will break every encryption. One of our colleagues went to DEF CON and came back and scared the bejesus out of us because when quantum computers are available, they’re going to break every encryption we have overnight. HTTPS, every encrypted credential, everything’s going to be broken. And it will happen so fast that we won’t have time to react.
So actually thinking about the future, there’s lots of different things I think that will happen around risks. One thing that’s very interesting about risks is can we use AI and machine learning to actually deal with risk management and risk analysis? Because a computer model could be used to model things like weather patterns, large data sets that are uncorrelated, and it will find in there risks that we haven’t anticipated. It might find a risk in a new computer system. You’re deploying this computer system into a particular target user group which didn’t match the data set it was designed for, you wouldn’t have known that. But because we’ve done all this data analysis, we can actually tell you that the demographic is different. Maybe it’s got a large number of colorblind people using red, green. That’s just a very simple example. But AI machine learning could potentially do some of the risk analysis or risk assessment piece from these large data sets.
The converse, I think, is that with machine learning, because we’re using algorithms that we haven’t designed, there may be risks we don’t even know about. It was modeled on this dataset; we’re applying it to this other dataset. What’s the risk of that? So it adds risk and reduces risk, maybe in equal measure.
Melissa Daley: This conversation is so great. It’s really exciting me to get this detailed about where we can go in the future. I just get delighted.
How do you Test AI Systems? [27:35]
Bob Crews: I do too. And I love artificial intelligence. And one of the things, Melissa, that I’m always thinking about is, all right, well when it comes to artificial intelligence, how are we going to test that?
Adam Sandman: How do we check it?
Bob Crews: How do you check it? Because, imagine Albert Einstein: if he started as a small boy, could I teach him first, second, third, fourth grade mathematics? Certainly. But then at a certain point, I can’t teach him anymore, let alone test him to determine if he knows what in the world he’s talking about. So then I see that challenge in having to test AI, right?
Melissa Daley: Exactly, yes.
Adam Sandman: I love that thought. Again, like separate machines that test each other.
Melissa Daley: Yup.
Adam Sandman: Like the space where we all check each other.
Bob Crews: And the trust that we’re going to have to have in that AI, that’s going to be a leap for us.
Adam Sandman: What if the AI gangs up on us? They’ll all be in cahoots. They’ll be like in the playground.
Melissa Daley: Exactly, exactly.
Adam Sandman: They’ll be like, “No, no, it’s good. Don’t worry. We’ve all checked it.” And the human’s like, “Are you sure?” The machines are like, “Oh yeah, we checked it.”
Bob Crews: Yeah, that’s right. Movies are made of that.
Melissa Daley: Correct.
Shane Hastie: What does this mean if I’m somebody who’s thinking about testing as a career? What do I need to learn?
What to Learn for a Career in Testing? [28:46]
Melissa Daley: The basics.
Bob Crews: Yes, yes. The basics.
Melissa Daley: Right?
Bob Crews: I was going to say-
Melissa Daley: The basics.
Bob Crews: … absolutely, start with the basics.
Melissa Daley: Yeah. Understand that you’re looking at the whole system, but you’re looking at the different modules. Of course, I’m going to break it down based on what we do. So you look at the high-level features, the epics, you look at the actual features, and then you look at the different details, which are going to be translated into user stories. By breaking it down and decomposing your testing that way, and understanding decomposition, you’re able to do a better test. But of course, I’m always going to go back to the requirements. You have to get the right requirements funneled down so you can have a better test.
So knowing the basics and knowing what the foundation of quality means; quality doesn’t just mean the function works, but also that the design works for the end user, that the infrastructure works for the end user, all of those. There are a couple of things that you have to have for quality: making sure the system is interoperable, that it interacts correctly, that it’s functioning from the end user’s point of view based on the scenario, and that the design is accessible. So all of those things, understanding what that means for your role in quality.
Bob Crews: I love, Melissa, that you said the basics, because I was down in Mexico City teaching a course on software testing, and it was so refreshing because out of the nine students I had, seven of them were between the ages of 22 and 26. And they were problem solvers. They were abstract thinkers. They were excellent. So they had the very foundational aptitude that I want in a tester. And then we were talking about things like equivalence class partitioning, all that good stuff, so that they could better understand how to be a very efficient and thus effective tester. Because I’m a firm believer that organizations tend to test not too little but too much. And what I mean by that is there is a lot of redundant testing and things like that. So if people learn the foundations of being a solid tester, then learning the tools that you’re going to be introduced to, that’ll be the easy part. But you’ll already be a good tester.
Adam Sandman: I think it’s a great time to be a tester. 20 years ago, if you came in, they would say, “Well, you can’t code very well. We’ll make you a tester.” That was the mindset 20 years ago. And so if I was a tester 20 years ago, it was not a great career path, because you always felt secondary and you were basically brought in to do what they call monkey testing, where you’re basically just following these scripts and typing away and not having an intellectual experience. I think now, if you were starting on a career path as a tester, I would say watch some Steve Jobs videos. Do social sciences. Think of user behavior. Your job is to be the user advocate. You’re going to have to put yourself in the shoes of these users and try and figure out how they’re going to use the system.
So it’s an amazing role to be in that and then take that experience and be able to translate that into, “How do I test something effectively? What are we missing? What risks have we not thought of?” It’s a great intellectual exercise. It’s a great questioning role. But I think 20 years ago, it wouldn’t have been. And I think as you said, Bob, testers are demanding that they want to work on certain technologies. I think they’re going to only want to work for companies that recognize that. And if you are going to put a tester in a role where they’re basically doing robotic tasks, they’re going to quit. And so I think as a tester, you have a lot more autonomy on your career path than ever before.
Shane Hastie: Some really, really interesting conversations here, folks. Thank you so much for taking the time to talk to us today. If people want to continue the conversation, where do they find you?
Melissa Daley: You can find me on LinkedIn. I’m on it daily. Or you can find us at Orca Intel on Twitter, Facebook, and Instagram, and also on LinkedIn as well. Primarily we’re always on LinkedIn, so that is the best way. Or at our website, www.orcaintelligence.com.
Bob Crews: Everybody can find me. I am on LinkedIn and the name again is Bob Crews. That’s C-R-E-W-S. So not like Tom Cruise, but Bob Crews, C-R-E-W-S. I’m on LinkedIn quite a bit. You can always also go to our website at Check Point Tech with one T, checkpointech.com, and email me at bcrews@checkpointech.com. But if any of our listeners would like to continue this conversation, I love this stuff so please reach out.
Adam Sandman: Same places, really. LinkedIn, adam.sandman, Sandman like the current Netflix series, or several movie characters, or the song by Metallica. I’m on Twitter somewhat, not as much as I used to be. I think I’m doing more LinkedIn these days. Or you can go to inflectra.com. That’s the company. And we’re also on Twitter, Instagram, Facebook, and LinkedIn. And also, if you’re interested in coming in person, if you want a trip to Washington, DC, this is where we all met in May. Next year in April, we’re going to be there. I think it’s the last week of April, in the Washington, DC area. Come to InflectraCON. We’d love to see you there. We have lots of discounted tickets and there’s an early bird right now. And then we can have a conversation there in person. I think Bob and Melissa are both going to be there. Shane, if you want to come, you are hereby welcome. You’re hereby invited.
Shane Hastie: That’d be great.
Adam Sandman: I’m going to be in Australia in October, if anyone’s listening from Australia. And I think some of us are going to be in California at STARWEST and at other events in the software testing world. So at any of those events in the next few months, I think quite a few of us will be there in person as well.
Shane Hastie: Thank you so much.
MMS • A N M Bazlur Rahman
Article originally posted on InfoQ.
Just one week after the release of version 3.7.5, the Micronaut Foundation released Micronaut 3.8.0. This new version brings several exciting features, including support for GraalVM 22.3.0, the ability to use @RequestBean annotations with Java records, and a new command that guides users in creating a Micronaut AWS Lambda project. This release also includes updates to Micronaut Data, Micronaut Security, the Micronaut CLI, and Micronaut Launch, as well as improvements to the Micronaut Maven Plugin.
This release introduces the ability to use @RequestBean annotations with Java records. Before this release, the only way to bind values to the HttpRequest, @PathVariable, @QueryValue or @Header fields was to use a POJO as a controller method parameter and mark it with @RequestBean. Separately, a new CorsFilter class has been added that sends a 403 status code to origins other than localhost when the app runs on localhost. This is intended to protect users who enable CORS from any origin on localhost against “drive-by localhost attacks.”
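To illustrate, here is a minimal, hedged sketch of the record-based binding described above; the ProductQuery record, the controller, and the route are invented for this example and are not taken from the Micronaut documentation:

import io.micronaut.core.annotation.Nullable;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.PathVariable;
import io.micronaut.http.annotation.QueryValue;
import io.micronaut.http.annotation.RequestBean;

// A record can now carry the binding annotations that previously required a POJO.
record ProductQuery(@PathVariable String category,
                    @Nullable @QueryValue Integer page) {
}

@Controller("/products")
class ProductController {

    // Micronaut builds the record from the path variable and the query parameter.
    @Get("/{category}")
    String list(@RequestBean ProductQuery query) {
        return query.category() + " page=" + query.page();
    }
}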
Moreover, this release adds support for Azure Cosmos and gives Micronaut Data two new ways to handle multiple tenants. On the other hand, Micronaut Security has been improved with some ahead-of-time optimizations that speed up the time-to-first response and add support for Proof Key for Code Exchange (PKCE). Micronaut Security has also made way for getting OpenID Connect metadata to work without stopping the Netty event loop.
Furthermore, this release brings forth a new command, mn create-aws-lambda, which allows users to create a Micronaut AWS Lambda project through an interactive prompt. Also, Micronaut Launch and the CLI now have more features, such as support for the gitlab-workflow-ci, azure-cosmos-db, localstack, aws-alexa, and aws-cdk features. In addition, the Micronaut Maven Plugin has been improved with faster start/stop of test resources, the ability to choose a namespace for shared test resources, and the addition of the CRaC packaging type to create checkpointed Docker images. With these updates, developers will have even more tools to use with Micronaut to make strong, scalable apps.
A new version of Micronaut CRaC (Coordinated Restore at Checkpoint) has also been released. The update adds support for HikariCP, a popular JDBC connection pool. With this update, developers can now use either the Micronaut Gradle CRaC Plugin or the Micronaut Maven Plugin to build Docker images with a CRaC-enabled JDK and a pre-warmed, checkpointed application. Additionally, Micronaut CRaC can now be used in combination with AWS Lambda SnapStart. These updates will allow developers to create more efficient and scalable applications with Micronaut.
Nonetheless, this release has updates for its cloud offerings, including support for AWS, Azure, GCP, Oracle, and Reactor. Updates to Micronaut AWS include the ability to change the endpoint for the AWS Services SDK and updates to dependencies like the AWS CDK and AWS SDK. Micronaut Azure has been updated to the latest Azure SDK and Azure Functions Java version. In contrast, Micronaut GCP has added support for Google Cloud Events and updated dependencies such as Google Cloud PubSub and Google Secret Manager. Micronaut Oracle now supports Oracle Cloud Infrastructure (OCI) SDK v3, and Project Reactor 3.5.0 has been added to Micronaut Reactor.
Additionally, Micronaut Test has received updates to its dependencies, including JUnit and Mockito. Finally, Micronaut Test Resources has added support for wait strategies and a method to get the TestResourcesClient from an ApplicationContext. These updates will enhance the functionality and performance of applications built with Micronaut.
Developers who want to evaluate Micronaut 3.8.0 can also build applications from Micronaut Launch and read reference material to learn more.
MMS • RSS
Posted on nosqlgooglealerts.
Database-as-a-service (DBaaS) provider DataStax on Thursday said that it is acquiring Seattle-based machine learning services providing firm Kaskada for an undisclosed amount.
The acquisition of Kaskada will help DataStax introduce data-based, event-driven and real-time machine learning capabilities to its offerings, such as its serverless, NoSQL database-as-a-service AstraDB and Astra Streaming, the company said in a statement. AstraDB is based on Apache Cassandra.
DataStax’s decision to bring Kaskada into its fold comes at a time when enterprises are looking to build more intelligent applications in order to boost the efficiency of internal operations and enable better customer experience.
According to a report from market research firm Gartner, by 2022 nearly 90% of new software applications developed by businesses would contain machine learning models or services, as enterprises utilize the massive amounts of data available to companies these days.
However, enterprises can face challenges of scaling and high costs while building AI-driven applications, as these programs cannot rely on traditional processes such as batch extraction, transformation and loading (ETL), but rather have to be built in such a way that data analysis occurs directly on a data platform in order to achieve faster decision-making.
Kaskada’s technology helps solve these issues, according to a joint statement sent by the companies.
“The Kaskada technology is designed to process large amounts of event data as streams or stored in databases and its unique time-based capabilities can be used to create and update features for machine learning models based on sequences of events, or over time,” the companies said, adding that this allows enterprises to adapt to evolving content and generate predictions based on different contexts.
DataStax will release the core Kaskada technology under an open-source license later this year, said Ed Anuff, chief product officer at DataStax.
The company plans to offer it as a new machine learning service in the cloud in the near future, Anuff added.
Kaskada, which also has been contributing to open-source communities, has raised about $9.8 million in funding from venture capital firms such as NextGen Venture Partners, Voyager Capital and Bessemer Venture Partners.
Its co-founders, who hail from Google’s engineering team and The Apache Software Foundation, include CEO Davor Bonaci and CTO Ben Chambers.
MMS • RSS
Posted on nosqlgooglealerts.
DataStax Inc. today announced that it has acquired Kaskada Inc., a startup focused on easing the development of artificial intelligence applications.
The terms of the deal were not disclosed. Kaskada previously received $9.8 million from investors, most of which was raised through a Series A funding round that closed in 2020.
Santa Clara, California-based DataStax provides a commercial version of the Apache Cassandra database. Cassandra is a popular NoSQL database that can process large amounts of information and includes extensive reliability optimizations. The company also offers a managed cloud version of another open-source project, Apache Pulsar, that is used to move data between applications.
DataStax customers rely on its software to power multiple types of workloads, including AI applications. The company will use the technology that it has obtained through the acquisition of Kaskada to enhance its machine learning capabilities.
“Businesses must operate in real time, using data to power operations and fuel instant, informed decisions and actions,” said Chief Executive Officer Chet Kapoor. “DataStax has many customers already using real-time data, and with Kaskada as part of our services portfolio, we can give them the opportunity to use that data to create powerful experiences for their customers with real-time AI.”
One of the most important steps in an AI development project is the so-called feature engineering phase. Feature engineering involves turning the data that an AI model ingests into a form that is easier to analyze. By making data simpler to analyze for an AI model, developers can improve processing accuracy.
A software team’s AI training dataset might comprise two columns: one containing longitudes and another containing latitudes. During the feature engineering phase, developers could combine each pair of longitudes and latitudes into a single coordinate. This replaces two pieces of data with one, which simplifies processing for AI models.
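As a concrete illustration of that combination step, here is a minimal sketch in Java; the Coordinate record and the combine() method are invented for this example and are not part of Kaskada’s API:

import java.util.List;
import java.util.stream.IntStream;

record Coordinate(double latitude, double longitude) {}

class CoordinateFeature {
    // Pair the i-th latitude with the i-th longitude into one feature value,
    // turning two raw columns into a single, easier-to-analyze column.
    static List<Coordinate> combine(double[] latitudes, double[] longitudes) {
        return IntStream.range(0, latitudes.length)
                .mapToObj(i -> new Coordinate(latitudes[i], longitudes[i]))
                .toList();
    }
}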
Kaskada provides a software platform that makes it easier for developers to perform feature engineering. The platform’s flagship feature is its ability to prevent target leakage, a technical issue that often emerges during the feature engineering process. If left unaddressed, the issue can make AI models less accurate.
Target leakage occurs because AI models must be trained on datasets similar to the information they will process in production. For example, a neural network built to process transaction logs must be trained on a collection of sample transactions. Target leakage emerges when the records on which an AI is trained differ from the records it will process in production.
Kaskada’s platform reduces the risk of target leakage by helping developers ensure their training datasets meet technical requirements. Using the platform, developers can filter the records in a training dataset based on the time when each record was created. According to the company, the ability to filter records enables developers to remove data that may lead to target leakage and thereby make their AI models more accurate.
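To make the time-based filtering idea concrete, a hedged sketch follows; the TrainingRecord type and the cutoff logic are invented for illustration and do not reflect Kaskada’s actual API:

import java.time.Instant;
import java.util.List;

record TrainingRecord(Instant createdAt, double[] features, double label) {}

class TrainingSetFilter {
    // Keep only records created before the cutoff so information from
    // "the future" cannot leak into the training set.
    static List<TrainingRecord> before(List<TrainingRecord> records, Instant cutoff) {
        return records.stream()
                .filter(r -> r.createdAt().isBefore(cutoff))
                .toList();
    }
}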
DataStax plans to release the startup’s core technology under an open-source license in the wake of today’s acquisition. Further down the road, the company will incorporate the technology into a new cloud-based machine learning service. The service is expected to debut later this year.
MMS • RSS
Posted on nosqlgooglealerts.
DataStax, which arguably is best known as the commercial entity behind the scalable NoSQL database Apache Cassandra, turned some heads in 2021 with the addition of Astra, a real-time streaming data offering based on Apache Pulsar, to its roster of offerings. Now the Silicon Valley firm is turning heads once again with the acquisition of Kaskada, a provider of tools for simplifying and automating real-time machine learning workflows for data scientists and machine learning engineers.
Kaskada, which we profiled three years ago just before the world went into the first COVID-19 lockdown, was founded to help automate tedious feature engineering tasks. The Seattle, Washington-based company’s flagship product is a feature store where data scientists and ML engineers can define the features they want to track in their machine learning experiments (through integration with data science notebook environments). Secondly, the software serves features to machine learning models as usable vectors, without any time-consuming re-writing of features in new languages or the creation of data pipelines.
Davor Bonaci, Kaskada’s CEO and co-founder, described the offering as a “compiler between the studio and the feature store.” “We are compiling code from whatever the data scientist defines [and] automatically generating a real-time distributed system,” Bonaci told Datanami back in February 2020. “That’s where the rewriting goes away. We generate automatically a distributed system from what you define in our software.”
Kaskada, which uses Cassandra and Akka under the covers (or at least did back in 2020), is primarily used for machine learning projects involving real-time, event-based data, such as recommendation engines and real-time predictions for websites and mobile apps. By automatically keeping the feature vectors up to date based on data coming in from pub-sub systems like Apache Kafka, AWS Kinesis, or Pulsar, Kaskada helps eliminate a lot of “glue” coding that would normally occupy the life of the machine learning engineer.
DataStax clearly sees Kaskada filling a need among its customer base for simplifying the feature engineering work going on as real-time data flows between the data source (open source Pulsar or its commercial Astra Streaming product) and the data sink (open source Cassandra or its commercial Astra DB offering).
“Businesses must operate in real time, using data to power operations and fuel instant, informed decisions and actions,” DataStax chairman and CEO Chet Kapoor says today in a press release. “DataStax has many customers already using real-time data, and with Kaskada as part of our services portfolio, we can give them the opportunity to use that data to create powerful experiences for their customers with real-time AI. It’s an exciting time for DataStax, and we have a clear new mandate: real-time AI for everyone.”
Bonaci said he’s looking forward to working with DataStax to create a new generation of AI-powered applications.
“AI is at its best when it has access to data at scale. And real-time data, in particular, will shape a generation of new applications and real-time decisions for every industry,” Bonaci wrote in a blog post. “Cassandra is uniquely suited for vast amounts of real-time data, making our decision to join DataStax strategically relevant for us, our mission, and the market.”
A decade ago, the initial wave of big data tools focused on collecting and analyzing huge amounts of data in batch workloads, in the hopes of finding useful patterns or other information that can be put to use later.
Today, the timeframe has been collapsed, and companies need to generate and consume those insights as fast as possible. This is particularly evident when it comes to personalizing Web and mobile content, according to Matt Aslett, vice president and research director at Ventana Research.
“The need for real-time interactivity means that these applications cannot be served by traditional processes that rely on the batch extraction, transformation and loading of data from operational data platforms into analytic data platforms for analysis,” Aslett says in the DataStax press release. “Instead, they rely on analysis of data in the operational data platform to accelerate decision-making or improve customer experience. High costs, complexity, and scaling issues have been roadblocks to many organizations in achieving dynamic, real-time intelligence in their operational platforms.”
One of the DataStax customers that might benefit from the integration with Kaskada is Priceline. The online travel firm, which is an Astra DB customer, leans on ML technology to help it serve relevant and personalized search results to customers.
Terms of the acquisition were not disclosed.
Related Items:
Kaskada Accelerates ML Workflow with Its Feature Store
DataStax Nabs $115 Million to Help Build Real-Time Applications
DataStax Taps Pulsar for Streaming Data Platform
MMS • Ben Linders
Article originally posted on InfoQ.
Doing end-of-year retrospectives can help to improve the effectiveness of agile retrospectives by focusing on the actions done and the formats used. To increase the impact of retrospectives, we can alternate between “global galactic” and focus retrospectives.
Reiner Kühn spoke about agile retrospectives at Agile Testing Days 2022.
Looking beyond the single retrospective, almost all retrospective facilitators face the challenge of how to achieve sustainability of the applied measures, Kühn mentioned.
To improve the effectiveness of retrospectives, we can review the results of more than one retrospective, Kühn explained:
I love end-of-year retrospectives where we as a team look back on the retrospectives of the year.
First, we look back on the identified action points and the achieved results. Which action points resulted in positive outcomes? Which ones showed unexpected effects? Which ones were forgotten or not touched? From this view, we learn about our ability to change, to adjust to changes, and how good we are at keeping track of our selected action points.
The second view is on the retrospective formats: which ones led to which insights, which ones touched people. This helps the team to understand why it is important to change the perspective when we look at what we want to improve: our work and interactions.
We can also do end-of-year retrospectives together with other retrospective facilitators to learn from each other. Here we must respect the Las Vegas rule, Kühn said.
To increase the impact of retrospectives, Kühn suggested alternating between “global galactic” and focus retrospectives:
In a global galactic retro, all topics are welcome. In a focus retro, we work on one topic, e.g. our bad ticket quality. Focus retrospectives often require some upfront preparation, e.g. browsing through the tickets and identifying issues.
Good retrospectives have a significant impact on various aspects of our work, Kühn said. First, they are the basis for a structured continuous improvement process. Second, they give the team power and influence in changing things, and they learn to change. Third, they improve their problem-solving skills – not only for retrospective topics but also for their daily problems and conflicts.
InfoQ interviewed Reiner Kühn about improving agile retrospectives.
InfoQ: What are the biggest challenges that retrospective facilitators face?
Reiner Kühn: The challenges can be different for different facilitators.
For some, it is challenging to perform retrospectives that motivate, inspire, and are fun. If a facilitator doesn’t love doing retrospectives, the results will be mediocre.
My personal challenges, even after more than 300 retros, are time management and dealing with personalities. Time management, because for me identifying effective actions is more important than being time efficient, that is, keeping a timebox.
InfoQ: How can we recognize “bad” retrospectives?
Kühn: Bad retrospectives often can only be recognized in retrospect. During a retrospective, when conflicts or emotions come up, you might think it is a bad retrospective. But conflicts and emotions often show us that we touched something that is worth further exploration. Pain is one of the most important triggers for change.
From the participants’ perspective, bad retrospectives may be seen as a waste of time, boring, or superficial.
From a facilitator’s perspective, a bad retrospective is a skipped one. Or when the same issue comes up again and again. Or when I have the feeling that we have an elephant in the room nobody wants to see.
InfoQ: Can you give examples of retrospectives that didn’t go well?
Kühn: I can remember two situations in which I could already recognize it was going to be bad.
The first time: the one and only retrospective I forgot to prepare for. I was alerted by my calendar’s notification 15 minutes ahead. At that time, with the experience of more than 100 retrospectives, I was convinced that I could do a retrospective like a stand-up comedian. It was bad; I noticed it, and the handful of participants did as well. That taught me: be respectful towards each and every single retrospective. The issues and the people deserve it.
The second time: while working in a retrospective on an issue of mobbing that arose again and again, I stopped facilitating the retro. I told the team that I felt I was not able to support them regarding this specific issue.
MMS • Johan Janssen
Article originally posted on InfoQ.
Two months after the first commit in October 2022, Peter Verhas, Senior Architect at EPAM Systems, has released version 2.0.0 of SourceBuddy, a new utility that compiles dynamically created Java source code, defined by a String or a file, to a class file. SourceBuddy, requiring Java 17, is a simplified facade for the javac compiler, which delivers the same functionality.
Version 2.0.0 supports a combination of hidden and non-hidden classes during compilation and run-time. Furthermore, the API has been simplified, containing breaking changes such as changing the loadHidden() method to the hidden() method, hence the new major release. A complete overview of the changes per version is available in the releases documentation on GitHub.
SourceBuddy can be used after adding the following Maven dependency:
<dependency>
    <groupId>com.javax0.sourcebuddy</groupId>
    <artifactId>SourceBuddy</artifactId>
    <version>2.0.0</version>
</dependency>
Alternatively the following Gradle dependency may be used:
implementation 'com.javax0.sourcebuddy:SourceBuddy:2.0.0'
To demonstrate SourceBuddy, consider the following example interface which will be used by the dynamically created code:
package com.app;

public interface PrintInterface {
    void print();
}
The simple API is able to compile one class at a time by using the static com.javax0.sourcebuddy.Compiler.compile() method. For example, to compile a new class implementing the previously mentioned PrintInterface:
String source = """
        package com.app;

        public class CustomClass implements PrintInterface {
            @Override
            public void print() {
                System.out.println("Hello world!");
            }
        }""";
Class<?> clazz = Compiler.compile(source);
PrintInterface customClass = (PrintInterface) clazz.getConstructor().newInstance();
customClass.print();
The fluent API offers features to solve more complex problems, such as the compilation of multiple files, with the Compiler.java() static method:
Compiler.java().from(source).compile().load().newInstance(PrintInterface.class);
Optionally, the binary name of the class may be specified, although SourceBuddy will already detect the name whenever possible:
.from("com.app", source)
For multiple source files, the from() method may be called multiple times, or all the source files in a specific directory may be loaded at once:
.from(Paths.get("src/main/java/sourcefiles"))
Optionally, the hidden() method may be used to create a Hidden Class, which can’t be used directly by other classes, only through reflection by using the class object returned by SourceBuddy.
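For example, a minimal sketch, assuming hidden() slots into the fluent chain shown earlier; the exact placement of the call should be verified against the README on GitHub:

final var printer = Compiler.java()
        .from("com.app", source)
        .hidden()   // assumption: hidden() is part of the fluent chain
        .compile()
        .load()
        .newInstance(PrintInterface.class);
printer.print(); // the hidden class is only reachable through this instance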
The compile() method generates the byte codes for the Java source files, but doesn’t load them into memory yet.
final var byteCodes = Compiler.java()
.from("com.app", source)
.compile();
Optionally, the byte codes may be saved to the local drive:
byteCodes.saveTo(Paths.get("./target/generated_classes"));
Alternatively, the stream() method, which returns a stream of byte arrays, may be used to retrieve information such as the binary name:
byteCodes.stream().forEach(
bytecode -> System.out.println(Compiler.getBinaryName(bytecode)));
The byteCodes.load() method loads the classes and converts the bytecode to Class objects:
final var loadedClasses = compiled.load();
Accessing a class is possible by casting to a superclass or an interface the class implements, or by using the reflection API. For example, to access the CustomClass:

Class<?> customClass = loadedClasses.get("com.app.CustomClass");
Alternatively, the newInstance() method may be used to create an instance of the class:
Object customClassInstance = loadedClasses.newInstance("com.app.CustomClass");
The stream of classes may be used to retrieve more information about the classes:
loadedClasses.stream().forEach(
clazz -> System.out.println(clazz.getSimpleName()));
More information about SourceBuddy can be found in the detailed explanations inside the README file on GitHub.
MMS • Jan Reimann
Article originally posted on InfoQ.
Software engineers should accept their responsibility to take energy consumption and carbon dioxide emissions into account when developing software; they have a big responsibility towards nature, our environment, and sustainability. This article sheds light on how software engineers can take this perspective into account, zooming in on the energy-related shortcomings or bottlenecks of bugs.
By Jan Reimann
MMS • Sergio De Simone
Article originally posted on InfoQ.
Introduced with Android 10, Modular System Components enable updating end-user devices outside of the normal Android release cycles. The new Extension SDK framework, now public, aims to make their integration simpler for developers.
By Sergio De Simone
MMS • Anthony Alford
Article originally posted on InfoQ.
Geoffrey Hinton, professor at the University of Toronto and engineering fellow at Google Brain, recently published a paper on the Forward-Forward algorithm (FF), a technique for training neural networks that uses two forward passes of data through the network, instead of backpropagation, to update the model weights.
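To make the two-forward-pass idea concrete, here is a minimal, hedged sketch of a single Forward-Forward layer in Java. The goodness measure (sum of squared activations) and the local update follow the paper’s general description, but the hyperparameters and toy data are invented, and real implementations add details such as layer normalization and carefully constructed negative data:

import java.util.Random;

class ForwardForwardLayer {
    final int in, out;
    final double[][] w;
    final double threshold = 2.0; // goodness target separating positive from negative data
    final double lr = 0.03;

    ForwardForwardLayer(int in, int out, Random rnd) {
        this.in = in;
        this.out = out;
        w = new double[out][in];
        for (double[] row : w)
            for (int k = 0; k < in; k++) row[k] = rnd.nextGaussian() * 0.1;
    }

    double[] forward(double[] x) {
        double[] h = new double[out];
        for (int j = 0; j < out; j++) {
            double z = 0;
            for (int k = 0; k < in; k++) z += w[j][k] * x[k];
            h[j] = Math.max(0, z); // ReLU activation
        }
        return h;
    }

    // One local update: push goodness up for positive data, down for negative.
    // No gradients flow to other layers, which is the key departure from backprop.
    void train(double[] x, boolean positive) {
        double[] h = forward(x);
        double goodness = 0;
        for (double v : h) goodness += v * v;
        double p = 1.0 / (1.0 + Math.exp(-(goodness - threshold)));
        // dLoss/dGoodness of the logistic loss on "is this positive data?"
        double dg = positive ? (p - 1.0) : p;
        for (int j = 0; j < out; j++) {
            double coef = dg * 2.0 * h[j]; // dGoodness/dz_j = 2*h_j (zero where ReLU is off)
            for (int k = 0; k < in; k++) w[j][k] -= lr * coef * x[k];
        }
    }
}

class FFDemo {
    public static void main(String[] args) {
        var rnd = new Random(42);
        var layer = new ForwardForwardLayer(4, 8, rnd);
        double[] positive = {1, 0, 1, 0}; // toy stand-ins for real and corrupted inputs
        double[] negative = {0, 1, 0, 1};
        for (int step = 0; step < 200; step++) {
            layer.train(positive, true);   // first forward pass: real data
            layer.train(negative, false);  // second forward pass: negative data
        }
    }
}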
By Anthony Alford