Mobile Monitoring Solutions


Google ML Kit SDK Now Focuses on On-Device Machine Learning

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Google has introduced a new ML Kit SDK designed to work standalone, without the tight Firebase integration the original ML Kit SDK required. Additionally, it provides limited support for replacing its default models with custom ones for image labeling and for object detection and tracking.

Focusing ML Kit on on-device machine learning means your app will not incur network latency and will be able to work offline. Additionally, the new ML Kit SDK keeps all of its data on the device, which is a key requirement for building privacy-preserving applications.

ML Kit SDK retains its original feature set covering vision and natural language processing. Vision-related features include barcode scanning, face detection, image labeling, object detection and tracking, and text recognition. Natural language-related features include language identification from a string of text, on-device translation, and smart reply.

Google now recommends using the new ML Kit SDK for new apps and migrating existing ones from the older, cloud-based version. If you need the more advanced capabilities the old version provides, such as custom model deployment and AutoML Vision Edge, you can use Firebase Machine Learning.

As mentioned, though, Google is also taking its first steps to extend the ML Kit SDK so it can support replacing its default models with custom TensorFlow Lite models. Initially, only Image Labeling and Object Detection and Tracking support this capability, but Google plans to extend it to more APIs.
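
To make the shape of this concrete, here is a minimal Kotlin sketch of labeling an image with a bundled custom TensorFlow Lite model. The class and method names follow the ML Kit documentation at the time of the announcement, and the asset file name is hypothetical; verify both against the current API reference before relying on them.

import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

fun labelWithCustomModel(image: InputImage) {
    // Point ML Kit at a TensorFlow Lite model bundled in the app's assets.
    val localModel = LocalModel.Builder()
        .setAssetFilePath("model.tflite") // hypothetical asset name
        .build()

    // Swap the default image-labeling model for the custom one.
    val options = CustomImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.7f)
        .setMaxResultCount(5)
        .build()

    ImageLabeling.getClient(options).process(image)
        .addOnSuccessListener { labels ->
            labels.forEach { println("${it.text}: ${it.confidence}") }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}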

The new ML Kit SDK will be available through Google Play Services, which means it does not need to be packaged with the app binary, keeping app size smaller.

As a final note, the ML Kit SDK includes two new APIs: Entity Extraction, which detects entities in text and makes them actionable, and Pose Detection, which supports 33 skeletal points and makes hand and foot tracking possible. The new APIs are available to interested developers through Google's early access program.



Krustlet: a kubelet Written in Rust to Run WebAssembly Workloads in Kubernetes

MMS Founder
MMS Christian Melendez

Article originally posted on InfoQ. Visit InfoQ

Deis Labs has released Krustlet, an open-source Kubernetes kubelet written in Rust that runs WebAssembly workloads within Kubernetes. Krustlet’s initial version can run basic workloads but does not yet support features like pod events or init containers. Applications must target the WebAssembly System Interface (WASI), as Krustlet only runs WebAssembly modules.
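
As a quick illustration of what such a workload looks like, any ordinary Rust program compiled for the wasm32-wasi target yields a module of the kind Krustlet schedules in place of an OCI container image. The build commands below are standard Rust toolchain steps, not Krustlet-specific ones.

// src/main.rs -- a plain Rust program, compiled to a WASI module with:
//   rustup target add wasm32-wasi
//   cargo build --release --target wasm32-wasi
fn main() {
    println!("Hello from a WASI module scheduled by Krustlet!");
}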

By Christian Melendez



Facebook Announces TransCoder AI to Translate Code across Programming Languages

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

Facebook AI Research has announced TransCoder, a system that uses unsupervised deep learning to convert code from one programming language to another. TransCoder was trained on more than 2.8 million open-source projects and outperforms existing code translation systems that use rule-based methods.

By Anthony Alford



Article: Q&A on the Book Leading Lean

MMS Founder
MMS Ben Linders Jean Dahl

Article originally posted on InfoQ. Visit InfoQ

Leading Lean by Jean Dahl describes a journey that leaders can embark on to respond to disruptive change. It leads them through the six dimensions of leading self, others, the customer, and the enterprise, by creating an innovative culture that delivers value. It provides not just the theory behind Modern Lean, but also practical methods, tools, strategies, and case studies.

By Ben Linders, Jean Dahl



GitHub Super Linter Helps Developers Ensure No Broken Code Is Ever Merged

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

GitHub Super Linter aims to automate the process of setting up your GitHub repositories so they will use the appropriate linter for your language whenever a pull request is created.

According to GitHub, its Super Linter will make it easier for developers to ensure broken code never makes it into their master branches. When using Super Linter,

any time you open a pull request, it will start linting the code base and return via the Status API. It will let you know if any of your code changes passed successfully, or if any errors were detected, where they are, and what they are.

After fixing all detected issues, the developer can update the pull request; once Super Linter finds no additional issues, the checks pass and the pull request can be merged.

GitHub Super Linter basically takes care of configuring GitHub repositories to run a GitHub Action for each new pull request. It packages a number of previously available open source linters into a Docker container that is called by GitHub Actions. Super Linter supports many languages, including JavaScript, Python3, Perl, TypeScript, Golang, and many others.

To better suit Super Linter to your organization’s guidelines, you can provide template rule files for the linters you use by copying them into .github/linters. If no custom rule files are provided, Super Linter will use its own default rule files. Similarly, Super Linter allows you to disable specific linters and rules to ignore certain errors or warnings.

Super Linter also includes many customization options that can be set using environment variables. For example, setting VALIDATE_ALL_CODEBASE to true causes all files to be linted, as opposed to only new and modified files. A number of flags allow you to enable or disable linting on a language-by-language basis.
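
To tie these pieces together, a minimal workflow file wiring Super Linter into a repository might look like the following sketch, modeled on the pattern in the Super Linter README at the time; the action version tag and default branch name are illustrative.

# .github/workflows/linter.yml
name: Lint Code Base
on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Lint Code Base
        uses: github/super-linter@v3
        env:
          # Lint only new and modified files; set to true to lint everything.
          VALIDATE_ALL_CODEBASE: false
          DEFAULT_BRANCH: master
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}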

According to GitHub, Super Linter will help establish sound coding practices across languages and will make collaboration more effective by reducing the amount of work spent on pull requests.

GitHub competitor GitLab also supports a feature similar to GitHub Super Linter through its Code Quality component. GitLab relies on CodeClimate engines to provide automated code review before merging to master. GitLab’s solution does not support as many languages as Super Linter, but GitLab said they are working to extend language support in Code Quality.

As a final note, GitHub Super Linter may be seen as another step in the direction of moving your entire IDE to the Cloud. Along the same lines, a few months ago GitHub introduced GitHub Codespaces, which provides a fully-configured Visual Studio Code-based environment inside of GitHub.



Esbuild JavaScript Bundler Claims 10-100x Faster Bundling Time

MMS Founder
MMS Bruno Couriol

Article originally posted on InfoQ. Visit InfoQ

esbuild, a JavaScript bundler and minifier, seeks to bring order-of-magnitude speed improvements to the JavaScript bundling and minification process. esbuild achieves its speed by being written in Go and compiled to native code, by parallelizing tasks to leverage multi-core processors, and by minimizing data transformations.

Evan Wallace, CTO and co-founder of @figmadesign and esbuild’s creator, explained that he aims to reset the performance expectations of the JavaScript community:

[…] The current build tools for the web are at least an order of magnitude slower than they should be. I’m hoping that this project serves as an “existence proof” that our JavaScript tooling can be much, much faster.
[…] I’m not trying to compete with the entire JavaScript ecosystem and create an extremely flexible build system that can build anything.
I’m trying to create a build tool that a) works well for a given sweet spot of use cases (bundling JavaScript, TypeScript, and maybe CSS) and b) resets the expectations of the community for what it means for a JavaScript build tool to be fast.

Wallace mentioned that a key use case for esbuild was the packaging of a large codebase for production. This includes minifying the code, which reduces network transfer time, and producing source maps, which are important for debugging.

Wallace provided a custom-made JavaScript benchmark that shows esbuild achieving build times under one second, while other tools’ build times range from 30 seconds to over a minute. Similar results are observed in the provided TypeScript benchmark:

JavaScript benchmark

A developer ran the benchmark on a different machine and reproduced the results, confirming the speed improvements. Devon Govett, creator of the Parcel bundler, hinted at one reason behind the performance gap between esbuild and Parcel:

[…] For those looking at this and wondering why Parcel appears much slower, it’s because it runs Babel by default whereas webpack and others do not. If those tools were configured similarly, or Parcel was configured to skip this, I imagine the results would be closer.

While the benchmark methodology needs to be refined to reflect realistic scenarios, the build times achieved by esbuild remain impressive and are unheard of among the alternative bundlers written in JavaScript or TypeScript. esbuild is written in Go and compiles to native code. It additionally relies on an architecture that maximizes parallelism (e.g., when parsing, printing, and generating source maps), tries to do as few full abstract syntax tree (AST) passes as possible for better cache locality, and supports incremental builds.

esbuild supports constant folding and is able to handle libraries such as React that may contain code as follows:

if (process.env.NODE_ENV === 'production') {
  module.exports = require('./cjs/react.production.min.js');
} else {
  module.exports = require('./cjs/react.development.js');
}

Depending on the value of process.env.NODE_ENV passed on the command line, one of the branches above is eliminated from the bundle.
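
For example, a production build that triggers this dead-branch elimination could be invoked as follows; the flags match esbuild’s documented CLI, though exact option names may have evolved since the 0.5.x releases.

# Bundle, minify, emit a source map, and pin NODE_ENV at build time so the
# development branch above is dropped from the output.
esbuild src/index.js --bundle --minify --sourcemap \
  --define:process.env.NODE_ENV='"production"' --outfile=dist/bundle.js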

Developers have been welcoming the speed improvements. In a recent panel at QCon London, Eoin Shanaghy, CTO and co-founder of fourTheorem, said:

I tried a recent development called ES Build, which is a native transpiler written in Golang, which is significantly faster than any of the alternatives. I think that’s ultimately what’s going to happen because the benefits are there. You need it. […] You don’t have to build every JavaScript tool in JavaScript.

esbuild is under active development and may not yet be ready for production use. It is an open-source, MIT-licensed project. The latest release at the time of publication, 0.5.0, includes both a Go and a JavaScript (browser) API, together with TypeScript decorator support.



Rust Breaks Into TIOBE Top 20 Most Popular Programming Languages

MMS Founder
MMS Vivian Hu

Article originally posted on InfoQ. Visit InfoQ

Developers’ love for Rust has translated into real-world adoption.

On June 2, 2020, TIOBE reported that Rust had broken into the TIOBE index top 20 for the first time.

The TIOBE index is a long-standing measure of programming language popularity in real-world use. The current top five on that list are C, Java, Python, C++, and C#. Rust is a direct competitor to C and C++ and, to a lesser extent, a competitor to Java and C#.

There were leading indicators of Rust’s rise. It has been Stack Overflow’s most beloved programming language for five years in a row, with an over-80% approval rating from the more than 50,000 developers surveyed each year. In a recent developer survey from JetBrains, 8% of the nearly 20,000 respondents indicated that they plan to learn Rust next year, making it the fastest-growing programming language. In fact, the JetBrains survey also showed that 67% of Rust developers use the language even when not required to by an employer or paid to do so (i.e., on hobby projects).

Rust seems to be the only language where more people are planning on adopting it than are currently using it. — Reddit user u/gilescope

But as the popularity of Rust grows, as evidenced by the TIOBE ranking, more and more developers are getting paid for their Rust work. Rust has been adopted by well-known open-source projects such as Deno and Polkadot, and by organizations like Mozilla. It has also seen significant traction in the enterprise world at companies like Dropbox, Microsoft, Cloudflare, and many others.

Rust promises to deliver high-performance software as C does, but without the memory-related bugs that plague C and C++. Microsoft has said that 70% of all severe bugs in its software are related to memory safety, and the trend is not improving. More than 20 years ago, managed languages such as Java and C# were widely adopted to eliminate this class of bugs. Managed language runtimes, such as the Java Virtual Machine and .NET, achieve this through the use of garbage collection (GC) at runtime. However, GC also introduces significant runtime overhead: it reduces application performance and, perhaps even more concerning, results in unpredictable performance.

The design goal of Rust is to achieve memory safety without GC or any other runtime overhead. It provides zero-cost abstractions over C-style pointers. Sounds too good to be true? Well, the trade-off is a strict compiler that enforces memory-usage rules. Rust features a strongly typed language design and a sophisticated compiler toolchain, and it is very well received by the developers who use it.
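
A two-line example illustrates the kind of rule the compiler enforces: once ownership of a value moves, the old binding can no longer be used, which is how Rust frees memory deterministically without a garbage collector.

// A minimal sketch of Rust's compile-time ownership rules.
fn main() {
    let s = String::from("hello");
    let moved = s;          // ownership of the heap buffer moves to `moved`
    // println!("{}", s);   // compile error E0382: borrow of moved value: `s`
    println!("{}", moved);  // exactly one owner frees the memory; no GC needed
}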

The safety and performance of Rust make it an ideal programming language for system applications, replacing C and C++. But Rust’s popularity goes beyond system applications. A few months ago, the Rust community published results from its own developer survey. It shows that most developers use Rust to write web applications, although it is also trendy in areas like IoT and blockchain.

While Rust can compile into safe and efficient native binaries, it is often necessary to run Rust applications in runtime containers, which provide additional memory safety, access security, code isolation, portability, and manageability. Outside of the browser, Rust programs compiled to WebAssembly run in host environments ranging from Node.js and Deno to blockchains.

As a systems language, Rust does have a bit of a learning curve. But it also has an abundance of tutorials for getting started. Check out tutorials and examples to get started on the Rust journey.

The Rust programming language is a dual-licensed open-source project under both the MIT license and the Apache License (Version 2.0). Contributions are encouraged and should follow the Rust contribution guideline.



Podcast: Lanette Creamer on Exploratory Testing and Technical Testers

MMS Founder
MMS Lanette Creamer

Article originally posted on InfoQ. Visit InfoQ

In this podcast recorded at Agile 2019, Shane Hastie, Lead Editor for Culture & Methods, spoke to Lanette Creamer about the need for technical skills by testers and the importance of exploratory testing.

Key Takeaways

  • Despite the importance and value it provides, software testing is not a particularly respected profession
  • Testers with development skills and developers with testing skills can communicate effectively with each other and pairing results in faster bug identification and removal
  • Unit tests are an asset of confidence
  • Testers have an ethical responsibility to think beyond the intended use of the code, considering what could happen and how the product could be misused
  • Exploratory testing is an approach where instead of trying to prove that the software works, the goal is discovery


Transcript

  • 00:21 Good day folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. I’m at Agile 2019, and I’m sitting down with Lanette Creamer.
  • 00:29 Hi, Lanette. Welcome. Thanks for taking the time to talk to us. We’ve followed each other on social media for a while, but this is the first time we’ve had a chance to meet in person.
  • 00:39 Would you mind giving us a little bit of introduction and background?

  • 00:42 Lanette: Sure. So, I have been on computers since the BBS days. I started on DOS before we had Windows 95, even, and I moved to the Mac Quadras, and I just really liked being on the computer.
  • 00:56 I went to school for graphic design and I ended up, just through accident, getting a job in tech support for an outsourcer for Adobe, I was a contractor there, so many levels of separation. I ended up loving it and I ended up getting a contract testing job at Adobe, I ended up really liking that and to get my first job at Adobe I interviewed against about 50 people.
  • 01:20 I managed to get that job and I stayed there for 10 years and it was a great experience.  I worked on some of the first Creative Suites we ever put together. The early versions of InDesign back when Quark was the big thing.
  • 01:31 Then I did some consulting for a big coffee company in Seattle. I worked on some hospital data; it was a really interesting space. I was the first tester ever at a company that builds products on InDesign Server. I started a testing discipline there, and then the past few years I’ve been working for the Omni Group and we do iOS and Mac software for productivity. And that’s been really great; I never got the chance to learn the iOS tools before. And that’s been fun.
  • 01:59 That brings me up to here, where I’m now focusing on becoming a more technical tester, more on the development side, working on my developer skills and trying to build that up, because I’ve been testing for so long I feel like my growth opportunity is really more on the programming side.
  • 02:17 Shane: One of the trends that I’ve certainly seen in the testing space over the last few years is the shift towards more technical testers, testers needing more technical skills. What do you feel about that? Is that the case?

  • 02:30 Lanette: I feel very mixed about it because I love exploratory testing, it’s a true passion of mine. I’m very good at it. But I also know the ceiling is too low. Testing is not a respected profession. You can say as much as you want to, that it’s equal, but programming pays more and I’m tired of trying to prove my value as a tester. It’s impossible. There are some people that will never be convinced.  The very best tester you could ever have is still going to make less than a mediocre programmer.
  • 03:00 There comes a point when you’re just swimming upstream. You cannot continue to bash your head against a brick wall.
  • 03:08 I think there’s great value in exploratory testing and I don’t think everyone should have to be a programmer, but I also feel like I can be a good programmer and a good tester. I can do both. It doesn’t take anything away from my testing that I want to write code too, and I’ll still do both. I’ll still be a tester, but I’ll also be a developer.
  • 03:28 Shane: So that combination of skills, what does it bring extra?
  • 03:31 Lanette: It brings some debugging abilities, in my opinion; it brings some different levels of testing. So I can go in and write some API tests in the same language as the code, and that makes it easier to involve my developers. I mostly write JavaScript at this point, but I’ve written Python in the past and I’d like to learn Java too.
  • 03:51 If you can go in and write some code in the same language as your developers and check it in where they can easily review it, they’re more likely to be able to participate with you and make suggestions, where they may not know the testing space, but they know the programming space and they know what the API should do. So it’s an area for collaboration that didn’t exist before. Maybe meeting them in the middle, where they’re at.
  • 04:14 Shane: And for the programmer – why should they learn about testing?
  • 04:19 Lanette: Well, they have a responsibility to test their own code and I would hope that we all want a quality product. 
  • 04:26 For one thing, companies are just saying, developers can do all the testing and they don’t have time to do it. So you better learn to do it efficiently, if you want time to code your features.
  • 04:35 So I would think even if you have testers, you want to be able to unit test as well as you can and make sure that you aren’t changing anything that breaks behind you, just for your own sanity, for your own capability to change your code in the future, having those tests, I think is an asset of confidence.
  • 04:55 It just adds a layer of confidence. If you’re good at testing your own code, you aren’t just throwing something over the wall that might be embarrassing.
  • 05:03 Shane: So what are the key skills that a programmer would have to pick up to do good testing?
  • 05:11 Lanette: I think the top thing is curiosity. Not what it’s supposed to do, but you need to think about what it’s not supposed to do or how it may interact with things outside the scope of your code.
  • 05:24 It’s those areas that are getting missed. Things like performance, race conditions, error handling, security, accessibility. We need to think far outside the requirements and outside the code. And that means bringing in some customer perspective, is it even suitable to the purpose and what unintended consequences might it have?
  • 05:44 You know, this past year we’ve had all these security leaks. We have had the unintended consequences of social media really be biting in several countries. No one’s thinking about these things. So we have to, as agileists, start thinking about beyond our intended use of the code, what could happen. And that’s really an ethical concern that we do that.
  • 06:06 Shane: So I’m just a programmer. They give me a spec and they tell me what to do. I do it, don’t I?

  • 06:10 Lanette: Well, I would hope that you would have more professionalism than that. Maybe when you first start, that would be your approach, just to get it to work. But at some point, I think we all have pride in what we’ve built, and we want to see it really solve the problem for a person. I think for most of us, the satisfaction isn’t just, oh, here’s this piece of code. It’s having an impact on someone who’s using it and that they can get their task done.
  • 06:34 So it’s going one level further beyond does it work to does it satisfy this user’s need and is it as good as it can be?
  • 06:42 And I think that’s where exploratory testing comes in; that’s beyond the functional aspects of does it function. Is it suitable, and does it avoid doing anything it shouldn’t?
  • 06:53 Shane: We know about TDD for instance, or we should know about TDD. If I don’t know about TDD, where do I find out?

  • 07:02 Lanette: Well, that’s interesting because we didn’t learn about TDD in my class. I took a certification class for JavaScript developers, and we literally did one test the entire nine-month course. It’s like testing didn’t exist.
  • 07:15 Since then, I’ve been trying to learn TDD, and part of what’s been excellent there is people are willing to pair with you online, sometimes. And there are these websites now like freeCodeCamp and Exercism.io where you can go in, in any language, and there are already tests set up and you can start learning TDD right there.
  • 07:35 That’s been really great for me because I started from nothing. I knew nothing about TDD. I’d literally never paired until I went to an interview and they paired with me, and I’d never done TDD or pairing before. To say I was slow and not good at it would be an understatement; it wasn’t my best performance because I was just so new to it.
  • 07:53 The fact that I’m supposed to be listening, reading the code, processing it, then adding a test and talking at the same time. That’s a lot to do. So just learning how to take a pause when I need to, learning how to do that hand-off when you’re pairing and not try and both type and drive, it takes practice.
  • 08:12 Shane: Pairing is a skill.

  • 08:14 Lanette: I think it is.
  • 08:16 Shane: And it takes practice
  • 08:18 Lanette: Getting the pace is not something that comes naturally to everyone. It was pretty awkward for me. I felt like I was on overload, trying to do all of this at one time.
  • 08:27 Shane: Let’s step back into the testing space. You’ve given a workshop here at the conference on coaching for exploratory testing.
  • 08:35 You mentioned that exploratory testing is a really important skill that goes beyond just looking at, does it meet the requirements? So what is exploratory testing and what’s needed for it?

  • 08:49 Lanette: Exploratory testing is an approach where instead of trying to prove that the software works, the goal is discovery.
  • 08:56 We’re going to discover new things. And so you need creativity, but in order to coach it, you want to set people up for success, especially if you have people that aren’t testers. And we have cloud setups now, and we have containers, which is a huge boon if you want to coach exploratory testing, because if you have a complex setup, you can get everything set up in the environment in advance, and that maximizes your testing time.
  • 09:24 So when I’m coaching other people and trying to lead exploratory testing sessions, I get everything set up first and I preflight it; I make sure we aren’t dead on arrival, that we have builds that work and that we have the access needed.
  • 09:37 Then you also generally want to start with a charter and the charter is a mission statement, and it needs to be broken down so that you can complete it in two hours, but you need several charters because you may not end up finding anything in the first one. And if you’re not finding anything interesting and the discovery is not happening, you move on to the next one. The participant will be following the charter as an overall mission statement.
  • 10:00 Shane: What would a charter look like?

  • 10:02 Lanette: Here’s a great example. This is one of my favourite examples, just in general, of a charter. Christopher Columbus, using the ships and the money from the monarchy of Spain, would sail to find a passage to India. And guess what? Along the way, he bumped into a continent.
  • 10:21 Now, if you bump into a continent, you don’t ignore it. That’s the point, it’s the discovery. We don’t know what’s there. And I said in my workshop that the main obstacle to effective exploratory testing is the belief that there isn’t much to find.
  • 10:35 Just like Christopher Columbus was so sure there was nothing between Spain and India, he’s just going to sail straight there. That assumption: we need to actually go on the journey and test it. And maybe we’re right. Maybe we do end up finding a new passage to India. Great. And we want to report our success if we do, but if we do discover something unexpected, that’s really the point of the charter: to go on that journey and verify what we find.
  • 11:03 And it may be that our assumptions are correct. And we just validate those or maybe that we bumped into a surprise.
  • 11:09 Shane: So we’ve got a charter that tells us what we think we’re looking for or we suspect might be there or some assumption that we want to validate or invalidate. And now I’m sitting in front of a computer
  • 11:21 Lanette: And you have your setups, so I’ve made sure you have the right permissions.
  • 11:24 I’ve set you up. You’ve got a container. You understand the charter, you have what you need, and as a coach, I’m going to keep track of time for you. I’m going to make sure you have the capability to take notes. So you can just write down anything as you go. I’m going to make sure you know how to grab screenshots and a screen recording if you see something interesting, we want to look into later and then if you get stuck, I’m going to help you. And if you aren’t used to reporting bugs, I’m going to help you.
  • 11:49 I explained in my workshop some basics of how to take performance benchmarks, how to isolate bugs, how to get screenshots. And importantly, how you want to report bugs in neutral language, factually.
  • 12:01 Not a judgment – so you don’t want to say, Oh, redraw is bad. Instead, you might say pixel redraw incomplete in this section.
  • 12:08 Because it’s people’s work, you don’t want to tell someone their baby’s ugly. But you do want to be very factual about what’s happening and provide as much evidence as you can and get it down as simple as you can, and always make sure you’re maintaining a good relationship with the team, including giving them what worked well.
  • 12:26 Developers don’t get enough positive feedback about what was really fun. What did you do that was cool? And so when I’m coaching, I try and bring those facts out as well. So we’re not just dumping an ugly pile of bugs on their desk, but also letting them know what surprised us by being so good.
  • 12:43 Shane: We found some bugs, but wouldn’t it be better if everyone saw them?

  • 12:47 Lanette: Sometimes, it can be better. And there are several ways I’ve tried to report that. One way will be to lay out a workflow in a flow chart, and I’ll make it very basic: green if it went great, a little yellow triangle if there were a few problems but we could get there, and a full-out red stop sign if you’re blocked.
  • 13:08 That way, you can kind of get an overview of which areas we were successful in. It’s sort of like a visual map. Sometimes on a remote team, we would have a Pivotal Tracker project that had the charters in it, and everyone could see the charters and prioritize them along with the user stories. And other times we would just share the bug findings in a chat room so that people could see what was found.
  • 13:34 Some teams prefer an email summary. I’ve done it that way as well. I would just do an outline of what we covered, what worked well and what the bugs were. 
  • 13:41 All of those can be effective, depending on the team. One of my favorite ways to include charters, if the team’s doing sprints is we write charters on a sticky note and then we vote for which one’s most important when we’re doing user story review, and then at the beginning of the sprint, when the developers are, you know, heads down, writing features, testers will grab the most important charter and put it in the test column.  Then when it’s done, if it’s multiple people doing it and it’s coached, the coach moves them, or if it’s just one tester at a time doing it, then they move ’em. 
  • 14:16 And then the charters that aren’t done, sometimes they’re important and they go back into the next story review. Other times we just say we’re not going to do them if they don’t get picked. Then I like to put those on the backlog somewhere, in a backlog pile, and then we see if we made a good decision after it ships, or, you know, is that something maybe we should revise next time.
  • 14:34 Shane: Inspect and adapt our testing?

  • 14:36 Lanette: Exactly.
  • 14:37 Shane: Who would have thought?
  • 14:38 Lanette: Yes, it’s such a simple thing, and yet I don’t see very many people include their charters as part of their testing. But it’s a great thing for testers to do early in the sprint, where we have the code that we’ve just validated as functional: but have we really validated that it works together, that it’s secure, that it’s performing?
  • 14:56 It can kind of help us bake in a little bit more quality to the stuff we did the last sprint.
  • 15:01 Shane: What about including other non-testing team members in doing the exploratory testing? You know, asking our technical friends, the developers, to actually do some of that testing.

  • 15:11 Lanette: I think it’s really effective, but one of my favorite things to do, if developers have time, is if you can have them watch actual customers go through and use the product. That gives some strong motivation to fix bugs, because there’s no excuse like, oh, a user wouldn’t do that, or that’s a tester case, or, that’s my favorite, that’s an edge case.
  • 15:31 Shane: I’m an edge.
  • 15:32 Lanette: Yes. Seeing the impact it has on customers can be really positive, it can be that instant feedback that we need, but I’ve also run exploratory testing sessions with up to 40 people, and it doesn’t matter who those people are, as long as they understand the charter and they know enough, you know, whether you need to give them a demo at the beginning so they understand a little more.
  • 15:54 It’s totally possible to include anyone, including customers. I’ve had business analysts, POs; anyone at all can really participate as long as they care about the quality of the product and they have an open mind.
  • 16:06 Shane: If you’re organizing one of those relatively large-scale testing sessions, this is where the coach role, I’m guessing, would be pretty important to be able to guide and support.

  • 16:17 Lanette: Yes. The main thing you’re doing when you’re coaching is making sure everybody else is able to focus. And during the time box, part of that is if you’re doing it remote, people need to have access to the channel so they can ask questions live, and they can see what other people are finding.
  • 16:33 And that’s where one person finds a bug, and another person looks in that area and finds something slightly different, because a lot of times bugs live in nests. So doing it remote, the coach is pretty actively in the chat channel. If doing it live, like I have before in a large lab, then you’re walking around and you’re really seeing everyone has what they need.
  • 16:52 Someone finds a bug, they’re not quite sure what it is, you’re going to pair with them to help isolate it. Especially if they’re not testing commonly, it may not come as naturally to them to isolate that bug or to write it up, they may have questions on the process. So you want to be available for that.
  • 17:07 Shane: So in that case, you’re not going to be doing much testing yourself. You’re providing that support and guidance.
  • 17:12 Lanette: I tend to still do testing, because if someone’s kind of having trouble getting started, then I’ll pair with them. So if you’re not helping someone get unstuck, you can still help someone get started with their testing ideas by verbalizing what you’re doing and kind of testing alongside them.
  • 17:28 It can be hard to just get in there and just test, because when we’re testing, especially exploratory testing, we have our internal assumption, our mental model, and we’re comparing it to what we’re seeing. There’s a lot of ways to verify our mental model is correct by comparison. And there’s also a lot of ways we can miss what we’re seeing, where if you pair up with someone else, they may see something you don’t see.
  • 17:50 Shane: The pairing activity adds extra value.

  • 17:54 Lanette: You have another pair of eyes and another mental model with you to help you problem-solve, and that applies to testing as well as to development.
  • 18:02 Sometimes just verbalizing something, when you’re isolating a bug verbalizing, what you think is happening, it can help you clarify it and help the other person come up with a way to isolate it further.
  • 18:13 And sharing the bugs on the chat room has that effect to some extent, if you’re remote. I really like to exploratory test in person, but we’re doing more remote in a lot of companies now. So for that it’s chat rooms, JIRA tickets, video conference, kind of thing.
  • 18:29 When to include exploratory testing is one question that I’ve been asked a lot.
  • 18:33 I think one very good time to have a whole team is before you release, especially if you have a code freeze period of time, even if it’s a short period of time, just make sure there’s nothing embarrassing that you can fix right before you go out. Sometimes you’ll find something simple. Sometimes you’ll find something serious.
  • 18:52 One of my favorite examples is from back when we were doing exploratory testing across products for the Creative Suite. At one point, teams had made different decisions at the product level, and when we went to test them together, we discovered the PDFs we exported would not open in the version of Acrobat we planned to include, because all the teams had made decisions in isolation, not realizing the impact on the other teams.
  • 19:16 And so we were able to fix that. That would have been really embarrassing if we had gone to beta and we couldn’t open our own PDF files. And so we were able to avoid that. And that’s a simple sanity check of doing a typical customer workflow.
  • 19:30 Sometimes we miss things that are outside of our product. We’re so concerned about us, we forget to think about what the user needs. What if they copy and paste from Word? You know, we forget about that. Or, you know, when they put this in their email, what happens? These are things where, if we go outside the scope of our product just a little bit, we can find out some things that might really impact our users.
  • 19:52 Exploratory testing to me is going beyond functional testing. We’ve done a great job of covering our functional testing in many ways, with TDD, with unit tests, with test automation, we now have UI automation a lot of times, and it’s really the things outside of that scope, those gaps we need to detect.
  • 20:09 Shane: It needs a creative mindset as well.
  • 20:11 Lanette: Absolutely, and that’s one thing I think developers have. But they have it focused in a narrow space, so it’s really broadening that scope. And then they definitely have what it takes to do good exploratory testing, and sometimes they can just jump into the code and fix what they find right then and there, and that’s nice.
  • 20:31 Shane: Reduce the cycle time.
  • 20:32 Lanette: Yes. I used to pair with developers when I was the only tester at a company. When they had new code, before they would check it in, I would pair with them just for an hour and we would live-test and fix the code without writing up bugs. And that would be really effective. I mean, we would fix a lot of bugs that way, and then you don’t even have to worry about reporting the bug.
  • 20:51 You just show it. And they are like, Oh, I know what that is. Um, but sometimes we would end up, it’d be a tough one and we’d have to put it on the backlog and that happens sometimes. But a lot of times we could fix a lot of bugs, like maybe even 10 bugs in an hour and that was pretty effective.
  • 21:06 Sometimes we didn’t find anything, and then we felt more confident just checking in that code and going
  • 21:11 Shane: Switching tack entirely, because I happen to know that cats are a passion and today is international cat day. Tell us a little bit about your cats
  • 21:20 Lanette: I have two kittens. One, Nevani, is seven months old; she’s a rescue and she’s a super good hunter. She’s a white Siamese cat with tabby points, but she’s also a mutt, she’s not purebred. My other cat, Adelin, is four months old and he is a little terror. He’s Siamese with seal points and he is 25% ragdoll. So he’s a very snuggly, mouthy cat. The ragdoll part makes them extra snuggly and he’s personality plus. It’s been really fun having these two kittens, and I’ve heard from my sister, who’s looking after them, that they have been emotional eating while I’ve been gone. They have been through a bag and a half of cat food while I’ve been out here at Agile. So I have no idea what size these kittens are going to be. They might be tigers by the time I get back.
  • 22:07 Shane: Lanette, if people want to continue the conversation, where do they find you?

  • 22:10 Lanette: If you like a very active Twitter stream, you can follow me at @lanettecream on Twitter. I basically live-tweet my entire life, so it’s not just about tech. Or if you want to email me, lannettecreamer@gmail.com, my full name; the capitalization doesn’t matter. I’d be happy to hear from anyone who has questions about exploratory testing. I have posted the slides for my workshop on the Agile website, and there’s a PDF on Dropbox; if you want to get it from there, you can just send me an email and I’ll send you the link.
  • 22:38 Shane: Excellent. Thanks so much.
  • 22:39 Lanette: Thank you.




C# 9: Type Inference for the New Keyword

MMS Founder
MMS Jonathan Allen

Article originally posted on InfoQ. Visit InfoQ

In many situations, there is only one possible type allowed in a given place. And yet C# still requires you to explicitly list the type. Now that the Target-typed `new` expression proposal has been adopted into C# 9, such boilerplate code will no longer be necessary.

If this opening paragraph sounds familiar, that is because we talked about this proposal in January of last year. At that time, a prototype of target-typed `new` expressions was part of C# 8. It didn’t make the cut, but work has continued since then, and its status is now “Merged into 16.7p1”.

If you’re already familiar with the feature, nothing has changed in terms of overall design. In fact, the syntax hasn’t really changed since it was under consideration for C# 7.1 in 2017. For those who haven’t seen it before, basically it is the opposite of the var keyword. Instead of omitting the type name on the variable declaration, you omit the type name on the value creation side. Here are a couple of examples,

private Dictionary<string, List<int>> field = new Dictionary<string, List<int>>();
private Dictionary<string, List<int>> field = new();

XmlReader.Create(reader, new XmlReaderSettings() { IgnoreWhitespace = true });
XmlReader.Create(reader, new() { IgnoreWhitespace = true });


From the developer’s perspective, that’s pretty much all there is to it. The feature removes the type in situations where it is either redundant or simply not interesting. But from a language design perspective there are numerous issues to be considered.

For example, what should occur if there are two viable overloads? Should the compiler choose the “best” match, or mark it as an ambiguous error, as it does for two overloads that differ only in the type of an out parameter?

According to LDM notes, Microsoft chose the latter. Part of the reason is to make adding new overloads less likely to result in a breaking change. Note the phrase “less likely”, as this kind of type inference will always be susceptible to issues caused by additional overloads.

A common language design issue is determining when to filter out inappropriate overloads. In the past there have been cases where the compiler would choose one overload, only to later issue a compiler error because it violated a generic parameter constraint. This is known as a “late filter approach” and while it simplifies the compiler design, it reduces the chances that the compiler will successfully find an overload in an arbitrary piece of code.

An “early filter approach” would instead try to eliminate as many overloads as possible before choosing one. Again, this increases the complexity of the compiler in exchange for being more likely to find a good match. Here’s an example from the LDM notes.

struct S1 { public int x; }

struct S2 { }

static void M(S1 s1) { }

static void M(S2 s2) { }

M(new() { x = 43 }); // ambiguous with a late filter, resolved with an early one.

With an early filter approach, the compiler will see that S2 doesn’t have a field named x, so it will eliminate that overload as a possible candidate. Using the late filter approach, the compiler only looks at constructor parameters before making a determination.

As you can imagine, the early filter scenario could grow to become quite complicated when you start nesting constructors. So as of that LDM, Microsoft chose to use the late filter approach.

According to the same LDM, the following scenarios are not supported because they are “unconstructible types”:

  • Pointer types
  • Array types
  • Abstract classes
  • Interfaces
  • Enums

Enums are excluded because there’s no benefit to using SomeEnum x = new() when SomeEnum x = 0 or SomeEnum x = default is clearer. Abstract types obviously can’t be constructed, but interfaces are actually more interesting than they may appear.

Though most people are unaware of it, C# does support the concept of a default implementation class for interfaces. Xenoprimate provided this example,

using System;
using System.Runtime.InteropServices;

[Guid("899F54DB-5BA9-47D2-9A4D-7795719EE2F2")]
[ComImport()]
[CoClass(typeof(FooImpl))]
public interface IFoo
{
    void Bar();
}

public class FooImpl : IFoo
{
    public void Bar()
    {
        Console.WriteLine("XXXX");

        /* ... */
    }
}

public static void Main()
{
    var foo = new IFoo(); // works just fine
    foo.Bar();
}

As this is a very obscure scenario, it is understandable that Microsoft decided not to consider it when designing the target-typed `new` expression feature.

Another question that came up is whether or not to allow throw new(). In theory this could just throw a raw Exception, but since throwing Exception is considered a bad practice, the syntax will not be supported.



DevOps Dojo Provides Online, Interactive DevOps Training

MMS Founder
MMS Matt Campbell

Article originally posted on InfoQ. Visit InfoQ

DXC Technology has recently open-sourced their DevOps Dojo, a collection of learning modules that covers both the technical and cultural aspects of DevOps. The modules are built on the Katacoda platform and hosted on GitHub.

The initial modules cover topics including version control, continuous integration, and shifting left on security. The modules use a fictitious pet clinic, along with a number of characters, to leverage storytelling as a learning mechanism. The modules provide an interactive, step-by-step instructional experience. However, they are structured to allow for off-script exploration to encourage further learning.

InfoQ sat down with Olivier Jacques, Distinguished Technologist at DXC Technology, to discuss the project in more detail.

InfoQ: What inspired the creation of this dojo?

Olivier Jacques: Back in 2014, we were on a journey to transform our IT organization, and DevOps was the means by which we were hoping to make great progress. Like many, we started with a set of light in-house applications which we wanted to transform first. Some were business critical, with each hour of unavailability leading to millions of dollars of revenue loss.

We quickly reached a point where we realized that we needed to re-skill the people on the teams who had been working on these applications. But it was not only about training; it was also about supporting the DevOps transformation with hands-on DevOps coaches. We created a tiny DevOps enablement team for that.

Fast forward a few months: I was attending the DevOps Enterprise Summit conference in 2015, where we were invited to share our DevOps transformation journey. That was the first time I heard about the DevOps Dojo from the retailer Target, with Heather Mickman and Ross Clanton. We connected afterwards, and this is how we created our own flavor of a DevOps Dojo. Our first version was entirely a face-to-face, transformational experience, but we found we had to scale it to more people and customers. This led us to create this online add-on to our DevOps Dojo.

However, we did not want to do simple, boring webinars. We wanted to guide the learners, but give a lot of freedom to explore and get out of the script. We chose the Katacoda platform which allows us to create hands-on labs, accessible directly from a web browser. The content of the modules is mostly based on the Accelerate book, which has practices scientifically connected to business outcomes.

InfoQ: You mention that the modules are “informed by research on how people learn”. Can you elaborate on that research?

Jacques: Education is obviously a very important topic, and an area of constant evolution. Most recently, curfews, quarantines, and similar restrictions have turned the education system completely upside down. If we take this tectonic shift and combine it with earlier research such as How People Learn, we come up with a set of very interesting tactics when it comes to learning DevOps practices.

Online learning becomes a must. We need to be really good at embracing all of the online techniques: not only videos and quizzes, but also hands-on labs to learn at your own pace, and remote coaching sessions for when you get stuck.

We also want to take people from where they are now, their story, to where they should be next. Many kinds of learning require transforming existing understanding. This is why we use storytelling techniques with characters for each role: these characters, often a caricature of their role, help to explain what is changing for you and take the learning home.

InfoQ: How do you see people integrating this into their DevOps transformation?

Jacques: The Online DevOps Dojos are very interesting for understanding how key DevOps patterns actually work. Not only have 23,000 DXC employees leveraged these Online DevOps Dojos, but so have our customers and partners. We have had great feedback from people who got their “a-ha” moment while following and practicing one of the training modules.

I see multiple ways people integrate the Online DevOps Dojos. One is to just go through the modules to learn techniques they have not picked up in previous work assignments or when just getting out of school. As these trainings are open source, another is to adapt modules to cover other tools that better match their actual toolchain.

InfoQ: What led to choosing the initial five modules?

Jacques: The first five modules are really a subset of 16 modules we already have. Within that collection, about half are cultural modules and the other half focus on putting the principles into practice, with tools like GitHub, Jenkins, and Artifactory. We chose the initial set of modules so that we could introduce the story, the team, and their pipeline. It creates a core which we can build upon; the first chapters of a bigger story.

InfoQ: Are there plans to add more modules in the future? What’s on the roadmap?

Jacques: We have more modules available which we have developed using an innersource model that we could release in the coming months. We are also looking to get feedback from the community on what to release next.

But the Online DevOps Dojo project is not only about releasing material which was already developed. It is also about building the community to create more modules around DevOps practices. We believe that the software industry can truly benefit from such a project.

So, this is a call to action to join us and help us build more content leveraging open source practices.
