Rhian Lewis
Article originally posted on InfoQ.
Key Takeaways
- Open-source software brings its own challenges for testers, particularly in the world of Web3, whether that means integrating with libraries and systems outside the test team's control, or trying to replicate complex, energy-intensive networks such as public blockchains in a staging environment
- Open-source testing in the form of bug bounties can help broaden the scope of your testing and provide specialist support
- Bounty programs are not a replacement for professional testing; they are a tool in the test team’s armoury. They can add specialist skills and geographically localized expertise that would be difficult to find on your team
- Bounty programs are most likely to be successful when testers are involved in defining scope, triaging bugs and working with the community
- They are also a useful tool for upskilling and developing skills such as mob testing and security testing
Open-source software has changed the way we work, as testers and developers. We are more likely to use open-source libraries and packages than ever before, which means bugs can be introduced via dependencies our teams cannot control.
And now we are entering a world of open-source testing, too. Increasingly, open-source projects (and many closed-source ones) are creating bug bounty programs and asking people outside the organization to become involved in the quality and security process.
The growth of the blockchain-based Web3 ecosystem shows how important community test programs have become, with recent examples of bugs discovered by open-source testers saving projects tens of millions of dollars.
Some within the testing community see this trend as a threat. However, it is actually an opportunity. Bug bounties and open-source test contributions are a great tool for test teams, and there is every reason for testers to embrace this new trend rather than to fear it.
Challenges of testing open-source software
There are two main challenges: one around decision-making, and another around integrations. The decision-making process varies considerably from project to project. If you are talking about something like Rails, there is an accountable group of people who agree on a timetable for releases and so on. Within the decentralized ecosystem, however, these decisions may be taken by the community. For example, the DeFi protocol Compound found itself in a situation last year where token-holders had to vote to approve a proposal before a particular bug could be fixed.
So you may not have the same top-down hierarchy as in a company producing proprietary software, where a particular manager or group of managers are in charge of releases.
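To make that voting step more concrete, here is a minimal sketch of what the approval flow can look like from a token-holder's side, written with ethers v5 against a local node. The governor address and the two-function ABI are simplified assumptions for illustration, not the exact Compound contract interface.

```typescript
// Simplified sketch of community decision-making: a bug fix is wrapped in a
// proposal, token-holders vote on it, and only then can it be executed.
// The address and ABI below are illustrative assumptions, not a real deployment.
import { ethers } from "ethers";

const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");
const voter = new ethers.Wallet(process.env.VOTER_PRIVATE_KEY!, provider);

const governorAbi = [
  "function castVote(uint256 proposalId, uint8 support)",
  "function execute(uint256 proposalId)",
];

const governor = new ethers.Contract(
  "0x0000000000000000000000000000000000000001", // placeholder governor address
  governorAbi,
  voter
);

async function approveBugfix(proposalId: number): Promise<void> {
  // 1 typically means "for" in governor-style contracts; the fix only ships
  // if enough token-holders vote in favour.
  await (await governor.castVote(proposalId, 1)).wait();
  // Once the voting period (and any timelock) has elapsed, the change can be executed.
  await (await governor.execute(proposalId)).wait();
}

approveBugfix(42).catch(console.error);
```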
When it comes to integrations, these often cause problems for testers, even if their product is not itself open source. Developers include packages or modules that are written and maintained by volunteers outside the company. There is no SLA in force and no process for claiming compensation if your application breaks because an open-source third-party library has not been updated, or because your build script pulls in a later version of a package that is not compatible with the application under test. Packages that facilitate connection to a database or an API are particularly vulnerable points.
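One common mitigation is to pin dependency versions and keep a small contract test around the parts of a third-party package you actually rely on, so an incompatible upgrade fails in CI rather than in production. The sketch below assumes a hypothetical payments-client package and the Vitest test runner.

```typescript
// Contract test for a third-party dependency: pin down the behaviour we rely
// on so a breaking upgrade is caught in CI. "payments-client" and its
// createCharge() method are hypothetical stand-ins for a real package.
import { describe, it, expect } from "vitest";
import { PaymentsClient } from "payments-client"; // hypothetical open-source dependency

describe("payments-client contract", () => {
  it("still exposes the API surface we depend on", () => {
    const client = new PaymentsClient({ apiKey: "test-key" });
    // Fails immediately if a new version renames or removes createCharge.
    expect(typeof client.createCharge).toBe("function");
  });

  it("still returns the response shape we map into our domain model", async () => {
    const client = new PaymentsClient({ apiKey: "test-key", baseUrl: "http://localhost:4010" });
    const charge = await client.createCharge({ amount: 1000, currency: "GBP" });
    expect(charge).toMatchObject({ id: expect.any(String), status: expect.any(String) });
  });
});
```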
Bug-bounty programs and their purpose
Bug bounty programs are a way of crowd-sourcing testing. In his book The Wisdom of Crowds, James Surowiecki popularised the idea that the more people who have their eyes on a particular problem, the more likely the right solution will be found. In the case of very complex systems with multiple dependencies and integrations, where a single small loophole can cause the loss of millions of dollars, it becomes increasingly unlikely that a single tester or test team will have the specialist knowledge and predictive ability to identify every potential issue. So financially incentivising the wider community to search for bugs is becoming increasingly popular.
You can financially incentivise bug searches by publishing the terms and conditions, along with the reward table, on your own website. More commonly, though, platforms like HackerOne, BugCrowd and ImmuneFi handle the process for you and provide a one-stop shop for testers and security researchers who are keen to show their prowess as well as earn rewards.
For commercial software, the decision to run a program and mandate particular rewards is one that is made centrally. The process is different for open source, particularly within the Web3 ecosystem. In this case, the foundation or DAO that runs the protocol will vote on a certain proportion of the treasury being released to fund a bug bounty.
Typical examples are Compound’s bug bounty and the one I helped set up for Boson Protocol.
The Compound bug bounty program on ImmuneFi is a good example because it clearly lays out the rewards available (up to $50,000) according to the severity of the vulnerability and is also very clearly scoped to include only one particular pull request. ImmuneFi takes care of any payouts or disputes.
In contrast, the Boson Protocol program targets all the smart contracts – with a similar bounty of $50,000 – but excludes all associated websites and non-smart contract assets. In this instance, the bounty program is offered directly rather than via an intermediary.
Advantages and disadvantages of open-source bug-bounty programs
The advantage of open-sourcing testing, even on closed-source projects, is that it widens the bug-catching net and allows a much larger number of people to contribute to the security of a system, rather than depending on a project’s formally employed test team to cover all bases. A popular open-source project is usually maintained by a core development team which may include testers, but like most closed-source teams, they may not have the highly specialist skills that are occasionally needed in the software development lifecycle. Many companies already hire specialist services, for example, to do penetration testing. You can think of a bug bounty as a kind of ongoing penetration test, where you only pay for the time and expertise of the specialist if they find a vulnerability.
But more than anything, and no matter what your project is, crowd-sourcing testing leads to a variety of different approaches, ways of thinking and skill sets that it would be impossible to find in a single person or team. A successful product or application will have tens of thousands – perhaps millions – of users, all of whom will use it in different ways and take different routes through it using different hardware. Having access to a bigger pool of skills and opinions is a valuable resource when channelled correctly.
The disadvantages mainly lie in the extra time and effort involved in marketing your bounty program to those with the relevant skills. And if you are not careful to define the scope of the bounty in advance, your company, foundation or project may end up paying out for bugs that you as a tester have already found.
Testing the Web3 ecosystem
Blockchain technology – or Web3 as it is sometimes known – is a very challenging area for testers for many reasons. I will highlight two of the main ones.
Firstly, it is very difficult to replicate the conditions of a production environment in staging. In production you have thousands of validators and thousands of users, who may interact with the system in ways you have not thought of, and this is effectively impossible to replicate. If you look at the Bitcoin blockchain, for example, it would cost millions of dollars in electricity alone to run an entirely accurate simulation of the live network.
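What teams can do instead is fork the live network's state at a known block into a local node, so tests at least run against real contract state. A minimal sketch using Hardhat's mainnet-forking support is shown below; the RPC endpoint, block number and Solidity version are placeholders rather than recommendations.

```typescript
// hardhat.config.ts: minimal sketch of a "staging" node that forks live
// mainnet state at a fixed block, rather than attempting a full replica.
// The RPC URL and block number are placeholders.
import { HardhatUserConfig } from "hardhat/config";

const config: HardhatUserConfig = {
  solidity: "0.8.19",
  networks: {
    hardhat: {
      forking: {
        url: process.env.MAINNET_RPC_URL ?? "https://example-rpc-endpoint", // placeholder endpoint
        blockNumber: 17_500_000, // pin a block so test runs are reproducible
      },
    },
  },
};

export default config;
```

Running npx hardhat node with this configuration serves a local JSON-RPC endpoint backed by the forked state, which tests and tools can point at instead of the real network.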
Secondly, Web3 systems are designed to be what we call composable, which means that they all fit together like Lego bricks. To give a simple example of this, any token implementing the ERC20 standard devised for the Ethereum blockchain can be transferred into any compatible wallet, as can any token following the ERC721 NFT standard. This means that a developer can write a smart contract that creates a derivative on a decentralized exchange, use that derivative to generate income on a completely separate savings protocol, and then use the generated income as collateral on yet another protocol. This interdependency can multiply risk many times over, especially if one key component goes wrong.
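Here is a minimal sketch (ethers v5, against a local node) of what that composability looks like in practice: the same standard ERC20 token is wired through two unrelated protocols in a handful of lines. The contract addresses and the exchange/savings ABIs are hypothetical placeholders; only the ERC20 approve interface is part of the real standard.

```typescript
// Composability sketch: one ERC20 token flowing through two separate protocols.
// Addresses and the deposit/supplyCollateral ABIs are hypothetical; a failure
// in either contract now affects positions held in both.
import { ethers } from "ethers";

const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");
const signer = new ethers.Wallet(process.env.TEST_PRIVATE_KEY!, provider);

const erc20Abi = ["function approve(address spender, uint256 amount) returns (bool)"];
const exchangeAbi = ["function deposit(address token, uint256 amount) returns (uint256)"];
const savingsAbi = ["function supplyCollateral(address token, uint256 amount)"];

// Placeholder addresses for illustration only.
const token = new ethers.Contract("0x0000000000000000000000000000000000000001", erc20Abi, signer);
const exchange = new ethers.Contract("0x0000000000000000000000000000000000000002", exchangeAbi, signer);
const savings = new ethers.Contract("0x0000000000000000000000000000000000000003", savingsAbi, signer);

async function main(): Promise<void> {
  const amount = ethers.utils.parseUnits("100", 18);

  // Step 1: a hypothetical decentralized exchange is allowed to pull the token...
  await (await token.approve(exchange.address, amount)).wait();
  await (await exchange.deposit(token.address, amount)).wait();

  // Step 2: ...and the same token is then supplied as collateral to a completely
  // separate savings protocol. The two systems only "know" each other through
  // the shared ERC20 interface.
  await (await token.approve(savings.address, amount)).wait();
  await (await savings.supplyCollateral(token.address, amount)).wait();
}

main().catch(console.error);
```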
The fact that there are literally tens of millions of dollars sitting in these open-source protocols is also a risk: it acts as a honeypot. Sometimes, if you look at existing bug bounty programs, the rewards on offer can look absurdly high, but if a successful bounty hunter can find a bug before it is exploited, the cost-benefit ratio starts to make sense.
For example, the layer 2 network Polygon recently paid out $2 million to whitehat hacker Gerhard Wagner for finding an exploit. This sounds like an incredible sum, but when you think that $850 million in funds were at risk if the bug hadn’t been detected, it makes more sense (source: Polygon Double-Spend Bugfix Review — $2m Bounty).
Simply looking at a bounty platform such as ImmuneFi gives a hint of the rewards that are currently on offer: $2.5 million for the highest category of vulnerability in The Graph Protocol, for example – and $5 million for ChainLink.
Community-focused test programs
I feel passionately that testers should be involved in defining the scope of successful bounty programs and deciding how they should run. The main thing is to either take charge of the program yourself as a team or to work very closely with the people in your organisation who set it up. You also need to agree who will triage the tickets and how bounty hunters will interact with your team. It is crucial that testers help define the scope of any program so that rewards are not offered for unimportant issues, and that particular areas can be excluded where the test team prefers to retain responsibility for bug reports. It makes more sense to ring-fence bug bounties for areas where there are likely to be edge cases, or where a particular type of expertise is needed.
For example, the Compound bug bounty program I mentioned above specifically mentions that the program is targeted at patches that were made to the protocol’s Comptroller implementation, which deals with risk management and price oracles. This is specialist financial knowledge and it makes sense to draw on a wider pool of people to find someone with these skills.
Further advantages of bug bounty programs for testers
Testers can also involve themselves in open-source software and bug bounties outside their organisation to strengthen their testing skills – and maybe even make some extra cash.
It can be a great way for a test team to practise their mob testing skills and work together on finding bugs. The best-known platforms are HackerOne and BugCrowd, so go there and see if there is anything that looks interesting. It’s always a great idea to get out of your comfort zone and test something you haven’t necessarily tested before.
And if you want to target your efforts on Web3 technologies specifically, head to ImmuneFi and check out the programs there.
How radical openness improves testing
One interesting idea that is gaining currency is radical openness – and there are definitely scenarios that apply to testing. The concept was popularised by the 2013 book Radical Openness: Four Unexpected Principles for Success by Anthony D Williams and Don Tapscott, which argues that transparency brings benefits to all stakeholders in business environments.
In an excellent recent post on opening tests like opening source, Andrew Knight highlights the benefits of open-source testing:
Transparency with users builds trust. If users can see that things are tested and working, then they will gain confidence in the quality of the product. If they could peer into the living documentation, then they could learn how to use the product even better. On the flip side, transparency holds development teams accountable to keeping quality high, both in the product and in the testing.
He is not talking about bounty programs, but the principles are the same. It comes back to the wisdom of crowds. The more people who are involved in scrutinizing software and how it is used, the more likely it is that it will be fit for purpose.