Introduction
Recon is a team of invariant testing engineers and security researchers. We provide invariant testing as a service and develop tools and educational content that make it easier for anyone to test invariants on their smart contracts.
Recon Pro is a tool for automatically scaffolding and running invariant testing in the cloud.
Navigating the Book
Writing Invariant Tests
If you're new to invariant testing with fuzzing, use the Learn Invariant Testing section to learn the background info you'll need to get started.
You can then follow along with the Example Project to see how to set up a project using the Chimera Framework and Create Chimera App template.
Once you've gotten the hang of writing invariants and are using them in real projects, check out the Advanced Fuzzing Tips section for best practices we've developed after using our Chimera framework for over a year on dozens of engagements.
Building Handlers
Learn how to use the Recon UI to add a test harness to your Foundry project.
Running Jobs
Learn how to offload a fuzzing job using the Recon cloud runner.
Using Recipes
Learn how to reuse fuzzer configurations when running jobs on the Recon cloud runner using recipes.
Adding Alerts
Learn how to add alerts for jobs run on the Recon cloud runner that can notify you of broken properties via Telegram or webhook.
Dynamic Replacement
Dynamic replacement lets you test your existing invariant suite against different setup configurations without having to modify the setup code itself.
Governance Fuzzing
Governance fuzzing lets you simulate on-chain changes, whether regular function calls that modify system state or governance calls that modify system configuration, in a forked testing environment so you can preview changes and their side effects before they happen on-chain.
Useful Tips
Learn how to make the most of Recon's features so that you can fuzz more effectively.
Recon Extension
The Visual Studio Code extension that combines our most useful fuzzing tools into one so you can get to fuzzing faster. Learn how to use it here.
Recon Tools
Recon's free tools can help you turn fuzzer logs (Echidna/Medusa) into Foundry reproducers.
Our bytecode tools can help you compare the bytecode of two different contracts and generate an interface for a given contract's bytecode.
Open Source Contributions
Recon has created a number of open source tools to make invariant testing easier. These can be found in the OSS Repos section.
OpSec
Learn about best practices for operational security for web3 teams and the services that Recon provides to help projects with this.
Glossary
See the glossary for terminology used throughout the book.
Learn Invariant Testing
Most of our guides and documentation are focused on using fuzzing tools, primarily Echidna and Medusa because we use them internally at Recon. However, we also support running other tools on our cloud infrastructure such as Foundry (fuzzing), Halmos (formal verification), and Kontrol (formal verification).
After choosing the tool best suited to your needs and installing it locally, you can get started with the tutorials below.
If you're new to invariant testing, we recommend starting with the following series of posts to get you from 0 to 1:
- First Day At Invariant School
- How To Define Invariants
- Implementing Your First Smart Contract Invariants: A Practical Guide
If you prefer a full end-to-end video bootcamp, check out this series on everything you need to know about invariant testing.
Once you've grasped the basics of invariant testing you can set up your first suite and run it on Recon. For a step-by-step guide on how to do this, check out the First Steps guide.
If you have any questions about how to use Recon or invariant testing in general, you can reach out to our team on Discord.
Additional Learning Resources
Invariant Testing In General
If you're looking for more resources to help you get started with invariant testing, see the following:
Fuzzers
For more resources on our favorite fuzzers (Echidna and Medusa) see the following:
Retrospectives
Deep dives into the work we've done with our elite customers with tips and tricks on building effective invariant testing suites:
- Corn Retrospective
- eBTC Retrospective
- Centrifuge Retrospective part 1
- Centrifuge Retrospective part 2
Videos
Podcasts
- Fuzzing a RewardsManager with the Recon Extension | Alex & the Remedy Team on Stateful Fuzzing for Security
- Workshop on how to go from a simple foundry test to a full blown critical exploit with fuzzing | Alex & Secureum on fuzzing for security research
- Fuzzing Sablier with the Recon Extension | Alex & Shafu on Invariant Testing
- Fuzzing MicroStable with Echidna | Alex & Shafu on Invariant Testing
- How can I run my Fuzz tests longer? Getrecon interview with Alex | Alex & Patrick Collins (Cyfrin Audits)
- Using Recon Pro to test invariants in the cloud | Alex & Austin Griffith
Office Hours
Office hours are live recordings of useful tips and tricks for invariant testing.
- Fuzz Fest | The best Talks of 2024 on Fuzzing for Security
- The Dangers of Arbitrary Calls | How to write safe contracts that use arbitrary calls and the risk tied to them
- Centrifuge Retrospective Pt1 | On scaffolding and getting to coverage
- Centrifuge Retrospective Pt2 | On breaking properties and the importance of understanding the system you're testing
- How we missed a bug with fuzzing! | On the dangers of clamping
- Finding bugs with Differential Fuzzing | Using differential fuzzing to find bugs
- Fuzzing Bytecode Directly
Trophies
A sample of some publicly disclosed bugs we've found using invariant testing. You can use these to understand what kinds of properties will help you find issues that manual review sometimes can't.
Example Project
In this section, we'll use the Create Chimera App template to create a simple contract and run invariant tests on it.
Prerequisites: the example shown below requires that you have Foundry and Medusa installed on your local machine
Getting started
Clone the create-chimera-app-no-boilerplate repo, or use forge init --template https://github.com/Recon-Fuzz/create-chimera-app-no-boilerplate
Writing the contract
First, in the src/ directory we'll create a simple Points contract that allows users to make a deposit and earn points proportional to the amount of time that they've deposited for, where longer deposits earn more points:
// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.13;
import {MockERC20} from "@recon/MockERC20.sol";
contract Points {
mapping (address => uint88) depositAmount;
mapping (address => uint256) depositTime;
uint256 totalDeposits;
MockERC20 public token;
constructor(address _token) {
token = MockERC20(_token);
}
function deposit(uint88 amt) external {
depositAmount[msg.sender] += amt;
depositTime[msg.sender] = block.timestamp;
totalDeposits += amt;
token.transferFrom(msg.sender, address(this), amt);
}
function power() external view returns (uint256) {
return depositAmount[msg.sender] * (block.timestamp - depositTime[msg.sender]);
}
}
Adding to Setup
Now that we have a contract to test, we can deploy it in the Setup contract:
abstract contract Setup is BaseSetup, ActorManager, AssetManager, Utils {
Points points;
/// === Setup === ///
function setup() internal virtual override {
_newAsset(18); // deploys an 18 decimal token
// Deploy Points contract
points = new Points(_getAsset()); // uses the asset deployed above and managed by the AssetManager
// Mints the deployed asset to all actors and sets max allowances for the points contract
address[] memory approvalArray = new address[](1);
approvalArray[0] = address(points);
_finalizeAssetDeployment(_getActors(), approvalArray, type(uint256).max);
}
/// === MODIFIERS === ///
modifier asAdmin {
vm.prank(address(this));
_;
}
modifier asActor {
vm.prank(address(_getActor()));
_;
}
}
The AssetManager allows us to deploy assets (using _newAsset) that we can use in our Points contract, with simplified fetching of the currently active asset via _getAsset(). We then use the _finalizeAssetDeployment utility function provided by the AssetManager to approve the deployed asset for all actors (tracked in the ActorManager) to the Points contract.
Running the fuzzer
We can now run the fuzzer, though it won't explore any state since we haven't added handler functions yet. Before running the fuzzer, however, we'll use Foundry to check that the project compiles correctly, since it provides faster feedback on this than Medusa.
Running forge build we see that it compiles successfully, meaning the deployment in the Setup contract works as expected:
$ forge build
[⠊] Compiling...
[⠘] Compiling 44 files with Solc 0.8.28
[⠃] Solc 0.8.28 finished in 717.19ms
Compiler run successful!
We can now run the Medusa fuzzer using medusa fuzz, which gives us the following output:
medusa fuzz
⇾ Reading the configuration file at: /temp/example-recon/medusa.json
⇾ Compiling targets with crytic-compile
⇾ Running command:
crytic-compile . --export-format solc --foundry-compile-all
⇾ Finished compiling targets in 5s
⇾ No Slither cached results found at slither_results.json
⇾ Running Slither:
slither . --ignore-compile --print echidna --json -
⇾ Finished running Slither in 7s
⇾ Initializing corpus
⇾ Setting up test chain
⇾ Finished setting up test chain
⇾ Fuzzing with 16 workers
⇾ [NOT STARTED] Assertion Test: CryticTester.switch_asset(uint256)
⇾ [NOT STARTED] Assertion Test: CryticTester.add_new_asset(uint8)
⇾ fuzz: elapsed: 0s, calls: 0 (0/sec), seq/s: 0, branches hit: 289, corpus: 0, failures: 0/0, gas/s: 0
⇾ [NOT STARTED] Assertion Test: CryticTester.asset_approve(address,uint128)
⇾ [NOT STARTED] Assertion Test: CryticTester.asset_mint(address,uint128)
⇾ [NOT STARTED] Assertion Test: CryticTester.switchActor(uint256)
⇾ fuzz: elapsed: 3s, calls: 70172 (23389/sec), seq/s: 230, branches hit: 481, corpus: 126, failures: 0/692, gas/s: 8560148887
⇾ fuzz: elapsed: 6s, calls: 141341 (236
You can now stop Medusa with CTRL + C.
At this point, we expect almost no lines to be covered (indicated by the low corpus value in the console logs). We can note, however, that because the corpus value is nonzero, something is being covered: in our case these are the functions exposed in ManagersTargets, which provide handlers for the AssetManager and ActorManager.
We can now open the coverage report located at /medusa/coverage/coverage_report.html to confirm that none of the lines in the Points contract are actually being covered.
In our coverage report, a line highlighted in green means it was reached by the fuzzer; a line highlighted in red means it was not.
Let's rectify the lack of coverage in our Points contract by adding target function handlers.
Building target functions
Foundry produces an /out folder any time you compile your project, which contains the ABI of the Points contract. We'll use this in conjunction with our Invariants builder to quickly generate target function handlers for our TargetFunctions contract using the following steps:
- Open out/Points.sol/Points.json
- Copy the entire contents
- Navigate to the Invariants Builder
- Paste the ABI
- Rename the contract to points, replacing the text in the "Your_contract" form field
This generates a TargetFunctions contract for Points. In our case we'll first just add the handler created for the deposit function.
In this case you can just copy the points_deposit handler into your TargetFunctions.sol contract. When working on a larger project, however, you can use the Download All Files button to add multiple handlers directly into your project at once.
Make sure to add the updateGhosts and asActor modifiers to the points_deposit function if they are not present:
- updateGhosts - updates all ghost variables before and after the call to the function
- asActor - ensures that the call is made by the currently active actor (returned by _getActor())
Your TargetFunctions contract should now look like:
abstract contract TargetFunctions is
AdminTargets,
DoomsdayTargets,
ManagersTargets
{
function points_deposit(uint88 amt) public updateGhosts asActor {
points.deposit(amt);
}
}
We can now run Medusa again to see how our newly added target function has changed our coverage.
We now see that the deposit function is fully covered, but the power function is not, since we haven't added a handler for it. Because power is non-state-changing (it's a view function), we'll leave it without a handler for now, as this won't affect our ability to test properties.
The coverage report is effectively our eyes into what the fuzzer is doing.
We can now start defining properties to see if there are any edge cases in our Points contract that we may not have expected.
Implementing Properties
Checking for overflow
A standard property we might want to check in our Points contract is that it doesn't revert due to overflow. Reverts due to over/underflow are not detected by default in Medusa and Echidna, so to explicitly test for this we can use a try-catch block in our DoomsdayTargets contract (this contract is meant for defining things that should never happen in the system):
...
import {Panic} from "@recon/Panic.sol";
abstract contract DoomsdayTargets is
BaseTargetFunctions,
Properties
{
/// Makes a handler have no side effects
/// The fuzzer will call this and because it reverts it will be removed from call sequences during shrinking
modifier stateless() {
_;
revert("stateless");
}
function doomsday_deposit_revert(uint88 amt) public stateless asActor {
try points.deposit(amt) {
// success
} catch (bytes memory err) {
bool expectedError = checkError(err, Panic.arithmeticPanic); // catches the specific revert we're interested in
t(!expectedError, "should never revert due to under/overflow");
}
}
}
We use the checkError function from the Utils contract to check for a particular revert reason. The Panic library allows us to easily check for an arithmetic panic without having to specify the panic code (note that it needs to be imported, as shown above).
We use the stateless modifier so that state changes made by this function call aren't preserved, since it makes the same changes as the points_deposit function.
Having two handlers that make the same state changes makes it harder to debug a broken call sequence: we can easily tell what points_deposit does, but it's not as clear from its name what doomsday_deposit_revert does. Making doomsday_deposit_revert stateless ensures that it only appears in a call sequence as a check for specific behavior that should never happen.
This pattern is very useful if you want to perform extremely specific tests that would make your normal handlers unnecessarily complex.
Testing for monotonicity
We can say that the value returned by the Points contract's power function should be monotonically increasing (always increasing), since there's no way to withdraw. We can check this with a global property and ghost variables.
To keep things simple, we'll only test this property on the current actor (handled by the ActorManager), which we can fetch using _getActor().
Next we'll need a way to fetch the power of the depositor before and after each call to our points_deposit target function, using the BeforeAfter contract:
abstract contract BeforeAfter is Setup {
struct Vars {
uint256 power;
}
Vars internal _before;
Vars internal _after;
modifier updateGhosts {
__before();
_;
__after();
}
function __before() internal {
// reads value from state before the target function call
_before.power = points.power();
}
function __after() internal {
// reads value from state after the target function call
_after.power = points.power();
}
}
This updates the power value in the _before and _after structs whenever the updateGhosts modifier is applied to the points_deposit handler.
Now that we know the state of the system before and after our state-changing call, we can specify the property in the Properties contract:
abstract contract Properties is BeforeAfter, Asserts {
function property_powerIsMonotonic() public {
gte(_after.power, _before.power, "property_powerIsMonotonic");
}
}
If we now run medusa fuzz we should get two broken properties!
Broken properties
Generating reproducers
After running the fuzzer you should see the following broken property in the console logs:
⇾ [FAILED] Assertion Test: CryticTester.doomsday_deposit_revert(uint88)
Test for method "CryticTester.doomsday_deposit_revert(uint88)" resulted in an assertion failure after the following call sequence:
[Call Sequence]
1) CryticTester.points_deposit(uint88)(235309800868430114045226835) (block=2, time=573348, gas=12500000, gasprice=1, value=0, sender=0x10000)
2) CryticTester.doomsday_deposit_revert(uint88)(102431335005787171573853953) (block=2, time=573348, gas=12500000, gasprice=1, value=0, sender=0x30000)
[Execution Trace]
=> [call] CryticTester.doomsday_deposit_revert(uint88)(102431335005787171573853953) (addr=0x7D8CB8F412B3ee9AC79558791333F41d2b1ccDAC, value=0, sender=0x30000)
=> [call] StdCheats.prank(address)(0x7D8CB8F412B3ee9AC79558791333F41d2b1ccDAC) (addr=0x7109709ECfa91a80626fF3989D68f67F5b1DD12D, value=0, sender=0x7D8CB8F412B3ee9AC79558791333F41d2b1ccDAC)
=> [return ()]
=> [call] Points.deposit(uint88)(102431335005787171573853953) (addr=0x6804A3FF6bcf551fACf1a66369a5f8802B3C9C58, value=0, sender=0x7D8CB8F412B3ee9AC79558791333F41d2b1ccDAC)
=> [panic: arithmetic underflow/overflow]
=> [event] Log("should never revert due to under/overflow")
=> [panic: assertion failed]
For all but the simplest call sequences this is very difficult to read and even harder to debug. This is why we made the Chimera Framework extremely opinionated: we believe that reading Medusa and Echidna traces is a slow and difficult way to debug broken properties.
As a result of this, all of our templates come with the ability to reproduce broken properties as unit tests in Foundry.
So instead of debugging our broken property from the Medusa logs directly, we'll use Foundry:
- Copy the Medusa output logs in your terminal
- Go to the Medusa Log Scraper tool
- Paste the logs
- A reproducer unit test will be created for the broken property automatically
- Click the dropdown arrow to show the unit test
- Disable the vm.prank cheatcode by clicking the button (as we're overriding Medusa's behavior)
- Click on the clipboard icon to copy the reproducer
- Open the CryticToFoundry contract and paste the reproducer unit test
- Run it with Foundry using the forge test --match-test test_doomsday_deposit_revert_0 -vvv command in the comment above it
// forge test --match-contract CryticToFoundry -vv
contract CryticToFoundry is Test, TargetFunctions, FoundryAsserts {
function setUp() public {
setup();
}
// forge test --match-test test_doomsday_deposit_revert_0 -vvv
function test_doomsday_deposit_revert_0() public {
vm.roll(2);
vm.warp(573348);
points_deposit(235309800868430114045226835);
vm.roll(2);
vm.warp(573348);
doomsday_deposit_revert(102431335005787171573853953);
}
}
We now have a Foundry reproducer! This makes it much easier to debug because we can quickly test only the call sequence that causes the property to break and add logging statements wherever needed.
You'll also notice that although intuitively the property_powerIsMonotonic property should not break because we don't allow deposits to be withdrawn, the fuzzer breaks it:
⇾ [FAILED] Assertion Test: CryticTester.property_powerIsMonotonic()
Test for method "CryticTester.property_powerIsMonotonic()" resulted in an assertion failure after the following call sequence:
[Call Sequence]
1) CryticTester.points_deposit(uint88)(38781313) (block=9757, time=110476, gas=12500000, gasprice=1, value=0, sender=0x10000)
2) CryticTester.points_deposit(uint88)(0) (block=44980, time=367503, gas=12500000, gasprice=1, value=0, sender=0x20000)
3) CryticTester.property_powerIsMonotonic()() (block=44981, time=422507, gas=12500000, gasprice=1, value=0, sender=0x10000)
[Execution Trace]
=> [call] CryticTester.property_powerIsMonotonic()() (addr=0x7D8CB8F412B3ee9AC79558791333F41d2b1ccDAC, value=0, sender=0x10000)
=> [event] Log("property_powerIsMonotonic")
=> [panic: assertion failed]
Debugging the cause of this break is left as an exercise for the reader, but in the next section we'll learn some useful tricks while debugging the broken overflow property.
Debugging the overflow property
Running the reproducer for the doomsday_deposit_revert property, we can clearly see that we get an over/underflow, but it's not obvious from the call trace where exactly this happens:
├─ [843] Points::deposit(102431335005787171573853953 [1.024e26])
│ └─ ← [Revert] panic: arithmetic underflow or overflow (0x11)
├─ [7048] Utils::checkError(0x4e487b710000000000000000000000000000000000000000000000000000000000000011, "Panic(17)")
│ ├─ [6142] Utils::_getRevertMsg(0x4e487b710000000000000000000000000000000000000000000000000000000000000011)
│ │ ├─ [930] Utils::_checkIfPanic(0x4e487b710000000000000000000000000000000000000000000000000000000000000011)
│ │ │ └─ ← true
│ │ ├─ [5062] Utils::_getPanicCode(0x4e487b710000000000000000000000000000000000000000000000000000000000000011)
│ │ │ └─ ← "Panic(17)", false
│ │ └─ ← "Panic(17)", false
│ └─ ← true
├─ [0] VM::assertTrue(false, "should never revert due to under/overflow") [staticcall]
│ └─ ← [Revert] should never revert due to under/overflow
└─ ← [Revert] should never revert due to under/overflow
We can add console logs to the Points contract and the reproducer to see where exactly it overflows:
function deposit(uint88 amt) external {
console2.log("here 1");
depositAmount[msg.sender] += amt;
console2.log("here 2");
depositTime[msg.sender] = block.timestamp;
console2.log("here 3");
totalDeposits += amt;
console2.log("here 4");
token.transferFrom(msg.sender, address(this), amt);
}
function test_doomsday_deposit_revert_0() public {
console2.log("=== Before Deposit ===");
vm.roll(2);
vm.warp(573348);
points_deposit(235309800868430114045226835);
console2.log("=== Before Doomsday ===");
vm.roll(2);
vm.warp(573348);
doomsday_deposit_revert(102431335005787171573853953);
}
Which gives us the following console logs when we run the test:
=== Before Deposit ===
here 1
here 2
here 3
here 4
=== Before Doomsday ===
here 1
This tells us that the issue is in the depositAmount increment during the second deposit. If we check the type of depositAmount, we see that it's a uint88:
contract Points {
mapping (address => uint88) depositAmount;
...
}
This indicates that we must be depositing more than type(uint88).max. If we check whether the sum of the deposited amounts in the test is greater than type(uint88).max, we get the following:
sum of deposits 337741135874217285619080788
type(uint88).max 309485009821345068724781055
sum of deposits > type(uint88).max true
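One way to arrive at the numbers in these logs is to sum the two deposit amounts from the reproducer and compare the result against type(uint88).max, e.g. by adding a few lines like the following at the top of the reproducer test (the amounts are copied from the call sequence above):

// amounts copied from the shrunken call sequence above
uint256 sumOfDeposits = 235309800868430114045226835 + 102431335005787171573853953;
console2.log("sum of deposits", sumOfDeposits);
console2.log("type(uint88).max", uint256(type(uint88).max));
console2.log("sum of deposits > type(uint88).max", sumOfDeposits > type(uint88).max);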
So we can see that the issue happens because in our Setup we initially mint type(uint256).max to the actor, which lets them deposit more than type(uint88).max over multiple calls:
function setup() internal virtual override {
...
_finalizeAssetDeployment(_getActors(), approvalArray, type(uint256).max);
}
This means that to fix the broken property we either need to change the type of the depositAmount variable to uint256 or limit the amount that we initially mint to the actor. For our purposes we'll change the type of the depositAmount variable.
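The fix itself is a one-line change in the Points contract:

contract Points {
    // widened from uint88 so repeated deposits can no longer overflow the accumulator
    mapping (address => uint256) depositAmount;
    ...
}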
Now when we run the fuzzer we can see that the property no longer breaks:
^C⇾ Fuzzer stopped, test results follow below ...
⇾ [PASSED] Assertion Test: CryticTester.add_new_asset(uint8)
⇾ [PASSED] Assertion Test: CryticTester.asset_approve(address,uint128)
⇾ [PASSED] Assertion Test: CryticTester.asset_mint(address,uint128)
⇾ [PASSED] Assertion Test: CryticTester.doomsday_deposit_revert(uint88)
⇾ [PASSED] Assertion Test: CryticTester.points_deposit(uint88)
⇾ [PASSED] Assertion Test: CryticTester.switch_asset(uint256)
⇾ [PASSED] Assertion Test: CryticTester.switchActor(uint256)
⇾ [FAILED] Assertion Test: CryticTester.property_powerIsMonotonic()
Test for method "CryticTester.property_powerIsMonotonic()" resulted in an assertion failure after the following call sequence:
[Call Sequence]
1) CryticTester.points_deposit(uint88)(73786976294838206465) (block=19477, time=38924, gas=12500000, gasprice=1, value=0, sender=0x30000)
2) CryticTester.asset_mint(address,uint128)(0x7D8CB8F412B3ee9AC79558791333F41d2b1ccDAC, 79387721835223434743036999817) (block=43362, time=399548, gas=12500000, gasprice=1, value=0, sender=0x10000)
3) CryticTester.points_deposit(uint88)(0) (block=43362, time=399548, gas=12500000, gasprice=1, value=0, sender=0x10000)
4) CryticTester.property_powerIsMonotonic()() (block=43363, time=882260, gas=12500000, gasprice=1, value=0, sender=0x30000)
[Execution Trace]
=> [call] CryticTester.property_powerIsMonotonic()() (addr=0x7D8CB8F412B3ee9AC79558791333F41d2b1ccDAC, value=0, sender=0x30000)
=> [event] Log("property_powerIsMonotonic")
=> [panic: assertion failed]
⇾ Test summary: 7 test(s) passed, 1 test(s) failed
This is one of the key benefits of having properties defined: they allow you to check your codebase against any changes you make to ensure that you don't introduce new bugs.
Note that changing depositAmount to uint256 only resolves this issue when there is a single actor depositing; if multiple actors deposit and their balances sum to more than type(uint256).max, the property will break again.
Now it's your turn: see if you can apply the techniques discussed here to figure out why property_powerIsMonotonic breaks!
If you get stuck or need help, reach out to the Recon team in our Discord!
Implementing Properties
Implementing properties is the most important part of invariant testing. Here we'll look at the different types of properties you can define, how they can be implemented (inlined or global), and the techniques you can use to express your properties as code, using an ERC4626 vault as an example.
What are properties?
Properties allow us to define behaviors that we expect in our system. In the context of testing we can say that properties are logical statements about the system that we test after state-changing operations are made via a call to one of the target function handlers.
We can use the term invariants to specify properties of a system that should always hold true, meaning after any state-changing operation.
We can use an ERC20 token as an example and define one property and one invariant for it:
- Property: a user's balance should increase only after calls to the transfer and mint functions
- Invariant: the sum of user balances should never be greater than the totalSupply
We prefer to use the term properties throughout this book because it covers both: invariants are a subset of properties, but not every property is an invariant.
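As a sketch, the ERC20 invariant above could be written as a global property much like the vault solvency property later in this chapter (assuming a token deployed in the setup and actors tracked by the ActorManager):

function property_sum_of_balances_lte_totalSupply() public {
    address[] memory users = _getActors();
    uint256 sumBalances;
    for (uint256 i = 0; i < users.length; i++) {
        sumBalances += token.balanceOf(users[i]);
    }
    // the sum of user balances should never exceed the totalSupply
    lte(sumBalances, token.totalSupply(), "sum of balances must be <= totalSupply");
}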
Property types
In this presentation, Certora lays out the five fundamental types of properties we’re interested in when writing invariants.
Namely these types of properties are:
- Valid States - checking that the system only stays in one of the expected states
- State Transitions - checking transitions between valid/invalid states
- Variable Transitions - checking that variables only change to/from certain expected values
- High-Level Properties - checking broad system behavior
- Unit Tests - checking specific behavior within functions
For more info on understanding these different types of properties, see this post.
Inlined vs global properties
After having implemented many suites ourselves, we realized that properties fall into two major categories of implementation: inlined and global.
We call properties defined inside a function handler inlined, like the following from the Create Chimera App template:
function counter_setNumber2(uint256 newNumber) public asActor {
// same example assertion test as counter_setNumber1 using ghost variables
__before();
counter.setNumber(newNumber);
__after();
if (newNumber != 0) {
t(_after.counter_number == newNumber, "number != newNumber");
}
}
Because these properties are defined within the handlers themselves, they are only checked after the call to the target handler. This means that by definition they aren't required to hold for other function calls.
We call properties defined as standalone public functions in the Properties contract that read some state or ghost variable global; the following example is also taken from the Create Chimera App template:
function invariant_number_never_zero() public {
gt(counter.number(), 0, "number is zero");
}
Because global properties are publicly exposed functions, they can be called by the fuzzer after any call to one of the state-changing handlers (just like how boolean properties normally work). This lets us check that a property holds for any function call, so we can use it to implement our system's invariants.
Testing mode
Echidna and Medusa, the two primary fuzzers that we use, allow defining properties using assertions as well as boolean properties.
The same global property in the example above could be rewritten as an Echidna boolean property like so:
function echidna_number_never_zero() public returns (bool) {
    return counter.number() != 0;
}
Any property that can be written as a boolean property, however, can also be written using an assertion. This simplifies the test-writing process and allows us to run a single Echidna job in assertion mode rather than one job in assertion mode and one in property mode, since Echidna doesn't support using two testing modes at the same time.
Most professionals write all properties as assertions so they can all be run with Echidna in assertion mode.
Example
We'll be using the scaffolding set up on this simplified ERC4626 vault repo on the book/example branch.
In the scaffolded setup linked above, each handler explicitly clamps any addresses that receive shares using the _getActor() function from the ActorManager. This ensures that only actors we've added to the Setup receive share tokens, making it easier to check properties on them.
Vault Properties
We'll be looking at how we can implement the following properties on an ERC4626 vault.
- The totalSupply of the vault's shares must be greater than or equal to the shares accounted to each user.
- The deposit function should never revert for a depositor that has sufficient balance and approvals.
- The price per share shouldn't increase on removals.
1. totalSupply check
Looking at the property definition, we can see that it doesn't specify any particular operation after which we should check it, so we can define it as a global property in the Properties contract.
Since this property only requires that we know the value of state variables after a given operation, we can just read these state values directly from the contract and make an assertion:
/// @dev Property: The `totalSupply` of the vault's shares must be greater than or equal to the shares accounted to each user
function property_total_supply_solvency() public {
uint256 totalSupply = vault.totalSupply();
// the only depositors in the system will be the actors that we added in the setup
address[] memory users = _getActors();
uint256 sumUserShares;
for (uint256 i = 0; i < users.length; i++) {
sumUserShares += vault.balanceOf(users[i]);
}
lte(sumUserShares, totalSupply, "sumUserShares must be <= to totalSupply");
}
Using the /// @dev Property NatSpec tag makes it clear to any collaborators what the property checks without them having to reason through the implementation themselves.
We use the less-than-or-equal-to (lte) assertion in this case because we only care that the vault doesn't round up against the protocol. If it rounds in favor of the protocol, causing the sum of user balances to be less than the totalSupply, we consider this acceptable because the vault is still solvent and users don't have more shares than expected.
2. Vault deposit never fails
This property explicitly states that the check should only be made for the deposit function. This means we can implement it as an inlined property inside the vault_deposit handler.
/// @dev Property: The `deposit` function should never revert for a depositor that has sufficient balance and approvals
function vault_deposit(uint256 assets) public updateGhosts asActor {
try vault.deposit(assets, _getActor()) {
// success
} catch (bytes memory reason) {
bool expectedError =
checkError(reason, "InsufficientBalance(address,uint256,uint256)") ||
checkError(reason, "InsufficientAllowance(address,address,uint256,uint256)");
// precondition: we only care about reverts for things other than insufficient balance or allowance
if (!expectedError) {
t(false, "deposit should not revert");
}
}
}
We use the checkError helper function from the Recon setup-helpers repo, which lets us check whether the revert reason from the call is one of the acceptable ones (insufficient balance or allowance); any other revert breaks the property.
3. Price per share increase
This property is more open-ended, but we can conclude from the definition that the price per share shouldn't change for operations that remove assets from the vault.
Since we have two target functions that remove assets from the vault (withdraw/redeem), we can check this as a global property that skips operations which don't remove assets from the vault. This simplifies the implementation so we don't have to implement the same property in both the vault_withdraw and vault_redeem handlers.
To track the operation of interest, we add the updateGhostsWithOpType modifier to the vault_redeem and vault_withdraw handlers:
function vault_redeem(uint256 shares) public updateGhostsWithOpType(OpType.REMOVE) asActor {
vault.redeem(shares, _getActor(), _getActor());
}
function vault_withdraw(uint256 assets) public updateGhostsWithOpType(OpType.REMOVE) asActor {
vault.withdraw(assets, _getActor(), _getActor());
}
We then define the following update in the BeforeAfter contract, which lets us track the price per share before and after any operation via the updateGhostsWithOpType/updateGhosts modifiers:
function __before() internal {
_before.pricePerShare = vault.previewMint(10**vault.decimals());
}
function __after() internal {
_after.pricePerShare = vault.previewMint(10**vault.decimals());
}
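The OpType enum and the updateGhostsWithOpType modifier aren't shown above; a minimal sketch of how they could be defined in the BeforeAfter contract (the GENERIC default and the exact implementation are assumptions, the Chimera templates may differ):

enum OpType {
    GENERIC,
    ADD,
    REMOVE
}

OpType internal currentOperation;

modifier updateGhostsWithOpType(OpType op) {
    currentOperation = op; // record which kind of operation the handler performs
    __before();
    _;
    __after();
}

modifier updateGhosts {
    currentOperation = OpType.GENERIC;
    __before();
    _;
    __after();
}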
We can then define a global property that checks that removal operations do not change the price per share:
function property_price_per_share_change() public {
if (currentOperation == OpType.REMOVE) {
eq(_after.pricePerShare, _before.pricePerShare, "pricePerShare must not change after a remove operation");
}
}
Running the fuzzer
Now with the properties implemented, all that's left to do is run the fuzzer to determine if any of the properties break.
Since we used the Create Chimera App template when scaffolding the above example, we can run Echidna locally using:
echidna . --contract CryticTester --config echidna.yaml
and Medusa using:
medusa fuzz
To run the job using the Recon cloud runner, see the Running Jobs page.
Optimizing Broken Properties
Echidna's optimization mode is a powerful tool for understanding the maximum possible impact of a vulnerability discovered by a broken property.
Optimization mode often lets you determine whether a vulnerability's impact can be made greater than what the originally broken property shows, or whether it's minimal because the optimization test can't increase it.
As of the time of writing, optimization mode is only available in Echidna, not Medusa.
What is optimization mode?
Optimization mode allows you to define a test function that starts with a special prefix (we tend to set this to optimize_ in the echidna.yaml), takes no arguments, and returns an int256 value:
function optimize_price_per_share_difference() public view returns (int256) {
return maxPriceDifference;
}
In the above example the maxPriceDifference value would be set by one of our target function handlers; Echidna then calls all the function handlers with random values in an attempt to maximize the value returned by optimize_price_per_share_difference.
Example
We'll now look at an example of how we can use optimization mode to increase the severity of a finding discovered by a broken property using this ERC4626 vault as an example.
Defining the property
First, because this is an ERC4626 vault without the possibility of accumulating yield or taking losses we can define a standard property that states: a user should not be able to change the price per share when adding or removing.
This check ensures that a malicious user cannot perform actions that would make it favorable for them to arbitrage the share token/underlying asset or allow them to steal funds from other users by giving themselves a more favorable exchange rate.
We specify when adding or removing in the property because these are the only operations that change the amount of underlying assets and shares in the vault so these are the ones that would affect the share price.
Given that an ERC4626 vault also has user operations that don't add or remove shares and assets from the vault (transfer and approve), we can use the OpType enum with the updateGhostsWithOpType modifier in our target handlers to only check the property after a call to one of our target functions of interest:
function vault_deposit(uint256 assets) public updateGhostsWithOpType(OpType.ADD) asActor {
vault.deposit(assets, _getActor());
}
function vault_mint(uint256 shares) public updateGhostsWithOpType(OpType.ADD) asActor {
vault.mint(shares, _getActor());
}
function vault_redeem(uint256 shares) public updateGhostsWithOpType(OpType.REMOVE) asActor {
vault.redeem(shares, _getActor(), _getActor());
}
function vault_withdraw(uint256 assets) public updateGhostsWithOpType(OpType.REMOVE) asActor {
vault.withdraw(assets, _getActor(), _getActor());
}
We can then add updates to the price per share in the BeforeAfter contract like so:
function __before() internal {
_before.pricePerShare = vault.convertToShares(10**underlyingAsset.decimals());
}
function __after() internal {
_after.pricePerShare = vault.convertToShares(10**underlyingAsset.decimals());
}
The above checks how many shares would be received for a deposit of 1 unit of the underlying asset (using the underlyingAsset's decimal precision), giving us an implicit price per share.
Finally, we can implement our property as:
function property_user_cannot_change_price_per_share() public {
if(currentOpType == OpType.ADD || currentOpType == OpType.REMOVE) {
eq(_before.pricePerShare, _after.pricePerShare, "price per share should not change with user operations");
}
}
which simply checks that the pricePerShare stays the same after any of the user operations that add or remove tokens to the system (mint, deposit, withdraw and redeem).
For more details on how to implement properties using the above OpType enum, see this section.
Breaking the property
After a brief run of Echidna, the property is broken with less than 5000 tests run:
property_user_cannot_change_price_per_share(): failed!💥
Call sequence:
CryticTester.vault_mint(3)
CryticTester.property_user_cannot_change_price_per_share()
Traces:
emit Log(«price per share should not change with user operations»)
This indicates that something in the ERC4626Vault::mint
function is allowing the user to manipulate the price per share.
However, after running the unit test that we generate with the Echidna log scraper tool, we can see that the pricePerShare only changes by 1 wei:
[FAIL: price per share should not change with user operations: 1000000000000000000 != 999999999999999999] test_property_user_cannot_change_price_per_share_0() (gas: 182423)
Logs:
price per share difference 1
Given that any price manipulation by an attacker would allow them to arbitrage the vault, we already know this is a vulnerability. We can now create an optimization test to determine whether the maximum price change is limited to this 1 wei or whether it can be made greater, potentially allowing an attacker to directly steal funds from other users as well.
Creating the optimization tests
Since we want to optimize the difference in the price per share, we can define two separate variables in our Setup contract for tracking this: one that tracks increases to the price per share and one that tracks decreases:
int256 maxPriceDifferenceIncrease;
int256 maxPriceDifferenceDecrease;
We can then define two separate optimization tests to optimize each of these values. If we were to only define one optimization test, we'd be losing information on other potential issue paths as the call sequence given at the end of the fuzzing run will only give us one maximized value.
For example, if we had a single test covering both price increases and decreases and the largest decrease happened to be greater than the largest increase, we'd only ever be able to analyze scenarios where the price decreases. Given that we already knew from the initial property break that decreasing the price is possible, this would leave us unaware of exploit paths that work by increasing the price.
We therefore define the following two optimization tests in our Properties contract for the difference in price before and after a call to one of our handlers of interest:
function optimize_user_increases_price_per_share() public returns (int256) {
if(currentOpType == OpType.ADD || currentOpType == OpType.REMOVE) {
if(_before.pricePerShare < _after.pricePerShare) {
maxPriceDifferenceIncrease = int256(_after.pricePerShare) - int256(_before.pricePerShare);
return maxPriceDifferenceIncrease;
}
}
}
function optimize_user_decreases_price_per_share() public returns (int256) {
if(currentOpType == OpType.ADD || currentOpType == OpType.REMOVE) {
if(_before.pricePerShare > _after.pricePerShare) {
maxPriceDifferenceDecrease = int256(_before.pricePerShare) - int256(_after.pricePerShare);
return maxPriceDifferenceDecrease;
}
}
}
which will set each of the respective maxPriceDifference variables accordingly.
Note that the tests above optimize the difference in price, so they let us confirm whether we can change the price by more than the 1 wei we already know is possible from the initial property break.
Running the optimization tests
We can then run the tests in optimization mode either by modifying the testMode parameter in the echidna.yaml config file or by passing the --test-mode optimization flag to the command we use to run Echidna. In our case we'll use the latter:
echidna . --contract CryticTester --config echidna.yaml --test-mode optimization --test-limit 100000
Note that we also increased the testLimit so that the fuzzer has plenty of chances to find a call sequence that optimizes the value. This is key to generating a truly optimized value: on production codebases we tend to run optimization mode for 100,000,000-500,000,000 tests, and for a test suite with many handlers and state variables that influence the value you're trying to optimize, the longer you run the tests, the higher the chance of finding a more optimized value.
Typically, once we've found a value sufficient to prove an increase in the vulnerability's severity (by demonstrating that the value can be made greater than the small value in the original property break), we stop the optimization run. Conversely, if the optimization run shows no increase after a few million tests, it typically means the test was incorrectly specified or there is in fact no way to increase the value further.
Note: stopping an Echidna optimization run early doesn't allow the call sequence that increased the value to be properly shrunk as of the time of writing (see this issue). If you've already found a sufficiently optimized value and don't want to wait until the run completes, you can stop the run, then start a new run with a shorter testLimit that reuses the corpus from the previous run to force Echidna to shrink the reproducer in the corpus.
After running the fuzzer for 100,000 runs for the above defined tests we get the following outputs in the console:
optimize_user_decreases_price_per_share: max value: 1000000000000000000
Call sequence:
CryticTester.asset_mint(0x2e234dae75c793f67a35089c9d99245e1c58470b,1005800180287214122)
CryticTester.vault_deposit(1)
...
optimize_user_increases_price_per_share: max value: 250000000000000000
Call sequence:
CryticTester.vault_deposit(2)
CryticTester.vault_deposit(1)
CryticTester.vault_deposit(1)
CryticTester.vault_withdraw(2)
from which we can see that an attacker can not only decrease the price per share, as our original property break implied, but can also increase it using a completely different call sequence. This is why it's key to define separate optimization tests for increases and decreases of whatever value your initial property break involves.
Investigating the issue
We can use the above call sequences with our Echidna log scraper tool to generate unit tests that let us debug the root cause of the issues, adding the following tests to our CryticToFoundry contract:
// forge test --match-test test_optimize_user_decreases_price_per_share_0 -vvv
function test_optimize_user_decreases_price_per_share_0() public {
// Max value: 1000000000000000000;
asset_mint(0x2e234DAe75C793f67A35089C9d99245E1C58470b,1005800180287214122);
vault_deposit(1);
}
// forge test --match-test test_optimize_user_increases_price_per_share_1 -vvv
function test_optimize_user_increases_price_per_share_1() public {
// Max value: 250000000000000000;
vault_deposit(2);
vault_deposit(1);
vault_deposit(1);
vault_withdraw(2);
}
Decreasing price
For the test_optimize_user_decreases_price_per_share_0 test, we see that asset_mint is making a donation. We don't know which of the contracts deployed in the setup lives at the address receiving the donation (0x2e234DAe75C793f67A35089C9d99245E1C58470b), but we can make an educated guess that it's the vault, because if the test manipulates the share price it must be affecting the ratio of shares to assets in the vault.
Logging the address of the deployed vault in the test confirms this:
Logs:
vault 0x2e234DAe75C793f67A35089C9d99245E1C58470b
So essentially this test is telling us that: "if I donate a large amount then make a deposit I can decrease the price per share".
Investigating further, we can see that the ERC4626Vault::deposit function calls ERC4626Vault::previewDeposit, which calls ERC4626Vault::convertToShares:
contract ERC4626Vault is MockERC20 {
...
function deposit(uint256 assets, address receiver) public virtual returns (uint256) {
uint256 shares = previewDeposit(assets);
_deposit(msg.sender, receiver, assets, shares);
return shares;
}
...
function previewDeposit(uint256 assets) public view virtual returns (uint256) {
return convertToShares(assets);
}
...
function convertToShares(uint256 assets) public view virtual returns (uint256) {
uint256 supply = totalSupply;
if (supply == 0) return assets;
// VULNERABILITY: Precision Loss Inflation Attack
// This creates a subtle inflation vulnerability through precision loss manipulation
// The attack works by exploiting division-before-multiplication in specific scenarios
uint256 totalAssets_ = totalAssets();
// Step 1: Calculate share percentage first (division before multiplication)
uint256 sharePercentage = assets * 1e18 / totalAssets_; // Get percentage with 18 decimals
// Step 2: Apply percentage to total supply
uint256 shares = sharePercentage * supply / 1e18;
// The vulnerability: When totalAssets_ >> assets, sharePercentage rounds down to 0
// This allows attackers to:
// 1. Deposit large amount to inflate totalAssets
// 2. Make small deposits from other accounts that get 0 shares due to rounding
// 3. Withdraw their initial deposit plus the "donated" assets from failed deposits
return shares;
}
}
which identifies the bug that was planted.
If we expand the existing test, we can confirm the observation in the comments: the first depositor's donation can drive the shares received by subsequent small deposits down to 0:
function test_optimize_user_decreases_price_per_share_0() public {
// Max value: 1000000000000000000;
asset_mint(0x2e234DAe75C793f67A35089C9d99245E1C58470b,1005800180287214122);
vault_deposit(1);
// switch to the other actor and deposit a small amount
switchActor(1);
vault_deposit(1e18);
console2.log("balance of actor 2 after deposit", vault.balanceOf(_getActor()));
}
Logs:
balance of actor 2 after deposit 0
This allows the attacker to steal from the other depositor(s) on withdrawal, an increase in severity over the originally identified bug of arbitraging by deflating the price.
Increasing price
For test_optimize_user_increases_price_per_share_1 we can see that multiple small deposits followed by a withdrawal of a small amount allow the user to manipulate the price upward.
The root cause of this issue is the same, but can you identify why it increases the price instead of decreasing it? What would be the impact of being able to manipulate the price in this way?
Continue learning
To see how the Recon team used optimization mode in a real world project to escalate the severity of a vulnerability to critical during an audit of the Beraborrow codebase, check out this article.
To see how optimization mode was used to escalate the severity of a QA vulnerability discovered in the Liquity V2 governance codebase to a high severity, see this presentation.
Advanced Fuzzing Tips
This is a compilation of best practices that the Recon team has developed while using the Chimera framework.
Target Functions
For each contract you want to fuzz (your target contract), select the state-changing functions (target functions) you want to include in your fuzzing suite. Wrap the function in a handler which passes in the input to the function call and allows the fuzzer to test random values.
// contract to target
contract Counter {
uint256 public number;
function increment() public {
number++;
}
}
abstract contract TargetFunctions {
// function handler that targets our contract of interest
function counter_increment() public asActor {
counter.increment();
}
}
The easiest way to do this is with our Invariants Builder tool or with the Recon Extension directly in your code editor.
Using the asActor or asAdmin modifiers in combination with an Actor Manager ensures efficient fuzzing by not wasting calls on functions that should only be executed by an admin being called by a non-admin actor. These modifiers prank the call to the target contract as the given actor, ensuring the call is made with that actor as the msg.sender.
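For reference, these modifiers are typically defined in the Setup contract, as in the example project earlier in the book:

modifier asAdmin {
    vm.prank(address(this));
    _;
}

modifier asActor {
    vm.prank(address(_getActor()));
    _;
}

The following example shows a case where asAdmin is the right choice: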
// contract to target
contract Yield {
    address admin;
    uint256 yield;

    modifier onlyAdmin() {
        require(msg.sender == admin);
        _;
    }

    function resetYield() public onlyAdmin {
        yield = 0;
    }
}

abstract contract TargetFunctions {
    // calling this as a regular actor would waste fuzz calls because it would always revert, so we use the asAdmin modifier
    function yield_resetYield() public asAdmin {
        yield.resetYield();
    }
}
Clamping Target Functions
Clamping reduces the search space for the fuzzer, making it more likely that you'll explore interesting states in your system.
Clamped handlers should be a subset of all target functions by calling the unclamped handlers with the clamped inputs. This ensures that the fuzzer doesn't become overbiased, preventing it from exploring potentially interesting states, and also ensures checks for inlined properties which are implemented in the unclamped handlers are always performed.
contract Counter {
uint256 public number = 1;
function setNumber(uint256 newNumber) public {
if (newNumber != 0) {
number = newNumber;
}
}
}
abstract contract TargetFunctions {
function counter_setNumber(uint256 newNumber) public asActor {
// unclamped handler explores the entire search space; allows the input to be 0
counter.setNumber(newNumber);
// inlined property check in the handler always gets executed
if (newNumber != 0) {
t(counter.number() == newNumber, "number != newNumber");
}
}
function counter_setNumber_clamped(uint256 newNumber) public asActor {
// clamping prevents the newNumber passed into setNumber from ever being 0
newNumber = between(newNumber, 1, type(uint256).max);
// clamped handler calls the unclamped handler
counter_setNumber(newNumber);
}
}
Disabling Target Functions
Certain state-changing functions in your target contract may not explore interesting state transitions and therefore waste fuzzer calls, which is why our ABI-to-invariants tool only scrapes non-view/pure functions from the targeted contract's ABI. Other functions (generally admin-privileged ones, such as contract upgrades or token-sweeping functions) introduce so many false positives into the properties being tested that it's better to ignore them. Doing so is perfectly okay even though it reduces overall coverage of the targeted contracts.
To make it clear to others looking at the test suite that you purposely ignored a function, we tend to either comment out the handler or add an alwaysRevert modifier that causes the handler to revert every time it's called by the fuzzer.
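A minimal sketch of what such an alwaysRevert modifier could look like (the exact implementation in our templates may differ):

modifier alwaysRevert() {
    // reverting immediately means the fuzzer discards every call to this handler
    revert("handler disabled");
    _; // placeholder required by Solidity, never reached
}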
Setup
This section covers a few rules we've come to follow in our engagements regarding setting up invariant testing suites.
- Create your own test setup
- Keep the story clean
- State exploration and coverage
- Programmatic deployment
- Implicit clamping
Create Your Own Test Setup
If you're working on a codebase which you didn't originally develop, it's tempting to reuse the Foundry test setup that the developers used for their unit tests; however, this can introduce the biases present in the existing setup into the invariant testing suite.
We've found that it's best to create the simplest setup possible starting from scratch, where you only deploy the necessary contracts of interest and periphery contracts.
Periphery contracts can also often be mocked (see this section on how to automatically generate mocks using the Recon Extension) if their behavior is irrelevant to the contracts of interest. This further reduces complexity, making the suite more easily understood by collaborators and making it more likely you'll get full line coverage over the contracts of interest more quickly.
Keep The Story Clean
We call the "story" the view of the state changes made by a given call sequence in a tester. By maintaining only one state-changing operation per target function handler, it makes it much simpler to understand what's happening within a call sequence when a tool generates a reproducer that breaks a property that you've defined.
If you include multiple state-changing operations within a single handler it adds additional mental overhead when trying to debug a breaking call sequence because you not only need to identify which handler is the source of the issue, but the individual operation within the handler as well.
Take the following example of handlers defined for an ERC4626 vault:
// having multiple state-changing operations in one handler makes it difficult to understand what's happening in the story
function vault_deposit_and_redeem(uint256 assets) public asActor {
uint256 sharesReceived = vault.deposit(assets);
vault.redeem(sharesReceived);
}
// having separate target function handlers makes it easier to understand the logical flow that led to a broken property in a reproducer call sequence
function vault_deposit(uint256 assets) public asActor {
    vault.deposit(assets);
}

function vault_redeem(uint256 shares) public asActor {
    vault.redeem(shares);
}
Although this is a simplified example we can see that maintaining separate handlers for each state-changing function makes our story much simpler to read.
If you need to perform multiple state-changing operations to test a given property, consider making the function stateless as discussed in the inlined fuzz properties section
State Exploration and Coverage
The most important aspect of invariant testing is what you actually test, and what you actually test is implied by the target functions you define in your test suite setup.
Some contracts, like oracles, may be too complex to fully model (e.g. having to reach a quorum of 3 signers to update the price) because they would add overhead in terms of function handlers that require specific clamping and test suite setup.
In these cases, mocking is preferred because it simplifies the process for the fuzzer to change return values (price in the case of the oracle example) or allow more straightforward interactions to be made by your targeted contracts (without requiring things like input validation on the mocked contract's side).
Mocking is a destructive operation because it causes a loss of information, but the simplification it provides leads fuzzing and formal verification tools to explore possible system states much faster.
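As a rough sketch of this idea (the mock's interface, the mockOracle variable, and the asAdmin modifier are illustrative assumptions, not the actual contracts), a price oracle mock and its handler might look like:
contract MockOracle {
    uint256 public price;
    uint256 public lastUpdated;

    // the fuzzer can set the price directly instead of simulating a signer quorum
    function setPrice(uint256 newPrice) external {
        price = newPrice;
        lastUpdated = block.timestamp;
    }
}

// target function handler exposing the mock's setter to the fuzzer
function oracle_setPrice(uint256 newPrice) public asAdmin {
    mockOracle.setPrice(newPrice);
}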
Adding additional handlers for things like token donations (transferring/minting tokens directly to one of your targeted contracts) can allow you to explore interesting states that otherwise wouldn't be possible if you only had handlers for your contracts of interest.
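For instance, assuming the suite tracks a vault and its underlying asset in the Setup contract (the names here are illustrative), a donation handler could be as simple as:
// transfers tokens directly to the vault without going through deposit,
// letting the fuzzer explore states where the vault's balance drifts from its internal accounting
function asset_donate(uint256 amount) public asActor {
    amount = between(amount, 0, asset.balanceOf(_getActor()));
    asset.transfer(address(vault), amount);
}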
Programmatic Deployment
Most developers tend to write static deployments for their test suites, where specific contracts are deployed with some default values.
However, this can leave a vast number of possible system states untested: some would be considered admin mistakes (because they're related to deployment configurations), while others are valid system configurations that are simply never exercised with fixed deployment parameters.
This is why we now prefer using programmatic deployments because they allow us to use the randomness of the fuzzer to introduce randomness into the system configuration that's being tested against. Although programmatic deployments make running a suite slower (because the fuzzer needs to find a valid deployment configuration before achieving meaningful line coverage), they turn simple suites into multi-dimensional ones.
This is best understood with an example of a system designed to accept multiple tokens. With a static deployment you may end up testing tokens with 6 and 18 decimals (the two most common extremes). However, with a programmatic deployment, you can test many token configurations (say all tokens with 6-24 decimals) to ensure that your system works with all of them.
We've standardized these ideas around programmatic deployments in our Manager contracts.
Programmatic deployment consists of adding 4 general function types:
- A deploy handler, which will deploy a new version of the target (e.g. a token via _newAsset in the AssetManager)
- A switchTo handler, to change the current target being used (e.g. the _switchAsset function in the AssetManager)
- A getCurrent internal function, to know which is the current active target (e.g. the _getAsset function in the AssetManager)
- A getAll internal function, to retrieve all deployments (e.g. the _getAssets function in the AssetManager)
Using the pattern of managers can help you add multi-dimensionality to your test setup and make tracking deployed components simpler.
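As a simplified sketch of the four function types above (not the actual AssetManager implementation; the MockERC20 constructor shown is an assumption), a manager for tracked assets could look like:
abstract contract ExampleAssetManager {
    address[] internal assets;      // getAll: every deployed target
    address internal currentAsset;  // getCurrent: the active target

    // deploy: creates and tracks a new target instance
    function _newAsset(uint8 decimals) internal returns (address asset) {
        asset = address(new MockERC20("Mock Token", "MOCK", decimals));
        assets.push(asset);
        currentAsset = asset;
    }

    // switchTo: changes the current target using fuzzer-provided entropy
    function _switchAsset(uint256 entropy) internal {
        currentAsset = assets[entropy % assets.length];
    }

    // getCurrent: the active target used for clamping in handlers
    function _getAsset() internal view returns (address) {
        return currentAsset;
    }

    // getAll: all deployed targets
    function _getAssets() internal view returns (address[] memory) {
        return assets;
    }
}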
Implicit Clamping
Based on your deployments, configuration, and the target functions you expose, a subset of all possible system states will be reachable. This leads to what we call implicit clamping as the states not reachable by your test suite setup will obviously not be tested and therefore have a similar effect as if they were excluded via clamping.
Mapping out what behavior is and isn't possible given your suite setup can therefore be helpful for understanding the limitations of your suite.
With these limitations outlined, you can write properties that define what behaviors should and shouldn't be possible given your setup. Checking these properties with fuzzing or formal verification won't necessarily prove they're always true, simply that they're true for the setup you've created.
This tends to be a source of bugs missed by fuzzing: ultimately you can only detect what you test, and if your system isn't configured so that it can reach a given buggy state, you'll never be able to detect the bug in it.
Ghost Variables
Ghost variables are a supporting tool that allow you to track system state over time. These can then be used in properties to check if the state has evolved as expected.
In the Chimera Framework we've concretized our preferred method of tracking ghost variables using a BeforeAfter
contract which exposes an updateGhosts
modifier that allows you to cache the system state before and after a call to a given target function.
As a rule of thumb, it's best to avoid computation in your ghost variable updates, as it adds additional operations that need to be performed for each call executed by the fuzzer, slowing down fuzzing.
Do NOT put any assertions in your ghost variable updates, and avoid any operation or call that may cause a revert. A revert undoes all operations in the call sequence created by the fuzzer, leaving the fuzzer with a blind spot because it will be unable to reach certain valid states.
Overcomplicating ghost variables has the unfortunate side effect of making the coverage report look promising: it will show certain lines as fully covered while implicit clamping may actually be applied, because reverts prevent edge cases for certain properties from being explored. An update that reverts before the property is checked will not generate a reproducer call sequence.
contract Counter {
uint256 public number = 1;
function setNumber(uint256 newNumber) public {
if (newNumber != 0) {
number = newNumber;
}
}
}
abstract contract TargetFunctions {
// updateGhosts updates the ghost variables before and after the call to the target function
function counter_setNumber1(uint256 newNumber) public updateGhosts asActor {
counter.setNumber(newNumber);
}
}
abstract contract BeforeAfter is Setup {
struct Vars {
uint256 counter_difference;
uint256 counter_number;
}
Vars internal _before;
Vars internal _after;
modifier updateGhosts {
__before();
_;
__after();
}
function __before() internal {
    _before.counter_number = counter.number();
}
function __after() internal {
    _after.counter_number = counter.number();
    // this line reverts whenever the newNumber passed to setNumber is greater than the number cached before the call
    _after.counter_difference = _before.counter_number - _after.counter_number;
}
}
In the above example, the __after() function reverts whenever the newNumber passed into setNumber is greater than the value cached in _before.counter_number. The coverage report would still show the function as covered, because for all newNumber values less than or equal to _before.counter_number the update succeeds. Fundamentally, though, we'd be limiting the search space of the fuzzer, preventing it from exploring any call sequence where newNumber is greater than the previously stored number and potentially missing bugs because of it.
Grouping Function Types
A simple pattern that lets ghost variables be used as preconditions in global property checks is to group operations by their effects.
For example, you may have multiple deposit/mint operations that have the effect of "adding" and others of "removing".
You can group these effects using an enum type as follows:
enum OpType {
GENERIC,
ADD,
REMOVE
}
modifier updateGhostsWithType(OpType op) {
currentOperation = op;
__before();
_;
__after();
}
modifier updateGhosts() {
currentOperation = OpType.GENERIC;
__before();
_;
__after();
}
And add the updateGhostsWithType
modifier only to handlers which perform one of the operations of interest. All other handlers using the standard updateGhosts
modifier will default to the GENERIC
operation type so that you don't have to refactor any existing modifiers.
Now with the ability to elegantly track the current operation you can easily use the operation type to write a global property for it like so:
contract Counter {
uint256 public number = 1;
function increment() public {
number++;
}
function decrement() public {
number--;
}
}
abstract contract TargetFunctions {
// we set the respective operation type on our target function handlers
function counter_increment() public updateGhostsWithType(OpType.INCREASE) asActor {
    counter.increment();
}
function counter_decrement() public updateGhostsWithType(OpType.DECREASE) asActor {
    counter.decrement();
}
}
abstract contract BeforeAfter is Setup {
enum OpType {
    GENERIC,
    INCREASE,
    DECREASE
}
OpType internal currentOperation;
struct Vars {
    uint256 counter_number;
}
Vars internal _before;
Vars internal _after;
modifier updateGhosts {
    currentOperation = OpType.GENERIC;
    __before();
    _;
    __after();
}
modifier updateGhostsWithType(OpType op) {
currentOperation = op;
__before();
_;
__after();
}
function __before() internal {
_before.counter_number = counter.number();
}
function __after() internal {
_after.counter_number = counter.number();
}
}
abstract contract Properties is BeforeAfter, Asserts {
function property_number_decreases() public {
// we can use the operation type as a precondition to a check in a global property
if(currentOperation == OpType.DECREASE)
gte(_before.counter_number, _after.counter_number, "number does not decrease");
}
}
Inlined Fuzz Properties
Inlined properties allow you to make an assertion about the system state immediately after a given state-changing target function call:
contract Counter {
uint256 public number = 1;
function setNumber(uint256 newNumber) public {
if (newNumber != 0) {
number = newNumber;
}
}
}
abstract contract TargetFunctions
{
function counter_setNumber(uint256 newNumber) public updateGhosts asActor {
try counter.setNumber(newNumber) {
// if the call to the target contract was successful, we make an assertion about the expected system state after
if (newNumber != 0) {
t(counter.number() == newNumber, "number != newNumber");
}
} catch {
    // reverts are allowed here; we only assert on successful calls
}
}
}
Repeating the same inlined property in multiple places should be avoided whenever possible.
Implementing the same inlined property in multiple places is essentially asking the fuzzer to break the property, not change the state (since an assertion failure prevents the call to the handler from completing), and then to break the same property in a different way via another handler while already knowing that the property breaks. This wastes computational resources: you're asking the fuzzer to prove a fact you already know instead of asking it to prove a new fact for which you don't yet know whether a proof (in the sense of a broken property, not a mathematical proof) exists.
If you find yourself implementing the same inline property multiple times, you should refactor them to only be assessed in one handler or checked as a global property using ghost variables.
If you can only implement a property as an inlined test but don't want multiple state changes to be maintained as it would make the story difficult to read, you can make your handler stateless using a stateless
modifier.
modifier stateless() {
    _;
    // the revert is only made after the handler function execution completes
    revert("stateless");
}
This causes the handler to revert only at the end of execution, meaning any coverage exploration will be maintained and any assertion failures will happen before the revert.
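As a sketch of how this might be used, reusing the simplified vault example from earlier (the single-argument deposit/redeem signatures are illustrative):
// performs two state-changing operations but reverts once the assertion has run,
// so the story only ever shows single-operation handlers
function vault_deposit_then_redeem(uint256 assets) public stateless asActor {
    uint256 sharesReceived = vault.deposit(assets);
    uint256 assetsReturned = vault.redeem(sharesReceived);
    lte(assetsReturned, assets, "redeeming returns more assets than were deposited");
}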
Exploring Rounding Errors
Fuzzing is a particularly useful technique for finding precision loss related issues, as highlighted by @DevDacian in this blog post.
Simply put, the approach for finding such issues is as follows. For any place where there is a division operation which you believe may lead to a loss of precision:
- Start by using an exact check in a property assertion to verify that the return value of a variable/function is as expected.
- Run the fuzzer and allow it to find a case where the return value is not what's expected via a falsified assertion.
- Create an Echidna optimization test to increase the difference with the expected value.
The results of the optimization run will allow you to determine the severity of any potential precision loss and how it may be used as part of an exploit path.
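As an illustrative sketch (the ghost variables used here are hypothetical), the exact check from step 1 can then be turned into an optimization test that the fuzzer tries to maximize:
// Echidna optimization mode maximizes the returned value, giving an estimate
// of the worst-case difference between the expected and actual amounts
function optimize_asset_difference() public view returns (int256) {
    return int256(_after.expected_assets) - int256(_after.actual_assets);
}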
Chimera Framework
The Chimera framework lets you run invariant tests with Echidna and Medusa that can be easily debugged using Foundry.
The framework is made up of the following contracts: Setup, TargetFunctions, Properties, BeforeAfter, CryticToFoundry, and CryticTester.
When you build your handlers using Recon these files get automatically generated and populated for you. To use the framework in your project, you just need to download these files that get generated for you and add the Chimera dependency to your project:
forge install Recon-Fuzz/chimera
How It Works
The Chimera Framework uses an inheritance structure that allows all the supporting contracts to be inherited by the CryticTester
contract so you can add target function handlers for multiple contracts all with a single point of entry for the fuzzer via CryticTester
:
⇾ Fuzzer stopped, test results follow below ...
⇾ [PASSED] Assertion Test: CryticTester.add_new_asset(uint8)
⇾ [PASSED] Assertion Test: CryticTester.asset_approve(address,uint128)
⇾ [PASSED] Assertion Test: CryticTester.asset_mint(address,uint128)
⇾ [PASSED] Assertion Test: CryticTester.counter_increment()
⇾ [PASSED] Assertion Test: CryticTester.counter_increment_asAdmin()
⇾ [PASSED] Assertion Test: CryticTester.counter_setNumber1(uint256)
⇾ [PASSED] Assertion Test: CryticTester.counter_setNumber2(uint256)
⇾ [PASSED] Assertion Test: CryticTester.invariant_number_never_zero()
⇾ [PASSED] Assertion Test: CryticTester.switch_asset(uint256)
⇾ [PASSED] Assertion Test: CryticTester.switchActor(uint256)
⇾ [FAILED] Assertion Test: CryticTester.doomsday_increment_never_reverts()
The above output is taken from a run on the create-chimera-app template and we can see that we can call handlers for multiple contracts all through the interface of the
CryticTester
contract.
Ultimately this allows you to have one Setup
contract that can deploy multiple contracts that you'll be targeting and make the debugging process easier because each broken property can be turned into an easily testable unit test that can be run in the CryticToFoundry
contract.
This also simplifies the testing process, allowing you to use the following commands to run fuzz tests without having to modify any configurations.
For Echidna:
echidna . --contract CryticTester --config echidna.yaml
For Medusa:
medusa fuzz
The Contracts
We'll now look at the role each of the above-mentioned contracts serve in building an extensible and maintainable fuzzing suite. We'll be looking at examples using the create-chimera-app template project.
Setup
This contract is used to deploy and initialize the state of your target contracts. It's called by the fuzzer before any of the target functions are called.
Any contracts you want to track in your fuzzing suite should live here.
In our create-chimera-app
template project, the Setup
contract is used to deploy the Counter
contract:
abstract contract Setup is BaseSetup {
Counter counter;
function setup() internal virtual override {
counter = new Counter();
}
}
TargetFunctions
This is perhaps the most important file in your fuzzing suite, it defines the target function handlers that will be called by the fuzzer to manipulate the state of your target contracts.
Note: These are the only functions that will be called by the fuzzer.
Because these functions have the aim of changing the state of the target contract, they usually only include non-view and non-pure functions.
Target functions make calls to the target contracts deployed in the Setup
contract. The handler that wraps the target function allows you to add clamping (reducing the possible fuzzed input values) before the call to the target contract and properties (assertions about the state of the target contract) after the call to the target contract.
In our create-chimera-app
template project, the TargetFunctions
contract is used to define the increment
and setNumber
functions:
abstract contract TargetFunctions is
BaseTargetFunctions,
Properties
{
function counter_increment() public {
counter.increment();
}
function counter_setNumber1(uint256 newNumber) public {
// clamping can be added here before the call to the target contract
// ex: newNumber = newNumber % 100;
// example assertion test replicating testFuzz_SetNumber
try counter.setNumber(newNumber) {
if (newNumber != 0) {
t(counter.number() == newNumber, "number != newNumber");
}
} catch {
t(false, "setNumber reverts");
}
}
function counter_setNumber2(uint256 newNumber) public {
// same example assertion test as counter_setNumber1 using ghost variables
__before();
counter.setNumber(newNumber);
__after();
if (newNumber != 0) {
t(_after.counter_number == newNumber, "number != newNumber");
}
}
}
Properties
This contract is used to define the properties that will be checked after the target functions are called.
At Recon our preference is to define these as Echidna/Medusa assertion properties but they can also be defined as boolean properties.
In our create-chimera-app
template project, the Properties
contract is used to define a property that states that the number can never be 0:
abstract contract Properties is BeforeAfter, Asserts {
// example property test
function invariant_number_never_zero() public {
t(counter.number() != 0, "number is 0");
}
}
CryticToFoundry
This contract is used to debug broken properties by converting the breaking call sequence from Echidna/Medusa into a Foundry unit test. When running jobs on Recon this is done automatically for all broken properties using the fuzzer logs.
If you are running the fuzzers locally you can use the Echidna and Medusa tools on Recon to convert the breaking call sequence from the logs into a Foundry unit test.
This contract is also useful for debugging issues related to the setup
function and allows testing individual handlers in isolation to verify if they're working as expected for specific inputs.
In our create-chimera-app
template project, the CryticToFoundry
contract doesn't include any reproducer tests because all the properties pass.
The test_crytic
function demonstrates the template for adding a reproducer test:
contract CryticToFoundry is Test, TargetFunctions, FoundryAsserts {
function setUp() public {
setup();
targetContract(address(counter));
}
function test_crytic() public {
// TODO: add failing property tests here for debugging
}
}
BeforeAfter
This contract is used to store the state of the target contract before and after the target functions are called in a Vars
struct.
These variables can be used in property definitions to check if function calls have modified the state of the target contract in an unexpected way.
In our create-chimera-app
template project, the BeforeAfter
contract is used to track the counter_number
variable:
// ghost variables for tracking state variable values before and after function calls
abstract contract BeforeAfter is Setup {
struct Vars {
uint256 counter_number;
}
Vars internal _before;
Vars internal _after;
function __before() internal {
_before.counter_number = counter.number();
}
function __after() internal {
_after.counter_number = counter.number();
}
}
CryticTester
This is the entrypoint for the fuzzer into the suite. All target functions will be called on an instance of this contract since it inherits from the TargetFunctions
contract.
In our create-chimera-app
template project, the CryticTester
contract is used to call the counter_increment
and counter_setNumber1
functions:
// echidna . --contract CryticTester --config echidna.yaml
// medusa fuzz
contract CryticTester is TargetFunctions, CryticAsserts {
constructor() payable {
setup();
}
}
Assertions
The assertions Chimera provides for use in your properties have a different interface than the standard assertions from Foundry's forge-std.
The following assertions are available in Chimera's Asserts
contract:
abstract contract Asserts {
// greater than
function gt(uint256 a, uint256 b, string memory reason) internal virtual;
// greater than or equal to
function gte(uint256 a, uint256 b, string memory reason) internal virtual;
// less than
function lt(uint256 a, uint256 b, string memory reason) internal virtual;
// less than or equal to
function lte(uint256 a, uint256 b, string memory reason) internal virtual;
// equal to
function eq(uint256 a, uint256 b, string memory reason) internal virtual;
// true
function t(bool b, string memory reason) internal virtual;
// between uint256
function between(uint256 value, uint256 low, uint256 high) internal virtual returns (uint256);
// between int256
function between(int256 value, int256 low, int256 high) internal virtual returns (int256);
// precondition
function precondition(bool p) internal virtual;
}
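For example, a handler might clamp an input with between and then assert with eq; this is a sketch assuming the counter deployed in Setup and the asActor modifier from the ActorManager:
function counter_setNumber_clamped(uint256 newNumber) public asActor {
    // clamp to a non-zero range so the call always updates the stored number
    newNumber = between(newNumber, 1, type(uint128).max);
    counter.setNumber(newNumber);
    eq(counter.number(), newNumber, "number != newNumber");
}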
Create Chimera App
To make getting started with Invariant Testing as easy as possible, we created the create-chimera-app template which you can use to create a new Foundry project with invariant testing built-in.
This template uses our Chimera framework to let you run invariant tests with Echidna, Medusa, Halmos, and Kontrol that can be easily debugged using Foundry. We've also incorporated contracts from our Setup Helpers repo which make managing the test suite setup much simpler.
Prerequisites
To make use of the fuzzing/formal verification tools that create-chimera-app supports, you'll need to install Foundry and at least one of the following on your local machine: Echidna, Medusa, Halmos, or Kontrol.
Creating A New Project
First, you'll need to create a new project using the create-chimera-app template. You can use the Use this template button on GitHub to do this. This will automatically create a new repository using the create-chimera-app template on your GitHub account.
Note: you can also use the forge init --template https://github.com/Recon-Fuzz/create-chimera-app command to create a new project locally, or just clone the repo directly using git clone https://github.com/Recon-Fuzz/create-chimera-app
Create-chimera-app ships with the default Foundry template as an example. This is a simple Counter
contract that you can use to get a feel for how invariant testing works.
If you want to initialize a clean project without the boilerplate from the default Foundry template, you can use the create-chimera-app-no-boilerplate repo
Testing
An example invariant is implemented in the Properties
contract and some inline properties are defined in the TargetFunctions
contract.
You can test these properties using Echidna and Medusa using the following commands:
Echidna
echidna . --contract CryticTester --config echidna.yaml
Medusa
medusa fuzz
To run your tests on Recon, follow the instructions on the Running Jobs page.
Configurations
The create-chimera-app template includes two main configuration files that control how the fuzzers operate. We've set the defaults in the configuration so they work out-of-the-box with most repos so we recommend keeping most configurations set to their default except where noted below.
We've found that modifying configurations consistently can often be a source of errors and make it difficult for collaborators to understand what's going on. As a rule of thumb, we only modify our configuration at the beginning of an engagement (if at all), then leave it as is for the remainder of our testing campaign.
The only config we consistently change during testing is the testLimit for Echidna, by overriding the value with the --test-limit flag in the CLI.
echidna.yaml
The echidna.yaml
file configures Echidna.
- testMode: Set to "assertion" to run assertion-based property tests which we use as our default
- prefix: Set to "optimize_" because we don't use property mode, so this prefix is only used for optimization tests
- coverage: Enables coverage-guided fuzzing and creates a corpus that can be reused in other runs for faster fuzzing
- corpusDir: Sets the name of the directory where the corpus file will be saved
- balanceAddr: Starting ETH balance for sender addresses
- balanceContract: Starting ETH balance for the target contract
- filterFunctions: List of function signatures to include or exclude from fuzzing (we recommend not using this and instead just not including function handlers in your TargetFunctions if you don't want to test a given function, because it's clearer for collaborators)
- cryticArgs: Additional arguments passed to the crytic-compile compilation process (leave this as is unless you need to link libraries)
- deployer: Address used to deploy contracts during the fuzzing campaign (set to "0x7FA9385bE102ac3EAc297483Dd6233D62b3e1496" for consistency with Foundry)
- contractAddr: Address where the target contract is deployed (this will be the CryticTester contract)
- shrinkLimit: Number of attempts to minimize failing test cases by removing unnecessary calls from the sequence
Additional configuration options:
- testLimit: Number of transactions to execute before stopping (we typically run 100000000 or higher for thorough testing)
- seqLen: Maximum length of transaction sequences that Echidna will generate to try to break properties
- sender: Array of addresses that can send transactions to the target contracts (leave this set to the default because our framework switches senders via actors in the ActorManager instead)
The template is pre-configured to work with Foundry as the build system, allowing for seamless integration between compilation and fuzzing.
medusa.json
The medusa.json
file configures Medusa. The configuration sections include:
Fuzzing Configuration:
- workers: Number of parallel workers to use during fuzzing (defaults to the max workers possible on your system given the number of cores)
- workerResetLimit: Number of call sequences a worker executes before resetting (set to 50)
- timeout: Maximum duration for the fuzzing campaign in seconds (set to 0 for indefinite)
- testLimit: Number of transactions to execute before stopping (set to 0 for indefinite)
- callSequenceLength: Maximum number of function calls to attempt to break a property (set to 100)
- corpusDirectory: Directory where corpus items and coverage reports are stored (set to "medusa")
- coverageEnabled: Whether to save coverage-increasing sequences for reuse (set to true)
- deploymentOrder: Order in which contracts are deployed (ensures CryticTester is the target)
- targetContracts: Array specifying which contracts to fuzz (ensures CryticTester is the target)
- targetContractsBalances: Starting ETH balance for each target contract (hex-encoded)
- constructorArgs: Arguments to pass to contract constructors (unneeded because the CryticTester should never have a constructor; deploy all contracts in the Setup)
- deployerAddress: Address used to deploy contracts during fuzzing (set to "0x7FA9385bE102ac3EAc297483Dd6233D62b3e1496" for consistency with Foundry)
- senderAddresses: Account addresses used to send function calls (leave this set to the default because our framework switches senders via actors in the ActorManager instead)
- blockNumberDelayMax: Maximum block number jump between transactions (set to 60480)
- blockTimestampDelayMax: Maximum timestamp jump between transactions (set to 604800)
- blockGasLimit: Maximum gas per block (you may need to raise this if your setup is very large)
- transactionGasLimit: Maximum gas per transaction (set to 12500000)
Testing Configuration:
- stopOnFailedTest: Whether to stop after the first failed test (set to false)
- stopOnFailedContractMatching: Whether to stop on bytecode matching failures (set to false)
- stopOnNoTests: Whether to stop if no tests are found (set to true)
- testAllContracts: Whether to test all contracts including dynamically deployed ones (set to false because we only want to test via CryticTester)
- traceAll: Whether to attach execution traces to failed tests (set to false)
Assertion Testing Configuration:
- enabled: Whether assertion testing is enabled (set to true)
- testViewMethods: Whether to test view methods for assertions (set to true)
- panicCodeConfig: Configuration for which panic codes should fail tests
Optimization Testing Configuration:
- enabled: Whether optimization testing is enabled (set to false)
- testPrefixes: Prefixes for optimization test functions (set to ["optimize_"])
Chain Configuration:
- codeSizeCheckDisabled: Allows deployment of large contracts (set to true)
- cheatCodesEnabled: Whether cheatcodes are enabled (set to true because we use the vm.prank cheatcode for switching actors)
- forkConfig: Configuration for forking mode (enable this when you want to test against live chain state)
Compilation Configuration (Do not change):
- platform: Compilation platform (set to "crytic-compile")
- target: Target directory for compilation (set to ".")
- args: Additional compilation arguments (set to ["--foundry-compile-all"])
Linking Libraries
If your project requires the use of libraries with external
functions like the following ChainlinkAdapter
:
library ChainlinkAdapter {
/// @notice Fetch price for an asset from Chainlink fixed to 8 decimals
/// @param _source Chainlink aggregator
/// @return latestAnswer Price of the asset fixed to 8 decimals
/// @return lastUpdated Last updated timestamp
function price(address _source) external view returns (uint256 latestAnswer, uint256 lastUpdated) {
uint8 decimals = IChainlink(_source).decimals();
int256 intLatestAnswer;
(, intLatestAnswer,, lastUpdated,) = IChainlink(_source).latestRoundData();
latestAnswer = intLatestAnswer < 0 ? 0 : uint256(intLatestAnswer);
if (decimals < 8) latestAnswer *= 10 ** (8 - decimals);
if (decimals > 8) latestAnswer /= 10 ** (decimals - 8);
}
}
you'll need to link the library to its deployed bytecode manually, or you'll run into a runtime error in Echidna/Medusa that can't be debugged with Foundry.
To link the libraries, you simply need to modify the configuration files mentioned above like so, being sure to not remove the "--foundry-compile-all"
flag as this allows easier debugging of compilation issues.
For Echidna:
cryticArgs: ["--compile-libraries=(ChainlinkAdapter,0xf04)", "--foundry-compile-all"]
deployContracts: [
["0xf04", "ChainlinkAdapter"]
]
For Medusa:
"fuzzing": {
...
"deploymentOrder": [
"ChainlinkAdapter"
],
...
},
"compilation": {
...
"platformConfig": {
...
"args": [
"--compile-libraries",
"(ChainlinkAdapter,0xf04)",
"--foundry-compile-all"
]
}
},
Expanding The Example Project
create-chimera-app is meant to be a starting point for your own project. You can expand the example project by adding your own contracts and properties. See the Chimera Framework section on what the role of each contract is and see the example project for how to implement properties using the framework.
If you're new to defining properties, check out this substack post which walks you through the process of converting invariants from English to Solidity using an ERC4626 vault as an example using Chimera.
Building Handlers
What Are Handlers?
Handlers are functions that help you test invariants by wrapping a call to a target contract with a function that allows you to add clamping (reduces the search space of the fuzzer) and properties all in the same execution.
For example, if we want to test the deposit function of an ERC4626 vault, we can build a handler that will call the deposit function and then assert some property about the state of the vault after the deposit is complete.
//This is the handler
function vault_deposit(uint256 assets, address receiver) public {
//We can add clamping here to reduce the search space of the fuzzer
assets = assets % underlyingAsset.balanceOf(address(this));
//This is the call to the target contract
vault.deposit(assets, receiver);
//We can add properties here to test the state of the vault after the deposit is complete
eq(vault.balanceOf(receiver), assets, "receiver shares != assets");
}
Generating Handlers In Recon
To generate handlers in Recon all you need to do is paste the ABI of the contract you'd like to build handlers for into the text box on the Build Handlers tab. To get the ABI of a contract in a Foundry project, just compile your project and look in the out
directory for a file with the same name as your contract but with a .json
extension.
After pasting the ABI into the text box, click the Build Handlers button and Recon will generate the handlers for you.
You'll then be shown a screen where you can select the specific functions you'd like to add handlers for. You can toggle which functions are selected using the buttons next to the function name.
Adding Handlers To Your Project
Once you've generated the handlers for the functions of interest, you can add them to your project by clicking the Download All Files button.
This will download a zip file containing the handlers and necessary supporting files.
You can then unzip these files and add the recon folder as a child of your test folder. Additionally, move the medusa.json
and echidna.yaml
config files into the same folder as your foundry.toml
file.
Adding Dependencies To Your Project
Recon uses the Chimera and setup-helpers dependencies in the standard template that gets generated when you build handlers so you'll need to add these to your project.
Do this with the following command:
forge install Recon-Fuzz/chimera Recon-Fuzz/setup-helpers --no-commit
You'll then need to add chimera to your remappings. If your project does this in the foundry.toml
file, you can just add the following to the remappings
array:
remappings = [
'@chimera/=lib/chimera/src/',
'@recon/=lib/setup-helpers/src/'
]
If your project uses a remappings.txt file instead you can similarly add the following to the file:
@chimera/=lib/chimera/src/
@recon/=lib/setup-helpers/src/
Running The Project
Now you're almost ready to run the project; you just need to make sure the Setup.sol file is correctly set up to deploy your system contracts.
Once you've done this and can successfully compile using the foundry command:
forge build
You can then run the project using the following commands for fuzzing:
Echidna
echidna . --contract CryticTester --config echidna.yaml
Medusa
medusa fuzz
Running Jobs In The Cloud With Recon
The jobs page is where you can run new fuzzing jobs on Recon's cloud service and see existing jobs you've run.
Using Recon’s Job running feature you can offload long-duration jobs to Recon’s cloud service so you don’t waste computational resources locally and can run long-term jobs without worrying about something failing at the last minute because of something you did on your local machine.
How To Run A Fuzzing Job
On the Jobs tab you'll see a form for running a job.
1. First select the tool you'd like to use to run your job. The Echidna, Medusa, and Foundry options use fuzzing. The Halmos and Kontrol options use formal verification.
2. Next add the repository you'd like to fuzz or verify in the GitHub Repo Url field. Additionally, you can specify a custom name for the job in the Job Name field. If you don't specify a custom name, Recon will generate one for you that is a hash with numbers and letters.
3. If you'd prefer to specify the organization, repo name, branch, and directory where the foundry.toml file is located, you can do so in the Organization, Repo Name, Branch and Directory fields.
4. Next you can specify the configuration parameters for the tool you selected in step 1 using a config file or by specifying values in the form fields directly (NOTE: these will override any parameters set in the config file if one is passed in).
5. If your project uses a dependency system in addition to Foundry, you can select the custom preinstall process in the dropdown menu.
Recipes
Recipes allow you to save job configurations for easy reuse when running jobs on Recon.
Creating A Recipe
To create a recipe, you can use the Recipes page.
1. First you need to select the tool you want to create the recipe for (Echidna, Medusa, Foundry, Halmos, or Kontrol).
2. Next you need to enter a name for your recipe.
3. You can add the URL of the repo you want to create the recipe for, which will autofill the Organization, Repo, and Branch fields, or if you prefer, you can manually fill in these fields.
4. Next you can add the fuzzing configuration for the recipe, which will override the configurations in the config file for the tool.
5. If your project uses a dependency system in addition to Foundry, you can select the custom preinstall process in the dropdown menu.
6. Once you've filled in all the fields, you can click the Create Recipe button to save the recipe.
Your recipe will then appear in the list of recipes at the bottom of the page. Recipes can be edited (using the Edit this Recipe button) or deleted (using the Delete this Recipe button) at any time.
Using A Recipe
To use a recipe, you can select the recipe from the list of recipes at the top of the Jobs page. This will prefill all the form fields for the job configuration and you can then edit any of the values as needed. The job can then be run as normal using the Run Job button.
Alerts
Alerts can be added to any recipe.
You can specify whether an alert should trigger a Webhook or send a message on Telegram.
Creating, Updating, and Viewing Alerts tied to a Recipe
- Navigate to a recipe and click on Manage Alerts in the top right. You'll be redirected to a page that displays all your alerts for that recipe.
- Scroll down and click on Manage Alerts to display and create all alerts attached to this recipe.
- You will then be able to Delete, Disable, or Edit an alert.
Each alert can have:
- A threshold: the number of broken properties at which the alert will trigger
For Telegram alerts, you just have to specify your TG username. Make sure to use Test Telegram Username, which requires sending a message to our TG bot. Recon's bot is @GetRecon_bot. Please always confirm the ID of the bot and message us with any questions.
Webhook Alerts
For webhooks, you can provide any URL and we will send the following data to it once an alert is triggered.
{
alerted: string,
jobId: string,
broken property: string,
sequence: string
}
Dynamic Replacement
Dynamic replacement allows you to replace the value of any variable in your contracts.
To see an interactive demo of how dynamic replacement works see here, to use dynamic replacement on a real project with a Recon pro account click here.
The key idea of dynamic replacement is to allow you to test multiple setup configuration variations or behaviors in your code.
You may be accustomed to creating separate "config" files that allow you to test different suite setup parameters such as:
- Maximum precision loss
- Number of actors
and other aspects of the suite.
With dynamic replacement you can change one line, click a button, and run multiple jobs that have different setup configurations, without needing to create and manage multiple branches. The chosen variables are replaced before running the fuzzer and every other part of the setup remains the same, allowing you to fine tune your testing approach for different scenarios.
When to use Dynamic Replacement
Toggle behavior via a boolean or by adding specific calldata
In the example above we want to allow certain unsafe ERC4626 operations via the ALLOWS_REKT
constant. We can use dynamic replacement in this case to toggle the boolean to true or false so that we don't need to maintain two forked branches to test this minor change.
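A minimal sketch of what this could look like in the Setup contract (the vault constructor shown is illustrative; only the constant itself is what gets replaced):
// replaced per job via Dynamic Replacement before the fuzzer starts
bool internal constant ALLOWS_REKT = false;

function setup() internal virtual override {
    vault = new Vault(address(asset), ALLOWS_REKT); // illustrative constructor
}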
Using constants for precision, decimals, hardcoded addresses
In the example above the suite was written to target a 18 decimal token. However, if the business requirements change and we want to be able to use the same suite for testing tokens of different decimal values, we can use dynamic replacement to reuse the suite with a different decimal value setup.
Fork testing
In the example above we can see the common practice of hardcoding multiple addresses on a fork test for a given chain. Using dynamic replacement we can replace these in the same branch, performing fork testing on new targets, or new chains.
Using Dynamic Replacement
Dynamic Replacement applies only to the Setup.sol
file in your repository.
To use:
- add the name of the variable you want to replace
- specify the interface/type of the variable
- specify the value you want to replace the existing value with
- (optional) add additional values you'd like to replace
Governance Fuzzing - WIP
Governance fuzzing allows us to simulate on-chain changes that modify the system state (function calls) or system configuration (governance function calls) in our forked testing setup to determine if the change causes a break in the existing properties.
How It Works
To execute governance fuzzing on a system we first need to have a fork testing setup that achieves full line coverage over the contracts of interest.
The fork setup must include calls to the vm.roll and vm.warp cheatcodes because Echidna does not automatically warp to the timestamp of the block it's forking from. Calling these cheatcodes works around this by ensuring that Echidna only makes calls at or after the block timestamp from which it's forking, using dynamic block replacement.
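A minimal sketch of such a setup, assuming the vm cheatcode handle available in a Chimera-based suite and placeholder values for the fork point:
function setup() internal virtual override {
    // pin the fork point so Echidna never makes calls before the forked block
    vm.roll(20_000_000);       // placeholder: block number of the fork point
    vm.warp(1_720_000_000);    // placeholder: timestamp of that block
    // reference or deploy the forked contracts of interest below
}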
Once governance fuzzing is set up, it's triggered by listening to an event emitted on a contract of interest. This event then kicks off a forked fuzzing job. Additionally, any values in the system setup that need to be replaced for effective testing can be replaced with new values from the triggering event or the forked chain state.
The fuzzing job that's run is a normal cloud fuzzing job and will show up on the jobs page.
Recon Tricks
We've been using Recon for over a year to run all of our cloud fuzzing jobs.
Here are some tips and tricks:
Run in exploration mode for an excessive number of tests
Exploration mode runs faster and builds a corpus more quickly.
While building your suite, make use of this mode.
Think about Corpus Reuse
Some runs can take days to explore all states.
If you reuse the corpus, those days' worth of data can be replayed within minutes.
Always reuse your corpus.
There's a reason we prominently allow you to download a corpus and paste it into any job: we know it works.
Don't acknowledge small precision losses before using Optimization Mode
Sometimes you'll break a property with a small precision threshold and think you're done.
From our experience, you should run some Optimization Tests.
The way the corpus for Optimization is built is a bit different, so running Optimization tests can lead you to different results.
Share your Jobs as soon as they start
You can save a lot of back and forth by creating a share and sending it to your team or auditors.
Recon jobs update automatically every minute, so once you know the job will work you can share it and go AFK.
Use Dynamic Replacement for Config
Thinking about multi-deployment tests? Admin Mistakes? Hardcoded Precision Tolerance?
Just set these as constants in your Setup
file and then use Dynamic Replacement.
You've already paid for the cloud service, so save time and run a few extra jobs to help you test all edge cases.
Reuse your suite for Monitoring
Some invariant testing suites end up having hundreds of thousands of dollars worth of effort put into them.
Echidna, Medusa, and Foundry all support fuzzing on forked tests.
Repurposing a Chimera suite to work with forking takes just a couple of hours, and Recon supports API jobs with Dynamic Replacement.
We've demoed using Invariant Testing as the best monitoring at Fuzz Fest
Recon Extension
The Recon extension is a Visual Studio Code extension that allows you to use Recon's suite of tools directly in your code editor. It requires a Foundry project to be open in your workspace to use.
Getting Started
After installing the extension, your project will automatically be compiled when you open a new window in your code editor.
If your foundry.toml
file is located in a directory other than the root of your project, your project will not be compiled automatically. You can select the location in the sidebar under the RECON COCKPIT section:
which will allow you to build your project from the directory containing the foundry.toml
file.
Scaffold Your Target Functions
After building your project, you can choose which of your contract functions you want to add target functions for using the CONTRACTS section in the sidebar.
Once you've selected the contract functions you'd like to scaffold, you can generate a Chimera Framework template for your project using the Scaffold button in the sidebar under the RECON COCKPIT section:
This will install the Chimera dependency, generate the necessary fuzzer config files, and generate all the necessary test files for your project using the Create Chimera App template.
Your scaffolded target functions will be added to the recon/targets
directory as ContractName + TargetFunctions.sol
(e.g. Counter.sol
's target functions will be called CounterTargetFunctions.sol
).
Note: You can change settings for the project build configurations by clicking the gear icon next to the Generate Chimera Template button. This allows you to change the Echidna and Medusa configuration files as well as the project settings.
For more on how to use the Chimera Framework, check out the Chimera Framework section.
Function Handlers
Testing Modes
When selecting which contract functions you'd like to target, you can select from three testing modes to help define how you'd like to test your target functions:
- Normal Mode - This just adds a handler function as a wrapper around the underlying target function.
- Fail Mode - This will add a handler function that includes an assertion at the end of the call to the target function. You can replace this assertion with any other assertion you'd like to test.
- Catch Mode - This will add a handler function that includes a try catch block around the call to the target function. You can add any custom logic to the catch block such as an assertion about when a function should or shouldn't revert.
The testing mode for each target function can later be changed in the generated target function contract above each function using the CodeLens buttons:
Actor Mode
Actor modes allow you to specify which actor should be calling the target function:
- Actor - This uses the currently set actor returned by _getActor() to prank and call the target function.
- Admin - This uses the admin role to call the target function (this defaults to address(this)).
The actor mode for each target function can later be changed in the generated target function contract above each function using the CodeLens buttons:
Auto Mocking
The Recon extension allows you to auto-mock specific contracts needed for your setup using the ABI to Mock tool. This tool will generate a mock of system contracts based on the contract's parameters and return values. This is useful for quickly getting coverage on the core contracts of your project that you target with your target functions while simulating the functionality of periphery contracts whose behavior is unimportant.
Auto Mocking is triggered by right clicking on a contract's ABI file in the out
directory or directly via the Contract file of the contract you'd like to mock and selecting Generate Solidity Mock. This will generate a mock of the contract in the path specified in the RECON COCKPIT settings in the Mocks Folder Path field as ContractName + Mocks.sol
(e.g. Counter.sol
's mocks will be called CounterMocks.sol
).
Run with Echidna and Medusa
After generating a Chimera template for your project you can use the Run Echidna and Run Medusa buttons in the bar at the bottom of your code editor to run your suite with Echidna and Medusa.
Report Generation
After running your suite with Echidna or Medusa a report is automatically generated which you can see in the COVERAGE REPORT section in the sidebar.
This provides a markdown report of all the properties that were tested and the results.
You can also view a condensed version of the coverage report which removes contracts that aren't touched by the fuzzer by clicking the open in new tab button in the report section.
Reproducer Generation
If a property breaks, the extension will automatically generate a Foundry unit test reproducer for the property.
Clicking yes will add the reproducer to the CryticToFoundry
contract that's created when you generate a Chimera template.
The Recon extension also provides a CodeLens utility for running the Foundry reproducer unit test above the function via the Run Test button.
Coverage Report Compressor
- Right click on the coverage report and select "Clean Up Coverage Report"
- This will compress the report by removing contracts not touched by the fuzzer from the coverage report
Display coverage report as overlay with coverage gutters
- To do this you just need to have the coverage gutters extension installed and click the report in the COVERAGE REPORT section
- This will display the coverage report as an overlay using coverage gutters
- To remove the coverage gutters you can click the File Uncovered button on the bottom of your VS code window
Medusa Log Scraper
Usage
- Paste your Medusa logs generated at the end of your test run
- All function calls will be scraped automatically from the pasted logs
- Each property will generate a Foundry reproducer unit test
- Toggle cheatcodes as needed
Echidna Log Scraper
Usage
- Paste your Echidna logs generated at the end of your test run
- All function calls will be scraped automatically from the pasted logs
- Each property will generate a Foundry reproducer unit test
- Toggle cheatcodes as needed
Invariants Builder
Scaffold an Invariant Testing suite in seconds!
Usage
- Paste your contract's ABI (available in the out/ directory after building your contracts using Foundry)
- Add the name of your pasted contract's ABI in the "Your_Contract" form field
- Download the generated contracts into your repo using the "Download All Files" button
- Follow the instructions in the "Installation Help" section for the next steps after you've added the files to your repo
Bytecode Compare
Usage
- Paste 2 contract bytecodes
- Use the "Compare" button to generate a diff between them, optionally ignoring metadata
Bytecode To Interface
Usage
- Add a contract's bytecode:
  a. Using an RPC URL and contract address
  b. By pasting the bytecode directly
- Define an interface name to give your contract in the "Interface Name" form field
- Use the "Generate Interface" button to generate a Solidity interface for the given bytecode
Bytecode Static Deployment
Usage
- Paste the deployed bytecode of an EVM contract (this should not include the constructor, as the tool will generate the Initcode from the bytecode)
- The tool will automatically output the Initcode (constructor bytecode that gets executed on contract deployment) in the Encoded field
- The Both field will include the Initcode and the deployed bytecode.
Bytecode Formatter
Usage
- Use the As calldata/As words buttons to select how you'd like to display the formatted data.
- Paste bytecode or calldata into the calldata field
- The tool will automatically format the bytecode or calldata into 32 bytes per row. For calldata the first four bytes will be used to extract the function selector.
String To Bytes
Usage
- Type a string in the String field (only strings less than 32 bytes are supported)
- The tool will automatically format the string using UTF-8 bytes encoding and add a length parameter. The topmost output will display the string without padding and the bottommost output will display the string with padding.
Chimera
Chimera is a smart contract property-based testing framework. Write once, run everywhere.
Supports invariant tests written for:
- Foundry
- Echidna
- Medusa
- Halmos
Installation
forge install Recon-Fuzz/chimera
Motivation
When writing property-based tests, developers commonly face several issues:
- the amount of boilerplate code required
- the challenge of switching tools after the initial setup
- the difficulty in sharing results between different tools
Writing invariant tests that work seamlessly with Foundry, Echidna, Medusa, and Halmos is not straightforward.
Chimera addresses this problem by enabling a "write once, run everywhere" approach.
Limitations
Chimera currently only supports cheatcodes implemented by HEVM.
Foundry has extended these and offers functionality not supported by the HEVM cheatcodes, so these must be accounted for when adding Chimera to a Foundry project, as they will cause issues when running Echidna and Medusa. If adding Chimera to an existing Foundry project, ensure that no cheatcodes are used that aren't supported by HEVM, as they will throw the following error: VM failed for unhandled reason, BadCheatCode <cheatcode hash>.
While Medusa supports etch, Echidna does not support it yet. Please note that using etch in an Echidna environment will not work as expected.
Create Chimera App
A Foundry template that allows you to bootstrap an invariant fuzz testing suite using the Chimera Framework. It works out of the box with:
- Foundry for invariant testing and debugging of broken property reproducers
- Echidna and Medusa for stateful fuzzing
- Halmos for invariant testing
It extends the default Foundry template used when running forge init
to include example property tests supported by Echidna and Medusa.
Prerequisites
To use this template you'll need to have Foundry installed and at least one fuzzer (Echidna or Medusa):
Usage
To initialize a new Foundry repo using this template run the following command in the terminal.
forge init --template https://github.com/Recon-Fuzz/create-chimera-app
Build
This template is configured to use Foundry as its build system for Echidna and Medusa, so after making any changes the project must successfully compile using the following command before running either fuzzer:
forge build
Property Testing
This template comes with property tests defined for the Counter
contract in the Properties
contract and in the function handlers in the TargetFunctions
contract.
Echidna Property Testing
To locally test properties using Echidna, run the following command in your terminal:
echidna . --contract CryticTester --config echidna.yaml
Medusa Property Testing
To locally test properties using Medusa, run the following command in your terminal:
medusa fuzz
Foundry Testing
Broken properties found when running Echidna and/or Medusa can be turned into unit tests for easier debugging with Recon (for Echidna/for Medusa) and added to the CryticToFoundry
contract.
forge test --match-contract CryticToFoundry -vv
Halmos Invariant Testing
The template works out of the box with Halmos; however, Halmos invariant testing is currently in preview.
Simply run halmos at the root of this repository to run Halmos for invariant testing.
Expanding Target Functions
After you've added new contracts in the src
directory, they can then be deployed in the Setup
contract.
The ABIs of these contracts can be taken from the out
directory and added to Recon's Invariants Builder. The target functions that the sandbox generates can then be added to the existing TargetFunctions
contract.
Log Parser
A TypeScript package to convert Medusa and Echidna traces into Foundry reproducer unit tests.
Available in an easy to use UI in our Echidna and Medusa log scraper tools.
NPM
Installation
yarn add @recon-fuzz/log-parser
Usage
import { processLogs, Fuzzer } from '@recon-fuzz/log-parser';
// Process Medusa logs
const medusaResults = processLogs(medusaLogContent, Fuzzer.MEDUSA);
// Process Echidna logs
const echidnaResults = processLogs(echidnaLogContent, Fuzzer.ECHIDNA);
API Reference
processLogs(logs: string, tool: Fuzzer): FuzzingResults
Processes fuzzing logs and returns structured results.
Parameters
- logs: string - The raw log content as a string
- tool: Fuzzer - The fuzzer tool used (Fuzzer.MEDUSA or Fuzzer.ECHIDNA)
Returns: FuzzingResults
interface FuzzingResults {
duration: string;
coverage: number;
failed: number;
passed: number;
results: any[];
traces: any[];
brokenProperties: any[];
numberOfTests: number;
}
Medusa and Echidna Conversion to Foundry
Converting Fuzzer Logs to Foundry Tests
Both Medusa and Echidna logs can be converted into Foundry test cases using the provided utility functions:
// For Echidna logs
import { echidnaLogsToFunctions } from '@recon-fuzz/log-parser';
const foundryTest = echidnaLogsToFunctions(
echidnaLogs,
"test_identifier",
"brokenProperty",
{ roll: true, time: true, prank: true }
);
// For Medusa logs
import { medusaLogsToFunctions } from '@recon-fuzz/log-parser';
const foundryTest = medusaLogsToFunctions(
medusaLogs,
"test_identifier",
{ roll: true, time: true, prank: true }
);
The conversion handles:
- VM state manipulation (block numbers, timestamps)
- Actor simulation (pranks for different senders)
- Function call sequences
- Special data types (addresses, bytes)
Generated tests will include the exact sequence of calls that triggered the property violation, making it easy to reproduce and debug issues found during fuzzing.
ABI to Invariants
NOTE: This repo is private, it's coming soon!
Given a contract ABI and options it creates:
- Target functions
- Create Chimera App Boilerplate
Available in an easy to use UI in our Invariants Builder
ABI to Mock
Given an ABI file generate a mock contract with functions that allow setting the return value of all view functions.
The simplest form of mocking that generates contracts that can be easily integrated into a fuzz testing suite.
Works well with the Recon Extension (coming soon) and the ABI to Invariants Builder.
Features
- Generates fully functional Solidity mock contracts
- Supports complex data types including structs and nested arrays
- Maintains function signatures and event definitions
- Customizable contract names and output locations
- Available as both a CLI tool and JavaScript library
Generated Mock Features
- Complete function signatures matching the original contract
- Setter functions for return values
- Complex type support (structs, arrays, mappings)
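As an illustrative sketch only (the exact code emitted by abi-to-mock may differ), the generated mock pairs each view function with a setter that controls its return value:
// Illustrative shape of a generated mock; not the tool's literal output
contract MyContractMock {
    uint256 private _getValueReturn;

    // setter used by the test suite to control the mocked return value
    function setGetValueReturn(uint256 value) external {
        _getValueReturn = value;
    }

    // mirrors the original contract's view function signature
    function getValue() external view returns (uint256) {
        return _getValueReturn;
    }
}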
Installation
npm install abi-to-mock
Usage
Command Line Interface
npx abi-to-mock <abi-file> [options]
Options:
--out, -o Output directory (default: "./out")
--name, -n Contract name (default: "Contract")
Example:
npx abi-to-mock ./MyContract.json --out ./mocks --name MyContract
Using with Foundry
forge build
npx abi-to-mock out/ERC20Mock.sol/ERC20Mock.json --name Erc20Mock --out src/
Programmatic Usage
// Node.js
const AbiToMock = require('abi-to-mock');
// or ES Modules
import AbiToMock from 'abi-to-mock';
// From file
AbiToMock('./MyContract.json', './mocks', 'MyContract');
// From ABI object
const abi = [
  {
    "inputs": [],
    "name": "getValue",
    "outputs": [{ "type": "uint256" }],
    "type": "function"
  }
];
AbiToMock.generateMock(abi, './mocks', 'MyContract');
// Browser usage
import { generateMockString } from 'abi-to-mock';
const mockCode = generateMockString(abi, 'MyContract');
API Reference
// Node.js API
AbiToMock(abiPath: string, outputDir?: string, name?: string): MockContract
AbiToMock.generateMock(abi: ABI[], outputDir: string, name: string): MockContract
// Browser API
generateMockString(abi: ABI[], name: string): string
Setup Helpers
These contracts were created with the intention of speeding up the setup process for an invariant testing suite.
For an example implementation of these contracts in use see the create-chimera-app repo.
ActorManager
The ActorManager contract serves as the source of truth for actors that are being used in the fuzzing suite setup.
The primary functions of interest when setting up a suite are:
- _addActor - allows adding a new address as an actor that can be tracked
- _switchActor - this should be exposed in a target function that can be called by the fuzzer to randomly switch between actors
To use the actors stored in the ActorManager, add the asActor modifier to all of your target function handlers; it pranks as the currently set actor.
For privileged functions you can use the asAdmin modifier, which calls the target functions as the tester contract (address(this)). The tester contract is typically set as the default admin address by convention.
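As a minimal sketch (assuming the conventions above; the exact helper signatures in your suite may differ), wiring the ActorManager into a suite looks roughly like this:
// Expose actor switching to the fuzzer (assumes _switchActor takes an entropy value)
function switchActor(uint256 entropy) public {
    _switchActor(entropy);
}

// Runs as the currently selected actor via the asActor prank
function counter_setNumber(uint256 newNumber) public asActor {
    counter.setNumber(newNumber);
}

// Privileged call made as the tester contract (the default admin by convention);
// `protocol.pause()` is a hypothetical admin-only function
function protocol_pause() public asAdmin {
    protocol.pause();
}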
AssetManager
The AssetManager allows tracking all assets being used in the fuzz suite setup.
Similar to the ActorManager, this serves as the source of truth for all assets used in the test suite; therefore no target function should be called that may transfer an asset not tracked in the AssetManager.
The primary functions of interest when setting up a suite are:
- _newAsset - deploys an instance of the MockERC20 contract and adds it to tracking so it can be accessed as needed
- _getAsset - used to clamp values used for tokens in calls to target functions
- _finalizeAssetDeployment - a utility for minting tokens added via _newAsset to all actors that have been set up and approving them to all contracts in the system that may need to call transferFrom on them (see the sketch after this list)
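A minimal sketch of how these helpers are typically used from the Setup contract; the helper signatures shown are indicative and may differ from the actual library.
// Sketch of a Setup using the AssetManager helpers; signatures are indicative
function setup() internal virtual override {
    _addActor(address(0xb0b));          // track an extra actor
    _newAsset(18);                      // deploy and track a MockERC20 with 18 decimals

    counter = new Counter();

    // mint the tracked assets to every actor and approve the system contracts
    address[] memory approvalArray = new address[](1);
    approvalArray[0] = address(counter);
    _finalizeAssetDeployment(_getActors(), approvalArray, type(uint88).max);
}

// Clamp token usage in handlers to assets tracked by the AssetManager
function asset_transfer(uint256 amount) public asActor {
    MockERC20(_getAsset()).transfer(_getActor(), amount);
}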
Utils
Provides utilities for invariant testing:
- checkError - allows checking if a revert reason from a function call is equivalent to the reason passed in as the expected argument
The checkError function can take multiple string input types that may be thrown in the case of a function revert:
- error string - the revert string from a require statement (ex: "a does not equal b" in require(a == b, "a does not equal b"))
- custom error - custom errors defined in a contract's interface (ex: error CustomError(), error CustomErrorWithArgs(uint256 val)); note that when passing in custom errors with arguments the argument name should be excluded (ex: "CustomErrorWithArgs(uint256)")
- panic reverts - panic reverts for any of the reasons defined in the Solidity compiler here
function counter_setNumber(uint256 newNumber) public updateGhosts asActor {
    try counter.setNumber(newNumber) {
        // passes
    } catch (bytes memory err) {
        bool expectedError;
        // checks for custom errors and panics
        expectedError =
            checkError(err, "abc") || // error string from require statement
            checkError(err, "CustomError()") || // custom error
            checkError(err, Panic.arithmeticPanic); // compiler panic errors
        t(expectedError, "unexpected error");
    }
}
Panic
A library that provides named variables corresponding to compiler panic messages. Used to more easily access these messages when using the checkError
utility.
library Panic {
    // compiler panics
    string constant assertionPanic = "Panic(1)";
    string constant arithmeticPanic = "Panic(17)";
    string constant divisionPanic = "Panic(18)";
    string constant enumPanic = "Panic(33)";
    string constant arrayPanic = "Panic(34)";
    string constant emptyArrayPanic = "Panic(49)";
    string constant outOfBoundsPanic = "Panic(50)";
    string constant memoryPanic = "Panic(65)";
    string constant functionPanic = "Panic(81)";
}
MockERC20
A minimal MockERC20 contract that lets you mock any standard ERC20 token that interacts with the system, without requiring external dependencies.
Properties Table

This repo provides a template for creating a table to track the properties implemented in an invariant testing suite, using the format that the Recon team uses in our engagements.
Examples
The following are examples of the use of this table for Recon engagements:
ERC7540 Reusable Properties
A collection of reusable properties for ERC-7540 asynchronous vaults.
Written in collaboration with Centrifuge.
Specification
| Property | Description | Category |
| --- | --- | --- |
| 7540-1 | convertToAssets(totalSupply) == totalAssets unless price is 0.0 | High Level |
| 7540-2 | convertToShares(totalAssets) == totalSupply unless price is 0.0 | High Level |
| 7540-3 | max* never reverts | DOS Invariant |
| 7540-4 | claiming more than max always reverts | Stateful Test |
| 7540-5 | requestRedeem reverts if the share balance is less than amount | Stateful Test |
| 7540-6 | preview* always reverts | Stateful Test |
| 7540-7 | if max[method] > 0, then [method](max) should not revert | DOS Invariant |
Usage
- Install the repo as a dependency in a Foundry project:
forge install Recon-Fuzz/erc7540-reusable-properties --no-commit
- Inherit from the ERC7540Properties contract in the contract where you've defined your properties. For a test suite harnessed with Recon it will look something like this:
abstract contract Properties is Setup, Asserts, ERC7540Properties {}
- Add a way to change the actor state variable in the ERC7540Properties contract. The simplest way to do this is by exposing a target function with something like the following implementation:
function setup_switchActor(uint8 actorIndex) public {
    actor = actorsArray[actorIndex % actorsArray.length];
}
- The properties defined in the ERC7540Properties contract are meant to hold for a vault implementation conforming to the ERC specification. If your implementation doesn't require certain properties to hold, simply exclude a wrapper for them in the contract where you define your properties:
abstract contract Properties is Setup, Asserts, ERC7540Properties {
    function crytic_erc7540_1() public returns (bool test) {
        test = erc7540_1(address(vault));
    }

    function crytic_erc7540_2() public returns (bool test) {
        test = erc7540_2(address(vault));
    }
}
By not adding a wrapper for properties other than erc7540_1 and erc7540_2, the other properties defined in ERC7540Properties don't get evaluated by the fuzzer.
Opsec Resources
These are a few resources, stemming from a talk we did to help founders ship safely: https://docs.google.com/presentation/d/1Tzcj7FPj8QmvjdO79vPqPsRxyyyEwFhWl6SoRzFZ_do/edit?usp=sharing
1 Hour Consultation
Recon offers ongoing security retainers to review multisig operations and transactions. To see if it's a fit and get some free advice, get in touch here: https://tally.so/r/3jpV6E
Smart Contracts
- Building a Safe Protocol, a 1-hour discussion with 4 repeat DeFi founders on what it takes to ship safe protocols: https://x.com/i/broadcasts/1BdxYqLoRgBxX
- A list of the most well-known smart contract exploits: https://rekt.news/leaderboard
Governance
- The Right Way to Multisig, 2 hours of discussion on the dangers of multisigs, practical day-to-day governance in protocols, and how to get it right: https://x.com/getreconxyz/status/1885249716386226572
- A review of the most popular hardware wallets for security: https://x.com/PatrickAlphaC/status/1904075663318847785
Web2
- Decode Calldata: https://calldata.swiss-knife.xyz/decoder
Physical Opsec
- A list of known physical attacks tied to crypto: https://github.com/jlopp/physical-bitcoin-attacks
Glossary
Actor
The address that will perform the calls specific to TargetFunctions. See ActorManager for how to use these in a test suite.
CryticTester
The invariant testing contract that's deployed in the Chimera Framework and that the fuzzer will use to explore state and properties with the target functions defined on it.
CryticToFoundry
A Foundry test contract used to implement unit tests of property-breaking call sequences (reproducers) obtained with testing tools (e.g. Echidna, Medusa, Halmos).
Invariant Testing
The act of testing logical statements about smart contracts using tools that manipulate their state randomly (fuzzers) or by following all possible paths (formal verifiers).
Property
A logical statement about a smart contract or system that can be tested.
Reproducer
A call trace generated by a tool (fuzzer, formal verifier) that breaks a property of a contract or system.
Invariant
A property that should always hold for a smart contract or system.
Fuzzer
An engine/solver, a program that can perform stateful tests on a smart contract or system.
Some fuzzers are concrete, others concolic, and others symbolic.
Handler
A wrapper around a target function that makes a call to a target contract.
Target Function
The set of functions that the fuzzer will call to explore state and properties for the smart contract or system.
Shrinking
A process performed by the fuzzer to remove unnecessary calls from a call sequence that don't contribute to breaking a property.
Scaffolding
The set of smart contracts put into place to organize the code and tell the fuzzer how to explore states and test properties.
Echidna
A concrete fuzzer written in Haskell, using HEVM as its EVM engine.
Medusa
A concrete fuzzer written in Go, using geth as its EVM engine.
Halmos
A symbolic fuzzer written in Python, using its own SEVM for the EVM engine.