
Chain Industries
February 09, 2026
How We Audit Smart Contracts: Process, Tools, and What We Find

Most teams treat smart contract audits as a black box. You hand over your code, wait a few weeks, and get back a PDF full of findings. What happens in between feels like a mystery.
It shouldn’t be.
Understanding how audits work makes you a better client, a better builder, and helps you write code that’s more secure before it ever reaches an auditor. It also helps you evaluate whether the audit firm you’re paying is actually doing thorough work or just running automated tools and calling it a day.
This article walks through our actual audit process, from the first call to the final report, and covers what we consistently find across dozens of engagements. If you’re preparing for an audit, this will help you know what to expect. If you’re evaluating auditors, this will help you ask the right questions.
Before the Audit Starts
The audit doesn’t start when we open the codebase. It starts with understanding what we’re looking at and why it exists.
What We Ask For
Before we write a single note, we ask clients to provide several things. First, the complete source code with all dependencies, pinned to a specific commit hash. We need to know exactly what we’re reviewing. If the code changes during the audit, findings can become invalid.
Second, documentation. This includes any whitepapers, technical specs, architecture diagrams, or even rough notes explaining what the protocol does. We need to understand the intended behavior before we can identify deviations from it.
Third, the deployment context. Which chains will this deploy on? What EVM version? Are there admin keys, multisigs, or timelocks? What’s the expected TVL? This context shapes our threat model: a contract holding $100K faces different risks than one holding $100M.
Fourth, known issues and design trade-offs. Every protocol makes deliberate compromises. We’d rather know about them upfront than waste time rediscovering decisions you’ve already made.
Scoping
We review the codebase to determine scope: which contracts are in, which are out, and how many lines of code we’re covering. Scope directly determines the audit timeline. Rushing an audit to meet a launch date is one of the most common mistakes we see, and one of the most dangerous.
A rough guideline: expect about 200–300 lines of meaningful review per auditor per day. A 2,000-line protocol typically needs 7–10 days for a thorough review. If someone offers to audit it in two days, they’re not doing manual review.
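The arithmetic behind that guideline can be sketched in a few lines. The function name and rates below are our own heuristic, not an industry standard:

```python
import math

def audit_days(loc, auditors=1, min_rate=200, max_rate=300):
    """Return (best_case, worst_case) review days for `loc` lines of code.

    min_rate/max_rate are lines of meaningful review per auditor per day.
    """
    best = math.ceil(loc / (max_rate * auditors))   # everything reviews quickly
    worst = math.ceil(loc / (min_rate * auditors))  # dense code, slow going
    return best, worst

print(audit_days(2_000))              # (7, 10): the 7-10 day estimate above
print(audit_days(2_000, auditors=2))  # (4, 5)
```

Note that adding auditors shortens the calendar but not the total effort, and very dense code (assembly, novel math) can fall well below the 200-line floor.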
Phase 1: Manual Code Review
This is where the real work happens. Automated tools are useful, but they can’t understand your protocol’s business logic. A human auditor can.
First Pass: Understanding
We read the entire codebase end to end, like reading a book. The goal isn’t to find bugs yet; it’s to understand the system. We’re building a mental model of how the protocol works: how funds flow in and out, who has permissions to do what, what happens in edge cases, and where the trust boundaries are.
During this pass, we map out the architecture. We identify which contracts are entry points, which hold funds, which have admin privileges, and how they interact. We trace the lifecycle of a user interaction from start to finish.
Second Pass: Threat Modeling
With the mental model in place, we think adversarially. For every function, we ask: what would an attacker try here?
We focus on trust boundaries: any point where the contract interacts with external actors or contracts. This includes user inputs, oracle calls, callback functions, cross-contract calls, and admin operations. Each of these is a potential attack surface.
We also look at assumptions. Every protocol makes implicit assumptions about how it will be used. Our job is to find the cases where those assumptions don’t hold.
Third Pass: Line-by-Line Review
Now we go deep. Every function, every conditional, every state change gets scrutinized. We’re looking for specific vulnerability classes, but we’re also looking for logic errors that don’t fit neatly into categories.
This is where experience matters most. An auditor who has reviewed fifty DeFi protocols recognizes patterns, both good patterns and dangerous ones, that a less experienced reviewer would miss.
Phase 2: Automated Analysis
Manual review is essential, but automation helps us cover ground faster and catch issues that are easy for humans to overlook.
Static Analysis
We run static analysis tools across every codebase. Slither is our primary tool for Solidity; it catches a wide range of issues, from reentrancy patterns to unused variables to dangerous delegatecall usage. Aderyn provides additional coverage for common vulnerability patterns.
Here’s what static analysis is good at: detecting known vulnerability patterns, identifying code quality issues, flagging deviations from best practices, and catching simple mistakes like missing access controls or unchecked return values.
Here’s what it’s bad at: understanding business logic, detecting economic attacks, finding complex multi-step vulnerabilities, and evaluating whether a design decision is appropriate for the protocol’s specific context.
We never rely on automated tools alone. We’ve seen plenty of contracts that pass every automated check and still have critical vulnerabilities that only a human can spot.
Fuzzing
Fuzzing generates random inputs and throws them at your contracts to find unexpected behavior. We use Foundry’s built-in fuzzer for most engagements, and Echidna for more complex property-based testing.
The key is defining good properties. An invariant is a condition that should hold regardless of input:
```solidity
// Example: total shares should never exceed total assets
function invariant_sharesNeverExceedAssets() public {
    assert(vault.totalShares() <= vault.totalAssets());
}

// Example: user balance should never exceed total supply
function invariant_userBalanceBounded() public {
    assert(token.balanceOf(address(this)) <= token.totalSupply());
}
```
When fuzzing breaks an invariant, we have a concrete input sequence that demonstrates the bug, which makes it much easier to understand and fix.
Symbolic Execution
For critical code paths, we sometimes use symbolic execution to mathematically verify properties. This is more thorough than fuzzing but also more resource-intensive, so we reserve it for the highest-risk components: typically token math, access control logic, and fund withdrawal paths.
Phase 3: Attack Simulation
Finding a potential vulnerability is one thing. Proving it’s exploitable is another.
Proof-of-Concept Exploits
For every high and critical finding, we write a proof-of-concept (PoC) that demonstrates the attack. This isn’t optional; it’s how we verify that the vulnerability is real, not theoretical.
A typical PoC is a Foundry test that simulates an attacker exploiting the vulnerability:
```solidity
function test_exploitAccessControl() public {
    // Setup: attacker has no special permissions
    address attacker = makeAddr("attacker");

    // Step 1: Attacker calls unprotected function
    vm.prank(attacker);
    vault.emergencyWithdraw(address(token), 1000e18);

    // Step 2: Verify funds were stolen
    assertEq(token.balanceOf(attacker), 1000e18);
    assertEq(token.balanceOf(address(vault)), 0);
}
```
If we can’t write a working PoC, we downgrade the severity. Theory isn’t enough; we need to prove exploitability.
Fork Testing
We run attack simulations against mainnet forks to test interactions with real deployed contracts. This catches issues that only appear in the context of the live ecosystem: oracle behavior, liquidity conditions, gas costs, and interactions with other protocols.
```solidity
function test_oracleManipulationOnFork() public {
    // Fork mainnet at a specific block
    vm.createSelectFork("mainnet", 18_500_000);

    // Simulate oracle manipulation via flash loan
    // ...

    // Verify the protocol behaves incorrectly
    // ...
}
```
Cross-Contract Analysis
Many vulnerabilities only appear when you consider how contracts interact with each other. A function that’s safe in isolation might be dangerous when called by another contract in a specific sequence.
We trace execution paths across contract boundaries, looking for reentrancy vectors, state inconsistencies, and privilege escalation chains.
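The classic cross-contract failure is reentrancy. This toy Python model (not Solidity, just the control flow) shows a vault that makes its external call before updating state, and a receiver that re-enters:

```python
class Vault:
    def __init__(self):
        self.balances = {}
        self.total = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.total += amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount == 0:
            return
        # BUG: the external call runs before the balance is zeroed,
        # so a malicious receiver can re-enter withdraw()
        who.receive(self, amount)
        self.total -= amount
        self.balances[who] = 0


class Attacker:
    def __init__(self):
        self.stolen = 0
        self.reentries = 2  # bounded recursion, standing in for a gas limit

    def receive(self, vault, amount):
        self.stolen += amount
        if self.reentries > 0:
            self.reentries -= 1
            vault.withdraw(self)  # re-enter before the state update


vault, attacker = Vault(), Attacker()
vault.deposit(attacker, 100)
vault.withdraw(attacker)
print(attacker.stolen)  # 300: one 100-token balance, withdrawn three times
```

Moving the state update before the external call (checks-effects-interactions) makes the second `withdraw` see a zero balance and the attack collapses.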
What We Actually Find
After dozens of audits, patterns emerge. Here are the most common findings we see, organized by how frequently they appear.
Access Control Issues
This is the single most common finding. Functions that should be restricted to specific roles are left open, or role checks are implemented incorrectly.
Common bug: missing access control on a sensitive function:
```solidity
// VULNERABLE: Anyone can call this
function setFeeRecipient(address _recipient) external {
    feeRecipient = _recipient;
}
```
Fixed:
```solidity
function setFeeRecipient(address _recipient) external onlyOwner {
    require(_recipient != address(0), "Zero address");
    feeRecipient = _recipient;
}
```
This seems obvious, but we find it in nearly every audit. It’s especially common in contracts with many admin functions: developers add a new function and forget the modifier.
Precision Loss in Token Math
DeFi protocols deal with tokens that have different decimal places, exchange rates that change over time, and rounding that compounds across operations. Small precision errors can be exploited at scale.
Common bug: rounding in the wrong direction:
```solidity
// VULNERABLE: Rounds up, in the user's favor, on withdrawal
function withdraw(uint256 shares) external returns (uint256 assets) {
    // Ceiling division hands the user the rounding dust on every call
    assets = (shares * totalAssets() + totalSupply() - 1) / totalSupply();
    _burn(msg.sender, shares);
    token.transfer(msg.sender, assets);
}
```
Fixed: round down on withdrawal (against the user):
```solidity
function withdraw(uint256 shares) external returns (uint256 assets) {
    assets = Math.mulDiv(shares, totalAssets(), totalSupply(), Math.Rounding.Down);
    _burn(msg.sender, shares);
    token.transfer(msg.sender, assets);
}
```
The general rule: round against the user to prevent extraction of value through repeated operations. Round down when users receive assets, round up when users deposit.
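The effect of rounding direction is easiest to see in plain integer arithmetic. In this Python sketch, `//` floors (like Solidity division), while the ceiling version models rounding in the user’s favor:

```python
def shares_to_assets_floor(shares, total_assets, total_supply):
    # Rounds down: the user gets slightly less; the vault keeps the dust
    return shares * total_assets // total_supply

def shares_to_assets_ceil(shares, total_assets, total_supply):
    # Rounds up: the user gets slightly more than the shares are worth
    return -(-(shares * total_assets) // total_supply)

# Vault with 1,000 assets backing 3,000 shares: each share is worth 1/3
print(shares_to_assets_floor(1, 1000, 3000))  # 0: dust stays in the vault
print(shares_to_assets_ceil(1, 1000, 3000))   # 1: a free unit per withdrawal
```

Under the ceiling version, repeated one-share withdrawals extract value that other depositors paid for; under the floor version the same loop just leaves dust behind.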
Missing Input Validation
Functions that accept external input without validating it create opportunities for unexpected behavior.
Common bug: no validation on a critical parameter:
```solidity
// VULNERABLE: slippage can be set to 100%, allowing sandwich attacks
function swap(
    address tokenIn,
    address tokenOut,
    uint256 amountIn,
    uint256 minAmountOut  // User can pass 0
) external {
    // ... performs swap with no minimum output protection
}
```
Fixed:
```solidity
function swap(
    address tokenIn,
    address tokenOut,
    uint256 amountIn,
    uint256 minAmountOut
) external {
    require(tokenIn != address(0) && tokenOut != address(0), "Invalid token");
    require(amountIn > 0, "Zero amount");
    require(minAmountOut > 0, "Slippage protection required");
    require(tokenIn != tokenOut, "Same token");
    // ... performs swap
}
```
Oracle Manipulation
Protocols that rely on price oracles are vulnerable to manipulation, especially when using spot prices from AMMs.
Common bug: using the spot price as an oracle:
```solidity
// VULNERABLE: Price can be manipulated via flash loan
function getPrice() public view returns (uint256) {
    (uint112 reserve0, uint112 reserve1, ) = pair.getReserves();
    return (uint256(reserve1) * 1e18) / uint256(reserve0);
}
```
Fixed: use a TWAP or Chainlink:
```solidity
function getPrice() public view returns (uint256) {
    (, int256 price, , uint256 updatedAt, ) = priceFeed.latestRoundData();
    require(price > 0, "Invalid price");
    require(block.timestamp - updatedAt < STALENESS_THRESHOLD, "Stale price");
    return uint256(price);
}
```
Spot prices from DEXs can be manipulated within a single transaction using flash loans. Always use time-weighted averages or reputable oracle networks for pricing.
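The manipulation is easy to quantify with the constant-product formula. This Python sketch uses a toy pool with no fees; the numbers are purely illustrative:

```python
def spot_price(reserve_in, reserve_out):
    # Mirrors the vulnerable getPrice(): the reserve ratio is the price
    return reserve_out / reserve_in

def swap(reserve_in, reserve_out, amount_in):
    # Constant-product swap (x * y = k), fees ignored for simplicity
    k = reserve_in * reserve_out
    new_in = reserve_in + amount_in
    new_out = k / new_in
    return new_in, new_out

r_in, r_out = 1_000_000, 1_000_000          # balanced pool, price = 1.0
print(spot_price(r_in, r_out))               # 1.0
r_in, r_out = swap(r_in, r_out, 9_000_000)   # flash-loan-sized trade
print(spot_price(r_in, r_out))               # 0.01: price moved 100x in one tx
```

Because the attacker can unwind the trade and repay the flash loan in the same transaction, the distortion costs them almost nothing; a TWAP averages it away, and a Chainlink feed never sees it.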
Unsafe Upgrade Patterns
Upgradeable contracts introduce a whole category of vulnerabilities. The most common: unprotected initializers and storage collisions.
Common bug: the initializer can be called by anyone:
```solidity
// VULNERABLE: No access control; anyone can front-run the first
// call to initialize() and make themselves the owner
function initialize(address _owner) public initializer {
    owner = _owner;
}
```
Fixed:
```solidity
function initialize(address _owner) public initializer {
    require(_owner != address(0), "Zero address");
    __Ownable_init(_owner);
    __UUPSUpgradeable_init();
}

function _authorizeUpgrade(address newImplementation) internal override onlyOwner {}
```
We also check for storage layout compatibility between implementations. Adding, removing, or reordering state variables between upgrades can corrupt existing data.
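A toy model of how Solidity assigns slots makes the hazard visible. This Python sketch is a simplification (it ignores variable packing, mappings, and inheritance):

```python
def layout(variables):
    # Assign each declared state variable a sequential storage slot,
    # the way Solidity lays out simple (non-packed) value types
    return {name: slot for slot, name in enumerate(variables)}

v1 = layout(["owner", "feeRecipient", "paused"])
v2 = layout(["owner", "paused", "feeRecipient"])  # reordered in the upgrade

print(v1["paused"])  # 2
print(v2["paused"])  # 1: the new code now reads the old feeRecipient's slot
```

Behind a proxy, the data stays where version 1 wrote it, so after the upgrade `paused` is decoded from the bits of the old `feeRecipient` address. The safe pattern is append-only layouts, usually enforced with a storage-gap or namespaced-storage convention.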
Frontrunning and MEV
Any transaction that depends on ordering or timing can be exploited by MEV bots. Common targets include: large swaps without slippage protection, liquidations with profit margins, governance votes with last-minute changes, and token launches or auctions.
We evaluate which functions are MEV-sensitive and recommend mitigations like commit-reveal schemes, private mempools, slippage bounds, and deadline parameters.
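Of these, commit-reveal is the least self-explanatory: users first publish a hash of their choice, then disclose it once ordering no longer matters. A minimal Python sketch (the function names are ours, and a real scheme also needs deadlines and deposit/slash logic):

```python
import hashlib
import secrets

def commit(value: bytes, salt: bytes) -> bytes:
    # Phase 1: publish only the hash; observers can't see the value,
    # and the random salt prevents guessing small value spaces
    return hashlib.sha256(value + salt).digest()

def reveal_ok(commitment: bytes, value: bytes, salt: bytes) -> bool:
    # Phase 2: after the commit window closes, prove the original value
    return hashlib.sha256(value + salt).digest() == commitment

salt = secrets.token_bytes(32)
c = commit(b"vote:yes", salt)
print(reveal_ok(c, b"vote:yes", salt))  # True
print(reveal_ok(c, b"vote:no", salt))   # False: can't change after committing
```

Because the bot in the mempool only ever sees hashes, there is nothing actionable to frontrun; the cost is a second transaction and a delay.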
The Report: What Good Looks Like
The audit report is the primary deliverable. A good report is more than a list of bugs; it’s a document that helps you understand your protocol’s security posture and take action.
Severity Classification
We classify every finding into one of four severity levels.
Critical means an attacker can steal funds, permanently break the protocol, or cause irreversible damage. These need to be fixed before deployment, no exceptions.
High means significant risk of fund loss or protocol disruption, but with some conditions or limitations. These should be fixed before deployment.
Medium means the issue could lead to unexpected behavior or moderate risk under specific conditions. These should be addressed but may be acceptable depending on context.
Low means minor issues, code quality improvements, or best practice violations. These are recommendations, not requirements.
What Each Finding Includes
Every finding in our reports contains a description of the vulnerability in plain language, the severity level with justification, the exact location in the code (file, function, line numbers), a proof-of-concept or clear reproduction steps, the potential impact if exploited, and a recommended fix with example code.
We also include context about why the vulnerability exists and what patterns to follow to avoid similar issues in the future. The goal is education, not just bug-finding.
Executive Summary
Every report starts with a high-level summary that non-technical stakeholders can understand. This covers the overall security posture, the most significant risks, and a clear recommendation on deployment readiness. If the protocol isn’t ready for mainnet, we say so directly.
After the Audit
The audit report isn’t the end; it’s a checkpoint.
Re-Review of Fixes
After the client addresses our findings, we review every fix. This is critical because remediation code is new code, and new code can introduce new bugs. We’ve seen cases where a fix for one vulnerability accidentally created another.
We verify that each fix addresses the root cause (not just the symptom), doesn’t introduce new issues, doesn’t break existing functionality, and matches the severity of the original finding.
Common Remediation Mistakes
There are patterns we see repeatedly in how teams fix audit findings.
Fixing the symptom instead of the cause means addressing the specific exploit scenario we described rather than the underlying vulnerability class. If we found a reentrancy bug in one function, check all functions, not just the one we flagged.
Over-engineering the fix sometimes happens when teams add complex solutions to simple problems. The best fixes are usually the simplest: add a modifier, reorder operations, validate an input.
Introducing regressions is surprisingly common. A fix in one contract can break an assumption in another. Always run your full test suite after every change.
Deployment Checklist
Before going to mainnet after an audit, we recommend verifying several things. Confirm that all critical and high findings are resolved. Ensure the deployed bytecode matches the audited code exactly. Verify that admin keys are properly secured with multisig or timelock. Set up monitoring for unusual transaction patterns. Have an incident response plan ready. And know how to pause the protocol if something goes wrong.
Ongoing Security
An audit is a snapshot of security at a point in time. The moment you change a single line of code, the audit’s guarantees no longer fully apply.
For protocols that evolve, we recommend establishing a regular audit cadence for significant changes, running continuous fuzzing in CI, using monitoring tools to detect anomalous on-chain behavior, and maintaining a bug bounty program to incentivize responsible disclosure.
Conclusion
Smart contract audits aren’t magic. They’re a structured process of understanding, analyzing, attacking, and documenting. The quality of the audit depends entirely on the rigor of that process and the experience of the people executing it.
A few things to remember. An audit is a checkpoint, not a guarantee. No audit can promise zero vulnerabilities, but a thorough audit dramatically reduces risk. Preparation matters. Teams that provide clear documentation, clean code, and good test coverage get more value from their audit. Fix the root cause, not the symptom. When you receive findings, address the underlying patterns, not just the specific instances. Security is ongoing. An audit covers a moment in time. As your protocol evolves, your security practices need to evolve with it.
If you’re preparing for an audit and want to make sure you get the most value from the process, we’re happy to help, whether that’s conducting the audit ourselves or helping you prepare for someone else’s.
Most teams treat smart contract audits as a black box. You hand over your code, wait a few weeks, and get back a PDF full of findings. What happens in between feels like a mystery.
It shouldn’t be.
Understanding how audits work makes you a better client, a better builder, and helps you write code that’s more secure before it ever reaches an auditor. It also helps you evaluate whether the audit firm you’re paying is actually doing thorough work or just running automated tools and calling it a day.
This article walks through our actual audit process, from the first call to the final report, and covers what we consistently find across dozens of engagements. If you’re preparing for an audit, this will help you know what to expect. If you’re evaluating auditors, this will help you ask the right questions.
Before the Audit Starts
The audit doesn’t start when we open the codebase. It starts with understanding what we’re looking at and why it exists.
What We Ask For
Before we write a single note, we ask clients to provide several things. First, the complete source code with all dependencies, pinned to a specific commit hash. We need to know exactly what we’re reviewing. If the code changes during the audit, findings can become invalid.
Second, documentation. This includes any whitepapers, technical specs, architecture diagrams, or even rough notes explaining what the protocol does. We need to understand the intended behavior before we can identify deviations from it.
Third, the deployment context. Which chains will this deploy on? What EVM version? Are there admin keys, multisigs, or timelocks? What’s the expected TVL? This context shapes our threat model, a contract holding $100K faces different risks than one holding $100M.
Fourth, known issues and design trade-offs. Every protocol makes deliberate compromises. We’d rather know about them upfront than waste time rediscovering decisions you’ve already made.
Scoping
We review the codebase to determine scope: which contracts are in, which are out, and how many lines of code we’re covering. Scope directly determines the audit timeline. Rushing an audit to meet a launch date is one of the most common mistakes we see, and one of the most dangerous.
A rough guideline: expect about 200–300 lines of meaningful review per auditor per day. A 2,000-line protocol typically needs 7–10 days for a thorough review. If someone offers to audit it in two days, they’re not doing manual review.
Phase 1: Manual Code Review
This is where the real work happens. Automated tools are useful, but they can’t understand your protocol’s business logic. A human auditor can.
First Pass: Understanding
We read the entire codebase end to end, like reading a book. The goal isn’t to find bugs yet, it’s to understand the system. We’re building a mental model of how the protocol works: how funds flow in and out, who has permissions to do what, what happens in edge cases, and where the trust boundaries are.
During this pass, we map out the architecture. We identify which contracts are entry points, which hold funds, which have admin privileges, and how they interact. We trace the lifecycle of a user interaction from start to finish.
Second Pass: Threat Modeling
With the mental model in place, we think adversarially. For every function, we ask: what would an attacker try here?
We focus on trust boundaries, any point where the contract interacts with external actors or contracts. This includes user inputs, oracle calls, callback functions, cross-contract calls, and admin operations. Each of these is a potential attack surface.
We also look at assumptions. Every protocol makes implicit assumptions about how it will be used. Our job is to find the cases where those assumptions don’t hold.
Third Pass: Line-by-Line Review
Now we go deep. Every function, every conditional, every state change gets scrutinized. We’re looking for specific vulnerability classes, but we’re also looking for logic errors that don’t fit neatly into categories.
This is where experience matters most. An auditor who has reviewed fifty DeFi protocols recognizes patterns, both good patterns and dangerous ones, that a less experienced reviewer would miss.
Phase 2: Automated Analysis
Manual review is essential, but automation helps us cover ground faster and catch issues that are easy for humans to overlook.
Static Analysis
We run static analysis tools across every codebase. Slither is our primary tool for Solidity, it catches a wide range of issues from reentrancy patterns to unused variables to dangerous delegatecall usage. Aderyn provides additional coverage for common vulnerability patterns.
Here’s what static analysis is good at: detecting known vulnerability patterns, identifying code quality issues, flagging deviations from best practices, and catching simple mistakes like missing access controls or unchecked return values.
Here’s what it’s bad at: understanding business logic, detecting economic attacks, finding complex multi-step vulnerabilities, and evaluating whether a design decision is appropriate for the protocol’s specific context.
We never rely on automated tools alone. We’ve seen plenty of contracts that pass every automated check and still have critical vulnerabilities that only a human can spot.
Fuzzing
Fuzzing generates random inputs and throws them at your contracts to find unexpected behavior. We use Foundry’s built-in fuzzer for most engagements, and Echidna for more complex property-based testing.
The key is defining good properties, invariants that should always hold true regardless of input:
// Example: total shares should never exceed total assets function invariant_sharesNeverExceedAssets() public { assert(vault.totalShares() <= vault.totalAssets()); } // Example: user balance should never exceed total supply function invariant_userBalanceBounded() public { assert(token.balanceOf(address(this)) <= token.totalSupply()); }
When fuzzing breaks an invariant, we have a concrete input sequence that demonstrates the bug, which makes it much easier to understand and fix.
Symbolic Execution
For critical code paths, we sometimes use symbolic execution to mathematically verify properties. This is more thorough than fuzzing but also more resource-intensive, so we reserve it for the highest-risk components, typically token math, access control logic, and fund withdrawal paths.
Phase 3: Attack Simulation
Finding a potential vulnerability is one thing. Proving it’s exploitable is another.
Proof-of-Concept Exploits
For every high and critical finding, we write a proof-of-concept (PoC) that demonstrates the attack. This isn’t optional, it’s how we verify that the vulnerability is real, not theoretical.
A typical PoC is a Foundry test that simulates an attacker exploiting the vulnerability:
function test_exploitAccessControl() public { // Setup: attacker has no special permissions address attacker = makeAddr("attacker"); // Step 1: Attacker calls unprotected function vm.prank(attacker); vault.emergencyWithdraw(address(token), 1000e18); // Step 2: Verify funds were stolen assertEq(token.balanceOf(attacker), 1000e18); assertEq(token.balanceOf(address(vault)), 0); }
If we can’t write a working PoC, we downgrade the severity. Theory isn’t enough, we need to prove exploitability.
Fork Testing
We run attack simulations against mainnet forks to test interactions with real deployed contracts. This catches issues that only appear in the context of the live ecosystem, oracle behavior, liquidity conditions, gas costs, and interactions with other protocols.
function test_oracleManipulationOnFork() public { // Fork mainnet at a specific block vm.createSelectFork("mainnet", 18_500_000); // Simulate oracle manipulation via flash loan // ... // Verify the protocol behaves incorrectly // ... }
Cross-Contract Analysis
Many vulnerabilities only appear when you consider how contracts interact with each other. A function that’s safe in isolation might be dangerous when called by another contract in a specific sequence.
We trace execution paths across contract boundaries, looking for reentrancy vectors, state inconsistencies, and privilege escalation chains.
What We Actually Find
After dozens of audits, patterns emerge. Here are the most common findings we see, organized by how frequently they appear.
Access Control Issues
This is the single most common finding. Functions that should be restricted to specific roles are left open, or role checks are implemented incorrectly.
Common bug, missing access control on sensitive function:
// VULNERABLE: Anyone can call this function setFeeRecipient(address _recipient) external { feeRecipient = _recipient; }
Fixed:
function setFeeRecipient(address _recipient) external onlyOwner { require(_recipient != address(0), "Zero address"); feeRecipient = _recipient; }
This seems obvious, but we find it in nearly every audit. It’s especially common in contracts with many admin functions, developers add a new function and forget the modifier.
Precision Loss in Token Math
DeFi protocols deal with tokens that have different decimal places, exchange rates that change over time, and rounding that compounds across operations. Small precision errors can be exploited at scale.
Common bug, rounding in the wrong direction:
// VULNERABLE: Rounds in user's favor on withdrawal function withdraw(uint256 shares) external returns (uint256 assets) { assets = (shares * totalAssets()) / totalSupply(); _burn(msg.sender, shares); token.transfer(msg.sender, assets); }
Fixed, round down on withdrawal (against the user):
function withdraw(uint256 shares) external returns (uint256 assets) { assets = Math.mulDiv(shares, totalAssets(), totalSupply(), Math.Rounding.Down); _burn(msg.sender, shares); token.transfer(msg.sender, assets); }
The general rule: round against the user to prevent extraction of value through repeated operations. Round down when users receive assets, round up when users deposit.
Missing Input Validation
Functions that accept external input without validating it create opportunities for unexpected behavior.
Common bug, no validation on critical parameter:
// VULNERABLE: slippage can be set to 100%, allowing sandwich attacks function swap( address tokenIn, address tokenOut, uint256 amountIn, uint256 minAmountOut // User can pass 0 ) external { // ... performs swap with no minimum output protection }
Fixed:
function swap( address tokenIn, address tokenOut, uint256 amountIn, uint256 minAmountOut ) external { require(tokenIn != address(0) && tokenOut != address(0), "Invalid token"); require(amountIn > 0, "Zero amount"); require(minAmountOut > 0, "Slippage protection required"); require(tokenIn != tokenOut, "Same token"); // ... performs swap }
Oracle Manipulation
Protocols that rely on price oracles are vulnerable to manipulation, especially when using spot prices from AMMs.
Common bug, using spot price as oracle:
// VULNERABLE: Price can be manipulated via flash loan function getPrice() public view returns (uint256) { (uint112 reserve0, uint112 reserve1, ) = pair.getReserves(); return (uint256(reserve1) * 1e18) / uint256(reserve0); }
Fixed, use TWAP or Chainlink:
function getPrice() public view returns (uint256) { (, int256 price, , uint256 updatedAt, ) = priceFeed.latestRoundData(); require(price > 0, "Invalid price"); require(block.timestamp - updatedAt < STALENESS_THRESHOLD, "Stale price"); return uint256(price); }
Spot prices from DEXs can be manipulated within a single transaction using flash loans. Always use time-weighted averages or reputable oracle networks for pricing.
Unsafe Upgrade Patterns
Upgradeable contracts introduce a whole category of vulnerabilities. The most common: unprotected initializers and storage collisions.
Common bug, initializer can be called by anyone:
```solidity
// VULNERABLE: no access control on the first call, so anyone who
// front-runs deployment can call initialize and claim ownership
function initialize(address _owner) public initializer {
    owner = _owner;
}
```
Fixed:
```solidity
function initialize(address _owner) public initializer {
    require(_owner != address(0), "Zero address");
    __Ownable_init(_owner);
    __UUPSUpgradeable_init();
}

function _authorizeUpgrade(address newImplementation) internal override onlyOwner {}
```
We also check for storage layout compatibility between implementations. Adding, removing, or reordering state variables between upgrades can corrupt existing data.
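To make the collision risk concrete, here is a minimal sketch (contract and variable names are illustrative, not from any audited codebase) of how inserting a variable mid-layout corrupts state behind a proxy:

```solidity
// Hypothetical V1 layout behind an upgradeable proxy
contract VaultV1 {
    address public owner;         // slot 0
    uint256 public totalDeposits; // slot 1
}

// VULNERABLE upgrade: inserting a variable shifts every slot below it
contract VaultV2Bad {
    address public owner;          // slot 0
    address public feeRecipient;   // slot 1 -- now reads V1's totalDeposits!
    uint256 public totalDeposits;  // slot 2 -- reads whatever was in slot 2
}

// Safe upgrade: existing variables keep their slots, new ones are appended
contract VaultV2Good {
    address public owner;          // slot 0 (unchanged)
    uint256 public totalDeposits;  // slot 1 (unchanged)
    address public feeRecipient;   // slot 2 (appended)
}
```

Tools like `forge inspect <Contract> storage-layout` can diff the layouts of two implementations before an upgrade ships.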
Frontrunning and MEV
Any transaction that depends on ordering or timing can be exploited by MEV bots. Common targets include: large swaps without slippage protection, liquidations with profit margins, governance votes with last-minute changes, and token launches or auctions.
We evaluate which functions are MEV-sensitive and recommend mitigations like commit-reveal schemes, private mempools, slippage bounds, and deadline parameters.
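Two of those mitigations, slippage bounds and deadline parameters, can be sketched together (a minimal illustration; `_executeSwap` is an assumed internal helper, not a real API):

```solidity
// Reject transactions that sit in the mempool past their deadline
modifier ensure(uint256 deadline) {
    require(block.timestamp <= deadline, "Transaction expired");
    _;
}

function swap(
    uint256 amountIn,
    uint256 minAmountOut, // slippage bound, computed off-chain from a quote
    uint256 deadline      // prevents stale transactions from executing later
) external ensure(deadline) {
    uint256 amountOut = _executeSwap(amountIn); // assumed swap logic
    require(amountOut >= minAmountOut, "Slippage exceeded");
}
```

The deadline limits how long a pending transaction stays exploitable; the output check caps how much value a sandwich can extract even if it does execute.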
The Report: What Good Looks Like
The audit report is the primary deliverable. A good report is more than a list of bugs, it’s a document that helps you understand your protocol’s security posture and take action.
Severity Classification
We classify every finding into one of four severity levels.
Critical means an attacker can steal funds, permanently break the protocol, or cause irreversible damage. These need to be fixed before deployment, no exceptions.
High means significant risk of fund loss or protocol disruption, but with some conditions or limitations. These should be fixed before deployment.
Medium means the issue could lead to unexpected behavior or moderate risk under specific conditions. These should be addressed but may be acceptable depending on context.
Low means minor issues, code quality improvements, or best practice violations. These are recommendations, not requirements.
What Each Finding Includes
Every finding in our reports contains a description of the vulnerability in plain language, the severity level with justification, the exact location in the code (file, function, line numbers), a proof-of-concept or clear reproduction steps, the potential impact if exploited, and a recommended fix with example code.
We also include context about why the vulnerability exists and what patterns to follow to avoid similar issues in the future. The goal is education, not just bug-finding.
Executive Summary
Every report starts with a high-level summary that non-technical stakeholders can understand. This covers the overall security posture, the most significant risks, and a clear recommendation on deployment readiness. If the protocol isn’t ready for mainnet, we say so directly.
After the Audit

The audit report isn’t the end, it’s a checkpoint.
Re-Review of Fixes
After the client addresses our findings, we review every fix. This is critical because remediation code is new code, and new code can introduce new bugs. We’ve seen cases where a fix for one vulnerability accidentally created another.
We verify that each fix addresses the root cause (not just the symptom), doesn’t introduce new issues, doesn’t break existing functionality, and matches the severity of the original finding.
Common Remediation Mistakes
There are patterns we see repeatedly in how teams fix audit findings.
Fixing the symptom instead of the cause means addressing the specific exploit scenario we described rather than the underlying vulnerability class. If we found a reentrancy bug in one function, check all functions, not just the one we flagged.
Over-engineering the fix sometimes happens when teams add complex solutions to simple problems. The best fixes are usually the simplest: add a modifier, reorder operations, validate an input.
Introducing regressions is surprisingly common. A fix in one contract can break an assumption in another. Always run your full test suite after every change.
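As an illustration of fixing the class rather than the instance: the checks-effects-interactions ordering below is a pattern to apply to every function that makes an external call, not just the one an auditor flagged (a minimal sketch; the `Claims` contract and its `balances` mapping are illustrative):

```solidity
contract Claims {
    mapping(address => uint256) public balances;

    function claim() external {
        uint256 owed = balances[msg.sender];            // checks
        require(owed > 0, "Nothing to claim");
        balances[msg.sender] = 0;                       // effects: state cleared first
        (bool ok, ) = msg.sender.call{value: owed}(""); // interactions last
        require(ok, "Transfer failed");
    }
}
```

Because the balance is zeroed before the external call, a reentrant call into `claim` sees `owed == 0` and reverts.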
Deployment Checklist
Before going to mainnet after an audit, we recommend verifying several things. Confirm that all critical and high findings are resolved. Ensure the deployed bytecode matches the audited code exactly. Verify that admin keys are properly secured with multisig or timelock. Set up monitoring for unusual transaction patterns. Have an incident response plan ready. And know how to pause the protocol if something goes wrong.
Ongoing Security
An audit is a snapshot of security at a point in time. The moment you change a single line of code, the audit’s guarantees no longer fully apply.
For protocols that evolve, we recommend establishing a regular audit cadence for significant changes, running continuous fuzzing in CI, using monitoring tools to detect anomalous on-chain behavior, and maintaining a bug bounty program to incentivize responsible disclosure.
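Continuous fuzzing in CI can be as simple as a Foundry property test run on every commit. A hedged sketch, assuming a hypothetical `Vault` contract with payable `deposit`, `withdraw`, and `balanceOf` (names are illustrative):

```solidity
import {Test} from "forge-std/Test.sol";

contract VaultFuzzTest is Test {
    Vault vault; // assumed contract under test

    function setUp() public {
        vault = new Vault();
    }

    // Foundry calls this with many randomized values of `amount`
    function testFuzz_DepositWithdrawRoundTrip(uint96 amount) public {
        vm.assume(amount > 0);
        vault.deposit{value: amount}();
        uint256 withdrawn = vault.withdraw(vault.balanceOf(address(this)));
        // Invariant: a round trip never pays out more than was deposited
        assertLe(withdrawn, amount);
    }
}
```

Running `forge test` in CI turns this invariant into a regression gate: any future change that breaks the round-trip property fails the build before it reaches mainnet.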
Conclusion
Smart contract audits aren’t magic. They’re a structured process of understanding, analyzing, attacking, and documenting. The quality of the audit depends entirely on the rigor of that process and the experience of the people executing it.
A few things to remember. An audit is a checkpoint, not a guarantee. No audit can promise zero vulnerabilities, but a thorough audit dramatically reduces risk. Preparation matters. Teams that provide clear documentation, clean code, and good test coverage get more value from their audit. Fix the root cause, not the symptom. When you receive findings, address the underlying patterns, not just the specific instances. Security is ongoing. An audit covers a moment in time. As your protocol evolves, your security practices need to evolve with it.
If you’re preparing for an audit and want to make sure you get the most value from the process, we’re happy to help, whether that’s conducting the audit ourselves or helping you prepare for someone else’s.