TLDR: We’re in the middle of a software security arms race—hackers are adopting new tricks every day. As cyberattacks become more sophisticated, Almanax is leveraging LLMs to detect hidden threats in smart contracts and software supply chains once deemed impossible to catch.
The PR team for Web3 is working overtime: the number of hacks is enough to make any crypto advocate never touch their wallet again. The bigger projects get, the more attractive they become as targets for sophisticated attacks, often with devastating consequences. Consider the Bybit hack, which cost victims $1.46 billion, a staggering figure and one of the most significant cyber heists of all time.
How does an entire industry plagued with so many issues tackle this challenge? We need a layered approach. At Almanax, we developed an AI-powered security platform that scans for a wide array of vulnerabilities in the most common smart contract languages (Solidity, Rust, and Move). Traditional tools such as static analyzers often miss the logic bugs once deemed machine-unauditable (Demystifying Exploitable Bugs in Smart Contracts); Almanax steps in to catch even the subtlest business logic flaws.
By leveraging LLMs, Almanax excels at catching issues like the one discovered in this recent Cantina competition for Farcaster Attestation (lines 260–271).
Here, the contract calculates a final payout by taking the total balance, subtracting the deposit amount, and adding the reward amount. In certain scenarios, this calculation can demand more ETH than the contract really holds—or, if the balance is high enough, it can overpay and let an attacker withdraw more than they deserve.
The issue lies in the calculation:
reward = address(this).balance - depositAmount + challengeRewardAmount;
When the calculated payout exceeds the total balance, the transfer cannot be completed. In Ethereum, any attempt to send more ETH than available will fail (revert). Because of this, the contract completely halts the challenge flow. Legitimate challengers are then blocked from receiving any reward. This effectively locks the challenge mechanism in a denial-of-service state.
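To make the pattern concrete, here is a minimal Solidity sketch of the vulnerable flow. This is not the Farcaster Attestation contract itself: the contract and function names below are hypothetical, and only the reward formula is taken from the finding.

pragma solidity ^0.8.20;

// Minimal sketch of the vulnerable pattern; all names are hypothetical.
contract ChallengeSketch {
    uint256 public depositAmount;         // ETH locked by the challenged party
    uint256 public challengeRewardAmount; // intended reward for a successful challenge

    constructor(uint256 _depositAmount, uint256 _challengeRewardAmount) payable {
        depositAmount = _depositAmount;
        challengeRewardAmount = _challengeRewardAmount;
    }

    function resolveChallenge(address payable challenger) external {
        // BUG: the payout is derived from the whole contract balance instead of
        // simply paying out challengeRewardAmount.
        uint256 reward = address(this).balance - depositAmount + challengeRewardAmount;

        // If reward exceeds the balance, the transfer fails, the transaction
        // reverts, and the challenge flow is stuck (denial of service).
        (bool ok, ) = challenger.call{value: reward}("");
        require(ok, "reward transfer failed");
    }
}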
Suppose the contract has 3 ETH in total. The deposit amount required is 1 ETH, and the intended reward is 1.5 ETH. Instead of simply sending out 1.5 ETH, the contract calculates:
3 ETH (balance) - 1 ETH (deposit) + 1.5 ETH (reward) = 3.5 ETH
But the contract only holds 3 ETH. It tries to send 3.5 ETH, which it doesn’t have, and the transfer reverts, blocking any payout.
Conversely, if the contract does have enough ETH to cover the inflated payout, the same buggy formula lets an attacker remove a larger amount than intended. By artificially increasing the contract’s balance (for instance, depositing extra ETH first), the attacker can exploit the calculation to siphon off an inflated challenge reward. Over multiple challenges, this can drain a significant portion of the contract.
In the provided example, Almanax not only classifies issues by severity (Critical, High, Medium, Low) but also explains the root cause, making remediation far easier. Had the protocol scanned its repository with Almanax, the scan would have flagged this logical oversight as a critical vulnerability: issue solved.
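One straightforward remediation, sketched below against the same hypothetical contract (the fix the project actually adopted may differ), is to pay out exactly the intended reward, capped at what the contract really holds:

function resolveChallenge(address payable challenger) external {
    // Pay exactly the intended reward, never more than the contract holds.
    uint256 reward = challengeRewardAmount;
    if (reward > address(this).balance) {
        reward = address(this).balance;
    }

    (bool ok, ) = challenger.call{value: reward}("");
    require(ok, "reward transfer failed");
}

With the balance no longer feeding into the formula, an attacker cannot inflate the payout by topping up the contract, and an oversized transfer can no longer brick the challenge flow.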
But Almanax’s vision is to move beyond smart contracts. While smart contracts are the epicenter of catastrophic exploits, the software supply chain presents an equally dangerous vector. A single compromised third-party dependency can lead to devastating outcomes across multiple projects.
This infamous typosquatting attack on the Go ecosystem (typosquatting: registering names nearly identical to legitimate ones to deceive developers) mimicked the popular BoltDB module to distribute a backdoor. This malicious package could have led to widespread system compromise, data exfiltration, and persistent access for attackers.
Details of the Attack
The attack's ability to remain undetected for years highlights the dangers of vulnerabilities lurking in everyday systems. Similar incidents have been observed in other ecosystems, such as npm and PyPI, where attackers have leveraged slight variations in package names to trick developers into installing compromised software. As with the logic bug discussed earlier, an organization that scanned its dependencies with Almanax would have caught this major issue.
Large language models are already showing promising versatility in code analysis. Whether it’s EVM-based smart contracts or conventional software in Go or Rust, LLMs can understand syntax, semantics, and even context-sensitive business logic.
As attacks grow more elaborate and costly, investing in robust, AI-driven security is becoming a necessity—not a luxury. Hidden malware and logic vulnerabilities still lurk in today’s repositories, waiting to be discovered.
Follow us on Twitter and LinkedIn, and check out Almanax to stay ahead of emerging threats.