Over the past 20 years, we’ve seen companies shift from DevOps practices to DevSecOps, with an ever-increasing focus on integrating security tools and practices into the development process, what’s often referred to as “shifting left”. This involves practices like adding static analysis tools (SASTs) to CI/CD pipelines, monitoring for insecure dependencies, scanning container images, using dynamic analysis tools (DASTs) to simulate an app’s behavior in production, and, more broadly, spending more time and resources on testing before anything is pushed to production.
The blockchain industry has often prioritized speed over security, outsourcing most security testing to external auditors. The many high-profile hacks and the $9B stolen over the past 3 years have been a wake-up call for the industry to finally take security practices seriously. Bug bounties, audit redundancy (getting audited by multiple firms), and static analysis tools like Slither have started to become the norm for projects launching onchain.
In a world where money lives at the protocol level, a hacker who exploits a critical vulnerability can walk away with billions of dollars, unlike in a typical Web2 hack, which usually yields stolen data that must be ransomed.
We’re finally seeing a renaissance in blockchain security, and institutions are starting to feel comfortable being active players in this space. The Internet of the future will be a place where money can be exchanged globally and peer-to-peer as easily as an email. Many crypto companies are building the infrastructure and applications to bring this vision to reality, which will require even stronger security practices and products.
I recently came across a new architecture called CyberDevOps, proposed by Federico Lombardi, CISO at Conio. CyberDevOps pushes the shift left to its extreme by integrating cybersecurity tools directly into a DevSecOps pipeline. It helped Conio fix up to 100% of known bugs and vulnerabilities and significantly improve code quality.
Here’s a visual comparison between DevOps, DevSecOps, and CyberDevOps taken from Federico’s paper:
I’m convinced that most mature companies will progressively adopt a similar framework. The key takeaway is that security comes down to how many layers of protection we put between ourselves and bad actors. Even if they get past the first and second layers, can we stop them at the third or fourth? This concept may be familiar to many of you as the Swiss Cheese Model, also popular in the aerospace industry where I started my career.
Google recently mentioned that 25% of all of its code is AI-generated. We’re headed into a future where that number is going to go up exponentially, and security teams will need to be equipped with proper tools to do their job, especially since AI coding assistants have been found to introduce many errors into code.
Popular static analyzers today are mostly rule-based and have shown clear limitations in identifying logical bugs, since it is impractical to write rules for every possible type of vulnerability. When it comes to code security, the path to detecting machine-unauditable bugs and protecting our critical infrastructure points to AI models specialized in security.
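To make that limitation concrete, here is a deliberately minimal sketch of a rule-based scanner (all names and rules are hypothetical, not how Slither or any real SAST works internally). Regex rules catch syntactic patterns easily, but there is no rule one can write for a bug that only exists at the level of program logic:

```python
import re

# A toy rule-based "static analyzer": each rule is just a regex pattern.
RULES = {
    "dangerous-eval": re.compile(r"\beval\s*\("),
    "hardcoded-secret": re.compile(r"(api_key|password)\s*=\s*['\"]"),
}

def scan(source: str) -> list[str]:
    """Return the names of all rules that match the source code."""
    return [name for name, pattern in RULES.items() if pattern.search(source)]

# A syntactic issue the rules catch:
flagged = scan("result = eval(user_input)")

# A logical bug the rules cannot catch: the refund check should be
# `amount > 0`, so zero-amount refunds slip through. No regex describes
# this; the bug only exists in the program's logic.
missed = scan("if amount >= 0: send_refund(user, amount)")

print(flagged)  # ['dangerous-eval']
print(missed)   # [] -- the logic bug passes unflagged
```

Scaling this approach means writing a new rule per vulnerability pattern, which is exactly the bottleneck that makes logical bugs so hard for rule-based tools.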
Today, bad actors are already using LLMs to facilitate their hacks. Security is a cat-and-mouse game, and both companies and governments will need to respond to increasingly frequent and effective attacks with more powerful tools. It’s a matter of national security, and Almanax is proud to be building in this space.
The first product we brought to market is a vulnerability scanner and SAST that uses a custom-trained, agentic AI pipeline to identify vulnerabilities in a codebase. It has already demonstrated its value by catching code issues missed by human developers and security auditors, issues that could otherwise have been exploited by bad actors at significant monetary cost. These discoveries earned us multiple bug bounty rewards from different blockchain companies and from Vitalik Buterin himself.
Dependency scanning, integration and unit tests, and DASTs are other areas where LLMs have proven effective at increasing detection rates and cutting down the high number of false positives, a big pain point for security teams who sort through hundreds of alerts daily. Our internal experiments showed promising results in using LLMs to filter out false positives from these alerts, whether we generate them ourselves or they come from other tools.
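As an illustration of the triage pattern described above (a sketch, not Almanax’s actual pipeline), the structure is simple: collect alerts from any tool, then pass each one through a judge. Here the `judge` parameter stands in for an LLM call; we stub it with a trivial heuristic purely so the example runs end to end:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    tool: str     # which scanner produced the alert
    rule: str     # rule or detector name
    snippet: str  # offending code excerpt

def triage(alerts: list[Alert], judge: Callable[[Alert], bool]) -> list[Alert]:
    """Keep only alerts the judge considers true positives."""
    return [a for a in alerts if judge(a)]

def stub_judge(alert: Alert) -> bool:
    # Stand-in for a model prompt like "Is this alert exploitable?";
    # a real pipeline would send the alert's context to an LLM instead.
    return "test_" not in alert.snippet

alerts = [
    Alert("sast", "reentrancy", "withdraw(msg.sender)"),
    Alert("sast", "reentrancy", "test_withdraw_mock()"),  # noise from test code
]
kept = triage(alerts, stub_judge)
print([a.snippet for a in kept])  # ['withdraw(msg.sender)']
```

Because the judge is just a function, the same triage step can sit downstream of any alert source, which is what makes it tool-agnostic.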
Our vision is to build an AI Security Engineer that can assist developers and security teams in all their day-to-day operations to extinguish software exploits.
It’s an exciting time to be operating in this space.
Francesco Piccoli
New York, New York