I’m in Buenos Aires right now at the DeFi Security Summit, where I joined a panel on AI and Web3 security along with @blocksec (EigenLabs), @ChanniGreenwall (Olympix), @jack__sanford (Sherlock), and @nicowaisman (Xbow).

What not long ago felt like a “distant future” is now being discussed as a very concrete roadmap for the coming years: both defensive and offensive AI-powered technologies are going to advance rapidly, especially in vulnerability discovery. Everything auditors do today manually and through a patchwork of tools is gradually being bundled into more powerful and accessible automated stacks.

It’s important to look at reality soberly: the hope that we can reliably “fence off” models from undesirable use is illusory. Any sufficiently capable model is, by definition, dual-use. Providers will add restrictions, filters, and policies, but that is not a fundamental barrier. Anyone with motivation and resources will spin up a self-hosted model, assemble their own agentic stack, and use the same technologies without worrying about ToS. You can’t design security under the assumption that attackers won’t have access to these tools.

The economics of attacks are also far from straightforward. In the short term, attacks will get cheaper: more automation, more “wide-area bombardment,” more exhaustive exploration of states and configurations without humans in the loop. But over the long run, as defensive practices and tools catch up, successful attacks will become more expensive: coverage will improve, trivial bugs will disappear, and effective breaches will require serious infrastructure, preparation, and expertise. This will shift the balance toward fewer incidents, but those that do happen will be far more complex and costly.

My main takeaway: we’ll have to revisit the entire security lifecycle, not just “cosmetically improve” audits. How we describe and understand risk profiles, how threat models evolve with AI in the picture, how we structure development, reviews, testing, deployment, on-chain monitoring, incident response, and post-mortems: all of this will need to be rethought. Traditional audits will remain a key piece, but they can no longer be the sole center of gravity.

The reality is that AI amplifies defenders and attackers in roughly equal measure, yet the contest itself remains asymmetric: defenders have to cover every path, while attackers need only one exploitable flaw. Web3 security will have to adapt its entire operating model to this arms race.