This blog post has been reported on and distorted by a lot of tech news sites using it to wax delusional about AI’s future role in vulnerability detection.

But they all gloss over the critical bit: in fairly ideal circumstances where the AI was being directed to the vuln, it had only an 8% success rate, and a whopping 28% false positive rate!
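
To put those two numbers together (back-of-the-envelope; I'm treating them as per-run counts out of ~100 runs, which is my reading of the setup, not something spelled out here):

```python
# Back-of-the-envelope: what the reported rates mean for whoever triages the output.
# Assumption (mine): out of ~100 runs, 8 flagged the real vulnerability and
# 28 flagged something that wasn't one.
true_positives = 8
false_positives = 28

reports = true_positives + false_positives
precision = true_positives / reports  # fraction of reports worth reading

print(f"{reports} reports, precision ≈ {precision:.0%}")
# -> 36 reports, precision ≈ 22%
# Roughly 4 out of every 5 reports a human has to triage are noise.
```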

  • scruiser@awful.systems · 43 points · 4 days ago

    Of course, part of that wiring will be figuring out how to deal with the signal to noise ratio of ~1:50 in this case, but that’s something we are already making progress at.

    This line annoys me… LLMs excel at making signal-shaped noise, so separating out an absurd number of false positives (and chasing down potential false negatives on top of that) is very difficult. It probably requires some sort of actually reliable verifier, and if you have one, why bother with the LLM in the first place instead of just running that verifier directly?
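
    To make the base-rate problem concrete, a quick sketch (the ~1:50 ratio is from the quoted line; the verifier accuracy numbers are invented for illustration):

    ```python
    # Why a ~1:50 signal-to-noise stream stays noisy even behind a decent filter.
    # The 1:50 ratio comes from the quoted line; the verifier figures below are
    # made-up illustrative assumptions.
    true_reports = 1.0
    false_reports = 50.0

    verifier_recall = 0.90        # assumed: keeps 90% of real bugs
    verifier_fp_rejection = 0.95  # assumed: discards 95% of bogus reports

    kept_true = true_reports * verifier_recall
    kept_false = false_reports * (1 - verifier_fp_rejection)

    precision_after = kept_true / (kept_true + kept_false)
    print(f"precision after filtering ≈ {precision_after:.0%}")
    # -> precision after filtering ≈ 26%
    # Even then, ~3 out of 4 surviving reports are still false positives, and a
    # filter that good is most of the way to being the reliable verifier you
    # could have just used on its own.
    ```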

    • killingspark@feddit.org · 14 points · 4 days ago

      Trying to take anything positive from this:

      Maybe someone with the skills to verify a flagged code path no longer has to roam the codebase hunting for candidates? So while they still do the tedious work of verification, the mundane task of finding candidates is now automated?

      Not sure if this is a real-world use case…

      • scruiser@awful.systems · 11 points · 3 days ago

        As the other comments have pointed out, an automated search for this category of bugs (done without LLMs) would do the same job much faster, at a fraction of the computational cost, and without any bullshit or hallucinations in the way. The LLM isn’t actually a value add compared to existing tools.
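
        For a sense of what that kind of automated search looks like, a toy sketch (real tools like Coccinelle, CodeQL, or the clang static analyzer do this properly with parsing and control-flow tracking; this grep-level heuristic is something I made up purely for illustration):

        ```python
        import re
        import sys

        # Toy use-after-free heuristic: flag any identifier that shows up again
        # after being passed to kfree(), with no intervening reassignment.
        # No parsing, no function boundaries, no control-flow or alias tracking,
        # so expect plenty of false hits and misses; it is cheap, fast, and
        # deterministic, though.
        FREE_CALL = re.compile(r"\bkfree\((\w+)\)")

        def scan(path):
            freed = {}  # identifier -> line number where kfree() was seen
            with open(path) as f:
                for lineno, line in enumerate(f, 1):
                    m = FREE_CALL.search(line)
                    if m:
                        freed[m.group(1)] = lineno
                        continue
                    for ident, freed_at in list(freed.items()):
                        if re.search(rf"\b{ident}\s*=(?!=)", line):
                            del freed[ident]  # reassigned, no longer suspicious
                        elif re.search(rf"\b{ident}\b", line):
                            print(f"{path}:{lineno}: '{ident}' used "
                                  f"after kfree() on line {freed_at}")

        if __name__ == "__main__":
            for path in sys.argv[1:]:
                scan(path)
        ```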