Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Soyweiser@awful.systems · 2 days ago

    Cool, thanks for doing the effort post.

    My (wildly optimistic by sneerclubbing standards) expectations for “LLM agents” is that people figure out how to use them as a “creative” component in more conventional bots and AI approaches

    This was a bit my feeling about how it’s already used in security fields, with less focus on the conventional bots/AI, though they still use LLMs for some things. But it’s hard to separate fact from PR, and some of the things they say they do don’t seem like a great fit for LLMs, especially considering what I’ve heard from people who aren’t on the hype train. (The example coming to mind is using LLMs to standardize some sort of reporting/test writing, while I heard from somebody I trust who has seen people try that and had it fail because it couldn’t keep a consistent standard.)

    • froztbyte@awful.systems · 23 hours ago

      This was a bit my feeling about how it’s already used in security fields

      curious about this reference - wdym?

      • Soyweiser@awful.systems · 18 hours ago (edited)

        ‘We use LLMs for X in our security products’ gets brought up a lot in the promotional parts of the Risky Business podcast, basically, and it sometimes leaks into the other parts as well. That’s basically the only time I hear people speak somewhat positively about it. They use LLMs (or claim to) for various things, some I thought were possible but iffy, some impossible, like having LLMs do massive amounts of organizational work. Sorry, I can’t recall the specifics. (I’m also behind atm.)

        Never heard people speak positively about it from the people I know, but they also know I’m not that positive about AI, so the likelihood they just avoid the subject is non-zero.

        E: Schneier is also not totally against the use of LLMs, for example https://www.schneier.com/blog/archives/2025/05/privacy-for-agentic-ai.html, which is quite disappointing. (Also, as with all security-related blogs nowadays, don’t read the comments; people have lost their minds. It always was iffy, but the last few years every security-related blog that reaches some fame has been filled with madmen.)

        • froztbyte@awful.systems · 18 hours ago

          Ah, I don’t listen to riskybiz because ugh podcast

          Schneier’s a dipshit well past his prime, though. people should stop listening to that ossified doorstop

          • Soyweiser@awful.systems · 18 hours ago

            Ow yeah, I don’t disagree on that, even if I do keep up with them. Just making my sources obvious. (One of the tics from the Rationalists I do find valuable; the verbosity and tendency to try and over-explain isn’t as valuable, but it’s hard to shift (and the one feeds into the other (and… I’m doing it again, ain’t I?))).

            • froztbyte@awful.systems · 18 hours ago

              that’s fair - the first half of my post was certainly more about me than anything (but it was also an indication of why I don’t hear that particular angle much - I’ve also made sure I get as little advertising in my life as possible)

              other part: nah still, fuck schneier

              the rest of your comment reminds me of the tact filters post (albeit from a different angle)

              • Soyweiser@awful.systems · 17 hours ago

                The tact filter is prob what went wrong with the Lawyer person discussed elsewhere; I’d never heard of it (or had forgotten).