• helvetpuli@sopuli.xyz · 4 days ago

    UNESCO has a bit more of an open culture than some of the other specialized agencies. It can be difficult to get the others to adopt their innovations though.

  • BarrierWithAshes@fedia.io · 5 days ago

    Damn, these guys are moving up fast. Seems like yesterday they got picked up by GNOME, and now they’re supporting the UN?!

  • Goretantath@lemm.ee · 5 days ago

    Aww c’mon, why isn’t it the maze one? Now everyone’s going to be using more energy to browse the web…

      • rook@awful.systems · 4 days ago (edited)

        Are you talking about Anubis? Because you’re very clearly wrong.

        And now that I think about it, regardless of which approach you were talking about, that’s some impressive arrogance, to assume that everyone involved other than you was a complete idiot.

        ETA:

        Ahh, looking at your post history, I see you misunderstand why scrapers use a common user agent, and are confused about what a general increase in cost-per-page means to people who do bulk scraping.

          • rook@awful.systems · 4 days ago

            Bruh, when I said “you misunderstand why scrapers use a common user agent” I wasn’t asking for further proof.

            Requests following an obvious bulk scraper pattern with user agents that almost certainly aren’t regular humans are trivially easy to handle using decades-old techniques, which is why scrapers will not start using curl user agents.
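
            (To illustrate the kind of decades-old techniques I mean, here’s a rough Python sketch of user-agent filtering plus per-IP rate limiting. The client names and threshold are made up for the example; this is not Anubis’s actual code.)

            ```python
            # Rough sketch only: naive UA filtering plus per-IP rate limiting.
            # Client names and the threshold below are illustrative assumptions.
            import time
            from collections import defaultdict

            NON_BROWSER_AGENTS = ("curl", "wget", "python-requests", "go-http-client")
            REQUESTS_PER_MINUTE = 120  # hypothetical threshold

            recent_hits = defaultdict(list)  # ip -> timestamps of recent requests

            def allow(ip: str, user_agent: str) -> bool:
                ua = user_agent.lower()
                # Obvious non-browser clients can be refused (or challenged) outright,
                # which is exactly why bulk scrapers pretend to be browsers instead.
                if any(tool in ua for tool in NON_BROWSER_AGENTS):
                    return False
                # Anything hammering the site far faster than a human reads gets throttled.
                now = time.time()
                recent_hits[ip] = [t for t in recent_hits[ip] if now - t < 60]
                recent_hits[ip].append(now)
                return len(recent_hits[ip]) <= REQUESTS_PER_MINUTE
            ```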

            “I’m not saying it won’t block some scrapers”

            See, the thing with blocking AI scraping is that you can actually see it work by looking at the logs. I’m guessing you don’t run any sites that get much traffic, or you’d be able to see this too. Its efficacy is obvious.

            Sure, scrapers could start keeping extra state or brute-forcing hashes, but at the scale they’re working at that becomes painfully expensive, and the effort required to raise the challenge difficulty is minimal if it becomes apparent that scrapers are getting through. Which will be very obvious if it happens.
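
            (For anyone unfamiliar with how these challenges work, here’s a minimal proof-of-work sketch along the general lines of what Anubis does, assuming SHA-256 with a leading-zero-bit target; the real challenge format and defaults may differ.)

            ```python
            # Minimal proof-of-work sketch (assumption: SHA-256 with a
            # leading-zero-bit target; not Anubis's actual wire format).
            import hashlib
            import itertools

            def solve(challenge: str, difficulty_bits: int) -> int:
                """Client side: find a nonce whose hash has `difficulty_bits` leading
                zero bits. Expected work roughly doubles with every extra bit."""
                target = 1 << (256 - difficulty_bits)
                for nonce in itertools.count():
                    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
                    if int.from_bytes(digest, "big") < target:
                        return nonce

            def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
                """Server side: a single hash to check, cheap at any difficulty."""
                digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
                return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
            ```

            The asymmetry is the point: a human pays the cost once per visit, while a bulk scraper hitting millions of pages pays it millions of times, and each extra difficulty bit roughly doubles their bill while costing the operator a config change.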

            “once it’s in a training set, all additional protection is just wasted energy.”

            Presumably you haven’t had much experience with AI scrapers. They’re not a “one run and done” type thing, especially for sites with frequently changing content, like this one.

            I don’t want to seem rude, but you appear to be speaking from a position of considerable ignorance, dismissing the work of people who actually have skin in the game and have demonstrated effective techniques for dealing with the problem. Maybe a little more research on the issue would help.

      • bitofhope@awful.systems · 4 days ago

        Why do people think this applies only to Firefox? Is it because it’s checking for “Mozilla” in the UA string? Might wanna check what their own browser uses (I don’t care what browser you have, it probably has “Mozilla” in the user agent string).
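
        (Quick way to check it yourself; the strings below are typical stock formats, with version numbers that are only examples.)

        ```python
        # Typical stock user-agent strings: every mainstream browser still starts
        # with "Mozilla/5.0" for historical compatibility (versions are examples).
        user_agents = {
            "Chrome": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
            "Safari": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4 Safari/605.1.15",
            "Firefox": "Mozilla/5.0 (X11; Linux x86_64; rv:126.0) Gecko/20100101 Firefox/126.0",
        }
        for name, ua in user_agents.items():
            print(name, "->", "Mozilla" in ua)  # prints True for all three
        ```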