You can follow their Mastodon account here:

https://mastodon.archive.org/@internetarchive

People are rightfully angry. I hope this helps the world realize that we need more than one public digital library. When the EU (for example) has no digital public library of its own and relies on archive.org, it heightens everyone’s vulnerability to a single point of failure.

For me, I cannot access roughly half the world’s websites right now because Cloudflare blocks me – which makes me almost wholly reliant on archive.org and, to some extent, Google caches via 12ft.io.

(update)
Looks like there is a project underway – a Digital Knowledge Act being proposed:

https://communia-association.org/2024/10/09/video-recording-why-europe-needs-a-digital-knowledge-act/

  • SubArcticTundra@lemmy.ml
    2 months ago

    Something like an internet archive – where the body of data is too large and important to store in one place – is where a federated framework similar to Lemmy’s would make a lot of sense. What’s more, many different organisations have an incentive to archive their own little slice of the internet (but not those of others), and a federated model would help link these up into one easily navigable, and inherently crowd-funded, whole.

    • activistPnk@slrpnk.netOP
      2 months ago

      Indeed, those are good ideas. In fact, it would be possible to solve the enshittification problem at the same time: if someone copy-pastes a whole article into their Lemmy post, that solves the problem of getting the article out of Cloudflare jail (or other varieties of prisons and barriers).

      There is a university that has its own small in-house archive. I forgot which uni. But the idea was that any papers or articles produced by students or profs at that university would naturally refer to outside docs. Of course it’s a problem when those outside references are unreachable. So the university archives everything referenced by in-house papers to ensure the integrity of the sources. Outsiders do not have the power to add a page to the archive… only to browse the archives made by insiders. Every university should be doing this.
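That in-house workflow – find every outside URL a paper cites, then snapshot each one – can be sketched in a few lines. This is only an illustration, not any real university’s system: the function name and the deliberately simple URL regex are my own assumptions.

```python
import re

# Simple pattern for outbound links; real archivers would also parse
# bibliographies, DOIs, etc. This is a hypothetical sketch.
URL_RE = re.compile(r"https?://[^\s<>\"')\]]+")

def extract_references(paper_text: str) -> list[str]:
    """Return the unique outside URLs a paper cites, in order of appearance."""
    seen: set[str] = set()
    urls: list[str] = []
    for url in URL_RE.findall(paper_text):
        if url not in seen:
            seen.add(url)
            urls.append(url)
    return urls

# An archiver would then fetch each URL (e.g. with urllib.request) and
# store the response body locally, so the citation stays resolvable even
# if the original page disappears.
sample = "See https://example.org/paper and https://example.org/paper again."
print(extract_references(sample))  # ['https://example.org/paper']
```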

      All this only covers articles, though. There are lots of other web resources that need to be archived. Ideally archiving would be integrated into the browser: instead of fetching a page and throwing it away, browsers could keep a local archive. From there, it’s a matter of getting the browsers talking to each other over Tor – perhaps using IPFS (something I’ve been putting off looking into, but it seems to be part of the answer).
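The local-archive idea boils down to content addressing, which is also the core of IPFS: store each fetched page under the hash of its own bytes, so any peer holding the same bytes can serve them under the same address. A minimal sketch, with made-up function and directory names (IPFS itself uses multihash-based CIDs, not bare SHA-256 paths):

```python
import hashlib
from pathlib import Path

def store_snapshot(body: bytes, root: Path) -> str:
    """Save page bytes under their SHA-256 content hash; return the hash (the 'address')."""
    digest = hashlib.sha256(body).hexdigest()
    root.mkdir(parents=True, exist_ok=True)
    (root / digest).write_bytes(body)
    return digest

def load_snapshot(digest: str, root: Path) -> bytes:
    """Retrieve a page by its content address."""
    return (root / digest).read_bytes()

# A browser could call store_snapshot() on every page it renders instead
# of discarding the bytes; peers could then exchange snapshots by hash.
archive = Path("local-archive")
addr = store_snapshot(b"<html>example page</html>", archive)
assert load_snapshot(addr, archive) == b"<html>example page</html>"
```

Because the address is derived from the content, a reader can verify a snapshot came back unmodified by re-hashing it – no trust in the serving peer required.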