[–] _dev_null@lemmy.zxcvn.xyz 14 points 1 year ago (1 children)

It exists; it's called a robots.txt file that developers can put in place, and bots like the web archive crawler will then ignore the content.

And therein lies the issue: put a blanket Disallow rule in robots.txt and every compliant bot will skip the content, including search engine indexers.
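For illustration, this is roughly what such a rule looks like (the per-crawler group is commented out, and the "ia_archiver" name is an assumption based on the user agent the Wayback Machine has used historically):

```
# Blanket rule: every compliant crawler skips the whole site.
User-agent: *
Disallow: /

# robots.txt also supports per-crawler groups, so in principle a
# publisher could target only the archive crawler ("ia_archiver"
# is assumed here) while leaving search engines alone:
# User-agent: ia_archiver
# Disallow: /
```

In principle the per-crawler form would let a publisher block archiving while staying indexed, but as the reply below points out, the Internet Archive doesn't treat robots.txt as binding anyway.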

So huge publishers want it both ways: they want to be indexed, but they don't want the content to be archived.

If the NYT is serious about keeping its content out of the web archive while still letting humans see it, the solution is simple: put that content behind a login! But the NYT doesn't want to do that, since it would lose the ad revenue that comes from regular people loading the site.
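As a sketch of what a hard login wall means server-side (a hypothetical Flask app; the route, session check, and helper are illustrative, not anything the NYT actually runs):

```python
from flask import Flask, redirect, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for session cookies

@app.route("/article/<slug>")
def article(slug):
    # Hard paywall: no logged-in session, no article body.
    # Crawlers and archive bots get bounced to the login page
    # just like everyone else.
    if not session.get("user_id"):
        return redirect("/login")
    return render_article(slug)

def render_article(slug):
    # Hypothetical helper standing in for the real story renderer.
    return f"<h1>{slug}</h1><p>Subscriber-only text.</p>"
```

The tradeoff the comment describes is exactly this: a wall like that keeps archive crawlers out, but it also turns away the casual readers who generate ad impressions.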

I think in the case of the article here, though, the motivation is a bit more nefarious: the NYT et al. simply don't want to be held accountable. So they face a choice: either retain the privilege of being regarded as serious journalism, or act like a bunch of hacks who can't be relied upon.

> It exists; it's called a robots.txt file that developers can put in place, and bots like the web archive crawler will then ignore the content.

the internet archive doesn't respect robots.txt:

> Over time we have observed that the robots.txt files that are geared toward search engine crawlers do not necessarily serve our archival purposes.

the only way to stay out of the internet archive is to follow the exclusion process they created and hope they agree to remove you. or firewall them (see the sketch below).

https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/
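On the "firewall them" option, a minimal nginx sketch of a user-agent block (both user-agent strings are assumptions based on names the Archive's crawlers have reported over the years; verify against your own access logs):

```nginx
# Inside the server { } block: refuse requests that identify as the
# Internet Archive's crawlers. "ia_archiver" and "archive.org_bot"
# are assumed names -- check real logs before relying on them.
if ($http_user_agent ~* "(ia_archiver|archive\.org_bot)") {
    return 403;
}
```

User-agent blocking only stops crawlers that identify themselves honestly; actually firewalling them would mean blocking the Archive's IP ranges instead.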