Get the top HN stories in your inbox every day.
raggi
It's amusing that Xe managed to turn what was historically mostly a joke/shitpost into an actually useful product. They did always say timing was everything.
I am kind of surprised how many sites seem to want/need this. I get the slow git pages problem for some of the git servers that are super deep, lack caches, serve off slow disks, etc.
UNESCO surprised me some; the sub-site in question is pretty big, with thousands of documents of content, but the content is static - this should be trivial to serve, so what's going on? Well, it looks like it's a poorly deployed WordPress on top of Apache, with no caching enabled, no content compression, no HTTP/2 or HTTP/3. It would likely be fairly easy to get this serving super cheap on a very small machine, but of course doing so requires some expertise, and expertise still isn't cheap.
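The kind of cheap fix in question is largely a handful of Apache directives; a sketch assuming mod_deflate, mod_expires, and mod_http2 are loaded (the content types and cache lifetimes here are illustrative, not a recommendation for that specific site):

```apache
# Compress text responses (mod_deflate): cheap CPU for large bandwidth wins.
AddOutputFilterByType DEFLATE text/html text/css application/javascript

# Let browsers and intermediaries cache static assets (mod_expires).
ExpiresActive On
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType text/css "access plus 1 week"

# Prefer HTTP/2 where mod_http2 is loaded.
Protocols h2 http/1.1
```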
Sure you could ask an LLM, but they still aren't good at helping when you have no clue what to ask - if you don't even really know the site is slower than it should be, why would you even ask? You'd just hear about things getting crushed and reach for the furry defender.
adrian17
> but of course doing so requires some expertise, and expertise still isn't cheap
Sure, but at the same time, the number of people with the expertise to set up Anubis (not that it's particularly hard, but I mean: to even be aware that it exists) is surely even lower than the number of people with WordPress administration experience, so I'm still surprised.
If I were to guess, the reasons for not touching Wordpress were unrelated, like: not wanting to touch a brittle instance, or organization permissions, or maybe the admins just assumed that WP is configured well already.
raggi
I have trouble with that because it’s brimming full of plugins too (see them all disorganized all over the source), and failing to keep such a system up to date ends in tears rapidly in that ecosystem.
jtbayly
My site that I’d like this for has a lot of posts, but there are links to a faceted search system based on tags that produces an infinite number of possible combinations and pages for each one. There is no way to cache this, and the bots don’t respect the robots file, so they just constantly request URLs, getting the posts over and over in different numbers and combinations. It’s a pain.
mrweasel
> I am kind of surprised how many sites seem to want/need this.
The AI scrapers are not only poorly written, they also go out of their way to do cache busting. So far I've seen a few solutions: Cloudflare, requiring a login, Anubis, or just insane amounts of infrastructure. Some sites have reported 60% of their traffic now coming from bots; for smaller sites it's probably much higher.
MrJohz
Fwiw, I run a pretty tiny site and see relatively minimal traffic coming from bots. Most of the bot traffic, when it appears, is vulnerability scanners (the /wp-admin/ requests on a static site), and has little impact on my overall stats.
My guess is that these tools tend to be targeted at mid-sized sites — the sorts of places that are large enough to have useful content, but small enough that there probably won't be any significant repercussions, and where the ops team is small enough (or plain nonexistent) that there's not going to be much in the way of blocks. That's why a site like SourceHut gets hit quite badly, but smaller blogs stay largely out of the way.
But that's just a working theory without much evidence trying to justify why I'm hearing so many people talking about struggling with AI bot traffic and not seeing it myself.
nicolapcweek94
Well, we just spun up anubis in front of a two user private (as in publicly accessible but with almost all content set to private/login protected) forgejo instance after it started getting hammered (mostly by amazon ips presenting as amazonbot) earlier in the week, resulting in a >90% traffic reduction. From what we’ve seen (and Xe’s own posts) it seems git forges are getting hit harder than most other sites, though, so YMMV i guess.
mrweasel
I actually have a theory, based on the last episode of the 2.5 Admins podcast. Try spinning up a MediaWiki site. I have a feeling that wiki installations are being targeted to a much higher degree. You could also do a Git repo of some sort. Either of the two could give the impression that content is changed frequently.
cedws
PoW anti-bot/scraping/DDOS was already being done a decade ago, I’m not sure why it’s only catching on now. I even recall a project that tried to make the PoW useful.
xena
Xe here. If I had to guess in two words: timing and luck. As the G-man said: the right man in the wrong place can make all the difference in the world. I was the right shitposter in the right place at the right time.
And then the universe blessed me with a natural 20. Never had these problems before. This shit is wild.
underdeserver
Squeeze that lemon as far as it'll go mate, god speed and may the good luck continue.
gyomu
If you’re confused about what this is - it’s to prevent AI scraping.
> Anubis uses a proof-of-work challenge to ensure that clients are using a modern browser and are able to calculate SHA-256 checksums
https://anubis.techaro.lol/docs/design/how-anubis-works
This is pretty cool, I have a project or two that might benefit from it.
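In hashcash style, the client brute-forces a nonce until the hash of challenge plus nonce meets a difficulty target, while the server verifies with a single hash. A minimal sketch (illustrative only, not Anubis's actual implementation; here difficulty counts leading zero hex digits):

```python
import hashlib

def solve(challenge: str, difficulty: int) -> int:
    """Brute-force a nonce so that SHA-256(challenge + nonce) starts
    with `difficulty` zero hex digits. Cheap once, costly at scale."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    # Verification costs a single hash: the asymmetry the scheme relies on.
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Each extra zero digit multiplies the client's expected work by 16 while the server's verification cost stays constant.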
x3haloed
I’ve been wondering to myself for many years now whether the web is for humans or machines. I personally can’t think of a good reason to specifically try to gate bots when it comes to serving content. Trying to post content or trigger actions could obviously be problematic under many circumstances.
But I find that when it comes to simple serving of content, human vs. bot is not usually what you’re trying to filter or block on. As long as a given client is not abusing your systems, then why do you care if the client is a human?
xboxnolifes
> As long as a given client is not abusing your systems, then why do you care if the client is a human?
Well, that's the rub. The bots are abusing the systems. The bots are accessing the contents at rates thousands of times faster and more often than humans. The bots also have access patterns unlike your expected human audience (downloading gigabytes or terabytes of data multiple times, over and over).
And these bots aren't some being with rights. They're tools unleashed by humans. It's humans abusing the systems. These are anti-abuse measures.
immibis
Then you look up their IP address's abuse contact, send an email and get them to either stop attacking you or get booted off the internet so they can't attack you.
And if that doesn't happen, you go to their ISP's ISP and get their ISP booted off the Internet.
Actual ISPs and hosting providers take abuse reports extremely seriously, mostly because they're terrified of getting kicked off by their own ISP. And there's no end to that chain - just a series of ISPs between them and you, and you might end up convincing your ISP or some intermediary to block traffic from them. However, as we've seen recently, rules don't apply if enough money is involved. But I'm not sure if these shitty interim solutions come from ISPs ignoring abuse when money is involved, or from people not knowing that abuse reporting is taken seriously to begin with.
Anyone know if it's legal to return a never-ending stream of /dev/urandom based on the user-agent?
bbor
Well, that's the meta-rub: if they're abusing, block abuse. Rate limits are far simpler, anyway!
In the interest of bringing the AI bickering to HN: I think one could accurately characterize "block bots just in case they choose to request too much data" as discrimination! Robots of course don't have any rights so it's not wrong, but it certainly might be unwise.
t-writescode
> I personally can’t think of a good reason to specifically try to gate bots
There's been numerous posts on HN about people getting slammed, to the tune of many, many dollars and terabytes of data from bots, especially LLM scrapers, burning bandwidth and increasing server-running costs.
ronsor
I'm genuinely skeptical that those are all real LLM scrapers. For one, a lot of content is in CommonCrawl and AI companies don't want to redo all that work when they can get some WARC files from AWS.
I'm largely suspecting that these are mostly other bots pretending to be LLM scrapers. Does anyone even check if the bots' IP ranges belong to the AI companies?
praptak
The good thing about proof of work is that it doesn't specifically gate bots.
It may have some other downsides - for example, I don't think that Google is possible in a world where everyone requires proof of work (some may argue that's a good thing) - but it doesn't specifically gate bots. It gates mass scraping.
fc417fc802
Things like google are still possible. Operators would need to whitelist services.
Alternatively shared resources similar in spirit to common crawl but scaled up could be used. That would have the benefit of democratizing the ability to create and operate large scale search indexes.
brikym
As both a website host and website scraper I can see both sides of it. The website owners have very little interest in opening their data up; if they did, they'd have made an API for it. In my case it's scraping supermarket prices, so obviously big-grocery doesn't want a spotlight on their arbitrary pricing patterns.
It's frustrating for us scrapers, but from their perspective opening up to bots is just a liability. Besides bots spamming the servers, getting around rate limits with botnets and noise, any new features added by bots probably won't benefit them. If I made a bot service that split your orders over multiple supermarkets, or bought items over time as prices drop, that wouldn't benefit the companies. All the work they've put into their sites has brought them to the status quo, and they want to keep it that way.
The companies don't want an open internet, only we do. I'd like to see some transparency laws so that large companies need to publish their pricing.
gbear605
The issue is not whether it’s a human or a bot. The issue is whether you’re sending thousands of requests per second for hours, effectively DDOSing the site, or if you’re behaving like a normal user.
laserbeam
The reason is: bots DO spam you repeatedly and increase your network costs. Humans don’t abuse the same way.
starkrights
Example problem that I've seen posted about a few times on HN: LLM scrapers (or at least, an explosion of new scrapers) mindlessly crawling every single HTTP endpoint of a hosted git service, instead of just cloning the repo (and entirely ignoring robots.txt).
The point of this is that there has recently been a massive explosion in the amount of bots that blatantly, aggressively, and maliciously ignore and attempt to bypass (mass ip/VPN switching, user agent swapping, etc) anti-abuse gates.
mieses
There is hope for misguided humans.
namanyayg
"It also uses time as an input, which is known to both the server and requestor due to the nature of linear timelines"
A funny line from his docs
xena
OMG lol I forgot that I left that in. Hilarious. I think I'm gonna keep it.
didgeoridoo
I didn’t even blink at this, my inner monologue just did a little “well, naturally” in a Redditor voice and kept reading.
mkl
BTW Xe, https://xeiaso.net/pronouns is 404 since sometime last year, but it is still linked to from some places like https://xeiaso.net/blog/xe-2021-08-07/ (I saw "his" above and went looking).
xena
I'm considering making it come back, but it's just gotten me too much abuse so I'm probably gonna leave it 404-ing until society is better.
pie_flavor
Unfortunately it is also false (if taken out of context; Anubis rounds the time to the nearest week, which is probably good enough if the next-nearest week is valid too). Clock desync for a variety of reasons is pervasive - you can't expect the 10th percentile of your users to be accurate even to the day, and even the 25th percentile will be five minutes or so off.
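A tolerant server-side check along those lines would accept the neighboring week buckets too. A sketch with hypothetical helper names (not Anubis's code):

```python
import hashlib

WEEK_SECONDS = 7 * 24 * 3600

def challenge_for(client_id: str, bucket: int) -> str:
    # The server derives the challenge from a client identifier plus a
    # coarse time bucket rather than an exact timestamp.
    return hashlib.sha256(f"{client_id}:{bucket}".encode()).hexdigest()

def accepted_challenges(client_id: str, now: float) -> set:
    bucket = int(now // WEEK_SECONDS)
    # Accept the previous and next buckets too, so a client clock that is
    # off by hours (or lands near a bucket boundary) still validates.
    return {challenge_for(client_id, b) for b in (bucket - 1, bucket, bucket + 1)}
```

With a one-week bucket and one-bucket tolerance, even a clock that is off by a full day always falls in an accepted bucket.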
AnonC
Those images on the interstitial page(s) while waiting for Anubis to complete its check are so cute! (I’ve always found all the art and the characters in Xe’s blog very beautiful)
Tangentially, I was wondering how this would impact common search engines (not AI crawlers) and how this compares to Cloudflare’s solution to stop AI crawlers, and that’s explained on the GitHub page. [1]
> Installing and using this will likely result in your website not being indexed by some search engines. This is considered a feature of Anubis, not a bug.
> This is a bit of a nuclear response, but AI scraper bots scraping so aggressively have forced my hand.
> In most cases, you should not need this and can probably get by using Cloudflare to protect a given origin. However, for circumstances where you can't or won't use Cloudflare, Anubis is there for you.
JsonCameron
Yeah. Unfortunately, at the current moment it does prevent indexing. Perhaps down the line we can whitelist search engine IPs. However, some, like Google, use the same IPs for AI and for search indexing.
We are still making some improvements like passing open graph tags through so at least rich previews work!
snvzz
>Those images on the interstitial page(s) while waiting for Anubis to complete its check are so cute!
Love them too, and abhor knowing that someone is bound to eventually remove them because they're found to be "problematic" in one way or another.
pohuing
There's this funny instance[1] of someone afraid that their gf might see the mascot and think they're into anime. But anyhow, whether to use an image, and which image, is up to the site, since Anubis lets you configure it.
[1] https://discourse.gnome.org/t/anime-girl-on-gnome-gitlab/276...
prologic
I've read about Anubis, cool project! Unfortunately, as pointed out in the comments, it requires your site's visitors to have Javascript™ enabled. This is totally fine for sites that require Javascript™ anyway to enhance the user experience, but not so great for static sites and such that require no JS at all.
I built my own solution that blocks these "Bad Bots" at the network level. I block the entirety of several large "Big Tech / Big LLM" networks at the ASN (BGP) level, utilizing MaxMind's database and a custom WAF and reverse proxy I put together.
xyzzy_plugh
A significant portion of the bot traffic TFA is designed to handle originates from consumer/residential space. Sure, there are ASN games being played alongside reputation fraud, but it's very hard to combat. A cursory investigation of our logs showed these bots (which make ~1 request from a given residential IP) are likely in ranges that our real human users occupy as well.
Simply put you risk blocking legitimate traffic. This solution does as well but for most humans the actual risk is much lower.
As much as I'd love to not need JavaScript and to support users who run with it disabled, I've never once had a customer or end user complain about needing JavaScript enabled.
It is an incredibly vocal minority who disapprove of requiring JavaScript, the majority of whom, upon encountering a site for which JavaScript is required, simply enable it. I'd speculate that, even then, only a handful ever release a defeated sigh.
prologic
This is true. I had some bad actors from the Comcast network at one point, and unfortunately also valid human users of some of my "things", so I opted not to block the Comcast ASN.
xyzzy_plugh
Exactly. We've all been down this rabbit hole, collectively, and that's why Anubis has taken off. It works shockingly well.
prologic
I would be interested to hear of any other solutions that guarantee to either identify or block non-human traffic. In the "small web" and self-hosting, we typically don't really want crawlers and other similar software hitting our services, because often the software is buggy in the first place (example: the runaway Claude bot), or we don't want our sites indexed by them in the first place.
Cyphase
For anyone wondering, Oracle holds the trademark for "JavaScript": https://javascript.tm/
prologic
Which arguably they should let go of
jadbox
How do you know it's an LLM and not a VPN? How do you use this MaxMind's database to isolate LLMs?
prologic
I don't distinguish actually. There are two things I do normally:
- Block bad bots: a simple text file called `bad_bots.txt`
- Block bad ASNs: a simple text file called `bad_asns.txt`
There's also another for blocking IP(s) and IP ranges called `bad_ips.txt`, but it's often more effective to block a much larger range of IPs at the ASN level.
To give you a concrete idea, here are some examples:
    $ cat etc/caddy/waf/bad_asns.txt
    # CHINANET-BACKBONE No.31,Jin-rong Street, CN
    # Why: DDoS
    4134
    # CHINA169-BACKBONE CHINA UNICOM China169 Backbone, CN
    # Why: DDoS
    4837
    # CHINAMOBILE-CN China Mobile Communications Group Co., Ltd., CN
    # Why: DDoS
    9808
    # FACEBOOK, US
    # Why: Bad Bots
    32934
    # Alibaba, CN
    # Why: Bad Bots
    45102
    # Why: Bad Bots
    28573
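A sketch of how such a blocklist could be consumed (illustrative Python, not prologic's actual Caddy/Go implementation; the file format is inferred from the excerpt above):

```python
def load_bad_asns(text: str) -> set:
    """Parse a bad_asns.txt-style file: '#' lines are comments and
    each remaining non-empty line holds one ASN."""
    asns = set()
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            asns.add(int(line))
    return asns

# In the proxy, the client IP would then be resolved to its ASN with a
# GeoIP database (e.g. MaxMind's ASN dataset) and checked against the set.
blocked = load_bad_asns("""
# CHINANET-BACKBONE No.31,Jin-rong Street, CN
# Why: DDoS
4134
# FACEBOOK, US
# Why: Bad Bots
32934
""")
```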
runxiyu
Do you have a link to your own solution?
JsonCameron
I have a pretty similar one. (Works off of the same concept) https://github.com/JasonLovesDoggo/caddy-defender if you're curious. Keep in mind this will not protect you against residential IP scraping.
prologic
Not yet unfortunately. But if you're interested, please reach out! I currently run it in a 3-region GeoDNS setup with my self-hosted infra.
roenxi
I like the idea but this should probably be something that is pulled down into the protocol level once the nature of the challenge gets sussed out. It'll ultimately be better for accessibility if the PoW challenge is closer to being part of TCP than implemented in JavaScript individually by each website.
pona-a
There's Cloudflare PrivacyPass that became an IETF standard [0], but it's rather weird, and the reference implementation is a bug nest.
fc417fc802
Ship an arbitrary challenge as a SPIR-V or MLIR black box. Integrate the challenge-response exchange with HTTP. That should permit broad support and flexible hardware acceleration.
The "good enough" solution is the existing and widely used SHA( seed, nonce ). That could easily be integrated into a lower level of the stack if the tech giants wanted it.
tripdout
The bot detection takes 5 whole seconds to solve on my phone, wow.
bogwog
I'm using Fennec (a Firefox fork on F-Droid) and a Pixel 9 Pro XL, and it takes around ~8 seconds at difficulty 4.
Personally, I don't think the UX is that bad since I don't have to do anything. I definitely prefer it to captchas.
Hakkin
Much better than infinite Cloudflare captcha loops.
gruez
I've never had that, even with something like Tor Browser. You must be doing something extra suspicious, like a user agent spoofer.
praisewhitey
Firefox with Enhanced Tracking Protection turned on is enough to trigger it.
megous
Proper response here is "fuck cloudflare", instead of blaming the user.
xena
Apparently user-agent switchers don't work for fetch() requests, which means that Anubis can't work for people that use them. I know of someone that set up a version of Brave from 2022 with a user agent saying it's Chrome 150 and then complained about it not working for them.
oynqr
Lucky. Took 30s for me.
nicce
For me it is like 0.5s. Interesting.
cookiengineer
I am currently building a prototype of what I call the "enigma webfont" where I want to implement user sessions with custom seeds / rotations for a served and cached webfont.
The goal is to make web scraping unfeasible because of computational costs for OCR. It's a cat and mouse game right now and I want to change the odds a little. The HTML source would be effectively void without the user session, meaning an OTP like behavior could also make web pages unreadable once the assets go uncached.
This would also allow creating a captcha that modifies the local seed window until the user can read a specified word. "Move the slider until you can read the word Foxtrott", for example.
I sure would love to hear your input, Xe. Maybe we can combine our efforts?
My tech stack is go, though, because it was the only language where I could easily change the webfont files directly without issues.
lifthrasiir
Aside from the obvious accessibility issue, wouldn't that be a substitution cipher at best? Enough corpus should render its cryptanalysis much easier.
cookiengineer
Well, the idea is basically the same as using AES-CBC. CBC is useless most of the time because of static rotations, but it makes cracking it more expensive.
With the enigma webfont idea you can even just select a random seed for each user/cache session. If you map the URLs to e.g. SHA-512 hashes via the Web Crypto API, there's no cheap way of finding that out without going full-on cracking mode or full-on OCR/tesseract mode.
And cracking everything first, wasting gigabytes of storage for every combination of rotations and seeds... well, you can try, but at that point just ask the admin for the HTML or dataset instead of trying to scrape it, you know.
In regards to accessibility: that's sadly the compromise I am willing to make if it's a technology that makes my specific projects human-eyes-only (literally). I am done bearing the costs for hundreds of idiots that are too damn stupid to clone my website from GitHub, let alone are violating every license in each of their jurisdictions. If 99% of traffic is bots, it's essentially DDoSing on purpose.
We have standards for data communication, it's just that none of these vibe coders gives a damn about building semantically correct HTML and parsers for RDF, microdata etc.
lifthrasiir
No, I was talking about generated fonts themselves; each glyph would have an associated set of control points which can be used to map a glyph to the correct letter. No need to run the full OCR, you need a single small OCR job per each glyph. You would need quite elaborate distortions to avoid this kind of attack, and such distortions may affect the reading experience.
creata
There's probably something horrific you can do with TrueType to make it more complex than a substitution cipher.
lifthrasiir
GSUB rules are inherently local, so for example the same cryptanalysis approach should work for space-separated words instead of letters. A polyalphabetic cipher would work better but that means you can't ever share the same internal glyph for visually same but differently encoded letters.
cookiengineer
The hint I want to give you is: unicode and ligatures :) they're awesome in the worst sense. Words can be ligatures, too, btw.
rollcat
The problem isn't as much that the websites are scraped (search engines have been doing this for over three decades), it's the request volume that brings the infrastructure down and/or costs up.
I don't think mangling the text would help you, they will just hit you anyway. The traffic patterns seem to indicate that whoever programmed these bots, just... <https://www.youtube.com/watch?v=ulIOrQasR18>
> I sure would love to hear your input, Xe. Maybe we can combine our efforts?
From what I've gathered, they need help in making this project more sustainable for the near and far future, not to add more features. Anubis seems to be doing an excellent job already.
pabs3
It works to block users who have JavaScript disabled, that is for sure.
udev4096
Exactly, it's a really poor attempt to make it appealing to the larger audience. Unless they roll out a version for nojs, they're the same as "AI" scrapers in enshittifying the web.
throwaway150
Looks cool. But please help me understand. What's to stop AI companies from solving the challenge, completing the proof of work and scrape websites anyway?
crq-yml
It's a strategy to redefine the doctrine of information warfare on the public Internet from maneuver (leveraged and coordinated usage of resources to create relatively greater effects) towards attrition (resources are poured in indiscriminately until one side capitulates).
Individual humans don't care about a proof-of-work challenge if the information is valuable to them - many web sites already load slowly through a combination of poor coding and spyware ad-tech. But companies care, because that changes their ability to scrape from a modest cost of doing business into a money pit.
In the earlier periods of the web, scraping wasn't necessarily adversarial because search engines and aggregators were serving some public good. In the AI era it's become belligerent - a form of raiding and repackaging credit. Proof of work as a deterrent was proposed to fight spam decades ago (Hashcash), but it's only now that it's really needed to become weaponized.
marginalia_nu
The problem with scrapers in general is the asymmetry of compute resources involved in generating versus requesting a website. You can likely make millions of HTTP requests with the compute required in generating the average response.
If you make it more expensive to request documents at scale, you make this type of crawling prohibitively expensive. On a small scale it really doesn't matter, but if you're casting an extremely wide net and re-fetching the same documents hundreds of times, yeah, it really does matter. Even if you have a big VC budget.
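Back-of-envelope on that asymmetry, assuming difficulty counts leading zero hex digits of a SHA-256 digest and a scraper manages about 1 MH/s in a browser-grade hasher (both numbers are assumptions for illustration):

```python
def expected_hashes(difficulty: int) -> int:
    # Each leading zero hex digit multiplies the expected work by 16.
    return 16 ** difficulty

HASH_RATE = 1_000_000  # assumed hashes/sec for a JS/WASM hasher

for d in (4, 5, 6):
    work = expected_hashes(d)
    per_page = work / HASH_RATE
    print(f"difficulty {d}: ~{work:,} hashes, ~{per_page:.2f}s per page, "
          f"~{per_page * 1_000_000 / 86400:.0f} CPU-days per million pages")
```

Under these assumptions, difficulty 4 costs a single human well under a second per page, while re-fetching a million pages at difficulty 6 costs a crawler on the order of 200 CPU-days.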
Nathanba
Yes but the scraper only has to solve it once and it gets cached too right? Surely it gets cached, otherwise it would be too annoying for humans on phones too? I guess it depends on whether scrapers are just simple curl clients or full headless browsers but I seriously doubt that Google tier LLM scrapers rely on site content loading statically without js.
ndiddy
AI companies have started using a technique to evade rate limits where they will have a swarm of tens of thousands of scraper bots using unique residential IPs all accessing your site at once. It's very obvious in aggregate that you're being scraped, but when it's happening, it's very difficult to identify scraper vs. non-scraper traffic. Each time a page is scraped, it just looks like a new user from a residential IP is loading a given page.
Anubis helps combat this because even if the scrapers upgrade to running automated copies of full-featured web browsers that are capable of solving the challenges (which means it costs them a lot more to scrape than it currently does), their server costs would balloon even further because each time they load a page, it requires them to solve a new challenge. This means they use a ton of CPU and their throughput goes way down. Even if they solve a challenge, they can't share the cookie between bots because the IP address of the requestor is used as part of the challenge.
Hakkin
It sets a cookie with a JWT verifying you completed the proof-of-work along with metadata about the origin of the request, the cookie is valid for a week. This is as far as Anubis goes, once you have this cookie you can do whatever you want on the site. For now it seems like enough to stop a decent portion of web crawlers.
You can do more underneath Anubis using the JWT as a sort of session token though, like rate limiting on a per proof-of-work basis, if a client using X token makes more than Y requests in a period of time, invalidate the token and force them to generate a new one. This would force them to either crawl slowly or use many times more resources to crawl your content.
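That per-token budget can be sketched like this (hypothetical names, not Anubis code; the JWT string itself is assumed to be the key):

```python
import time

class TokenRateLimiter:
    """Treat the post-PoW token as a session key and burn it once it
    exceeds a request budget, forcing a fresh proof-of-work."""

    def __init__(self, max_requests: int, window: float):
        self.max_requests = max_requests
        self.window = window
        self.seen = {}  # token -> list of recent request timestamps

    def allow(self, token: str) -> bool:
        now = time.monotonic()
        hits = [t for t in self.seen.get(token, []) if now - t < self.window]
        hits.append(now)
        self.seen[token] = hits
        if len(hits) > self.max_requests:
            # Token burned; a real deployment would also denylist the JWT
            # ID so presenting it again is rejected outright.
            del self.seen[token]
            return False
        return True
```

A crawler then has to choose between crawling slowly within the budget or re-solving the proof-of-work over and over.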
FridgeSeal
It seems a good chunk of the issue with these modern LLM scrapers is that they are doing _none_ of the normal “sane” things. Caching content, respecting rate limits, using sitemaps, bothering to track explore depth properly, etc.
charcircuit
If you make it prohibitively expensive almost no regular user will want to wait for it.
xboxnolifes
Regular users usually aren't page hopping 10 pages per second. A regular user usually makes a hundred times fewer requests than that.
bobmcnamara
Exponential backoff!
ndiddy
This makes it much more expensive for them to scrape because they have to run full web browsers instead of limited headless browsers without full Javascript support like they currently do. There's empirical proof that this works. When GNOME deployed it on their Gitlab, they found that around 97% of the traffic in a given 2.5 hour period was blocked by Anubis. https://social.treehouse.systems/@barthalion/114190930216801...
dragonwriter
> This makes it much more expensive for them to scrape because they have to run full web browsers instead of limited headless browsers without full Javascript support like they currently do. There's empirical proof that this works.
It works in the short term, but the more people that use it, the more likely that scrapers start running full browsers.
SuperNinKenDo
That's the point. An individual user doesn't lose sleep over using a full browser; that's exactly how they use the web anyway. But for an LLM scraper or similar, this greatly increases costs, partially rebalancing the power/cost imbalance. At the very least, it encourages innovations that make scrapers externalise costs less, by not rescraping things over and over again just because the weight of doing so is borne by somebody else. It's an incentive correction for the commons.
sadeshmukh
Which are more expensive - you can't run as many especially with Anubis
perching_aix
Nothing. The idea is instead that, at scale, the expense of solving the challenges becomes too great.
userbinator
This is basically the DRM wars again. Those who have vested interests in mass crawling will have the resources to blast through anything, while the legit users get subjected to more and more draconian measures.
SuperNinKenDo
I'll take this over a Captcha any day.
userbinator
CAPTCHAs don't need JS, nor does asking a question that an LLM can't answer but a human can.
Proof-of-work selects for those with the computing power and resources to do it. Bitcoin and all the other cryptocurrencies show what happens when you place value on that.
ronsor
I know companies that already solve it.
wredcoll
I mean... knowing how to solve it isn't the trick, it's doing it a million times a minute for your firehose scraper.
udev4096
Anubis adds a cookie named `within.website-x-cmd-anubis-auth` which scrapers can reuse rather than solving the challenge more than once. Just have a fleet of servers whose sole purpose is to solve challenges, extract the cookies, and make sure all of them stay valid. It's not a big deal.
creata
Why is spending all that CPU time to scrape the handful of sites that use Anubis worth it to them?
vhcr
Because it's not a lot of CPU: you only have to solve it once per website, and the default policy difficulty of 16 for bots is worthless because you can just change your user agent and get a difficulty of 4.
mushufasa
I looked through the documentation and I've come across a couple sites using this already.
Genuine question: why not leverage the proof-of-work challenge literally into mining that generates some revenue for a website? Not a new idea, but when I looked at the docs it didn't seem like this challenge was tied to any monetary coin value.
This is coming from someone who is NOT a big crypto person, but it strikes me that this would be a much better way to monetize organic high quality content in this day and age. Basically the idea that the Brave browser started with, meeting its moment.
I'm sure Xe has already considered this. Do they have a blog post about this anywhere?
Related Anubis: Proof-of-work proxy to prevent AI crawlers (100 points, 23 days ago, 58 comments) https://news.ycombinator.com/item?id=43427679