wolfi1
Is there already an AV2 encoder for FFmpeg?
dirasieb
I don't even own a dGPU capable of encoding AV1 and they're already coming out with AV2?
dylan604
If the thing you are working on is already out of date to the point that there's significant work on version 2, why would you continue?
hulitu
> AV2 is the next-generation video coding specification from the Alliance for Open Media
Oh no. Not another one. I presume this one makes lossy compression better, or faster, or both.
delfinom
*looks at whether AV2 is dead in the water*
https://www.sisvel.com/insights/av2-is-coming-sisvel-is-prep...
yep
Telaneo
They've done the same thing with AV1, and I can't see that having prevented adoption, nor can I imagine Sisvel wanting to poke the bear that is AOMedia unless they're certain their case is absolutely watertight.
walrus01
I see zero public evidence that they've filed any lawsuits against the members of AOM in any jurisdiction. I'm sure there's been a lot of threatening letters sent...
ronsor
This is a thinly veiled extortion racket and any competent system would fine them into bankruptcy.
mort96
We need a more efficient way to eliminate bullshit patents or bullshit patent infringement claims than "violate them then spend millions on lawyers to fight them in court".
imtringued
The disgusting part is that they are proud of how complicated and exploitable this patent situation is, acting as if they were the key experts in developing codecs when they are just experts in gating access to them. Like, their entire business model is based on negating the value of the inventions.
walrus01
Sisvel is a patent troll. Take a look at the combined list of all companies that are the AOM and tell me with a straight face that all of their corporate in house counsel specializing in intellectual property law are wrong.
asveikau
I don't know this stuff super well but I imagine it's not necessarily about the lawyers being right or wrong so much as what they can convince people of. The ideal scenario for the patent troll is they can intimidate you into licensing with them. Another good outcome for them (though more costly) is they can convince some non-expert in court. In either case the big players behind the codec can defend themselves but a small one just picking it up downstream as OSS can't.
shmerl
Trolls will always be trolls. The need to fight them just shows the need to reform the garbage patent system to make sure no one can ever patent software.
BLKNSLVR
You can tell Sisvel are a bunch of grifters by the fact that they use light grey text on a slightly less grey background.
Aesthetics over function; style over substance. If that's their web design policy it's likely their policy in all other aspects.
I'm also not sure that they're aware that intellectual property rights no longer exist in the US. If AV2 was vibe coded, there would be no case.
astrange
> If AV2 was vibe coded, there would be no case.
…for copyright. Not for anything else. Patents would still apply.
tensor
Not on topic, but wow the internet has very quickly devolved into: click -> "making sure you're not a bot", click -> "making sure you're a human", click -> "COOKIES COOKIES COOKIES", click -> "cloudflare something something"
thresh
We had to set it up on the parts of VideoLAN infra so the service would remain usable.
Otherwise it was under a constant DDoS by the AI bots.
hectormalot
Maybe I'm naive about this, but I didn't expect AI scrapers to be that big of a load. I mean, it's not like they need to scrape the same site at 1000+ QPS, and even then I wouldn't expect them to download all the media and images either.
What am I missing that explains the gap between this and “constant DDoS” of the site?
thresh
You can't really cache the dynamic content produced by forges like GitLab or, say, web forums like phpBB, so every request goes through the slow path. Media/JS is of course cached on the edge, so that's not an issue.
Even when the amount of AI requests isn't that high - generally it's in the hundreds per second, tops, for our services combined - that's still a load that causes issues for legitimate users/developers. We've seen it grow from somewhat reasonable to pretty much 99% of the responses we serve.
Can it be solved by throwing more hardware at the problem? Sure. But it's not sustainable, and the reasonable approach in our case is to filter off the parasitic traffic.
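As a rough sketch of that split (purely illustrative, not VideoLAN's actual config; upstream names, paths, and limits here are made up), an edge proxy can cache the static assets and rate-limit the uncacheable dynamic paths:

  # Hypothetical nginx sketch: cache static assets at the edge,
  # throttle the dynamic slow path. These directives live in http {}.
  proxy_cache_path /var/cache/nginx keys_zone=edge_cache:10m;
  limit_req_zone $binary_remote_addr zone=dynamic:10m rate=5r/s;

  upstream backend { server 127.0.0.1:8080; }  # placeholder app server

  server {
      listen 80;

      # Media/JS/CSS: cacheable, so repeat requests never hit the backend.
      location ~* \.(js|css|png|jpg|webm)$ {
          proxy_pass http://backend;
          proxy_cache edge_cache;
          proxy_cache_valid 200 7d;
      }

      # Forum threads, GitLab pages, etc.: per-request work that can't
      # be cached, so cap how fast any one client can hit it.
      location / {
          proxy_pass http://backend;
          limit_req zone=dynamic burst=20 nodelay;
      }
  }

Per-IP limits only go so far against distributed scrapers, which is why challenge pages end up in front of the dynamic paths anyway.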
nijave
I think there are a few things at play here:
- AI scrapers will pull a bunch of docs from many sites in parallel (so instead of a human request where someone picks a single Google result, it hits a bunch of sites)
- AI will crawl the site looking for the correct answer which may hit a handful of pages
- AI sends requests in quick succession (big bursts instead of small trickle over longer time)
- Personal assistants may crawl the site repeatedly scraping everything (we saw a fair bit of this at work, they announced themselves with user agents)
- At work (b2b SaaS webapp) we also found that the personal assistant variety tended to hammer really computationally expensive data export and reporting endpoints generally without filters. While our app technically supported it, it was very inorganic traffic
That said, I don't think the solution is blanket blocks. Really, it's exposing that sites are poorly optimized for emerging technology.
Y-bar
They are a scourge: they never rate-limit themselves, there are a hundred of them, and a significant number don't respect robots.txt. Many of them also end up on our meta noindex,nofollow search pages, leading to cost overruns on our Algolia usage. We spend way too much time adjusting WAF rules and other bot controls.
eks391
You've gotten several comprehensive responses so far, and I want to add a niche corner that people might assume doesn't have the bot problem but still does.
I run a website that hosts tools for my family: games and a TV interface for the kids, remote access to our family cloud and cameras, etc. Sensitive things require log in and have additional parameters required for access of course.
I specifically block search engine bots so my site is never indexed (I'm not selling anything and don't want any attention), plus some other public non-malicious bots in case they communicate with Google, just to be safe, and my robots.txt doesn't allow anything.
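(For reference, a deny-everything robots.txt is just two lines; the catch, as this thread illustrates, is that only well-behaved crawlers honor it:)

  User-agent: *
  Disallow: /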
I assume, then, that the only way a bot could even find my site is to do what the indexers do: brute-force every possible IPv4 address hoping to hear something back, since my domain shouldn't be known (and isn't simple enough to be quickly guessed). So most traffic must be malicious or indexing (AI overviews and other scrapers won't find it via web search).
Since the site isn't indexed, and keeping everything in simple black-and-white boxes, my remaining traffic is either family or malicious bots, and 99.9% isn't family.
I currently have the strictest bot-blocking setup I could come up with, which nicely cut down on quite a bit of traffic, but I still receive ~2k attempts per day, which, as you can imagine, is still around 99% illegitimate, as I have fewer than 20 kids and my kids aren't using the site nonstop.
Conveniently, my setup has never accidentally blocked a family member, so I'm pleased with the setup.
eipi10_hn
Yes, it's that BIG of a load: https://status.sr.ht/issues/2025-03-17-git.sr.ht-llms/
nijave
While I do sympathize with the AI DDoS situation, it'd be nice if there were a solution that lets them work so they can pull official docs.
For instance: MCP, static sites that are easy to scale, or a cache in front of a dynamic site engine.
thresh
Of course, static websites are the best solution to that problem.
Our documentation and main website are not fronted by this protection, so they're still accessible to the scrapers.
nerdralph
I highly doubt there is no other technically feasible option to block the AI bots. You end up blocking not just bots, but many humans too. When I clicked on the link and the bot block came up, I just clicked back. I think HN posts should have warnings when the site blocks you from seeing it until you somehow, maybe, prove you are human.
goobatrooba
I'm sure there are many solutions for many problems, but expecting a small Foss development team to know or implement them all is rather unreasonable.
I think the world gains more if the VideoLAN team focuses on their amazing, free contribution to the world than if they spend the same time figuring out how to save you two clicks.
We all hate that this is happening, but you don't need to attack everyone that is unfortunately caught up in it.
overfeed
> I highly doubt there is no other technically feasible option to block the AI bots.
If you have discovered such an option, you could get very wealthy: minimizing friction for humans in e-commerce is valuable. If you're a drive-by critic not vested in the project, then yours is an instance of talk being cheap.
thresh
I'm all ears on how we can fix it otherwise.
Keep in mind that those kinds of services:
- should not be MITMed by CDNs
- are generally run by volunteers with zero budget, money- and time-wise
notenlish
Nearly every single website I'm not logged into these days wants me to "confirm I'm not a bot".
It's incredibly annoying, but what can you do? AI scrapers ruined the web.
port11
The internet is such a Tragedy of the Commons… the citizens who act selfishly and in bad faith will slowly make it unusable.
honktime
It's pretty explicitly not a tragedy of the commons. It's a tragedy of the ruling class abusing the resources of the 'commons' to extract value. There is nothing 'commons' about trillion-dollar companies extracting all available value from the labor of the working class. That's just the tragedy that'll bring about the death of society, the same tragedy that brings about all other tragedies.
throw-the-towel
The commons in question is the internet itself.
amusingimpala75
Thank you for describing the tragedy of the commons
dirasieb
tragedy of the commons with your ideological buzzwords sprinkled in, truly innovative
dyauspitr
There’s definitely lots of problems with the ruling class and wealth disparity. Perhaps the defining problems of our current age.
That being said, so many of the plebs suck. Like 2% will ruin everything for everyone.
esseph
> its citizens that act selfishly and in bad faith will slowly make it unusable
It's rarely been the citizens that have been the problem, but the governments and companies that seek to use the network connection for their overwhelming benefit.
Re (above):
> Not on topic, but wow the internet has very quickly devolved into: click -> "making sure you're not a bot", click -> "making sure you're a human", click -> "COOKIES COOKIES COOKIES", click -> "cloudflare something something"
fastball
wat. The protections in place that the OP is talking about are almost entirely due to (not government and company) bad actors.
codedokode
No, it's because citizens allow themselves to be treated like this.
pixelpoet
No one's even clicking anymore, everything implores me to tap or swipe these days, and everything is optimised for humans with one eye above the other.
Then I press the X to close the all-caps banner commanding me to install the app, upon which I get sent to the app store. Users of the website refer to it as an app.
rayiner
Wow I’m glad it’s not just me. I thought my IP block had gotten caught up in some known spamming or something.
timpera
Their bot-detection page took more than 40 seconds to complete on my low-end smartphone. This sucks.
tomwheeler
At least this one was significantly faster than Cloudflare and required no action on my part.
ZeroGravitas
A little extra context:
Dav1d was the surprisingly fast assembly implementation of AV1 decoding. Even for something written in hand-coded, platform-specific assembly, I think the general impression was that they'd done amazing work chasing down every last bit of potential performance.
It didn't initially exist when AV1 was first rolled out and its arrival was a step change in powering adoption on devices without hardware decoding.
Dav2d is likely to play a similar role, but it exists from the start of AV2 and can build on the work of dav1d, so should have an even bigger effect.
In a weird reverse chicken and egg scenario, having really good software decode that can be deployed will spur on hardware development and adoption due to network effects.
infogulch
AV2 video codec delivers 30% lower bitrate than AV1, final spec due in late 2025 (videocardz.com) | Oct 2025 | 277 points | 223 comments | https://news.ycombinator.com/item?id=45547537
tgsovlerkhgsel
I care much less about bitrates than about hopefully finally settling on one series of "standards". It looks like H.266 is dead in the water (I've barely even heard of it existing), so we might finally settle on AV2 as "the" new standard, rather than the infighting where half the hardware and software supports only the state-of-the-art codec from the H.26x series and the other half only the AVx one...
Beretta_Vexee
Don’t worry, hardware manufacturers are going to keep ripping us off with HDR encoding. HDR10, HDR10+, Dolby Vision, and so on.
Telaneo
Glorious. Really looking forward to seeing how much better than AV1 it actually turns out to be. It's a shame it'll take a while before we'll have a decent encoder (it took an annoyingly long time until SVT-AV1 was usable).
amitbidlan
Mostly ASM for performance-critical paths is a pattern that never gets old. The VideoLAN team did the same with dav1d and it paid off. Curious how much of dav2d ends up staying C as AV2 matures.
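For anyone curious what that pattern looks like in practice, here's a minimal sketch in C (not dav2d's actual code; the function names are made up, and the "AVX2" kernel is a stand-in for what would really be hand-written assembly). The idea is to check CPU features once at init and route hot-path calls through function pointers, with a portable C fallback always available:

  #include <stdint.h>
  #include <stddef.h>

  /* Portable C fallback: always correct on any CPU. */
  static void add_residual_c(uint8_t *dst, const int16_t *res, size_t n) {
      for (size_t i = 0; i < n; i++) {
          int v = dst[i] + res[i];
          dst[i] = (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v); /* clamp to 0..255 */
      }
  }

  #if defined(__x86_64__)
  /* In a real codec this would be a hand-written AVX2 assembly kernel;
   * here it's a placeholder with the same signature. */
  static void add_residual_avx2(uint8_t *dst, const int16_t *res, size_t n) {
      add_residual_c(dst, res, n);
  }
  #endif

  /* The decoder's inner loop always calls through this pointer. */
  static void (*add_residual)(uint8_t *, const int16_t *, size_t) = add_residual_c;

  /* Run once at decoder init: pick the fastest kernel this CPU supports. */
  static void dsp_init(void) {
  #if defined(__x86_64__)
      if (__builtin_cpu_supports("avx2"))  /* GCC/Clang builtin */
          add_residual = add_residual_avx2;
  #endif
  }

dav1d takes this much further, with per-function assembly across x86 and ARM, but the shape is the same: a portable C baseline plus runtime-selected assembly kernels.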
thebeardisred
I've noticed that a number of comments seem to have missed the recursive naming pattern:
dav1d = "dav1d is an AV1 Decoder"
dav2d = "dav2d is an AV2 Decoder"
Just like "GNU's Not Unix".
pkos98
Off topic, but related to the recent GitHub-alternative discussion:
Wow, this GitLab instance looks so much cleaner/simpler and less clunky than my past experiences! It also loaded really fast, on first page load as well as on subsequent actions.
toasty228
GitLab has been better than GitHub in almost every respect for a few years already; I don't understand why anyone still bothers with GitHub at this point.
risho
Is there any understanding of how big an improvement AV2 will be over AV1?
ChadNauseam
About 30% better compression than AV1 at equivalent quality. But it'll be a while before it's a good idea to use AV2 in your home media server. (AV1 is still not that broadly supported)
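To make that concrete, assuming the ~30% figure holds at equal quality: bitrate_AV2 ≈ 0.7 × bitrate_AV1, so a 6 Mbps AV1 stream would need roughly 0.7 × 6 ≈ 4.2 Mbps in AV2, and the same storage or bandwidth holds about 1/0.7 ≈ 1.4× as much video.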
jiffygist
Where do I get example AV2 videos to test it?
shmerl
Nice.
What's the current state of Dolby trying to attack the AV1 ecosystem (Snapchat, more specifically)? I hope there is an organized fightback by AOM against these trolls.
razighter777
Agreed. Software patents were a mistake in general. It is impossible to implement a modern video codec without touching patented work, because of how overbroad and poorly written those patents tend to be.
0x0
Just recently noticed this got posted to deb-multimedia, although I think there is a typo in the package description....
https://www.deb-multimedia.org/dists/unstable/main/binary-am...
... it says "fast and small AV1 video stream decoder"
... should probably be "AV2" ?