Just watched 5 mins of anime with info pictured. In that time I didn't see any issues.
What valid excuse do fansubbers have to not be using this?
To not be using what?
For the time being, 10-bit H.264 gives you better results at the same (sane) bitrates while being many times faster to encode. Using HEVC is pointless until the encoding applications get better.
I never see episodes that are 10-bit H.264 flac 2.0 and are only 204MiB.
>while being many times faster to encode
are you not giving heaps of weight to this instead?
>I never see episodes that are 10-bit H.264 flac 2.0 and are only 204MiB.
Fansubbers actually care about quality, unlike idiots who re-encode HS video to HEVC or do something equally stupid.
>Fansubbers actually care about quality
most of them don't particularly, but fair enough.
Still doesn't address 'are you not giving heaps of weight to this instead?' though.
If you want a timely fansub, HEVC is not the way to get it.
And is it actually better? The last time I checked, it was comparable to 10bit video encoded with x264, but not really any better.
>If you want a timely fansub, HEVC is not the way to get it.
I'm not asking horriblesubs to come out and use it, but HEVC takes 3 times longer than 10-bit H.264.
The info in OP is from a BD encode, so timely is relative as well.
So I just made a fresh compile of x265 from master. Let's see what happens when we pit it against 10-bit x264, shall we?
This might take a while, though... x265 is currently going at ~0.28 fps and dropping for preset placebo 1080p video at CRF 28 (the default).
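A quick sanity check on that fps figure (the episode length and frame count here are my assumptions, not from the encode itself):

```python
# Back-of-the-envelope: at ~0.28 fps, a typical ~24-minute episode
# (assumed ~34,500 frames at 23.976 fps) really is a day-plus encode.
frames = 24 * 60 * 23.976      # ~34,525 frames
hours = frames / 0.28 / 3600   # encode seconds, converted to hours
print(round(hours, 1))         # roughly a day and a half
```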
Because 1080p is almost always an upscale on currently airing shows, and FLAC bloats the file size to shit.
If you care that much, download the higher quality raws yourself and add the subs later. You probably won't even have to resync
Congratulations on missing the point of the thread and giving bad advice afterwards.
Sounds good, keep us updated.
x265 is slow as fuck (day-plus encodes) for 5-10% improvements. DivX HEVC is reasonably fast but also hilariously ugly, and only minimally tweakable.
There are solutions to problem a), but they would involve spending a few bucks of donation money and changing a workflow everyone's had since the mid-00s, so no one really cares. Fansubbing is dead anyway.
Could the encoding be done via cloud platform which utilizes the power of multiple computers?
Does everyone here always visit Doom9?
>Not downloading raws and the .srt so you can disable the subs when you want a screenshot.
>this is actually what most non-English speakers do with movies.
step it up
>so you can disable the subs when you want a screenshot
You do know you can do that with internal softsubs too, right?
You can turn off softsubs even if they are part of the file.
This is just dumb. Your newness is showing.
I bet he thinks all fansubs are deceased ocean-dwellers.
I'm not sure what this has to do with OP, please explain.
>press S to turn off subs
so hard, right?
It's the same thing as with the subtitles in a separate file. Surely you wouldn't rename the .srt just to make a screenshot?
I believe he means we can bypass the fansubbers' encodes that way.
Not really, W simply works better for disabling subs.
With madVR and XySubFilter screenshots don't have subs when taken in full screen and do when in window mode. Pretty convenient.
You could cut the time by around half with creative scripting and the current process.
Beyond that, you'd need to do one of three things:
1) Teach old dogs new tricks - have the encoder manually identify scene changes and add them to the list of "safe" places to split up the encode that your script has already extrapolated from the cuts in the .avs.
2) Parallelize even more to compensate for time spent on a pre-encode workraw and analysis pass
3) Just split it up wherever the fuck, which creates its own inefficiencies unlikely to appeal to someone who's doing all this work for minimal gains just to be "right".
Unfortunately, cutting by half is still 4-6x longer than just slopping x264 at it, and that's a nonstarter for something that would just be done for bragging rights over the competition. There's no competition anymore anyway and if there was it'd mean letting them release a day before you.
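A rough sketch of option 1's splitting step, assuming the scene-change frames have already been collected into a list (the function and the numbers are illustrative, not any group's actual script):

```python
# Hypothetical sketch: partition an episode's frame range into chunks
# for parallel encoding, snapping each boundary to a "safe"
# scene-change keyframe so the pieces concatenate cleanly.
def split_at_keyframes(total_frames, keyframes, workers):
    target = total_frames / workers
    bounds = [0]
    for i in range(1, workers):
        ideal = round(i * target)
        # snap the ideal boundary to the nearest safe cut point
        snap = min(keyframes, key=lambda k: abs(k - ideal))
        if snap > bounds[-1]:
            bounds.append(snap)
    bounds.append(total_frames)
    return list(zip(bounds, bounds[1:]))

# e.g. a ~24-min episode with a handful of known scene changes
chunks = split_at_keyframes(34000, [812, 9040, 17230, 25100, 33000], 4)
print(chunks)  # [(0, 9040), (9040, 17230), (17230, 25100), (25100, 34000)]
```

Each chunk could then go to a separate encoder instance; the snap-to-keyframe step is what avoids the efficiency loss of option 3's "split it up wherever".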
How about striving for quality releases? Or BD encodes where speed is not an issue?
pretty inconvenient by the sound of things
How? I watch things in full screen and usually take many screenshots, which I prefer without subs, so it's convenient for me.
Quality's usually a matter of having the right people on a project, not how long you give them. This is one of the few exceptions, and it's a barely-noticeable one to the kinds of people (either translators, editors, or social butterflies who just happened to have brought everyone together) that lead groups. So when it comes time to decide, it's really hard to tell TL and edit (usually yourself!) that their work will get far less exposure in order to make sure that 50 or 100 videophiles think the encoder's "amazing" rather than just "good".
It also tends to reduce the quality of the other components, because they just plain quit caring as much and because a constant drumbeat of short deadlines is the best learning tool for overall quality (even if it does mean things slip short-term.)
BDs are a whole other can of worms where I'm nowhere near as familiar with the psychology of people that do them, maybe someone else can chip in?
>So I just made a fresh compile of x265 from master. Let's see what happens when we pit it against 10-bit x264, shall we?
I sure hope you're using 8bit x265, because last time I checked Main10 support was completely messed up (broken rf scale, broken aq, broken everything) resulting in inferior quality even at equal or higher bitrates compared to regular 8bit encodes.
>complaining about speed
>using preset placebo
You have only yourself to blame.
>What valid excuse to fansubbers have to not be using this?
Yeah, I compiled 8-bit x265. Also, I switched over to the veryslow preset because placebo was just too slow (~0.2 fps; veryslow gives at least ~0.5 fps). x265 has certainly improved quite a bit since the last time I tried it, but banding still seems to be an issue. And speed-wise, 10-bit x264 is still 5-6 times faster than 8-bit x265.
>it's really hard to tell TL and edit (usually yourself!) that their work will get far less exposure in order to make sure that 50 or 100 videophiles think the encoder's "amazing" rather than just "good"
How has anime ever progressed from DivX then? Serious question.
>but banding still seems to be an issue
Is this not something that can be fixed by either a) waiting for better x265 10-bit support, or b) fixing up your script?
Can you also elaborate on how x265 is better than last time you tried it (and when was that)?
>banding still seems to be an issue
You can try feeding it material with ordered dithering, but the result's still not great.
Using Main10 actually does fix the banding to a certain degree, but outside of gradients its AQ seems to be completely broken for now.
Also in both 8bit and 16bit x265 changing the AQ strength setting from the default had rather devastating results for overall picture quality when I played with it.
ymmv, ianal, etc.
>You can try feeding it material with ordered dithering
Gradfun3 uses ordered dithering by default, and that's what I'm feeding it (I cut a short 250 frame clip from Arpeggio ep1 BD, ran gradfun3 on it and saved it to y4m for feeding into x265 and x264).
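For anyone unfamiliar, here's a minimal illustration of what ordered dithering buys you when reducing bit depth. This is a toy 2x2 Bayer version, not gradfun3's actual implementation:

```python
# Toy 2x2 ordered (Bayer) dither while dropping a high-bit-depth ramp
# to 8 bits: pixels near a quantization boundary alternate between the
# two nearest codes instead of all snapping to one flat band.
BAYER_2X2 = [[0, 2],
             [3, 1]]

def dither_row(values, y, bits_dropped=8):
    step = 1 << bits_dropped  # size of one output step in input units
    out = []
    for x, v in enumerate(values):
        # per-pixel threshold offset in [0, step) from the Bayer matrix
        t = BAYER_2X2[y % 2][x % 2] * step // 4
        out.append(min(255, (v + t) >> bits_dropped))
    return out

ramp = [400 + i for i in range(8)]  # a shallow gradient
print([v >> 8 for v in ramp])       # truncated: all 1s -- a flat band
print(dither_row(ramp, 0))          # dithered: [1, 2, 1, 2, 1, 2, 1, 2]
```

The alternating pattern is exactly the high-frequency noise mentioned above that the encoder then has to spend bitrate preserving.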
>still supporting MultiCoreWare shit
Beware of Daiz, the one mainly responsible for killing x265.cc
MKV+softsubs allowed for a far faster production process.
Once you had a raw, the DivX process was:
encode workraw -> tl -> edit -> encode RC -> qc -> encode final -> release
The MKV process is:
encode premux -> tl -> edit -> mux -> qc -> mux -> release
Each encode process takes a few hours wherein no one else can really do anything. With MKV, a v2 for major errors, which became less frequent as people became more experienced, is also 5 minutes of work and a 40k replacement script rather than another 4-hour encode and another 200mb file to distro.
10-bit H.264 was technically a time loser, but it took encode times from ~2 hours to ~4 hours rather than from ~4 hours to ~1 day. It also happened around the same time as almost everything got captions and Japanese streaming sites got guaranteed insta-raws, so the TL could start right away and finish around when the raw did (4 hours or so total) rather than start when the raw was done and finish 4 hours later (6 hours total).
A lot of the shit you've been writing has little to do with how things actually are.
Wraws are still made today, because they're a fuckton faster to do than waiting for a premux to complete. 10-bit encoding is only about ~30-40% slower, not 50%. Also, everything very much doesn't have captions; they're still more of an exception than the rule for anime. Also, you can't time or typeset to himado raws, and translating can be a pain too due to the horrible video and audio quality, so they're generally only used for preliminary work until an actual wraw is available.
Also, quality is really a group philosophy kind of thing, and encoders very much aim for high quality regardless of how incompatible it is (if we only cared about being okay and fast and compatible then we'd still be using 8-bit H.264).
And competition still very much exists, otherwise we wouldn't have multiple groups working on the same shows.
>10-bit encoding is only about ~30-40% slower, not 50%.
You mean "not 100%"
Uh, yeah. Brainfart on my part.
why do you skip the difficult questions daiz?
Waiting for better x265 10-bit support would work. "Fixing up my script" not so much since I already have pretty much the ideal 8-bit situation (smooth gradients with ordered dithering).
As for how x265 is better than last time, the overall compression quality just seems to have improved and while the banding issue is still present, it's a lot less pronounced than it used to be. I don't remember when exactly the last time I tested x265 was, but it was quite a while ago.
Anyway, the banding issue is a pretty bad one - even if you could get better quality than 10-bit x264 otherwise, the banding sticks out in a rather bad manner, and right now the only way to fix that would be to increase the bitrate... and most likely you'd have to go so high that 10-bit x264 would just be an all-around better idea (and again, much faster).
All in all, we'll definitely move to H.265 (10-bit) at some point, but right now there's just no real reason to switch. On top of the banding and speed issues we also don't have a "stable" spec for muxing HEVC into Matroska yet (though that'll probably be the first thing to be solved out of these).
Okay I'll take the bait.
Let's say you watched a Blu-ray release of a TV show.
Had you watched the non-upscaled, lossy-audio version, you would have had the EXACT SAME "results".
Daiz, do you know if the cuvid option in LAV works with 10-bit? I looked around and saw people saying that it doesn't work, but when I tried it, it didn't have any of the green blocks of shit that would normally happen when trying to play 10-bit video with something not compatible.
100% was (extremely roughly, rounding to the nearest hour) the change we experienced in 2011. Of course, that factors in better debanding that wouldn't be crushed right back out, and doesn't factor in later improvements to 10-bit speed. I've never bothered to pay much attention to 8-bit afterward, but I wouldn't be amazed if the gap was smaller now.
As for the rest, I'm not sure you completely understood my point. The TL's preliminary work should be more than enough for a complete edit and time (and most timing work CAN be done to the split audio perfectly fine, especially with a concurrent stats pass.) This obviated the small (1-2 hours) increase in encode time.
I'm not sure where at all you think you're going with compatibility
Could HEVC have better support for 4:4:4 than 10-bit in its current state?
>On top of the banding and speed issues we also don't have a "stable" spec for muxing HEVC into Matroska yet
what stability issues are you referring to here?
Thanks for this. I didn't really consider the difficulty that hardsubs presented.
In English, please.
>especially with a concurrent stats pass.
What, you can encode normally and get a stats file? I thought that only happened when doing 2-pass.
Also, wouldn't it not have info on all the keyframes due to the keyint being at a normal level?
>banding this, banding that
Care to check the YUV values, see if they change one by one?
I mean: is the banding the result of using limited range rec709, and the only solution to dither? Is undithered RGB smooth enough?
Instead of using 10-bit to solve this (keeping in mind HEVC is hurt less by the limited precision due to assorted improvements), wouldn't it make sense to try other stuff to solve the banding, such as using other colorspaces?
I'm thinking PC.601 for a start (the one JPEG uses - easy to test since it's supported by most tools), and optimally YCoCg(-R).
>roughly the change we experienced in 2011
No, not really. 10-bit encoding was already quite mature in Fall 2011 when the widespread switch happened, the reason we waited until that was because the playback side needed to catch up. The gap hasn't changed much since then.
And you're still speaking as if it were common for groups not to do wraws today (or not to have done them back when we were doing 8-bit H.264) and go straight to premuxes, which, as I already said, is very much not true. Not doing wraws is very much an exception, not a rule.
Also, to get keyframes/a stats file, you pretty much need to run some sort of encoding, and if you're doing keyframes you might as well do actual wraws for people to use. For example, what I do these days is encode both 720p wraw (for typesetters) and 480p wraw (for everyone else) and generate keyframes with the latter.
As for the compatibility part, you wrote this:
>So when it comes time to decide, it's really hard to tell TL and edit (usually yourself!) that their work will get far less exposure in order to make sure that 50 or 100 videophiles think the encoder's "amazing" rather than just "good".
Which is basically saying that subbers would choose compatibility over quality, and that's clearly not true.
So you encode 3 raws in total? 2 workraws and one for release?
I've experimented with doing it the really ghetto way - copy the avs, trim out the filters, throw it into VirtualDub with a higher process priority than avs2yuv/x264.
Of course it will miss keyframes that are due to keyint, but missing those just makes life easier while timing, since you're using it as a proxy for scene changes rather than for video analysis.
Yes. Workraws only take like 10-20 minutes to encode (I do them in parallel), while the actual encode takes hours. It would be just ridiculous to skip the wraws and wait for the full encode for doing stuff like typesetting or anything else.
Oh, that's right, you waited until that fall.
I suppose it's possible to still do workraws, but in my experience it's been very rare to make them rather than premuxes, and it's gotten rarer as content has gotten less dirty. There's a very, VERY small window of usefulness for something that's not much higher quality than a 200mb pubraw but not much faster than a premux, unless you're overdoing the filtering or are working on a BD release where filesizes become an issue.
I suppose it could make typesetting slightly quicker, but I've worked on some real TS nightmares and the only one I've seen it applied to was making WRs for entirely different reasons. For rush typesetting we normally just give the typesetter the TS/avs combo.
And to clarify for you: that's not compatibility over major quality differences, it's significant release time over extremely minor ones. The same decision we make when we Mocha something rather than hand-animate it, or choose veryslow over placebo, or get ep 1 out right away rather than waiting for official character names to show up on toy boxes.
>There's a very, VERY small window of usefulness for something that's not much higher quality than a 200mb pubraw but not much faster than a premux, unless you're overdoing the filtering or are working on a BD release where filesizes become an issue.
You mean besides having the exact same timing / cutting as the final release?
>why do you skip the difficult questions daiz?
Why did you skip reading the difficult posts that already addressed your "difficult" questions? (Namely >>101682889 and >>101683053)
>Could HEVC have better support for 4:4:4 than 10-bit in its current state?
Not really, as the Main/Main10 4:2:0 specs are final while specs for other color spaces are still drafts.
Also how would you even define "better" in a case of apples vs thumbtacks like this?
>what stability issues are you referring to here?
There is no official way of muxing HEVC into Matroska. There's only some unofficial thing hacked together by the DivX people.
So once there's an official spec, files that have been muxed with their current tool might (or might not) end up being out-of-spec, broken and unsupported by demuxers.
>is the banding the result of using limited range rec709
No, it is the result of a multitude of factors, like an immature encoding software making bad decisions, picking the wrong coding tools, quantizing too aggressively, etc. The limited range is just icing on the cake. Remember, we are talking about a transform codec with strict limits on precision here.
>and the only solution to dither?
Well, dithering is part of the solution. Dithering before encoding can alleviate the impact of both limited range and the inherent quantization of 8bit color planes, but requires wasting quite a lot of bitrate in nearly flat areas of the image to preserve the high frequency noise that dithering effectively is.
Dithered color conversion to RGB for playback is also necessary, to reduce the impact of rounding/truncation errors.
>Is undithered RGB smooth enough?
It never is.
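To make the limited-range point above concrete, a quick sketch (the helper names are mine) of how many distinct 8-bit codes each swing actually gives you:

```python
# Studio swing (limited range) maps luma to codes 16-235, full swing
# to 0-255, so an 8-bit limited-range plane has ~220 usable levels
# instead of 256 -- coarser steps, so banding shows up sooner.
def to_limited(v):            # v in [0.0, 1.0] -> studio-swing code
    return round(16 + v * 219)

def to_full(v):               # v in [0.0, 1.0] -> full-swing code
    return round(v * 255)

samples = [i / 4095 for i in range(4096)]
levels_limited = len({to_limited(v) for v in samples})
levels_full = len({to_full(v) for v in samples})
print(levels_limited, levels_full)  # 220 256
```

That's the "icing on the cake" factor: even before the encoder quantizes anything, limited range has already thrown away some precision.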
>It never is.
JPEG won't preserve dither except at the crazy high qualities and you don't see anybody complaining about banding.
Yes. It's rare to need the exact timing/cutting as the final release, in both keyframe data and actual video, a couple hours early, but before the TL's anywhere near done.
If there's something about your process that means the typesetter has a 6pm class and needs to be done by then, or the encoder has Australian internet and the final encode is going to take eight hours to upload, or you're editing a simulcast and your "TL" is a ripper script that takes thirty seconds, go for it!
>Could HEVC have better support for 4:4:4 than 10-bit in its current state?
H.264 already has separate color plane coding which could be used for efficient 4:4:4 encodes, it's just that x264 doesn't support it since it's part of Hi444PP instead of Hi10P
>It never is.
At 8bit at least.
But yes, in an ideal world we would be transporting video in high bit depth L*a*b* color space.
Of course any hardware designers would kill you if you ever required them to implement support for it.
Bunch of nerds.
fuck off to facebook, normalfag
What timeframe are we looking at for h.265 to have a proper encoder/decoder?
I understand that we're all waiting for x265, but will something else come out before it?
>JPEG won't preserve dither except at the crazy high qualities and you don't see anybody complaining about banding.
It's considerably more rare, but you do, in fact.
Kyoukai no Kanata for example, where some people would take PNG caps to preserve the smoothness of Mirai's pantyhose.