/3/ - 3DCG

Thread archived.

File: Annotation.png (284 KB, 764x934)

Other conferences are cool too.
Saw some cool stuff, talked to a lot of old friends that are somehow still in the "industry".

Also, too many S 0 Y -fed betas running around. What's it with guys balding at fucking 20?
I'd like to go some time.


My favorite paper from this year. I kind of want to implement this, but I'll be limited by the quality of the training data I can generate, so I don't know if it'll be worth it. It's a perfect use of training data for a VFX pipeline, though.
TL;DR it, please? Seems to be some kinda auto-skinning/deformation thingy?
If you watch the video they show the core principles of the paper, which boils down to:
>we can achieve an order of magnitude more performance from complex animation rigs using deep learning, good enough to run them on shitty iPads.
This is probably useless for the film industry because the video states they already get 35FPS using complex animation rigs on high-end PCs, which is all they need for look-dev.
Even further, you still need the fully rigged model before you can approximate it with greater performance, so if your look-dev tells you "the rig is wrong" then you're fucked.
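To make the core idea concrete, here's a toy numpy sketch. All sizes and weights are made up; in the actual paper the weights would be trained on (pose, deformed mesh) pairs exported from the full rig, and runtime evaluation is just a cheap forward pass instead of evaluating the rig.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a rig with 30 pose parameters driving 1000 vertices.
N_POSE, N_HIDDEN, N_VERTS = 30, 128, 1000

# Stand-in for trained weights; the real ones would be learned from
# many (pose, deformed mesh) pairs exported from the full rig.
W1 = rng.normal(0, 0.01, (N_POSE, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.01, (N_HIDDEN, N_VERTS * 3))
b2 = np.zeros(N_VERTS * 3)

def approx_deformation(pose, rest_verts):
    """Cheap stand-in for evaluating the full rig: one small MLP maps
    pose parameters to per-vertex offsets from the rest shape."""
    h = np.tanh(pose @ W1 + b1)             # hidden layer
    offsets = (h @ W2 + b2).reshape(-1, 3)  # per-vertex xyz offsets
    return rest_verts + offsets

rest = rng.normal(size=(N_VERTS, 3))
pose = rng.normal(size=N_POSE)
deformed = approx_deformation(pose, rest)
print(deformed.shape)  # (1000, 3)
```

The point of the sketch: once trained, the per-frame cost is two matrix multiplies, regardless of how expensive the original rig graph was.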

But this could potentially be extremely useful for other real-time applications (games), the problem is whether our other solutions are better or not.
Video games already use blendshapes/morph targets for detailed facial animation, which have evidently proven performant enough for what they want to do, at some cost.
Trained machine learning models are well known for being fuckin' huge both on disk and in memory.
They're also well known for inferencing very fast and if you dump them into Nvidia's new tensor cores they might perform even better.
But this doesn't mean you can suddenly have every side character running complex animation rigs.
First, because of the aforementioned memory issues, and second because other algorithms may wind up optimizing better, even on Nvidia's Tensor/RTX cores.
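For comparison, the blendshape/morph-target path games already use is just a weighted sum of sculpted deltas. A toy sketch with made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical facial mesh: 500 verts, 20 sculpted morph targets.
N_VERTS, N_SHAPES = 500, 20
neutral = rng.normal(size=(N_VERTS, 3))
# Each target is stored as a delta from the neutral shape.
deltas = rng.normal(0, 0.1, (N_SHAPES, N_VERTS, 3))

def blend(weights):
    """Classic morph-target evaluation: a single weighted sum of deltas.
    One tensordot per frame, which is why engines like it."""
    return neutral + np.tensordot(weights, deltas, axes=1)

w = np.zeros(N_SHAPES)
w[3] = 1.0  # fully activate one shape
out = blend(w)
assert np.allclose(out, neutral + deltas[3])
```

The memory cost here is the stored deltas themselves, which is part of the "at some cost" above; the ML approach trades that storage for weight matrices plus an inference step.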
I don't think it's useless for film at all. The application for this is to augment the animation puppets. Lookdev would never happen on a puppet's geo, since that's only ever a proxy representation. They'd always require the full mesh, and if they required it in a deformed pose then it'd need to go through a simulation step anyway.

The nice thing about that method for the big houses is that they already generate tons of simulation data, so this would simply be reusing resources. It also represents a computationally cheap way to get more accurate nonlinear deformations, which is what you want because it'll be a more accurate representation of the simulated mesh. The only puppet setup would be the rigid skinning, something simple and automated at every place I've seen, so it's a time-saver on the rigging side. It looks like it should scale linearly against mesh size and number of puppets in a scene, which is a big thing for me as something that represents nonlinear deformation.

I'd really like to test it for resource usage. That I'm still unsure about, but when I read the paper the overheads looked fairly minor compared to other algorithms that attempt to solve similar problems, like PSD (pose-space deformation).
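Roughly, the cheap rigid layer that would stay on the puppet is plain linear blend skinning, with the learned model only supplying the nonlinear residual on top of it. A toy sketch (made-up sizes, identity bones just to show the mechanics):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: 4 bones, 100 verts, every vert skinned to every bone.
N_BONES, N_VERTS = 4, 100
rest = rng.normal(size=(N_VERTS, 3))
weights = rng.random((N_VERTS, N_BONES))
weights /= weights.sum(axis=1, keepdims=True)  # weights sum to 1 per vert

def lbs(bone_mats):
    """Plain linear blend skinning: the cheap rigid layer that stays on
    the puppet. bone_mats is (N_BONES, 4, 4)."""
    rest_h = np.concatenate([rest, np.ones((N_VERTS, 1))], axis=1)  # homogeneous
    # Per-bone transformed positions, then blend by skin weights.
    per_bone = np.einsum('bij,vj->vbi', bone_mats, rest_h)[..., :3]
    return np.einsum('vb,vbi->vi', weights, per_bone)

identity = np.tile(np.eye(4), (N_BONES, 1, 1))
skinned = lbs(identity)

# The learned model's job is only the residual:
#   deformed ≈ lbs(pose) + correction(pose)
# where the correction is trained against exported simulation data.
assert np.allclose(skinned, rest)
```

Under identity bone transforms LBS just reproduces the rest pose, which is the sanity check above; the interesting (nonlinear) part is exactly what gets pushed into the trained correction.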
Sounds like I'm missing a lot of context on the film industry then.

I've skimmed the paper
>scale linearly against mesh size and number of puppets
The hidden layers are fixed size, so this should be the case.
The paper is more about the optimizations than the technique itself.
Reducing input size by using a model per bone and aggregating small bones to parent bones.
Reducing output size with statistics magic, this part seemed a bit glossed over.
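My guess is the "statistics magic" is something like PCA on the training offsets, so each per-bone model regresses a handful of coefficients instead of raw per-vertex values. Toy sketch with made-up sizes (this is my assumption of the technique, not lifted from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training set: 200 exported poses of a 1000-vert region.
N_SAMPLES, N_VERTS, K = 200, 1000, 16
offsets = rng.normal(size=(N_SAMPLES, N_VERTS * 3))

# PCA via SVD: keep K principal directions and have the network predict
# K coefficients instead of 3 * N_VERTS raw values.
mean = offsets.mean(axis=0)
U, S, Vt = np.linalg.svd(offsets - mean, full_matrices=False)
basis = Vt[:K]                       # (K, 3 * N_VERTS)

coeffs = (offsets - mean) @ basis.T  # what the net would regress
recon = coeffs @ basis + mean        # decoded back to offsets at runtime

print(basis.shape, coeffs.shape)
```

The win is in the output layer: it shrinks from 3 * N_VERTS units to K units per model, and the decode is one small matmul.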

Seems like GPU isn't an option--the model is so small that it's simply faster to compute it on the CPU than to waste time transferring the transforms back and forth over the PCIe bus.
Looking at the sizes of the model (67.5MB total for a 168K vert mesh with 45 models), that's not absolutely enormous--I've seen bigger--but perhaps more memory than some lead software engineers are willing to dedicate to a single character's animation.
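Back-of-envelope on those numbers (just arithmetic from the post; the dense comparison is my own assumption about layer shapes):

```python
# Numbers quoted above: 67.5 MB total, 45 per-bone models, 168K verts.
total_mb = 67.5
n_models = 45
per_model_mb = total_mb / n_models
print(per_model_mb)  # 1.5 MB per bone-level model

# Compare with a single monolithic net whose output layer predicts all
# 168K verts directly: each hidden unit needs 168_000 * 3 fp32 weights.
dense_output_mb = 168_000 * 3 * 4 / 1e6
print(dense_output_mb)  # ~2.0 MB of output weights *per hidden unit*,
                        # so splitting per bone plus the output reduction
                        # is what keeps the total down at 67.5 MB
```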
>7.7ms parallel
could be good
Proudly inventing elegant solutions to non-existent problems since 1973
