/3/ - 3DCG

  • There are 20 posters in this thread.


File: Withspotlights2.jpg (278 KB, 1920x1160)
"Realtime rendering in UE4 is the future of CG"

Why do they say this /3/?

> Lighting in UE4 is black magic fuckery with more bullshit than 1992 POV-ray, bleeding, still needs to be baked
> Transferring animations to UE4 is a pain in the ass, no deformers or constraints, baking animations is still error prone in all DCC packages
> Transferring simulations nearly impossible
> Time and energy wasted transferring assets into Unreal requires a separate technical director
> No custom shaders
> Real hero assets require hacks - geometry limits, texture size limits, no UDIM
The CG industry (and the game industry) are both fueled by hype and hysteria.

I'm in a CG group on Facebook and most of the people there haven't even touched CG or created any content. These groups are also managed by game companies that pump their shitty phone games to the retards over there.
Jesus you opened a thread. Now I'll have to ask for one last time for somebody to help me, I'm fucking PISSED.
And I'd understand if there were some problems with complicated objects and I'd have to be really good at making lightmaps to get it right. But these are fucking basic modular pieces, a wall is literally a single square polygon, and yet I can't snap them together and get lighting without any issues...
Use real time lighting, it will get rid of the need for lightmaps and long ass lighting builds.
You mean dynamic/movable lighting? First, you don't get GI in that case, and second, your lighting just can't be as good. Check out UE4 lighting academy on YT.
The advantage of realtime rendering is the ability to look at stuff in realtime. Duh.

If you need super duper accurate everything, use a raytracer.
Not him, but CryEngine has realtime GI.

From what I understand, CE's SVOGI is a ripoff of nvidia's VXGI but has been more actively developed. It also implements frustum shadows and some other advanced shit, as well as a realtime single light bounce - whereas UE4 bakes light bounces and uses lightmass to sort of simulate them in realtime. But they're impractical for lighting environments bigger than a CoD or Battlefield map. Voxel-based GI is best for achieving quality and performance on a big scale, or for having day/night changes. But it might not look as good as baked lightmass in a small area.
How big is the difference actually between baked lighting and dynamic lighting in UE4? Is it just the GI?

I just want to achieve the best lighting possible, if I can do it without having to fuck with baking, even if it is not the most optimized method (idgaf about a game, this is for a single scene for a showcase), just show me how.

I've been watching some UE4 lighting academy and he always seems to be using baked lighting; archviz guys also use it, so afaik movable lights should probably not be used in this case. Even the guy from the lighting academy said that UE4 is not meant to work like that, but people somehow still manage to get decent looking scenes with dynamic lights only. But it could look much better with baked lights. His words.
Baked lighting increases game size while dynamic lighting decreases performance. Baking involves many maps across many objects/polygons, which can explain why some games with good performance are huge, upwards of 50 GB.
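The size claim above is easy to ballpark. A quick sketch, where every number is an illustrative assumption, not a measurement from any shipped game:

```python
# Back-of-the-envelope estimate of how baked lightmaps inflate a build.
# Mesh count, lightmap resolution, and bytes/texel are made-up inputs.

def lightmap_storage_mb(num_meshes, lightmap_res, bytes_per_texel=4):
    """Uncompressed storage for one square lightmap per mesh, in MB."""
    return num_meshes * lightmap_res ** 2 * bytes_per_texel / (1024 ** 2)

# 5000 static meshes, each with a 128x128 lightmap at 4 bytes/texel:
print(lightmap_storage_mb(5000, 128))  # 312.5 (MB, before compression)
```

Compression and shared lightmap atlases shrink this in practice, but it shows why baked data scales with the number of objects.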

For the archviz guys, there's a few threads in the forum, this is one of them:


It covers tweaking the engine lighting configs to mold it into a would-be renderer like Vray. Real-time performance isn't considered important here. That particular thread is a little outdated now though. The guys you want to follow are Raghu and... I can't remember his name but it starts with a K...

Accidentally deleted: You can find current relevant information towards the end of the thread, starting near June/July, because the lighting system had major features added/tweaked before then.
Thanks, I'll check that thread out, seems like a good source of info.

Basically, in my case I want to create a simple interior, but if I model it as a single piece, my walls will look really bad up close because each polygon wouldn't get much UV space. That's why I need to go modular, so I can have a separate texture map for every wall piece, but now I'm having so many additional problems with it that I don't know what to do anymore.

Maybe I could even model an interior as a single piece, but apply multiple materials to different polygons, so I get more texture space that way, but maybe it's a stupid idea.
Why can't you just use tileable textures like everyone else?
Also, most 'Archviz' guys spend days tweaking the lighting just to render an utterly generic bedroom. Waste of time.
What do you mean by that? I put a wall texture in SP on my model. Export it as 4k. If it goes on a single modular wall, it will be super high res. But if that texture goes on a full interior where every wall takes only a tiny bit of UV space, it will look much lower res. Am I doing something wrong?
But there are games, too. There is no choice but to figure it out.
Games have a different goal tho.

Learn about texel density and tileable textures. The problem you are experiencing is why tileable textures exist.
Texel density is basically the ratio of pixels to size of your UV space. For example, if you have a 1k texture (1024x1024 pixels) and you have a UV space of 1x1 meter, you will have a texel density of 1k pixel/meter. 1k texel density is really good and would be suitable for something you get really close to in your game.
The problem with large meshes is that textures can't be infinitely big. A 4k texture can easily be a couple of MB in size. Now if you have a wall of 4x4 meters and you slap a 4k texture on there, it's gonna look nice and crisp, because you're still getting your texel density of 1k pixel/meter. But you just spent like 4 MB on a fucking wall. So what happens if your wall is bigger? What if it's 4x8 meters? Now you have two choices: either you fit all of the wall in your UV space - your 4k texture will now cover all of the 8 meters, and your wall will look like it has a lower resolution because you just halved your texel density (4k is still 4k, it just stretches over a larger area now) - or, to maintain your 1k texel density, you use an 8k texture. And so on.
Since that's obviously not reasonable, you use tileable (seamless) textures. For example, to achieve 1k texel density on a 4x4 wall, you could just use a 1k texture and tile it 4 times. Your UV shell would be 4 times as large as your 0-1 UV space. 8 times as large for your 4x8 wall. By using a tileable 1k texture, you could maintain your tex density on your entire wall, while using less memory but actually achieving a better (sharper) look.
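The arithmetic in the two posts above can be condensed into a tiny sketch (the function name and numbers are just for illustration):

```python
# Texel density = texture pixels per meter of surface covered.
# Numbers mirror the wall examples in the post.

def texel_density(texture_px, surface_m, tiles=1):
    """Pixels per meter when a texture spans surface_m meters, tiled `tiles` times."""
    return texture_px * tiles / surface_m

# 4k texture over a 4 m wall: the full 1024 px/m ("1k") density
assert texel_density(4096, 4) == 1024
# Same 4k texture stretched over 8 m: density halves
assert texel_density(4096, 8) == 512
# A 1k tileable texture repeated 4 times over 4 m keeps 1024 px/m
assert texel_density(1024, 4, tiles=4) == 1024
print("texel density checks pass")
```

The last line is the whole argument for tileable textures: same sharpness as the 4k version at a sixteenth of the pixel count.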

There are infinite UV spaces next to the one you see in whatever software you use. One 0-1 space is your entire texture, so if you would be making a modular wall system for example, you would have to make sure that the UV shell of the module fits exactly into the 0-1 space. Otherwise you will see seams when you put the modules together.

Anyway, because of the texel density thing, software like Substance Painter isn't really suitable for texturing large meshes. If you fit all your shells into the 0-1 space, you will either need fuckhuge textures to maintain tex density, or you will end up reducing your density. Assets that are in the same environment should have the same or very similar texel densities, otherwise there will be noticeable differences in quality/resolution. By same environment I mean close to each other. A background object that you will never see will obviously not need as good a tex density as your main character's guns, for example.
Anyway, you can, in theory, import meshes with tiled UVs into SP. But when you paint on one tile, it's gonna paint in all of them. And obviously, you can't bake tiled maps.
What you could do is use several texture sets so you get a bit more UV space per texture. For example, you're making a modular wall with a window in the middle. The UV of the wall itself would need to fit exactly into your UV space so that your modules won't have any seams. But that would leave very little space for your window - so you could make a second texture set for your window so that it has its own UV space. Obviously, the drawback to that is that you will need more textures and shaders, so it will cost more memory.

For texturing large meshes, look into Substance Designer to create tileable textures.
I don't know much about the Unreal engine but judging by this and other threads that I've seen here and on their forums, it attracts a lot of people who have absolutely no business with computers.
It's free and there is very impressive art created with it. Naturally it's going to attract a lot of noobs and tards.
One more thing about your 4k wall. That is an absolute waste of memory.
Everything you see in your game has to be loaded into the vram of your GPU. A 4k map with 4+ MB may not sound like much, but that shit adds up fast.
Go into Unreal, open a 4k texture and look at the memory requirement. Then change its max in-game size to 2k, 1k and so on and see the difference it makes.
Keep in mind that the average GPU on Steam has less than 4GB of vram.

I know you probably don't care about performance right now but it's always a good idea to learn good habits.
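To put rough numbers on the VRAM point above, here's a back-of-the-envelope sketch. The bytes-per-pixel figures (4 for uncompressed RGBA8, 0.5 for DXT1-style block compression) and the ~1/3 mip-chain overhead are textbook approximations, not values read out of Unreal:

```python
# Rough VRAM cost of a square texture, uncompressed vs block-compressed.
# Real engines stream and pack textures differently; this is only a ballpark.

def texture_mb(res, bytes_per_px, mipmaps=True):
    base = res * res * bytes_per_px
    factor = 4 / 3 if mipmaps else 1  # a full mip chain adds ~1/3 on top
    return base * factor / (1024 ** 2)

print(round(texture_mb(4096, 4), 1))    # ~85.3 MB uncompressed 4k
print(round(texture_mb(4096, 0.5), 1))  # ~10.7 MB block-compressed 4k
print(round(texture_mb(2048, 4), 1))    # ~21.3 MB - halving res quarters the cost
```

That last line is why dropping the max in-game size one notch is such an effective memory lever.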
One of the funniest threads I've seen was about some guy complaining that his room model, which was full of T-junctions, would not light correctly. The rest of the thread was other people agreeing that the problem was due to "slight differences between threads" and the solution would be to render with a single core machine.
Thanks. I actually know about texel density, but as you said, I can't do it in SP so I decided to build modular walls instead. It also makes it easier to change things later on. But then again, fuck that when I can't light it properly now...

I know, that's why I explicitly said that this is for a single interactive scene, not for a game. I want to achieve the best visuals possible in UE, and for a scene of that size, this won't affect the performance. I'm not making a game, that would be a completely different story then.

That's a silly solution, but I mean, "slight differences between threads" IS what is causing this to happen.
If you wanna see some really dumb people go into facebook ue4 groups.

Half the threads are people asking if their computer can run ue4, the other half are people who can't into UV mapping and wonder why their shit is all messed up...
You can actually have custom shaders, although you have to recompile the engine to get them in
But you can't take advantage of the Material Editor in this case.
> Lighting in UE4 is black magic fuckery with more bullshit than 1992 POV-ray, bleeding, still needs to be baked
Watch the training course on UE4's youtube channel. It's not that hard and it addresses your exact issue.
> Transferring animations to UE4 is a pain in the ass, no deformers or constraints, baking animations is still error prone in all DCC packages
Never had any problems exporting with blender but I haven't exported a lot of animations.
> Transferring simulations nearly impossible
I've seen it done, alembic should make it easy
> Time and energy wasted transferring assets into Unreal requires a separate technical director
> No custom shaders
You what now?
Custom shaders can use the material editor
I just don't understand why those lightmap issues are not their top priority. They have been referring to it as a "current" problem for years now. Now I wonder how many UE4 games used fully modular assets and how they got around this issue. If the solution is to "just make some walls longer", that will just complicate everything and increase the amount of time needed to model all those different modular meshes.
File: nogi.png (351 KB, 1191x530)
What is your opinion on dynamic indirect light? It has to be turned on in console settings atm since they are working on it, but it gives you some GI for movable lights. Is this at least a passable replacement for baked lighting?

The first pic is without GI (single spotlight in a long corridor); the second will be with dynamic indirect light turned on.
File: withgi.png (515 KB, 1192x533)
And this is with it turned on. I had to place a directional light outside of the corridor. I can turn up the strength, but I put it on minimum.
In that scene you need to add lightmass "portals" so more light rays get shot through doors and narrow hallway openings...

That happened because not many light rays get thrown into those closed areas, so the lightmap doesn't receive enough information, giving the appearance of a low-res lightmap.
File: ok.png (523 KB, 1194x538)
Maybe it's too low, so here's one more.
In terms of quality,
Stationary > Static > Dynamic

In terms of performance (better to worse)
Static > Stationary > Dynamic

Therefore, you should use stationary as much as possible, and only use dynamic when you really need it (such as movable objects and shit).

Seriously, read the UE documentation. If you read everything under the lighting section you will get a fairly good idea of how it works and how to use it.
File: see.png (420 KB, 1194x797)
I'm reading it and watching those training videos on YT, but I can't solve that stupid modular problem. I'll have to redo all of my walls apparently, but I'm trying to find a way to not have to do that. That's why I'd rather just use dynamic lights because baking this shit is pain in the ass. But it also happens with modular meshes from the starter content, so it's not like I can't into lightmapping.
can you post your lightmap uvs?
File: uvlightmap.png (173 KB, 776x745)
Can't be simpler than this.

1 - Make sure you are using at least a 64 lightmap resolution for those planes (to ensure a minimum of quality, even tho they are simple af)
2 - In your lightmass settings (should be in World Settings I guess) decrease "static lighting level scale" to something like 0.1
3 - Increase "indirect lighting quality" to something like 2
4 - Set "indirect lighting smoothness" to around 0.5/0.6
5 - Just to see how it really looks, when baking choose "production quality" instead of "preview"
6 - If you still get seams, leave some sensible padding between the UV shell and the edge of the 0-1 UV space border, reimport, and rebake.
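Step 6's "sensible padding" can be put in numbers. A sketch assuming a rule-of-thumb safe margin of about 2 lightmap texels (an assumption, not an engine constant):

```python
# Padding between a UV shell and the 0-1 border should be at least a couple
# of lightmap texels, or bilinear filtering bleeds across the seam.
# The 2-texel margin is a rule of thumb, not something Unreal enforces.

def min_uv_padding(lightmap_res, safe_texels=2):
    """Minimum shell padding as a fraction of the 0-1 UV space."""
    return safe_texels / lightmap_res

print(min_uv_padding(64))   # 0.03125 -> ~3% of UV space at a 64 lightmap
print(min_uv_padding(256))  # 0.0078125 -> padding can shrink as resolution grows
```

The takeaway: the lower your lightmap resolution, the proportionally bigger the gap you need to leave.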
File: kms.png (575 KB, 1196x535)
Thanks, I'll try that.

>pic related
Walls are now actually a lot smoother, but I'm still getting some fucked up edges, and now some of them have a really bold dark line. Although some also look good. Idk.

Just one more thing: does it really make a difference whether I import a single wall plane and build a hall inside the engine, or build a hall in 3ds Max, attach everything, and import it into UE4? I don't see the difference desu. Also, when I attach the meshes, I noticed that I actually have overlapping vertices and my polygons got dark and fucked up (although that doesn't seem to affect the bake, because it was the same even after I fixed it), but it is complicating things.

Also, one sided vs two sided material?
File: yey.png (484 KB, 1199x538)
Woah, I think I fixed it. I accidentally had some meshes flipped, and even though it was a two-sided material, it didn't bake properly. Hmm, I'll have to test it one more time, since maybe it's too much direct light.
File: no.png (489 KB, 1196x537)
Hmm... Maybe I should consider suicide.

This is how it really looks, when only indirect light is illuminating those walls. Fuck me.
File: finito.png (658 KB, 1196x535)
I don't want to blogpost anymore, this is the last one.

Followed this guide >>585085
but also reduced the size of my lightmap UVs this time, so there is plenty of space around the UV island, and I connected the pieces in 3ds Max this time. Even with indirect lighting it finally seems to be working fine. I can still see small light bleed on the edges now (always a different problem, jesus christ...), but I don't mind these too much. They can barely be seen.

Sorry for shitposting, this will maybe be helpful for some lurker.
File: Sin título.png (5 KB, 819x211)
>but also reduced the size of my lightmap now

that is what I meant by "sensible padding"

As a tiny little tip: if you want to hide those corners a little bit more, add AO

Also, to avoid bleeding in the corners where the edges of two modular meshes connect, take a look at this pic: try to make them overlap a little

Changing the "lighting bias" setting on your lights to a lower (or higher, don't remember) value can fix it too.
File: overlapping.png (384 KB, 777x734)
Thanks for the tips!

Do you maybe know the proper way of exporting all the modular pieces I connect in 3ds Max together? I just realized that if I attach everything, my lightmap UVs will overlap. This wasn't a problem when I was exporting the same meshes, because they all had the same UV island, but now that I attached a different modular piece (a doorway), I noticed it overlaps with everything else, and now I'm getting completely dark walls on bake; everything gets destroyed. Pic related.
Simply generate the lightmap UVs again; Blender has a button that does it automatically, Max should have it too

OR, when importing the new mesh into unreal tell it to auto generate lightmaps
You don't have to attach it.
Just snap it. Not everything needs to be one connected mesh.

Also, if you're only doing interior scenes, make a lightblocker around your model. Basically an outer shell.
File: why.png (603 KB, 1204x563)
I think (from personal experience and from what I've also heard) that combining meshes will fuck up the lightmaps, but if I import my fbx scene without attaching, all of my modular pieces' lightmaps will have to be rebuilt again. I've also tried auto-generating lightmaps, but it produces a bad result and bake.

Now I'm trying to import it and make it work properly with the "fbx scene import" option, where I can import it as a blueprint. The problem is that, even though the walls keep their relative positions in this case, they are not aligned vertically, so I get pic related.

So much time wasted just to get clean wall edges... Terrible.
File: damn.png (800 KB, 1201x567)
It fucking works.
ayy congrats it's looking good!
This is not a UE issue, you just don't know what the hell you're doing. I ran into the same things tho, so don't worry. But Google would probably be a better way to get solutions here.

Anyway, merging meshes in UE should not mess up lightmaps. Combining them on import probably will tho - for obvious reasons: there is only one channel for lightmaps, so they all go into the same texture space. What you need to do is put the lightmaps together in Max before exporting your meshes. Say you build a robot out of a couple of pieces that you don't want to weld because you want to pose them later. You would first do the UVs for each individual piece so it's easier, but then you're gonna have to select all pieces and put all the UV shells in the same texture space so you can have one texture across the entire robot. Same goes for lightmaps. Then you can pose it in Max and combine the meshes on import into UE with proper mapping.
It doesn't make sense to do this for large environment pieces tho, because the lightmaps would end up very small. Just don't attach or weld or combine the modules and the mapping should stay.

The alignment issues are probably because you did not reset transforms. Whenever you mirror, symmetry or otherwise transform objects in Max, Max will remember that information. Sometimes it gets put into the FBX and messes up transforms in other applications. So what you need to do every time you export something is either go under 'Hierarchy' and then 'Reset Transform' and 'Reset Scale', or nuke it from orbit: go to the wrench menu and use Reset XForm. This should also reveal normals that might have been flipped by mirroring but appeared right - a common bug. Reset XForm will put an XForm modifier on your stack, and you still have to collapse that too. But you should collapse your stack anyway before exporting because you don't need that information in UE.
When you did that, you can also freeze the object, this can help as well.

Now your meshes will be in the right place relative to world zero (0,0,0).
But if you import into UE, they will first be in the content browser, and you will have to drag them into your scene. If you want them to be in the right place now, you have to zero them out by putting 0 into the location values in the details panel, or by clicking the little curved arrow next to them (resets to default, which is 0, but that is the right position in world space)
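For anyone fuzzy on what "reset xform" actually does under the hood: it bakes the node's leftover transform into the vertex data and resets the node transform to identity. A conceptual pure-Python sketch of the idea (2D, uniform scale plus offset), not 3ds Max's actual API:

```python
# "Reset xform", conceptually: bake the node's transform into the vertices,
# then set the node transform back to identity. Illustrative sketch only.

class Node:
    def __init__(self, verts, scale=1.0, offset=(0.0, 0.0)):
        self.verts = verts      # local-space vertices
        self.scale = scale      # leftover transform Max "remembers"
        self.offset = offset

    def world_verts(self):
        ox, oy = self.offset
        return [(x * self.scale + ox, y * self.scale + oy) for x, y in self.verts]

    def reset_xform(self):
        # bake: vertices now store their world-space positions...
        self.verts = self.world_verts()
        # ...and the exported node transform becomes a clean identity
        self.scale, self.offset = 1.0, (0.0, 0.0)

wall = Node([(0, 0), (1, 0), (1, 1)], scale=2.0, offset=(5.0, 0.0))
before = wall.world_verts()
wall.reset_xform()
assert wall.world_verts() == before                    # geometry looks identical
assert (wall.scale, wall.offset) == (1.0, (0.0, 0.0))  # no hidden transform left
print("reset xform sketch ok")
```

The mesh looks unchanged in the viewport, but the FBX now carries no stale transform for UE to misinterpret. (A negative scale from mirroring is the case that also flips normals, which is why resetting can reveal them.)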
You're wrong. Only single-core computers can correctly render lightmaps, because multi-core processors have slight differences between threads, and even AAA games suffer from that. It has been proven by people with a fancy title on UE forums, so it's true.
Then what would be the solution?
Oh fuck off

Even if that may be the case, it's certainly not the issue OP is having.
The government would like you to believe computers are deterministic machines, but UE forum users uncovered the truth.
Thanks, at first I wasn't sure whether I'd made a mistake again, but those really are stationary lights, and even when I change their strength, the indirect light on the walls far away still doesn't create seams, so everything is good now, finally.

Thanks for the explanation, too. I was just tired of going through UE forum threads so I had to ask here.

>Combining them on import probably will tho - for obvious reasons, because there is only one channel for lightmaps so they all go into the same texture space.
Yeah, this is what I actually meant, "combine meshes" on import puts everything in the same texture space like here >>585113.

Here's a short summary of how I did it this time, maybe somebody will find it helpful:

3DS Max workflow
>setup lightmaps for each modular piece
>snapped them together with instancing
>without attaching or grouping anything, exported as fbx and CHECKED "preserve instances"

UE4 workflow
>used "FBX SCENE IMPORT" from the UE menu, NOT regular import (this is a big difference and I only found out about it yesterday evening)
>there I chose that it imports the fbx scene as a BLUEPRINT
>this way the scene automatically imports in the scene like this >>585130, AS A SINGLE OBJECT
>it also provides you with a blueprint where you can reposition all the pieces from your original scene in 3DS Max
>when you position them, you just compile it and the object in the scene readjusts to match the blueprint

Because in the scene all the modular pieces are combined into a single object, baking doesn't create those problems with seams, but you can still snap everything and edit inside of the blueprint.

This actually surprises me a bit, because the guys from Epic were saying you can't do anything about those seams and shadows when you use a very modular kit, but this method obviously fixes everything and you can still make your environment as modular as you like. Their solution was to build larger modular pieces.
>The alignment issues are probably because you did not reset transforms.
And yeah, I completely forgot about that, thanks for reminding me. Because of this I had to reposition everything manually, but no big deal.

Watch from 19:55:


>"We don't recommend overmodularizing your level"

See. But the method I used seems to solve this problem. Strange.
dunno, my university is pushing Hellblade and their fucking shitty realtime shit
>dunno, my university is pushing Hellblade and their fucking shitty realtime shit
What do you mean?
>> Time and energy wasted transferring assets into Unreal requires a separate technical director
this is the biggest reason we still use Unity. Many of us, most of us even, agree that moving to UE4 would be great, but the amount of man-hours that would need to be spent converting assets is off the charts
What the fuck do you need to convert? What is so problematic about importing assets into Unreal?
I mean, custom shaders are possible if you are willing to edit the Engine's source code or get your hands dirty in the material editor.

I accomplished a toon shader with shadows without doing any modifications to the engine, and by using the forward renderer.
your mesh is not optimized for what you are trying to do
watch the lighting videos or just do this simple thing
Create 1 (one) mesh instead of the 9 you currently use
how optimized is it
On the level shown in the screencap, it runs at about 500 FPS with Epic settings and an uncapped framerate on a GTX 1070 and i5-7600k

How do you uncap frame rate? I thought max was 200
I believe you go into the general settings and uncheck smooth framerate.

As for the max framerate, I'm not sure how I hit 500FPS.
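For reference, the usual way to uncap the framerate, worth double-checking against your engine version since menu paths shift between releases:

```
Project Settings > General Settings > Framerate: uncheck "Smooth Frame Rate"
In-game console: t.MaxFPS 0   (0 removes the cap)
```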
