/3/ - 3DCG

Thread archived.
You cannot reply anymore.

Any 2D artist here that moved to 3D?

I want to start learning 3D, but the whole medium is incredibly intimidating to me. I think students might be fine, but I'm self-taught and don't know where to even begin.
If this were 2004, when technology was simpler, it might've been easier, but over the last 10+ years a lot of new techniques and methods to create and render a model have appeared, making it more confusing.

And the UI, just baffling. For 2D I can open any random graphics program and understand what each tool does in 5 minutes, but 3D has a steep learning curve.

Some people say 3D is easier than 2D, but I find that hard to believe
>Some people say 3D is easier than 2D, but I find that hard to believe

well, they are right for several reasons.
in 3D you don't have to shade, because the rendering is physically based, meaning it will shade itself depending on the lighting/environment.

2. most of it is technique and going by example. in 2D everyone has their own method of drawing. in 3D there is only 1 way to model everything (2, as a matter of fact, but it's negligible).
in 2D you have to do a lot of concepting, because more often than not you are the concept artist. in 3D you just work from 2D concepts.

now, 2D is easier because you have a lot less to learn. but it depends. if you follow basic tutorials in 3D, you will see that you can get a very good render done in just under 40 minutes. in 2D it depends more on your expertise and how good you are.

texturing is done procedurally. you just get a dirt brush for example and splash dirt on your model. easy right? well not exactly.
some models still require you to handpaint them like in 2D, and this is where 2D skills come in handy.
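The "it will shade itself" point above can be sketched in a few lines: with physically based shading, the pixel color falls out of the surface normal, the light direction and the albedo via Lambert's cosine law, with no manual shading decisions. A toy sketch (the function names are mine, and a real principled shader adds specular, fresnel, roughness and more on top of this):

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(albedo, normal, light_dir, light_color):
    """Diffuse shading: albedo * light * max(0, n.l) per channel."""
    n = normalize(normal)
    l = normalize(light_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(a * c * ndotl for a, c in zip(albedo, light_color))

# A surface facing the light head-on gets its full albedo back...
print(lambert((0.8, 0.2, 0.2), (0, 0, 1), (0, 0, 1), (1, 1, 1)))
# ...and a surface facing away from the light goes black, automatically.
print(lambert((0.8, 0.2, 0.2), (0, 0, 1), (0, 0, -1), (1, 1, 1)))
```

Tilt the normal or move the light and the shading updates by itself; that's the part a 2D artist would otherwise have to paint by hand.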

3D is incredibly easy to get into. Hell, I just made a whole scene with Principled Shaders and HDRI and I just started using Blender 6 hours ago!
Show it. I hope I'm not right about what I'm guessing it is.
>in 3D there is only 1 way to model everything (2 matter of fact, but its negligible).
there are like 3 ways of modelling: low poly, subdiv and boolean. technically low poly can fall under subdiv, but in subdiv you mostly focus on having n-gons rather than triangles, since they will subdivide better than triangles, while in low poly you only want quads and triangles, since you don't subdivide.
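One concrete reason topology matters in a subdiv workflow: under Catmull-Clark subdivision, one level turns an n-sided face into n quads, and every level after that turns each quad into 4, so face counts explode geometrically. A rough face-count sketch (my own helper, not any modeler's API):

```python
def faces_after_subdiv(sides, levels):
    """Catmull-Clark face count for a single n-sided face.

    Level 1 splits an n-gon into n quads; each further level
    splits every quad into 4 quads.
    """
    if levels == 0:
        return 1
    return sides * 4 ** (levels - 1)

# A triangle and a quad after two levels of subdivision:
print(faces_after_subdiv(3, 2))  # 12
print(faces_after_subdiv(4, 2))  # 16
```

So a cage of a few hundred faces can easily render as tens of thousands of faces at level 3, which is why the low-poly and subdiv workflows treat topology so differently.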
File: dke.jpg (172 KB, 1499x710)
lighting has hardly anything to do with 3D. principled is like your most common shader; depending on your renderer there are several others. the rest is basic stuff you can easily do. but again, that's not 3D.
>lighting has hardly anything to do with 3d
[source needed]
File: 1451168514324.jpg (2.91 MB, 4000x2803)
you do realize lighting can be set up in 2D formats, right? especially with photographs. it's not a strict 3D thing. if you come from a 2D background and understand lighting, you can take it to 3D. the most common stuff you will find with lighting is your ambient occlusion, base colors, subsurface scattering, metallic, and specular. that's not all of them of course, depending on the shader. this is just defining the material of the object, so I have no idea why you would bring up a principled shader. when you use an HDRI you don't really have control of lighting. lighting comes from the light objects you create, not an environment. an environment is just a preview of what the object would look like in a diffused state. the light objects are the ones that control your shadows and bounce light, and then your renderer controls the radiosity of those bounce lights. so again, it's not a 3D thing. as long as you understand lighting you can take it to 3D. pic related is 2D.
so again, when we talk about 3D, we're talking about modelling, rigging, animating, texturing and simulating, but not lighting.
File: 1519863436670.jpg (28 KB, 419x420)
ok half of this makes sense and the other half is completely retarded
for instance, of course if you understand lighting you can use it in 2D work or bring it to 3D
but what the hell is about HDRIs being an environment for preview in a diffuse state? that makes no sense whatsoever. An HDRI can be used to simulate a fully lit environment, it has exposure information and can definitely create all the shadows and other qualities (contrast, color tint, etc.) a hand-made setup would do.

Lighting is part of 3D because without it, you don't have an image, just like in photography or cinema. It's used as part of the creative pipeline to showcase your models and interact with all the surfacing work. You might wanna reconsider what a full 3D pipeline is.
>but what the hell is about HDRIs being an environment for preview in a diffuse state? that makes no sense whatsoever
an HDRI is literally an image on a sphere. there are no lights unless you actually implement one, and usually you don't even need to, because all you need is just a simple color to determine the ambiance. so all you will have is diffused lighting, aka soft shadows.

>Lighting is part of 3D because without it, you don't have an image, just like in photography or cinema. It's used as part of the creative pipeline to showcase your models and interact with all the surfacing work. You might wanna reconsider what a full 3D pipeline is.

but you do realize there is compositing, right?
you can model, texture, rig and animate without lighting.
Anon, you lack some fundamental knowledge on the matter.
Check out some tutorials on what are HDRIs and how to use them.
Also, lighting and compositing are somewhat related, but they are two distinct parts of the workflow.
>he thinks HDRI is special
stop lol
not what I said, but lol all you want
for the moment you just look like an arrogant newbie talking out of his ass
File: 1423017713462.jpg (6 KB, 252x240)
holy fuck being this dumb wow
>retard BTFO on lighting
look mate, I'm not gonna reply anymore, cause you're either trolling or dumb as a rock, there's no other option, so.. yeah
never change /3/
File: DK Donkey Kong.jpg (204 KB, 1499x710)
Me on the right
DKE is always in the back of my mind when I'm working on something and then ask online for feedback, but the only thing I get back from posting work is internet slang insults.
What are each of these passes?
If I had to guess:
Detail 1 + AO 1/4
Detail 2 + AO 1/2
AO 1
AO 2
Wings + AO 2
AO 3
Specular 1
Specular 2 + AO 3
Specular 2 + AO 3 (minus values) (tfw looks like a perfect clay render)
Color + AO 3
Color 2 + AO 3 + Specular 2
Color Correction 1 (contrast)
Color Correction 2 (levels) + BG
Color Correction 3 (saturation) + Post Processing 1 (bloom)
Post Processing 2 (tonemapping)
Oh, and the super soft lighting means you don't need to do any shadow/lighting or GI passes. A bit of a cheat, but impressive nonetheless.
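The pass stack guessed at above composites in a pretty standard way: AO passes multiply the color, specular passes add on top, and color correction plus a tonemap run on the result. A toy per-pixel version (the operator choices and function names are my assumption; a real compositor applies these over whole image buffers):

```python
def composite(color, ao, specular):
    """Multiply in occlusion, then add the specular highlight, clamped."""
    return tuple(min(1.0, c * ao + s) for c, s in zip(color, specular))

def reinhard(color):
    """Simple Reinhard tonemap: maps [0, inf) into [0, 1)."""
    return tuple(x / (1.0 + x) for x in color)

base = (0.9, 0.6, 0.3)                        # color pass for one pixel
out = composite(base, 0.5, (0.2, 0.2, 0.2))   # apply AO and specular passes
print(out)
print(reinhard(out))                          # final post-processing step
```

Each line of the pass list is one such operation; stacking a dozen of them is what produces the final frame.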
File: annotate.jpg (3.21 MB, 3000x2102)
>you don't need to do any shadow/lighting or GI passes
that's just a standard diffuse lighting setup, it's easy as fuck to add shadow/light groups to that. but it would make the image more busy. diffuse lighting in itself is good enough.

3D sits at this intersection where you need both the fluid, tactile feel for the subject matter that you have to have highly evolved to do traditional art,
as well as the measured, deliberate, calculating touch of a technician.
To excel at it you need a bit of both in you. People who are really 'left brain' will have to grow a bit of 'right', and vice versa, to do well.

If you overcome this initial fear of something outside your natural waters, the rewards are tools that empower you to do incredible things.
You just have to let go of a lot of the ego of being a traditional artist and embrace the fact that you're to become a mental cyborg,
where part of what you create is you and part of what you create is the computer.

The notion that 3D is somehow easier probably comes from the people who lack hand/eye coordination and a sense of visceral creativity but still have good spatial awareness.
These people usually can do well in tasks like creating architecture or mechanical designs where taking your time plotting thought out curves is the name of the game
Rather than pulling something into existence feeling out the shape with impressionistic lines and lumps as one would while drawing or sculpting something.

As someone who's been in this game for 20 years I want to dispel the notion that 3D life was simpler in the past.
The content's fidelity was lower the further back in time you go but at the same time the effort artists had to invest to create that level of content was higher.
The tools were not any simpler to use; they were cruder and way less responsive due to the slower machines.

Forget about the quality of the end results; as far as enjoying the process and having yourself be the limiting factor instead of the medium, tomorrow is always a better time than yesterday in this game.
That's a cool image breakdown, I've done my 2D in a similar manner painting the luminosity separate from the colors but never quite this methodical.
The way he split it into an actual occlusion pass and shadow pass just made me smile. Gotta try and do that next time I decide to paint something substantial.
>HDRI is literall an image in a sphere. there are no lights unless you actually implement one
Stop shitting up the board with your ignorance.
it's okay anon, it's not the worst position you can be in; you know you still need to learn, but that's the first step toward getting better.

..not like this fucker here >>664498 >>664501

thank you
it's literally just a fucking map, all you do is plug that shit in in any fucking 3D program. there are no lights at that point, that's why you use a sky dome light. and for the object to receive any of the fucking map information, you would also have to map in the lighting information. again, stop being dense.
File: active1.jpg (95 KB, 1259x797)
would you look at that, no lights and everything is still lit.
I started 3D late last year. My biggest advice is to seriously invest in good tutorials/classes on the basics of Maya and understanding the key foundations and languages in the field you are interested in such as UVs, mesh, textures, etc.
Maya is the Photoshop of 3D, and regardless of your opinion of Autodesk, most 3D programs use or share said languages.
Next, get ZBrush. It's the most art-friendly program out there. It allows you to dick around making 3D shit without worrying about the bullshit. Once you are comfortable with the program, decide what your goal is, then work towards a beginning-to-end pipeline and refine it.
File: nolights.jpg (56 KB, 1078x683)
yup i really have no "fundamental knowledge":^)
>this argument about hdris
jesus christ
Okay, on a semantic level, yes, an HDRI is literally just a plain image. Well, not so plain, as it aims to capture a greater dynamic range of luminosity than regular digital photography would be able to, hence the name.
But the thing is, we're on a 3DCG board. No one here would purposely argue at such a pedantic level unless they're trolling. Everyone understands that an HDRI, in relation to 3DCG, effectively refers to the usage of an equirectangular high dynamic range image as an emissive texture for a dome light or skybox or whatever you want to call it.
>/p/ pls gtfo
that's what I've been saying. the other anons kept insisting it's something more than just a map lol. it's a special map, nothing more.
>its literally just a fucking map
>HDRI is literall an image in a sphere
Incorrect. HDRI stands for High Dynamic Range Image, and, as such, it refers only to an image encoded in a way not limited to a small range of values, such as the traditional 0-255. It's also not a map, unless you feed it to a shader which uses UV coordinates (the mapping) in order to determine which pixel of the image corresponds to a given position on the mapped geometry.

An HDRI usually becomes "an image on a sphere" only when assigned to an environment/dome light. The render engine then uses the values of this HDRI as a light source; that is, terminating rays, if they reach the sphere with the mapped HDRI, as having hit a light with the values (color and intensity) that the HDRI encodes for that hit point. In this way, an environment/dome light behaves similarly to an area light.

So, when you say that an HDRI is just an image, that you still need lights to see a scene, you are saying something quite incorrect. An HDRI mapped to an environment light is enough to light a scene, even with hard shadows, because the HDRI, set up in that way, is in itself a light.

I guess you may be thinking of lights as something that kinda bursts rays into the scene, and so it would make sense to say that an HDRI is just an image, not a light. But this is wrong, given how render engines work.
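The "terminating rays against the mapped HDRI" idea above can be sketched directly: when a ray escapes the scene, its direction gets converted to equirectangular (u, v) coordinates and the HDRI texel there is returned as the light value for that ray. A toy lookup (the exact u/v convention varies per renderer, so this one is an assumption, as is the tiny hand-built "HDRI"):

```python
import math

def env_lookup(hdri, direction):
    """Treat an equirectangular HDR image as a dome light:
    map a ray direction to (u, v) and return that texel's value."""
    x, y, z = direction
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    u = 0.5 + math.atan2(x, -z) / (2 * math.pi)   # longitude -> u
    v = 0.5 - math.asin(y) / math.pi              # latitude  -> v
    h, w = len(hdri), len(hdri[0])
    col = min(w - 1, int(u * w))
    row = min(h - 1, int(v * h))
    return hdri[row][col]

# A tiny 2x4 "HDRI": bright sky texels on top, dark ground below.
# Note the sky values exceed 1.0 -- that's the high dynamic range part.
sky, ground = (5.0, 5.0, 6.0), (0.1, 0.1, 0.1)
hdri = [[sky] * 4, [ground] * 4]
print(env_lookup(hdri, (0, 1, 0)))   # ray straight up hits the sky row
print(env_lookup(hdri, (0, -1, 0)))  # ray straight down hits the ground row
```

There is no separate light object anywhere in this sketch; the image's own values are what the escaping rays "hit", which is exactly why a dome-mapped HDRI lights a scene.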
>So, when you say that an HDRI is just an image, that you still need lights to see a scene,
man, I said you don't need lights, that's what my argument was. and you don't even need an image to have the scene lit, therefore yes, those HDRI images are just fucking maps.
I'm literally showing you right here >>664702
no lights and no map, and it's still lit. do you lack any form of reading comprehension?
What >>664707 says is true. Using HDRI images mapped to the inside of a sphere is a common method to simulate global illumination in renders.
To get a good effect going, you need a spherical panorama image taken with an HDR-capable camera and stored in a format that retains the high dynamic range.

When doing this you're using the texel data of the texture on the inside of the sphere as your light vector and light intensity value when rendering so you don't need to use any other light sources in the scene unless you want to.

Even when you know the facts of a topic inside and out it's ill advised to go on the internet and call people tards of the R.
But if you absolutely must it is vital to be highly certain that you're not speaking out of your own anus.
Even if your identity is concealed by anonymity here you will have to endure the shame next time you meet your own gaze in the mirror.
File: capture.jpg (279 KB, 1637x1023)
I'm replying to the anon claiming that an HDRI image (remember that we are talking in the context of 3D rendering) doesn't have anything to do with lighting (>>664498), that it's just "an image in a sphere" (>>664494), that an HDRI needs lights to work (>>664490, >>664699), that (apparently that's what was meant) an HDRI can only give diffuse illumination (>>664490), and also implying that there's something ridiculous or nonsensical about putting an HDRI in a dome light (>>664501).

Pic attached: a scene without lights, a plane textured with an HDRI, and two sample objects. Not meant to counter all of the points above, just the ones about an HDRI requiring lights. (And right now I lost track of who's arguing what.)
all I'm saying is HDRIs are just images/maps, not some form of light. and HDRI literally means a big-ass panoramic image, AKA HIGH DYNAMIC RANGE. it does no form of lighting, and I have already proved it with my blank canvas being lit with just a solid color. basic science, people.
It doesn't mean big, nor ass, nor panorama anon. It's not a term from computer graphics, it's a term from digital photography in reference to sensor data that stores pixel light intensity values that goes beyond the range we can display on a screen.
you just explained the dynamic part, genius. it's still a big image in a panoramic format.
lolwut, I can make you an HDRI of 10 x 10 px.
big as in file size, since they're mostly stored in lossless formats
No, resolution has nothing to do with it. You could store a 1-pixel image in an HDRI format, if you want a pixel as bright as the sun for some purpose.

Most HDRIs are not panoramas either. They're just regular raw sensor-data images from cameras, which photographers use to process and enhance the final look of an image in post.

You're in massive damage control and insulting people left and right while making a giant ass out of yourself talking about things outside of your expertise. Is this really the kinda person you wanna be just because you're anonymous on the internet anon?
>No, resolution has nothing to do with it.
i literally just said it was file size, not pixel density.
and in this case, for 3D imaging, yes, they are going to be panoramic. I love how none of you can read, and you take the littlest detail in my sentences and twist it. gotta love the internet, full of idiots.
I can assure you that the file size of a 1x1 pixel HDRI map is absolutely tiny, anon.

>faglord
retard from C3 to C7, check.

Your move.
your turn.
File: reallymakesyouthink.png (23 KB, 240x240)
Pretty straightforward and simple
lol, keep moving the goalposts. You said "big-ass panorama"; what does that have to do with lossless encoding? At this point, I'm convinced you're just a troll.
trying to comprehend what it is you think you did there, but it's the chess equivalent of Kasparov resolving a check by jumping up in a piledriver, coming down hard on the board, trying to conceal the king inside his clever ass.
I'm sorry you read it one way when it was meant another? lossless encoding means bigger file sizes, do I need to spell it out for you?
dont think too hard brainlet.
File: capture.jpg (39 KB, 556x358)
>i literally just said it was file size
>lossless encoding means bigger filesizes
No. Lossless encoding means that there is no loss of information. You may have smaller file sizes than with the original encoding even if you are using a lossless codec.
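The point about lossless encoding not implying big files is easy to demonstrate: a lossless codec recovers every byte exactly, and on redundant data the compressed result is far smaller than the original. A quick zlib round-trip (the data is a made-up redundant byte pattern):

```python
import zlib

# Highly redundant "image" data: lossless does not mean large.
data = b"\x00\x7f\xff" * 100_000   # 300,000 bytes before compression

packed = zlib.compress(data)
restored = zlib.decompress(packed)

print(len(data), len(packed))   # compressed size is a tiny fraction
assert restored == data         # ...and not a single byte was lost
```

File size depends on the data's redundancy and the codec, not on whether the codec is lossless; that's the whole point being made above.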
File: optimus.jpg (32 KB, 298x298)
>Vector Sigma.
>they don't know anything about compression rates
keep acting dumb
Feel free to prove false anything that I wrote on that post, and that shows your counterpoints to be true.
File: abxG.jpg (99 KB, 700x928)
99 KB
Holy fuck, I'm glad I bailed on the HDRI "debate" 2 days ago...
I can't believe some retards still try to argue that it's just a map unrelated to lighting when it's literally its only use. The 32bit range IS lighting information, it exists to irradiate the scene.
Git gud /3/
OP, just so we get back on topic: the new techniques, methods, and software from the last 10+ years in fact make the whole process easier and faster. It was incredibly tedious to create good, proper 3D back in the day, especially for a single individual (studios had more resources, of course).

The UIs sure are some sort of backward alien language compared to PS or AI, but unfortunately it's an integral part of the process - if you want to make the jump, there's no avoiding it.

Finally, let's not kid ourselves: 3D is probably harder than 2D as a whole because of all the technical things to know, but 2D isn't simple either when you think about it. They just have different obstacles and shortcomings, but overall I'd say that 3D has a lot of advantages over 2D, at least to help out in your illustration creation.

Basic understanding of a 3D software can allow you to block out entire scenes, create perspectives, preview lighting, etc. a lot easier than 2D can, so use it to your advantage and don't worry about being "correct" at first. The ease will come naturally if you're willing to learn more and more as you go forward.
>I can't believe some retards still try to argue that it's just a map unrelated to lighting
Thankfully, I think it's only one (1) retard. Since the condition is not contagious I hope, we're safe.
I really hope you're right, cause it's pretty basic stuff we're talking about here
This anon doesn't technically understand what he's talking about. He's wrong, but is somewhat correct if you read between the lines.

There are two ways of modeling, NURBS and poly modeling. There are multiple ways of going about poly modeling and poly modeling would be the most likely thing you would get into. He does mix styles with methods for some reason as well which is ludicrous.

Poly modeling is what you're going to be doing in 3d most likely. Think of NURBS as vectors but 3d if you want the closest comparison I can think of off the top of my head.
Well in theory all 3D is vectors, because you can scale it however you want without losing "resolution". It's not pixels or voxels or whatever. I think nurbs is more like a spline approach to modeling instead of being vertex-based, but yeah, it's not super common in most fields of work.

This other anon >>664486 indeed mixes up a bunch of techniques and it makes little sense. Also you never want ngons.. it's quads or triangles, end of story.
>Think of NURBS as vectors
they're not vectors. they're an approximation of subdivisions from a cage with points, like a lattice, which is literally what subdiv is: you're modelling with a cage. so NURBS is not a unique workflow; they used NURBS mostly with booleans. and boolean operations alone are a technique for modelling, which is why I said it's a way of modelling. NURBS and poly modelling are one and the same.

then explain why my scenes are still lit without lights, and without maps. it wouldn't change the image lighting if I added an HDRI, just the color information, and last time I checked color =/= light >>664702
File: 1466474807488.png (165 KB, 639x462)
>it wouldnt change the image lighting if i added a HDRI, just the color information

well, here's your main wrong assumption.
if that's the result you get, it means you don't know how to properly use HDRIs.
there are hundreds of tutorials online that you could watch to learn how they work... but no, just keep arguing here, mate.
also check your gamma; if your settings are wrong, you obviously won't have a proper result.
File: poly-nurbs.jpg (364 KB, 1109x717)
again explain why my scene is still lit.

and to continue this post, this is how much NURBS and poly differ in cage modelling, and you can see why boolean operations would work better on NURBS surfaces.
The biggest reason why you have so many anons arguing against you is because you're arguing on a purely conceptual level.
Yes, on paper NURBS and SubD modelling follow similar principles, but in practice they are entirely different modelling _techniques_. You can't model the same way with NURBS as with SubD. To reiterate once again, they function on similar concepts, but they require completely different techniques in application. If they were truly one and the same, we'd see plenty of waifus made with NURBS, wouldn't we?
It's the same thing with this goddamn HDRI argument. You're taking it at the conceptual level, where yes, an HDRI is just an image with an extended luminosity range beyond conventional digital means. But everyone and their goddamn dog knows that on a 3DCG board, where we discuss 3DCG and its applications, "HDRI" automatically refers to an HDRI's application as an emissive texture in a skydome to cast light on a scene, using the additional luminosity range for real _contrast_.
It's like arguing about processed meat. Yes, on the purely semantic level, processed meat is any meat that's gone through any process. A butchered chicken is processed meat, a cooked steak is processed meat, etc. But everyone fucking knows that by "processed meat" you really mean the shit that's pumped full of nitrates and whatnot to last longer/taste different. Not just any meat that goes through any process.
Learn to apply some context to your thinking.
>as an emissive texture in a skydome to cast light on a scene
that's literally not what you do. you literally just grab the HDRI, plug it into the environment map slot, and you're done. and as far as NURBS goes, I literally just told you that booleans are more preferable with them. it lets you control the surface render if you wanted to tessellate, which booleans will mess with. that's why you really only see people model NURBS objects with booleans, especially on programs that are fixed on that, like Rhino3D. not damage control tho, buddy; I know you're just as clueless as the rest.
>again explain why my scene is still lit.
This (pic related) is very common among render engines. Did you turn the relevant setting off?
>you literally just grab the hdri and plug it into the environment map slot and youre done
And woosh, you don't even understand how a HDRI applied in 3DCG contexts works. Fantastic stuff. Alright, argument's done then.
well, no shit, buddy. still, it depends on the background color: if the background color is black, there will be no light. if it's grey, it will render the scene with that. that's no different for an HDRI or any image, so again, HDRIs are not special. that's why, whenever we baked textures with no image in the background, as long as our background color was grey, the information would be rendered.

you literally just put an image in the environment map, the map will have a spherical UV texture assigned to it, and you're done, that's it. if you want to control exposure, highlights, midtones and shadows, you can. that's not exclusive to HDRIs, nor is the effect that different with them.

Yeah, it was a shit analogy, meant to convey that something made in NURBS can be considered different than something poly modeled. They both have completely different outcomes and have different means to those outcomes. Also, you can examine two models and tell which one is poly modeled, unless there was a conversion.

I think that's enough explanation; I can't really think of any other more fitting analogy. I'm pretty tired though, my bad if I provide any misinformation. I have much more of a fundamental understanding of poly modeling than NURBS.
>you literally just put in an image in the environment map
This is how you USE a HDRI, not how a HDRI WORKS. And once again, how it works in a 3DCG ENVIRONMENT. Because, like we've covered earlier, a plain ol' HDRI, that isn't being used for anything, just has additional luminosity data. You're misunderstanding the argument once again.
And look, if you really want to clear the air, let's refer to it as IBL instead. Image-based lighting. You can use any equirectangular image in IBL, but a HDRI works best simply because of its additional luminosity data allowing you to express a greater degree of light intensities.
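That "greater degree of light intensities" is easy to put numbers on: an 8-bit image clamps every value above 1.0, so a sun texel that is really 100x brighter than the sky collapses to roughly the sky's value, and the light it would cast loses its punch. A sketch of that clamping (the intensity values are invented for illustration):

```python
def to_8bit(value):
    """Encode a linear float value into a 0-255 channel (clamps at 1.0)."""
    return min(255, round(value * 255))

sun, sky = 50.0, 0.5   # assumed true linear intensities: a 100:1 ratio

# Float HDR storage keeps the ratio; 8-bit LDR storage destroys it.
print(sun / sky)                     # 100.0
print(to_8bit(sun) / to_8bit(sky))   # roughly 2, not 100, after clamping
```

This is why IBL with a clamped LDR image gives flat, shadowless lighting, while the same setup with a real HDRI produces crisp sun shadows.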
they're the same in terms of what they do, and what they do is cage modelling. you don't have to make it any more difficult than it has to be. you can adjust the shape with both of them with points, and that's the main thing to take away. obviously they have pros and cons, but those don't entirely define what they are: cage modelling. the biggest thing when you work with these is you can't sculpt NURBS, but you can with polys.
File: 1518826971886.jpg (17 KB, 319x268)
imagine being this much of a douchebag
stop arguing guys, it's pointless
>image based lighting
this is literally what I explained earlier about mapping the image information onto the skydome to control temperature. but this is an entirely different topic. all I'm saying is an HDRI is just an image, and that's not going to change. yes, IBL covers more, because you literally need a skydome for it to work.
Cry harder.
>all im saying is an hdri is just an image and thats not going to change
See the "processed meat" argument above.
An HDRI is just an image, but when applied in a 3D environment it becomes synonymous with IBL. That's what everyone else here that's been arguing against you understands and that you don't.


That's about it.
To explain why I bring up the "processed meat" argument again, imagine this.

(You)'re with Bob, and I ask a simple question:
>what's a processed meat?
>Bob: Ham, salami, pepperoni.
>(You): Any meat that's been butchered. Butchering is a process.

Let's say the conversation moved to us talking about 3D and I ask another question:
>what's a HDRI?
>Bob: A texture used to light environments.
>(You): An image with an extended luminosity range.

Yes, (You)'re technically not wrong in both cases, but (You)'re entirely missing out the context of the conversation. And on a 3DCG board, the context is immediately 3DCG unless otherwise changed.
I wasn't misunderstanding anything, I was making a simple point, but you were all being literally autistic about it.
To be fair, you're the one being autistic about it. Like I said, everyone already knows that by HDRI we mean a HDRI used in IBL, but here you are going off about how it's "just an image".
Ok I looked into it. You are the one who's wrong.
>theyre the same in terms of what they do.
Yes. But really only when you simplify it down to "in the end you have a model".
>and what they do is cage modelling
Kind of yes, but I don't think you understand what that means, as it appears you don't understand the different types of curves and their uses, and still, the end product is vastly different from a cage-modeled to a NURBS model. A good example here would be a circle: your method would provide an n-gon, NURBS would provide a mathematical circle.
>the biggest thing when you work with these is you cant sculpt nurbs, but you can with polys.
This isn't the biggest thing, but yes, it is a thing. Sculpting is a style of poly modeling, but because of the way that models are made in NURBS you couldn't sculpt, as sculpting is essentially moving polys and adding polys. I shouldn't have to explain further why that doesn't really work for a NURBS workflow.

But it all comes back to the circle. (get it)
If you make it one way you'll have an ngon, if you make it the other way you have a mathematical circle.

I would recommend reading up on NURBS some more; the way it works on a technical level is incredible, as is just how long they've been in use and proven.
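The "mathematical circle" point is the classic NURBS party trick: a rational quadratic Bezier with the middle control point weighted by sqrt(2)/2 traces an exact quarter circle, which no finite polygon can do. A sketch under those standard control points and weights (the helper name is mine):

```python
import math

def rational_quad_bezier(p0, p1, p2, w, t):
    """Evaluate a rational quadratic Bezier with weights (1, w, 1)."""
    b0 = (1 - t) ** 2
    b1 = 2 * t * (1 - t) * w
    b2 = t ** 2
    denom = b0 + b1 + b2
    x = (b0 * p0[0] + b1 * p1[0] + b2 * p2[0]) / denom
    y = (b0 * p0[1] + b1 * p1[1] + b2 * p2[1]) / denom
    return x, y

# Quarter circle from (1, 0) to (0, 1) with corner control point (1, 1)
# and middle weight sqrt(2)/2 -- the standard NURBS circular-arc setup.
w = math.sqrt(2) / 2
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    x, y = rational_quad_bezier((1, 0), (1, 1), (0, 1), w, t)
    print(f"t={t:.2f}  radius={math.hypot(x, y):.12f}")  # 1 to float precision
```

Every evaluated point sits on the unit circle exactly (up to floating point), whereas a polygon's edge midpoints always dip inside the circle; that's the practical difference the anons above are arguing about.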
HDRI and IBL are two entirely different things, though. IBL (hence the "lighting") requires you to have a skylight/skydome; an HDRI is just mapping the image onto the background in a spherical UV, and that doesn't require a light for it to work. it will still render properly. we can stop arguing now, since we're going back and forth here; neither of us is wrong.
that circle still has points for you to control; I don't know why you think that's a good example. the main point has been that you use control points as lattices for these meshes. just because NURBS calculates subdivisions depending on the object doesn't make it different from how it's used. why do you think you can toggle on NURBS display in the editable poly tab? because it's the same thing with approximating subdivisions. and again, the biggest difference is when it comes to boolean operations, which NURBS surfaces are good for.
>HDRI and IBL are two entirely different things
Yes they are.
But your explanation is wrong. A HDRI is _used_ in IBL. And HDRIs are so commonly used in IBL that people just refer to IBL as HDRI.
It's a lot like how people call tissues Kleenex or carbonated/soft drinks Coke, except that in this situation HDRI isn't a brand name. It's just become synonymous.
>hdri is just mapping the image on to the background in a spherical uv
And no, this is equirectangular mapping. A HDRI is, like we've said many times in this thread, just an image. You take an equirectangular HDRI, and you map it equirectangularly to the background, then you get what you're talking about. Which is still different from what others were talking about.
File: dervish_sph.jpg (1.63 MB, 5400x2700)
>you map it equirectangularly to the background
equirectangular is not a projection, buddy, it's an orientation. if you look online for any image to be mapped onto a sphere, you will get pic related.
Oh my god, man, yes, it's similar in practice, but no, it's a completely separate practice from poly modeling.

Just because it's a similar technique doesn't mean it results in even comparable end products. Polygon 3D was made specifically for rendering in a 3D environment; NURBS was not. NURBS allows for mathematical perfection and execution.
>NURBS allows for mathematical perfection and execution.
why the fuck do you think i keep bringing up booleans?
...that's a literal prime example of an equirectangular projection. And all these projection methods are built to be mapped onto a sphere, for fuck's sake. Mercator, Cassini, Gall-Peters, even equirectangular: they're all methods of presenting a sphere's surface as a 2D plane.
>equirectangular is not a projection buddy, its an orientation.
Just search "equirectangular projection", jesus christ. You don't even get coherent results for "equirectangular orientation".
Literally how does that make your point? Seriously, guy, you keep pretending to have an understanding, but then say it's still literally poly modeling even though it's fundamentally not?

You're almost at the point where you're going to proclaim that all 3D modeling is literally the same because it results in a 3D model, therefore you're correct in your original claim that boolean, low poly and cage modeling are the 3 ways to model. You're the HDRI guy, I think, and there's no way after all this it's just (you)s you're after. I think you are some sort of magical retard whose very existence drains the souls of others through internet contact. Please, for the love of God and all that is holy, do not procreate, and end your miserable life before you get on a plane and argue with the pilots about the type of jet fuel they're burning until they get so mentally drained they crash the plane into a fucking skyscraper to end your retardation
buddy, you're confusing terms here. mapping = projecting images onto a surface. there is cylindrical, conical, and most importantly spherical. equirectangular is how the image is oriented to be MAPPED onto a sphere; equirectangular is not a mapping/projection solution.
you're trying to sound like a smartass by saying mathematics is involved here, i mean no shit, duh. you keep insisting NURBS does it differently. it doesn't; you can convert a fucking sphere into a NURBS object and boom, you will have a cage for it ready to edit. they both do the exact same thing in terms of SUBDIVISIONS, and that's what you keep insisting NURBS does differently, that it calculates these subdivisions differently. no it doesn't, it's the exact same for subdivisions anywhere.
Alright, so I reread my earlier posts, would it please you if it instead read this:
>You take an equirectangular HDRI, and you map it SPHERICALLY to the background
I'll admit, equirectangular wasn't the word to be used there, but it sure as hell is a type of projection.

>buddy youre confusing terms here
Maybe to you I am, but you're defining your own goddamn terms here. Everyone calls it an equirectangular projection.
If you really want to play this semantics game,
>how the image is oriented
"orientation: the relative position or direction of something", as is defined by the dictionaries. So you're telling me "equirectangular" is a way to move/rotate my image so it maps correctly on a sphere? Or would you like to tell me what your definition of "orientation" is before we continue?

Don't actually kill yourself just seek help anon. Hopefully a therapist can handle your retardedness without blowing their brains out. Can you at least put on a trip so I can laugh at you outside of this thread?

Here's an excerpt from a tool meant to turn a 3d mesh into a nurbs surface

>A mesh represents 3D surfaces as a series of discrete facets, much as pixels represent an image with a series of colored points. If the facets or pixels are small enough, the image appears “smooth”. Yet, if we zoom in enough, we can still see the “pixelization” or granularity and that the object is not locally smooth and continuous.

>NURBS surfaces are mathematical representations of curves and surfaces. They are capable of representing complex free form surfaces that are inherently smooth, and they keep their smooth shape when editing. There is no pixelization or granularity as with a mesh. Thus they behave more like a real person’s face rather than a pixelized image of that same face.

>It is important to note here that NURBS can be easily converted to Meshes at any time, in the same way that you can easily take a digital image of a person’s face with a camera. So, going from Meshes to NURBS is like trying to reconstruct a person’s face from a pixelized digital image - it is a much more difficult task, and there are no quick automatic methods.

A 3D poly mesh is fundamentally different from a NURBS surface. That is why it has to be converted and, like you said, can't be sculpted.
File: 1266198709767.jpg (16 KB, 330x417)
>they both do the exact same thing in terms of SUBDIVISIONS. and thats what you keep insisting nurbs does different, it calculates these subdivisions differently. no it doesnt its the exact same for subdivisions anywhere.
File: map.jpg (37 KB, 615x427)
yes, it's a projection outside of 3D, but in 3D it's not a projection, buddy. you won't find that anywhere here.
dude, you don't model, so i get it. stop pretending you do.
>he doesn't know what a subdivision is
>he thinks it's something that's permanent to the mesh
That's because projection isn't fucking mapping, you dolt. They're different things. Projection is how you unwrap a 3D object onto a 2D plane. Mapping is how you wrap a 2D plane around a 3D object. Of course they're going to ask what kind of mapping you want, not what kind of projection you have, because there are multiple projection types available for each mapping type.
Please, feel free to explain how a surface modeled with NURBS is mathematically identical to an ideal surface represented by a discrete polygonal model subdivided n times, as n -> inf.
>he's devolved into just saying I don't know what I'm talking about instead of trying to refute it
>hurt durr you don't even 3d model!
>literally saying mapping on my image
okay buddy w/e you say
At this point, I'm having a quite healthy laugh. The guy is pure gold.
please fucking learn what subdivision is, god you are fucking annoying
I think he's actually serious too

Wait you are actually serious.
You made the claim in the first place and the guy asked you to back it up. He isn't obligated to prove you wrong the burden of proof is on your shoulders.
>he doesn't even know what his own image means
Holy fuck, they're asking you WHAT KIND OF MAPPING YOU WANT so you can MAP a PROJECTED image onto one of the three environment types provided.

If you have an
you can
it because
is a projection type meant for

Look man, tell me: is English your first language? The degree at which you're misunderstanding everything is just a bit too far.
that is literally not what it means
Okay, would you like to prove me wrong? Pull out a dictionary, spit research papers in my mouth, decimate my testicles, show me that I'm an idiot. If not, you've done nothing to help your case. Even if it turns out that I'm wrong, hell, I'd probably convince a few idiots to join me too, since I'm actually trying to explain shit. But so far all you've done is go "no u're wrong lollle XDD" at not just me but this little subdivision argument you're having with the other anons.
File: 1464473273894.gif (820 KB, 500x422)
This whole HDRI/lighting/projection/mapping thing has turned into the worst fucking shitposting exchange I've ever seen on this board, and that includes all the Blendlets jerking off to the inevitable demise of Autodesk happening in their heads

again: >>664843
File: subdivision.jpg (237 KB, 1830x654)
okay... if you ever cage modelled you would know you can get the same effects as vector-based surfaces like NURBS with OpenSubdiv, but you wouldn't know that because you don't model.
you miss read map = mapping, its not. i think English is not your first language buddy.
ment for
but cont.

standard subdivision modifiers like TurboSmooth and OpenSubdiv use
Bi-cubic uniform B-spline

while NURBS is
Non-uniform rational B-spline

notice the spline part. that means they both have control points for SUBDIVISION.
Non-uniform rational and bi-cubic uniform is what separates these two in terms of how the subdivided mesh operates; one gives you a bit more leeway with the points, the other doesn't. so again, please learn about subdivision surfaces before you make yourself look like a retard again.
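Since the fight is over what "non-uniform rational" actually buys you, here's a minimal Python sketch (function names are mine, not from any package): a Cox-de Boor B-spline evaluator extended with per-point weights. With all weights equal to 1 the rational curve collapses to the plain B-spline; raising one weight pulls the curve toward that control point, which is exactly the extra "leeway" being described.

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d = knots[i + p] - knots[i]
    if d > 0.0:
        left = (t - knots[i]) / d * bspline_basis(i, p - 1, t, knots)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0.0:
        right = (knots[i + p + 1] - t) / d * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

def rational_bspline_point(t, ctrl, weights, knots, p=3):
    """Evaluate a (possibly rational) B-spline curve at parameter t."""
    x = y = den = 0.0
    for i, ((px, py), w) in enumerate(zip(ctrl, weights)):
        b = bspline_basis(i, p, t, knots) * w
        x += b * px
        y += b * py
        den += b
    return x / den, y / den

# Clamped cubic knot vector: with unit weights this is an ordinary B-spline
knots = [0, 0, 0, 0, 1, 1, 1, 1]
ctrl = [(0, 0), (1, 2), (2, 2), (3, 0)]
plain = rational_bspline_point(0.5, ctrl, [1, 1, 1, 1], knots)   # (1.5, 1.5)
pulled = rational_bspline_point(0.5, ctrl, [1, 5, 1, 1], knots)  # drawn toward (1, 2)
```

Note the sketch only covers curve *evaluation*; a subdivision modifier never evaluates this formula at all, it refines a control cage instead, which is the distinction the thread keeps circling.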
>i think English is not your first language buddy.
Let's see:
>inconsistent capitalisation (no capitalisations at the start of sentences, English is capitalised correctly despite the other errors)
>misspelling of "misread"
>lack of apostrophe in "it's"
>inconsistent use of contractions (if you're going to contract "it is" into "its [sic]", just do the same for "is not" and turn it into "isn't")
>lack of comma after "language"
Shit, I guess I must really be poor at English then. :^)

Hey, if you're gonna sit here and not elaborate a little more, I'm just gonna throw ad hominems at you. Because you and I damn well know that while "a map" and "the action of mapping" are indeed different things, they refer to the exact same concept, which is what I was referring to.

It's fun, try it some time.
>trying to correct grammar on the internet
well at least we know who won :)
Ah wait, hold on, I think I get it now. You thought I was referring to "maps" in >>664883 as "texture maps", when I was really just using the verb form of "mapping", referring to the act of wrapping a 2D plane around a 3D object. Sure, okay, if that's the case then my bad. I should have been more clear, my dear ESL.

Can't handle a little ad hom, hm? Don't like it when your fat stubby fingers can't type accurately and you get called out for it?
take it easy dude, breathe in and out.
Wait a second my dudes... Is this the "it's ugly" guy?
Look, my dude, I'm just trying to get a rational response outta you/that guy. If you act like that, it's fine for me to act like this too, right? If we can't have big boy talk, let's have fun and games.


some more sources for your dumb little brain.
i take back what i said about OpenSubdiv acting like NURBS. NURBS smooths geometry the same way Catmull-Clark does because they're both splines. it's just that with NURBS you control how you edit the mesh differently, with points, surfaces and curves (no patches, so no extrusions, hence the nonuniform), unlike uniform splines like TurboSmooth. it's a lot more linear. in the end they both smooth the same; that's why pixar puts them in the same boat as b-splines. i was right all along, they're the same.
That source literally says you will have a different outcome with a NURBS model than with a subdiv. Also, Wikipedia is not a source, especially when you don't link the article.


>acknowledged they are different
>still says they are literally the same

>You didn't actually read this source that lays out how they aren't literally the same in the first few paragraphs
>The most common way to model complex smooth surfaces is by using a patchwork of bicubic patches such as BSplines or NURBS.

>However, while they do provide a reliable smooth limit surface definition, bi-cubic patch surfaces are limited to 2-dimensional topologies, which only describe a very small fraction of real-world shapes. This fundamental parametric limitation requires authoring tools to implement at least the following functionalities:

>smooth trimming
>seams stitching

>Both trimming and stitching need to guarantee the smoothness of the model both spatially and temporally as the model is animated. Attempting to meet these requirements introduces a lot of expensive computations and complexity.

>Subdivision surfaces on the other hand can represent arbitrary topologies, and therefore are not constrained by these difficulties.

The rest of the article is about the pros of subdivision surfaces and what can be done.

I went to sleep thinking this was over, I really didn't expect this when I got up
File: bsplinesnurbs.jpg (23 KB, 830x314)
no, they are not different. subdiv is a b-spline just like NURBS is. i just acknowledged that OpenSubdiv uses tessellation shit for games, which b-splines don't do.

>>still says they are literally the same
it wasn't me that just said that; literally pixar says that as well, pic related.
File: file.png (75 KB, 817x487)
The article you posted from Pixar says NURBS a total of one (1) time and it uses it to explain that NURBS and subdivision surfaces are not the same
what are you arguing lol? the whole article is about tessellation and not NURBS. i posted the article because a well known company like pixar puts NURBS and b-splines in the same fucking category. give me any instance where NURBS subdivisions are different. link me any fucking source, but you won't, because they don't exist.
Source related
also, you do realize that entire paragraph is talking about both b-splines and NURBS, right?

NURBS is a generalization of Bézier curves (bi-cubic patch surfaces) and b-splines, so again, when it brings up subdivision surfaces, it's referring to both NURBS and general splines.
why do i have a feeling you keep misreading everything i say. why do i also have a feeling you believe NURBS isn't a subdivision. don't tell me you actually believe NURBS is not a subdivision surface.
I asked you to provide a source to your claims, you provided a document that in plain text is counter to your claims, a screenshot of wikipedia, and a youtube video about tessellation.

You still haven't actually backed up your claims.

Are you using machine translation on your sources or something?
Here have some sources that back me up.

>Two main technologies are available to design and represent freeform surfaces: Non-Uniform Rational B-Splines (NURBS) and subdivision surfaces.

>Traditionally, these shapes have been modeled with NURBS surfaces despite the severe topological restrictions that NURBS impose. In order to move beyond these restrictions, we have recently introduced subdivision surfaces into our production environment.
That one was written by Pixar.
File: nurbs.jpg (51 KB, 933x231)
51 KB
>Both representations are built on uniform B-splines
thanks for backing up my claims, see >>664930
and pic related
and heres a better source
now get the fuck out of my board.
File: getfucked.jpg (49 KB, 964x727)
i hope you're feeling like a complete retard now.
You literally just posted a slideshow explaining the differences between subdivision surfaces and NURBS

This image still expresses a difference between the two.

The whole slideshow is about how the two practices express and calculate surfaces differently.
I don't feel like a retard but this conversation is making me wish I was as blissfully retarded as you are.
both images are literally saying they are the fucking same, the difference being NURBS has weighted control points. how dumb are you, honestly? what are you seeing that makes NURBS different from subdivision, fucking what?
how are you overlooking this, how? are you this BTFO?
It's almost like rational curves are enough to make NURBS considerably different from anything else?

You can't pretend rational curves are enough of a non-part of NURBS to make it the same as another method of 3D. IT'S THE FUCKING THING THAT MAKES IT STAND OUT AND BE ITS OWN THING IN THE FIRST PLACE

What are you arguing? Are you on your own side?
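The classic concrete example of what rational weights buy you: a rational quadratic Bézier (the degree-2 NURBS building block) can represent a circular arc *exactly*, which no polynomial B-spline, and no finitely subdivided cage, can. A small Python sketch (the function name is mine):

```python
import math

def rational_quad_bezier(t, pts, wts):
    """Evaluate a rational quadratic Bezier curve at parameter t in [0, 1]."""
    b = [(1 - t) ** 2, 2 * t * (1 - t), t * t]  # Bernstein basis
    den = sum(bi * wi for bi, wi in zip(b, wts))
    x = sum(bi * wi * p[0] for bi, wi, p in zip(b, wts, pts)) / den
    y = sum(bi * wi * p[1] for bi, wi, p in zip(b, wts, pts)) / den
    return x, y

pts = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # control points of a quarter arc
arc_wts = [1.0, math.sqrt(2) / 2, 1.0]      # middle weight cos(45 deg) -> exact circle
poly_wts = [1.0, 1.0, 1.0]                  # unit weights -> ordinary polynomial Bezier

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    x, y = rational_quad_bezier(t, pts, arc_wts)
    assert abs(x * x + y * y - 1.0) < 1e-12  # every sample lies on the unit circle

x, y = rational_quad_bezier(0.5, pts, poly_wts)
# the non-rational curve misses the circle: x*x + y*y = 1.125 here, not 1.0
```

With unit weights the denominator is constant and the rational curve degenerates to the plain polynomial one, which is why the two camps keep seeing "the same formula" while the extra weight is the whole point.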
File: methods.jpg (149 KB, 300x880)
lol you're just fuming at being wrong. my argument this whole time, from start to finish, has been that NURBS is a subdivision and it's one and the same like any other. it having weighted points doesn't make it any different from the other types of subdivision methods there are; it's still a subdivision method. so stay mad, little boy.
File: file.png (217 KB, 327x316)
I'm not going to tell you what's wrong with your image. look very closely at it.

Was this what you based your argument on in the first place?
>he thinks NURBS and NURMS operate differently
they're both still controlled by weighted points, that's why there's a tab to adjust creases and weights. those are parameters for NURMS. NURBS is restricted to specific splines and surfaces; that's why NURMS is used for modelling. wanna keep going, big boy?
You're shitposting at this point, right? You do realize that one is a method of model creation and the other is an algorithm for subdividing and smoothing an existing model?

Creating a model using a low poly cage with a subdiv is still poly modeling
>Creating a model using a low poly cage with a subdiv is still poly modeling
huh? no shit, that's called cage modelling. the algorithm is the fucking same; NURBS and NURMS are both controlled by weights, what are you smoking?
You're still arguing that fucking NURBS is no different than a subdivision surface modifier.

I have provided loads of evidence explaining that they seem similar but are different.

I will ask again, are you shitposting or retarded?
>is no different than a subdivision surface modifier.
i didn't compare it to any modifier lol, i wasn't showing you a modifier, i was showing you that NURBS has a subdivision method and it's not exclusive to NURBS. a subdivision modifier like OpenSubdiv or TurboSmooth doesn't use NURMS.

>they seem similar
they don't seem, they are.
File: file.png (151 KB, 1306x552)
NURMS is a fucking Catmull–Clark algorithm to smooth a surface by adding polys to it. If you TurboSmooth an object, the 3ds Max documentation refers to it as a NURMS object

They are similar sure but are not the same thing.

Can we end this argument with you accepting that you thought that the M was a B. Please, why do you have this inability to accept when you are wrong? HOW FUCKING AUTISTIC ARE YOU
wtf is it with the raging turbo autists in this thread?
>Can we end this argument with you accepting that you thought that the M was a B.
>he thinks this is what i was referring to this whole time
dude, you got fucking btfo and you're trying to damage control. i've already won.
test of autism between some guy and some 3rd worlder
>dont tell me you actually believe nurbs is not a subdivision surface
NURBS patches are NOT subdivision surfaces. By modeling with NURBS patches you are using parametric surfaces; by using subdivision, you are modeling approximations of a smooth limit surface.

You're constantly cherry-picking text from different sources trying to prove your preconceptions. The thing is, you don't understand what you cherry-pick, and your preconceptions are wrong. Stop it already.
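The parametric-vs-limit-surface point is easiest to see in one dimension. Chaikin corner cutting is the curve analogue of a subdivision surface: repeatedly refine a control polygon, and the result only *converges* to a smooth limit (here, the uniform quadratic B-spline of the original points), whereas a NURBS/spline patch hands you that limit in closed form from the start. A minimal Python sketch (function name is mine):

```python
def chaikin_step(pts):
    """One Chaikin corner-cutting pass on a closed control polygon.

    Each edge (P, Q) is replaced by the points 1/4 and 3/4 of the way
    along it; iterating converges to the uniform quadratic B-spline
    limit curve defined by the original polygon.
    """
    out = []
    n = len(pts)
    for i in range(n):
        (x0, y0), (x1, y1) = pts[i], pts[(i + 1) % n]
        out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return out

poly = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # square control cage
for _ in range(5):
    poly = chaikin_step(poly)
# 4 * 2**5 = 128 points approximating the smooth limit curve (a rounded square)
```

Catmull-Clark plays the same role for quad meshes: each pass is still a faceted approximation, and the smooth surface exists only as the limit, which is exactly the distinction between subdivision modeling and parametric patches.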
Somehow the HDRI "debate" turned into a NURBS vs SubD "debate". It's astounding. Certainly, this is the best thread on /3/ right now.
File: tenor.gif (3.26 MB, 480x358)
come on man.
File: 1249653520012.jpg (67 KB, 522x465)
After the way this thread went, I wonder whether >>664421 is still scared of the technical side of 3D, or, instead, is more scared of the kind of people (assumedly) doing 3D.
File: 1548696796728.jpg (74 KB, 540x540)
let's be clear, 90% of the r/3/tards here represent the lowest possible shit-tier of people doing 3D worldwide
I'm not OP, but I'm kinda in the same boat as them. I only screencapped the good information, which seemed to be in the first quarter of this thread; next thing you know, the whole thread has completely gone to shit.
Let's make an index of posts:

>>664810 (most relevant?)

Those are the ones I found to have something of relevance in answer to OP, putting aside the shit-flinging about HDRIs and NURBS/SubD.

I had a good laugh skimming over those posts again. What a wreckage of a thread.
wow thanks, adding this to my list
thanks for flagging mine as most relevant maybe
It's the same as anywhere on the internet: most of the people who have anything substantial to contribute say their piece and then move away from the topic.
Their contribution is then drowned out by the noise of hairless apes insulting each other over who is right in an attempt to elevate the imaginary e-peen.

It's like a glitch in human psychology where it's perceived as an act of weakness to ever admit you're wrong, which causes this to occur over and over again.
Juvenile intellects are doomed to keep trying to out-google one another until the day it's patched.

I too behaved in similar manners back when I was a teen and thought I knew more than I did.

This is more about the nature of people who can't yet handle having anonymity, and not representative of any community at large.
Wherever your record is tied to a lasting identity, things like this won't occur, as you can't fling shit without getting shunned.
>I too behaved in similar manners back when I was a teen and thought I knew more than I did.
nice projecting, faggot.
Won't happen today or tomorrow but the simplest way to have people in your life stop treating you like an edgelord is to cease acting like one.
>>665088 was meant for >>665086. 'scuse my fuck-up.
File: q5OL30E_d.jpg (5 KB, 250x174)
>This post
