Say I wanted to do ghetto DIY Quixel Megascans. What kind of equipment would I need?
pic unrelated
>>572086
For a start, let's consider what you're trying to emulate. Quixel has numerous custom-built scanners around the world, each tuned to capturing different types of objects, but by and large it's simply a very accurate, automated method of photogrammetry. What they do is basically construct a large box containing in it all possible lighting, filtering and capture equipment, which can isolate a roughly 2x2 meter area from any kind of outside light source (they even do this at night just to make sure), and it automatically captures and processes all necessary images on the spot. The only real thing that separates their results from doing it the plain way is the high degree of standardization and color accuracy across all assets, and that they can do it on a dirt road outside instead of in a studio.

Even if you don't care for that degree of perfection, the basics of this type of processing still apply: you need a camera with a very wide dynamic range, multiple lights you can set up from every direction, and polarizing filters - a glass screw-on filter for the camera lens and flexible gel filters for the lights. The first phase is to light the surface you want to capture from 4 to 8 different angles and take a shot of each one; the second phase is to extract the albedo and specularity by taking two additional shots with all lights on, one regular image and one with the mentioned polarizing filters fitted. Make sure to mark the positions where all filters are "in phase", so that none of the lights are producing reflections. Polarizing filters will remove all reflections from the surface of an object, so you can produce a nice albedo map from it, as well as use the difference between it and the regular exposure to produce a specular map.

To actually process the images you shot, you need Substance Designer 6 or 2017, as it has nodes specifically designed for this, where you input the directionally-lit images and it figures out the rest.
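To make the albedo/specular split concrete, here's a rough numpy sketch of the cross-polarization idea described above. It assumes both shots are already decoded to linear-light float arrays and aligned pixel-for-pixel; the function name and the simple clipped difference are my own shorthand for illustration, not what Quixel or Substance actually runs.

```python
import numpy as np

def split_cross_polarized(regular, cross_polarized):
    """Split two aligned linear-light captures into albedo and specular.

    regular: shot with all lights on, no reflection cancellation.
    cross_polarized: same shot, but with the lens filter crossed against
    the gels on the lights so surface reflections are cancelled out.
    The cross-polarized image is the diffuse color (albedo); whatever
    light it lost compared to the regular shot is the specular part.
    """
    regular = np.asarray(regular, dtype=np.float64)
    cross_polarized = np.asarray(cross_polarized, dtype=np.float64)
    albedo = cross_polarized
    # Clip at zero: noise/misalignment can make the difference dip negative.
    specular = np.clip(regular - cross_polarized, 0.0, None)
    return albedo, specular
```

In practice you'd feed this RAW conversions, not JPEGs, since gamma-encoded pixels would make the subtraction physically meaningless.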
>>572089
Getting on to equipment, a camera with a sensor that has very high dynamic range is preferable, as photogrammetry goes to shit if you have blown highlights or crushed shadow detail. That's not a problem in a studio or with otherwise very controlled lighting, but you can't always depend on things going your way. Luckily such cameras today are nowhere near as expensive as they used to be; you want something equipped with a modern Sony sensor. This could be a Sony A7-series camera, or a Nikon D8xx, even if you have to get it second-hand. Stay away from Canon.

For lights you generally want some sort of daylight-balanced source with a high CRI rating; a poor color rendering index may cause issues with metamerism - colors shifting hue due to poor light properties. Our eyes aren't especially sensitive to this, but camera sensors very much are. You can get away with around 2-4 lights, but of course the more you have, the less work you'll have to do moving shit around after you set them up.
>>572089
Hi! Thanks for all that very interesting information. Saved it for future reference.

One thing I'm wondering is how they do all their additional maps. They offer:
>Ambient Occlusion
>Cavity
>Gloss/Smoothness
>Roughness
>Normal (Tangent-Space)
>Displacement
>Bump
>NormalBump
>Opacity
>Translucency
>Fuzz
>Brush
Is all of this calculated from geometry, color and albedo?

>What they do is basically construct a large box containing in it all possible lighting
Interesting. This would explain why their tree scans appear to be chopped off, but I'm not sure how they would deal with something like these scans: https://megascans.se/library/packs/lava-field

>custom-built scanners
It was time for me to learn electronics and optics anyway, and I'm not afraid to get my hands dirty.

>To actually process the images you shot, you need Substance Designer 6 or 2017, as it has nodes specifically designed for this, where you input the directionally-lit images and it figures out the rest.
I was more thinking about doing my own software for this, which I plan to open source together with the scans (free for non commercial use).

>you need a camera with a very wide dynamic range
I really wanted an excuse to buy one.
>>572092
For the types of lights, strobes are the most convenient as the flash happens immediately and at a high intensity, which guarantees a sharp image and dramatically reduces the effects of external light on the final image. LED lights are a nice pick for constant illumination. Don't get fluorescent lights, as they have a tendency to strobe if your exposure is faster than the electrical refresh (50/60 Hz depending on where you are), and they also shift hues a lot into green/blue. Our eyes see fluorescent light as mostly neutral, but it's actually not.

For the camera lens, you want a macro lens of some sort, as it will let you focus as close to your surface as you need, so you can capture anything from standard-size objects to tiny snippets of fabric. If you do go for a non-macro lens (particularly if you feel the angle of view is too narrow), do make sure to check reviews for two things: how sharp it is across the frame, and whether or not it has field curvature (FC). FC is the tendency of a lens to bend the field of focus away from the plane of focus, so even if the lens has sharp corner performance, a flat object will be out of focus at the edges; an extremely sharp lens with heavy FC is less useful than a softer lens with none. Macro lenses lack FC by design.

And last but not least, you'll need a solid tripod that you can extend and/or bend into any position you could need - pay particular attention to how well it handles shooting straight down, as that is what you'll be doing 99% of the time.
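On the fluorescent flicker point: the tube pulses at twice the mains frequency, so a quick rule of thumb is to keep the shutter open for at least one full light pulse so the flicker averages out. This helper is my own back-of-the-envelope arithmetic, not from any camera manual:

```python
def min_safe_exposure(mains_hz):
    """Shortest exposure (in seconds) that still averages out
    fluorescent flicker. The light pulses at twice the mains
    frequency, so the shutter should stay open for at least one
    full pulse; anything faster risks banding and hue shifts."""
    return 1.0 / (2 * mains_hz)

# 50 Hz mains -> light pulses 100 times a second -> use 1/100 s or slower
# 60 Hz mains -> 1/120 s or slower
```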
>>572093
AO is usually just calculated mathematically; there is no physically-accurate method to generate it unless you're rendering in a physically-based engine. Opacity for stuff like foliage is simple: they're photographed against a blue background and chroma-keyed in post. The other map types Substance will spit out automatically based on the captured images.

>I was more thinking about doing my own software for this, which I plan to open source together with the scans (free for non commercial use).
Unfortunately the technical aspects elude me past "plug this into a node", although I do know how to build a normal map this way manually in Photoshop. In theory it should be possible to deduce everything from the core images by processing them properly, but an academic paper is the source for that, not some random dude online.
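The blue-background keying for opacity can be sketched in a few lines of numpy. This is a deliberately crude key (real keyers do spill suppression, soft edges, etc.), and the "blue dominates by this much" threshold is an arbitrary number of mine:

```python
import numpy as np

def blue_screen_opacity(rgb, threshold=0.25):
    """Crude chroma key for foliage shot against a blue background.

    rgb: (..., 3) float array in [0, 1].
    Pixels where blue exceeds both red and green by more than
    `threshold` are treated as background (opacity 0); everything
    else is foreground (opacity 1).
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    blueness = rgb[..., 2] - np.maximum(rgb[..., 0], rgb[..., 1])
    return (blueness < threshold).astype(np.float64)
```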
>>572092
>>572094
>>572097
this
and for models Agisoft's PhotoScan is ahead by a mile
>>572093
Here's the method I used before Substance to make normal maps with: http://www.zarria.net/nrmphoto/nrmphoto.html
Pic related is some image I pulled off google that showcases what photographing a person with cross-polarization does; it removes reflections and even some light scattering, leaving only the raw diffuse color.
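For anyone going the write-your-own-software route: the standard academic name for "normals from several directionally-lit shots" is photometric stereo. Here's a minimal least-squares sketch of it - assumes grayscale linear-light images and known unit light directions, and all names are my own, not from the linked page or Substance:

```python
import numpy as np

def photometric_stereo_normals(images, light_dirs):
    """Recover per-pixel normals and albedo from directionally-lit shots.

    images: (K, H, W) grayscale linear-light captures, one per light.
    light_dirs: (K, 3) unit vectors pointing from the surface toward
    each light.
    Lambertian model: intensity I = L @ (albedo * n), solved per pixel
    in the least-squares sense (needs K >= 3 non-coplanar lights).
    """
    K, H, W = images.shape
    L = np.asarray(light_dirs, dtype=np.float64)   # (K, 3)
    I = images.reshape(K, -1)                      # (K, H*W)
    G = np.linalg.lstsq(L, I, rcond=None)[0]       # (3, H*W) = albedo * n
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)         # unit-length normals
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)
```

The zarria.net method is essentially a hand-built Photoshop approximation of this; doing it numerically gets you the albedo for free as the length of the recovered vector.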
>>572102
holy shit, that's fucking cool
>>572093
a used a6000 or a6300 would be my choice - high res, top class APS-C image quality, and the E mount has a very short flange distance so you can adapt basically any lens ever made to it
and they're way cheaper than the full frames