Trying to use photogrammetry for vidya but my meshes don't look so hot right now.
I want to create environments, so I walk around the outside of a building taking pictures of the opposite/adjacent sides. I go a full 360 degrees to capture the whole environment, yet it still doesn't look good.
Pls help /3/
123D Catch isn't made for that, plus the topology is absolute shit.
learn to model
what the fuck are you doing stop
You can't model the entire universe. Anon could easily be doing something AR/VR related where modeling each individual environment quickly becomes an impossibly complex task.
>You can't model the entire universe
No Man's Sky says hello :)
I think the best thing to do would probably be to make 123D Catch scans of individual things like walls and the ground.
But No Man's Sky is procedurally generated.
The actual universe, not a human-created one.
Try Agisoft, it's good.
Are you legitimately a retard?
If you've got a rebuttal, make it. Otherwise don't waste your time.
123D Catch is made for small stuff to share with people on NSAbook.
Get VisualSFM, it's scale-independent (you can do microscopic shit and planets). It does structure from motion (pics of the same thing from slightly different angles).
Getting it into something like 3ds Max is a bit tricky, but I used: pics > VisualSFM > MeshLab (clean the point cloud) > Autodesk ReCap (convert to Autodesk's botnet format that Max supports) > 3ds Max.
This is definitely the best way to scan things. I did some scans with a Kinect for 360 and the Kinect SDK for PC, but that only works at the scales the Kinect is built for. It does give realtime results, though.
It's shit. I tried it and like 4 other paid photogrammetry programs and they were all shit.
VisualSFM is the best one.
forgot my pic
How many photos should I take per VisualSFM project?
123D Catch isn't that good for anything except maybe 3D reference.
I'd be wasting my time trying to help, hence my asking if you are legitimately retarded.
Which, btw, is confirmed.
Try 3DF Zephyr
It's definitely the best. Tried it a few times.
VisualSFM is also good, but it's much more complex, it being open source.
ANY photogrammetry software WON'T give you a 1:1 replica of the subject (laser scans would be much closer to that), and you'll still need to polish the result a bit afterwards.
If you need it for a GAME, it will be good enough. EVEN a semi-good 3D model with a good texture will look GREAT from a distance, and 3DF Zephyr will do great at that.
You know why they all seem shit? Because you need good photos to begin with: high-res, and preferably shot in good lighting conditions.
How dumb are you to try it on a motherboard of some sort? It's too complex, with tiny details and things.
$3,200 for the one with the proper features? Are they paying you?
Try it. The trial is the full version, BUT you can't save.
I don't care, pirate it ffs.
how many pictures do you need for a passable result?
Can you just take a video of something spinning with a high framerate and split each frame to use as an image?
>how many pictures do you need for a passable result?
10-40 depending on lighting, object, and your camera
>Can you just take a video of something spinning with a high framerate and split each frame to use as an image?
you could do that, but 123D Catch already takes way too long for an object composed of 20 images. Higher resolution and a larger number of frames would, needless to say, make processing even slower.
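If you do go the video route, the trick is to sample only a few dozen frames instead of feeding in hundreds of near-duplicates. One hedged way to do that is ffmpeg's `fps` video filter; the helper below is a hypothetical sketch (its name and the file paths are made up) that just computes the sampling rate for a target still count:

```python
# Sketch: pick an ffmpeg sampling rate so a spinning-object video yields a
# manageable number of stills instead of hundreds of near-duplicate frames.
# build_ffmpeg_cmd and its arguments are illustrative, not from the thread.

def build_ffmpeg_cmd(video: str, duration_s: float, target_frames: int) -> str:
    fps = target_frames / duration_s  # stills per second of video to keep
    # ffmpeg's fps filter drops frames to hit this rate
    return f"ffmpeg -i {video} -vf fps={fps:.3f} frames/%04d.jpg"

# 30 stills from a 20-second turntable clip:
print(build_ffmpeg_cmd("spin.mp4", 20.0, 30))
# -> ffmpeg -i spin.mp4 -vf fps=1.500 frames/%04d.jpg
```

Aim the target count at the same 10-40 range as regular photos; the solver doesn't care whether the images came from a camera or a video, only that they overlap and are sharp.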
>You can't model the entire universe
my sperm says you're wrong.
You're not supposed to use photogrammetry to recreate entire environments.
You use photogrammetry to recreate individual objects, be they blades of grass, large rocks, or whole trees, and then place them intelligently around your environment to recreate it.
Enjoy actual professionals using photogrammetry in a serious way