/3/ - 3DCG

  • There are 7 posters in this thread.


File: imageQuer332.jpg (1.02 MB, 1063x1594)
I'm studying architecture and the past semester I was working with robots and used Rhino Grasshopper to basically turn a robot into a 3D printer extruding sodium acetate (also known as hot ice).
The result is pic related and can also be seen in this video:

When mixing sodium acetate with water, you can create a supersaturated solution that starts to crystallize very quickly as soon as it is triggered by something. This allowed us to create this additive process: the liquid drips down, and each drop immediately crystallizes and hardens upon hitting a surface.

The next step of this project is to get a digital 3D model of our results and explore what kind of architecture it might be. It's obviously very experimental and does not have much to do with architecture in the traditional sense.
Anyway, as you can see, the objects we created have very intricate surfaces with a whole lot of detail, which is what makes them so interesting. They are also pure white, but over time turn to a more mushy, greyish color, as the sodium acetate absorbs some of the water in the air. Both these characteristics make it very hard to capture the objects with photogrammetry. I have tried Autodesk's 123D Catch, but the meshes and textures it put out were full of holes and not nearly accurate enough.
Now I can try and use a better camera and the 123D desktop app instead of the smartphone app, but I also want to explore the possibility of simply animating or simulating the process.
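As a very first stab at the simulation idea, the drop-by-drop deposition could be sketched as a height field that accumulates where drops land. This is a toy illustration in plain Python — the grid size, drop count, and scatter values are made up for the example, not measured from the actual material:

```python
import random

def simulate_drips(width=20, depth=20, n_drops=500, drop_height=1.0, seed=1):
    """Toy additive-deposition sketch: drops scatter around a nozzle
    position and stick to a height field where they land, loosely
    mimicking drop-by-drop crystallization. All parameters are
    illustrative assumptions."""
    random.seed(seed)
    height = [[0.0] * width for _ in range(depth)]
    cx, cy = width // 2, depth // 2
    for _ in range(n_drops):
        # drops land near the nozzle with some Gaussian spread
        x = min(width - 1, max(0, cx + int(random.gauss(0, 2))))
        y = min(depth - 1, max(0, cy + int(random.gauss(0, 2))))
        height[y][x] += drop_height  # the drop crystallizes where it hits
    return height
```

A real version would also need drops to slide off steep slopes and to follow the robot path instead of a fixed nozzle, but this shows the basic accretion loop.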

File: imageQuer117.jpg (1.01 MB, 1594x1063)


So I come here to ask how you would go about that. I know my way around Rhino, and I have very basic knowledge of Maya, but I never really did any animation or simulation. I tried modeling a 3D model similar to our results with Grasshopper, and it sort of works, but it's only a very rough approximation and does not look as natural.
Also, it would be really cool to be able to replicate the entire process in a 3D software, and eventually adapt it to virtually 3D print even more objects without having to use actual materials.

So how would you go about this? What software would you recommend? Is this even possible to achieve in 1 or 2 weeks for someone pretty new to animation or simulation?
File: ParallelSteps[1].jpg (66 KB, 296x600)

If you are willing to sacrifice your prints you might be able to use photogrammetry to capture them. Paint or spray them with something to make it easier for the cameras to pick up on the geometry. Spray paint of some sort (test it to see how it reacts) or a powdered dye you can rub over the prints to "dirty" them up. It looks like you have access to some pretty decent machines, so ask the techs who deal with 3D scanning what the best option is.

I think that since you're already working in Grasshopper to produce the prints you should stay within it. Check out this thread, it might be helpful.

Also start a thread on the Grasshopper forums asking for assistance. There are plenty of very clever people there who will probably help you out.

My guess is that you might need to use agent-based techniques, but I'm not sure. Would you mind uploading the Grasshopper definition you tried so I can take a look?

Either way looks amazing, great work!

I'm very willing to sacrifice the prints, they kinda erode over time anyway and we have about 15kg of the stuff left and can easily make more.

I guess it might be possible to spraypaint them with non-water-based paint, tho some kind of pigment airbrush would probably work better. Good idea!

I will make a thread on the GH forums tomorrow and link it here. I can then upload the definitions as well.
But this was the first time I worked with GH, so I'm a real noob.
The robot paths are more or less planar curves, so there is no 3D model or shape prior to printing that one could go off of. So it would be very cool to be able to simulate the process.
But for modeling a finished print in GH I simply drew some curves in rhino, divided them in GH and set every point to be the base plane of a mesh sphere with a random radius.
I got a very rough model that way but couldn't figure out how to boolean the meshes into one single mesh, and it's probably the worst and most inefficient way of modeling such a shape.
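The divide-and-sphere approach described above can be sketched without Rhino at all. Here's a minimal plain-Python version: a helix stands in for the robot path (an assumption for the example), the curve is divided into points, and each point gets a randomly sized sphere:

```python
import math
import random

def sphere_chain(n=40, radius_range=(0.5, 1.5), seed=7):
    """Sketch of the approach above: divide a curve into points and
    attach a randomly sized sphere to each. The helix path and the
    radius range are illustrative stand-ins, not the real print path."""
    random.seed(seed)
    spheres = []
    for i in range(n):
        t = i / (n - 1)  # normalized curve parameter, 0..1
        # a simple helix as a placeholder for the printed path
        x, y, z = math.cos(6 * math.pi * t), math.sin(6 * math.pi * t), 5 * t
        r = random.uniform(*radius_range)
        spheres.append(((x, y, z), r))
    return spheres
```

In Grasshopper the same logic is a Divide Curve component feeding sphere centers, with a random-number list driving the radii.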
File: 20161211_215454.jpg (184 KB, 1280x720)
Also your pic looks very interesting, the left part of the shapes especially. Looks very similar to the layering you can see in our prints.
File: wrgwgw.jpg (276 KB, 600x600)

You're probably on the right track with the division of the curve and the spheres. Boolean operations on meshes tend to be messy and hard to control. Stick to the standard sphere and then convert to mesh later in the process. Easier to work with, although it may be slower. Using a spheroid shape also might be better than a regular sphere.

Another way may be to generate a series of closed profile curves around your initial paths, then loft them together. The end conditions will probably not be great though.

Pic is from the thread I linked, thought you might find it useful.
Sorry, not the one I just posted, the previous one. This one is from some sort of agent-based meshing thing I think.
File: 20161207_191715.jpg (278 KB, 1280x720)
The threads are really fucked to view on mobile, gotta look through them on my pc later. But I did see some great ideas already.
Lofting might actually work pretty well, especially considering the more horizontal parts of the prints and the fact that the drops are not really spheres when they crystallize. I will try that later, thanks!

The agent based meshing kinda looks like pic related. I've never heard of it but I will read up on it later.

When 3d modeling architecture you never really learn about meshes or more advanced modeling, you usually get by with simple polygons.. sad
File: screen1.png (476 KB, 1920x1080)
So this is what I've been trying, except I used mesh spheres and the boolean didn't work on those for some reason.
Using polys now and it seems to work for these simple shapes. Now going to add some offset variations to the points and also try to replace the spheres with something that looks more like a flat drop, and I will also try the loft solution.

I'm also reading through the threads you linked and I'm very intrigued to try that out. Looks like a much more sophisticated and versatile solution.
File: loft.jpg (112 KB, 971x750)
I didn't get around to making a thread on GH forums yet, because I was tied up with work.
However, simply dividing a curve, placing circles at the points, adding some offset and lofting them seems to come pretty close and, most importantly, is way faster than boolean operations, since the loft ends up being just one surface.
Offsetting the surface by a small amount made the result look even more realistic, since it got rid of the sharp edges appearing here and there. Turning it into a mesh and smoothing it a bit got rid of a few weird-looking overlaps.
Pretty happy with this.
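The circles-along-a-path loft above can be sketched as a simple ring mesh: one profile circle per station with a small random radius offset, joined by quads. This plain-Python version uses a straight vertical path and made-up sizes purely for illustration:

```python
import math
import random

def loft_rings(n_rings=10, segs=12, base_radius=1.0, jitter=0.3, seed=3):
    """Rough stand-in for the loft described above: circles stacked
    along a vertical path, each with a random radius offset, connected
    by quad faces. All dimensions are illustrative assumptions."""
    random.seed(seed)
    verts, faces = [], []
    for ring in range(n_rings):
        r = base_radius + random.uniform(-jitter, jitter)  # per-ring offset
        z = float(ring)
        for s in range(segs):
            a = 2 * math.pi * s / segs
            verts.append((r * math.cos(a), r * math.sin(a), z))
    # connect consecutive rings with quads, wrapping around each ring
    for ring in range(n_rings - 1):
        for s in range(segs):
            a0 = ring * segs + s
            a1 = ring * segs + (s + 1) % segs
            faces.append((a0, a1, a1 + segs, a0 + segs))
    return verts, faces
```

Because every ring has the same segment count and a shared start angle, the "loft" stays aligned and produces a single clean strip of quads, which is why this beats booleaning spheres.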

Next step will be to try and deform the circles themselves and figure out a way to scatter some random "drops" or details along the surface. No idea how to do that tho, so any input would be great.
File: method 1.jpg (303 KB, 2374x1026)

Looks good so far.

Deforming the circles is one option, there's a custom XYZ scale component which might do it. Another way is to generate closed curves from three or four points using the interpolate or nurbs component. Comes out looking more like a droplet.

I did a quick try of a method you might find useful. You generate random circles, then split each circle into three parts. Randomly place points on two of the parts, then make a curve through the first division point and the two random points for a teardrop-shaped curve.

The reason you keep that first point is so all the curves are aligned properly when you loft. If they are not, you get a messy twisting loft that can have self-intersecting parts. This is part of the reason why just generating three or four random points on each circle isn't necessarily the best solution. Try it and you'll see.

Also, you probably want more control over the shape of the curves, and the random points could end up placed very close together, making a very small curve or producing very sharp shapes.
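The point layout for this method can be sketched in a few lines. A possible interpretation in plain Python: one fixed division point at angle 0 (so successive profiles stay aligned for the loft), plus one random point on each of the other two thirds of the circle. Keeping the random points away from the arc ends is an assumption added here to avoid the very-sharp-shape problem mentioned above:

```python
import math
import random

def teardrop_points(radius=1.0, seed=5):
    """Sketch of the teardrop-profile method: split a circle into three
    equal arcs, keep the first division point fixed for loft alignment,
    place a random point on each of the other two arcs. The 25%-75%
    clamping of the random positions is an illustrative choice."""
    random.seed(seed)
    third = 2 * math.pi / 3
    angles = [0.0]  # fixed alignment point shared by every profile
    # random point on the second arc, kept away from the arc ends
    angles.append(random.uniform(third * 0.25, third * 0.75))
    # random point on the third arc, same clamping
    angles.append(random.uniform(third * 1.25, third * 1.75))
    return [(radius * math.cos(a), radius * math.sin(a)) for a in angles]
```

An interpolated closed curve through these three points per profile would then feed the loft, with the shared angle-0 point preventing the twisting described above.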

There's another way involving moving points which I'll test and post later. Pic shows how to do this one, apologies for the shitty explanation sketches.
File: method 2.jpg (427 KB, 2597x1111)
This is the second method, which moves points to make the curves instead. Probably offers more control. Hope you find this helpful.
File: section & view.jpg (457 KB, 1920x1080)
Wow, thanks a lot! This is really useful especially for the more horizontal parts of our structures. I saved your instructions and will play around with the script, but I'm very busy with work so unfortunately it will have to wait till the weekend.

Got some feedback today tho and they really liked the model we made with the lofting script, so thank you again, and we're def on the right track.
It looks like we will turn our objects into some sort of Mars colony megastructure or something of the sort. Probably gonna set up a grid of towers connected by horizontal bridges, and do it at a huge scale so that one tower will be around 2km tall.

Question is if we kinda pre-produce the individual parts using the scripts and then simply bool them together in Rhino, or if we can manage to combine them into one script and bake a single, finished polysurface. Would be great to get a "finished" model that you can change and adapt to different topology on the fly by simply changing the curve length or point distribution in GH. With that we could then also feed the curves back into the robot script and have the robot build an actual model on some Martian landscape.. I will know more on the weekend.

Here's a section and view of the tower I made yesterday. Need to add some people for scale tho.
Sounds good. I can imagine some really nice drawings and images coming out of this. I actually enjoy helping people with Grasshopper, it helps keep me sharp and makes me think.

Bool-ing each piece together in Rhino could work, but you'll need to be mindful of how they join together. There could be some pretty weird, ugly joints produced. You might need to develop a connection definition to resolve them properly. Producing a single polysurface from multiple input curves would be very difficult, especially if the topology of the curves is complex. Fortunately, meshes are better at dealing with these sorts of situations. I would suggest that you try to bool the lofted parts together into a single polysurface (make sure each loft is closed), convert the resulting polysurface into a mesh, and apply smoothing to reduce discontinuities. The Weaverbird plugin has some good subdivision components. Starling is another mesh plugin which could be useful, but I think it is more in-depth / complex; I haven't used it much.
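For intuition on what that smoothing step does, here is a minimal Laplacian-style smoothing sketch in plain Python — each vertex moves part-way toward the average of its neighbors. This is a generic illustration of the idea, not Weaverbird's actual subdivision algorithm:

```python
def laplacian_smooth(verts, neighbors, iterations=1, factor=0.5):
    """Minimal smoothing sketch: each vertex moves `factor` of the way
    toward the centroid of its neighbors. `neighbors[i]` holds the
    vertex indices adjacent to vertex i. Vertices with no neighbors
    (e.g. pinned boundaries in this toy setup) stay fixed."""
    verts = [list(v) for v in verts]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            nbrs = neighbors[i]
            if not nbrs:
                new.append(v)  # leave isolated/pinned vertices alone
                continue
            avg = [sum(verts[j][k] for j in nbrs) / len(nbrs) for k in range(3)]
            new.append([v[k] + factor * (avg[k] - v[k]) for k in range(3)])
        verts = new
    return verts
```

On a booled mesh this pulls the sharp seam vertices toward their surroundings, which is exactly why it helps hide the ugly joints between lofted parts.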

One thing I would say is that careful analysis of the actual material behavior is key when dealing with these kinds of projects. If you look at work coming out of people like Achim Menges, Theodore Spyropoulos and the AADRL, all of the best stuff relies on understanding material behavior and process and then translating those into digital processes, rather than focusing on form.
File: Shot 001_1.jpg (268 KB, 1920x1080)
Hey, sorry it took so long to answer, had a lot of work to do. I was able to use grasshopper to make it a bit quicker tho.

I am trying out your scripts right now and they work great. Don't have anything to show yet tho, so here's a small mockup of what we might go for, using the model I already had. Gonna look a lot better with the randomized curves, and we're gonna build a whole "city" out of these structures, going off the grid we get from the material properties. The material can bridge about 7-8 cm before collapsing, which is pretty impressive for that size, and we get a sort of triangular grid to set up our "city".
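The triangular grid implied by that bridging limit is easy to generate: staggered rows spaced so every neighboring tower pair is exactly one bridge-span apart. A plain-Python sketch (row/column counts are arbitrary illustration values; `span` would be the 7-8 cm limit at model scale, or the scaled-up equivalent for the 2 km towers):

```python
import math

def triangular_grid(span, rows=4, cols=4):
    """Tower positions on a triangular (equilateral) grid: alternate
    rows are offset by half a span, and rows are spaced by the height
    of an equilateral triangle, so each tower's nearest neighbors are
    exactly `span` away. Counts are illustrative."""
    row_height = span * math.sqrt(3) / 2  # equilateral triangle height
    points = []
    for r in range(rows):
        x_off = (span / 2) if r % 2 else 0.0  # stagger alternate rows
        for c in range(cols):
            points.append((c * span + x_off, r * row_height))
    return points
```

Feeding these points into the tower script as base planes would give the "city" layout while guaranteeing every bridge stays within the material's span limit.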

We're actually looking into the work you mentioned, the same was suggested by our professor. Also looking into Metabolism and a couple of Japanese architects/artists from the 70s, where these kinds of utopian ideas were explored a lot. Elon Musk and all the talk today of space travel and Mars colonization is a good base to go off as well.

The most difficult part for us right now is to come up with a way these structures could actually be realized at such a large scale irl. It does not have to be current tech, but it should be somewhat believable sci-fi wise. Our current angle is that large spacecraft could travel to planets and use some local material that is common in most places to construct these structures as some sort of hub or harbor for spaceships. Then, when the humans arrive, they can dock their ships onto our structures, where they find a habitable atmosphere inside.
You're def right in that we gotta come at it more from a material angle tho.
As for the booling of the individual surfaces, it does not have to work perfectly (would be cool tho). We're probably going to make some renderings in Twinmotion, so it won't matter if some of the meshes or surfaces intersect. Lots of photoshop will also help in concealing any ... errors.

The whole modeling in grasshopper is mainly for us to be able to accurately explore what we want the robot to build in the end, and to get renderings out of it.
Thanks to your help, it is now accurate enough that we can get a very good preview and set up the robot path and script accordingly, so we can build a nice physical model. Gonna upload some pics and a timelapse vid of it once we're done, which will be in the middle of February.
