/3/ - 3DCG

Thread archived.

File: veach_scene[1].jpg (42 KB, 684x404)
Can sound be simulated just like light can in any path tracing renderer? Could we get a 3d model of an oboe to produce sound in the same way a real oboe could? Any projects attempting to do something like this?
Depends on the software.

Ansys certainly can, although it's not exactly the kind of simulation you need.
Some games do have realistic sound physics, with the Doppler effect and echo.
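The Doppler effect mentioned above has a simple closed form that games approximate. A minimal sketch, assuming velocities are measured along the line between source and listener (positive = moving toward each other); the function name and defaults are illustrative:

```python
def doppler_shift(f_source, v_source, v_listener, c=343.0):
    """Observed frequency in Hz for a moving source and listener.

    f_source  : emitted frequency (Hz)
    v_source  : speed of the source toward the listener (m/s)
    v_listener: speed of the listener toward the source (m/s)
    c         : speed of sound in air (m/s)
    """
    return f_source * (c + v_listener) / (c - v_source)

# A 440 Hz siren closing at ~34 m/s sounds noticeably sharper.
```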
File: Capture.png (39 KB, 421x392)
i like rainbow six siege's sound a lot.
that's not what that is. OP wants a procedural sound engine that can bounce sound off of surfaces in a way that produces realistic sounds. Like the way a flute is just wind passing through a tube, but the vibrations from the wind hitting the material produce music.
i think the frostbite engine is capable of simulating sounds

for example, in some areas you could hear gunfire differently than in others
This could become very complicated, depending on how much you want.

these guys did something close to what you want
>"Three-dimensional simulation of the flute using the Lattice Boltzmann Method"
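For a feel of what that paper's family of methods looks like, here is a toy D2Q9 lattice Boltzmann step (BGK collision) in numpy. This is an illustrative sketch, not the paper's solver: periodic domain, no instrument geometry, no sound extraction, just the stream-and-collide core that lets pressure waves propagate:

```python
import numpy as np

W = np.array([4/9] + [1/9]*4 + [1/36]*4)            # D2Q9 lattice weights
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])  # lattice velocities

def equilibrium(rho, ux, uy):
    """Local equilibrium distribution for density rho and velocity (ux, uy)."""
    feq = np.empty((9,) + rho.shape)
    for i, (ex, ey) in enumerate(E):
        eu = ex*ux + ey*uy
        feq[i] = W[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*(ux**2 + uy**2))
    return feq

def step(f, tau=0.6):
    """One stream + BGK-collide update; returns (f, density field)."""
    # streaming: shift each population along its lattice velocity (periodic)
    for i, (ex, ey) in enumerate(E):
        f[i] = np.roll(np.roll(f[i], ex, axis=0), ey, axis=1)
    # macroscopic fields
    rho = f.sum(axis=0)
    ux = (f * E[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * E[:, 1, None, None]).sum(axis=0) / rho
    # relax toward local equilibrium
    f += (equilibrium(rho, ux, uy) - f) / tau
    return f, rho
```

Seeding a small density bump in the middle of the grid and stepping repeatedly makes it spread outward as an acoustic pulse; a real flute simulation adds solid boundaries for the tube and a jet inflow at the embouchure.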
If you're simulating sound, you obviously want to get it in real time for it to be of any use, and that would take a lot of hardware resources.

What most game engines do is take existing samples and use various effects to position the sound in space and create reflections of the sound and so on. There might only be a few sound variations for each type of sound source, but with the engine modulating it with distance and environment you can get the impression of a larger variety of sounds.
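The sample-based approach described here can be sketched in a few lines: one recording, an inverse-distance gain for positioning, and a delayed, attenuated copy for a single wall reflection. Function name, falloff model, and absorption value are all illustrative assumptions, not any particular engine's API:

```python
import numpy as np

def position_sound(sample, sr, distance_m, wall_extra_path_m,
                   c=343.0, absorption=0.5):
    """Fake distance and one reflection for a mono sample array.

    distance_m       : direct source-to-listener distance (m)
    wall_extra_path_m: how much longer the reflected path is (m)
    absorption       : fraction of energy the wall keeps (toy value)
    """
    gain = 1.0 / max(distance_m, 1.0)          # inverse-distance falloff
    delay = int(sr * wall_extra_path_m / c)    # reflection arrives later
    out = np.zeros(len(sample) + delay)
    out[:len(sample)] += gain * sample         # direct path
    out[delay:] += gain * absorption * sample  # one attenuated echo
    return out
```

Stacking a handful of such delayed copies with different delays and gains is essentially what a cheap reverb unit does.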

What OP is asking is actually simulating the circumstances under which a sound might come to exist, such as the flow of wind through the tube of an instrument and the way it creates changes in air pressure to generate the appropriate sound.
This would probably require a beefy PC just to simulate one instrument, depending on how accurate the simulation has to be to get a sound that's anything lifelike.

Honestly I think the technology for this already exists, albeit spread across several different things. X-Plane attempts to accurately simulate wind activity, which it does pretty well, and it runs an in-depth 3D simulation on top of that without totally melting mid-to-high-end PCs. Someone would just need to figure out how to tie that wind data to a physical simulation of the instrument.
I would assume it would be faster to just alter the sound to fit its environment: use points in space that trigger an alteration of the main sound source, based on approximations, relative to where the viewer is within the scene.

This would seem to be far less taxing than simulating the ways in which objects modulate sound, but I am sure someone out there is attempting just this. However, then you would have to question how accurate the source of the sound is. Sure, let's say you have found a way to have a 3D oboe modulate sound the same way a real oboe could. But to truly do this, someone would need to accurately simulate the ways in which the human body uses an oboe to produce the sound. You don't just blow a specific amount of air into an oboe to play it correctly; the air needs to follow a certain pattern of flow created by mouth posture and the reed.

I know I'm probably reaching far off-topic though. These are just my assumptions, but I feel like someone has already done all of this before. Toyota created a robot that could play a number of pieces on a trumpet. I would imagine they gathered data on how humans use the trumpet to produce the sounds needed to play music. That said, I don't think they used this data to create real-time simulations, since it seems the robot only plays a set number of pre-defined pieces.

I think it would be easier to simulate this sort of work with the source of the sound being produced from something like a speaker instead.
Will someday Blender implement the ability to make 3D models instead of being a useless piece of garbage?
Physical modelling is a thing.
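The cheapest well-known example of physical modelling is the Karplus-Strong plucked string: a delay line stands in for the string, a noise burst for the pluck, and averaging adjacent samples for energy loss. A minimal sketch (parameters are illustrative):

```python
import random

def karplus_strong(freq_hz, seconds, sr=44100):
    """Synthesize a plucked-string tone as a list of samples in [-1, 1]."""
    n = int(sr / freq_hz)                             # delay length sets pitch
    buf = [random.uniform(-1, 1) for _ in range(n)]   # the "pluck": noise burst
    out = []
    for i in range(int(sr * seconds)):
        s = buf[i % n]
        buf[i % n] = 0.5 * (s + buf[(i + 1) % n])     # lowpass feedback = decay
        out.append(s)
    return out
```

Unlike the sample-playback approach above, this generates the tone from a model of the vibrating object itself, which is the direction OP is asking about, just for a string instead of an air column.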
>bounce sound off of surfaces in a way that produces realistic sounds
you record the sound.
you place the speaker to play the sound.
you recreate the environment you want to emulate.

>"want sound to bounce off of 3d meshes with proper reverberation."
that is not possible unless you code it to be possible.
but yes, it is possible.
alternatively, you can 3d print the model and blow on it, idk
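One standard way to "code it to be possible" is the image-source method: mirror the source across each wall to get a virtual source, and the distance to that image gives the echo's arrival time. A first-order, axis-aligned shoebox-room sketch, all names assumed for illustration:

```python
import math

def first_reflections(src, listener, room, c=343.0):
    """Echo delays (seconds) for the six walls of a shoebox room.

    src, listener: (x, y, z) positions inside the room
    room         : (Lx, Ly, Lz) room dimensions, walls at 0 and L per axis
    """
    delays = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2 * wall - src[axis]   # mirror source across the wall
            d = math.dist(img, listener)       # path length via that wall
            delays.append(d / c)
    return sorted(delays)
```

Real acoustic ray/path tracers extend this recursively to higher-order reflections against arbitrary meshes, which is exactly the light-transport analogy from the OP.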
