I expected they'd be either visually watching the changing contours of an opaque liquid, or somehow using refraction to get multiple visual angles of the same features, but…
…they're repeatedly dipping it, and using the volume displacement to reconstruct the shape. Amazing. The site is hammered right now so I can't get more details: anyone see how many dips are required to get the highest-detail models they show on the landing page?
Grad students don't have much budget for voice actors.
That said, they still make an effort. When I was in grad school, I was routinely asked to do voice-overs for videos accompanying paper submissions by students who had strong accents when speaking English.
Well... to be fair, if you want to make sure your study reaches as many people as possible, you eventually need to improve your presentation skills, like we do for everything else. I understand you didn't imply this, but IMHO there is very little pride in remaining an introverted speaker forever, intentionally. Now I just hope the parent didn't say that out of misogyny.
Voice presenting for a recording is more of a practical art than we give it credit for. Unless you do it a lot, you aren't going to be very good at it.
The good ones snarf ideas from other good presenters when they see something useful.
A lot of bad ones imitate other presentations as the "minimum standard" because creating a good speech takes a LOT of work.
I'm also going to point out that a lot of tech speakers aren't native speakers of English. If you made me, a native English speaker, give a presentation in French or Chinese, I'm guaranteed to be a bad speaker.
It's pretty cool. Based on the video, by dipping at various angles it looks like the process is solving an inverse problem in the Radon or Hough transform domains, something similar to what is done in medical tomographic imaging.
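To make the tomography analogy concrete, here's a toy sketch (mine, not the paper's actual algorithm): for a 2D binary shape, the submerged volume recorded as the waterline advances is the cumulative sum of cross-sectional areas, and repeating this after rotating the shape gives projections at several angles, much like the sinogram used in tomographic reconstruction.

```python
import numpy as np

# A small binary "object": a 4x2 rectangle on a 6x6 grid.
shape = np.zeros((6, 6), dtype=int)
shape[1:5, 2:4] = 1

def dip_profile(img):
    """Cumulative submerged area as the shape is lowered one row at a time."""
    areas = img.sum(axis=1)   # cross-sectional area of each 1-unit slice
    return np.cumsum(areas)   # reading after each successive dip

p0 = dip_profile(shape)             # dipped "upright"
p90 = dip_profile(np.rot90(shape))  # dipped after a 90-degree rotation

print(p0)   # [0 2 4 6 8 8]
print(p90)  # [0 0 4 8 8 8]
```

Each profile alone only constrains the slice areas along one axis; combining profiles from many orientations is what lets an inverse solver pin down the actual shape.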
I just want to confirm that I understand what's going on here since you seem to get it. I have no background in this but am curious.
They're dipping this thing into liquid multiple times in different orientations, and then measuring how much the volume changes from the initial touch of the liquid until the object is fully submerged? Also, it seems that they first 3D print a model, dip it, and then compare the scan to the original 3D model? Is there any chance that the types of models they're choosing are skewing the accuracy of the results? They seem to be choosing models without much surface texture or fine detail, and I'm assuming that's a limit for all 3D scanning tech right now?
Edit: Also, how does this thing handle 3D scanning of something like a sponge or box that might absorb the liquid? I imagine that's just not possible with this kind of scan, right?
From the video, it appears that they are dipping by 1cm, measuring volume, dipping by 1cm more, measuring volume, etc. So they end up, effectively, with a set of 1cm slices and the volume of each slice. For one orientation. Then they repeat, getting slices at a different orientation.
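That bookkeeping is just successive differences of the cumulative readings. A minimal sketch with hypothetical sensor values (assuming the sensor reports total displaced volume after each 1 cm dip):

```python
# Hypothetical cumulative displacement readings (cm^3), one per 1 cm dip.
readings = [0.0, 3.2, 7.5, 12.1, 14.0, 14.6]

# The volume of each 1 cm slice is the difference between consecutive
# cumulative readings (rounded to tame floating-point noise).
slice_volumes = [round(b - a, 1) for a, b in zip(readings, readings[1:])]
print(slice_volumes)  # [3.2, 4.3, 4.6, 1.9, 0.6]
```

The slice volumes from one orientation don't determine the shape on their own; that's why they repeat the whole sequence at other orientations.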
Not only would it not work with something absorbent, but (per other comments below) it also doesn't (currently) work with shapes that trap air or catch liquid.
I think they're 3D-printing test shapes because it's the simplest way to generate interesting test structures. Otherwise they'd have to manufacture (for example) a bunch of columns plus a ball using metalwork or wood or something. Some of their models look like they were originally scanned from small sculptures -- although in this day and age, it's possible they were just created as 3D models in the first place.
Those are valid limitations, but the current 3D imaging techniques I know of (photography, laser scanning, touch probing) all face those same limitations.
My initial interest was that it could bring the cost down a lot, but the precision doesn't seem great, and it requires a moving robot with at least 3 axes, which is unlikely to be very cheap if built to any precision.
You don't need a robot arm at all. Consider clamping the test object inside a transparent sphere, resting on wheels that allow the sphere to be rotated. Instead of dipping, partly fill the sphere with liquid, test displacement at many orientations, then return to upright where more liquid can be added, repeat. You'd get the same measurements, just in a different order.
Actually, besides 3 stepper motors (for rotation about three axes), rubber wheels to hold the sphere, and a sensor to measure the water level, I don't think you need anything more. What I don't know is how much water you'd need; that can probably also be computed.
It's a better idea than dipping with a robot arm. And also faster.
Yeah... I just wondered why they were 3D printing things instead of getting normal, everyday objects that they could dip like keys, cups, pens, etc. I have a feeling it's not detailed enough for those use cases yet.
That seems backwards. Keys, cups, and pens are pretty trivial shapes. (Assuming you can get good angles on the cup so that it doesn't have air pockets.) More complex shapes like the weird statues they're using seem like a more robust test.