
I expected they'd be either visually watching the changing contours of an opaque liquid, or somehow using refraction to get multiple visual angles of the same features, but…

…they're repeatedly dipping it, and using the volume displacement to reconstruct the shape. Amazing. The site is hammered right now so I can't get more details: anyone see how many dips are required to get the highest-detail models they show on the landing page?



In their presentation they have a bit more info. For the elephant model they show the results at 100, 500 and 1000 dips.

https://youtu.be/yHvyPnkuAiw?t=1m30s


Why is it that all voices sound monotone when presenting a SIGGRAPH paper?


Grad students don't have much budget for voice actors.

That being said, they still make efforts. When I was in grad school, I was routinely asked to do voice overs for videos that accompanied paper submissions by students who had strong accents when speaking English.


Not everyone has loud, presentational voices.


Well... to be fair, if you want your study to reach as many people as possible, you eventually need to improve your presentation skills, just as we do for everything else. I understand you didn't imply this, but IMHO there is very little pride in intentionally wanting to sound introverted forever. Now I just hope the parent didn't say that out of misogyny.


Voice presenting for a recording is more of a practical art than we really give credit for. Unless you do it a lot, you aren't going to be very good at it.


Which is the same as with anything else really. You need to practice to be good at something.


Yes, this is another case of the 10,000 hour rule.


Why are all the speakers at tech conferences so bad? And why do good presenters get invited to every conference on the planet?

Public speaking is a skill. It takes work and practice. Most tech folks have neither the time nor desire to do so.


It's copycat behavior. They are imitating other presentations.


Who does? The bad ones or the good ones? Genuine question.


Both.

The good ones snarf ideas from other good presenters when they see something useful.

A lot of bad ones imitate other presentations as the "minimum standard" because creating a good speech takes a LOT of work.

I'm also going to point out that a lot of tech speakers aren't native speakers of English. If you made me, a native English speaker, give a presentation in French or Chinese, I'm guaranteed to be a bad speaker.


Ah, perfect. Thanks!


It's pretty cool. Based on the video, by dipping at various angles it looks like the process is solving an inverse problem in the Radon or Hough transform domains, something similar to what is done in medical tomographic imaging.
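To make the tomography analogy concrete, here's a toy sketch (my own interpretation, not the paper's actual algorithm): each dip orientation yields slab sums of an occupancy grid along one axis, and stacking measurements from several orientations gives a linear system you can solve, Radon-transform style. The 4x4 grid and the two orientations are purely illustrative.

```python
import numpy as np

# Hypothetical occupancy grid standing in for the object's cross-section.
shape = np.array([
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
], dtype=float)

# "Dips" from two orientations: slab sums along rows and along columns.
row_sums = shape.sum(axis=1)   # horizontal dipping direction
col_sums = shape.sum(axis=0)   # vertical dipping direction

# Stack the measurements into one linear system A x = b, where x is the
# flattened occupancy grid.
n = shape.size
A = np.zeros((8, n))
for i in range(4):               # each row-sum equation
    A[i, i * 4:(i + 1) * 4] = 1
for j in range(4):               # each column-sum equation
    A[4 + j, j::4] = 1
b = np.concatenate([row_sums, col_sums])

x, *_ = np.linalg.lstsq(A, b, rcond=None)

# Two orientations are nowhere near enough to pin the shape down uniquely,
# but the least-squares estimate already matches every measurement exactly;
# each additional dip angle adds rows to A and shrinks the ambiguity.
print(np.allclose(A @ x, b))  # True
```

In real tomography you'd use many angles and a regularized or filtered-back-projection solver, but the structure of the inverse problem is the same.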


I just want to confirm that I understand what's going on here since you seem to get it. I have no background in this but am curious.

They're dipping this thing into liquid multiple times in different ways and then measuring how much the volume has changed from the initial touch of the liquid to the volume of the object as fully submersed? Also, it seems that they are first 3D printing a 3D model, dipping, and then comparing the scan to the original 3D model? Is there any chance that the types of models they're choosing are skewing the accuracy of the results? They seem to be choosing models that don't have a lot of surface texture or much fine detail and I'm assuming that's a limit for all 3D scanning tech right now?

Edit: Also, how does this thing handle 3D scanning of something like a sponge or box that might absorb the liquid? I imagine that's just not possible with this kind of scan, right?


From the video, it appears that they are dipping by 1cm, measuring volume, dipping by 1cm more, measuring volume, etc. So they end up, effectively, with a set of 1cm slices and the volume of each slice. For one orientation. Then they repeat, getting slices at a different orientation.
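If that reading is right, recovering the per-slice volumes from the instrument is just successive differencing of the cumulative displacement readings. A minimal sketch with made-up numbers (the readings below are hypothetical, not from the paper):

```python
# Cumulative displaced volume (mL) after each additional 1 cm of dipping,
# for one orientation. These values are invented for illustration.
cumulative_ml = [0.0, 4.0, 10.0, 19.0, 25.0, 27.0]

# Each 1 cm slice's volume is the difference between consecutive readings.
slice_ml = [b - a for a, b in zip(cumulative_ml, cumulative_ml[1:])]

print(slice_ml)  # one entry per slice; they sum to the total volume
```

Repeating this at many orientations gives the per-direction slab volumes that the reconstruction step then has to reconcile.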

Not only would it not work with something absorbent, but (per other comments below) does not (currently) work with shapes that trap air or catch liquid.

I think they're 3D-printing test shapes because it's the simplest way to generate interesting test structures. Otherwise they'd have to manufacture (for example) a bunch of columns plus a ball using metalwork or wood or something. Some of their models look like they were originally scanned from small sculptures -- although in this day and age, it's possible they were just created as 3D models in the first place.


Those are valid limitations, but current 3d imaging techniques I know of are either with photography, laser scanning or touch probing. All of the above also face those limitations.

My initial interest was that it would bring the cost down a lot, but they don't seem to achieve great precision, and they require a moving robot with at least 3 axes, which is unlikely to be very cheap if it has to be precise.


You don't need a robot arm at all. Consider clamping the test object inside a transparent sphere, resting on wheels that allow the sphere to be rotated. Instead of dipping, partly fill the sphere with liquid, test displacement at many orientations, then return to upright where more liquid can be added, repeat. You'd get the same measurements, just in a different order.
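The "same measurements, just in a different order" claim can be checked on a toy voxel grid (my sketch, not from the thread): dipping along a direction is equivalent to rotating the object so that direction points down and reading slab volumes at the new orientation.

```python
import numpy as np

# Hypothetical 2D occupancy grid standing in for the object.
shape = np.array([
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
], dtype=float)

# Measurement 1: dip the object top-down, reading one slab volume per level.
dipped_vertically = shape.sum(axis=0)

# Measurement 2: keep the dipping axis fixed, but rotate the object 90 deg
# (as it would be inside the sphere) and read slabs along the fixed axis.
rotated = np.rot90(shape)
dipped_after_rotation = rotated.sum(axis=1)

# Same slab volumes, just traversed in the opposite order.
print(np.allclose(dipped_after_rotation, dipped_vertically[::-1]))  # True
```

So the sphere rig collects the same set of slab constraints as the arm; only the acquisition order changes.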


The robot arm is expensive because it needs precision hardware (backlash free bearings/gearing).

Your idea also requires precision hardware to move the sphere.


Actually, besides three stepper motors (for rotation on three axes), rubber wheels to hold the sphere, and a sensor to measure the water level, I don't think you need more. What I don't know is how much water you'd need, but that can probably also be computed.

It's a better idea than dipping with a robot arm. And also faster.


Yeah... I just wondered why they were 3D printing things instead of getting normal, everyday objects that they could dip like keys, cups, pens, etc. I have a feeling it's not detailed enough for those use cases yet.


I think the main reason would be that they can compare their scans directly against the original models.

If they were to use real world objects, any comparison would have to use another scanner, which would introduce its own bias.


That seems backwards. Keys, cups, and pens are pretty trivial shapes. (Assuming you can get good angles on the cup so that it doesn't have air pockets.) More complex shapes like the weird statues they're using seem like a more robust test.


Keys and pens have fine details that would be pretty impressive to scan accurately.


This method would only work with objects that are rigid in all orientations, too - so bunches of keys are out.


They're using computed tomography techniques, which is exactly the way I would expect.


Yeah, super simple. All you do is start with a bath of water and a robot arm, then just scan the rest of the elephant!

/s


Thanks for the laugh.



