Hacker News
3D scanning by dipping into a liquid (sdu.edu.cn)
350 points by jakobegger on July 11, 2017 | 117 comments


I expected they'd be either visually watching the changing contours of an opaque liquid, or somehow using refraction to get multiple visual angles of the same features, but…

…they're repeatedly dipping it, and using the volume displacement to reconstruct the shape. Amazing. The site is hammered right now so I can't get more details: anyone see how many dips are required to get the highest-detail models they show on the landing page?


In their presentation they have a bit more info. For the elephant model they show the results at 100, 500 and 1000 dips.

https://youtu.be/yHvyPnkuAiw?t=1m30s


Why is it that all voices sound monotone when presenting a SIGGRAPH paper?


Grad students don't have much budget for voice actors.

That being said, they still make efforts. When I was in grad school, I was routinely asked to do voice overs for videos that accompanied paper submissions by students who had strong accents when speaking English.


Not everyone has loud, presentational voices.


Well... to be fair, if you want your study to reach as many people as possible, you eventually need to improve your presentation skills, like we do for everything else. I understand you didn't imply this, but IMHO there is very little pride in intentionally wanting to sound introverted forever. Now I just hope the parent didn't say that due to misogyny.


Voice presenting for a recording is more of a practical art than we really give credit for. Unless you do it a lot, you aren't going to be very good at it.


Which is the same as with anything else really. You need to practice to be good at something.


Yes, this is another case of the 10,000 hour rule.


Why are all the speakers at tech conferences so bad? And why do good presenters get invited to every conference on the planet?

Public speaking is a skill. It takes work and practice. Most tech folks have neither the time nor desire to do so.


It's copycat behavior. They are imitating other presentations.


Who does? The bad ones or the good ones? Genuine question.


Both.

The good ones snarf ideas from other good presenters when they see something useful.

A lot of bad ones imitate other presentations as the "minimum standard" because creating a good speech takes a LOT of work.

I'm also going to point out that a lot of tech speakers aren't native speakers of English. If you made me, a native English speaker, give a presentation in French or Chinese, I'm guaranteed to be a bad speaker.


Ah, perfect. Thanks!


It's pretty cool. Based on the video, by dipping at various angles it looks like the process is solving an inverse problem in the Radon or Hough transform domains, something similar to what is done in medical tomographic imaging.


I just want to confirm that I understand what's going on here since you seem to get it. I have no background in this but am curious.

They're dipping this thing into liquid multiple times in different ways and then measuring how much the volume has changed from the initial touch of the liquid to the volume of the object as fully submersed? Also, it seems that they are first 3D printing a 3D model, dipping, and then comparing the scan to the original 3D model? Is there any chance that the types of models they're choosing are skewing the accuracy of the results? They seem to be choosing models that don't have a lot of surface texture or much fine detail and I'm assuming that's a limit for all 3D scanning tech right now?

Edit: Also, how does this thing handle 3D scanning of something like a sponge or box that might absorb the liquid? I imagine that's just not possible with this kind of scan, right?


From the video, it appears that they are dipping by 1cm, measuring volume, dipping by 1cm more, measuring volume, etc. So they end up, effectively, with a set of 1cm slices and the volume of each slice. For one orientation. Then they repeat, getting slices at a different orientation.
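Recovering the per-slice volumes from the cumulative readings is just differencing. A minimal sketch with made-up sensor values (nothing here comes from the paper):

```python
import numpy as np

# Toy model: cumulative displaced volume after each 1 cm dip step,
# as would be read off the liquid-level sensor for one orientation.
cumulative_volume = np.array([0.0, 1.2, 4.5, 9.8, 17.0, 26.1])  # cm^3

# The volume of each 1 cm slice is the difference between
# consecutive cumulative readings.
slice_volumes = np.diff(cumulative_volume)
print(slice_volumes)  # -> [1.2 3.3 5.3 7.2 9.1]
```

Each orientation yields one such profile; reconstruction then combines many of them, tomography-style.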

Not only would it not work with something absorbent, but (per other comments below) does not (currently) work with shapes that trap air or catch liquid.

I think they're 3D-printing test shapes because it's the simplest way to generate interesting test structures. Otherwise they'd have to manufacture (for example) a bunch of columns plus a ball using metalwork or wood or something. Some of their models look like they were originally scanned from small sculptures -- although in this day and age, it's possible they were just created as 3D models in the first place.


Those are valid limitations, but the current 3D imaging techniques I know of are based on photography, laser scanning, or touch probing. All of the above face those limitations too.

My initial interest was that it could bring the cost down a lot, but they don't seem to get great precision, and they require a moving robot with at least 3 axes, which is unlikely to be very cheap if built to precision.


You don't need a robot arm at all. Consider clamping the test object inside a transparent sphere, resting on wheels that allow the sphere to be rotated. Instead of dipping, partly fill the sphere with liquid, test displacement at many orientations, then return to upright where more liquid can be added, repeat. You'd get the same measurements, just in a different order.


The robot arm is expensive because it needs precision hardware (backlash free bearings/gearing).

Your idea also requires precision hardware to move the sphere.


Actually, besides 3 stepper motors (for rotating in three axes), rubber wheels to hold the sphere, and a sensor to measure the water level, I don't think you need more. What I don't know is how much water you need; probably this can also be computed.

It's a better idea than dipping with a robot arm. And also faster.


Yeah... I just wondered why they were 3D printing things instead of getting normal, everyday objects that they could dip like keys, cups, pens, etc. I have a feeling it's not detailed enough for those use cases yet.


I think the main reason would be that they can compare their scans directly against the original models.

If they were to use real world objects, any comparison would have to use another scanner, which would introduce its own bias.


That seems backwards. Keys, cups, and pens are pretty trivial shapes. (Assuming you can get good angles on the cup so that it doesn't have air pockets.) More complex shapes like the weird statues they're using seem like a more robust test.


Keys and pens have fine details that would be pretty impressive to scan accurately.


This method would only work with objects that are rigid in all orientations, too - so bunches of keys are out.


They're using computed tomography techniques, which is exactly what I would expect.


Yeah, super simple. All you do is start with a bath of water and a robot arm, then just scan the rest of the elephant!

/s


Thanks for the laugh.


That is amazing. I think I've looked at every photogrammetry, deconstruction, hand modeling etc... technique for 3D reconstruction and this one takes the cake for ingenuity, quality and capability.

Not sure how practical it is right now, but I wonder: if you could do this with air volume at a high enough measurement resolution, you might get some amazing results.


I think I've looked at every photogrammetry, deconstruction, hand modeling etc... technique for 3D reconstruction and this one takes the cake for ingenuity, quality and capability.

Elsewhere in this thread, 'proee' links to what might be an even more interesting technique, which restrains the object in a dodecahedral "cage" (to allow for precise angular positions) and then measures the amount of liquid necessary to create a predetermined rise in liquid level: http://www.romansystemsengineering.com/hypothesis.html

Combining some aspects of the two, it might make sense to start with the object at the bottom of an empty container (in a cage or otherwise restrained) and add liquid at a known constant rate (as for a titration). Then generate a 2D graph of time against liquid height for a number of known angles, and solve in the same manner as this paper describes.


> Combining some aspects of the two, it might make sense to start with the object at the bottom of an empty container (in a cage or otherwise restrained) and add liquid at a known constant rate (as for a titration).

I believe this is isomorphic to the draining mechanism described in their paper.


I was thinking that accuracy would be easier to achieve if you started with a dry container, so that one wouldn't need to use Fluorinert. You could always combine the two: measure while filling, then while draining, then average.

I also wonder if instead of using discrete angles and multiple fills (or drainings) one could just tilt the container, possibly even slowly rotating it continuously. Add a squirt, measure the liquid level for a 360 rotation, then add another.

Edit: just saw your other comment suggesting similar things!


I think the original link is much more elegant, though this example is certainly impressive.


How about firing air at an object through a series of nozzles, measure the resulting turbulence and then solve for shape ;-)


> measure the resulting turbulence

Just a quick solution of a million-dollar math problem (https://en.wikipedia.org/wiki/Millennium_Prize_Problems#Navi...) and you're on your way!


You don't have to solve the Navier-Stokes equations analytically for this. You don't even have to solve them explicitly at all: I can see a grad student just getting a lot of experimental data and throwing it all at a neural net with good results.


> You don't have to solve the Navier-Stokes equations analytically for this.

Yes, to be sure you are right. It was just a silly (and, as you point out, mathematically inaccurate) joke.


It doesn't solve the hidden cavity problem very well, though.


Indeed. At least not in a way I can think of within thirty seconds.


If you're going to allow infinite engineering effort, then I propose going all thermodynamic: connect a VERY large piston to a small volume, and change the volume (and therefore pressure) of a large quantity of room-temperature gas - perhaps a short hydrocarbon like butane or propane, or chlorine, ammonia, or any one of numerous refrigerants. The piston can then vary the pressure, setting the fluid level without moving anything but the piston. It may not be fast, but computers can be very patient. Given some pumps and mixing vanes it could generate a ton of thermodynamic data. So toss in a piston, hit start, and come back a couple of days later to a 3D model AND chemical composition data.


A somewhat similar, much simpler process is in wide use in the manufacturing industry for form measurement:

https://www.google.de/search?q=air+gaging


Seems like the turbulence would present difficulties. How about lasers? Oh wait that's a known method.. :)

How about shining a normal light through it and just inverting the shadow calculations... hmm, has that been done?


I think I might have done what you are talking about a while ago. See my write up on my website:

http://jack.minardi.org/software/whats-in-a-shadow/


Fascinating. I'd wondered a while back how feasible it is to reconstruct an object from a 3-view orthographic projection, e.g. https://s-media-cache-ak0.pinimg.com/originals/b2/92/a9/b292.... Your work seems related.


Cool!


I feel like this would only work in limited cases of laminar flow, and even then the scattering of the air would be complicated af, I think.


If you can solve the math (even just numerically), you can in principle do it for non-laminar flow.


I think that's already a thing, right? They use that for corneal deformation at a minimum.


My friend created an advanced fluid scanner using a dodecahedron. His method is novel in that it:

1. Does not require rotation of the DUT, but instead uses just rising fluid level.

2. Uses permeable fluid so it achieves full density scans.

He spent a number of years trying to get the product to market as a startup, but ran out of personal funding.

He believes Archimedes may have used the Roman dodecahedron as a fluid scanner to test the quality of their projectiles to improve accuracy.

See http://www.romansystemsengineering.com/our_product.html


What property of a dodecahedron makes it useful for 3D scanning?


Wow, this seems like a beautiful technique.

Apparently, it's not that the dodecahedron is uniquely suitable, rather it offers the best compromise between competing factors. In this process, it mostly serves as a cage to hold the object for immersion, although it has some other useful properties as well:

"There is no other 3d platonic structure that has a higher fill-factor (volume that can be inscribed within a sphere of rotation) relative to the entrance hole (i.e. face area), while minimizing periphery length."

"Intuitively, one might suspect that both the dodecahedron and the icosahedron are reasonable choices of structure for the given constraints, with their subjective scores of 4.6 and 4.0 respectively. The dodecahedron offers increased fill factor, aperture size, and minimizes the periphery length, and is simple to manufacture."

http://www.romansystemsengineering.com/hypothesis.html


Found a presentation on this technique on youtube:

https://www.youtube.com/watch?v=yHvyPnkuAiw


Thanks. Video in the article kept timing out.

Edit: it looks like it's the same video (same length and intro).


Comment on the Hacker News system - most combinations of {edu,ac,gov,mil}.{$ccTLD} should probably be collectively treated as a TLD for site-display purposes. e.g. sdu.edu.cn (Shandong University) would be more descriptive than plain edu.cn (some academic institution in China).



Wow this is very impressive. I was thinking originally they were using the milk scanning technique - http://www.instructables.com/id/GotMesh-the-Most-Cheap-and-S...


Does anyone here know if there is an original paper or where this technique originated? It has to be pretty old and I can't imagine nobody thought of it before, but I'm somehow unable to find anything else than the posts on instructable, hackaday and youtube.


I was wondering that too, I've not found an authoritative source.


But they should be able to improve the milk technique by using different dipping angles as shown here, and in many cases it would probably be much faster than this technique (though in other cases it would be impossible to get all the geometry).


I once scanned myself at a maker faire in a similar manner. A swimming pool of blue dye with a camera mounted above was used, so that objects could be scanned by looking at their outline in the blue dye as they were dipped in (a different approach from the volume transforms presented here). To do this with a person involved strapping that person to a board and slowly dunking them in. Overall, the experience was unpleasant and what I imagine waterboarding is like, but hey, at least I got a 3D scan of myself.


From what I understand, which is admittedly not very much, waterboarding is a lot worse than being submerged. It's not so much dunking you into water as it is pouring water up your breathing passages.


The approved method is to hold the person down on his back, then quickly apply a sheet of Saran wrap over the face, followed by a washcloth, followed by many five-gallon buckets of water. The drowning reflex is elicited long before a typical breath-hold time. If water gets into the breathing passages, that's very bad - it means the procedure was done incorrectly.

The procedure is terminated quickly once the person is entering distress due to lack of oxygen.

The first few times it's done on a person, it's very frightening. Later, it becomes very annoying, especially if they wake you up at 4 AM for another go.


Opinions differ about the adjective 'approved'. I also doubt that, in practice, the procedure is "terminated quickly once the person is entering distress".

You make it sound like the DoD, Amnesty International and the Red Cross had a couple of meetings on this and came up with a humane way of doing this.


Being submerged wasn't bad, it was the water that flowed into my nose near the end of the scan that was bad. It was in no way as bad as waterboarding(especially since I was able to immediately get out), but enough to throw me into a bit of a coughing fit.


you are being submerged in liquid? therefore holding your breath? therefore altogether avoiding any of the torture induced by waterboarding?

if you're interested in hammering out some science for us, waterboard yourself and report back. i think you might be surprised.


I did not mean for my post to make waterboarding seem simply unpleasant. It is not; it is a horrible and inhumane practice whose use cannot be justified.

That being said getting a good scan required me lying on my back holding still as I was slowly lowered into the water. At the end of the scan I had water going into my nose. Holding my breath was unable to prevent this. Certainly not as bad as waterboarding, but certainly enough to elicit a coughing fit.


got it. thanks mate. was an honest question. maybe negligent on my part, but i had something like a dunking booth in mind. wasn't considering a rush up the nose. definitely at least a bit closer to waterboarding than what i'd imagined. :)


Hey, if you add a force sensor to the dipping arm, couldn't you, in principle, obtain a 3D density map of the scanned object as well, using Archimedes' principle?


No, I don't think so. Archimedes' principle tells us that the decrease in force required to support the object is equal to the weight of the displaced liquid. The force of gravity on the object itself is the same, regardless of which portion of the object is submerged.
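A quick numeric illustration of why the force reading adds nothing new (toy numbers, nothing here is from the paper): two objects with the same shape but different internal mass distributions produce identical force readings at every dip depth.

```python
# Sketch: support force while dipping, per Archimedes' principle.
# All numbers are made up for illustration.
g = 9.81             # m/s^2
rho_liquid = 1000.0  # kg/m^3 (water)
slice_volume = 1e-4  # m^3, identical for both slices

# Same shape (two equal-volume slices), mass packed differently:
mass_a = [0.9, 0.1]  # kg per slice, heavy slice at the bottom
mass_b = [0.1, 0.9]  # kg per slice, heavy slice at the top

def support_force(masses, n_submerged):
    """Force the arm must supply with n_submerged slices under water."""
    weight = sum(masses) * g
    buoyancy = rho_liquid * g * slice_volume * n_submerged
    return weight - buoyancy

# With the bottom slice submerged, both objects read the same force:
# buoyancy depends only on displaced volume, not on where the mass is.
print(support_force(mass_a, 1), support_force(mass_b, 1))
```

So the force trace is just the displaced-volume trace in different units.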


Indeed, I also reached this conclusion upon thinking about it a bit more.

So a force sensor on the arm would only be good as a way of measuring what they are already measuring, that is, the volume of the displaced liquid.


I'm not so sure. If the sensor was able to take a 2D measurement of the weight, the rotations could be used to calculate the 3rd dimension of density.


How about if you're using a compressible fluid?


See my other comment. It's an entirely novel approach to fluid scanning and includes 3d density scans.

http://www.romansystemsengineering.com/our_product.html



This is completely different.


Awesome technique! But I cannot imagine this ever being a fast process. 1000 dips for a small model, and you cannot dip it with force.


As long as the equipment is cheap, the speed isn't that big of a deal. Imagine a thousand $10k machines set up on a studio lot, constantly scanning items - it would be a boon for the gaming and film industries, where production takes months or years and has several steps of increasing graphics and art complexity. I bet you could also set up cameras and use the refractive index of the liquid to calculate textures while it's scanning.


Looks like they're doing 3 minutes per dip - at 1000 dips, that sums to over two days of continuous dipping.


Suggestion for increasing applicability: start with an optical scan, then only use this method to nail down the occluded parts. And instead of just gathering data, consider which angle will give you the largest amount of new information next. Not sure if the authors tried either.


Isn't this "dip transform" basically the (inverse) Radon transform[0] used in CT and MRI?

[0] https://en.wikipedia.org/wiki/Radon_transform


Not quite, for two reasons: displacement is integrated over all slices under the liquid (just take the derivative to reverse this), and, more importantly, CT reconstruction (can't speak for MRI) is often done slice by slice, which cannot be done for this system because you are measuring an area integral rather than a line integral; this means that to get a full reconstruction you have to rotate the object in more than one axis.

There are certainly similarities though.


Fun. So make a closed spherical assembly with lidar or something to range the fluid height, put it in a three-axis mount so the direction of down can be continually and smoothly changed, then drain the water very slowly.

Then change the fluid for something with less surface tension (hurray more uses for chlorofluorocarbons), and put it in a 20g centrifuge, and perhaps scanning times will be reasonable. :)


I wonder if they are taking water cohesion into account.

e.g., some of the water will stick to the sides of the object.


Pre-dip the object.


I wonder how much of a role wetting/capillary effects play in this? The liquid interface will distort as it approaches the object, and will try to meet at a certain contact angle (based on surface tensions etc). Correcting for this might help improve the resolution of the scans?


I would predict that it would be rather difficult to dip-scan a sponge. Might work if it was fully saturated (waterlogged) in advance, though.


This is a really clever solution!


Hugged to death :/

How do they handle overhangs that trap bubbles?

Maybe shaking and scanning in reverse? (Can still cause weird effects when the air can't get back in, but should be more detectable.)


They address this in the paper, but from the opposite angle (ho):

  It should be noted that our dipping scheme assumes that the
  object has no vertical _caps_ in any orientation. A cap is a vertical
  cavity that forms a vessel, in which water can be accumulated if
  the object is elevated vertically and air can be trapped, generating
  air pockets when the object is dipped in the opposite orientation.
  Most caps, if they exist, would be small and would have a minor
  effect akin to noise on the dip transform. Nevertheless, caps can be
  detected by dipping and then lifting back the object with the liquid
  trapped in the cap, yielding two different water levels. Flipping the
  object vertically allows detecting air pockets as they become caps.


Mobile friendly:

> It should be noted that our dipping scheme assumes that the object has no vertical _caps_ in any orientation. A cap is a vertical cavity that forms a vessel, in which water can be accumulated if the object is elevated vertically and air can be trapped, generating air pockets when the object is dipped in the opposite orientation. Most caps, if they exist, would be small and would have a minor effect akin to noise on the dip transform. Nevertheless, caps can be detected by dipping and then lifting back the object with the liquid trapped in the cap, yielding two different water levels. Flipping the object vertically allows detecting air pockets as they become caps.


Thanks, I forget that quoted text is horrible on HN on mobile.


You're not quoting text, you're formatting it as code. Just put "> " instead of indenting.


aren't there some pretty basic CSS rules to even make the formatted code look better on a narrow screen mobile? something like `white-space: pre-wrap`, IIRC?

maybe I should look into how to add custom CSS rules to a particular site (HN), can you do that native in firefox, or does it need a plugin?


By dipping from various different angles


It's a bit like a fluid PET scan: each angle gives partial information, and you "integrate" to gather the surface.

Gives me ideas.


Think of the intro to Ghost in the Shell, where the android body is floating up through a bath and the manufacturing layer on the body is dissolving and being removed. This method could detect that the material is 100% removed from the body, as well as confirm that the body is created exactly to spec.


I guess that would be possible, but one has to do some trickery to find false data.

Maybe correlating with the exact opposite dip works: dipping a bowl shows a negative volume change when the bowl begins to fill, while a dome (reversed bowl) will show additional volume of trapped air at the same point.


Can someone ELI5 how this gathers the spatial data for the samples? In my ignorance, it isn't clear to me how this works. I get that you would be gathering different volume data with each dip, but it seems to me that this information would look like a graph that rises and falls back to zero. In the video, they showed each dip as gathering an accurate 2D cross section of the object - for instance, the 2D slices of the elephant on each dip graph. They seem to be able to shape a closed 2D polygon slice per dip, and it is mystifying to me. Am I missing a part of the process? Is there some imaging going on as well, on top of the volumetric sampling?

Edit: I should add that this really impresses me regardless of how they do it. I've always thought it was a pretty big bummer how optical 3D scanning looks so incomplete in a lot of cases.


No, they only get a graph of the volume of each slice. They have to do this many times and compute something like a Radon transform (computed tomography) to reconstruct the shape.
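As a toy 2D analogue (all names and numbers here are illustrative, not from the paper): for a flat shape, the "volume" of each thin slice is just the shape's width along that slice - a parallel projection - so collecting slices at many angles and smearing them back (unfiltered backprojection, hence blurry) already localizes the object:

```python
import numpy as np

N = 64
ys, xs = np.mgrid[-1:1:N*1j, -1:1:N*1j]
shape = (xs**2 + ys**2 < 0.4**2).astype(float)   # a disk "object"

angles = np.linspace(0, np.pi, 60, endpoint=False)
ts = np.linspace(-1, 1, N)                        # slice coordinates

def dip_projection(img, theta):
    """Slice 'volumes' seen when dipping at angle theta (a sinogram row)."""
    t = xs * np.cos(theta) + ys * np.sin(theta)
    # bin the shape's area into slices along the dip direction
    proj, _ = np.histogram(t.ravel(), bins=N, range=(-1, 1),
                           weights=img.ravel())
    return proj

# Unfiltered backprojection: smear each angle's slice profile back
# across the image and average over all angles.
recon = np.zeros_like(shape)
for theta in angles:
    proj = dip_projection(shape, theta)
    t = xs * np.cos(theta) + ys * np.sin(theta)
    recon += np.interp(t, ts, proj)
recon /= len(angles)

inside = recon[shape > 0].mean()
outside = recon[shape == 0].mean()
print(inside > outside)  # True: the disk shows up as the bright region
```

A real reconstruction would use filtered backprojection (the ramp filter from CT), and the 3D dip case additionally needs rotations about more than one axis, as noted above.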


Is the software open sourced? This looks like it would translate into a fun hack-a-day project. At first sight, the required hardware seems pretty basic, no? It would be awesome if someone replicated it with a RaspPi or something and posted a step-by-step tutorial.


That is very clever. So slow you'd have to run it overnight, but that could be OK for some applications.

A good test: run it on an auto throttle body. Those have lots of voids and holes, and some people need to duplicate existing ones.


Doesn't look like they've figured out how to scan voids and holes yet.


Yes, they can scan voids - look at the examples. Blind holes or cavities that trap air could be a problem, but they should be able to detect such inconsistencies.


I half expected this to be a tongue-in-cheek exposition on creating molds. I guess it was on my mind after watching an artist friend create sculpture molds for pewter casting. https://www.joshhardie.com/fullscreen-page/comp-izfweszu/f59...


That seems equivalent to trying to get a joint distribution from its marginal distributions. So the constraint is probably that either it needs to be convex or you need to have a prior estimation of the object's cavities which means you need to know the 3d shape beforehand to have a mathematically guaranteed measurement.
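A tiny example of that constraint: two different shapes can share identical marginals along both axes, so a pair of orthogonal "dips" alone cannot distinguish them (which is why many angles are needed).

```python
import numpy as np

# Two different binary "objects" on a 2x2 grid...
a = np.array([[1, 0],
              [0, 1]])
b = np.array([[0, 1],
              [1, 0]])

# ...whose projections along both axes agree exactly.
print(a.sum(axis=0), b.sum(axis=0))  # column sums: [1 1] for both
print(a.sum(axis=1), b.sum(axis=1))  # row sums:    [1 1] for both
print(np.array_equal(a, b))          # False: the shapes still differ
```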


No such constraint exists for the Radon transform. There are limitations associated with limited data, but mathematically if you had perfect knowledge of the slices you could obtain a perfect reconstruction for any compactly supported generalized function (I'd usually say distribution but I don't mean the statistical object).



No images, but doesn't hang on them like the Google cache: http://web.archive.org/web/20170711133251/http://irc.cs.sdu....


This one has at least a few images: https://archive.is/QuLKN


Note that this only works for objects that don't have any flexible parts and don't interact with the water in any way other than pushing it aside (e.g. by soaking it up).


The same approach could be used with any liquids, and possibly gasses too. As a proof of concept, this is incredible. I can imagine a ton of applications too. Scanning fossils is the first that comes to mind. I'm sure there could be plenty of applications in forensics too.


How does one come up with something like this? The method is anything but straightforward and not practical at all, but it still produces good results. Amazing work.


How does it determine shape when measuring volume displacement? Is it only measuring the displacement of the top surface-tension layer of water, as if it were a slice?


As it dips the model into the liquid, it displaces an amount equal to the volume already measured plus one cross section at the current water level. By continuously measuring during the dips, you can determine the area of every slice. By dipping at different angles, you can create a large number of those cross-section datasets. By merging them and doing some hairy math (but well understood; it's the same as used for MRIs) you can work out the original shape.

It's a clever technique, but undoubtedly slow since it requires on the order of 500-1000 careful dips of an object to get a reasonable level of detail. I'm guessing they probably aren't using plain old water since then they'd have to worry about surface tension, evaporation, etc...


Surface tension and evaporation would just show up as noise that would disappear with enough dips.


Epic!



