Nice. It's definitely an optimization problem. But you have to look at numerical error.
I had to do a lot of work on GJK convex hull distance back in the late 1990s.
It's an optimization problem with special cases.
Closest points are vertex vs vertex, vertex vs edge, vertex vs face, edge vs edge, edge vs face, and face vs face.
The last three can have non-unique solutions. Finding the closest vertices is easy but not sufficient. When you use this in a physics engine, objects settle into contact, usually into the non-unique solution space. Consider a cube on a cube. Or a small cube sitting on a big cube. That will settle into face vs face, with no unique closest points.
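The cube-on-cube degeneracy is easy to see numerically. A toy sketch (not GJK itself; the box extents are made up for illustration):

```python
# For two axis-aligned boxes in face-face contact, many point pairs realize
# the same minimum distance, so "the" closest points are not unique.

def closest_on_box(p, lo, hi):
    """Clamp point p to the axis-aligned box [lo, hi], component-wise."""
    return tuple(max(l, min(x, h)) for x, l, h in zip(p, lo, hi))

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# A small cube resting on a big (flattened) cube: B's bottom face lies on A's top face.
A_lo, A_hi = (0.0, 0.0, 0.0), (4.0, 4.0, 1.0)   # big box
B_lo, B_hi = (1.0, 1.0, 1.0), (2.0, 2.0, 2.0)   # small box on top

# Two different points on B's bottom face are both at distance 0 from A.
p1 = (1.2, 1.2, 1.0)
p2 = (1.9, 1.7, 1.0)
q1 = closest_on_box(p1, A_lo, A_hi)
q2 = closest_on_box(p2, A_lo, A_hi)
print(dist2(p1, q1), dist2(p2, q2))  # both 0.0 -> no unique closest pair
```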
A second problem is what to do about flat polygon surfaces. If you tessellate, a rectangular face becomes two coplanar triangles. This can make GJK loop. If you don't tessellate, no polygon in floating point is truly flat. This can also make GJK loop. You need polyhedra with a minimum break angle between faces, something most convex hullers can generate.
Running unit tests on random complex polyhedra will not often hit the hard cases. A physics engine will. The late Prof. Stephen Cameron at Oxford figured out solutions to this in the 1990s.[1] I discovered that his approach would occasionally loop. A safe termination condition for this is tough. He eventually came up with one; I had a brute-force approach that detected a loop.
There's been some recent work on approximate convex decomposition, where some overlap is allowed between the convex hulls whose union represents the original solid. True convex decomposition tends to generate annoying geometry around smaller concave features, like doors and windows. Approximate convex decomposition produces cleaner geometry.[2] But you have to start with clean, watertight (manifold) geometry, or this algorithm runs into trouble.
[1] https://www.cs.ox.ac.uk/stephen.cameron/distances/
[2] https://github.com/SarahWeiii/CoACD
Yeah, I agree: the error analysis could fill many posts on its own. I kinda ran out of steam by the end of this one, but I'd like to write a future post on it, covering both global and iterative solvers.
> Finding the closest vertices is easy but not sufficient.
As I'm sure you are aware, most GJK implementations find the closest features, and then a one-shot contact manifold can be generated by clipping the features against each other. When GJK finds a simplex of the CSO (configuration space obstacle), each vertex of the simplex keeps track of the corresponding points from A and B.
> A second problem is what to do about flat polygon surfaces
Modern physics engines, and the demo I uploaded, do face clipping, which handles this. For GJK you normally ensure the points in your hull are affinely independent.
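For the curious, the face-clipping step is essentially polygon-against-polygon clipping. A minimal 2-D Sutherland-Hodgman sketch (illustrative only; real engines clip the incident face against the reference face's side planes in 3-D):

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip a polygon against a convex polygon (both CCW).
    The 2-D analogue of clipping an incident face against the reference
    face's side planes to produce a contact manifold."""
    def inside(p, a, b):
        # Left of (or on) the directed edge a->b, for a CCW clip polygon.
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p, q, a, b):
        # Segment p-q against the infinite line through a and b.
        dx, dy = q[0]-p[0], q[1]-p[1]
        ex, ey = b[0]-a[0], b[1]-a[1]
        t = (ex*(p[1]-a[1]) - ey*(p[0]-a[0])) / (ey*dx - ex*dy)
        return (p[0] + t*dx, p[1] + t*dy)
    out = subject
    for i in range(len(clip)):
        a, b = clip[i], clip[(i+1) % len(clip)]
        src, out = out, []
        for j in range(len(src)):
            p, q = src[j], src[(j+1) % len(src)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(intersect(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(intersect(p, q, a, b))
    return out

# A unit-ish square clipped by a half-overlapping square -> the overlap quad.
print(clip_polygon([(0,0),(2,0),(2,2),(0,2)], [(1,1),(3,1),(3,3),(1,3)]))
```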
CamperBob2 1 hour ago [-]
> There's been some recent work on approximate convex decomposition, where some overlap is allowed between the convex hulls whose union represents the original solid.
I wonder if it would be smart to restate the problem in just those terms -- managing bounding-volume overlap rather than interpenetration at the geometric level.
If everything is surrounded by bounding spheres, then obviously collision detection in the majority of cases is trivial. When two bounding spheres do intersect, they will do so at a particular distance and at a unique angle. There would then be a single relevant quantity -- the acceptable overlap depth -- that would depend on the angle between the BV centers, the orientation of the two enclosed objects, and nothing else. Seems like something that would be amenable to offline precomputing... almost like various lighting hacks.
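A sketch of the trivial sphere-sphere rejection described above (the center coordinates and radii here are made up; "depth" is the overlap along the center-to-center axis):

```python
import math

def sphere_overlap(c1, r1, c2, r2):
    """Return (overlapping, depth). depth > 0 means the bounding spheres
    interpenetrate by that much along the center-to-center axis."""
    d = math.dist(c1, c2)
    depth = (r1 + r2) - d
    return depth > 0.0, depth

# Trivial rejection: spheres far apart -> no narrow-phase work needed.
print(sphere_overlap((0, 0, 0), 1.0, (5, 0, 0), 1.0))    # (False, -3.0)
# Overlap case: the depth is the single quantity discussed above.
print(sphere_overlap((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # (True, 0.5)
```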
Ultimately I guess you have to deal with concavity, though, and then the problem gets a lot nastier.
msteffen 4 hours ago [-]
I'm trying to work through the math here, and I don't understand why these two propositions are equivalent:
1) min_{x,y} |x-y|^2
   s.t. x ∈ A, y ∈ B

2) min_{x,y} d
   s.t. d ≥ |x-y|^2, x ∈ A, y ∈ B
What is 'd'? If d is much greater than |x-y|^2 at the actual (x, y) with minimal distance, and equal to |x-y|^2 at some other (x', y'), couldn't (2) yield a different, wrong solution? Is it implied that 'd' is a measure or something, such that it's somehow constrained or bounded to prevent this?
OlympicMarmoto 1 hour ago [-]
This is the epigraph form of the problem. You try to find the point with the lowest height in the epigraph.
But why would d be much greater? The problem asks to minimise d, and so at the optimum it cannot be greater than the smallest |x-y|^2.
https://en.wikipedia.org/wiki/Epigraph_(mathematics)
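A brute-force sanity check of the equivalence on a toy 1-D instance (A and B here are hypothetical intervals chosen for illustration, not from the article):

```python
# Toy 1-D instance: A = [0, 1], B = [2, 3], discretized on a grid.
n = 200
A = [i / n for i in range(n + 1)]
B = [2 + i / n for i in range(n + 1)]

# Form (1): minimize |x - y|^2 directly.
opt1 = min((x - y) ** 2 for x in A for y in B)

# Form (2): minimize d subject to d >= |x - y|^2. For any fixed (x, y) the
# smallest feasible d is exactly |x - y|^2 (any slack in d can be removed by
# decreasing d), so the joint minimum over (x, y, d) coincides with form (1).
opt2 = min((x - y) ** 2 for x in A for y in B)

print(opt1, opt2)  # both 1.0, achieved at x = 1, y = 2
```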
mathgradthrow 3 hours ago [-]
I can't read Substack on my phone, so I can't see the article, but the correct statement closest to what you have written is just that d is any real number satisfying this inequality. We define a subset U of A×B×ℝ by
U = {(a,b,x) : x ≥ |a-b|^2}
and then we're looking for the infimum of (the image of) U under the third coordinate function
d(a,b,x) = x
leoqa 3 hours ago [-]
Aside: I learned the Separating Axis Theorem in school and often use it in interviews when asked about interesting algorithms. It's simple enough that you can explain it to non-technical folks: "If I have a flashlight and two objects, I can tell you whether they're intersecting by shining the light on them." Then you can explain the dot product of the faces, early-exit behavior, and the MTV (minimum translation vector).
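A minimal 2-D version of the test (the polygons here are made-up examples; a real implementation would also return the MTV rather than just a boolean):

```python
def sat_intersect(poly_a, poly_b):
    """Separating Axis Theorem for 2-D convex polygons (CCW vertex lists).
    Tests each edge normal of both polygons and early-exits on the first
    axis where the projected intervals have a gap."""
    for poly in (poly_a, poly_b):
        n = len(poly)
        for i in range(n):
            # Edge normal: the "flashlight" shines perpendicular to this axis.
            ex = poly[(i + 1) % n][0] - poly[i][0]
            ey = poly[(i + 1) % n][1] - poly[i][1]
            ax, ay = -ey, ex
            proj_a = [ax * x + ay * y for x, y in poly_a]
            proj_b = [ax * x + ay * y for x, y in poly_b]
            if max(proj_a) < min(proj_b) or max(proj_b) < min(proj_a):
                return False  # found a separating axis -> early exit
    return True  # no separating axis exists -> the polygons intersect

tri = [(0, 0), (2, 0), (1, 2)]
sq_far = [(3, 3), (4, 3), (4, 4), (3, 4)]
sq_near = [(1, 1), (3, 1), (3, 3), (1, 3)]
print(sat_intersect(tri, sq_far))   # False
print(sat_intersect(tri, sq_near))  # True
```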
reactordev 5 hours ago [-]
This is novel indeed! What about non-spherical shapes? Do we assume spherical bounds and just eat the cost? Either way, narrow phase gets extremely unwieldy down at the triangle level. Easy for simple shapes, but if you throw 1M vertices against another 1M vertices you're going to have a bad time.
Any optimization to cut down on ray tests or clip is going to be a win.
bob1029 2 hours ago [-]
> Do we assume a spherical bounds and just eat the cost?
We pick the bounding volume that is most suitable to the use case. The cost of non-spherical bounding volumes is often not that severe when compared to purely spherical ones.
https://docs.bepuphysics.com/PerformanceTips.html#shape-opti...
Edit: I just noticed the doc references this issue:
https://github.com/bepu/bepuphysics2/issues/63
Seems related to the article.
Yeah, triangle-triangle really depends on the number of triangles.
I noticed that issue is 6 years old, what’s the current state?
bruce343434 5 hours ago [-]
Most likely this can be preceded by testing branches of some spatial hierarchy data structure; 1 million squared is a lot to compute no matter the algorithm.
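A toy uniform-grid broad phase illustrating the idea (a sketch only; the object AABBs and cell size are made up, and real engines typically use BVHs or sweep-and-prune):

```python
from collections import defaultdict
from itertools import combinations

def broad_phase_pairs(aabbs, cell=1.0):
    """Uniform-grid broad phase: hash each AABB (id -> (min, max)) into the
    grid cells it overlaps; only objects sharing a cell become candidate
    pairs for the expensive narrow-phase test."""
    grid = defaultdict(list)
    for obj_id, (lo, hi) in aabbs.items():
        for ix in range(int(lo[0] // cell), int(hi[0] // cell) + 1):
            for iy in range(int(lo[1] // cell), int(hi[1] // cell) + 1):
                grid[(ix, iy)].append(obj_id)
    pairs = set()
    for bucket in grid.values():
        for a, b in combinations(sorted(bucket), 2):
            pairs.add((a, b))
    return pairs

aabbs = {
    "a": ((0.0, 0.0), (0.5, 0.5)),
    "b": ((0.4, 0.4), (0.9, 0.9)),   # shares a cell with "a"
    "c": ((5.0, 5.0), (5.5, 5.5)),   # far away: never tested against a or b
}
print(broad_phase_pairs(aabbs))  # {('a', 'b')}
```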
reactordev 4 hours ago [-]
Without optimizations of the vertex buffer, correct, it's a 1T-iteration loop. But we can work on faces and normals, which reduces it by a factor of 3. We can octree it further spatially as well, but…
There’s a really clever trick Unreal does with their decimation algorithm to produce collision shapes if you need to. I believe it requires a bake step (pre-compute offline).
I’d be fine with a bake step for this.
OlympicMarmoto 4 hours ago [-]
Do you mean non-convex shapes? You can do a convex decomposition and then test all pairs. Usually games accelerate this with a BVH.
andrewmcwatters 3 hours ago [-]
Usually you have a render model and a physics model, which is a degenerate (simplified) version of the viewed object, with some objects tailored for picking up, or for allowing objects to pass through a curved handle, etc.
I would assume using this algorithm wouldn't necessarily change that creation pipeline.
reactordev 37 minutes ago [-]
I’m trying to find a way to NOT have hull models included in my games, saving players potentially GBs of disk space.
Constructing BVHs on the fly from the high-fidelity models we use, without incurring the performance penalty we currently do, so we can improve collision detection instead of clipping through low-res hull models.
The OP’s source code builds a BVH, but it still does so in a way that we’re able to clog it with 34M vertices. Sadly, we’re still going to have to explode geometry and rebuild a hierarchy in order to iterate over collisions fast. I do like the approach OP took, but we both suffer from the same issues.
andrewmcwatters 4 minutes ago [-]
Yeah, I definitely get the concern. Depending on how many models you're working with though, you're still burning CPU time calculating all of these, or you're burning disk space, because they have to exist one way or another.
I've never seen an approach that avoids collision meshes, and I think it's probably going to be infeasible for the foreseeable future due to the size and complexity of current model exports, especially ones exported unmodified from source for use with virtual geometry.
And part two: https://www.flipcode.com/archives/Theory_Practice-Issue_02_C...