Help modelling sensors

Hey, really appreciate the support I’ve gotten here so far. I’m trying to figure out how to model a slightly unusual situation, and want to check my assumptions before going too far down the implementation rabbit hole and getting stuck.

I’m using Amethyst/NCollide (and maybe eventually NPhysics) to implement audio games. These are sound-only, or sound-heavy, games targeting blind/visually-impaired players. Audio games rely heavily on sensors to provide players with indications of what’s around them. So for instance, if a player is navigating a world and there’s a wall 4 units ahead, I need to somehow present not only what the player will hit in 4 units, but also that it is 4 units away.

I think I have this correctly modeled with collision groups. I have a group allocated for non-physical colliders and non-physical sensors. There are essentially two types of collisions/intersections I need to model:

  • Non-physical sensor hits non-physical area. I’d like to tell a player that a new road is X units ahead. The player’s physical collider can enter that road; I just need to know that the sensor is hitting a new area so I can tell them about the transition. The concept of “area” is a bit vague and will be specced out on individual maps.
  • Non-physical sensor hits physical area. The player needs to know that a wall, pit, etc. is ahead, and that they’ll either not be able to pass, or that something bad will happen to them if they try to. Essentially the above logic applied to a wall, pit, monster, etc.

I’m modelling the second half of the above requirements with shapes in distinct collision groups. I’m not sure how to model the first part, though.
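To make that concrete, the kind of group setup I mean looks roughly like the sketch below. The group numbers are just placeholders, and I’m assuming a recent ncollide3d where CollisionGroups lives in the pipeline module (older releases export it from world):

use ncollide3d::pipeline::CollisionGroups;

// Placeholder group numbers: 0 = physical colliders (walls, pits, monsters),
// 1 = non-physical area markers (roads), 2 = non-physical player sensors.
const PHYSICAL: usize = 0;
const AREAS: usize = 1;
const SENSORS: usize = 2;

fn sensor_groups() -> CollisionGroups {
    let mut groups = CollisionGroups::new();
    // The sensor lives in its own group...
    groups.set_membership(&[SENSORS]);
    // ...and only interacts with physical obstacles and area markers.
    groups.set_whitelist(&[PHYSICAL, AREAS]);
    groups
}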

My initial instinct was to use a ray, but while it looks like I can tell that a ray collides with something via RayInterferences, it doesn’t look like I can tell where the ray collides. Is that accurate?

The other idea I had was using a Segment, but I suspect a ray would be conceptually easier to start with since it isn’t bounded. In situations like these, it can be tough knowing how much detail is too much or too little to present, so being able to work with unbounded queries would be great. Plus, with a ray, I don’t have to calculate an opposite endpoint of the player’s sensor along their course at some arbitrary distance.

Can I use a ray for this, or are there other implementation strategies I haven’t thought of?

Hi!

You can get both what intersects the ray and where. Assuming you are relying on the CollisionWorld, you can call collision_world.interferences_with_ray(&ray, &groups). This will return an InterferencesWithRay iterator that yields tuples of type (&'a CollisionObject<N, T>, RayIntersection<N>).

The first tuple element is the touched object and the second gives you where the ray hit the object, as well as the normal at the hit point. Note that the hit point has to be computed from the ray intersection’s toi field (see the documentation of that field):

for (obj, inter) in collision_world.interferences_with_ray(&ray, &groups) {
    // `inter.toi` is the "time of impact" along the ray; convert it to the
    // world-space hit point with `point_at`.
    let hit_point = ray.point_at(inter.toi);
    // Do other stuff with `obj` and `hit_point`...
}

Note that the results are not sorted by closeness. So if you need only the closest hit, you’ll have to select the one with the smallest inter.toi value.
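For example, a minimal sketch of selecting only the closest hit, reusing the collision_world, ray, and groups from the snippet above:

// Keep only the interference with the smallest toi, i.e. the closest hit.
let closest = collision_world
    .interferences_with_ray(&ray, &groups)
    .min_by(|a, b| a.1.toi.partial_cmp(&b.1.toi).unwrap());

if let Some((_obj, inter)) = closest {
    // With a unit-length ray direction, `inter.toi` is the distance in world units.
    let hit_point = ray.point_at(inter.toi);
    println!("closest hit {} units ahead, at {:?}", inter.toi, hit_point);
}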

The ray-cast approach seems very suitable here. Another option, which may or may not fit your needs exactly, would be to give the areas themselves colliders too. Then you will get a proximity event when the player enters or leaves the area collider. Note that if your areas have complex shapes, you should use a Compound shape instead of a TriMesh (a triangle mesh has no interior, so you won’t get any intersection once the player is fully inside the mesh).
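Here is a minimal sketch of that area/proximity setup. It is shown with ncollide2d for brevity, I’m assuming a recent release where these types live in the pipeline module, and the shapes, positions, and user data are placeholders:

use nalgebra::{Isometry2, Vector2};
use ncollide2d::pipeline::{CollisionGroups, CollisionWorld, GeometricQueryType};
use ncollide2d::query::Proximity;
use ncollide2d::shape::{Cuboid, ShapeHandle};

fn main() {
    let mut world: CollisionWorld<f32, &'static str> = CollisionWorld::new(0.02);
    let groups = CollisionGroups::new();
    // Proximity query type: only reports enter/leave, never computes contacts.
    let proximity = GeometricQueryType::Proximity(0.0);

    // A "road" area, centered 5 units ahead of the origin.
    let road = ShapeHandle::new(Cuboid::new(Vector2::new(10.0f32, 2.0)));
    world.add(Isometry2::translation(5.0, 0.0), road, groups, proximity, "road");

    // The player's non-physical sensor, currently overlapping the road.
    let sensor = ShapeHandle::new(Cuboid::new(Vector2::new(0.5f32, 0.5)));
    world.add(Isometry2::translation(0.0, 0.0), sensor, groups, proximity, "player sensor");

    // Run once per frame, then drain the proximity events; they report
    // transitions only (entering or leaving an area), not continuous overlap.
    world.update();
    for event in world.proximity_events() {
        match event.new_status {
            Proximity::Intersecting => println!("entered an area"),
            Proximity::Disjoint => println!("left an area"),
            Proximity::WithinMargin => {}
        }
    }
}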

Thanks, very helpful.

My current setup just has a simple arena, with its borders modeled as Segments. I realize that my segments aren’t getting added because I don’t know what position to pass into world.add(...) when setting them up, and that makes me wonder if I’m doing that incorrectly.

Given that a segment specifies its start and end point, what position should I set when adding them to the world? And how does that position work with the start and end passed in during creation? I.e. are those points in world coordinates and expected to agree with the position passed to world.add(...), or are they local to the position and transformed into the world?

Thanks again.

The segment endpoints are expressed in the local space of the collision object. Thus the world-space segment that is actually considered is the local-space segment you constructed, with its endpoints transformed by the collision object’s position.

Got it, thanks. So to confirm that I’m on the same page:

My world is a 300×300 square. I’d like segments at the borders. To put a segment at the left and right borders, I’d set the start point to (0, 150) and the end point to (0, -150), then I could reuse that same segment but set the position to (-150, 0) and (150, 0) for the left and right borders respectively? The position is thus the midpoint of both segments, and the world-space endpoints end up at (-150, 150) and (-150, -150) for the left border and (150, 150) and (150, -150) for the right border?

If by “set the position” you mean “set the position of the collision object” then, yes, this looks correct.
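To make that concrete, a minimal sketch of the arena-border setup described above, using ncollide2d with placeholder margins and query types (I’m assuming a recent release where these types live in the pipeline module):

use nalgebra::{Isometry2, Point2};
use ncollide2d::pipeline::{CollisionGroups, CollisionWorld, GeometricQueryType};
use ncollide2d::shape::{Segment, ShapeHandle};

fn main() {
    let mut world: CollisionWorld<f32, ()> = CollisionWorld::new(0.02);

    // One segment shape, defined in local space: a vertical line 300 units
    // long, centered on the local origin.
    let border = ShapeHandle::new(Segment::new(
        Point2::new(0.0f32, 150.0),
        Point2::new(0.0, -150.0),
    ));

    let groups = CollisionGroups::new();
    let contacts = GeometricQueryType::Contacts(0.0, 0.0);

    // The same local-space segment is reused; only the collision-object
    // position (an isometry) changes, placing copies at the left and right borders.
    world.add(Isometry2::translation(-150.0, 0.0), border.clone(), groups, contacts, ()); // left
    world.add(Isometry2::translation(150.0, 0.0), border, groups, contacts, ()); // right
}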

Got it, I’m now getting collisions with the arena edge. Thanks, one more question. The ray’s direction should be a normalized vector representing the relative direction the ray is oriented toward, right? Not the Euler angles of a rotation? So if I want the ray to face along the X axis, it should have a direction of [1., 0., 0.] and not [0., 0., 0.]? (I.e. a direction representing where the ray points, not angles of rotation about the axes?)

Thanks for all the help.

Yes. More generally, a ray that starts at the point a and passes through the point b can be given the direction b - a (or any positive multiple of it).

The ray direction is not required to be normalized. If it is not normalized, then the ray-cast result’s toi field will be divided by the ray direction norm (and the result of ray.point_at(inter.toi) mentioned in my first answer will still be correct as it won’t be affected by that).
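To illustrate, a small sketch with ncollide3d (the 4-unit wall distance is made up):

use nalgebra::{Point3, Vector3};
use ncollide3d::query::Ray;

fn main() {
    // Two rays pointing down the X axis: one with a unit direction, one with
    // a direction twice as long.
    let unit_ray = Ray::new(Point3::origin(), Vector3::new(1.0f32, 0.0, 0.0));
    let scaled_ray = Ray::new(Point3::origin(), Vector3::new(2.0f32, 0.0, 0.0));

    // If a wall stands 4 units ahead, the unit ray would report toi = 4.0
    // while the scaled ray would report toi = 2.0, yet point_at recovers the
    // same world-space hit point in both cases.
    assert_eq!(unit_ray.point_at(4.0), Point3::new(4.0, 0.0, 0.0));
    assert_eq!(scaled_ray.point_at(2.0), Point3::new(4.0, 0.0, 0.0));
}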

Thanks for all the help. Only a couple more things to implement and I can move on from collision detection to game mechanics.

I realize there’s another type of sensor I need. In my asteroids shooter, I need to detect when an asteroid will hit the player and alert them to move out of its path. I’m using ncollide3d even though my game is 2-D because I eventually want full 3-D motion. My asteroids are modeled as spheres, and the action takes place in the X-Z plane, with Y used for rotation.

Modeling this in my head, I imagine a tennis ball pushing a can end-on, where the can is a non-physical collider and, at least in my game, overlaps with the ball. The can’s radius is that of the ball, and its height is the radius plus the speed times my collision alert warning threshold in seconds.

I have some code that never triggers. I do get collisions between my asteroids and my player, but never any collisions between the cylindrical bumper and the player. I know at least one issue I’m having is that I’m not sure how to model the cylinder oriented along the ball’s direction of travel, i.e. lying on its side, since I think by default the cylinder’s centerline runs along the Y axis. I imagine I want some rotation of PI/2, PI, or PI*1.5. I’ve tried every combination of these along the X and Z axes, but can’t trigger a collision. Since Y is my rotation axis, I haven’t tried appending a new rotation there.

Here’s what I’ve got:

        let mut collision_groups = CollisionGroups::new();
        collision_groups.set_membership(&[0]); // Player's ship is the only object in group 0.
        collision_groups.set_whitelist(&[0]);
        collision_groups.set_blacklist(&[1,2,3,4]);
        for (asteroid, transform, speed) in (&asteroids, &transforms, &speeds).join() {
            for (collider, player) in (&colliders, &player).join() {
                let radius = match asteroid.size {
                    AsteroidSize::Large => config.target_large_radius,
                    AsteroidSize::Medium => config.target_medium_radius,
                    AsteroidSize::Small => config.target_small_radius,
                };
                let height = radius + speed.0 * config.collision_alert_seconds;
                let bumper = Cylinder::new(height / 2., radius);
                let mut transform = transform.clone();
                transform.append_rotation_x_axis(consts::PI*1.5); // Tried every right angle and every axis but Y
                let aabb: AABB<f32> = bumper.bounding_volume(transform.isometry());
                // info!("AABB: {:?}", aabb);
                for obj in world.interferences_with_aabb(&aabb, &collision_groups) {
                    info!("Move or you're done for!");
                }
            }
        }

What am I missing?

FWIW, I think I’ve cracked this. I switched all of my code over to NPhysics, which seems to integrate everything far better than I was doing on my own. I’d been avoiding it because my game is arcade-style and I don’t need “real” physics, but I think it’s easier to strip away what I don’t need from NPhysics than it is to start with NCollide and implement the behaviors I want myself. I don’t know how accurate that perception is, but I’ll leave it here in case it benefits anyone else.

Anyhow, collider sensors now work. Should anyone find this in the future: I switched from Cylinder to Cuboid, and my collision sensor now works. There appears to be a ShapeHandle implementation for Cuboid but not for Cylinder. I’m not sure whether that’s related to the issues in my initial NCollide-only attempt, but basing everything on NPhysics and using a Cuboid got me the sensor I wanted.
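In case it helps anyone following the same path, here is a minimal sketch of roughly the kind of sensor collider I ended up with, assuming the set-based nphysics3d API with DefaultBodySet/DefaultColliderSet; the numbers and the -Z travel direction are placeholders:

use nalgebra::Vector3;
use ncollide3d::shape::{Cuboid, ShapeHandle};
use nphysics3d::object::{
    BodyPartHandle, ColliderDesc, DefaultBodySet, DefaultColliderSet, RigidBodyDesc,
};

fn main() {
    let mut bodies = DefaultBodySet::<f32>::new();
    let mut colliders = DefaultColliderSet::<f32>::new();

    // The asteroid's rigid body.
    let asteroid = RigidBodyDesc::new()
        .translation(Vector3::new(0.0, 0.0, 10.0))
        .build();
    let asteroid_handle = bodies.insert(asteroid);

    // A box-shaped bumper attached to the asteroid and marked as a sensor, so
    // it reports intersections without ever pushing anything around. Its
    // half-extent along Z covers the asteroid radius plus the alert distance.
    let radius = 1.0f32;
    let alert_distance = 5.0f32; // e.g. speed * collision_alert_seconds
    let half_length = (radius + alert_distance) / 2.0;
    let bumper_shape = ShapeHandle::new(Cuboid::new(Vector3::new(radius, radius, half_length)));
    let bumper = ColliderDesc::new(bumper_shape)
        .sensor(true)
        // Offset the bumper so it extends ahead of the asteroid along -Z, the
        // assumed direction of travel in this sketch.
        .translation(Vector3::new(0.0, 0.0, -half_length))
        .build(BodyPartHandle(asteroid_handle, 0));
    colliders.insert(bumper);
}

The proximity events for that sensor then show up from the geometrical world after each simulation step.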