Hey, really appreciate the support I’ve gotten here so far. I’m trying to figure out how to model a slightly unusual situation, and want to check my assumptions before going too far down the implementation rabbit hole and getting stuck.

I’m using Amethyst/NCollide (and maybe eventually NPhysics) to implement audio games. These are sound-only, or sound-heavy, games targeting blind/visually-impaired players. Audio games rely heavily on sensors to provide players with indications of what’s around them. So for instance, if a player is navigating a world and there’s a wall 4 units ahead, I need to somehow present not only what the player will hit in 4 units, but also that it is 4 units away.

I think I have this correctly modeled with collision groups. I have a group allocated for non-physical colliders and non-physical sensors. There are essentially two types of collisions/intersections I need to model:

- Non-physical sensor hits non-physical area. I’d like to tell a player that a new road is X units ahead. The player’s physical collider can enter that road; I just need to know that the sensor is hitting a new area so I can tell them about the transition. The concept of “area” is a bit vague and will be specced out on individual maps.
- Non-physical sensor hits physical area. The player needs to know that a wall, pit, etc. is ahead, and that they’ll either not be able to pass, or that something bad will happen to them if they try to. Essentially the above logic applied to a wall, pit, monster, etc.
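For context, here’s roughly how I picture the group filtering working. This is a plain-Rust sketch of the membership/whitelist bitmask idea behind ncollide’s `CollisionGroups`, not the crate’s actual API; the group names are mine and purely illustrative:

```rust
// Illustrative group indices (my names, not ncollide's).
const PHYSICAL: u32 = 1 << 0;         // walls, pits, monsters
const NONPHYSICAL_AREA: u32 = 1 << 1; // roads, zones
const SENSOR: u32 = 1 << 2;           // the player's look-ahead sensor

#[derive(Clone, Copy)]
struct Groups {
    membership: u32, // which groups this collider belongs to
    whitelist: u32,  // which groups it may interact with
}

impl Groups {
    fn can_interact(self, other: Groups) -> bool {
        // Interaction requires each side to whitelist a group the
        // other is a member of (a symmetric test, as I understand
        // ncollide's filtering to be).
        self.whitelist & other.membership != 0
            && other.whitelist & self.membership != 0
    }
}

fn main() {
    let sensor = Groups { membership: SENSOR, whitelist: PHYSICAL | NONPHYSICAL_AREA };
    let wall = Groups { membership: PHYSICAL, whitelist: SENSOR | PHYSICAL };
    let road = Groups { membership: NONPHYSICAL_AREA, whitelist: SENSOR };

    // The sensor should notice both physical walls and non-physical roads...
    assert!(sensor.can_interact(wall));
    assert!(sensor.can_interact(road));
    // ...but a road never blocks or reports a wall.
    assert!(!road.can_interact(wall));
    println!("ok");
}
```

The point is just that the sensor whitelists both the physical and non-physical groups, while non-physical areas only whitelist the sensor, so they never collide with anything physical.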

I’m modeling the second of the two requirements above with shapes in distinct collision groups. I’m not sure how to model the first, though.

My initial instinct was to use a ray, but while it looks like I can tell that a ray collides with something via `RayInterferences`, it doesn’t look like I can tell *where* the ray collides. Is that accurate?
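If the ray query does expose a time of impact (ncollide’s `RayIntersection` appears to carry a `toi` field), then recovering both the hit point and the distance to announce seems straightforward. A sketch without the actual crate, where `Vec3` stands in for nalgebra’s vector type:

```rust
// Sketch: recover a hit point and distance from a ray cast's time
// of impact (`toi`), assuming `dir` is a unit-length vector.
// `Vec3` is a hand-rolled stand-in for nalgebra's vector type.

#[derive(Clone, Copy)]
struct Vec3 { x: f32, y: f32, z: f32 }

impl Vec3 {
    fn scale(self, s: f32) -> Vec3 {
        Vec3 { x: self.x * s, y: self.y * s, z: self.z * s }
    }
    fn add(self, o: Vec3) -> Vec3 {
        Vec3 { x: self.x + o.x, y: self.y + o.y, z: self.z + o.z }
    }
}

fn main() {
    let origin = Vec3 { x: 0.0, y: 0.0, z: 0.0 }; // player position
    let dir = Vec3 { x: 0.0, y: 0.0, z: 1.0 };    // unit facing vector
    let toi = 4.0; // hypothetically returned by the ray cast

    // Hit point = origin + dir * toi; with a unit `dir`, the
    // distance to announce to the player is just `toi` itself.
    let hit = origin.add(dir.scale(toi));
    assert_eq!(hit.z, 4.0);
    println!("wall {} units ahead", toi);
}
```

So if there’s a query that returns the `toi` rather than just a yes/no interference, that would cover the “4 units away” requirement directly.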

The other idea I had was using a `Segment`, but I suspect a ray would be conceptually easier to start with since it isn’t bounded. In situations like these, it can be tough to know how much detail is too much or too little to present, so being able to work with unbounded queries would be great. Plus, with a ray, I don’t have to calculate an opposite endpoint for the player’s sensor along their course at some arbitrary distance.

Can I use a ray for this, or are there other implementation strategies I haven’t thought of?