Rotating ndarray::Array3 using na::Rotation3

I’m interested in using ndarray with nalgebra to rotate a point cloud points: Array3, where points.dim() is (3, height, width), by a rotation rot: na::Rotation3. Basically the idea is to treat points as a matrix of size (3, height * width) and compute rot * points.

So far I have this, but I’m not sure if it’s efficient.

pub fn rotate(rot: &na::Rotation3<f64>, points: &Array3<f64>) -> anyhow::Result<Array3<f64>> {
    let (_, height, width) = points.dim();
    let n = height * width;
    // ndarray is row-major, so the (3, height, width) buffer holds all x
    // values, then all y values, then all z values. Viewed column-major
    // (as nalgebra expects), the same buffer is an (n, 3) matrix whose
    // columns are x, y, z: row stride 1, column stride n.
    let slice = na::DMatrixSlice::from_slice_with_strides(
        points
            .as_slice()
            .ok_or_else(|| anyhow!("points array malformed (not contiguous?)"))?,
        n,
        3,
        1,
        n,
    );
    // (n, 3) * R^T is the transpose of R * (3, n).
    let rotated = slice * rot.transpose();
    // The column-major data of the (n, 3) result is exactly the row-major
    // data of a (3, height, width) array, so this reshape is valid.
    let rotated_array: Array3<f64> =
        ArrayView3::from_shape((3, height, width), rotated.as_slice())?.to_owned();
    // is there a way to avoid the copy from the nalgebra to the array?
    Ok(rotated_array)
}

Alternatively, instead of making a na::DMatrixSlice out of my Array3, I could make an ndarray::ArrayView out of the Rotation3 (or any 3 × 3 matrix) and use ndarray’s linear algebra to perform the multiplication…


Btw, for some motivation: rotating this sort of Array3 of size (3, height, width) by a na::Rotation3 is a common scenario when:

  • applying a color calibration matrix to RGB (or other) color values (so instead of a Rotation3 you’d just have a general 3 × 3 matrix), e.g. to convert between different linear color spaces
  • transforming structured point clouds (from a lidar such as the iPhone 12’s or an Ouster’s, or a depth sensor like the Kinect)
  • transforming the vertices of a virtualized geometry image (recently popularized by Unreal Engine 5’s “Nanite”)