I’ve been using nalgebra a bit now, and while coding I noted a few questions/remarks about difficulties I encountered, mostly regarding efficiency. Here is the list, in no particular order:

Is there an efficient way of mapping over a matrix and providing the current position in the matrix (row/col) in addition to the value?

I was trying to reproject points from one image to another, so I needed the 2D coordinates (in the original image) of the point currently being reprojected. In the meantime, I solved it like this:

```rust
for (index, val) in matrix.iter().enumerate() {
    let (col, row) = helper::div_rem(index, nrows);
    // ... reproject `val` using its (row, col) position ...
}
```
Perhaps a double for loop with matrix.get_unchecked(i, j) would be faster; I didn’t benchmark it.
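To illustrate the index arithmetic in the loop above, here is a minimal sketch in plain Rust, assuming column-major storage as nalgebra uses; `div_rem` is a local helper written for this example, not a nalgebra API:

```rust
// Recover (col, row) from a flat index into a column-major buffer.
// For index = col * nrows + row: quotient is the column, remainder the row.
fn div_rem(index: usize, nrows: usize) -> (usize, usize) {
    (index / nrows, index % nrows)
}

fn main() {
    let nrows = 2;
    // Column-major layout of the 2x3 matrix [[1, 3, 5], [2, 4, 6]].
    let data = [1, 2, 3, 4, 5, 6];

    for (index, val) in data.iter().enumerate() {
        let (col, row) = div_rem(index, nrows);
        println!("m[{}, {}] = {}", row, col, val);
    }
    // The last element (index 5) sits at row 1, column 2.
    assert_eq!(div_rem(5, nrows), (2, 1));
}
```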

For Unit renormalize, could it be more efficient to use a first order Taylor approximation since it is supposed to be already near 1.0? I’ve seen this in a C++ lib using unit quaternions, that’s what made me look at it.
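For reference, the first-order trick mentioned here can be sketched as follows. This is a standalone illustration on a raw `[f64; 4]`, not nalgebra's actual `Unit::renormalize`: for a squared norm s close to 1, the approximation 1/sqrt(s) ≈ (3 − s) / 2 avoids both the square root and the division:

```rust
// First-order Taylor renormalization of a quaternion stored as [w, x, y, z].
// Valid only when the norm has drifted slightly away from 1.
fn renormalize_taylor(q: [f64; 4]) -> [f64; 4] {
    let s: f64 = q.iter().map(|x| x * x).sum(); // squared norm, near 1
    let scale = 0.5 * (3.0 - s);               // ≈ 1 / sqrt(s)
    [q[0] * scale, q[1] * scale, q[2] * scale, q[3] * scale]
}

fn main() {
    // A quaternion that has drifted slightly off unit norm.
    let q = [0.701, 0.0, 0.0, 0.715];
    let r = renormalize_taylor(q);
    let norm: f64 = r.iter().map(|x| x * x).sum::<f64>().sqrt();
    // One Taylor step brings the norm back very close to 1.
    assert!((norm - 1.0).abs() < 1e-4);
    println!("norm after Taylor step: {}", norm);
}
```

The error of a single step is quadratic in the drift, so it only pays off when renormalization is applied frequently enough that the norm never strays far from 1.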

Which functions are the most efficient for matrix creation? It took me some time to realize that all small matrices have a dedicated “struct-like” representation instead of an underlying vector. So now I believe the new() constructor is the most efficient for small matrices. What about dynamic sizes? People interested in numeric computation are most probably interested in these little efficiency questions.

Would it make sense to have specific init functions for DVector? For example, from_column_slice doesn’t really need to be given the vector length, and from_fn doesn’t need the second parameter in its closure.

Is there a way to create a matrix by moving (taking ownership of) data from a Vec? I didn’t dive deep into the Matrix implementation, but I didn’t see any such constructor in the API, only ones from array slices (which I guess are copied).

Why do decompositions (.svd(), .qr(), …) consume the matrix? Sometimes it forces a clone(). I don’t remember how these decompositions are implemented, but if they avoid copies internally, I guess it makes sense to consume. PS: the link to .cholesky() generated by the search functionality is wrong. It should point to http://nalgebra.org/rustdoc/nalgebra/base/type.SquareMatrix.html#method.cholesky but instead points to http://nalgebra.org/rustdoc/nalgebra/linalg/cholesky/type.SquareMatrix.html#method.cholesky.

> For Unit renormalize, could it be more efficient to use a first order Taylor approximation since it is supposed to be already near 1.0? I’ve seen this in a C++ lib using unit quaternions, that’s what made me look at it.

That’s an interesting idea. Though I would keep the current renormalize and add an extra method like renormalize_taylor so the user still has a choice of method. Issue created: https://github.com/sebcrozet/nalgebra/issues/376

> Which functions are more efficient for matrix creation? It took me some time to realize that all small size matrices had dedicated “struct-like” representation instead of an underlying vector. So now I believe the new() constructor is the more efficient for low size matrices. What about for dynamic sizes? People interested in numeric computation are most probably interested in those little efficiency questions.

Yes, the new() constructor is the most efficient for matrices and vectors of dimensions up to 6. The most efficient constructor for other matrices is .from_column_slice(...). I will document this.
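As a rough analogy for why this is the case, here is a plain-Rust sketch (not nalgebra code): a small fixed-size matrix behaves like a stack array with no allocation, while a dynamically sized one behaves like a heap `Vec` built from a slice:

```rust
// One allocation plus a copy, analogous to building a dynamically
// sized matrix from a slice with from_column_slice.
fn to_dynamic(slice: &[f64]) -> Vec<f64> {
    slice.to_vec()
}

fn main() {
    // Analogous to Matrix3::new(...): size known at compile time,
    // the data lives on the stack, no allocation at all.
    let fixed: [f64; 9] = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0];

    // Analogous to DMatrix::from_column_slice(...): heap-allocated.
    let dynamic = to_dynamic(&fixed);

    assert_eq!(&fixed[..], &dynamic[..]);
}
```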

> Would it make sense to have specific init functions for DVector? For example, the from_column_slice doesn’t really need to provide the vector length. Or the from_fn doesn’t need the second parameter in the function.

It would make sense, but might not be so simple to implement if we want those init functions to have the same names as for matrices. I will have to investigate this further to be able to say exactly what can be done to improve this. New issue: https://github.com/sebcrozet/nalgebra/issues/377

> Is there a way to create a matrix by moving (taking ownership) data from a Vec? I didn’t deep dive into Matrix implementation but I didn’t see any in the API, only from array slices (which are copied then I guess).

> Why do decompositions (.svd(), .qr(), …) consume the matrix? Sometimes it forces a clone(). I don’t remember how those decompositions are implemented but if they avoid copies internally I guess it makes sense then to consume.

Yes, decompositions consume the matrix to avoid copies and save one allocation. The matrix is often modified in place to contain part of the result of the decomposition. For example, the input matrix is stored by the QR struct after being modified to contain the whole QR decomposition.
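The ownership pattern described here can be sketched in plain Rust. This is a minimal illustration of the consume-and-reuse idea, not nalgebra's actual QR implementation:

```rust
// A decomposition-like struct that takes ownership of its input so the
// same buffer can be rewritten in place with the factorization result.
struct Qr {
    // The original storage, overwritten during the "factorization".
    qr: Vec<f64>,
}

impl Qr {
    // Takes ownership of `m`; a caller that still needs the original
    // matrix must clone() it first, exactly as observed in the question.
    fn new(mut m: Vec<f64>) -> Qr {
        for x in m.iter_mut() {
            *x *= 2.0; // stand-in for the real in-place factorization work
        }
        Qr { qr: m }
    }
}

fn main() {
    let m = vec![1.0, 2.0, 3.0];
    let decomp = Qr::new(m); // `m` is moved, not copied
    assert_eq!(decomp.qr, vec![2.0, 4.0, 6.0]);
    println!("{:?}", decomp.qr);
}
```

Because `new` takes `Vec<f64>` by value rather than by reference, the borrow checker makes the no-copy contract explicit: either give up the matrix or pay for a clone.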

> PS: the generated link to .cholesky() when using the search functionality is wrong. It should point to http://nalgebra.org/rustdoc/nalgebra/base/type.SquareMatrix.html#method.cholesky but points instead to http://nalgebra.org/rustdoc/nalgebra/linalg/cholesky/type.SquareMatrix.html#method.cholesky.

Thank you for the answer on all points! I’ve subscribed to the issues to be notified when things happen.

PS: I’ve noticed that you’ve tagged some issues with easy. I don’t know if you’re aware of this, but GitHub gives special meaning to labels called “help wanted” and “good first issue”. Basically, they help people onboard and contribute. This is especially useful for events like Hacktoberfest next month, where such issues are highlighted.