Maintainership roundtable and discussion #1272
I'd like to give away more of "my role" since I unfortunately don't have the bandwidth. That means bringing on permanent maintainers. I may want to keep working on lower-level stuff, like I have with matrixmultiply, and perhaps on numerical SIMD for other BLAS-like operations that could benefit ndarray. It's a bit of a pickle now that the organization is bigger than just one repo but has low activity across the board; I can't necessarily do everything without asking others. The status of the code is "not great" in terms of how easy it is to maintain and change: I know most of the internals, there is some lack of abstraction for the internals, and there is a lot of unsafe code that works only because of careful contributors and is easy to mess up. What do @termoshtt @nilgoyette @adamreichold @jturner314 @LukeMathWalker think about this? What's the direction for ndarray (there's a lot that can be done - modernisation using const generics)? Are there other projects that we should emulate? Or that have made ndarray redundant?
@bluss Since you mentioned matrixmultiply and simd in the other post, did you see the work from sarah (faer-rs)? |
@bluss Thanks for the update; the community has been worried about the maintenance of such an important Rust library. Many projects rely on its existence and can't find any drop-in replacement. I hope the ndarray creators and maintainers can come up with a long-term solution. I am sure there are people who would be happy, at the very least, to review PRs.
@bluss good to see this is not abandoned! I also really appreciate the lack-of-bandwidth problem, suffering from it myself regularly. The issue of how to bring on "permanent" maintainers is an ongoing problem though, at least for open-source projects like this, as they often live and die by the bandwidth (or interest) of a small handful of people. Given that this project does not seem to have the corporate backing that tends to address this particular problem via financial incentives, one possible option is that maintenance is handled via a committee, with membership that can be changed. This would require first setting up a contributing guideline and a code of conduct that would enable said committee to exist, but it may allow the project to actually make progress without the time-poor bottleneck in the mix. Just my two cents, really happy to see this conversation happening :-)
I think bringing in more people to share the load is a good idea. It can still fail, as volunteers sometimes just do not have any time to contribute. For example, we do have multiple active maintainers who continue the work independently at PyO3, but currently our active phases almost never overlap, which makes small changes slow, and it often feels impossible to obtain the necessary consensus for large changes. As for actually doing it, I see two options: you give access to some people you are able to trust somewhat, and let it run, living with the likely but hopefully temporary breakage resulting from that. Or you increase your time investment for a while to actively guide new people into reviewing PRs and making releases, but I am not sure if that is possible at all.
For me personally, with my
I do not know of any with the same "fundamental data structure" focus as ndarray.

Is the NumFOCUS organisation something you could see yourself contacting and asking for (monetary) support? Would money alone actually solve anything?
I think the thing it does help with is that funding enables somebody to justify prioritizing their time for maintenance should there be conflicting pressures on them as well. I mean, it's far from a perfect solution, but life is expensive, so unless one can afford to volunteer their time to an open-source software project (and many people can and do, don't get me wrong), then if there is demand X that pays the rent vs. really-interesting-project Y, then X will usually win. A financial incentive simply helps to level this field a bit. As for directions / applications / focus, it occurs to me that a selling point that could be used to attract some funding (I don't know how any of this stuff works, it's outside my realm of experience) is that these shiny-new Rust implementations of ubiquitous Python libraries do have massive market appeal--just look at how hot they are right now. Dispensing with BLAS/LAPACK and gaining out-of-the-box parallelization has immeasurable value to many industries and use-cases, after all.
I do not disagree, but I would like to add that this reasoning is limited to situations where one works on a project basis. If you have a steady job and obligations to a family, funding for individual projects does not change how much time one has for FOSS work. |
That's my main problem with this crate if I'm going to help maintain it. When I open the internals, I don't understand what I'm reading. I'm usually able to add a method and whatnot, but I don't feel knowledgeable enough for "more complex" stuff.
This is an excellent idea and this is already what's going on. I created
This is exactly my opinion. I don't think |
Just to highlight the importance of this library. We use NDArray as one of our backends for Burn's deep learning framework. |
Personally, now that I'm no longer a student, am working full-time, and have more responsibilities, I have less time and energy to devote to FOSS. And, unfortunately, I don't have much need for ndarray in my current work.

I do think that an n-dimensional array type is very important; while

It would be great to bring on more people to take over the maintenance. I'd also be happy to move my

As far as improvements go, I think that it would be possible to simplify

I have some ideas for how to update the internals and API using traits, GATs, and const generics, but I doubt I'll find the time to implement it all myself. If someone is interested in working on it, I'd be willing to chat about it.
Yeah, |
Great input from everyone. I wasn't fully aware of faer-rs, no, so thanks for the pointer. I would like to invite those participating in the discussion here to become collaborators in ndarray. Can I for example ask @adamreichold, are you interested? Do you have any contacts that are? |
Took me a while to consider the commitment, but yes, I am interested. I would be glad if I could help with maintenance and eventually further development. I do think my own time budget and my inexperience in maintaining this particular project imply that I could not immediately tackle any large changes. On the contrary, in the beginning I would deliberately limit myself to building and packaging issues and to reviewing contributions, with the aim of producing point releases and hopefully eventually a 0.16.0 release. Ideally, I will be able to learn enough to do more in the future. (I also do not want to give a wrong impression: I do not consider myself well-networked and have few contacts beyond direct collaboration via FOSS projects. I will ask the one acquaintance who I think could be in a position to contribute, though.)
I find myself in jturner's situation (less/no more ndarray at work, for a while), but I really love
Awesome, I've added you on this repo, but there is more admin to do - the whole org - which we will get to |
Thought I'd chime in here that I'd be happy to put my hand up to volunteer for some sort of maintainer / reviewer status. At the moment I'm also trying to contribute to rapl so I've at least got my mind in the correct linear algebra / tensor space to be thinking about this. Work schedule is a bit up and down, rather "up" at the moment so free time is at a premium and contributions will be slim for the next month or so. However, I do have enough availability to do reviews most any time, and am happy to participate in any planning where my input may be of value. |
Hey! I was looking to start contributing to ndarray. If my understanding is correct, it sounds like going through the good first issues and sending PRs might not be the most useful thing to do right now? In any case, it might be worth updating the status section of the readme to make the information readily available 🙂
@bluthej This project is neither dead nor actively maintained. It seems that there has been as much activity in the last 2 months as in the last 3 years, which is kinda promising. Look at the commits to get a better understanding. As a side note, we built an important part of our main project (medical imaging company) on ndarray and we do not regret it at all. Could it be better? Yes, of course. Does it offer everything we needed? Yep. I can't answer your specific question (should I contribute?), but I can at least say that your issues/discussions will be answered and your PRs will be read.
It seems to me that the apparent stagnation of this project is only partially explained by the original contributors being too busy. This is normal, but one could expect new contributors to show up for such an important or even foundational (for numerical Rust code) crate. I suspect that the very high level of complexity of this library plays a role in limiting contributions. And my gut feeling is that a fair share of this complexity is due to overengineering and could be removed. Are there people who would be interested in discussing the feasibility of a radically simplified and modernized "next generation" ndarray? This could lead to a prototype that would eventually either be absorbed into ndarray proper, or the old ndarray could be deprecated or remain as a compatibility layer, or whatever. I believe that work on such a simplified and modernized ndarray could be a way to revitalize numerics in Rust. There seems to be consensus that if ndarray were started today with the benefit of hindsight (and with current rustc), its design would be quite different:
While the above does not seem to be controversial, here are some additional ideas for further streamlining:
With all or most of the above, a new ndarray should be radically simpler and smaller, both in terms of implementation and API. User code that reads any array of |
I tried contributing to this crate and everything non-trivial was/is too complex for me. As a professional Rust/C++/Python programmer, I'm not particularly proud of writing this, but it is the way it is. So, yes, I totally agree with your sentence. Now that I've said that I'm ignorant about the internals, here's my opinion :) on some of your points
Agreed! I had the same gut impression when I came to the library. However, after working intensively on the array reference RFC, I can say that much of the complexity is more warranted than it first appears. I'd want to be careful not to confuse "unneeded complexity" with "undocumented complexity". Still, I think there are cleaner designs that could keep the same capability with clearer abstractions.
I am interested in this, but I'd strongly encourage this effort to be done under the
I believe if the library is designed carefully, we can leave the door open to (and maybe should provide) statically-dimensioned arrays. I think this for a few reasons: firstly, if you look at projects like
I think this is a bit of a misconception about the job of
Interesting note on these two: I've been thinking it over a lot, and I think generics in this case are very hard to avoid as cleanly as the above example. Obviously you need one for the type of the element. Given that I'm arguing for keeping statically-dimensioned arrays (and opening the door to fancy other layouts), that requires a second generic. And after a lot of thought, I think that you essentially have two options for things like
Sorry if those explanations kinda suck; I'm having trouble explaining clearly what I think is a fundamental trade off. Also, a disclaimer: it may be possible to still do this by managing to write a
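One possible way to picture the trade-off (this sketch is my own reading, and every name in it is hypothetical rather than proposed API): either the array carries two generic parameters, the element and the dimensionality/layout, much as ndarray's current `ArrayBase<S, D>` does, or the two are bundled behind a single parameter via an associated-type trait.

```rust
// Hypothetical sketch of the two options; none of these names exist in ndarray.

// Option 1: two generics - element `A` and dimensionality/layout `D`.
pub struct ArrayTwoGenerics<A, D> {
    data: Vec<A>,
    dims: D,
}

// Option 2: a single generic that bundles element and layout as associated types.
pub trait ArraySpec {
    type Elem;
    type Dims;
}

pub struct ArrayOneGeneric<S: ArraySpec> {
    data: Vec<S::Elem>,
    dims: S::Dims,
}

fn main() {
    let _a = ArrayTwoGenerics { data: vec![0.0_f64; 6], dims: [2usize, 3] };
}
```

Option 1 keeps signatures explicit but multiplies generic parameters in every function; option 2 keeps signatures short but pushes complexity into the trait machinery.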
Music to my ears. Finally, a few thoughts that aren't included in the above conversation:
Great to see your interest in this, @akern40 and @grothesque! IxDyn is slow because, in my work on ndarray, it has been an extra feature along for the ride, not something that was designed for. Its purpose has been to help encapsulate data, not to be used for numerical operations. I like the analysis done here, and the complexity level in ndarray is high like you say. The internal model of ndarray is not too complicated, but the knowledge of it is used in too many places. So refactoring that would definitely be welcome. (To some extent RawArrayView is an example of finding a more basic common building block for some operations - but its reason to exist is to handle uninitialized array elements correctly.) And yes, the focus should (ideally..) be on continuous delivery (making releases). That's how we make real contributions to the ecosystem. This is also maybe the hardest role to recruit for: someone (or a group) who can take over driving releases. 🙂
I agree with the general push towards this ideal, without saying anything for now about whether dimensionality information should be static or not, or whether it should be a concrete type or a trait-based interface.
Thanks for the feedback, @bluss! Can you chime in at all with your opinions on keeping the data-level

As for pushing releases, I'd be happy to be part of a team that works on this. I'm a relative newcomer, but I'm strongly interested and have an OK understanding of the array part of the codebase (I'm a little shakier on iteration and dimensionality, but those will come with time). I've also got the time right now in a way that others with more life responsibilities may not. Happy to talk offline if that's of interest.
It's funny, I stumbled into a use case today, the same day I am reading your comment. I also use

Thank you all for your work!
Thanks for your comments! I will reply to all the points, but this may take me some time. @akern40 wrote:
I fully agree about avoiding a split of the ecosystem. What I tried to suggest is that there might be value in exploring the viability of a radically simplified ndarray foundation in a separate crate. That may give clarity about what is feasible without having to consider current users of ndarray. Once it is clear what is feasible, the necessary changes could be added to ndarray proper in a way that gives users and depending crates time to adapt. But without knowing what is feasible, it seems difficult to justify bold changes.
Now for the individual points. @akern40 wrote:
Sure, these typing systems are useful, but they are not just about ndim; they are also about shapes (like fixing the length of some axes, expressing that two axes have the same length, or attaching a "unit" to an axis). Is anyone capable of doing such checking at compile time in any language? Already a Rust array library where ndim and all elements of the shape are static but fully generic would be very cool. However, I am not sure whether the language is technically ripe even for this. (The nalgebra crate might be doing just what is feasible right now.) Since ndarray's core business is dynamically shaped arrays (as in BLAS/LAPACK-style linear algebra, not 3d-vector linear algebra), adding partially static shapes to this would only increase complexity. Given that even a purely static-shaped array library would be technically very difficult, going beyond that seems even more so. Case in point: ndarray's current model (dynamic shape/strides with optionally static ndim) is very limited compared to jaxtyping, but it is already responsible for a fair share of API and implementation complexity (without much gain to show for it, as I try to demonstrate below). My impression is that the price for the small gain in static checking is too high.
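For readers less used to the current API, the split being discussed already exists in today's ndarray (this is existing, released API, shown only for concreteness): the rank may live in the type or be deferred to runtime, but the individual shape elements are always dynamic.

```rust
use ndarray::{Array2, ArrayD, IxDyn};

fn main() {
    // Rank fixed at compile time (Ix2), shape values chosen at runtime.
    let a = Array2::<f64>::zeros((3, 4));
    // Rank and shape both known only at runtime.
    let b = ArrayD::<f64>::zeros(IxDyn(&[3, 4]));

    assert_eq!(a.ndim(), b.ndim());
    assert_eq!(a.shape(), b.shape());
}
```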
I would like to argue that it is incoherent to pair a static ndim with dynamic shape and strides. Shape is a more low-level property in the sense that it matters in the innermost loops, while ndim is a more high-level property that rather determines the number of loops. (For optimizing inner loops, statically knowing the innermost elements of shape and strides would be more useful.) Moreover, there are typically "few" possible values for ndim, so even if it is dynamic, it can still be dealt with efficiently:
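As an illustration of that point (this sketch is mine, not part of the original comment; `sum_static` is a hypothetical kernel, while `into_dimensionality` is ndarray's existing fallible rank conversion), a runtime rank can be dispatched once and then handled by monomorphized, statically-ranked code:

```rust
use ndarray::{Array, ArrayD, Dimension, Ix1, Ix2, Ix3, IxDyn};

// Hypothetical statically-ranked kernel; monomorphized per rank.
fn sum_static<D: Dimension>(a: &Array<f64, D>) -> f64 {
    a.sum()
}

// Dispatch on the runtime rank once, then run static-rank code.
fn sum_dyn(a: ArrayD<f64>) -> f64 {
    match a.ndim() {
        1 => sum_static(&a.into_dimensionality::<Ix1>().unwrap()),
        2 => sum_static(&a.into_dimensionality::<Ix2>().unwrap()),
        3 => sum_static(&a.into_dimensionality::<Ix3>().unwrap()),
        _ => a.sum(), // fall back to the fully dynamic path
    }
}

fn main() {
    let a = ArrayD::<f64>::zeros(IxDyn(&[2, 3]));
    assert_eq!(sum_dyn(a), 0.0);
}
```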
It seems to me that the only real advantage of static ndim is the ability of the compiler to catch some errors at compile time. But note that ndim is just one among several potentially useful properties that could be encoded in the type (at least in principle). Singling out ndim, a property that can be checked at runtime at negligible cost, seems to me a needless complication for a library focused on rather large dynamically shaped arrays. @nilgoyette wrote:
It would no longer be possible to express in the type system that

@bluss wrote:
But |
@daniellga funny how those things happen! Question about that usage: is it important for your use that the data specifically is
Agreed. I actually have a repo that I was using for mocking up the array reference RFC; if that's a good place, happy to use its issues/PRs/codebase as a place for people to do some design concepts. On the note of
Even if this is the only advantage, that seems worth it to me? I also think it aligns with a Rust ethos: if we can get the compiler to check it for us, let's do that.
I actually think this is the stronger argument for maintaining genericity in some "layout" parameter. Seems like it would be better to build in a generic that lets us (and others) play with layouts (fixed dim, fixed stride, diagonal, sparse(?), etc), then expose that as a lower-level API. Maybe
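For what it's worth, a rough, hypothetical sketch of the "layout as its own abstraction" idea; none of these names exist in ndarray, and a real design would need far more (iteration order, contiguity queries, and so on):

```rust
// Hypothetical sketch only: a layout trait that different strategies
// (strided, fixed-rank, diagonal, ...) could implement.
trait Layout {
    /// Number of dimensions.
    fn ndim(&self) -> usize;
    /// Translate a multidimensional index into a linear element offset,
    /// or None if the index is out of bounds.
    fn offset(&self, index: &[usize]) -> Option<isize>;
}

/// Fully strided, dynamic-rank layout (roughly what ndarray uses today).
struct Strided {
    shape: Vec<usize>,
    strides: Vec<isize>,
}

impl Layout for Strided {
    fn ndim(&self) -> usize {
        self.shape.len()
    }

    fn offset(&self, index: &[usize]) -> Option<isize> {
        if index.len() != self.shape.len() {
            return None;
        }
        let mut off = 0isize;
        for ((&i, &n), &s) in index.iter().zip(&self.shape).zip(&self.strides) {
            if i >= n {
                return None;
            }
            off += i as isize * s;
        }
        Some(off)
    }
}

fn main() {
    let layout = Strided { shape: vec![2, 3], strides: vec![3, 1] };
    assert_eq!(layout.offset(&[1, 2]), Some(5));
    assert_eq!(layout.offset(&[2, 0]), None);
}
```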
Just to throw my two cents in here, I came to |
I am going to ping @sarah-ek into this discussion also since I seem to recall her mentioning the idea of including tensor ops for faer-rs. Assuming I am not imagining this, it could be a good opportunity for collaborative planning at the very least in terms of API design and whatnot. |
Ya faer-rs is awesome! I'd love to see some collaboration between ndarray and other libraries that deal with n-dimensional arrays. I'd hope that ndarray could provide strong connective tissue off of which other libraries could interoperate and build. |
Here is another multidimensional array library for Rust: https://docs.rs/mdarray/latest/mdarray/

@akern40 wrote:
That's my hope as well. Most of the libraries that I've seen (including faer-rs, rlst, and the above mdarray) have the following points in common:
Of course there is nothing wrong with implementing a multidimensional array library as an exercise or for a particular application. But given that the underlying storage is very similar, it seems that there is space to strengthen the role of ndarray as common infrastructure as far as technically feasible. The way I see it (do you agree?), the fundamental design decision/constraint behind ndarray is that it must provide an efficient abstraction for arbitrary, dynamic multidimensional dense arrays: this is useful in general, but in particular for projects like https://github.com/PyO3/rust-numpy Keeping this in mind, I hope that some of the reasons why all these libraries rolled their own array types can be mitigated by providing some coherent subset of the following (list is incomplete):
Of course not everything can be done at the same time and coming up with a nice, efficient and coherent design is difficult, but I have the impression that several people here are interested in participating in this effort. I do hope that it will be possible to at least cover the application areas of several "large array" crates in a single infrastructure crate. (Providing common infrastructure also for nalgebra-like applications seems more difficult. For example #879 relies on explicitly storing the shape and the strides for each array, which seems a no-go for fast 2x2 arrays.) |
Also relevant: the new

Unfortunately, I do not think that the design can be replicated in Rust. Or can it?
Either Zulip or Discord would work for me, feel free to create either. (I can also do it, as long as we decide which one to use. 🙂 ) Disclosure: the previous "official" channel for ndarray was #rust-sci:matrix.org. If we create something new, rather not use matrix IMO |
Ok, for any/all those interested, I have created a Zulip organization that you can sign up for here. It has the broad name "Rust Multidimensional Arrays", in the hopes that this can eventually be a place to converse about the topic in general in Rust (e.g., for a design working group), in addition to a mode of communication for logistics. Please note: I have included a Code of Conduct borrowed from GitHub and included it in the announcements channel. I have also set some fairly strict limits on various kinds of activity such as channel creation and direct messaging, in addition to requiring signup via GitHub or GitLab. If people find these restrictions cumbersome, let me know and I can relax them. Edit: I've added an email option because signups apparently weren't working with just GitHub/GitLab. |
Great! Right now it says an invite is required to join. (Github login path.) Edit: no, ok, I could join without that, using email. I might have misread that you said we had to use Github/Gitlab signup. |
Maybe we need two things:
Here is a rough sketch of sub-leader responsibility:
@grothesque Thanks for the great input about mdspan and that mdarray project. This project wants a modernization of its fundamental data structures, and mdarray more or less looks like exactly that. What does that mean? What does ndarray have left to offer? Should we just use mdarray instead?
Faer explains well in their documentation:
And IMO this applies in exactly the same way to ndarray in its current state; ndarray should have the same sentence in its docs.
I've been a little quiet on the discussion for the past month specifically because I've been working to figure out a technical path forward that tries to account for your comments @grothesque, and I think I've reached the point where I'm ready to share more publicly. If you check out https://github.com/akern40/ndarray-design/tree/design-doc you'll see a repo that I've been using to sketch out this design. As per your suggestion, the README provides a comprehensive design document that explains (and advocates for) the design that I suggest. I'm still working on the example code in that repo, but it really just implements what's described in the document. There are still things that need work: a re-design of the dimensionality trait, moving
This design is far from perfect; to say the least, it has holes that will need to be filled as it is actually incorporated into |
Thanks for the design proposal @akern40. I will comment, but my throughput is limited, unfortunately. @bluss wrote:
Yes, exactly, only that it fell from the sky and is much better than I could have imagined. I'm still in the process of understanding its inner workings, but I am very impressed by what I have seen so far. Perhaps the author of mdarray, @fre-hu, would like to join our discussion here?
In its current form, mdarray supports only a static number of dimensions (or rank), so it would not be suitable for interfacing with NumPy à la rust-numpy. (This is also a dealbreaker for my specific use case.) I am trying to understand whether the design could be extended to support a dynamic number of dimensions without losing its advantages. In a second iteration, I will try to understand how (ideas from) both projects could be merged. My understanding so far is that mdarray addresses the gripe that I formulated above about ndarray's static ndim not being very useful because all the elements of the shape are dynamic. It does this by introducing types that abstract data layouts. For example, there is a layout where all the dimensions are dynamic except the innermost one, which is static. This is not as general as the approach that C++'s mdspan takes, but it might be a good compromise for Rust in the absence of variadic generics.
@termoshtt that's interesting too. You're right that we need to be ready to accept maintainers continuously. I'm not sure we need to section it up so rigidly. It's a good idea that maintainers are not responsible for everything, and I don't think they are either; it's fine to focus on one's favourite, focus, or knowledge areas. (I do so too.) We should probably delineate exactly how a maintainer forum should work - only maintainers in this case - should it be on GitHub discussions, issues, or on Zulip? I'm most interested in getting to work with a few people making PRs and so on, rather than making formal structures. I want to get to a place where multiple maintainers feel confident to merge their own work (if it doesn't require more feedback) or merge others' PRs, without asking me.
Thanks for the interest and for inviting me. First, I can say that mdarray is a hobby project to explore what is possible. I will not really have time to drive it further myself, so I'm happy if it can be used in some way or if there are ideas that can be reused. One thing I wonder about: it seems difficult to fulfill all requirements in one library. The design in mdarray works well with arrays that are directly addressable on CPU, and where you want control over the layout and want to optimize with static information. But I'm unsure if it can be generalized to other use cases. About dynamic rank: yes, it would be possible, similar to ndarray. It will be a bit more complex to derive array types, and it will not always be possible to get accurate layout types for array views; instead one would have to fall back to a strided layout. Another question is element order, where I use column major only. Maybe it makes more sense to switch to row major. It could also be possible to make it parameterized, but there is a risk it will increase complexity quite a bit.
Hello @fre-hu!
Yes, it’s already very useful in this way! I think that you are not the only one with time constraints - contributors to ndarray have pretty much the same problem. It would be great if a community of interested people could be established behind one library to maintain some momentum. I do have some hope that this is happening here.
Likewise Fortran’s built-in arrays or C++’s new mdspan/mdarray do not fulfill all requirements, but look at what impact the latter is having: Fortran people discuss that the last huge advantage of Fortran over C++ is going away. Now Rust unfortunately doesn’t have variadic generics nor generic const expressions and it seems that both are still far away, but perhaps we could still manage to significantly improve on current ndarray as a general-purpose md-array abstraction in Rust.
Ndarray's layouts are fully strided. The rank can be either static or generic. Wouldn't it be possible to add such a general (but less efficient) layout to mdarray, while maintaining the other more static layouts? Then we would have a library that could accept any array from NumPy, say, but algorithms could still be implemented in Rust for specific layouts. Fallible conversions would be provided between the different layouts. Not sure how cumbersome the resulting library would have to be. Hopefully one could profit from the strengths of Rust's packaging by limiting the content of the basic library to infrastructure, and keeping actual algorithms in separate, exchangeable crates.
Is your choice motivated by BLAS/LAPACK being (marginally) more efficient for column-major data? Do I understand correctly that mdarray is column major in the sense that the restricted layouts are column major, but the fully strided layout can accept any (fixed-rank) strided array? Right now in Rust we cannot have a fully generic mdspan like in C++, but it should be possible to have a set of useful layouts for both column-major and row-major within a single library, or do you see a problem with this?
I think the simplest way is to add dynamic rank as a new shape type and keep the existing layout types. The shape types for static rank are tuples of dimensions (each static or dynamic), and the new type will instead consist of a Box/Vec. The resulting layout mapping will then use Box/Vec for both shape and strides. There can be limitations and for some operations like creating array views and permuting dimensions the rank must be static. But yes you can always convert to static rank for calculations.
The choice is only to have a convention, and column major is common for linear algebra. It is used both for the memory layout and to give the order of dimensions in iteration. Using a strided layout with row-major data will work, but operations that depend on iteration order will have a worse access pattern. It works fine for interfacing, though, and internally one could make a copy or reverse the indices. Full support for both row and column major would require one more generic parameter for the order. I had it in an earlier version, but removed it as it made both the library and the interface more complex. C++ mdspan gets around this since it is quite thin.
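For readers following along, the difference in convention comes down to which axis gets the unit stride. A small self-contained illustration (not mdarray code, just plain Rust):

```rust
// Row major: the last axis is contiguous.
fn row_major_strides(shape: &[usize]) -> Vec<usize> {
    let mut s = vec![1; shape.len()];
    for i in (0..shape.len().saturating_sub(1)).rev() {
        s[i] = s[i + 1] * shape[i + 1];
    }
    s
}

// Column major: the first axis is contiguous.
fn col_major_strides(shape: &[usize]) -> Vec<usize> {
    let mut s = vec![1; shape.len()];
    for i in 1..shape.len() {
        s[i] = s[i - 1] * shape[i - 1];
    }
    s
}

fn main() {
    assert_eq!(row_major_strides(&[3, 4]), vec![4, 1]);
    assert_eq!(col_major_strides(&[3, 4]), vec![1, 3]);
}
```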
ndarray-linalg maintainership discussed in issue rust-ndarray/ndarray-linalg#381 |
100%
Possibly folks consider that not ergonomic, or at least not usual, but you can basically do that already by just building dynamic (ndarray) tensors of static tensors. I prototyped that a bit here:

You can even have the scalar type be a
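As one concrete illustration of the "dynamic tensor of static tensors" idea with today's ndarray (this is not the linked prototype, just a minimal example using plain fixed-size arrays as the element type):

```rust
use ndarray::Array2;

fn main() {
    // A dynamically shaped 2-D array whose elements are statically sized 3-vectors.
    let field: Array2<[f64; 3]> = Array2::from_elem((4, 5), [0.0, 0.0, 0.0]);

    // The inner length is checked at compile time, the outer shape at runtime.
    let norm_sq: f64 = field[(0, 0)].iter().map(|x| x * x).sum();
    assert_eq!(norm_sq, 0.0);
}
```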
Ok, money where my mouth is! There is now code in the
Feedback is, as always, greatly appreciated!
Great, thanks! I cloned and also started looking at it. I see that it compiles and the tests run - but as far as I can see the tests do not touch the new code. Is it already possible to run something (even rudimentary), other than instantiating the new structures? I finally managed to do a first read of your design document. (I wanted to first experiment with and understand the inner workings of @fre-hu's mdarray crate, which I believe I now finally do. Do not hesitate to look at the issues I opened there (numbers 1 to 4).) It seems to me that the design you propose is in many ways similar to mdarray, which I think is a good thing, notably the bits relating to array references.

In the design document you write about ndarray and constant dimensions:
Can you point me to the relevant part of the code, because from what I have seen so far ndarray's arrays always have dynamic shapes, i.e. individual elements of the shape are not part of the type.
My favorite aspect of mdarray's design is how it allows to mix dynamic and compile-time shapes. See for example this comment. I think that this design allows to combine the strengths of ndarray and nalgebra in a single crate, and I do not see a reason why this approach could not be adopted by a redesign of ndarray. Any thoughts on this? |
@akern40 in the proposed design, it would be great to include the ability to adopt different backends straight away, such as recent popular crates like Apache arrow-rs, which the community is adopting quite fast and which has a slim API for custom allocators. That can open doors to holding not only CPU storage but also CUDA, wgpu, etc. For this reason, I recently dropped ndarray from the core of a crate I maintain for computer vision and deep learning, moving towards my own much simpler Tensor struct based on arrow::Buffer. See: https://github.com/kornia/kornia-rs/blob/main/crates/kornia-core/src/tensor.rs One extra reason for me was to have a lightweight crate without all the bells and whistles of ops, axis iterators, etc., in order to keep it very minimal.
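For readers who have not opened the linked file, here is a rough, hypothetical sketch of what such a storage-agnostic core can look like; this is not kornia-rs's or arrow-rs's actual code, only an illustration of the idea of a thin struct over an opaque buffer plus shape and strides:

```rust
// Hypothetical sketch: the storage parameter could be a CPU Vec, an arrow
// Buffer, or a device handle; the tensor itself only tracks shape and strides.
struct Tensor<S> {
    storage: S,
    shape: Vec<usize>,
    strides: Vec<usize>, // element strides, row major by convention here
}

impl<T> Tensor<Vec<T>> {
    fn from_vec(data: Vec<T>, shape: Vec<usize>) -> Option<Self> {
        if shape.iter().product::<usize>() != data.len() {
            return None;
        }
        // Row-major strides: the last axis is contiguous.
        let mut strides = vec![1; shape.len()];
        for i in (0..shape.len().saturating_sub(1)).rev() {
            strides[i] = strides[i + 1] * shape[i + 1];
        }
        Some(Tensor { storage: data, shape, strides })
    }
}

fn main() {
    let t = Tensor::from_vec((0..6).collect::<Vec<i32>>(), vec![2, 3]).unwrap();
    assert_eq!(t.shape, vec![2, 3]);
    assert_eq!(t.strides, vec![3, 1]);
    assert_eq!(t.storage.len(), 6);
}
```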
Not yet, this is very preliminary, just lays out what the fundamental data structures would look like. My next step is working on an implementation path forward that is as backwards-compatible as possible.
I think that's good as well! I didn't look too closely at
Ah sorry, that line is supposed to indicate that you don't need another / different generic to handle constant dimensions; you could build that into the
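For what it's worth, a tiny hypothetical sketch of that point (none of these names are real or proposed API): a compile-time rank can live inside the same layout parameter, rather than requiring an extra generic on the array type.

```rust
// Hypothetical names throughout; this only illustrates the point above.
struct ConstRank<const N: usize> {
    shape: [usize; N],
    strides: [isize; N],
}

struct DynRank {
    shape: Vec<usize>,
    strides: Vec<isize>,
}

// One layout generic covers both the static- and dynamic-rank cases.
struct Array<A, L> {
    data: Vec<A>,
    layout: L,
}

type FixedArray2<A> = Array<A, ConstRank<2>>;
type DynRankArray<A> = Array<A, DynRank>;

fn main() {
    let _a: FixedArray2<f32> = Array {
        data: vec![0.0; 6],
        layout: ConstRank { shape: [2, 3], strides: [3, 1] },
    };
    let _b: DynRankArray<f32> = Array {
        data: vec![0.0; 6],
        layout: DynRank { shape: vec![2, 3], strides: vec![3, 1] },
    };
}
```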
Absolutely agreed here - |
Backend flexibility is a major goal of the current design! I'm curious - when you say "adopt straight away", what do you mean by that? As in, you'd like to see an Arrow-based backend included as a first-class supported
Is the idea here to just have a type that you can use for storage? Say there existed an |
For all those who have been following along and participating in this discussion, you may be interested in seeing #1440, which is one step towards doing some core design work on the library. It doesn't even come close to achieving everything we've been talking about, but it will make most functions significantly easier to write - hopefully reducing friction for newcomers and veterans alike - and is 99% backwards compatible.

As a sneak peek, a function that takes in two mutable 2D arrays that aren't raw views (i.e., they both have

```rust
fn two_arrays<A, S1, S2>(arr1: &mut ArrayBase<S1, Ix2>, arr2: &mut ArrayBase<S2, Ix2>)
where
    S1: DataMut<Elem = A>,
    S2: DataMut<Elem = A>,
{
    // Fun with multidimensional arrays! If only the signature weren't such a pain...
}
```

under that PR can be written as

```rust
fn two_arrays<A>(arr1: &mut ArrayRef2<A>, arr2: &mut ArrayRef2<A>) {
    // That seems better!
}
```

These two functions are functionally equivalent. This is just an example; the same idea goes for immutable borrows, but you'd take

In fact, that's not the only new type: functions that only want to read/write to an array's layout (shape / strides) also get a new API, although it's slightly more complex. The prior syntax would have been

```rust
fn two_arrays_just_shape<A, S1, S2>(arr1: &mut ArrayBase<S1, Ix2>, arr2: &mut ArrayBase<S2, Ix2>)
where
    S1: Data<Elem = A>,
    S2: Data<Elem = A>,
{
    // Fun with shapes! We'd still like a nicer signature, though...
}
```

and is now

```rust
fn two_arrays_just_shape<T, A>(arr1: &mut T, arr2: &mut T)
where
    T: AsRef<LayoutRef2<A>>,
{
    // I think this is better, even if it's not quite as nice as the `ArrayRef` example
}
```

There is a rather-unavoidable reason for the roundabout

If this sneak peek is confusing, please know that when that PR gets merged and we write a changelog, I will do my best to write a clear explanation with thorough examples of the new API.
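To give a feel for how a call site might look under that PR (a hedged sketch: it assumes, as the RFC describes, that owned arrays can be borrowed as the new reference types; `ArrayRef2` comes from the linked PR and is not part of a released ndarray):

```rust
use ndarray::Array2;

fn main() {
    let mut a = Array2::<f64>::zeros((3, 3));
    let mut b = Array2::<f64>::ones((3, 3));
    // In the proposal, &mut Array2<f64> can be used where &mut ArrayRef2<f64>
    // is expected, so owned arrays are passed directly to the new signature.
    two_arrays(&mut a, &mut b);
}
```

(Here `two_arrays` refers to the function sketched above.)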
I'm just looking at the activity level in terms of PRs being merged, wondering if this project is still a thing?