Generate lifetime values #15
Comments
How about combining this with more coarse-grained memory management? Seeing how Tao3D "concepts" composed nicely in the presentations app etc., what about making memory management explicit by providing a pattern for it? I mean, memory is not infinite and most data in it actually has a lifetime. Garbage collectors just hide this fact. Why not require the programmer to think about memory as well? Sure, not about every single tiny allocation.

As a side effect, this could at the same time work as a "built-in wrapper over malloc()" to ensure low pressure on the system. One wouldn't need to care about loops with a low number of iterations and smaller allocations. For other cases, there is nothing easier than prepending the loop body with such a pattern.

There are definitely things I didn't consider, as this is in no way a thought-out proposal, but I wanted to let others know. This idea naturally builds upon the assumption that references will not escape the context, which I don't know how to ensure in XL, but maybe there are some ways.
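To illustrate what I mean by such a pattern, here is a rough sketch in Rust (not XL; the `Region` and `Handle` names are made up for illustration only): everything allocated through the region lives until the end of the loop iteration, so one decides about memory once per loop body instead of once per value.

```rust
use std::any::Any;

// A coarse-grained region: it owns everything allocated through it and frees
// it all at once when the region goes out of scope.
struct Region { objects: Vec<Box<dyn Any>> }
struct Handle(usize);

impl Region {
    fn new() -> Self { Region { objects: Vec::new() } }

    // The value is owned by the region, not by the caller.
    fn alloc<T: 'static>(&mut self, value: T) -> Handle {
        self.objects.push(Box::new(value));
        Handle(self.objects.len() - 1)
    }

    fn get<T: 'static>(&self, h: &Handle) -> &T {
        self.objects[h.0].downcast_ref::<T>().expect("wrong type for handle")
    }
}

fn main() {
    for i in 0..3 {
        let mut region = Region::new();            // one decision per loop body
        let h = region.alloc(format!("row {i}"));
        println!("{}", region.get::<String>(&h));
    }                                              // whole region freed here
}
```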
After a short experimentation, it turns out the main problem is that people tend to think about heap values the same way they think about stack values.

In other words, for stack values there is just one layer (there is no "meta" layer) - one just writes a variable name and everything is done automatically, so one cares only about the value itself and not about the "existence" of the value. With heap values there are two layers: a "meta" layer taking care of the existence of the value, and then the "main" layer, which is the value itself. One first interacts with the "meta" layer in the context of the variable, then finally interacts with the value itself, then needs to interact with the "meta" layer again every single time the value changes a meta property (e.g. its size), and finally, on top of that, one needs to interact with the "meta" layer once more after the value of the variable is no longer useful.

In yet other words, the culprit seems to be the "assignment" operation, which in most languages does very different things under the hood (often pretty convoluted - based on types, context, situation, multithreading/processing, heuristics, etc., the rval is shallow copied, weakly linked, deep copied, moved, borrowed, etc.), thus totally blurring our perception of the huge difference between stack and heap memory. My proposal would then be to make this difference explicit and treat stack and heap values differently.

I've spent quite some time now writing down how it could work, but then I realized I'm more or less describing Nim's ORC and Lobster's GC, both of which insert the necessary bookkeeping automatically at compile time.

Back to the topic - I actually think that lifetime values are important (maybe not necessary though) to make the advanced flow analysis more effective, and thus I see lifetime values as an optimization technique.
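To make the "two layers" observation concrete, here is a small plain-Rust sketch (not XL): for a stack value, assignment is the whole story, while for a heap value the same-looking operations also touch a hidden "meta" layer of allocation, resizing, moving and freeing.

```rust
fn main() {
    // Stack value: one layer. Copying is the whole story.
    let a: i64 = 42;
    let b = a;                 // plain bit copy, nothing else happens
    println!("{a} {b}");

    // Heap value: two layers. The "meta" layer (allocation, capacity, ownership)
    // is touched by operations that look just like ordinary assignments.
    let mut s = String::from("hello");   // meta layer: allocate
    s.push_str(", world");               // meta layer: possibly reallocate (resize)
    let t = s;                           // meta layer: ownership moves, no copy
    // println!("{s}");                  // rejected: `s` no longer owns the buffer
    let u = t.clone();                   // meta layer: deep copy, second allocation
    println!("{t} {u}");
}                                        // meta layer: both buffers freed here
```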
One thing I forgot to mention - because management of heap memory can be seen as manual "cache" management, the same ORC-like garbage collection could also be used for seamless persistent values, thus providing a very easy way to make the application state persistent (resumable), unlike in most other languages. This could be a killer feature with commercial potential. Imagine e.g. https://scattered-thoughts.net/writing/imp-intro/ but actually practically usable.
@dumblob Thanks a lot for the thoughts. I cannot address all of it in a timely fashion, so I'll try to focus on a few essential aspects. Feel free to resteer the discussion if you feel I forgot something 😃.

First, [...]

Second, your observation that memory allocation is a side effect which is not stack-ordered is perfectly valid. This has been a problem for a very long time, with solutions ranging from manual management (e.g. C) to automated garbage collection (e.g. Lisp) to static type-based deallocation (e.g. Ada) to dynamic constructors/destructors (e.g. C++) to the borrow checker (e.g. Rust), I believe roughly in chronological order. All of these have advantages and drawbacks, and most exist in multiple variants. I don't know how many garbage collectors have been invented, but I suspect it is more than I suspect.

The XL approach wants to be extensible. This includes garbage collection, for example. Basic garbage collection algorithms are relatively simple, e.g. mark and sweep, but they are also quite generic. Why garbage collect memory and not files or threads or keys or whatever you want? The [...]

In short, what you said about [...]
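To make the "quite generic" remark concrete, here is a minimal hedged sketch in Rust (not XL) of a mark-and-sweep pass parameterized by what "release" means, so it could reclaim files or threads just as well as memory; the `Resource`, `Heap` and `Named` names are illustrative only, not part of any existing API.

```rust
// Anything reclaimable: releasing could free memory, close a file, join a thread...
trait Resource {
    fn release(&mut self);
    fn children(&self) -> Vec<usize>;   // indices of resources this one references
}

struct Heap<R: Resource> {
    objects: Vec<Option<R>>,
}

impl<R: Resource> Heap<R> {
    fn collect(&mut self, roots: &[usize]) {
        // Mark phase: follow references from the roots.
        let mut marked = vec![false; self.objects.len()];
        let mut stack: Vec<usize> = roots.to_vec();
        while let Some(i) = stack.pop() {
            if marked[i] { continue; }
            marked[i] = true;
            if let Some(obj) = &self.objects[i] {
                stack.extend(obj.children());
            }
        }
        // Sweep phase: release everything that was not reached.
        for (i, slot) in self.objects.iter_mut().enumerate() {
            if !marked[i] {
                if let Some(obj) = slot.as_mut() { obj.release(); }
                *slot = None;
            }
        }
    }
}

// A stand-in resource: here just a name we print when it is reclaimed.
struct Named { name: String, refs: Vec<usize> }

impl Resource for Named {
    fn release(&mut self) { println!("releasing {}", self.name); }
    fn children(&self) -> Vec<usize> { self.refs.clone() }
}

fn main() {
    let mut heap = Heap { objects: vec![
        Some(Named { name: "root file".into(), refs: vec![1] }),
        Some(Named { name: "buffer".into(), refs: vec![] }),
        Some(Named { name: "orphan thread".into(), refs: vec![] }),
    ]};
    heap.collect(&[0]);   // index 2 is unreachable and gets released
}
```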
Thanks for your thoughts!
Yep. Just yesterday I saw what the devs of Passerine are trying to use - they chose a slightly different approach to the good old idea "deep copy everything everywhere, and only pass by reference if there is an appropriate guarantee". Usually this is evaded by combining e.g. flow analysis with some GC technique, but the Passerine folks do it by restricting the language, hoping their rules are so mildly restrictive that people will almost not notice them. They call it "vaporization": https://slightknack.dev/blog/vaporization/

I actually think this might be yet another approach for XL - and compared to the others, it seems much easier to implement than lifetimes or even simple garbage collection. Passerine is also a highly versatile language ("almost" like XL 😉), so maybe their "vaporization" is something to look at before one spends hundreds of hours implementing some more complex memory management scheme.
Lifetime is a new concept in XL, inspired by the Rust "borrow checker", and destined to make it possible in XL to implement a wider class of borrow-checks from the library.
A `lifetime` value is generated for the expression `lifetime X`, which returns a compile-time constant that can be compared to other lifetime constants. For instance, a reference type can be made from an owning value with the following condition:
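The condition itself is not shown above; as a hedged illustration of the underlying idea, here is a plain Rust analogy (the language whose borrow checker inspired this feature), not the actual XL snippet: a reference type can only be constructed from an owning value when the reference's lifetime is contained within the owner's, which is the kind of comparison that `lifetime X` constants would let a library express.

```rust
// A reference wrapper that is only valid while the owner outlives 'a.
struct Ref<'a, T> {
    target: &'a T,
}

// Building the reference type from an owning value: accepted only when the
// compiler can prove the reference does not outlive the owner.
fn borrow<'a, T>(owner: &'a T) -> Ref<'a, T> {
    Ref { target: owner }
}

fn main() {
    let owner = String::from("owned value");
    let r = borrow(&owner);              // ok: lifetime of r <= lifetime of owner
    println!("{}", r.target);

    // let escaped;
    // {
    //     let short = String::from("short-lived");
    //     escaped = borrow(&short);     // rejected by the compiler: the reference
    // }                                 // would outlive its owner
    // println!("{}", escaped.target);
}
```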