[Apr 14 Discussion] An Efficient Implementation of SELF, a Dynamically-Typed Object-Oriented Language Based on Prototypes #316
Replies: 7 comments 17 replies
-
Overall I think this is a very interesting paper on the implementation of a very interesting language! SELF gives the programmer a lot of freedom and power through the lack of classes and types and through computation via message passing. This poses various challenges for the implementation, which the authors address by using maps for object representation and by devising techniques like dynamically customized compilation, type prediction, message inlining, and more. It would be interesting to know which specific techniques/ideas were passed down to modern JIT compilers.

One thing I find curious is that their evaluation does not include any comparison against an implementation of SELF itself (no pun intended). This isn't the first paper on SELF, so I assume there was some prior implementation of the language, though I may be wrong there. I do think it would have been better if they had some naive, unoptimized implementation to compare against, so they could highlight the effects of their optimizations! But I may have a big misunderstanding here, so please correct me if I do!

As a side note, I think the programming environment for SELF described in the paper is as seen in this video here. I don't have much to say about it besides that it's very different from what I expect an IDE to look like! It's interesting to see how people may have programmed about 30 years ago, though.
-
This paper was super interesting, but in the way that visiting an old science museum is interesting. You get to feel excited about the greats of old who came up with all the techniques that are considered ordinary and commonplace in today's day and age.
-
This paper is about a dynamic object-oriented language called SELF. It doesn't have classes or variables; instead it has prototypes and messages. The paper proposes two methods to alleviate the space and time costs of object-oriented dynamic languages. For the space cost, it uses maps as a mechanism to let objects in an inheritance family share non-assignable data; each child only has to store a copy of the assignable data. For the time cost, it segregates object references from the bytes area, so that byte array objects are never scanned and object headers are never parsed. I think these two methods are transferable to dynamic language implementation in general, and are probably already used in modern implementations.

The other interesting thing is the compiler design. The paper mentions that a dynamic language has little static type information, which makes type inference and optimization difficult. Interestingly, SELF just enumerates the possible receiver types for a method and compiles different machine code for each. The paper also mentions that SELF supports dynamic inheritance, the ability to change inheritance relationships at runtime. I've never used such a feature before (or I have used it without knowing), so I'm curious about when it would be useful to change inheritance at runtime.
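For what it's worth, the map idea is essentially what modern engines call "hidden classes" or "shapes". Here's a minimal Python sketch of the concept; the `Map`/`Obj` names and slot layout are my own illustration, not the paper's actual data structures:

```python
# Sketch of SELF-style "maps": objects cloned from the same prototype share
# one map holding their constant (non-assignable) slots, while each object
# stores only its assignable slot values. All names here are illustrative.

class Map:
    def __init__(self, constant_slots, assignable_names):
        self.constant_slots = constant_slots      # shared, immutable data/methods
        self.assignable_names = assignable_names  # layout of per-object slots

class Obj:
    def __init__(self, map_, assignable_values):
        self.map = map_                       # shared map (one per clone family)
        self.slots = list(assignable_values)  # per-object assignable data only

    def lookup(self, name):
        if name in self.map.assignable_names:
            return self.slots[self.map.assignable_names.index(name)]
        return self.map.constant_slots[name]

# One map shared by every point cloned from the same prototype:
point_map = Map({"print": "<method print>"}, ["x", "y"])
p1 = Obj(point_map, [1, 2])
p2 = Obj(point_map, [3, 4])

assert p1.map is p2.map                      # constant slots stored only once
assert p1.lookup("x") == 1
assert p2.lookup("print") == "<method print>"
```

The space win is that the constant slots (including methods) exist once per clone family rather than once per object; only the two-element `slots` list is per-object.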
-
I thought the discussion about memory segregation in S3.2 was interesting, and it got me thinking about the applicability of this technique in other languages. Of course we separate the stack and the heap, but, within the heap,
-
First of all, I was slightly triggered by this sentence from the introduction:
Did they mean that the concept of type declarations is archaic, or that there might be out-of-date type signatures in existing code? I hope it's the latter, which is indeed an advantage of dynamic languages... Dynamic inheritance seems like an interesting concept, but the example they gave was unsatisfying. It seemed like something that could easily be accomplished with an if statement. This
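One way to see the difference from an if statement: swapping the parent re-routes every unhandled message at once, instead of guarding each call site. A hedged Python sketch using delegation (the assignable `parent` slot and the state objects are my invention, not the paper's example):

```python
# Sketch of SELF-style dynamic inheritance via an assignable parent slot.
# Reassigning `parent` changes the behavior of *all* delegated messages in
# one step, rather than branching inside every method. Names illustrative.

class EmptyState:
    def describe(self):
        return "empty"
    def pop_behavior(self):
        return "error: nothing to pop"

class FullState:
    def describe(self):
        return "has items"
    def pop_behavior(self):
        return "pops an item"

class Stack:
    def __init__(self):
        self.parent = EmptyState()   # assignable "parent" slot

    def __getattr__(self, name):
        # Delegate any message Stack doesn't define to the current parent.
        return getattr(self.parent, name)

    def push(self, item):
        self.parent = FullState()    # change inheritance at runtime

s = Stack()
assert s.describe() == "empty"
s.push(42)
assert s.describe() == "has items"
assert s.pop_behavior() == "pops an item"
```

With an if statement you would re-test the state in `describe`, `pop_behavior`, and every other method; with dynamic inheritance the state is encoded once, in the parent link.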
-
I found this paper interesting, but definitely in more of a foundational way rather than as presenting a lot of ideas that are new to me (which makes sense given when this paper was published). Although I agree with the point made in lecture that there's no such thing as a "dynamic language", some parts of this paper show the importance of codesign between the language and the compiler. This goes beyond just dynamic semantics lending themselves to dynamic compilers; the specific dynamic semantics affect the details of the compiler/interpreter. For example, the way SELF handles objects and the use of maps in the compiler implementation are intertwined. This fact is kinda obvious in hindsight, but it's still good to see a concrete example.
-
I thought the customized compilation work was intriguing because of the relative simplicity of the idea, yet how powerful it is at unlocking optimization potential. I feel this ties into the idea of optimizing the common case, where the compiler anticipates common cases and optimizes them using inlining. I imagine this principle, particularly message splitting, could be applied in languages like Python, where the plus operator is generally used on ints and floats. I also feel this is similar to what was presented in lesson 11, where branches can be pushed as far upwards as possible so that, hopefully, there is only straight-line code after the branches, with each piece of straight-line code being the target of further optimization. I am also curious about how often primitive inlining or message splitting can be applied. It would be interesting to see how many times each of these opportunities is actually taken, compared to the total number of possible chances.
-
Thread for discussion of paper here