
Unification of submodels and distributions #2485

Open
Tracked by #2420
penelopeysm opened this issue Feb 10, 2025 · 8 comments

penelopeysm (Member) commented Feb 10, 2025

@mhauru @willtebbutt and I discussed submodels this evening (10 Feb). The present issue is that our support for submodels is currently only halfway done: we are able to extract the return value of a submodel, but not its latent variables.

(Note that this has always been true, even with the old @submodel macro; TuringLang/DynamicPPL.jl#696 merely changed the syntax we used to achieve this.)

(1) Overview

After a fair bit of back and forth, the summary of the interface we would like is something along these lines:

using DynamicPPL, Distributions

@model function inner()
    x ~ Normal()
    y ~ Normal()
    return "my string"
end

@model function outer()
    a ~ Normal()
    b ~ inner()
    @show b      # Should be a NamedTuple{(:x, :y)}
    @show b.x    # Should be a float
    c ~ inner() {OP} retval
    @show c      # Should also be a NamedTuple{(:x, :y)}
    @show retval # Should be "my string"
end

# Conditioning on submodel variables should work
outer() | (@varname(c.x) => 1.0)
# This should ideally work too
outer() | (c = (x = 1.0,),)

for some infix operator {OP} (see section 3.2 below for some possible options).

Note that there are several changes with respect to the current behaviour (as of 10 Feb 2025):

  1. No need to wrap in to_submodel if possible (I am not totally sure if this is doable)
  2. Manual prefixing should not be needed and may be disallowed
  3. Prefixing should occur not by prepending directly to the symbol (as is currently done), but rather by making the submodel's variables be a field of the parent model's variable. Thus, we can write @show c.x instead of @show var"c.x".
  4. The lhs of a tilde should capture the submodel's random variables instead of its return value.
  5. The return value, if desired, can be extracted by placing a further operator on the right-hand side of the submodel.
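To illustrate point 3, here is a plain-Julia sketch of the difference between the current flattened-symbol prefixing and the proposed field access. The values are made up and no DynamicPPL is required; this only demonstrates the two access styles.

```julia
# Current behaviour: submodel variables are flattened into the parent's
# symbol namespace, so they can only be reached via var"..." identifiers.
flat = (var"c.x" = 0.5, var"c.y" = -1.2)
@show getproperty(flat, Symbol("c.x"))   # awkward

# Proposed behaviour: `c ~ inner()` binds a nested structure, so the
# submodel's variables become ordinary fields of `c`.
c = (x = 0.5, y = -1.2)
@show c.x                                # natural field access
```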

Although we are collectively in favour of this interface, this is not meant to be set in stone yet, and there are several further points of discussion, which are detailed below.

(2) Motivation

Turing models in general have two types of 'useful information' that one might want to extract:

  1. The values of the random variables inside. This is best represented by the model trace, i.e., VarInfo that is used during execution.
  2. Since @model function ... end itself expands into a function definition (the so-called 'model evaluation function'), this function will itself also have a return value.

This return value may be constructed from the random variables' values, and in many of the DynamicPPL/Turing docs, this is indeed the case; however, this is not mandatory and in general the return value can contain arbitrary information.

With models, these two pieces of information are obtained respectively using rand() and function calls:

julia> using DynamicPPL, Distributions

julia> @model function f()
           x ~ Normal()
           return "hello, world"
       end
f (generic function with 2 methods)

julia> model = f()
Model{typeof(f), (), (), (), Tuple{}, Tuple{}, DefaultContext}(f, NamedTuple(), NamedTuple(), DefaultContext())

julia> rand(model)
(x = 0.12314369056401028,)

julia> model()
"hello, world"

Currently, x ~ to_submodel(inner()) does not assign the random variables in inner() to x, but rather the return value. This means that there are several inconsistencies between the behaviour of submodels and distributions:

  1. The obvious difference is that with a distribution on the rhs, the value of x is sampled by calling rand(dist). With a submodel on the rhs, the value of x is obtained by calling inner()().
  2. It is not possible to calculate the logpdf of a submodel inner() evaluated at x. This is because the return value x, in general, has no relationship to the random variables contained inside inner(), and indeed there is no guarantee that a well-defined 'distribution' of return values exists.
  3. In x ~ to_submodel(inner()), although the variables of inner() are added to the VarInfo and the resulting chains from sampling, x itself is not.

This proposal therefore seeks to unify the behaviour of submodels and distributions in a way that is internally consistent and thus easier for users to intuit. In particular, it is proposed that:

  1. The syntax lhs ~ rhs is reserved for the results of sampling from a submodel or distribution using rand(). The result of sampling from a model should be some kind of data structure (a NamedTuple, struct, or dictionary) which allows for indexing. The variable lhs (or its subvariables) should always be part of the VarInfo and it should be possible to condition on them.

  2. We adopt new syntax, in the form of lhs ~ submodel {OP} retval where {OP} is an infix operator, to extract the return value of a submodel (if so desired). Because distributions do not have return values, this syntax would only be accepted when the rhs is a submodel. The {OP} retval section may be omitted, in which case the return value is simply discarded.

  3. Running a submodel without extracting its random values (i.e. just writing submodel {OP} retval) should be forbidden, because in such a case, users should refactor their code to use a plain Julia function instead of a submodel.

(3) Concrete steps

  1. Decide if the general idea makes sense.

  2. Decide on the infix operator {OP}. We would probably like the operator to (1) be ASCII-compatible; (2) resemble a rightwards arrow.

    • I originally proposed ~>, but this is not allowed by the Julia parser.
    • The best boring option I see is -->
    • >>= is also possible, and I have a Haskell bias towards it, but it technically conflicts with right-bit-shift-and-assign.
    • The simpler -> and => are probably best avoided because they are already used for anonymous functions and Pair respectively.
  3. Figure out the data structure that should be obtained when sampling from a submodel. Right now, rand(model) returns a NamedTuple. To me, this feels like the most natural interface to use; it 'makes sense' that if t is a random variable in a submodel, c ~ submodel should allow us to access c.t. It is possible that we may want to use a different type of data structure that retains more information (i.e. is closer to a varinfo) but still has an interface that allows field access.

  4. Figure out how to obtain this data structure when sampling from a submodel. My original proposal was to evaluate submodels with a special wrapper context, say SubmodelContext, which would collect sampled variables and their values in a NamedTuple as each assume statement was hit. (Note, the behaviour of this would be very similar to ValuesAsInModelContext.) However, it seems quite plausible that this could be obtained simply by subsetting the global varinfo.

  5. Implement this in the DynamicPPL compiler. Note that this may require special attention to e.g. operator precedence / associativity which may in turn place more restrictions on the possible operators used. Some extra abstract type machinery will likely also be needed if we plan to not wrap submodels in a new type; my suspicion is that this might actually be the hardest part of it.

  6. Iron out the odd bits of conditioning submodels. I actually suspect that all the infrastructure necessary for this is already in place, and it's mostly a matter of writing a comprehensive set of tests to make sure that everything behaves 'as expected'.

  7. Iron out the expected behaviour when varnames conflict, e.g. if we have c ~ submodel() then we should probably not allow the identifier c to be reused on the lhs of another tilde.

  8. Write tests. And more tests. And more tests. Even with as elegant an implementation as we can come up with, my gut feeling is that there are bound to be many awkward edge cases!

  9. Turn the contents of this issue into documentation. (I wrote it up, so the hard bit's already done 😉)
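The operator candidates from step 2 can be checked against the Julia parser directly. This is a quick sketch using only Base Julia; as the bullet above notes, ~> is expected to be rejected by the parser, while --> and >>= parse (>>= at assignment precedence, --> at arrow precedence).

```julia
# Check each candidate by parsing a representative tilde statement.
# `Meta.parse(...; raise = false)` returns an `Expr(:error, ...)` or
# `Expr(:incomplete, ...)` instead of throwing on invalid input.
for op in ("-->", ">>=", "~>")
    ex = Meta.parse("c ~ inner() $op retval"; raise = false)
    ok = ex isa Expr && !(ex.head in (:error, :incomplete))
    println(rpad(op, 5), ok ? "parses" : "rejected by the parser")
end
```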

(4) Alternatives considered.

The main alternative considered was to use two different operators for extracting the random variables and the return value, plus one for extracting both, so something like:

@model function inner()
    x ~ Normal()
    y ~ Normal()
    return "my string"
end

@model function outer()
    a ~ Normal()
    b ~ inner()
    @show b       # Should be a NamedTuple{(:x, :y)}
    retval {OP1} inner()
    @show retval  # Should be "my string"
    c, retval2 {OP2} inner()
    @show c       # Should be a NamedTuple{(:x, :y)}
    @show retval2 # Should be "my string"
end

for some infix operators {OP1} and {OP2}.

We also considered having a single statement b ~ submodel return some data structure from which the random variables could be accessed using b.vars and the return value with b.retval.

However, we all agreed that the main proposal here is better, because its syntax is more elegant and it also does not introduce any extra layers of indirection.

mhauru (Member) commented Feb 11, 2025

Thanks for being the main brains behind this proposal and for an excellent write-up. I don't really have much to add; I agree with essentially everything in the OP.

I'm still thinking a bit about whether we could come up with an even more elegant way of getting the return values. I think the core premise here is solid: c ~ inner() should create a NamedTuple with the latents of inner and add to __varinfo__ all the latents of inner prefixed with c, and some complementary syntax should be used to extract the return value of inner(). Submodels are primarily models, which is to say things that take values for a set of random variables, return logprobs, and allow sampling from the prior of those variables; only secondarily are they allowed to return arbitrary Julia objects for whatever strange needs the user may have. The second infix operator as part of the RHS is the best proposal I've seen or can think of for capturing the return values, but I may keep thinking, see if there could be something even better.

yebai (Member) commented Feb 11, 2025

Looks good; one simple solution to unification: to_distribution(model()) would construct a NamedTupleDistribution that returns (latent=..., retval=...).

I prefer to minimise extra syntax or macros defined by Turing.jl. This is not a strict rule, but we should keep the bar high.
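To make the idea concrete, here is a toy sketch of what such a wrapper might look like. NamedTupleDistribution, to_distribution_sketch, and the two closures standing in for rand(model) and model() are all hypothetical; this is not an existing DynamicPPL API.

```julia
# Toy sketch (hypothetical; not a DynamicPPL API). The two closures
# stand in for `rand(model)` (latents) and `model()` (return value).
struct NamedTupleDistribution{L,R}
    sample_latents::L
    call_model::R
end
to_distribution_sketch(latents, call) = NamedTupleDistribution(latents, call)

# Sampling yields both pieces of information at once.
Base.rand(d::NamedTupleDistribution) =
    (latent = d.sample_latents(), retval = d.call_model())

d = to_distribution_sketch(() -> (x = randn(),), () -> "my string")
s = rand(d)
@show s.latent.x
@show s.retval
```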

willtebbutt (Member) commented:

This is basically what @penelopeysm mentioned as an alternative that we considered -- see (4) Alternatives considered in her above post. There she discussed returning a Tuple, and you're proposing a NamedTuple, but it's basically the same. I do think that calling it to_distribution is a step in the wrong direction though, because we won't be able to implement logpdf on it, and we won't be able to condition on it. i.e. if you have a model

@model function inner()
    x ~ Normal()
    y ~ Bernoulli()
    return "hello"
end

@model function outer()
    (c, retval) ~ to_distribution(inner())
    return nothing
end

you cannot condition on retval, despite the fact that it appears on the lhs of a ~. Moreover, you cannot condition on a value of the NamedTuple (c, retval), despite the fact it also appears on the lhs of the tilde.

A key benefit of the proposed syntax is that it balances a few concerns:

  1. you need to be able to get access to the thing that a model returns somehow,
  2. you need to be able to get access to any of the latents of a model (in order to encourage model re-use in the most general way), and
  3. simple semantics are preferable, and the rule "you can condition on anything on the lhs of a tilde and nothing else" is extremely simple.

The to_distribution approach violates the third point, and makes it hard for users to intuit what is going on.

yebai (Member) commented Feb 11, 2025

I do think that calling it to_distribution is a step in the wrong direction though, because we won't be able to implement logpdf on it, and we won't be able to condition on it. i.e. if you have a model

There is retval ~ to_sampleable(model) for inner models where we cannot implement the log density of returned variables. For something more general like (latent, retval) ~ to_sampleable(model), the docs can clearly explain that users can only condition on latent but not retval.

I proposed retval ~ to_sampleable(model) to indicate cases where conditioning is not allowed. The current to_submodel is simply an alias for to_sampleable. So, clarity on which variables one can condition on should not be an issue. However, I agree that simple semantics are preferable. The debatable point is what is simpler, i.e. whether a new special syntax is simpler than to_distribution / to_sampleable.

torfjelde (Member) commented:

Thanks for writing this up Penny!

Running a submodel without extracting its random values (i.e. just writing submodel {OP} retval) should be forbidden, because in such a case, users should refactor their code to use a plain Julia function instead of a submodel.

Not sure I quite see this.

The aim of submodels is to make it so that models can easily be shared across projects and applications, right? If so, there are definitely applications where you have submodels which happen to only represent a log-likelihood, but in a more general case might represent a model where you want to capture the return values. Asking the user to refactor this into a standard Julia function would require either a) accessing internal DPPL variables, or b) rewriting the model. (a) seems non-ideal, and (b) isn't so easy if the model they use comes from a different package which the user doesn't have control over.

But I guess this is technically something we've already decided to ignore after moving to ~ syntax for submodels (though I forget if we added the option of doing _ ~ ...).

Decide on the infix operator {OP}. We would probably like the operator to (1) be ASCII-compatible; (2) resemble a rightwards arrow.

Personally, I'm a bit worried about such an infix operator. It seems a bit too elaborate?

Maybe it's worth querying end-users about the syntax?

It is possible that we may want to use a different type of data structure that retains more information (i.e. is closer to a varinfo) but still has an interface that allows field access.

Would be surprised if we can find a good and performant solution that doesn't involve a VarInfo (or similar) 😕

Figure out how to obtain this data structure when sampling from a submodel.

I think the most likely solution would be:

  1. Nest varinfo objects.
  2. Extract the relevant varinfo object when you hit a submodel.
  3. "Merge" (not quite the current merge since this would be a nested version) the resulting varinfo from submodel call into the "parent" varinfo (or defer these things until they're needed, e.g. when calling getlogp we would need to gather all the logp values from the nested varinfos).
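A toy version of the three steps above, using plain Dicts in place of real VarInfo objects. All names and values here are made up for illustration; the real implementation would operate on nested VarInfos.

```julia
# Stand-ins for the parent varinfo and the varinfo produced by the
# submodel call (step 2), plus its accumulated log-probability.
parent_vi = Dict{Any,Any}(:a => 0.3)
sub_vi    = Dict{Any,Any}(:x => 0.5, :y => -1.2)
logp_sub  = -1.5

# Step 3, "merge": fold the submodel trace into the parent under the
# prefix `c`, keeping the nesting visible in the keys rather than
# flattening into symbols.
for (k, v) in sub_vi
    parent_vi[(:c, k)] = v
end

# Deferred logp gathering would sum contributions over nested varinfos.
logp_parent = -0.5
total_logp = logp_parent + logp_sub
@show parent_vi[(:c, :x)]
@show total_logp
```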

my suspicion is that this might actually be the hardest part of it.

Agreed 😬

I would also point out that some of these discussion points seem somewhat independent, e.g. how to represent return- and trace-values from submodels vs. syntax for specifying submodels. Might be worth separating these to avoid getting bogged down in one or the other?

mhauru (Member) commented Feb 12, 2025

One thing I really like about Penny's proposal is that everything that goes into a VarInfo is always on the LHS of a ~ statement, and everything that is on the LHS of a ~ statement results in a corresponding entry in a VarInfo. This comes from not allowing submodel {OP} retval without a ~, and using an infix operator rather than a tuple of (latents, retval). Also, anything on the LHS of a ~ can be conditioned on, and nothing else can be. That simplicity, I think, will a) make it very easy to learn and understand, and b) pay long-term dividends when composing this with other language features, developing new syntax, etc.

Note the pleasing analogy with = and bringing variables into scope. ~ is like =, but for __varinfo__ rather than your current Julia scope.

Note also that this would allow us to get rid of prefix; the prefix is obvious from the LHS and always available.

torfjelde (Member) commented:

Note also that this would allow us to get rid of prefix; The prefix is obvious from the LHS and always available.

But not always wanted, no? E.g. if you have a model that is nested 10 levels, you don't necessarily want to prefix all that.

willtebbutt (Member) commented Feb 13, 2025

But not always wanted, no? E.g. if you have a model that is nested 10 levels, you don't necessarily want to prefix all that.

Not prefixing something which is 10 levels down feels extremely dangerous to me. This seems like the kind of feature that feels like a win at first, but which you would quickly regret using when you accidentally define the same symbol somewhere else, and now you're conditioning on the wrong thing and erroring / silently failing. If we want to make it convenient to access symbols which are buried under layers of models, would a safer mechanism not be preferable?
