https://twitter.com/_cartick/status/1640903057994285056?s=20

> Does it support model partitioning for quantized inference? I have 4x8GB cards, so I want to see if I can try larger models.