Merged
84 commits
86a0fad
[Offloading] Per-job store completion tracking for KV cache offloading
EtelisIBM Apr 7, 2026
049e574
[Offloading] Fix assertion error: keep _reqs_being_stored entry for f…
EtelisIBM Apr 7, 2026
f7c0e5c
[Offloading] Add unit tests for OffloadingWorkerMetadata.aggregate
EtelisIBM Apr 7, 2026
5762df9
Merge branch 'main' into per-job-store-completion
Etelis Apr 7, 2026
2b2ead7
Add reverse mapping to avoid linear scan in store job cleanup
EtelisIBM Apr 8, 2026
d01da9e
Merge branch 'main' into per-job-store-completion
EtelisIBM Apr 9, 2026
2986007
Update vllm/distributed/kv_transfer/kv_connector/v1/offloading/common.py
Etelis Apr 12, 2026
0ad8015
[Offloading] Address CR: unify job tracking, key metadata by job ID
EtelisIBM Apr 12, 2026
4fb4d36
[Offloading] Address CR: remove internal job IDs, extract job trackers
EtelisIBM Apr 12, 2026
49f204c
[Offloading] Resolve merge conflict in common.py
EtelisIBM Apr 12, 2026
e12300c
Merge branch 'main' into per-job-store-completion
Etelis Apr 12, 2026
7fa79bb
Merge branch 'per-job-store-completion' of https://github.com/Etelis/…
EtelisIBM Apr 12, 2026
ae305ba
[Offloading] Fix shutdown() to use renamed tracker attributes
EtelisIBM Apr 12, 2026
1b5ceb9
Merge branch 'main' into per-job-store-completion
Etelis Apr 12, 2026
125d67a
Merge branch 'main' into per-job-store-completion
Etelis Apr 13, 2026
e00ed7f
[Offloading] Unify load/store transfer jobs
EtelisIBM Apr 14, 2026
c7dc237
[Offloading] Add completed load jobs
EtelisIBM Apr 14, 2026
049605a
[Offloading] Simplify request cleanup
EtelisIBM Apr 14, 2026
1f888d8
[Offloading] Rename metadata fields to job-centric naming
EtelisIBM Apr 18, 2026
c58f38e
[Offloading] Unify completed load/store jobs into completed_jobs
EtelisIBM Apr 18, 2026
ae46506
[Offloading] Rename _load_jobs to _current_batch_load_jobs
EtelisIBM Apr 18, 2026
d86ca2d
[Offloading] Hoist completion tracking into self._connector_worker_meta
EtelisIBM Apr 18, 2026
8f8a2de
[Offloading] Drop finished_sending fallback in update_connector_output
EtelisIBM Apr 18, 2026
1714fdd
[Offloading] Hoist offload keys to scheduler-level _job_keys
EtelisIBM Apr 18, 2026
23e2ce0
[Offloading] Rename aggregate() loop variable to job_id
EtelisIBM Apr 18, 2026
ff761e8
[Offloading] Flatten worker tracker state into _jobs / _req_state
EtelisIBM Apr 18, 2026
185dfcf
[Offloading] Remove unused variables from OffloadingConnectorWorker i…
EtelisIBM Apr 18, 2026
5cf6114
[Offloading] Drop "TP" from TransferJobStatus comment
EtelisIBM Apr 19, 2026
9cec233
[Offloading] Move offload keys into TransferJobStatus
EtelisIBM Apr 19, 2026
eab28b2
[Offloading] Assert load_job is None before assignment
EtelisIBM Apr 19, 2026
d8c0ffb
[Offloading] Carry job IDs in jobs_to_flush
EtelisIBM Apr 19, 2026
e0d4a7f
[Offloading] Assert pending_count == 0 after completion guard
EtelisIBM Apr 19, 2026
d5ff68c
[Offloading] Assert req_status is not None on completion
EtelisIBM Apr 19, 2026
1bbbde0
[Offloading] Gate per-job req_status pop on req.is_finished()
EtelisIBM Apr 19, 2026
9e262ac
[Offloading] Unify load_job/store_jobs into transfer_jobs + is_store …
EtelisIBM Apr 19, 2026
f85d2ae
Merge branch 'main' into per-job-store-completion
Etelis Apr 19, 2026
48a0ccb
[Offloading] Assert flushed jobs are stores in build_connector_meta
EtelisIBM Apr 23, 2026
4a55d2d
Update vllm/distributed/kv_transfer/kv_connector/v1/offloading/schedu…
Etelis Apr 23, 2026
838f36e
Update vllm/distributed/kv_transfer/kv_connector/v1/offloading/schedu…
Etelis Apr 23, 2026
90309d0
[Offloading] Fix req_status subscript and gate _req_status pop on req…
EtelisIBM Apr 23, 2026
91fc4a0
Merge remote-tracking branch 'upstream/main' into per-job-store-compl…
EtelisIBM Apr 23, 2026
11ecb54
[Offloading][Tests] Thread OffloadingWorkerMetadata through RequestRu…
EtelisIBM Apr 23, 2026
2b921d0
[Offloading] Tie transfer_jobs lifetime to worker acks, not per-job c…
EtelisIBM Apr 23, 2026
5020de2
[Offloading] Guard update_connector_output against None worker meta
EtelisIBM Apr 23, 2026
1421691
Merge remote-tracking branch 'upstream/main' into per-job-store-compl…
EtelisIBM Apr 23, 2026
f2e2d82
Merge remote-tracking branch 'upstream/main' into per-job-store-compl…
EtelisIBM Apr 26, 2026
fb28950
[Offloading] Drop defensive None-fallbacks where invariants guarantee…
EtelisIBM Apr 26, 2026
a6b66ca
[Offloading] Drop defensive None-fallbacks in worker get_finished
EtelisIBM Apr 26, 2026
8a139fd
Update vllm/distributed/kv_transfer/kv_connector/v1/offloading/schedu…
Etelis Apr 26, 2026
8ebde6b
Merge branch 'main' into per-job-store-completion
Etelis Apr 26, 2026
d9beba4
Update vllm/distributed/kv_transfer/kv_connector/v1/offloading/schedu…
EtelisIBM Apr 26, 2026
1a83c5d
[Offloading] Defer preemption-flush bookkeeping to update_connector_o…
EtelisIBM Apr 26, 2026
05b7cb3
[Offloading] request_finished returns False; fence block reuse via re…
EtelisIBM Apr 26, 2026
8dc87ae
Merge branch 'main' into per-job-store-completion
Etelis Apr 26, 2026
f459b3c
Merge branch 'main' into per-job-store-completion
Etelis Apr 26, 2026
df3c8ab
[Offloading][Tests] Drain deferred stores in RequestRunner._run
EtelisIBM Apr 26, 2026
9294929
[Offloading] Drop duplicate next_stored_block_idx assignment in updat…
EtelisIBM Apr 26, 2026
94e20b4
[Offloading][Tests] Drop unreachable post-EOS drain loop
EtelisIBM Apr 26, 2026
c84cdfe
[Offloading] Drop load fencing; consolidate fence to _build_store_jobs
EtelisIBM Apr 27, 2026
456ecf3
[Offloading] Lift jobs_to_flush to _current_batch_jobs_to_flush member
EtelisIBM Apr 27, 2026
b8e9cc8
[Offloading] Fast-path the fence: flat _unprotected_block_ids + isdis…
EtelisIBM Apr 27, 2026
805934e
[Offloading] Defer fence index population to request_finished
EtelisIBM Apr 27, 2026
043d0af
[Offloading] Guard the cleanup loop on dict-empty + request-finished
EtelisIBM Apr 27, 2026
c5879cd
Merge branch 'main' into per-job-store-completion
Etelis Apr 27, 2026
6712fe9
[Offloading] Drop _unprotected_block_ids; use dict.keys().isdisjoint
EtelisIBM Apr 27, 2026
b2d9c28
[Offloading] Replace _jobs with _load_jobs: dict[int, ReqId]
EtelisIBM Apr 27, 2026
1f751d6
[Offloading] Fence load dst blocks at update_state_after_alloc
EtelisIBM Apr 27, 2026
c0804a8
Merge branch 'main' into per-job-store-completion
Etelis Apr 27, 2026
3b88704
[Offloading][Tests] Unit tests for per-job store completion + fence
EtelisIBM Apr 27, 2026
ef6736a
Merge branch 'main' into per-job-store-completion
Etelis Apr 28, 2026
7f98be6
[Offloading] Add return type hint to build_connector_worker_meta
EtelisIBM Apr 28, 2026
8a82380
[Offloading] Fix complete_load called for keys never given to prepare…
EtelisIBM Apr 28, 2026
5e5f0d6
Merge remote-tracking branch 'upstream/main' into per-job-store-compl…
EtelisIBM Apr 28, 2026
eea29b7
Merge branch 'main' into per-job-store-completion
Etelis Apr 28, 2026
db7c221
Merge remote-tracking branch 'upstream/main' into pr-39186-merge-main
EtelisIBM Apr 28, 2026
7e79b85
Merge remote-tracking branch 'upstream/main' into pr-39186-merge-main-r3
EtelisIBM Apr 28, 2026
670fa66
Merge remote-tracking branch 'upstream/main' into pr-39186-merge-r6
EtelisIBM Apr 28, 2026
f059821
Merge remote-tracking branch 'upstream/main' into pr-39186-merge-r11
EtelisIBM Apr 28, 2026
39f381c
Merge remote-tracking branch 'upstream/main' into pr-39186-merge-r13
EtelisIBM Apr 28, 2026
ec4ae7d
Merge remote-tracking branch 'upstream/main' into pr-39186-merge-r15
EtelisIBM Apr 29, 2026
dbb078e
Merge remote-tracking branch 'upstream/main' into pr-39186-merge-r17
EtelisIBM Apr 29, 2026
8966c13
Merge remote-tracking branch 'upstream/main' into pr-39186-merge-r19
EtelisIBM Apr 29, 2026
c8862a1
Merge remote-tracking branch 'upstream/main' into pr-39186-merge-r21
EtelisIBM Apr 29, 2026
810bee9
Merge branch 'main' into per-job-store-completion
orozery Apr 29, 2026
134 changes: 134 additions & 0 deletions tests/v1/kv_connector/unit/offloading_connector/test_scheduler.py
@@ -232,6 +232,9 @@ def test_request_preemption(request_runner, async_scheduling: bool):
expected_stored_gpu_block_indexes=(9, 10, 11),
)

# All stores completed before request_finished -> fence index empty.
assert runner.connector_scheduler._block_id_to_pending_jobs == {}


@pytest.mark.parametrize("async_scheduling", [True, False])
def test_concurrent_lookups_of_the_same_prefix(request_runner, async_scheduling: bool):
@@ -292,6 +295,9 @@ def test_concurrent_lookups_of_the_same_prefix(request_runner, async_scheduling:
# second request will use the GPU prefix cache
assert transfer_jobs == list(runner.offloading_spec.handler.transfer_specs)

# Fence index drained: stores completed before request_finished ran.
assert runner.connector_scheduler._block_id_to_pending_jobs == {}


@pytest.mark.parametrize("async_scheduling", [True, False])
def test_abort_loading_requests(request_runner, async_scheduling: bool):
@@ -534,3 +540,131 @@ def test_do_remote_decode_stores_all_blocks(request_runner, async_scheduling: bo
decoded_tokens=[EOS_TOKEN_ID],
expected_stored_gpu_block_indexes=(0, 1, 2, 3, 4, 5),
)
# All stores completed before request_finished -> fence index empty.
assert runner.connector_scheduler._block_id_to_pending_jobs == {}


# ---------------------------------------------------------------------------
# Tests for the per-job-store-completion design and fence invariants.
# ---------------------------------------------------------------------------


def test_loads_do_not_populate_fence_index(request_runner):
"""Loads don't populate _block_id_to_pending_jobs (protected by
delay_free_blocks while in flight)."""
runner = request_runner(
offloaded_block_size=12,
gpu_block_size=4,
num_gpu_blocks=100,
async_scheduling=False,
)
runner.new_request(token_ids=[0] * 12)
runner.connector_scheduler._maximal_prefix_lookup = lambda key, req_context: 1
runner.run(decoded_tokens=[], complete_transfers=False)
assert runner.connector_scheduler._block_id_to_pending_jobs == {}


def test_fence_at_update_state_after_alloc(request_runner):
"""A load reusing a finished request's pending-store block triggers
a flush via update_state_after_alloc's fence.

num_gpu_blocks=2 forces the BlockPool to give req2 the same block
req1 just freed.
"""
runner = request_runner(
offloaded_block_size=4,
gpu_block_size=4,
num_gpu_blocks=2,
async_scheduling=False,
)

runner.new_request(token_ids=[0] * 4)
runner.manager.prepare_store.side_effect = (
lambda keys, req_context: generate_store_output(keys)
)
runner.run(decoded_tokens=[EOS_TOKEN_ID], complete_transfers=False)
assert runner.connector_scheduler._block_id_to_pending_jobs

runner.scheduler.reset_prefix_cache()
runner.new_request(token_ids=[0] * 4)
runner.connector_scheduler._maximal_prefix_lookup = lambda key, req_context: 1
runner.manager.prepare_store.side_effect = (
lambda keys, req_context: generate_store_output([])
)
runner.run(
decoded_tokens=[],
complete_transfers=False,
expected_stored_gpu_block_indexes=(0,),
expected_flushed_gpu_block_indexes=(0,),
)
assert runner.connector_scheduler._block_id_to_pending_jobs == {}


def test_fence_at_build_store_jobs(request_runner):
"""A new prefill (no load -> update_state_after_alloc returns early)
reusing a finished request's pending-store block is flushed by
_build_store_jobs's fence."""
runner = request_runner(
offloaded_block_size=4,
gpu_block_size=4,
num_gpu_blocks=2,
async_scheduling=False,
)

runner.new_request(token_ids=[0] * 4)
runner.manager.prepare_store.side_effect = (
lambda keys, req_context: generate_store_output(keys)
)
runner.run(decoded_tokens=[EOS_TOKEN_ID], complete_transfers=False)
assert runner.connector_scheduler._block_id_to_pending_jobs

runner.scheduler.reset_prefix_cache()
runner.new_request(token_ids=[1] * 4)
runner.connector_scheduler._maximal_prefix_lookup = lambda key, req_context: 0
runner.manager.prepare_store.side_effect = (
lambda keys, req_context: generate_store_output([])
)
runner.run(
decoded_tokens=[EOS_TOKEN_ID],
expected_stored_gpu_block_indexes=(0,),
expected_flushed_gpu_block_indexes=(0,),
)
assert runner.connector_scheduler._block_id_to_pending_jobs == {}


@pytest.mark.parametrize("async_scheduling", [True, False])
def test_complete_store_called_per_job(request_runner, async_scheduling: bool):
"""complete_store fires per-job, not deferred to request finish.
Each call carries only that store's keys."""
offloaded_block_size = 12
runner = request_runner(
offloaded_block_size=offloaded_block_size,
gpu_block_size=4,
num_gpu_blocks=100,
async_scheduling=async_scheduling,
)
runner.new_request(token_ids=[0] * offloaded_block_size)
runner.manager.prepare_store.side_effect = (
lambda keys, req_context: generate_store_output(keys)
)

# First store: fires when block 0 is fully populated.
runner.run(decoded_tokens=[0, 0], expected_stored_gpu_block_indexes=(0, 1, 2))
assert runner.manager.complete_store.call_count == 1
first_call_keys = set(runner.manager.complete_store.call_args.args[0])
assert len(first_call_keys) == 1
runner.manager.complete_store.reset_mock()

# Second store: fires when block 1 is fully populated, with different keys.
runner.run(
decoded_tokens=[0] * (offloaded_block_size + 1),
expected_stored_gpu_block_indexes=(3, 4, 5),
)
assert runner.manager.complete_store.call_count == 1
second_call_keys = set(runner.manager.complete_store.call_args.args[0])
assert first_call_keys != second_call_keys
runner.manager.complete_store.reset_mock()

# Finish: no store pending -> no further call.
runner.run(decoded_tokens=[EOS_TOKEN_ID])
assert runner.manager.complete_store.call_count == 0
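The fence tests above all revolve around one index, `_block_id_to_pending_jobs`, mapping a reused GPU block ID to the store jobs still writing it. A minimal standalone sketch of that invariant follows; the class and method names are hypothetical stand-ins (only the `_block_id_to_pending_jobs` attribute and the `dict.keys().isdisjoint` fast path mirror the diff), not vLLM's actual scheduler code:

```python
class StoreFence:
    """Maps GPU block IDs to the store jobs still writing them."""

    def __init__(self) -> None:
        self._block_id_to_pending_jobs: dict[int, set[int]] = {}

    def protect(self, job_id: int, block_ids: list[int]) -> None:
        # Called when a request finishes while its stores are still in flight.
        for block_id in block_ids:
            self._block_id_to_pending_jobs.setdefault(block_id, set()).add(job_id)

    def complete(self, job_id: int) -> None:
        # A finished store no longer needs to protect any block.
        for jobs in self._block_id_to_pending_jobs.values():
            jobs.discard(job_id)
        self._block_id_to_pending_jobs = {
            b: jobs for b, jobs in self._block_id_to_pending_jobs.items() if jobs
        }

    def jobs_to_flush(self, new_block_ids: list[int]) -> set[int]:
        # Fast path: the newly allocated blocks touch no pending store.
        if self._block_id_to_pending_jobs.keys().isdisjoint(new_block_ids):
            return set()
        # Otherwise, flush every job writing a block about to be reused.
        flush: set[int] = set()
        for block_id in new_block_ids:
            flush |= self._block_id_to_pending_jobs.pop(block_id, set())
        return flush
```

An empty index after a request finishes is exactly what the `== {}` assertions in the tests above check: every store completed before its blocks could be reused.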
@@ -0,0 +1,32 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright contributors to the vLLM project

import pytest

from vllm.distributed.kv_transfer.kv_connector.v1.offloading.common import (
OffloadingWorkerMetadata,
)

pytestmark = pytest.mark.cpu_test


def test_aggregate_sums_counts():
meta1 = OffloadingWorkerMetadata(completed_jobs={42: 1, 7: 1})
meta2 = OffloadingWorkerMetadata(completed_jobs={42: 1, 7: 1})
result = meta1.aggregate(meta2)
assert result.completed_jobs == {42: 2, 7: 2}


def test_aggregate_disjoint_jobs():
meta1 = OffloadingWorkerMetadata(completed_jobs={42: 1, 7: 1})
meta2 = OffloadingWorkerMetadata(completed_jobs={43: 1, 8: 1})
result = meta1.aggregate(meta2)
assert result.completed_jobs == {42: 1, 7: 1, 43: 1, 8: 1}


def test_aggregate_multiple_workers():
meta1 = OffloadingWorkerMetadata(completed_jobs={42: 1, 43: 1, 7: 1})
meta2 = OffloadingWorkerMetadata(completed_jobs={42: 1, 7: 1, 8: 1})
meta3 = OffloadingWorkerMetadata(completed_jobs={42: 1, 43: 1, 8: 1})
result = meta1.aggregate(meta2).aggregate(meta3)
assert result.completed_jobs == {42: 3, 43: 2, 7: 2, 8: 2}
34 changes: 13 additions & 21 deletions tests/v1/kv_connector/unit/offloading_connector/utils.py
@@ -1,6 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright contributors to the vLLM project
import copy
from collections.abc import Iterable, Iterator
from dataclasses import dataclass
from typing import Any
@@ -19,6 +18,7 @@
from vllm.distributed.kv_transfer.kv_connector.v1 import KVConnectorRole
from vllm.distributed.kv_transfer.kv_connector.v1.offloading.common import (
OffloadingConnectorMetadata,
OffloadingWorkerMetadata,
)
from vllm.distributed.kv_transfer.kv_connector.v1.offloading_connector import (
OffloadingConnector,
@@ -51,7 +51,6 @@
TransferResult,
TransferSpec,
)
from vllm.v1.outputs import EMPTY_MODEL_RUNNER_OUTPUT, KVConnectorOutput
from vllm.v1.request import Request
from vllm.v1.structured_output import StructuredOutputManager

@@ -369,7 +368,12 @@ def _run(self, decoded_tokens: list[int], complete_transfers: bool):
prev_scheduler_output = None
prev_model_runner_output = None
while True:
assert self.scheduler.requests
# Strict-always-False frees the request immediately on EOS, but
# the worker may still have a deferred store queued. In production
# the next request's step drains it; in single-request tests we
# must keep stepping until the scheduler sees no in-flight jobs.
if not self.scheduler.requests and not self.connector_scheduler._jobs:
break

scheduler_output = self.scheduler.schedule()
self._update_gpu_block_idx()
@@ -392,6 +396,10 @@ def _run(self, decoded_tokens: list[int], complete_transfers: bool):
finished_sending, finished_recving = self.worker_connector.get_finished(
scheduler_output.finished_req_ids
)
worker_meta = (
self.worker_connector.build_connector_worker_meta()
or OffloadingWorkerMetadata()
)

self.worker_connector.clear_connector_metadata()

@@ -400,6 +408,7 @@ def _run(self, decoded_tokens: list[int], complete_transfers: bool):
finished_sending=finished_sending,
finished_recving=finished_recving,
token_id=token_id or 0,
kv_connector_worker_meta=worker_meta,
)

prev_token_id = token_id
@@ -420,7 +429,7 @@ def _run(self, decoded_tokens: list[int], complete_transfers: bool):
if (
prev_token_id == EOS_TOKEN_ID
and prev_token_id != token_id
and self.scheduler.requests
and (self.scheduler.requests or self.connector_scheduler._jobs)
):
# continue for one more step to allow offloading to kick off
continue
@@ -435,26 +444,9 @@ def _run(self, decoded_tokens: list[int], complete_transfers: bool):

self._parse_transfers()

# run one more step to update finished stored
if EOS_TOKEN_ID in decoded_tokens:
assert not self.scheduler.running

while self.scheduler.requests:
scheduler_output = self.scheduler.schedule()

finished_sending, finished_recving = self.worker_connector.get_finished(
scheduler_output.finished_req_ids
)

assert not finished_recving

model_runner_output = copy.deepcopy(EMPTY_MODEL_RUNNER_OUTPUT)
model_runner_output.kv_connector_output = KVConnectorOutput(
finished_sending=finished_sending
)

self.scheduler.update_from_output(scheduler_output, model_runner_output)

def run(
self,
decoded_tokens: list[int],
4 changes: 4 additions & 0 deletions tests/v1/kv_connector/unit/utils.py
@@ -24,6 +24,7 @@
KVConnectorBase_V1,
KVConnectorMetadata,
KVConnectorRole,
KVConnectorWorkerMetadata,
)
from vllm.distributed.kv_transfer.kv_connector.v1.example_connector import ( # noqa
ExampleConnector,
@@ -249,6 +250,7 @@ def create_model_runner_output(
invalid_block_ids: set[int] | None = None,
use_eos: bool = False,
token_id: int = 0,
kv_connector_worker_meta: KVConnectorWorkerMetadata | None = None,
) -> ModelRunnerOutput:
"""Make dummy model runner output for testing."""

@@ -266,11 +268,13 @@ def create_model_runner_output(
finished_sending is None
and finished_recving is None
and invalid_block_ids is None
and kv_connector_worker_meta is None
)
else KVConnectorOutput(
finished_sending=finished_sending,
finished_recving=finished_recving,
invalid_block_ids=invalid_block_ids or set(),
kv_connector_worker_meta=kv_connector_worker_meta,
)
)

@@ -1,15 +1,60 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright contributors to the vLLM project
from dataclasses import dataclass
from dataclasses import dataclass, field

from vllm.distributed.kv_transfer.kv_connector.v1.base import KVConnectorMetadata
from vllm.distributed.kv_transfer.kv_connector.v1.base import (
KVConnectorMetadata,
KVConnectorWorkerMetadata,
)
from vllm.v1.kv_offload.worker.worker import TransferSpec

ReqId = str


@dataclass
class TransferJob:
"""A transfer job bundling request context with transfer spec.

Used for both loads and stores, keyed by scheduler-assigned job ID.
The worker reports the job ID back when the transfer finishes,
and the scheduler processes the completion.
"""

req_id: ReqId
transfer_spec: TransferSpec


@dataclass
class OffloadingConnectorMetadata(KVConnectorMetadata):
reqs_to_load: dict[ReqId, TransferSpec]
reqs_to_store: dict[ReqId, TransferSpec]
reqs_to_flush: set[str] | None = None
# Keyed by scheduler-assigned job IDs.
load_jobs: dict[int, TransferJob]
store_jobs: dict[int, TransferJob]
jobs_to_flush: set[int] | None = None


@dataclass
class OffloadingWorkerMetadata(KVConnectorWorkerMetadata):
"""Worker -> Scheduler metadata for completed transfer jobs.

Each worker reports {job_id: 1} for newly completed transfer jobs
(load or store). aggregate() sums counts across workers within a step.
The scheduler accumulates across steps and processes
a transfer completion only when count reaches num_workers.
"""

completed_jobs: dict[int, int] = field(default_factory=dict)

def mark_completed(self, job_id: int) -> None:
"""Record a transfer job completion from this worker."""
self.completed_jobs[job_id] = 1

def aggregate(
self, other: "KVConnectorWorkerMetadata"
) -> "KVConnectorWorkerMetadata":
assert isinstance(other, OffloadingWorkerMetadata)

merged = dict(self.completed_jobs)
for job_id, v in other.completed_jobs.items():
merged[job_id] = merged.get(job_id, 0) + v

return OffloadingWorkerMetadata(completed_jobs=merged)
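The contract in the docstring above — each worker reports `{job_id: 1}`, per-step reports are summed by `aggregate()`, and the scheduler fires a completion only once the accumulated count reaches `num_workers` — can be sketched with a minimal standalone reimplementation. `WorkerMeta` and `SchedulerJobTracker` are hypothetical names for illustration; the scheduler-side class is not part of this diff:

```python
from dataclasses import dataclass, field


@dataclass
class WorkerMeta:
    """Per-worker report of newly completed job IDs (mirrors the diff's shape)."""

    completed_jobs: dict[int, int] = field(default_factory=dict)

    def aggregate(self, other: "WorkerMeta") -> "WorkerMeta":
        # Sum counts across workers within one scheduler step.
        merged = dict(self.completed_jobs)
        for job_id, count in other.completed_jobs.items():
            merged[job_id] = merged.get(job_id, 0) + count
        return WorkerMeta(completed_jobs=merged)


class SchedulerJobTracker:
    """Fires a job's completion only once every worker has acked it."""

    def __init__(self, num_workers: int) -> None:
        self.num_workers = num_workers
        self._counts: dict[int, int] = {}  # accumulated across steps

    def update(self, step_meta: WorkerMeta) -> list[int]:
        done: list[int] = []
        for job_id, count in step_meta.completed_jobs.items():
            total = self._counts.get(job_id, 0) + count
            if total >= self.num_workers:
                # All workers reported in: complete the job exactly once.
                self._counts.pop(job_id, None)
                done.append(job_id)
            else:
                self._counts[job_id] = total
        return done
```

With two workers, a job acked by one worker in step 1 and the other in step 2 completes only after the second report, while two acks merged into a single step complete immediately — the same behavior the `aggregate` unit tests above pin down at the metadata level.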