[index management] Refactor code to allow emitting multiple index lists #249120
Conversation
💔 Build Failed
```js
loadIndices$.subscribe((indices) => {
  store.dispatch(loadIndicesSuccess({ indices }));
});
```
I'm not sure if this is the right place to park this.
Maybe I should pass the loadIndices function via the AppContextProvider.
There's also a reloadIndices function somewhere in there that I should examine.
```js
},
loadIndices: () => {
  dispatch(loadIndices());
},
```
Previously this was firing off the request and then getting the data back. I need something that could respond to multiple data sets being passed.
> Previously this was firing off the request and then getting the data back. I need something that could respond to multiple data sets being passed.
Totally get the intent, but we don't need RxJS here to achieve this. Redux already provides the “emit + subscribe” mechanism: each dispatch() is an emit, and the UI is already subscribed via react-redux (connect/selectors). So “multiple data sets” maps naturally to “dispatch multiple times as each chunk arrives”.
Here's an async/await shape (pseudo-code — names like currentRequestId / indicesEnrichmentSuccess are illustrative; you'd need to add the corresponding state/actions if you want this exact pattern) that probably matches what you're aiming for:
```js
import { v4 as uuidv4 } from 'uuid';

export const loadIndices = () => async (dispatch, getState, { api }) => {
  const requestId = uuidv4();
  dispatch(loadIndicesStart({ requestId }));

  // 1) base list first (renders quickly)
  try {
    const base = await api.getIndicesBase();
    if (getState().indices.currentRequestId !== requestId) return; // ignore stale
    dispatch(loadIndicesSuccess({ requestId, indices: base }));
  } catch (error) {
    if (getState().indices.currentRequestId !== requestId) return; // ignore stale
    dispatch(loadIndicesError({ requestId, error }));
    return;
  }

  // 2) slow/optional enrichers in parallel (best-effort, progressive)
  const run = async (name, fn) => {
    try {
      const patch = await fn();
      if (getState().indices.currentRequestId !== requestId) return; // ignore stale
      dispatch(indicesEnrichmentSuccess({ requestId, name, patch }));
    } catch (error) {
      if (getState().indices.currentRequestId !== requestId) return; // ignore stale
      dispatch(indicesEnrichmentError({ requestId, name, error }));
    }
  };

  void run('stats', () => api.getIndicesStats());
  void run('other', () => api.getOtherEnrichment());
  // ...repeat for the other calls
};
```

The key idea is just: dispatch base ASAP, then dispatch progressive updates as each slow call returns, and record enrich failures without breaking the base list.
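On the reducer side, progressive dispatches only help if stale updates are rejected and enrichment patches merge into (rather than replace) the base list. A minimal sketch, assuming the hypothetical action names and `currentRequestId` field from the thunk above (plain JS, with string action types so it runs standalone):

```javascript
const initialState = {
  currentRequestId: null,
  isLoading: false,
  byName: {}, // index name -> merged row data
};

function indicesReducer(state = initialState, action = {}) {
  switch (action.type) {
    case 'loadIndicesStart':
      // A new request invalidates everything still in flight.
      return { ...initialState, currentRequestId: action.requestId, isLoading: true };
    case 'loadIndicesSuccess': {
      if (action.requestId !== state.currentRequestId) return state; // stale
      const byName = {};
      for (const index of action.indices) byName[index.name] = index;
      return { ...state, isLoading: false, byName };
    }
    case 'indicesEnrichmentSuccess': {
      if (action.requestId !== state.currentRequestId) return state; // stale
      // Merge the enricher's patch into existing rows without dropping them.
      const byName = { ...state.byName };
      for (const [name, patch] of Object.entries(action.patch)) {
        if (byName[name]) byName[name] = { ...byName[name], ...patch };
      }
      return { ...state, byName };
    }
    default:
      return state;
  }
}
```

With this shape, a late `indicesEnrichmentSuccess` from a cancelled/replaced request is silently dropped, and each successful enricher just re-renders the already-visible table with richer rows.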
```diff
@@ -142,9 +143,12 @@ export async function updateDSFailureStore(
 export async function loadIndices() {
```
While this function currently passes a single response to the Subject, the plan is to integrate code that would call a number of endpoints and mush the data together and return it.
```diff
 export async function loadIndices() {
   const response = await httpService.httpClient.get<any>(`${API_BASE_PATH}/indices`);
-  return response.data ? response.data : response;
+  internalLoadIndicesSubject.next(response.data);
```
Perhaps this could update the store directly instead of using RxJS
I’d keep services/api.ts free of direct store dispatching; better to orchestrate progressive loads in a thunk and let Redux dispatches be the “emissions”.
I totally get what you’re aiming for (progressive loading + resilience when some calls are slow/fail). The part I’d push back on is using an RxJS Subject to drive it; I think it’s a net-negative here.

What I’d do instead (and why I believe it’s better): orchestrate the progressive load in a thunk, dispatch the base list as soon as it arrives, and treat each subsequent dispatch as the “emission” — the sketch above shows the shape.

Also: #246276 already moves in the right direction by splitting the expensive work into separate endpoints.
```js
import { IndexTablePagination, PAGE_SIZE_OPTIONS } from './index_table_pagination';
import { DocCountCell } from './doc_count';
import { docCountApi } from './get_doc_count';
import { loadIndices } from '../../../../services/api';
```
Pulling loadIndices from services/api here means this component is directly triggering the fetch/emission path, instead of going through the existing Redux async flow. In this PR, store/actions/load_indices.js only defines loadIndicesStart/loadIndicesSuccess/loadIndicesError, so indicesLoading/indicesError won't naturally reflect retries/failures when callers invoke services/api.loadIndices() directly.
I think it’ll be easier to keep the progressive behavior and keep React/Redux happy if IndexTable calls a connected prop (e.g. props.loadIndices() from index_table.container.js), and that entrypoint dispatches start/error and kicks off the progressive load.
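To make that concrete, here’s a rough sketch (all names hypothetical — `loadIndicesThunk`, the action type string, the container shape are illustrative, not this PR’s actual code) of a connected entrypoint, so the component never imports services/api directly:

```javascript
// Hypothetical thunk — same shape as the progressive loader sketched earlier,
// reduced to a stub here so the wiring is self-contained.
const loadIndicesThunk = () => (dispatch) => {
  dispatch({ type: 'loadIndicesStart' });
  // ...kick off the base fetch + enrichers, dispatching as each resolves
};

// What index_table.container.js could pass down: the component only ever
// calls props.loadIndices(), and the Redux flow owns start/error bookkeeping.
const mapDispatchToProps = (dispatch) => ({
  loadIndices: () => dispatch(loadIndicesThunk()),
});
```

The payoff is that `indicesLoading`/`indicesError` stay accurate for every caller, because there is exactly one entrypoint and it always dispatches the lifecycle actions.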
yeah, I think I need to learn the Redux way of doing things. This is my first time using it in quite a while, and this isn't a simple use case, so it's not immediately clear how to approach it. Sorting my thoughts out in another PR where I'm only concerned with data loading is helpful, so I can get a better idea of the specific needs I have for working with the state system.
Summary
Since we need to load field list data from up to 6 API calls and need to keep the UI resilient and responsive, I'd like to emit a new index list a) after the initial get-index-list call loads and b) after any of the other calls append data to it.
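A minimal standalone model of that emission sequence (names hypothetical; no Redux or RxJS involved) — one emission for the base list, then one more each time an enricher successfully appends data, with enricher failures swallowed so the base list always renders:

```javascript
// fetchBase: () => Promise<row[]>; enrichers: Array<() => Promise<patchByName>>;
// emit: called with a fresh list on every update (illustrative signatures).
async function progressiveLoad(fetchBase, enrichers, emit) {
  let list = await fetchBase();
  emit(list); // a) initial index list, rendered as soon as possible

  await Promise.all(
    enrichers.map(async (enrich) => {
      try {
        const patch = await enrich();
        // Merge each enricher's per-index patch into a new list.
        list = list.map((row) => ({ ...row, ...(patch[row.name] || {}) }));
        emit(list); // b) re-emit after this call appends its data
      } catch (e) {
        // best-effort: a failed enricher never blocks the base list
      }
    })
  );
}
```

With 1 base call and up to 5 enrichers, the UI sees between 1 and 6 emissions depending on how many enrichers succeed.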
Checklist
Check the PR satisfies following conditions.
Reviewers should verify this PR satisfies this list as well.
- `release_note:breaking` label should be applied in these situations.
- `release_note:*` label is applied per the guidelines.
- `backport:*` labels.

Identify risks
Does this PR introduce any risks? For example, consider risks like hard to test bugs, performance regression, potential of data loss.
Describe the risk, its severity, and mitigation for each identified risk. Invite stakeholders and evaluate how to proceed before merging.