
Umbrella issue: Cache invalidation & deletion #621

Closed
sandervanhooft opened this issue Sep 6, 2016 · 72 comments

Comments

@sandervanhooft

Is there a nice and easy way to set a condition against which a cached query is invalidated? Think time-to-live (TTL) or custom conditions.

For example (pseudo-code warning):

query(...).invalidateIf(fiveMinutesHavePassed())
or
query(...).invalidateIf(state.user.hasNewMessages)

forceFetch serves its purpose, but I think the cache invalidation condition should be able to live close to the query (cache) itself. That way I don't need to check manually whether a forceFetch is required when I rerender a container - the query already knows when it's outdated.
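For illustration, a minimal sketch of how such a condition could be layered on top of today's API - assuming the pre-1.0 forceFetch flag (later versions use fetchPolicy) and a placeholder MESSAGES_QUERY document; invalidateIf itself is the requested feature, not an existing option:

// Hypothetical wrapper: hit the network when a caller-supplied condition says
// the cached result should be considered stale; otherwise serve from cache.
const queryWithInvalidation = (client, options, invalidateIf) =>
  client.query({
    ...options,
    forceFetch: Boolean(invalidateIf && invalidateIf()),
  });

// Usage mirroring the pseudo-code above: invalidate after five minutes.
const fiveMinutesHavePassed = (since => () => Date.now() - since > 5 * 60 * 1000)(Date.now());
queryWithInvalidation(client, { query: MESSAGES_QUERY }, fiveMinutesHavePassed);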

@helfer
Contributor

helfer commented Sep 6, 2016

@sandervanhooft we don't have cache invalidation right now, but it's a feature we definitely want to implement in the future. See #41 which is sort of related.

@helfer helfer added the feature label Sep 6, 2016
@viridia

viridia commented Oct 22, 2016

I'd also like some way to invalidate the cached version of a query after a mutation, but for a different reason: the server-side query logic is fairly complex (especially with sorting), and trying to patch in the results of the mutation via updateQueries is simply too complicated in this case. However, I don't want to force an immediate refetch because it may be a while before I actually need the results of the query.

An example of this is an issue tracking system with custom filtering and sorting: I don't want to repeat the custom filtering and sorting logic on the client especially since it's all written in mongo query language. And it will often be the case that a bunch of issues will be updated in sequence before I need to re-query the issue database, at which time I don't mind waiting a little bit longer to get the non-optimistic query result. So ideally I would just set a dirty bit on that query and let everything happen automatically after that.

@viridia

viridia commented Oct 24, 2016

Thinking about this some more: What I'd like to see is something very much like the API for updateQueries:

client.mutate({
  mutation: SomeMutation,
  variables: { some_vars },
  invalidateQueries: {
    queryName: (previousQueryResult, { mutationResult }) => true,
  },
});

The invalidateQueries hook is called with exactly the same arguments as updateQueries; however, the return value is a boolean, where true means the query result should be removed from the cache and false means the cache should be left unchanged.

This strategy is very flexible, in that it allows the mutation code to decide on a case-by-case basis which queries should be invalidated based on both the previous query result and the result of the mutation.

(Alternatively, you could overload the existing updateQueries to handle this by returning a special sentinel value instead of an updated query result.)

Invalidating a query removes its results from the cache but does not immediately cause a refetch. Instead, a refetch will occur the next time the query is run.
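A sketch of that sentinel-value alternative; the INVALIDATE sentinel, its handling, and the shouldInvalidate predicate are hypothetical, not part of Apollo Client:

// Hypothetical: reuse updateQueries, letting a special return value mean
// "drop this query's cached result" instead of replacing it.
const INVALIDATE = Symbol('invalidate'); // placeholder sentinel, not a real API

client.mutate({
  mutation: SomeMutation,
  variables: { some_vars },
  updateQueries: {
    queryName: (previousQueryResult, { mutationResult }) =>
      shouldInvalidate(mutationResult) ? INVALIDATE : previousQueryResult,
  },
});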

@sandervanhooft
Author

@viridia,

Your solution looks interesting. Would it support Promises? In that case I could set a timer Promise which returns true after a specific period of time.

@viridia

viridia commented Oct 24, 2016

My suggestion would work for your first use case but not the second, which would require a very different approach (and may already be handled in the current framework).

Specifically, what I'm asking for is a way to invalidate the cache in response to a mutation. That's why I suggest that the new feature be part of the mutation API, since it's very similar in concept to the existing updateQueries feature.

Your second use case has to do with invalidating the cache after a time interval and is unrelated to mutations. It seems like you might be able to get what you want using Apollo's polling APIs.
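For the time-based case specifically, the existing polling option may already be enough - a minimal sketch, assuming a placeholder MESSAGES_QUERY document and render callback:

// Poll the server every five minutes; each poll refreshes the cached result,
// which effectively gives the cached data a five-minute TTL.
const observable = client.watchQuery({
  query: MESSAGES_QUERY,
  pollInterval: 5 * 60 * 1000, // milliseconds
});

const subscription = observable.subscribe({
  next: ({ data }) => render(data),
  error: err => console.error(err),
});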

@dallonf

dallonf commented Nov 21, 2016

Popping in on this... I'd like to add that I think this is very badly needed. I'm actually surprised it's not baked into the core of the framework. Without the ability to invalidate parts of the cache, it's almost impossible to cache a paginated list.

@scf4

scf4 commented Dec 1, 2016

Also surprised this isn't in the core.

@swernerx

swernerx commented Jan 31, 2017

We also need this. Our current approach is:

  • Separate view-specific query files which cherry-pick data from a large object mySalesConsultation to return just the areas of data required for rendering.
  • A mutation which kicks off quite a complex run through our backend infrastructure, tweaking the affected sales consultation (computing schedules etc. in conjunction with other objects).
  • As this process runs on a complex distributed cloud infrastructure, we are not able to return the new data set with the mutation result immediately.
  • We are also not able to use refetchQueries, as there is currently no option to postpone this call by some timeout. Our backend team says it would be far safer to wait about a second before querying the cloud infrastructure again.

We are currently doing a manual query of a fairly top-level data structure with forceFetch=true inside a setTimeout after the mutation has successfully returned (a rough sketch follows the list below). This has a few issues:

  • Our query is not able to wildcard-refetch every data entry inside our mySalesConsultation. Therefore we have to list all fragments and entries to exactly match our schema.
  • We possibly even force-fetch data which has never been loaded before.
  • We add a lot of pressure to our backend systems, as fetching the whole tree under mySalesConsultation involves asking a few different REST services behind our GraphQL backend. That would be okay if we actually required this data for rendering the current view - but unfortunately we only need a few fields updated.
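A rough sketch of the workaround described above, assuming the pre-1.0 forceFetch option and placeholder RUN_SALES_CONSULTATION / MY_SALES_CONSULTATION_QUERY documents:

client.mutate({ mutation: RUN_SALES_CONSULTATION, variables }).then(() => {
  // The backend asks us to wait roughly a second before hitting the cloud
  // infrastructure again, so postpone the refetch.
  setTimeout(() => {
    client.query({
      query: MY_SALES_CONSULTATION_QUERY, // has to list every fragment/field explicitly
      forceFetch: true, // bypass the cache entirely
    });
  }, 1000);
});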

That said, together with my colleague I just brainstormed a way which should work for our needs:

  1. Invalidate a whole tree of data under a given base path, e.g. salesConsultations/salesConsultation@myId/*
  2. Re-query all queries which are visible inside the current view (with an API option to define a timeout). As mentioned, fetching everything under the invalidated tree is not needed - we would definitely benefit from something more fine-grained.

The first actually matches the comment by @viridia; the second is something which could be useful - alternatively, it could be something the invalidation does automatically.

@swernerx

@helfer as #41 has been closed now... is this issue still on the radar?

@helfer
Contributor

helfer commented Feb 3, 2017

@swernerx yes, it's still on the radar! We're trying to simplify the core and API before we add more features, though, so this won't be in 1.0, but it could very well be in 1.1!

@lucasconstantino
Contributor

lucasconstantino commented Feb 22, 2017

I'm feeling the need for a field-based invalidation strategy. As all resources are available in normalized form under state.apollo.data (keyed by dataId), and going further with the proposal from @viridia, I believe we could reach a field-based invalidation system much like the following:

Given this schema:

type Person {
  id: ID!
  name: String!
  relative: Person
}

type Query {
  people: [Person]
}

type Mutation {
  setRelative (person: String!, relative: String!): Person
}

With a data id resolver such as:

const dataIdFromObject = ({ __typename, id }) => __typename + id

With Person type resources with ids 1, 2, and 3 found in the Apollo store as follows, given that person 2 is currently the relative of person 1:

{
  apollo: {
    ...
    data: {
      Person1: {
        id: 1,
        name: 'Lucas',
        relative: {
          type: "id"
          id: "Person2"
          generated: false
        }
      },
      Person2: {
        id: 2,
        name: 'John',
        relative: null
      },
      Person3: {
        id: 3,
        name: 'John',
        relative: null
      },
    }
  }
}

And having a query somewhere in the system such as:

query Everyone {
  people {
    id
    name
    relative {
      id
      name
    }
  }
}

I could then perform a mutation to change the relative of person 1 to person 3 and force invalidation as follows:

client.mutate({
  mutation: setRelative,
  variables: { person: 1, relative: 3 },
  invalidateFields: (previousQueryResult, { mutationResult }) => {
    return {
      'Person1': {
        'relative': true
      }
    }
  }
})

Edit: I do understand that updateQueries is a valid method for this use case, but updateQueries only fits mutations whose result brings back all the data you need to update the queries, which is almost never the case.

@dallonf

dallonf commented Feb 23, 2017

@lucasconstantino An additional caveat of updateQueries that's worth pointing out is that it only works with queries that are currently active; it won't help you if the affected data will be fetched by a future query.

@helfer
Contributor

helfer commented Feb 23, 2017

@dallonf We patched the updateQueries caveat in a recent release of react-apollo. You should give it a try!

@lucasconstantino
Contributor

lucasconstantino commented Mar 20, 2017

Ok, I've worked on this issue and ended up with a project to solve field-based cache invalidation: apollo-cache-invalidation.

This is how you would use it, following the example on my previous comment:

import { invalidateFields } from 'apollo-cache-invalidation'

client.mutate({
  mutation: setRelative,
  variables: { person: 1, relative: 3 },
  update: invalidateFields(() => [
    ['Person1', 'relative']
  ])
})

As you can see, the invalidateFields method is just a higher-order function that creates a valid update option for the client.mutate method. It receives a function, which will be called with the same arguments update gets. Instead of returning a structured object as intended in my previous comment (the keys can be dynamic), it must return an array of field paths, where each key in a path can be a string, a regex, or a function.

Further documentation can be found in the project's page.

Keep in mind this can be used for dynamic cache-key invalidation at any level, so to invalidate the relative field on all Person objects one could simply add an invalidating path such as:

invalidateFields(() => [[/^Person/, 'relative']])

If you find this useful, please share some feedback.

@helfer
Contributor

helfer commented Mar 20, 2017

@lucasconstantino I think this is really cool! It's exactly the kind of cache control we hoped to enable with the generic store read and write functions and update. A couple of things came to mind when I looked at the package:

  1. I'm okay with the name, but I think it would be best to call it apollo-cache-invalidation to be specific (or apollo-cache-updates if you want it to be a more generic library that can also contain other generators, not just for invalidation).
  2. It would be useful to tell people that it works by deleting the matching fields out of the cache. Currently this will lead to active queries on that field returning stale data and not automatically refetching, which might not be what people expect. New queries that contain the field will be forced to fetch, as expected.
  3. It's also worth noting that refetchQueries now accepts any valid query, not just the name of an active query (at least I believe it does). As such, it might actually be quite useful in cases where you want to refetch just that one nested field.

Overall this is a really great effort and I look forward to seeing similar things like it being built on top of the new imperative store API!
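To illustrate point 3: refetchQueries on mutate can take full query descriptors (a query document plus variables) rather than only names of active queries - a minimal sketch reusing the mutation above, with the RELATIVE_QUERY document as a placeholder:

client.mutate({
  mutation: setRelative,
  variables: { person: 1, relative: 3 },
  refetchQueries: [
    // a full query descriptor: refetched from the network once the mutation completes
    { query: RELATIVE_QUERY, variables: { id: 1 } },
  ],
});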

@lucasconstantino
Contributor

lucasconstantino commented Mar 20, 2017

@helfer I've renamed the project to apollo-cache-invalidation. I'll look at your other considerations in the morning ;)

Thanks for the productive feedback!

@lucasconstantino
Contributor

lucasconstantino commented Mar 20, 2017

@helfer I've looked into your second observation, and I think I found a dead end here.

Studying the QueryManager class, and specifically the queryListenerForObserver method, I've realized the way I'm doing the cache cleaning doesn't really work for current observers. This is quite odd, for I thought I had it working on a project using react-apollo. I'll look into that later, though. About the stale data being returned: I don't really understand why a refetch isn't triggered when stale data is found. In which scenarios could some data be missing after having been fulfilled before, and why would the user want that old data rather than fresh data in that case? I'm talking about lines 412-418 of the QueryManager, for context.

Testing it locally, I was able to fire a refetch from that exact spot, using storedQuery.observableQuery.refetch(), which did solve the problem for the apollo-cache-invalidation approach.

The big problem here, I think, is that field-based invalidation isn't really compatible with the current approaches Apollo Client has for cache clearing or altering. Both refetchQueries and updateQueries rely on the code doing the mutation knowing exactly which queries (and even variables) are to be updated, meaning the code needs to know quite a lot to perform cache clearing. The idea behind a field-based invalidation system is to make the mutation code aware of the structure of the store, but unaware of other code performing queries on that same store. I would like to make all ObservableQueries understand that some of their data is now invalid, but I don't see how when I only have a path into the store - nothing really related to the queries used to build the observables. Basically, I'm fighting the current design when trying to move ahead with this approach.

Well, as far as I could tell, apollo-cache-invalidation cannot on its own fix the stale data problem, meaning I would have to send a pull request to Apollo Client for at least some minor routines. What do you think? Should I proceed, or am I missing something here?

By the way: I guess learning that refetchQueries now does trigger refetches on non-active queries (supposedly) makes the apollo-cache-invalidation project a bit more specific to certain use cases.

@nosovsh

nosovsh commented Mar 22, 2017

@lucasconstantino I think I have a similar use case. I want to invalidate some queries after a mutation, and it doesn't matter whether they are active or stale. And I don't want to refetch them immediately, because there could be a lot of them and maybe the user will never open the pages that use them again. Did I understand your last message correctly that you were trying to solve a similar problem with apollo-cache-invalidation and figured out that it is not possible without changes in Apollo Client?

@lucasconstantino
Contributor

@nosovsh apollo-cache-invalidation will basically purge current data from the cache. In the current state of things, it will work as expected for new queries the user eventually performs on that removed data, but if there are any active observable queries (queries currently watching for updates) related to the removed data, those queries will not refetch - they will keep serving the same old data they had prior to the cache clearing. To solve this problem, I'm working on a pull request to apollo-client to let the user decide when data should be refetched in case something goes stale: #1461. It is still a work in progress, and I'm not sure how long it will take for something similar to go into core.

@dallonf

dallonf commented Mar 30, 2017

I'm not sure how much this really adds to the conversation, but I spent a whole lot of time typing this out in a dupe issue, so I may as well add it here. :) Here is a use case my team commonly runs into that is not well covered by the existing cache control methods and would greatly benefit from field-based cache invalidation:

Let's say I have a paginated list field in my schema:

type Query {
  widgets(limit: Int = 15, offset: Int = 0): WidgetList
}

type Widget {
  id: ID!
  name: String
}

type WidgetList {
  totalCount: Int!
  count: Int!
  limit: Int!
  offset: Int!
  data: [Widget]!
}

There is a table in the app powered by this Query.widgets field. The user can customize the page size (aka $limit) as well as paginate through the list ($offset), so there is an essentially unbounded number of possible permutations of this query. Let's also say for the sake of argument that the sorting logic of this field is complex and cannot be simulated client-side. (But even in simple sorting cases, I'm not convinced it's reasonable to do this client-side; pagination really is a difficult caching problem.)

So let's throw a simple mutation into this mix...

type Mutations {
  createWidget(name: String!): Widget
}

When I fire this mutation, there is really no telling where the new widget will be inserted into the list, given the aforementioned complex sorting logic. The only logical update I can make to the state of the store is to flag the entire widgets field as invalid and needing a refetch, or to invalidate every instance of the query, regardless of its variables.

Unless I'm missing something, there doesn't seem to be any way to handle this case in Apollo Client. refetchQueries, as well as the new imperative store manipulation functions, require you to specify variables; updateQueries of course only works on active queries, and even in react-apollo, where all queries are kept active, only one instance of the query (that is, with one set of variables) will be active at a time.
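For context, this is roughly what that paginated field looks like in the normalized cache - one ROOT_QUERY entry per distinct argument combination, which is why there is no single key to invalidate (the fieldName(args) key format is illustrative of apollo-cache-inmemory's convention):

// Illustrative excerpt of the normalized store after a few pages were viewed.
const rootQueryExcerpt = {
  'widgets({"limit":15,"offset":0})': { type: 'id', generated: true, id: '$ROOT_QUERY.widgets({"limit":15,"offset":0})' },
  'widgets({"limit":15,"offset":15})': { type: 'id', generated: true, id: '$ROOT_QUERY.widgets({"limit":15,"offset":15})' },
  'widgets({"limit":50,"offset":0})': { type: 'id', generated: true, id: '$ROOT_QUERY.widgets({"limit":50,"offset":0})' },
};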

@nosovsh

nosovsh commented Mar 30, 2017

@dallonf We have a similar use case. Plus, we cannot refetch widgets right after the mutation, because there could be a lot of such queries and maybe the user will not return to the pages that use them. They should just be marked as stale, so that if the user opens such a page again the query will be executed.

@lucasconstantino yeah, it seems active observable queries cause problems in this case. I will look deeper into what you propose in the PR.

@lucasconstantino
Contributor

lucasconstantino commented Mar 30, 2017

@dallonf you could experiment with the more sophisticated mutation option update. It gives you direct access to the cache, so you can accomplish nearly everything you would need.

Sorry to promote my project here again, but this is exactly the kind of situation I built it for: apollo-cache-invalidation. Basically, following the schema you presented, you could invalidate all your paginated results (because they truly are invalid now) at once with:

import { invalidateFields, ROOT } from 'apollo-cache-invalidation'
import gql from 'graphql-tag'

import { client } from './client' // Apollo Client instance.

const mutation = gql`
  mutation NewWidget($name: String!) {
    createWidget(name: $name) {
      id
    }
  }
`

const update = invalidateFields((proxy, result) => [
  [ROOT, /^widgets.+/]
])

client.mutate({ mutation, update, variables: { name: 'New widget name' } })

But - and here goes a big but - this currently invalidates the cache only for non-instantiated queries, meaning that if the widgets query is currently active on the page, it will keep serving stale data. I have a pull request open to address this issue.

Hope it helps.

@dallonf

dallonf commented Mar 30, 2017

@lucasconstantino Aha, thanks! That does solve my use case for now. (I had tried your module, but didn't realize the regex was necessary to capture field arguments).

@Draiken

Draiken commented Feb 22, 2018

I have a very simple scenario:

  • list some data (with pagination)
  • add some data
  • invalidate the list so the list component will query the network again

I've read several issues around here but still can't find a good way of doing this, except manually deleting the data from the store in the update method for a mutation.

Since the cache is keyed by variables, I don't know which list the data should be added to, so it's better to just invalidate all loaded lists for that field. However, there doesn't seem to be an API for this.

I can use writeQuery to ADD data, but how do I remove a whole field from the cache?
This issue is from 2016 and we still don't have an API to remove things from the cache... what can I do to change that?

@chris-guidry

@Draiken the function in my comment above deletes the whole field from the cache regardless of variables. I agree it's frustrating that there still isn't a proper solution.
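The referenced function isn't quoted in this thread, but the general approach people use looks roughly like the sketch below. It reaches into apollo-cache-inmemory's private store (cache.data.data), which is not a public API and may change between versions; as discussed above, active queries will not automatically refetch after this.

// Fragile workaround: remove every ROOT_QUERY entry for a field, whatever its
// arguments, by matching the store's 'fieldName({...args})' key format.
const deleteQueryField = (cache, fieldName) => {
  const root = cache.data.data.ROOT_QUERY; // private internals, not a public API
  Object.keys(root)
    .filter(key => key === fieldName || key.startsWith(`${fieldName}(`))
    .forEach(key => { delete root[key]; });
};

// e.g. inside a mutation's update callback:
// update: (proxy) => deleteQueryField(proxy, 'widgets')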

@dr-nafanya

A simple and rather blunt workaround for the case when you just need to invalidate the entire query cache can look like this:

const CACHE_TTL = 180000; // 3 minutes

let client;
let created;

// Use this function to get an access to the GraphQL client object
const ensureClient = () => {
  // Expire the cache once in a while
  if (!client || Date.now() - created > CACHE_TTL) {
    created = Date.now();
    client = new ApolloClient({
      link: new HttpLink({uri: `your_gql_api_endpoint_here`}),
      cache: new InMemoryCache()
    });
  }

  return client;
};

@lucasconstantino
Contributor

@dr-nafanya I think there is a method available for this: client.cache.reset()

@Draiken

Draiken commented Feb 26, 2018

I've read tons of workarounds, but that's not solving the problem.

You have to cherry-pick a bunch of workarounds (or build them yourself) for basic usage of this library. It's like it is built to work with the "to-do" app and nothing more complex than that.

We need an official implementation or at least documentation guiding users on how to deal with all of these edge cases (if you can call Cache#remove an edge case).

I'm more than willing to help on anything, but I'm afraid if I just fork and try to implement this, it will be forgotten like this issue or the 38 currently open PRs...

Right now I'm mentally cursing who chose this library for a production system 😕

@jvbianchi

Is there any up-to-date roadmap for this project?

@filipenevola

I'm using this workaround, but it is pretty bad because my client store ends up empty and I need to refetch all my queries again - but at least this way my app is always consistent.

export const mutateAndResetStore = async (client, fn) => {
  await fn();
  // TODO fixing problem with cache, but we need to have a better way
  client.resetStore();
};

mutateAndResetStore(client, () =>
    // my mutation call
    saveGroup({
...
...

we need a real solution ASAP.

@strctr

strctr commented Mar 20, 2018

I think the best way to solve the delete & cache issue is to add a GraphQL directive:

mutation DeleteAsset($input: DeleteAssetInput!) {
  deleteAsset(input: $input) {
    id @delete
    __typename # Asset
  }
}

Execution of this query should delete Asset:{id} from the cache.

@lucasconstantino
Contributor

lucasconstantino commented Mar 20, 2018

@anton-kabysh interesting. I think this might create confusion about what exactly is being removed, though: the cached information, or the entity itself.

@fabien0102

I think we "just" need a way to update the cache by __typename:id instead of by a specific query + variables 😉 So even if you have infinite variable combinations (for example a filter param), it's no longer a problem.

Something like this:

client.readCache({key: 'Post:1'}); // {data: {...}}
client.writeCache({
  key: 'Post:1',
  data: {
    title: 'oh yeah',
  },
});
client.deleteCache({key: 'Post:1'});

Note that with this solution, you can also update every cached result (and this is exactly what I wanted!)
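For what it's worth, the read/write half of this already roughly exists today via readFragment/writeFragment addressed by cache id (assuming a dataIdFromObject that produces Post:1 style ids); it's the delete that has no public counterpart:

import gql from 'graphql-tag'

const POST_TITLE = gql`
  fragment postTitle on Post {
    title
  }
`

// Read and update one normalized object, regardless of which queries fetched it.
const post = client.readFragment({ id: 'Post:1', fragment: POST_TITLE })
client.writeFragment({
  id: 'Post:1',
  fragment: POST_TITLE,
  data: { __typename: 'Post', title: 'oh yeah' },
})

// client.deleteFragment(...) does not exist - that is the missing piece.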

@yohcop

yohcop commented Mar 20, 2018

@fabien0102 With this solution, I don't think you can add a result to a query though. If I made a query like posts(fromDate: "2018-01-01", toDate: "2018-03-31"), and I create a new post with date="2018-03-20", I would like to invalidate the query. I can add it manually to the query results, but if the filters get complicated, it can be a lot of work. Invalidating the query would be much easier if I don't mind the extra requests made to refresh them.

@strctr

strctr commented Mar 20, 2018

@lucasconstantino Sure, the bare @delete isn't informative enough; it should be something like @deleteFromCache.

@fabien0102

fabien0102 commented Mar 20, 2018

@yohcop Sure, my solution only works for updating and deleting part of a query, but it's better than the current situation (and I'm also aware that this is a hard problem with no easy solution). Sometimes it is definitely easier to invalidate previous queries.

For this kind of complex query, I think a powerful solution could be the ability to query the queries.

Example to illustrate:

// my apollo store
posts(fromDate: "2018-01-01", toDate: "2018-03-31")
posts(fromDate: "2018-01-01", toDate: "2018-04-31")
posts(fromDate: "2018-01-01", toDate: "2018-05-31")

// how to update this
client.readQueries({
  query: gql`query Post($from: Date!, $to: Date!) { posts(fromDate: $from, toDate: $to) { ... } }`,
  variables: (from, to) => isAfter(from, "2018-01-01") && isBefore(to, "2018-04-31")
});

So basically like a filter: it returns an array of queries, so you can .map over them and use the classic client.writeQuery().

(That said, I've never put my hands on the Apollo code base, so I really don't know if it's possible; it's just to share ideas 😉)

@cloudlena

cloudlena commented May 21, 2018

Wouldn't changing

proxy.writeData({ id: `MyType:${id}`, data: null });

to delete the object instead of having no effect be sufficient here? For my case at least, it would be a very elegant, easy and intuitive solution.

@Tehnix

Tehnix commented May 22, 2018

CC: @smilykoch @martinjlowm worth following this umbrella issue on cache invalidation.

@Frizi

Frizi commented May 25, 2018

There is a need to automate garbage collection inside the cache. The cache presents a very limited API to the world, mostly allowing reads/writes through queries and fragments. Let's look at queries for a moment.

A query has a concept of being active. Once a query is activated and results are fetched, it normalises the response into the cache. It cannot delete that data, because other queries might end up using the same results. What a query can't do is reference other queries, so there is no way to make cycles. This makes a reference-counter-based GC viable.

Let's suppose that each underlying cache object holds a reference counter. Whenever a result is written to or materialised from the cache, the query can collect all referenced objects, hold onto them in a private Set, and increase their reference counters. Every materialisation process would fully clear and repopulate that Set while adjusting refcounts accordingly.

To prune a specific query's data and enable potential garbage collection from the cache, you adjust the refcount of all associated objects and clear that Set.

Once in a while, the cache could simply filter out all keys that have refcount 0. That eviction could easily be triggered with a few strategies:

  • a query could have a timer after which its data becomes stale
  • a query could be told programmatically to evict the cache
  • query deactivation, potentially combined with an additional timer
  • a data refetch would automatically prune all no-longer-used objects
  • etc. (any ideas?)

readFragment would have to be further constrained: it may fail if there is no query holding on to the requested object, simply because the data might already have been evicted from the cache.

The remaining issue is with writeFragment when there is no backing query to hold on to its value, as there is then no guarantee that the data will actually be persisted for any length of time. I'm not sure there is any use case other than activating a query immediately after some fragments were written, and we can easily make that scenario work.
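A minimal sketch of the bookkeeping described above, in isolation from Apollo's actual store (names and structure are mine, purely illustrative):

// Each active query retains the set of normalized-cache keys that its latest
// result touches; entries whose refcount is zero become collectable.
class RefCountedStore {
  constructor() {
    this.entries = new Map();   // cacheKey -> normalized data
    this.refCounts = new Map(); // cacheKey -> number of retaining queries
  }
  write(key, value) {
    this.entries.set(key, value);
    if (!this.refCounts.has(key)) this.refCounts.set(key, 0);
  }
  retain(keys) {
    keys.forEach(k => this.refCounts.set(k, (this.refCounts.get(k) || 0) + 1));
  }
  release(keys) {
    keys.forEach(k => this.refCounts.set(k, Math.max(0, (this.refCounts.get(k) || 0) - 1)));
  }
  gc() {
    for (const [key, count] of this.refCounts) {
      if (count === 0) {
        this.entries.delete(key);
        this.refCounts.delete(key);
      }
    }
  }
}

A query's materialisation step would call release(previousKeys) followed by retain(newKeys), and the eviction strategies listed above decide when gc() runs.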

@wesbos

wesbos commented Jun 1, 2018

I've talked with @stubailo about being able to write undefined to invalidate queries/data - he says it should work, but it doesn't, so this might be something that can be added as a good interim solution.

@dallonf

dallonf commented Jun 1, 2018

Writing undefined to a fragment or query wouldn't solve a lot of the use cases that are difficult today, particularly when you need to clear the cache for an entire field regardless of arguments (ex. both widgets(sortBy: NAME) and widgets(sortBy: PRICE))

@Frizi

Frizi commented Jun 1, 2018

I believe that #3394 might help a great deal with solving this issue. It's basically a ref-count system: once cache entries register their dependent queries, the cache can be pruned of every entry that doesn't have any dependents.

@riccoski

riccoski commented Jun 8, 2018

Here's my workaround; I would love something like this built in, of course done better so I don't need to set manual IDs: https://gist.github.com/riccoski/224bfaa911fb8854bb19e0a609748e34

The function stores reference IDs in the cache along with a timestamp, then checks against them to determine the fetch policy.
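The linked gist isn't inlined here, but the general idea can be sketched like this - track when each logical query was last fetched over the network and pick the fetchPolicy accordingly (the lastFetched map, TTL, and WIDGETS_QUERY are assumptions, not taken from the gist):

const lastFetched = new Map(); // logical query key -> timestamp of last network fetch
const TTL = 5 * 60 * 1000;

const fetchPolicyFor = key =>
  Date.now() - (lastFetched.get(key) || 0) > TTL ? 'network-only' : 'cache-first';

// Usage: go to the network only when the cached data is older than the TTL.
const policy = fetchPolicyFor('widgets');
client.query({ query: WIDGETS_QUERY, fetchPolicy: policy }).then(result => {
  if (policy === 'network-only') lastFetched.set('widgets', Date.now());
  return result;
});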

@gastonmorixe

gastonmorixe commented Jul 7, 2018

Why is it so hard to write a deleteFragment function and broadcast updates to all queries subscribed to it?
I have a chat app where a user wants to delete a single message - that sounds like a really common scenario. I don't want to refetch all messages or update the query by hand; I just want to find the message fragment by id and delete it.

@Elijen

Elijen commented Jul 11, 2018

This really needs to get resolved. It's not even an edge case or rare use scenario. Every single CRUD app will need to deal with this issue.

@dallonf

dallonf commented Jul 11, 2018

Let me try making a proposal for a solution - if the maintainers are OK with it, I or someone else could work on a PR to implement it in apollo-cache-inmemory.

Originally, I wanted to start with the existing evict function, but I don't think it'll work without breaking changes, so I may as well call it something different.

Let's call it deleteQuery and deleteFragment, to mirror the existing read/writeQuery/Fragment functions. I'll just start with deleteQuery and assume deleteFragment works mostly the same way:

public deleteQuery<TVariables = any>(
  options: DataProxy.Query<TVariables>,
): { success: boolean }

You could use it like this, after adding a widget, for example:

const CACHE_CLEAR_QUERY = gql`
  query ClearWidgets($gizmo: ID!, $page: Int!, $limit: Int, $search: String) {
    gizmoById(id: $gizmo) {
      widgets(page: $page, search: $search, limit: $limit)
    }
  }
`;

proxy.deleteQuery({
  query: CACHE_CLEAR_QUERY,
  variables: {
    page: () => true, // clear all pages
    // only clear search queries that could match the widget we just added
    search: value => !value || newWidget.name.indexOf(value) !== -1,
    gizmo: newWidget.gizmo.id,
  },
});

A couple of important notes here:

  • This is not a complete GraphQL query; widgets returns a WidgetList, but we haven't provided any subfields. This tells Apollo to wipe out the entire entry of Gizmo1.widgets rather than just a specific subfield
  • Variable values can now be functions of type (input: any) => boolean (this only works for deleteQuery/deleteFragment, of course). The best way to walk through how this works is an example: if Apollo goes into the cache and sees a value cached as Gizmo1.widgets({"page":0,"search":"hello"}), it will call the functions with page(0) and search("hello"). Variables can also be provided as normal literals; gizmo: "15" is equivalent to gizmo: value => value === "15". If all variables match the field in the cache, the field is matched and removed.

After items have been removed from the cache in this way, any currently active queries that are displaying this data will automatically and immediately refetch.

The part of this I'm least certain about is the ability for a query to be "incomplete" and not specify subfields - some feature needs to exist so that you can clear an entire array instead of, say, just the id and name fields of every entry in the array, but this particular solution would break a lot of tooling that tries to parse GraphQL queries against the schema.

@vegetablesalad

vegetablesalad commented Jul 16, 2018

I've spent a whole day trying to figure out how to delete something from my cache/store. Is there a solution for this? I have finished 90% of my app with Apollo and this hit me right in the face. There really is no way to delete something?

@hwillson
Member

To provide a clearer separation between feature requests / discussions and bugs, and to help clean up the feature request / discussion backlog, Apollo Client feature requests / discussions are now being managed under the https://github.com/apollographql/apollo-feature-requests repository.

This issue has been migrated to: apollographql/apollo-feature-requests#4

@apollographql apollographql locked and limited conversation to collaborators Jul 27, 2018