support multiple per-row updates in a single mutation #2768
@0x777 As I see it, for the simpler API, we could also use …
I would enjoy this feature.
I would also really like this feature. Currently I have to update multiple records individually and this would be so much easier and more efficient.
Would love this too! For now I'm using upsert although it's not highly recommended in the docs. What are the drawbacks of doing that until a multi-update feature is here?
Currently, I use multiple mutations in one GraphQL operation to achieve this, as Hasura allows multiple mutations inside one request, and all of them execute in one transaction.
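As a hedged illustration of that workaround (the `users` and `posts` tables and their columns are assumptions, not from this thread):

```graphql
mutation updateSeveralTables {
  # Both root fields run inside the same transaction.
  update_users(
    where: { id: { _eq: 1 } }
    _set: { first_name: "test" }
  ) {
    affected_rows
  }
  update_posts(
    where: { author_id: { _eq: 1 } }
    _set: { published: false }
  ) {
    affected_rows
  }
}
```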
Is there an update on this at all? Also, I am considering going the upsert way like @marcfalk has and was wondering if there are any drawbacks as well. Would love some thoughts on this. @tirumaraiselvan
The PK sequence may end up with gaps unless you use UUIDs, for example.
Anything other than that? And does this have any bad effects apart from the fact that there are gaps?
+1
For anyone struggling with this, I ended up using the upsert mutation for this due to the lack of a response, and it works perfectly.
Is this going to be implemented? I don't see why cron/scheduled jobs have higher priority than multiple per-row updates and multiple auth roles.
@praveenweb This is an important feature. You must assign this to someone. Thanks
Would love to see this too 👍
@praveenweb Any updates?
@revskill10 How can I dynamically add multiple mutations to a single operation, each with different variable values? How can I make use of aliases dynamically?
@hafiztahajamil No, you can't. You have to embed the values inside the mutations instead of using query variables. It's fast.
@revskill10 Can you please give an example of how to do it?
@hafiztahajamil For example, I want to run:

```graphql
mutation {
  update_users_1(objects: $users1_objects) { affected_rows }
  update_users_2(objects: $users2_objects) { affected_rows }
  update_users_3(objects: $users3_objects) { affected_rows }
  update_users_4(objects: $users4_objects) { affected_rows }
}
```

In the above mutation, I generated each `usersX_objects` as `[{ first_name: 'test' }]`.
Any updates on this? Currently most of my updates have to fall back to upserts, and that is something I'd rather not use.
Updating multiple rows with different PKs by calling the API multiple times sounds really bad. Hope this will be implemented soon. This should be a core feature.
Please provide an update on this feature! Although the upsert alternative is a possibility, the documentation specifically suggests otherwise: "For an upsert, all columns that are necessary for an insert are required." I have a use case where I want to update JSON fields across multiple ecommerce products at once, and doing it by running a mutation with hundreds of individual updates or serially calling the API is not ideal. Upsert is a possibility, but as stated above from the docs, it requires all columns necessary for an insert, so it doesn't work well if there are not-null constraints to consider (or it requires a lot of additional and unnecessary information to be sent with each mutation).
Wow! First posted on 23 Aug 2019 and still not a feature! This is exactly what I was/am looking for too; I won't hold my breath! :p Upsert really isn't an option, as all fields are required, which would mean other data being wiped. Surely this should be high up the list of features to add! :/ Edit: Looks like the following might be a viable, if not quite perfect, solution: just add an alias to each update request, as sketched below.
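A hedged completion of that aliased approach, assuming a hypothetical `users` table; the aliases (`update1`, `update2`) let the same update field appear more than once in one mutation:

```graphql
mutation myUpdates {
  update1: update_users(
    where: { id: { _eq: 1 } }
    _set: { name: "Hello" }
  ) {
    affected_rows
  }
  update2: update_users(
    where: { id: { _eq: 2 } }
    _set: { name: "World" }
  ) {
    affected_rows
  }
}
```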
I also feel this should have higher priority; I've had multiple projects where this exact feature would have been useful. I mean, it is quite a common case in any project to update multiple rows at once. @0x777 any idea if this will progress, or is this abandoned?
Since upsert can't work against partial unique indexes, this is causing issues with our ETL. I'd hate to have to go around Hasura and talk directly to Postgres, which up until now we've never had to do.
Hi, I'm also in need of this; I will likely go with multiple mutations within one call for now to get around the limitation. But this should be a built-in feature.
Please add this feature!
Hey folks, we will be picking this up soon. Can you share your use cases here? It'll be really helpful in designing the API.
@0x777 here's our use case. Say a list of fields needs to be updated by PK; right now we do this:
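For illustration, a sketch of how such per-PK updates commonly look today, assuming a hypothetical `users` table and Hasura's generated `update_users_by_pk` field:

```graphql
mutation updateByPkOneByOne {
  # One aliased call per row; the whole operation runs as a single transaction.
  u1: update_users_by_pk(
    pk_columns: { id: 1 }
    _set: { name: "Hello" }
  ) {
    id
  }
  u2: update_users_by_pk(
    pk_columns: { id: 2 }
    _set: { name: "World" }
  ) {
    id
  }
}
```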
Ideally, we'll have something like:
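One possible shape for such an API, with hypothetical field and argument names:

```graphql
mutation bulkUpdateByPk {
  # Hypothetical: one list argument, one entry per row, keyed by primary key.
  update_users_many(
    updates: [
      { pk_columns: { id: 1 }, _set: { name: "Hello" } },
      { pk_columns: { id: 2 }, _set: { name: "World" } }
    ]
  ) {
    affected_rows
  }
}
```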
So effectively like inserting many, but with a way to tell what you update. Thanks for taking this on. It'll help DX a lot and make mutations safer! (Right now we construct that mutation on the fly.)
We are also updating a big list of objects. I would prefer something like the sketch below, where the parameter is the whole list of per-row updates.
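Along those lines, a sketch with the list passed as a single variable (the type and field names are hypothetical):

```graphql
# Variables (sent as JSON alongside the operation):
# {
#   "updates": [
#     { "where": { "id": { "_eq": 1 } }, "_set": { "name": "Hello" } },
#     { "where": { "id": { "_eq": 2 } }, "_set": { "name": "World" } }
#   ]
# }
mutation bulkUpdate($updates: [users_updates!]!) {
  # The whole batch arrives as one variable, so its length can vary per call.
  update_users_many(updates: $updates) {
    affected_rows
  }
}
```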
Exactly! None of the solutions above take into account the possibility of having a variable number of updates required.
@0x777 I like the idea of having an `updates` list argument. This at least allows simple syntax for multiple updates within a single transaction. We can easily create the list of update objects dynamically. In terms of K.I.S.S., this would be a good first iteration at the least, IMO; we can worry about overlapping in the next iteration (with a caveat in the documentation). Off the top of my head, if the queries are run in a transaction, the overlapping should not be significant, as the order of updates will be maintained.
We've gone through a few internal design iterations on this, and I'm here to report what we've landed on and ask for feedback.

**What I'm working on**

I am currently working on implementing the version initially suggested by @0x777, which is essentially a multi-record update by primary key. Internally, we will make sure the keys don't repeat. If they do, we will use the last value (since it's a list of updates, we'll just pick the one that is closest to the list's end). This will get translated into a single Postgres UPDATE statement. This is important, because it means we get the best possible performance.

**What were the designs we considered**

One key technical fact is that Postgres will NOT update a row twice in the same statement. For example, say we wanted to allow this query:

```graphql
mutation {
  update_user_many(
    updates: [
      { where: { id: { _gt: 1 } }, _set: { name: "hello" } },
      { where: { id: { _eq: 2 } }, _set: { description: "world" } }
    ]
  ) {
    affected_rows
  }
}
```

If we have a record with `id = 2`, it matches both updates, but Postgres will only apply one of them within a single UPDATE statement.

In the case of updates by primary key, it's fairly easy to detect when there's an overlap. However, as soon as we add more operators and other columns into the mix, the problem becomes incredibly complex (oftentimes not solvable). This means we end up with two options:

1. Restrict the API (for example, to updates by primary key), so overlaps can be detected and resolved and everything can still be translated into a single UPDATE statement.
2. Allow arbitrary `where` conditions and give up the single-statement translation, at a cost to performance.
**What about RETURNING?**

In the primary-key update version, returning is relatively simple to do and shouldn't surprise anyone. However, the general `where` version is trickier: returning columns for the affected rows of each operation can also be a bit tricky and impair performance.

**Conclusion**

So, in conclusion, we're going for the solution that keeps the API simple (updates by primary key), translates to a single UPDATE statement for the best possible performance, and resolves repeated keys predictably (the last update for a given key wins).
At the same time, we're wondering: how important is having a generic variant that accepts arbitrary `where` conditions to you?
Hello! I have just come across this thread; looks like great timing. Given the simplicity of option 1, the complexity of option 2, and the significant improvement this update will mean for a lot of developers, I think your conclusion is the right one for the product. In the meantime, developers looking for an all-in-one solution can simply run a prior query that returns the IDs they are looking to update.
I'm happy to announce this feature has been merged: 84366c9. You should be able to use it in the next release! Over these past couple of weeks, we've iterated a few times on the solutions and ended up being able to provide a few more features than originally anticipated. You can read about it in the commit's CHANGELOG. Essentially, this feature creates a new mutation field named `update_<table>_many`. We're excited to hear back from you and get feedback on this new feature! Let us know how you end up using it.
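For concreteness, a minimal sketch of how usage might look, assuming a hypothetical `users` table:

```graphql
mutation {
  # Each entry in `updates` carries its own filter and its own new values.
  update_users_many(
    updates: [
      { where: { id: { _eq: 1 } }, _set: { name: "Hello" } },
      { where: { id: { _eq: 2 } }, _set: { name: "World" } }
    ]
  ) {
    affected_rows
  }
}
```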
@eviefp This is a deal breaker for Hasura. Cheers for the launch.
Hey @revskill10, do you care to elaborate? What exactly is the deal-breaker part?
I'd just like to say an extreme thank you to the devs working on this feature over the last 3 years :D I love you, my wife loves you, my wife's wife loves you!!! Everybody loves you!
This is very cool; thank you for this feature. I will definitely use it and praise the developers who wrote this code. But I still hope that at some point you change your position on complex `where` clauses in the multi-updates. If we want to shoot ourselves in the foot and run conflicting mutations in a single call, please let us. Or maybe have a flag to run them sequentially and not as a single transaction? For now we can do a preflight call to resolve the IDs, or maybe have an Apollo preprocessor in front of Hasura that does it for us.
Hi @eviefp, it's almost the same as Prisma's transaction feature, but with one more difference: in a Prisma transaction, you can mix and match both queries and mutations.
We actually did change that! Right now we do allow arbitrary `where` clauses: the updates run one after another, in the order given, inside a single transaction, as sketched below.
If there's user interest, we could definitely add a flag/option to allow running outside a transaction scope, ignoring errors.
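A hedged sketch of those sequential semantics, again assuming a hypothetical `users` table; for a row with `id = 2`, both entries apply, in order:

```graphql
mutation {
  update_users_many(
    updates: [
      # Runs first: matches every row with id > 1.
      { where: { id: { _gt: 1 } }, _set: { name: "hello" } },
      # Runs second: for id = 2, this applies on top of the first update.
      { where: { id: { _eq: 2 } }, _set: { description: "world" } }
    ]
  ) {
    affected_rows
  }
}
```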
@eviefp Oh great! Are you also keeping your original by-primary-key design?
@lxblvs We gave that up in favor of this current iteration. However, if we get enough requests, we can definitely prioritize the original by-primary-key variant.
Just so that we deeply understand the problem, can you talk through your use case in a little more detail? I.e., when you say "bulk", are you talking 10 rows, 100 rows, or 1000 rows? And does the new solution that @eviefp outlined above prohibit you from accomplishing that goal, or is it instead that you can accomplish the goal, it's just not as fast?
Currently we allow updating multiple rows (through `where`) but all of them will get the same update. We need to add support for cases where the updates are different for each row, say you want to set `name` to `Hello` for the row with `id=1` and to `World` when `id=2`. Something like this, maybe (see the sketch after the notes)?

Notes:

- What happens when there are overlapping `where` conditions? What would `affected_rows` and `returning` return?
- The `updates` argument would need a bit of boilerplate. Maybe we can simplify the API to just use the primary key/unique constraints?
- We can probably use `update .. from` as suggested here: https://stackoverflow.com/a/18799497
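A hedged sketch of the kind of API this issue text is gesturing at (hypothetical names, close to what eventually shipped as `update_<table>_many`):

```graphql
mutation {
  # One per-row entry each for id = 1 and id = 2, with different new values.
  update_users_many(
    updates: [
      { where: { id: { _eq: 1 } }, _set: { name: "Hello" } },
      { where: { id: { _eq: 2 } }, _set: { name: "World" } }
    ]
  ) {
    affected_rows
    returning {
      id
      name
    }
  }
}
```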