
Conversation

@captainzmc
Member

What changes were proposed in this pull request?

long quotaInBytes is a new field in bucketArgs. When an old cluster is upgraded, this field defaults to 0 on pre-existing buckets, and at that point data writes to those buckets are rejected.

This is similar to creating a bucket without specifying quotaInBytes, where quotaInBytes is set to -1 by default via getQuotaValue. We can treat 0 as a special value in the same way.
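As a rough illustration of that defaulting idea (a minimal sketch; the class wrapper, constant name, and exact getQuotaValue signature are assumptions, not the actual Ozone code):

```java
// Hypothetical sketch: map the proto default 0 seen on upgraded buckets
// to QUOTA_RESET (-1), i.e. "quota not set".
public class GetQuotaValueSketch {
  static final long QUOTA_RESET = -1;

  static long getQuotaValue(long quotaInBytes) {
    // 0 can only appear as a proto default on old buckets, never as a
    // user-supplied quota, so treat it as "quota not enabled".
    return quotaInBytes == 0 ? QUOTA_RESET : quotaInBytes;
  }

  public static void main(String[] args) {
    System.out.println(getQuotaValue(0));    // -1: old bucket, quota disabled
    System.out.println(getQuotaValue(1024)); // 1024: explicit quota kept
  }
}
```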


What is the link to the Apache JIRA

https://issues.apache.org/jira/browse/HDDS-4562

How was this patch tested?

Unit tests added.

@captainzmc captainzmc requested a review from ChenSammi December 9, 2020 03:20
Contributor

@adoroszlai adoroszlai left a comment

Thanks @captainzmc for fixing this. I reproduced the problem and verified the fix using the upgrade compose environment; the old bucket is now writable.

Contributor

@bharatviswa504 bharatviswa504 left a comment

Hi @captainzmc
A few questions:

  1. Can we make the proto default value -1 for quotaInBytes?
  2. So on old buckets, we cannot set quota, as we don't have any info on usedBytes/namespace count; or if it can be set, how will this be handled?
  3. And how are upgrades handled for the quota feature overall, e.g. for older volumes and the buckets under them?

@bharatviswa504
Contributor

Not related to this PR, but can you also provide info on how clear space quota works and its usage?

Because I see that when a bucket clear space quota request comes in, we reset the bucket quota to -1, but what will happen to the volume quota? Will it reclaim the bucket quota? Any information on clear space quota usage and how it works would be helpful.

@adoroszlai
Contributor

@captainzmc see #1691 for acceptance test with two lines that are currently disabled to verify this fix. If you could review that PR, then uncommenting them in this PR would improve test coverage.

@captainzmc
Member Author

captainzmc commented Dec 13, 2020

> Hi @captainzmc
> A few questions:
>
> 1. Can we make the proto default value -1 for quotaInBytes?
> 2. So on old buckets, we cannot set quota, as we don't have any info on usedBytes/namespace count; or if it can be set, how will this be handled?
> 3. And how are upgrades handled for the quota feature overall, e.g. for older volumes and the buckets under them?

Thanks for @bharatviswa504's advice.

  1. Currently, an unsigned field can't have a negative default value in proto; if we set [default = -1], it fails to compile.
     For now, we use getQuotaValue(long quota) to handle the default case in RpcClient.
  2. After adding usedBytes, the keys that already existed in an old bucket cannot be counted, so we can only count newly written keys. Therefore, we temporarily do not recommend enabling quota on old buckets (because their usedBytes would be inaccurate).
  3. For old buckets we do not recommend enabling quota, but in order not to affect writes, this PR handles the default value in checkBucketQuotaInBytes (see the sketch below).
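A minimal standalone sketch of that write-path check, assuming illustrative constants and a simplified Bucket type (not Ozone's actual classes):

```java
public class QuotaCheckSketch {
  static final long QUOTA_RESET = -1; // quota explicitly unset/cleared

  static class Bucket {
    final long quotaInBytes;
    final long usedBytes;
    Bucket(long quotaInBytes, long usedBytes) {
      this.quotaInBytes = quotaInBytes;
      this.usedBytes = usedBytes;
    }
  }

  // Returns true if a write of allocatedBytes is allowed.
  static boolean checkBucketQuotaInBytes(Bucket bucket, long allocatedBytes) {
    // Any non-positive quota (reset, or the default carried by buckets
    // created before the quota feature) means quota is not enabled,
    // so writes are never rejected for such buckets.
    if (bucket.quotaInBytes <= 0) {
      return true;
    }
    return bucket.usedBytes + allocatedBytes <= bucket.quotaInBytes;
  }

  public static void main(String[] args) {
    System.out.println(checkBucketQuotaInBytes(new Bucket(0, 0), 1024));  // true: old bucket
    System.out.println(checkBucketQuotaInBytes(new Bucket(100, 90), 20)); // false: over quota
  }
}
```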

@captainzmc
Member Author

captainzmc commented Dec 13, 2020

> Not related to this PR, but can you also provide info on how clear space quota works and its usage?
>
> Because I see that when a bucket clear space quota request comes in, we reset the bucket quota to -1, but what will happen to the volume quota? Will it reclaim the bucket quota? Any information on clear space quota usage and how it works would be helpful.

Yes, I have documented the latest quota usage. It can be seen here:
https://ci-hadoop.apache.org/view/Hadoop%20Ozone/job/ozone-doc-master/lastSuccessfulBuild/artifact/hadoop-hdds/docs/public/feature/quota.html
Bucket quota can be set separately without enabling volume quota, so if a volume's quota is cleared, its buckets are not affected. However, if the volume quota is set to another value, it cannot be less than the total quota of all its buckets.
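A hedged sketch of that last rule (the method and constants are illustrative; the real validation lives in the OM volume set-quota path):

```java
import java.util.List;

public class VolumeQuotaSketch {
  // Reject a new volume quota smaller than the sum of enabled bucket quotas.
  static void validateVolumeQuota(long newVolumeQuota, List<Long> bucketQuotas) {
    long totalBucketQuota = 0;
    for (long q : bucketQuotas) {
      if (q > 0) {           // only buckets with quota enabled count
        totalBucketQuota += q;
      }
    }
    if (newVolumeQuota > 0 && newVolumeQuota < totalBucketQuota) {
      throw new IllegalArgumentException(
          "Volume quota (" + newVolumeQuota + ") cannot be less than the "
              + "total quota of its buckets (" + totalBucketQuota + ")");
    }
  }

  public static void main(String[] args) {
    validateVolumeQuota(100, List.of(50L, 50L));      // ok
    try {
      validateVolumeQuota(80, List.of(50L, 50L));     // rejected
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```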

@captainzmc
Member Author

> @captainzmc see #1691 for acceptance test with two lines that are currently disabled to verify this fix. If you could review that PR, then uncommenting them in this PR would improve test coverage.

Thanks for @adoroszlai's suggestion; I will deal with the acceptance test here.

@captainzmc captainzmc force-pushed the HDDS-4562 branch 2 times, most recently from 7833495 to 265bf1d on December 14, 2020 13:06
@bharatviswa504
Contributor

bharatviswa504 commented Dec 14, 2020

> https://ci-hadoop.apache.org/view/Hadoop%20Ozone/job/ozone-doc-master/lastSuccessfulBuild/artifact/hadoop-hdds/docs/public/feature/quota.html

  1. When clearing a volume quota (assuming the volume quota was set before), is it just like setting the quota on the volume to -1?
  2. When the volume quota is set and a bucket's quota is cleared, it is like setting the quota for that bucket to -1, and new buckets can still be created if volume quota is still available (as the previous bucket's quota was cleared)?

When the volume quota is not set and a bucket quota is set, clearing the bucket quota just resets the bucket's quota to -1.

Is my understanding correct here?

Example:
V1 - 100 MB
V1/B1 - 50 MB
V1/B2 - 50 MB

Now if we clear the quota on V1/B2, we set the quota of V1/B2 to -1; can a new bucket still be created in this volume?

A new bucket is created with quota 50 MB (V1/B3).

So in this volume there are 3 buckets, but only 2 are counted, because clearQuota was run on V1/B2 and that bucket is not considered in quota calculations. That bucket is not counted under the quota, yet for the volume we have crossed its quota (as the user has run clear quota on V1/B2).

So, does clear quota here mean resetting the quota to -1? Or what is the real purpose of clearQuota on clusters; in what scenarios will it be useful?

> For old buckets, we do not recommend enabling quota, but in order not to affect writes, this PR handles the default value in checkBucketQuotaInBytes.

But I don't see any guards in the cluster that disallow quota operations for older buckets/volumes.

Also, do you think we need to document this in our docs?

Contributor

Why do we need to do this during the set operation?

Member Author

This check simply ensures that when we set a quota via RpcClient, the value must be legal (not less than -1). But the getQuotaValue conversion step inside is redundant; I'll delete this conversion.
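A sketch of that client-side legality check (hypothetical class and method names; assuming, per the discussion in this thread, that only -1 or a positive byte count is accepted):

```java
public class SetQuotaValidationSketch {
  // Hypothetical client-side check: only -1 (clear/reset) or a positive
  // byte count is a legal space quota; 0 and values below -1 are rejected.
  static void verifySpaceQuota(long quotaInBytes) {
    if (quotaInBytes < -1 || quotaInBytes == 0) {
      throw new IllegalArgumentException("Invalid space quota: "
          + quotaInBytes + " (use -1 to clear, or a positive byte count)");
    }
  }

  public static void main(String[] args) {
    verifySpaceQuota(-1);         // ok: clear quota
    verifySpaceQuota(1_000_000);  // ok: ~1 MB quota
    try {
      verifySpaceQuota(-5);       // rejected
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```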

@bharatviswa504
Contributor

bharatviswa504 commented Dec 14, 2020

> Currently, an unsigned field can't have a negative default value in proto; if we set [default = -1], it fails to compile.
> For now, we use getQuotaValue(long quota) to handle the default case in RpcClient.

Maybe we could use int64, but the protobuf documentation says it is inefficient for negative numbers, so I'm not sure it is a good idea. Just thought of bringing it up here:

int64: Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint64 instead.
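For concreteness, a self-contained demo of the size difference the protobuf docs describe (plain varint for int64 vs. ZigZag mapping for sint64); this illustrates standard protobuf wire-format behavior, not code from this PR:

```java
public class VarintDemo {
  // Number of bytes a protobuf varint needs for the given unsigned value.
  static int varintSize(long v) {
    int bytes = 1;
    while ((v & ~0x7FL) != 0) { // more than 7 significant bits remain
      bytes++;
      v >>>= 7;
    }
    return bytes;
  }

  // sint64 ZigZag mapping: -1 -> 1, 1 -> 2, -2 -> 3, ...
  static long zigzag(long v) {
    return (v << 1) ^ (v >> 63);
  }

  public static void main(String[] args) {
    System.out.println(varintSize(-1L));          // 10 bytes as int64
    System.out.println(varintSize(zigzag(-1L)));  // 1 byte as sint64
  }
}
```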

I think it will also help if we plan to support quota on older buckets.

The default would be -2: all older buckets will have -2, while new buckets created after this feature will have -1. So if someone sets quota on older buckets, we can detect it this way. Just an idea.

@captainzmc
Member Author

> Maybe we could use int64, but the protobuf documentation says it is inefficient for negative numbers, so I'm not sure it is a good idea. Just thought of bringing it up here.

I'll change the type to sint64, which both allows a negative default and encodes negative numbers more efficiently than int64.

@captainzmc
Member Author

captainzmc commented Dec 15, 2020

> Example:
> V1 - 100 MB
> V1/B1 - 50 MB
> V1/B2 - 50 MB
>
> Now if we clear the quota on V1/B2, we set the quota of V1/B2 to -1; can a new bucket still be created in this volume?
>
> A new bucket is created with quota 50 MB (V1/B3).
>
> So in this volume there are 3 buckets, but only 2 are counted, because clearQuota was run on V1/B2 and that bucket is not considered in quota calculations. That bucket is not counted under the quota, yet for the volume we have crossed its quota (as the user has run clear quota on V1/B2).
>
> So, does clear quota here mean resetting the quota to -1? Or what is the real purpose of clearQuota on clusters; in what scenarios will it be useful?

Thanks for @bharatviswa504's advice; you found a very important point.
Before HDDS-4308, we had usedBytes on the volume, so before writing a key we checked both the bucket quota and the volume quota. Setting a bucket quota to -1 alone did not matter then, since we could still ensure that usedBytes did not exceed the volume quota.
But we can't do that now because the volume no longer has usedBytes. I created a new JIRA, HDDS-4588, to fix this problem: if the volume's quota is enabled, the bucket's quota cannot be cleared, and we prompt the user to clear the volume quota first.
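A hedged sketch of the HDDS-4588 guard described here (illustrative names; the actual patch may differ):

```java
public class ClearQuotaGuardSketch {
  static final long QUOTA_RESET = -1;

  // Clearing a bucket's space quota is disallowed while the parent volume
  // still has a space quota set, since volume-level usage can no longer be
  // verified without volume usedBytes.
  static long clearBucketSpaceQuota(long volumeQuotaInBytes) {
    if (volumeQuotaInBytes > 0) {
      throw new IllegalStateException("Cannot clear the bucket quota while "
          + "the volume quota is set; clear the volume quota first");
    }
    return QUOTA_RESET; // the bucket quota value the caller should store
  }
}
```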

> But I don't see any guards in the cluster that disallow quota operations for older buckets/volumes.
>
> Also, do you think we need to document this in our docs?

I'll refine the docs to make it clear that using quota on older volumes/buckets is not recommended.

@captainzmc captainzmc force-pushed the HDDS-4562 branch 4 times, most recently from a38bd49 to 576e3d3 on December 15, 2020 07:20
@captainzmc
Member Author

During the acceptance test upgrade, namespace quota has the same problem: old volumes cannot create buckets. This is also fixed in this PR. R: @amaliujia @linyiqun

@captainzmc
Member Author

Updated the PR and fixed the review issues.

Contributor

@linyiqun linyiqun left a comment

@captainzmc, the current PR change looks good to me, but I caught some places we can improve in follow-up JIRAs.


d. Volume quota is not currently supported separately; volume quota takes effect only if bucket quota is set, because Ozone only checks the bucket's usedBytes when we write a key.

e. If the cluster is upgraded from a version older than 1.1.0, using quota on older volumes and buckets is not recommended, since old keys are not counted in the bucket's usedBytes and the quota setting would be inaccurate.
Contributor

We can make this check on the OM side and throw an exception once a user tries to set quota on older-version volumes/buckets.

Member Author

Currently, we have not agreed on whether quota can be enabled for old buckets; at present, the document only mentions the recommendation very briefly. If the requirement becomes clear, we can file a JIRA and solve it.

Contributor

Makes sense to me.

So far, we know that Ozone allows users to create volumes, buckets, and keys. A volume usually contains several buckets, and each bucket also contains a certain number of keys. Obviously, Ozone should allow users to define quotas (for example, how many buckets can be created under a volume, or how much space can be used by a bucket), which is a common requirement for storage systems.

## Currently supported
1. Storage Space level quota
Contributor

The namespace quota introduction can also be added. We could file a new JIRA for tracking this.

Contributor

I can add namespace quota to this documentation: https://issues.apache.org/jira/browse/HDDS-4594. Before adding it to the documentation, I will first finish namespace support on buckets: https://issues.apache.org/jira/browse/HDDS-4277

@bharatviswa504
Contributor

> Example:
> V1 - 100 MB
> V1/B1 - 50 MB
> V1/B2 - 50 MB
> Now if we clear the quota on V1/B2, we set the quota of V1/B2 to -1; can a new bucket still be created in this volume?
> A new bucket is created with quota 50 MB (V1/B3).
> So in this volume there are 3 buckets, but only 2 are counted, because clearQuota was run on V1/B2 and that bucket is not considered in quota calculations. That bucket is not counted under the quota, yet for the volume we have crossed its quota (as the user has run clear quota on V1/B2).
> So, does clear quota here mean resetting the quota to -1? Or what is the real purpose of clearQuota on clusters; in what scenarios will it be useful?

> Thanks for @bharatviswa504's advice; you found a very important point.
> Before HDDS-4308, we had usedBytes on the volume, so before writing a key we checked both the bucket quota and the volume quota. Setting a bucket quota to -1 alone did not matter then, since we could still ensure that usedBytes did not exceed the volume quota.
> But we can't do that now because the volume no longer has usedBytes. I created a new JIRA, HDDS-4588, to fix this problem: if the volume's quota is enabled, the bucket's quota cannot be cleared, and we prompt the user to clear the volume quota first.

> But I don't see any guards in the cluster that disallow quota operations for older buckets/volumes.
> Also, do you think we need to document this in our docs?
>
> I'll refine the docs to make it clear that using quota on older volumes/buckets is not recommended.

So now, if the volume quota is not set, the bucket clear space quota just disables quota on the bucket.

Also, setting quota at the volume level alone will not be enforced until bucket-level quotas are set.

To check my understanding: when the volume quota and bucket quota are both set, clearing the volume quota just sets it to -1, so quota is then tracked at the bucket level. And when the bucket quota is cleared while the volume quota is set, we disallow the operation.
This will be the behavior of clear space quota; let me know if I am missing something.

> But we can't do that now because the volume no longer has usedBytes. I created a new JIRA, [HDDS-4588](https://issues.apache.org/jira/browse/HDDS-4588), to fix this problem: if the volume's quota is enabled, the bucket's quota cannot be cleared, and we prompt the user to clear the volume quota first.

Now the scenarios mentioned in the example will not happen. Thanks for opening this JIRA and tracking the issue.

@bharatviswa504
Contributor

> The default would be -2: all older buckets will have -2, while new buckets created after this feature will have -1. So if someone sets quota on older buckets, we can detect it this way. Just an idea.

If we want to disallow quota support for older buckets, this is one idea for doing it at the code level.

Contributor

@bharatviswa504 bharatviswa504 left a comment

Overall LGTM.
A few comments, and one question regarding the handling of old buckets/volumes.


a. By default, the quota for volume and bucket is not enabled.

b. When volume quota is enabled, the total size of bucket quota cannot exceed volume.
Contributor

Minor:
When volume quota is enabled, the total quota of the buckets cannot exceed the volume quota.


d. Volume quota is not currently supported separately; volume quota takes effect only if bucket quota is set, because Ozone only checks the bucket's usedBytes when we write a key.

e. If the cluster is upgraded from a version older than 1.1.0, using quota on older volumes and buckets is not recommended, since old keys are not counted in the bucket's usedBytes and the quota setting would be inaccurate.
Contributor

Can we also add documentation about the clear space quota and the behavior?

Member Author

Yes. To prevent conflicts, I have updated the usage and behavior of clear space quota in this PR.

@amaliujia
Contributor

@captainzmc

Can we use 0 to indicate that a quota is not set? Would this better handle the cluster upgrade issue where some buckets have a 0 quota by default?

If some use case really needs to set a "0" quota on a bucket to lock it against key creation, users could use a negative value for that instead.

@amaliujia
Contributor

Oh, actually from the code it seems that setting the default value to -1 is solved in proto. If so, please ignore my previous comment (which assumed that proto cannot set -1 as the default value for quota).

@ChenSammi
Contributor

ChenSammi commented Dec 16, 2020

> The default would be -2: all older buckets will have -2, while new buckets created after this feature will have -1. So if someone sets quota on older buckets, we can detect it this way. Just an idea.
>
> If we want to disallow quota support for older buckets, this is one idea for doing it at the code level.

Users on an old Ozone version may still want to use the new quota feature; it's better not to shut the door completely.
@adoroszlai, I'm not sure if it's feasible to offer an option of calculating the usedBytes of each bucket during the 1.0.0 -> 1.1.0 upgrade, so that users can choose between tolerating inaccurate usedBytes and a relatively longer upgrade that produces accurate usedBytes for their buckets.

@captainzmc
Member Author

captainzmc commented Dec 16, 2020

> So now, if the volume quota is not set, the bucket clear space quota just disables quota on the bucket.
>
> Also, setting quota at the volume level alone will not be enforced until bucket-level quotas are set.
>
> To check my understanding: when the volume quota and bucket quota are both set, clearing the volume quota just sets it to -1, so quota is then tracked at the bucket level. And when the bucket quota is cleared while the volume quota is set, we disallow the operation.
> This will be the behavior of clear space quota; let me know if I am missing something.

@bharatviswa504 Yes. And the problem of clear quota will be fixed in HDDS-4588.

@captainzmc
Member Author

captainzmc commented Dec 16, 2020

> The default would be -2: all older buckets will have -2, while new buckets created after this feature will have -1. So if someone sets quota on older buckets, we can detect it this way. Just an idea.
>
> If we want to disallow quota support for older buckets, this is one idea for doing it at the code level.
>
> Users on an old Ozone version may still want to use the new quota feature; it's better not to shut the door completely.
> @adoroszlai, I'm not sure if it's feasible to offer an option of calculating the usedBytes of each bucket during the 1.0.0 -> 1.1.0 upgrade, so that users can choose between tolerating inaccurate usedBytes and a relatively longer upgrade that produces accurate usedBytes for their buckets.

Hi @ChenSammi, I think it's dangerous to recalculate usedBytes during the upgrade. If the amount of data is large, it would greatly increase the upgrade time, and if anything goes wrong it could cause the upgrade to fail.
If we want to use quota on old buckets, we can add a command to recalculate a bucket's usedBytes, to minimize the impact. Users can decide for themselves which buckets to recalculate.

@bharatviswa504
Contributor

bharatviswa504 commented Dec 16, 2020

> The default would be -2: all older buckets will have -2, while new buckets created after this feature will have -1. So if someone sets quota on older buckets, we can detect it this way. Just an idea.
>
> If we want to disallow quota support for older buckets, this is one idea for doing it at the code level.
>
> Users on an old Ozone version may still want to use the new quota feature; it's better not to shut the door completely.
> @adoroszlai, I'm not sure if it's feasible to offer an option of calculating the usedBytes of each bucket during the 1.0.0 -> 1.1.0 upgrade, so that users can choose between tolerating inaccurate usedBytes and a relatively longer upgrade that produces accurate usedBytes for their buckets.

@ChenSammi The idea above is to detect older buckets/volumes in the Ozone cluster. Since the discussion says quota on them is not recommended, we could fail the operation; but if we want to support it, then on setQuota we would know it is an old bucket and could show the user a warning that older keys are not counted.

I am fine with whichever way we go; if we clearly document the behavior, it will not be confusing to end users.

@bharatviswa504
Contributor

bharatviswa504 commented Dec 16, 2020

> The default would be -2: all older buckets will have -2, while new buckets created after this feature will have -1. So if someone sets quota on older buckets, we can detect it this way. Just an idea.
>
> If we want to disallow quota support for older buckets, this is one idea for doing it at the code level.
>
> Users on an old Ozone version may still want to use the new quota feature; it's better not to shut the door completely.
> @adoroszlai, I'm not sure if it's feasible to offer an option of calculating the usedBytes of each bucket during the 1.0.0 -> 1.1.0 upgrade, so that users can choose between tolerating inaccurate usedBytes and a relatively longer upgrade that produces accurate usedBytes for their buckets.
>
> Hi @ChenSammi, I think it's dangerous to recalculate usedBytes during the upgrade. If the amount of data is large, it would greatly increase the upgrade time, and if anything goes wrong it could cause the upgrade to fail.
> If we want to use quota on old buckets, we can add a command to recalculate a bucket's usedBytes, to minimize the impact. Users can decide for themselves which buckets to recalculate.

Yes, doing it during the upgrade might not be feasible; the upgrade would take a long time if there are many buckets/keys in the cluster.

> If we want to use quota on old buckets, we can add a command to recalculate a bucket's usedBytes, to minimize the impact. Users can decide for themselves which buckets to recalculate.

Even this would be a costly operation: during it we should acquire the bucket read lock and do the calculation, so all writes to that bucket would be stalled. (During this operation, we should not allow new writes, in order to get an accurate usedBytes.)

@captainzmc
Member Author

Updated the PR; set the default to -2. Whether or not we support setting quota on old volumes/buckets, we need to distinguish which volumes and buckets are old, so here I took @bharatviswa504's advice.

Contributor

@linyiqun linyiqun Dec 17, 2020

Why do we change the condition check above? That check confirms that the quota is enabled; it is not a validation check. If we want to make sure the quota value is positive, I would prefer to additionally add an omVolumeArgs.getQuotaInNamespace() > 0 check:

```java
if (omVolumeArgs.getQuotaInNamespace() != OzoneConsts.QUOTA_RESET
    && omVolumeArgs.getQuotaInNamespace() > 0) {
}
```

Member Author

@linyiqun Isn't omVolumeArgs.getQuotaInNamespace() > 0 enough here? When it is > 0, it is certainly not equal to -1.

Contributor

@linyiqun linyiqun Dec 18, 2020

@captainzmc, my point here is that we should first check whether the quota is enabled, using a flag value (here I understand that is OzoneConsts.QUOTA_RESET), and then check whether the quota value is valid. Is there a switch config for the quota now? If the quota is not enabled, we don't need to do the quota check at all.

For the current value of OzoneConsts.QUOTA_RESET (-1), omVolumeArgs.getQuotaInNamespace() > 0 makes sense. But if OzoneConsts.QUOTA_RESET were ever changed to a positive value, this check would break. So omVolumeArgs.getQuotaInNamespace() != OzoneConsts.QUOTA_RESET would still be a better check.

Member Author

@captainzmc captainzmc Dec 18, 2020

Thanks for @linyiqun's feedback.
Currently there are two cases in which we skip the quota check: -1 (cleared/unset) and -2 (old volume/bucket). We also have a guarantee on the client side that users can't set a negative quota, so apart from cleared quotas and old quotas there is no negative case.

Using omVolumeArgs.getQuotaInNamespace() != OzoneConsts.QUOTA_RESET would also require adding omVolumeArgs.getQuotaInNamespace() != -2; the condition would grow and the code would be less clear. Using omVolumeArgs.getQuotaInNamespace() > 0 makes the code much cleaner (see the sketch below).
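A small sketch contrasting the two checks under discussion (class and constant names are assumed from the conversation, not the actual Ozone code):

```java
public class QuotaEnabledSketch {
  static final long QUOTA_RESET = -1;       // quota cleared / never set
  static final long OLD_QUOTA_DEFAULT = -2; // pre-upgrade volumes/buckets

  // Spelled-out form: enumerate each "not enabled" sentinel.
  static boolean quotaEnabledVerbose(long quota) {
    return quota != QUOTA_RESET && quota != OLD_QUOTA_DEFAULT;
  }

  // Equivalent under the client-side guarantee that user-set quotas are
  // always positive, and shorter.
  static boolean quotaEnabled(long quota) {
    return quota > 0;
  }
}
```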

Contributor

Okay, go ahead with this.

```proto
optional uint64 usedBytes = 14;
optional uint64 quotaInBytes = 15;
optional uint64 quotaInNamespace = 16;
optional int64 quotaInBytes = 15 [default = -2];
```
Contributor

#1677 (comment)
Any reason for moving back to int64?

Member Author

@captainzmc captainzmc Dec 18, 2020

Using sint64, I found that the default value changed to -9223372036854775808, which could be a compatibility issue. So I switched back to int64; this behaves correctly.

Contributor

Interesting, thanks for the info.

Contributor

@bharatviswa504 bharatviswa504 left a comment

+1 LGTM.


@bharatviswa504
Contributor

If there are no more comments, I will commit this by tomorrow.

@bharatviswa504 bharatviswa504 merged commit 0687cc5 into apache:master Dec 23, 2020
@bharatviswa504
Contributor

bharatviswa504 commented Dec 23, 2020

Thank you @captainzmc for the contribution and everyone for the review.

If anyone has more comments, we can open a new JIRA to fix them.

errose28 pushed a commit to prashantpogde/hadoop-ozone that referenced this pull request Feb 3, 2021
HDDS-4562. Old bucket needs to be accessible after the cluster was upgraded to the Quota version. (apache#1677)

Cherry picked from master to fix acceptance test failure in upgrade test. Merging again from this point would have introduced 52 new conflicts.
prashantpogde added a commit that referenced this pull request Feb 4, 2021
 (#1822)

* HDDS-4587. Merge remote-tracking branch 'upstream/master' into HDDS-3698.

* HDDS-4587. Addressing CI failure.

* HDDS-4562. Old bucket needs to be accessible after the cluster was upgraded to the Quota version. (#1677)

Cherry picked from master to fix acceptance test failure in upgrade test. Merging again from this point would have introduced 52 new conflicts.

* HDDS-4770. Upgrade Ratis Thirdparty to 0.6.0 (#1868)

Cherry picked from master because 0.6.0-SNAPSHOT is no longer in the repos

Co-authored-by: micah zhao <[email protected]>
Co-authored-by: Doroszlai, Attila <[email protected]>