Data recovery #97
Can you tell me more about this issue? Thanks!
When performing data operations such as insertion or deletion, a record is written to the operation log. If an operation fails, the failed operation is read back from the log and resumed. However, writing these operation log entries has not been implemented yet.
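As a minimal sketch, a record in such an operation log might look something like this (the struct and field names are only illustrative, not the actual code in this project):

```go
package oplog

import (
	"encoding/json"
	"os"
	"time"
)

// Record is one entry in the operation log: enough information to
// re-execute the operation later if it failed.
type Record struct {
	Op    string    `json:"op"`              // "put" or "delete"
	Key   []byte    `json:"key"`
	Value []byte    `json:"value,omitempty"` // empty for deletes
	Time  time.Time `json:"time"`
	Err   string    `json:"err,omitempty"`   // non-empty if the operation failed
}

// Append writes one record to the operation log, one JSON object per line,
// so a recovery routine can later scan the file and resume failed operations.
func Append(path string, rec Record) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	data, err := json.Marshal(rec)
	if err != nil {
		return err
	}
	_, err = f.Write(append(data, '\n'))
	return err
}
```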
Below are some of my questions; I hope they can be answered. Thank you very much!
Sure, here's my reply.
A: Of course, but we do not currently have a configurable retry mechanism for data operations. I think the retry count should be a built-in DB setting rather than something exposed to users (a small sketch of what I mean follows these answers).
A: I'm not sure what you mean. Maybe you could be a little more specific.
A: The project went unmaintained for a while and the documentation was not kept up to date, so no functions related to operation logs exist yet. I hope that clears up your confusion.
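To make the built-in retry point concrete, I mean something on the order of this sketch (the constant and function names are made up, assuming a simple synchronous retry):

```go
package engine

// maxDataOpRetries is deliberately an internal constant rather than a field
// on the user-facing Options, so callers cannot (and need not) tune it.
const maxDataOpRetries = 3

// withRetry re-runs op up to maxDataOpRetries times and returns the last error.
func withRetry(op func() error) error {
	var err error
	for i := 0; i < maxDataOpRetries; i++ {
		if err = op(); err == nil {
			return nil
		}
	}
	return err
}
```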
Thank you for your answer! I had misunderstood some things earlier, but I have some new questions I hope you can help with.
The Put function in db.go returns an error without logging anything; it simply returns the error to the caller.
db.index.Put does not return an err at all; when it fails, the caller just returns a predefined constant error.
In such an asynchronous process, where the function has already returned to the caller, what is the purpose of the retry?
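To make my question concrete, this is roughly the flow I am describing, heavily simplified (stand-in types and names, not the real code):

```go
package sketch

import "errors"

// ErrIndexUpdateFailed stands in for the predefined constant error I mentioned.
var ErrIndexUpdateFailed = errors.New("failed to update index")

// indexer is a simplified stand-in for the in-memory index: Put only
// reports success or failure via a bool, it never returns an error.
type indexer interface {
	Put(key []byte, pos int64) bool
}

// DB is a stripped-down stand-in for the storage engine.
type DB struct {
	index indexer
}

// appendLogRecordWithLock stands in for the real data-file append.
func (db *DB) appendLogRecordWithLock(key, value []byte) (int64, error) {
	// ... the real implementation writes the record to the data file ...
	return 0, nil
}

// Put shows the error flow I am asking about: any error is returned straight
// to the caller, and nothing is written to an operation log on the way out.
func (db *DB) Put(key, value []byte) error {
	pos, err := db.appendLogRecordWithLock(key, value)
	if err != nil {
		return err // returned directly, not recorded anywhere
	}
	if ok := db.index.Put(key, pos); !ok {
		return ErrIndexUpdateFailed // a predefined constant error, no context
	}
	return nil
}
```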
ok, here's my reply.
Take the index operations as an example. The index CRUD operations return a bool, but the real persistence is the write to disk. Error handling for the index itself is unnecessary, since the index is maintained in memory and any failure there would most likely mean insufficient memory. As for the Put function in engine/db.go:
Following this error logic, let's analyze how to make sure data is properly inserted. When the appendLogRecordWithLock function is used, errors may arise for one of three reasons.
Therefore, you could record the location and cause of each error in 'runtime_error.log' and then re-execute the operation. This is merely to offer a line of thought; it might not be the best direction, as the implementation could be somewhat troublesome.
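As a rough illustration of that line of thought (the names and the JSON-lines format here are made up, not a concrete design):

```go
package recovery

import (
	"bufio"
	"encoding/json"
	"os"
	"time"
)

// FailedOp records where a data operation failed and why, so it can be retried later.
type FailedOp struct {
	Op    string    `json:"op"`    // "put" or "delete"
	Key   []byte    `json:"key"`
	Value []byte    `json:"value,omitempty"`
	Cause string    `json:"cause"` // which step failed, e.g. "appendLogRecordWithLock: disk full"
	Time  time.Time `json:"time"`
}

// RecordFailure appends one failed operation to runtime_error.log as a JSON line.
func RecordFailure(path string, op FailedOp) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	line, err := json.Marshal(op)
	if err != nil {
		return err
	}
	_, err = f.Write(append(line, '\n'))
	return err
}

// ReplayFailures reads the error log and re-executes each failed operation
// through apply, e.g. a closure around the engine's Put/Delete.
func ReplayFailures(path string, apply func(FailedOp) error) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		var op FailedOp
		if err := json.Unmarshal(scanner.Bytes(), &op); err != nil {
			return err
		}
		if err := apply(op); err != nil {
			return err
		}
	}
	return scanner.Err()
}
```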
Thank you for your answer!
In general, there are two approaches to log recovery. One involves analyzing the error log and then re-executing the failed operations, while the other entails reading the successful log to recover all the data.
Thank you for your very patient reply. Is it correct to understand that this feature mainly involves logging data operations within the program and providing a log-based API that users can call to retry operations and recover data? However, the current zap log configuration does not include log rotation or cleanup, and the runtime log keeps the full history. For the API mentioned above, I think it should accept a time range so that retries or recovery are limited to operations within that timeframe.
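For the rotation part, I was thinking of the usual pattern of wiring lumberjack into zap, something like this sketch (not this project's actual logger setup):

```go
package logging

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
	"gopkg.in/natefinch/lumberjack.v2"
)

// NewRotatingLogger builds a zap logger whose output file is rotated and
// whose old files are eventually deleted, so the runtime log does not grow forever.
func NewRotatingLogger(path string) *zap.Logger {
	writer := zapcore.AddSync(&lumberjack.Logger{
		Filename:   path,
		MaxSize:    100, // megabytes per file before rotation
		MaxBackups: 5,   // keep at most 5 rotated files
		MaxAge:     28,  // days to keep rotated files
	})
	core := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		writer,
		zap.InfoLevel,
	)
	return zap.New(core)
}
```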
Your understanding is accurate. It is essential to provide an API with a specified time range so that recovery does not significantly increase system overhead. At the same time, to maintain data consistency, recovery must not leave the system in an inconsistent state. Full-recovery functionality should also be available: when the log files are small or recovery operations are infrequent, full recovery is preferable to recovery within a specified time range. For example:
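Something along these lines, perhaps (RecoverRange and RecoverAll are made-up names, just to sketch the shape of the API):

```go
package recovery

import (
	"bufio"
	"encoding/json"
	"os"
	"time"
)

// LogRecord mirrors one line of the operation log (illustrative layout only).
type LogRecord struct {
	Op    string    `json:"op"`
	Key   []byte    `json:"key"`
	Value []byte    `json:"value,omitempty"`
	Time  time.Time `json:"time"`
}

// RecoverRange re-applies only the operations logged between from and to,
// so a caller can bound how much work (and overhead) one recovery run does.
func RecoverRange(logPath string, from, to time.Time, apply func(LogRecord) error) error {
	f, err := os.Open(logPath)
	if err != nil {
		return err
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		var rec LogRecord
		if err := json.Unmarshal(scanner.Bytes(), &rec); err != nil {
			return err
		}
		if rec.Time.Before(from) || rec.Time.After(to) {
			continue // outside the requested window
		}
		if err := apply(rec); err != nil {
			return err
		}
	}
	return scanner.Err()
}

// RecoverAll is the full-recovery variant: it simply replays the whole log.
func RecoverAll(logPath string, apply func(LogRecord) error) error {
	return RecoverRange(logPath, time.Time{}, time.Now(), apply)
}
```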
Thank you for your patient response! I will try to work on this issue!
OK, give it a try.
If a data operation, such as data insertion or deletion, fails, you need to perform the operation again according to the operation log.