
Conversation

@xuanyuanking
Member

@xuanyuanking xuanyuanking commented Mar 29, 2018

What changes were proposed in this pull request?

This is a bug caused by the abnormal scenario described below:

  1. ShuffleMapTask 1.0 is running and fetches data from ExecutorA.
  2. ExecutorA is lost, which triggers mapOutputTracker.removeOutputsOnExecutor(execId); the shuffleStatus changes.
  3. The speculative ShuffleMapTask 1.1 starts and gets a FetchFailed immediately.
  4. ShuffleMapTask 1.0 finally succeeds, but because of 1.1's FetchFailed the stage is still marked as failed.
  5. ShuffleMapTask 1 is the last task of its stage, so ShuffleMapTask 1.0's success event triggers mapOutputTracker.registerMapOutput; this is the root cause of the scenario.
  6. This ShuffleMapStage will always be skipped because the DAGScheduler finds no missing tasks, and as a result its child stage can never succeed.

Detailed screenshots are attached in the JIRA comments.

How was this patch tested?

Add a new UT in TaskSetManagerSuite

@xuanyuanking
Member Author

The scenario can be reproduced by the test case below, added in DAGSchedulerSuite:

  /**
   * This tests the case where the original task succeeds after its speculative
   * attempt has already failed with a FetchFailed.
   */
  test("[SPARK-23811] Fetch failed task should kill other attempt") {
    // Create 3 RDDs with shuffle dependencies on each other: rddA <--- rddB <--- rddC
    val rddA = new MyRDD(sc, 2, Nil)
    val shuffleDepA = new ShuffleDependency(rddA, new HashPartitioner(2))
    val shuffleIdA = shuffleDepA.shuffleId

    val rddB = new MyRDD(sc, 2, List(shuffleDepA), tracker = mapOutputTracker)
    val shuffleDepB = new ShuffleDependency(rddB, new HashPartitioner(2))

    val rddC = new MyRDD(sc, 2, List(shuffleDepB), tracker = mapOutputTracker)

    submit(rddC, Array(0, 1))

    // Complete both tasks in rddA.
    assert(taskSets(0).stageId === 0 && taskSets(0).stageAttemptId === 0)
    complete(taskSets(0), Seq(
      (Success, makeMapStatus("hostA", 2)),
      (Success, makeMapStatus("hostB", 2))))

    // The first task succeeds.
    runEvent(makeCompletionEvent(
      taskSets(1).tasks(0), Success, makeMapStatus("hostB", 2)))

    // The second task's speculative attempt fails first, while the original attempt
    // is still running. This may be caused by an ExecutorLost.
    runEvent(makeCompletionEvent(
      taskSets(1).tasks(1),
      FetchFailed(makeBlockManagerId("hostA"), shuffleIdA, 0, 0, "ignored"),
      null))
    // Check the currently missing partition.
    assert(mapOutputTracker.findMissingPartitions(shuffleDepB.shuffleId).get.size === 1)
    val missingPartition = mapOutputTracker.findMissingPartitions(shuffleDepB.shuffleId).get(0)

    // The original attempt of the second task succeeds soon after.
    runEvent(makeCompletionEvent(
      taskSets(1).tasks(1), Success, makeMapStatus("hostB", 2)))
    // No missing partitions here; this is what causes the child stage to never succeed.
    assert(mapOutputTracker.findMissingPartitions(shuffleDepB.shuffleId).get.size === 0)
  }

@SparkQA

SparkQA commented Mar 29, 2018

Test build #88690 has finished for PR 20930 at commit 2907075.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@xuanyuanking
Member Author

retest this please

@SparkQA

SparkQA commented Mar 29, 2018

Test build #88697 has finished for PR 20930 at commit 2907075.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@xuanyuanking
Member Author

cc @jerryshao @cloud-fan

@xuanyuanking xuanyuanking changed the title [SPARK-23811][Core] Same tasks' FetchFailed event comes before Success will cause child stage never succeed [SPARK-23811][Core] FetchFailed comes before Success of same task will cause child stage never succeed Mar 29, 2018
@cloud-fan
Contributor

What happened to ShuffleMapTask 1.0? I don't get it from your PR description.

@xuanyuanking
Member Author

ShuffleMapTask 1.0 succeeded after its speculative task failed with a FetchFailed. Thanks for checking; I will modify the PR description.

@cloud-fan
Contributor

cloud-fan commented Mar 30, 2018

What happened to ShuffleMapTask 1.0 exactly? There are 3 cases: the stage is marked as failed but not resubmitted yet, the stage has been resubmitted, or the stage is aborted.

@xuanyuanking
Member Author

The first case: the stage is marked as failed but not resubmitted yet.

@cloud-fan
Contributor

Then why is it a problem? The stage should be resubmitted soon, and ShuffleMapTask 1.0 should be a no-op.

@xuanyuanking
Member Author

Yeah, the stage is resubmitted, but there are no missing tasks for this stage, so no task is actually resubmitted. This is mainly because ShuffleMapTask 1.0 triggered shuffleStage.addOutputLoc.
The screenshots I attached in the JIRA may help to explain this scenario.
[screenshots]
You can see the empty ShuffleMapStage 2 retried 4 times; finally its child stage 3 failed with a FetchFailed.

@cloud-fan
Contributor

What's your proposed fix? It sounds like we can just ignore ShuffleMapTask 1.0 if the stage is marked as failed.

@xuanyuanking
Member Author

xuanyuanking commented Mar 31, 2018

What's your proposed fix?

I fix this by killing the other attempts when a FetchFailed is received in TaskSetManager. If we are going to ignore the success events of the other attempts anyway, we might as well stop the tasks.
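A minimal, self-contained sketch of that idea (all names below are illustrative stand-ins, not Spark's real TaskSetManager API):

// Toy model of "kill the other running attempts of a task once one attempt hits FetchFailed".
case class TaskAttempt(taskIndex: Int, attemptId: Int, var running: Boolean)

class ToyTaskSetManager(attempts: Seq[TaskAttempt]) {
  // Called when attempt `failedAttemptId` of `taskIndex` fails with a FetchFailed.
  def handleFetchFailed(taskIndex: Int, failedAttemptId: Int): Unit = {
    attempts
      .filter(a => a.taskIndex == taskIndex && a.attemptId != failedAttemptId && a.running)
      .foreach { a =>
        println(s"Killing attempt ${a.attemptId} of task $taskIndex: a sibling attempt hit FetchFailed")
        a.running = false // real Spark would ask the scheduler backend to kill the task
      }
  }
}

object KillOtherAttemptsDemo extends App {
  val attempts = Seq(
    TaskAttempt(taskIndex = 1, attemptId = 0, running = true),  // original ShuffleMapTask 1.0, still running
    TaskAttempt(taskIndex = 1, attemptId = 1, running = false)) // speculative 1.1, already failed
  new ToyTaskSetManager(attempts).handleFetchFailed(taskIndex = 1, failedAttemptId = 1)
}

(As the discussion below concludes, this kill was eventually dropped in favor of ignoring the late success event.)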

Contributor

If this is async, we can't guarantee there will be no task success events after marking the stage as failed, right?

Contributor

This would not work. Maybe we should just ignore finished tasks submitted to a failed stage?

Member Author

@cloud-fan Yes you're right, I should guarantee this in TaskSetManager.

Member Author

@jiangxb1987 Yes, ignoring the finished event is necessary; maybe we also need to kill the useless task?

Contributor

I don't think so. Useless tasks should fail soon (a FetchFailure usually means the mapper is down).

Member Author

Got it, I'll remove the code and UT in the next commit.

@SparkQA

SparkQA commented Apr 2, 2018

Test build #88806 has finished for PR 20930 at commit 08f6930.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

Contributor

Why are you making this change? I don't quite get it.

Member Author

The fetchFailedTaskIndexSet change is to ignore the task success event if the stage is marked as failed, per Wenchen's suggestion in the earlier comment.

Contributor

We should handle this case in DAGScheduler: there we can look up the stage by task id and see whether the stage has failed. Then we don't need fetchFailedTaskIndexSet.
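A rough, self-contained sketch of that approach (a toy stand-in only; ToyStage, ToyCompletionEvent and failedStages merely mimic Spark's concepts, this is not the real DAGScheduler code):

import scala.collection.mutable

case class ToyStage(id: Int)
case class ToyCompletionEvent(stageId: Int, taskIndex: Int, succeeded: Boolean)

class ToyDagScheduler(stageById: Map[Int, ToyStage]) {
  val failedStages = mutable.Set[ToyStage]()
  val registeredOutputs = mutable.Set[(Int, Int)]() // (stageId, partition)

  def handleTaskCompletion(event: ToyCompletionEvent): Unit = {
    val stage = stageById(event.stageId)
    if (event.succeeded && failedStages.contains(stage)) {
      // The stage already failed (e.g. a speculative attempt hit a FetchFailed),
      // so a late success must not register its map output.
      println(s"Ignoring late success of task ${event.taskIndex} for failed stage ${stage.id}")
      return
    }
    if (event.succeeded) registeredOutputs += ((event.stageId, event.taskIndex))
  }
}

object IgnoreLateSuccessDemo extends App {
  val scheduler = new ToyDagScheduler(Map(1 -> ToyStage(1)))
  scheduler.failedStages += ToyStage(1)                                      // FetchFailed already marked the stage as failed
  scheduler.handleTaskCompletion(ToyCompletionEvent(1, 1, succeeded = true)) // late success arrives afterwards
  println(scheduler.registeredOutputs)                                       // prints an empty set: nothing was registered
}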

Member Author

@xuanyuanking xuanyuanking Apr 18, 2018

Many thanks for the guidance from you both. That's clearer, and the UT added to reproduce this problem can also be used to verify the fix!

@xuanyuanking
Member Author

@cloud-fan @jiangxb1987
Sorry for the late reply; I deleted the useless code as we discussed before.

@xuanyuanking
Member Author

retest this please

Member

@Ngone51 Ngone51 left a comment

Hi, @xuanyuanking. I have some questions about the screenshot you posted. Does stage 2 correspond to the never-succeeding stage in the PR description? If so, why does stage 2 retry 4 times when there are no more missing tasks? As far as I know, if a stage has 0 tasks to submit, its child stage will be submitted soon, so in my understanding there should be no retry for stage 2. Hope you can explain more about the screenshot. Thanks.

} else if (fetchFailedTaskIndexSet.contains(index)) {
  logInfo("Ignoring task-finished event for " + info.id + " in stage " + taskSet.id +
    " because task " + index + " has already failed by FetchFailed")
  return
Member

We cannot simply return here. We should always send a task CompletionEvent to the DAGScheduler, in case any listeners are waiting for it.

Member

Maybe we can mark the task as FAILED with UnknownReason here. Then the DAGScheduler will treat this task as a no-op, and registerMapOutput will not be triggered. Though it is not an elegant way.

Member Author

Yep, as @cloud-fan suggested, handling this in DAGScheduler is a better choice.

@SparkQA

SparkQA commented Apr 16, 2018

Test build #89389 has finished for PR 20930 at commit 0defc09.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@xuanyuanking
Member Author

@Ngone51 Thanks for your review.

Does stage 2 correspond to the never-succeeding stage in the PR description?

Stage 3 is the never-succeeding stage; stage 2 is its parent stage.

If so, why does stage 2 retry 4 times when there are no more missing tasks?

Stage 2's 4 retries are triggered by Stage 3's fetch failed events. Actually, in this scenario stage 3 will always fail with a fetch failure.

* before.
*/
test("[SPARK-23811] FetchFailed comes before Success of same task will cause child stage" +
" never succeed") {
Contributor

nit: the test name should describe the expected behavior, not the buggy one, e.g.:
SPARK-23811: stage failed by FetchFailed should ignore following successful tasks

Member Author

Thanks, I'll change it.

// The second task's speculative attempt fails first, while the original attempt
// is still running. This may be caused by an ExecutorLost.
runEvent(makeCompletionEvent(
taskSets(1).tasks(1),
Contributor

Sorry, I'm not very familiar with this test suite; how can you tell it's a speculative task?

Member Author

Here we only need to mock a speculative task's failed event arriving before the success event; makeCompletionEvent with the same task set's task achieves that. The same approach is used in "task events always posted in speculation / when stage is killed".

Member

Maybe you can runEvent(SpeculativeTaskSubmitted) first to simulate a speculative task being submitted, before you runEvent(makeCompletionEvent()).
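If I read that suggestion right, it would look roughly like the fragment below in the suite (hedged: SpeculativeTaskSubmitted is the DAGScheduler event behind the speculation listener callback, and its exact shape may differ across Spark versions):

// Hypothetical adaptation of the reproduction test above, per the suggestion.
runEvent(SpeculativeTaskSubmitted(taskSets(1).tasks(1)))
runEvent(makeCompletionEvent(
  taskSets(1).tasks(1),
  FetchFailed(makeBlockManagerId("hostA"), shuffleIdA, 0, 0, "ignored"),
  null))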

}
if (failedEpoch.contains(execId) && smt.epoch <= failedEpoch(execId)) {
  logInfo(s"Ignoring possibly bogus $smt completion from executor $execId")
} else if (failedStages.contains(shuffleStage)) {
Contributor

Why do we only have a problem with shuffle map tasks, not result tasks?

Member Author

This also confused me before. As far as I can tell, a result task in such a scenario (speculative task fails but the original task succeeds) is OK because it has no child stage; we can use the successful task's result and markStageAsFinished. But for a shuffle map task it causes an inconsistency between mapOutputTracker and the stage's pendingPartitions, which must be fixed.
I'm not sure about ResultTask's behavior; can you give some advice?

Contributor

Sorry, I may nitpick here. Can you simulate what happens to a result task if FetchFailed comes before the task success?

Contributor

Seems we may mistakenly mark a job as finished?

Member Author

Sorry, I may nitpick here.

No, that's necessary; I do need to make sure about this. Thanks for your advice! :)

Can you simulate what happens to a result task if FetchFailed comes before the task success?

Sure, but it may be hard to reproduce in a real environment; I'll try to fake it in a UT first ASAP.

Member Author

Added a UT simulating this scenario happening to a result task.

@Ngone51
Member

Ngone51 commented Apr 18, 2018

Hi @xuanyuanking, I'm still confused (smile & cry).

Stage 2's 4 retries are triggered by Stage 3's fetch failed events. Actually, in this scenario stage 3 will always fail with a fetch failure.

Stage 2 has no missing tasks, right? So there are no missing partitions for Stage 2 (which means Stage 3 can always get Stage 2's map outputs from MapOutputTrackerMaster), right? So why will Stage 3 always fail with a FetchFailed?

Hope you can explain more. Thank you very much!

@SparkQA

SparkQA commented Apr 18, 2018

Test build #89479 has finished for PR 20930 at commit ba6f71a.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@xuanyuanking
Member Author

@Ngone51
You can check the screenshot in detail: stage 2's shuffleId is 1, but stage 3 failed due to a missing output for shuffle 0! So here stage 2's skip causes stage 3 to get a wrong shuffleId. The root cause is what this patch wants to fix: there should be missing tasks, but actually there are none.

@xuanyuanking
Member Author

@Ngone51 Ah, maybe I see how the description misled you: in item 5 of the description, 'this stage' refers to 'Stage 2' in the screenshot. Thanks for checking; I modified the description to avoid misleading others.

@Ngone51
Member

Ngone51 commented Apr 21, 2018

Hi @xuanyuanking, sincere thanks for your patient explanation.

With regard to your latest explanation:

stage 2's shuffleId is 1, but stage 3 failed due to a missing output for shuffle 0! So here stage 2's skip causes stage 3 to get a wrong shuffleId.

However, I don't think stage 2's skip would lead to stage 3 getting a wrong shuffleId, as we've already created all ShuffleDependencies (constructed with fixed ids) for the ShuffleMapStages before any stage of a job is submitted.

After struggling to understand this issue for a while, I finally arrived at my own inference:

(Assume the 2 ShuffleMapTasks below belong to stage 2, which has two partitions on the map side. Stage 2 has a parent stage named stage 1 and a child stage named stage 3.)

  1. ShuffleMapTask 0.0 runs on ExecutorB, writes its map output on ExecutorB, and succeeds normally. Now there is only 1 available map output registered on MapOutputTrackerMaster.

  2. ShuffleMapTask 1.0 is running on ExecutorA, fetching data from ExecutorA and writing its map output on ExecutorA, too.

  3. ExecutorA is lost for an unknown reason after sending the StatusUpdate message to the driver that reports ShuffleMapTask 1.0's success. All map outputs on ExecutorA are lost, including ShuffleMapTask 1.0's.

  4. The driver launches a speculative ShuffleMapTask 1.1 before it receives the StatusUpdate message, and ShuffleMapTask 1.1 gets a FetchFailed immediately.

  5. DAGScheduler handles the FetchFailed ShuffleMapTask 1.1 first, marking stage 2 and its parent stage 1 as failed. Stage 1 & stage 2 now wait for resubmission.

  6. DAGScheduler handles the successful ShuffleMapTask 1.0 before stage 1 & stage 2 are resubmitted, which triggers MapOutputTrackerMaster.registerMapOutput. Now there are 2 available map outputs registered on MapOutputTrackerMaster (even though ShuffleMapTask 1.0's map output on ExecutorA has been lost).

  7. Stage 1 is resubmitted and succeeds normally.

  8. Stage 2 is resubmitted. As stage 2 has 2 available map outputs registered on MapOutputTrackerMaster, there are no missing partitions for stage 2, and thus no missing tasks to submit either.

  9. Then we submit stage 3. As stage 2's map output file was lost on ExecutorA, stage 3 must get a FetchFailed in the end. Then we resubmit stage 2 & stage 3, and we get into a loop until stage 3 aborts.

But if the issue is what I described above, we should get a FetchFailedException instead of the MetadataFetchFailedException shown in the screenshot, so at this point it doesn't quite make sense.

Please feel free to point out where I'm wrong.

Anyway, thanks again.
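A tiny, self-contained toy of the loop described in steps 6-9 above (illustrative names only, not Spark's real classes): the tracker still "knows" two map outputs, so the resubmitted stage 2 has nothing to run, while the files behind one of them are actually gone.

import scala.collection.mutable

object StaleRegistrationDemo extends App {
  // Toy MapOutputTrackerMaster state for stage 2: partition -> host registered as having the output.
  val registered = mutable.Map(0 -> "hostB", 1 -> "hostA") // step 6: 1.0's late success registered hostA
  // What actually survives on disk after ExecutorA was lost (step 3).
  val filesOnDisk = Set(0 -> "hostB")

  // Step 8: the resubmitted stage 2 computes its missing partitions from the registrations only.
  val missing = (0 until 2).filterNot(registered.contains)
  println(s"Stage 2 missing partitions: $missing") // empty: zero tasks submitted

  // Step 9: stage 3's reducers try to fetch partition 1 from the registered host and fail,
  // because the file is not really there; the fetch failure resubmits stage 2 and the loop repeats.
  val fetchOk = registered.get(1).exists(host => filesOnDisk.contains(1 -> host))
  println(s"Stage 3 fetch of partition 1 succeeds? $fetchOk") // false
}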

@xuanyuanking
Member Author

[screenshot]

Stages 0/1/2/3 here correspond to 20/21/22/23 in this screenshot; stage 2 having shuffleId 1 while stage 3 fetches shuffle 0 indeed can't happen.

Good description of the scenario. We can't get a FetchFailed because we can get the MapStatus, but it is 'null'. If I'm not mistaken, this is also because the ExecutorLost triggered removeOutputsOnExecutor.

Happy to discuss with everyone, and sorry I can't give more detailed logs for tracking down the root cause; this happened in Baidu's online environment and we can't keep all logs for a month. I'll keep working on the case and catching detailed logs as much as possible.

@Ngone51
Member

Ngone51 commented Apr 22, 2018

We can't get a FetchFailed because we can get the MapStatus, but it is 'null'. If I'm not mistaken, this is also because the ExecutorLost triggered removeOutputsOnExecutor.

If there's a null MapStatus for stage 2, how can it retry 4 times without any tasks? IIUC, a null MapStatus leads to a missing partition, which means there would be some tasks to submit.

As for stage 3's shuffleId, that's really weird. Hope you can fix it! @xuanyuanking

@SparkQA

SparkQA commented Apr 25, 2018

Test build #89849 has finished for PR 20930 at commit a201764.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Apr 25, 2018

Test build #89850 has finished for PR 20930 at commit 7f8503f.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

assert(taskSets(1).tasks(1).isInstanceOf[ResultTask[_, _]])
runEvent(makeCompletionEvent(
taskSets(1).tasks(1), Success, makeMapStatus("hostB", 2)))
assertDataStructuresEmpty()
Contributor

What does this test?

Member Author

I added this test to answer your previous question, "Can you simulate what happens to a result task if FetchFailed comes before the task success?". The test passes even without my code change in DAGScheduler.

Contributor

I mean the last line, assertDataStructuresEmpty

Member Author

Ah, it's used to check that the job completes successfully and all temporary structures are empty.

Contributor

Right. It is a check that we are cleaning up the contents of the DAGScheduler's data structures so that they do not grow without bound over time.

// The original attempt of the second result task succeeds soon after.
assert(taskSets(1).tasks(1).isInstanceOf[ResultTask[_, _]])
runEvent(makeCompletionEvent(
taskSets(1).tasks(1), Success, makeMapStatus("hostB", 2)))
Contributor

Where is the code in DAGScheduler that ignores this task?

Member Author

@xuanyuanking xuanyuanking Apr 26, 2018

The success task will be ignored by OutputCommitCoordinator.taskCompleted; in the taskCompleted logic, stageStates.getOrElse will return because the current stage is in the failed set.
(Wrong answer above: a ShuffleMapStage goes through the same logic, and the coordinator didn't filter the completion event.)
The detailed log is provided below:

18/04/26 10:50:24.524 ScalaTest-run-running-DAGSchedulerSuite INFO DAGScheduler: Resubmitting ShuffleMapStage 0 (RDD at DAGSchedulerSuite.scala:74) and ResultStage 1 () due to fetch failure
18/04/26 10:50:24.535 ScalaTest-run-running-DAGSchedulerSuite DEBUG DAGSchedulerSuite$$anon$6: Increasing epoch to 2
18/04/26 10:50:24.538 ScalaTest-run-running-DAGSchedulerSuite INFO DAGScheduler: Executor lost: exec-hostA (epoch 1)
18/04/26 10:50:24.540 ScalaTest-run-running-DAGSchedulerSuite INFO DAGScheduler: Shuffle files lost for executor: exec-hostA (epoch 1)
18/04/26 10:50:24.545 ScalaTest-run-running-DAGSchedulerSuite DEBUG DAGSchedulerSuite$$anon$6: Increasing epoch to 3
18/04/26 10:50:24.552 ScalaTest-run-running-DAGSchedulerSuite DEBUG OutputCommitCoordinator: Ignoring task completion for completed stage
18/04/26 10:50:24.554 ScalaTest-run-running-DAGSchedulerSuite INFO DAGScheduler: ResultStage 1 () finished in 0.136 s
18/04/26 10:50:24.573 ScalaTest-run-running-DAGSchedulerSuite DEBUG DAGScheduler: Removing stage 1 from failed set.
18/04/26 10:50:24.575 ScalaTest-run-running-DAGSchedulerSuite DEBUG DAGScheduler: After removal of stage 1, remaining stages = 1
18/04/26 10:50:24.576 ScalaTest-run-running-DAGSchedulerSuite DEBUG DAGScheduler: Removing stage 0 from failed set.
18/04/26 10:50:24.576 ScalaTest-run-running-DAGSchedulerSuite DEBUG DAGScheduler: After removal of stage 0, remaining stages = 0

Contributor

INFO DAGScheduler: ResultStage 1 () finished in 0.136 s

This is unexpected, isn't it?

Contributor

And it seems Spark will wrongly issue a job end event; can you check it in the test?

Member Author

Yep, you're right. The successful completion event in the UT was treated as a normal task success. I fixed this by ignoring the event at the beginning of handleTaskCompletion.

@SparkQA

SparkQA commented Apr 26, 2018

Test build #89870 has finished for PR 20930 at commit fee903c.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@jiangxb1987
Contributor

jiangxb1987 commented Apr 26, 2018

Have you applied this patch: #17955 ?
That PR seems to be addressing the issue you described:

This duplication adds complexity and creates the potential for certain types of correctness bugs. Bad things can happen if these two copies of the map output locations get out of sync. For instance, if the MapOutputTracker is missing locations for a map output but ShuffleMapStage believes that locations are available then tasks will fail with MetadataFetchFailedException but ShuffleMapStage will not be updated to reflect the missing map outputs, leading to situations where the stage will be reattempted (because downstream stages experienced fetch failures) but no task sets will be launched (because ShuffleMapStage thinks all maps are available).
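To illustrate the "two copies of the map output locations" problem quoted above, here is a small self-contained toy (the classes below only mimic MapOutputTrackerMaster and ShuffleMapStage; they are not Spark's real implementations):

import scala.collection.mutable

// Toy stand-in for MapOutputTrackerMaster: partition -> host that actually has the output.
class ToyTracker { val locations = mutable.Map[Int, String]() }

// Toy stand-in for ShuffleMapStage's own copy of output locations.
class ToyShuffleMapStage(numPartitions: Int) {
  val outputLocs = mutable.Map[Int, String]()
  def findMissingPartitions(): Seq[Int] = (0 until numPartitions).filterNot(outputLocs.contains)
}

object TwoCopiesOutOfSyncDemo extends App {
  val tracker = new ToyTracker
  val stage = new ToyShuffleMapStage(numPartitions = 2)

  // Partition 0 finishes on hostB and both copies record it.
  tracker.locations(0) = "hostB"; stage.outputLocs(0) = "hostB"

  // The executor on hostA is lost, so the tracker never keeps partition 1's location,
  // but a late success event still records it in the stage's own copy.
  stage.outputLocs(1) = "hostA"

  // The stage thinks nothing is missing, so no tasks are launched on resubmission...
  println(s"Stage missing partitions: ${stage.findMissingPartitions()}") // empty: no tasks will be launched
  // ...while reducers consult the tracker, find no location for partition 1,
  // and keep failing with MetadataFetchFailedException-style errors.
  println(s"Tracker has partition 1? ${tracker.locations.contains(1)}")  // false
}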

@xuanyuanking
Member Author

Have you applied this patch: #17955 ?

No, this happened on Spark 2.1. Thanks Xingbo & Wenchen, I'll backport that patch to our internal Spark 2.1.

That PR seems to be addressing the issue you described:

Yeah, the description is similar to the current scenario, but there's still a puzzle about the wrong shuffleId, and I'm trying to find the reason. Thanks again for your help; I'll backport that patch first.

@Ngone51
Member

Ngone51 commented Apr 26, 2018

No wonder I couldn't understand the issue for so long; I thought it happened on Spark 2.3. Now it makes sense. Thanks @jiangxb1987.

@github-actions

We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
If you'd like to revive this PR, please reopen it and ask a committer to remove the Stale tag!

@github-actions github-actions bot added the Stale label Jan 12, 2020
@github-actions github-actions bot closed this Jan 13, 2020