
Dgraph unresponsive on read & mutation both #2311

Closed
sameervitian opened this issue Apr 5, 2018 · 13 comments
Labels
investigate Requires further investigation

Comments

@sameervitian
Contributor

If you suspect this could be a bug, follow the template.

  • What version of Dgraph are you using?
    Dgraph version: v1.0.4-dev
    Commit SHA-1: 807976c
    Commit timestamp: 2018-03-22 14:55:24 +1100
    Branch: HEAD

  • Have you tried reproducing the issue with the latest release?
    I am already using the nightly build, as suggested in #2252 (Dgraph bulk setting up cluster).

  • What is the hardware spec (RAM, OS)?
    3 Dgraph data server nodes: Ubuntu 14.04, 8 cores, 32 GB RAM each
    3 Zero nodes: Ubuntu 14.04, 1 core, 2 GB RAM each

  • Steps to reproduce the issue (command/config used to run Dgraph).
    Dgraph config:

export: export
gentlecommit: 0.33
idx: 1
memory_mb: 16087.0
trace: 0.33
postings: /data/dgraph/p
wal: /data/dgraph/w
debugmode: False
bindall: True
my: "<server_ip>:7080"
zero: "<zero_ip>:5080"
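For context, the same config expressed as command-line flags would look roughly like this. The flag names are inferred from the config keys above, so verify them against `dgraph server --help` for your version before relying on them:

```shell
# Sketch of the equivalent dgraph server invocation (flag names assumed
# to match the config keys; <server_ip> and <zero_ip> are placeholders).
dgraph server \
  --idx 1 \
  --memory_mb 16087 \
  --postings /data/dgraph/p \
  --wal /data/dgraph/w \
  --gentlecommit 0.33 \
  --trace 0.33 \
  --export export \
  --bindall \
  --my "<server_ip>:7080" \
  --zero "<zero_ip>:5080"
```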

I have 3 Dgraph servers running in a cluster with replica 3. We have been using this setup in production for more than a week. Today, all of a sudden, both reads and mutations started failing: calls simply freeze and return no response. Since this happened in production, we had no choice but to set up a new Dgraph machine and re-migrate the data from the original source.

I can see the following log on one of the nodes at roughly the same time Dgraph became unresponsive:

2018/04/05 17:03:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 17:04:23 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:26 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 17:04:29 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:33 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:36 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:38 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:42 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:44 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:46 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:50 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:52 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:55 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:59 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:04:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 17:05:02 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:03 groups.go:480: Unable to sync memberships. Error: rpc error: code = Unavailable desc = transport is closing
2018/04/05 17:05:03 groups.go:702: Error in oracle delta stream. Error: rpc error: code = Unavailable desc = transport is closing
2018/04/05 17:05:04 groups.go:694: Error while calling Oracle rpc error: code = Unavailable desc = all SubConns are in TransientFailure
2018/04/05 17:05:04 groups.go:480: Unable to sync memberships. Error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2018/04/05 17:05:05 Error while retrieving timestamps: rpc error: code = Unknown desc = Assigning IDs is only allowed on leader.. Will retry...
2018/04/05 17:05:05 Error while retrieving timestamps: rpc error: code = Unknown desc = Assigning IDs is only allowed on leader.. Will retry...
2018/04/05 17:05:05 Error while retrieving timestamps: rpc error: code = Unknown desc = Assigning IDs is only allowed on leader.. Will retry...
2018/04/05 17:05:05 Error while retrieving timestamps: rpc error: code = Unknown desc = Assigning IDs is only allowed on leader.. Will retry...
2018/04/05 17:05:05 groups.go:702: Error in oracle delta stream. Error: rpc error: code = Unknown desc = Node is no longer leader.
2018/04/05 17:05:05 groups.go:480: Unable to sync memberships. Error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2018/04/05 17:05:05 Error while retrieving timestamps: rpc error: code = Unknown desc = Assigning IDs is only allowed on leader.. Will retry...
2018/04/05 17:05:06 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:06 Error while retrieving timestamps: rpc error: code = Unknown desc = Assigning IDs is only allowed on leader.. Will retry...
2018/04/05 17:05:06 groups.go:702: Error in oracle delta stream. Error: rpc error: code = Unknown desc = Node is no longer leader.
2018/04/05 17:05:06 groups.go:480: Unable to sync memberships. Error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2018/04/05 17:05:09 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:13 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:16 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:19 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:23 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:27 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 17:05:30 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:32 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:34 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:36 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:39 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)
2018/04/05 17:05:41 raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired (remaining ticks: 100)

Since then, logs like raft.go:692: INFO: 2 [logterm: 2, index: 10314649, vote: 0] ignored MsgVote from 3 [logterm: 2, index: 10325133] at term 2: lease is not expired have flooded the log, and Dgraph has been unresponsive.
I have attached heap & CPU profiles.
pprof.dgraph.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz
pprof.dgraph.samples.cpu.001.pb.gz
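For reference, heap and CPU profiles like these can be collected from the standard Go pprof endpoints that a Dgraph server exposes on its HTTP port. The port 8080 used here is an assumption (Dgraph's default HTTP port); adjust for your deployment:

```shell
# Grab a heap profile and a 30-second CPU profile from the affected node.
# Port 8080 is assumed to be the Dgraph server's HTTP port.
curl -sS -o pprof.dgraph.heap.pb.gz "http://localhost:8080/debug/pprof/heap"
curl -sS -o pprof.dgraph.cpu.pb.gz "http://localhost:8080/debug/pprof/profile?seconds=30"

# Inspect the profiles locally with the Go toolchain:
go tool pprof pprof.dgraph.cpu.pb.gz
```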

@sameervitian
Contributor Author

sameervitian commented Apr 5, 2018

Here are the debug vars and Zero state:
debug_vars.txt
state.txt
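For reference, these can be fetched over HTTP from the running processes. The ports below are assumptions (8080 for the Dgraph server's HTTP endpoint, 6080 for Zero's):

```shell
# Go expvar output from the Dgraph server (the debug_vars.txt above).
curl -sS "http://localhost:8080/debug/vars" > debug_vars.txt

# Zero's view of cluster membership (the state.txt above); port 6080 assumed.
curl -sS "http://localhost:6080/state" > state.txt
```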

@pawanrawal
Contributor

pawanrawal commented Apr 6, 2018

Can you share logs from the other servers as well (Zero and Dgraph server)? Also, some more logs from this server from when it was healthy, just before the error logs, would be helpful. Ideally, share the full logs, or at least mention the node id each log file belongs to.

@pawanrawal pawanrawal added the investigate Requires further investigation label Apr 6, 2018
@sameervitian
Contributor Author

sameervitian commented Apr 6, 2018

@pawanrawal the above log is from node id 2. Here are the full logs for nodes 2 and 3.
dgraph2.error.log.tar.gz
dgraph3.error.log.tar.gz

Unfortunately, it looks like we lost the logs of node 1 and the Zero servers.

I can see only logs like the following just before the issue occurred:

2018/04/05 16:51:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:51:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:52:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:52:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:53:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:53:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:54:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:54:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:55:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:55:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:56:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:56:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:57:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:57:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:58:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:58:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:59:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 16:59:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 17:00:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 17:00:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 17:01:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 17:01:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 17:02:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 17:02:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 17:03:29 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]
2018/04/05 17:03:59 draft.go:682: Couldn't take snapshot, txn watermark: [8859047], applied watermark: [10298430]

@pawanrawal
Contributor

pawanrawal commented Apr 6, 2018

Unfortunately, it looks like you are using an older commit from 15 days ago. The fix for the Couldn't take snapshot issue went into f66c7df, as mentioned in #2266 (comment).

@sameervitian
Contributor Author

@pawanrawal is the Dgraph hang issue also related to the Couldn't take snapshot issue?

Also, on restarting Zero, we now get the following error and Zero isn't coming up.

github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra.(*Command).execute(0xc42024ab40, 0xc4202490a0, 0x2, 0x2, 0xc42024ab40, 0xc4202490a0)
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra/command.go:702 +0x2c6
github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x1a60c00, 0xc420131ee8, 0x8f9f90, 0x1a682d0)
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra/command.go:783 +0x30e
github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra.(*Command).Execute(0x1a60c00, 0x1a682d0, 0x1522bc8f17fdd5d9)
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra/command.go:736 +0x2b
github.com/dgraph-io/dgraph/dgraph/cmd.Execute()
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/dgraph/cmd/root.go:52 +0x31
main.main()
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/dgraph/main.go:34 +0xad




2018/04/06 10:57:06 node.go:232: Found Snapshot, Metadata: {ConfState:{Nodes:[1 2 3] XXX_unrecognized:[]} Index:29925435 Term:1378 XXX_unrecognized:[]}
2018/04/06 10:57:06 node.go:247: Found hardstate: {Term:1420 Vote:3 Commit:29935537 XXX_unrecognized:[]}
2018/04/06 10:57:06 node.go:259: Group 0 found 971 entries
raft2018/04/06 10:57:06 missing log entry [last: 29925435, append at: 29934568]
panic: missing log entry [last: 29925435, append at: 29934568]

goroutine 1 [running]:
log.(*Logger).Panicf(0xc420067090, 0x1376658, 0x2b, 0xc42d5c6aa0, 0x2, 0x2)
	/home/travis/.gimme/versions/go1.9.4.linux.amd64/src/log/log.go:219 +0xdb
github.com/dgraph-io/dgraph/vendor/github.com/coreos/etcd/raft.(*DefaultLogger).Panicf(0x1aeb700, 0x1376658, 0x2b, 0xc42d5c6aa0, 0x2, 0x2)
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/vendor/github.com/coreos/etcd/raft/logger.go:121 +0x60
github.com/dgraph-io/dgraph/vendor/github.com/coreos/etcd/raft.(*MemoryStorage).Append(0xc4203160e0, 0xc42d5ac000, 0x3cb, 0x471, 0x0, 0x0)
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/vendor/github.com/coreos/etcd/raft/storage.go:267 +0x641
github.com/dgraph-io/dgraph/conn.(*Node).InitFromWal(0xc420181ad0, 0xc42003afb0, 0xc24280, 0xc420518514, 0xc4205af6b8, 0xc42d540500)
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/conn/node.go:263 +0x523
github.com/dgraph-io/dgraph/dgraph/cmd/zero.(*node).initAndStartNode(0xc4203ccea0, 0xc42003afb0, 0x0, 0x0)
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/dgraph/cmd/zero/raft.go:408 +0x7e
github.com/dgraph-io/dgraph/dgraph/cmd/zero.run()
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/dgraph/cmd/zero/run.go:195 +0x83d
github.com/dgraph-io/dgraph/dgraph/cmd/zero.init.0.func1(0xc42024ab40, 0xc420249120, 0x0, 0x2)
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/dgraph/cmd/zero/run.go:69 +0x52
github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra.(*Command).execute(0xc42024ab40, 0xc4202490e0, 0x2, 0x2, 0xc42024ab40, 0xc4202490e0)
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra/command.go:702 +0x2c6
github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x1a60c00, 0xc420131ee8, 0x8f9f90, 0x1a682d0)
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra/command.go:783 +0x30e
github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra.(*Command).Execute(0x1a60c00, 0x1a682d0, 0x1522bca17a874293)
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra/command.go:736 +0x2b
github.com/dgraph-io/dgraph/dgraph/cmd.Execute()
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/dgraph/cmd/root.go:52 +0x31
main.main()
	/home/travis/gopath/src/github.com/dgraph-io/dgraph/dgraph/main.go:34 +0xad
2018/04/06 10:57:08 zero.go:333: Got connection request: id:3 addr:"172.21.253.104:7080"
2018/04/06 10:57:08 pool.go:118: == CONNECT ==> Setting 172.21.253.104:7080
2018/04/06 10:57:08 zero.go:426: Connected
2018/04/06 10:57:08 node.go:232: Found Snapshot, Metadata: {ConfState:{Nodes:[1 2 3] XXX_unrecognized:[]} Index:29925435 Term:1378 XXX_unrecognized:[]}
2018/04/06 10:57:08 node.go:247: Found hardstate: {Term:1420 Vote:3 Commit:29935537 XXX_unrecognized:[]}
2018/04/06 10:57:08 node.go:259: Group 0 found 971 entries
raft2018/04/06 10:57:08 missing log entry [last: 29925435, append at: 29934568]
panic: missing log entry [last: 29925435, append at: 29934568]

@pawanrawal
Contributor

is dgraph hung issue also related to Couldn't take snapshot issue ?

Not sure. Couldn't take snapshot could have caused your servers to run out of memory, and Zero could have been killed. To confirm that, I'd need the Zero logs. I can see the servers are not able to talk to Zero.

raft2018/04/06 10:57:06 missing log entry [last: 29925435, append at: 29934568]
panic: missing log entry [last: 29925435, append at: 29934568]

This is a new one and has not been reported before. Can you share the zw directory for the Zero this is happening on?

@sameervitian
Contributor Author

The zw folder is 8.8G; tarred, it comes to 1.4G.

/data/dgraph# ls -lh wz/
total 8.8G
-rw-r--r-- 1 root root 1.1G Mar 28 12:10 000000.vlog
-rw-r--r-- 1 root root 1.1G Mar 29 12:04 000001.vlog
-rw-r--r-- 1 root root 1.1G Mar 31 15:45 000002.vlog
-rw-r--r-- 1 root root 1.1G Apr  2 13:03 000003.vlog
-rw-r--r-- 1 root root 1.1G Apr  3 11:15 000004.vlog
-rw-r--r-- 1 root root 1.1G Apr  4 11:38 000005.vlog
-rw-r--r-- 1 root root 1.1G Apr  5 18:49 000006.vlog
-rw-r--r-- 1 root root 974M Apr  6 10:57 000007.vlog
-rw-r--r-- 1 root root  70M Mar 28 15:35 000013.sst
-rw-r--r-- 1 root root  70M Mar 30 01:37 000027.sst
-rw-r--r-- 1 root root  70M Apr  1 09:18 000042.sst
-rw-r--r-- 1 root root  70M Apr  2 10:09 000050.sst
-rw-r--r-- 1 root root  70M Apr  5 04:29 000093.sst
-rw-r--r-- 1 root root  70M Apr  5 04:29 000095.sst
-rw-r--r-- 1 root root  66M Apr  5 04:30 000096.sst
-rw-r--r-- 1 root root  70M Apr  5 18:32 000104.sst
-rw-r--r-- 1 root root  27M Apr  5 18:32 000105.sst
-rw-r--r-- 1 root root  70M Apr  6 08:15 000111.sst
-rw-r--r-- 1 root root  70M Apr  6 08:15 000112.sst
-rw-r--r-- 1 root root  221 Apr  6 10:55 000132.sst
-rw-r--r-- 1 root root  70M Apr  6 10:55 000133.sst
-rw-r--r-- 1 root root  15M Apr  6 10:55 000134.sst
-rw-r--r-- 1 root root  221 Apr  6 10:57 000135.sst
-rw-r--r-- 1 root root  221 Apr  6 10:57 000136.sst
-rw-r--r-- 1 root root  221 Apr  6 10:57 000137.sst
-rw-r--r-- 1 root root  221 Apr  6 10:57 000138.sst
-rw-r--r-- 1 root root  221 Apr  6 10:57 000139.sst
-rw-r--r-- 1 root root  221 Apr  6 10:57 000140.sst
-rw-r--r-- 1 root root 2.4K Apr  6 10:57 MANIFEST

Let me know if you need any specific file.

@pawanrawal
Contributor

I need the whole directory to read it. Feel free to share it with me on email pawan AT dgraph DOT io.

@sameervitian
Contributor Author

I have sent the whole directory to your email.

@sameervitian
Contributor Author

@pawanrawal have you looked at the zw directory? Any clue?

@pawanrawal
Contributor

I did have a look at it and can reproduce this issue. It requires further investigation to understand how it might have happened. Can you please open a new GitHub issue for this bug and continue the discussion there, as this is a separate issue?

@sameervitian
Contributor Author

I have created a new issue - #2327

@manishrjain
Contributor

Closed #2327. If you can reproduce this on master, feel free to reopen.
