
IPNS over PubSub Publish speed decreased #6291

Closed
aschmahmann opened this issue May 2, 2019 · 4 comments

@aschmahmann (Contributor)

Version information:

Baseline: go-ipfs version: 0.4.19-
Repo version: 7
System version: amd64/windows
Golang version: go1.11.5

Slow: go-ipfs version: 0.4.20-
Repo version: 7
System version: amd64/windows
Golang version: go1.12.4

Still Slow: go-ipfs version: 0.4.21-dev-648fa3aac
Repo version: 7
System version: amd64/windows
Golang version: go1.12

Type:

bug

Description:

I ran the following commands on Windows 10 in a cmd shell:
start /B ipfs daemon --enable-namesys-pubsub
ipfs name publish QmY4mFqm81hECYm6Z2NHzHtJwGaQB53sAQfhRmRPuN6keV
(for those interested, ipfs cat QmY4mFqm81hECYm6Z2NHzHtJwGaQB53sAQfhRmRPuN6keV outputs abc123)

When I run the above two commands on 0.4.19, publishing is DHT-levels of slow (about a minute); on 0.4.20+ it is tremendously slow (many minutes, or I give up and kill the process). After about 77 minutes of hanging, the goroutine dump (collected with curl 'localhost:5001/debug/pprof/goroutine?debug=2') suggests the issue is related to waiting for PubSub to finish bootstrapping over the DHT. Below is some output from an unmodified 0.4.20 binary; happy to provide more upon request.

goroutine 2835 [chan receive, 5 minutes]:
github.com/libp2p/go-libp2p-kad-dht.(*IpfsDHT).Provide(0xc000130b60, 0x1464a00, 0xc00081a210, 0xc0008d5020, 0x24, 0x1, 0x0, 0x0)
	pkg/mod/github.com/libp2p/[email protected]/routing.go:424 +0x422
github.com/libp2p/go-libp2p-pubsub-router.bootstrapPubsub(0x1464a00, 0xc00081a210, 0x6d844a0, 0xc000130b60, 0x1479f60, 0xc00034c460, 0xc0008f0d00, 0x3e)
	pkg/mod/github.com/libp2p/[email protected]/pubsub.go:345 +0x169
github.com/libp2p/go-libp2p-pubsub-router.(*PubsubValueStore).PutValue(0xc00012e600, 0x14649c0, 0xc00043a480, 0xc000817c20, 0x28, 0xc0008fedc0, 0x288, 0x288, 0x0, 0x0, ...)
	pkg/mod/github.com/libp2p/[email protected]/pubsub.go:92 +0x330
github.com/libp2p/go-libp2p-routing-helpers.(*LimitedValueStore).PutValue(0xc00081bfb0, 0x14649c0, 0xc00043a480, 0xc000817c20, 0x28, 0xc0008fedc0, 0x288, 0x288, 0x0, 0x0, ...)
	pkg/mod/github.com/libp2p/[email protected]/limited.go:35 +0xf3
github.com/libp2p/go-libp2p-routing-helpers.(*Compose).PutValue(0xc000386000, 0x14649c0, 0xc00043a480, 0xc000817c20, 0x28, 0xc0008fedc0, 0x288, 0x288, 0x0, 0x0, ...)
	pkg/mod/github.com/libp2p/[email protected]/composed.go:36 +0xc9
github.com/libp2p/go-libp2p-routing-helpers.Parallel.PutValue.func1(0x146f0a0, 0xc000386000, 0x438d8d, 0x130cf50)
	pkg/mod/github.com/libp2p/[email protected]/parallel.go:294 +0x99
github.com/libp2p/go-libp2p-routing-helpers.Parallel.put.func1(0xc00043a4e0, 0xc0006b95c0, 0x2, 0x2, 0xc00079bef0, 0x146f0a0, 0xc000386000, 0x0)
	pkg/mod/github.com/libp2p/[email protected]/parallel.go:127 +0x45
created by github.com/libp2p/go-libp2p-routing-helpers.Parallel.put
	pkg/mod/github.com/libp2p/[email protected]/parallel.go:126 +0x170
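
Reading the trace bottom-up, PutValue calls bootstrapPubsub, which blocks on a DHT Provide call, so the publish cannot return until the topic has been advertised. Below is a minimal sketch of that blocking shape, with hypothetical names (contentRouter, pubsubValueStore, slowDHT are illustrative stand-ins, not the real go-libp2p-pubsub-router API):

// Sketch of the blocking call chain seen in the trace above.
// All names are illustrative, not the real go-libp2p-pubsub-router code.
package main

import (
	"context"
	"fmt"
	"time"
)

// contentRouter stands in for the DHT's routing interface.
type contentRouter interface {
	Provide(ctx context.Context, key string) error
}

type pubsubValueStore struct {
	router contentRouter
}

// PutValue mirrors the shape in the trace: the publish blocks until
// the pubsub topic has been advertised ("provided") on the DHT.
func (p *pubsubValueStore) PutValue(ctx context.Context, key string, val []byte) error {
	// bootstrapPubsub: advertise the topic so other peers can find us.
	// If Provide has to dial a long tail of stale peers, this hangs,
	// and so does `ipfs name publish`.
	if err := p.router.Provide(ctx, "floodsub:"+key); err != nil {
		return fmt.Errorf("bootstrap pubsub: %v", err)
	}
	// ... actually publish the record to the topic ...
	return nil
}

type slowDHT struct{}

func (slowDHT) Provide(ctx context.Context, key string) error {
	select {
	case <-time.After(10 * time.Minute): // stands in for dialing dead peers
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	ps := &pubsubValueStore{router: slowDHT{}}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	fmt.Println(ps.PutValue(ctx, "/ipns/<peer-id>", nil)) // context deadline exceeded
}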
@Stebalien (Member)

So... I believe this is because of libp2p/go-libp2p-peerstore#67 and the fact that we now end up trying to dial all the peers. Before 0.4.20, we'd simply fail to dial this long tail of peers, as their addresses would have expired by the time we got to them.
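
To illustrate the TTL behavior being described, here is a standalone hedged sketch (addrBook, AddAddr, and DialCandidates are made-up names, not the go-libp2p-peerstore API): with short TTLs the long tail of stale peers drops out of the dial list on its own, while longer-lived addresses keep every stale peer dialable, and each dead dial costs a timeout.

// Standalone sketch of address-TTL semantics; illustrative only,
// not the go-libp2p-peerstore implementation.
package main

import (
	"fmt"
	"time"
)

type addrEntry struct {
	addr   string
	expiry time.Time
}

type addrBook struct {
	addrs map[string][]addrEntry // peer ID -> known addresses
}

// AddAddr records an address with a TTL, as peerstores typically do.
func (b *addrBook) AddAddr(peer, addr string, ttl time.Duration) {
	b.addrs[peer] = append(b.addrs[peer], addrEntry{addr, time.Now().Add(ttl)})
}

// DialCandidates returns only unexpired addresses. With short TTLs the
// stale long tail expires before we try to dial it; with long TTLs
// every stale peer remains a dial candidate.
func (b *addrBook) DialCandidates(peer string) []string {
	var out []string
	for _, e := range b.addrs[peer] {
		if time.Now().Before(e.expiry) {
			out = append(out, e.addr)
		}
	}
	return out
}

func main() {
	b := &addrBook{addrs: make(map[string][]addrEntry)}
	b.AddAddr("QmStalePeer", "/ip4/10.0.0.1/tcp/4001", time.Millisecond) // short TTL
	b.AddAddr("QmStalePeer", "/ip4/10.0.0.2/tcp/4001", time.Hour)        // long TTL
	time.Sleep(10 * time.Millisecond)
	fmt.Println(b.DialCandidates("QmStalePeer")) // only the long-TTL address survives
}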

Hm. This could also explain some of the "too many connections" issues we've been seeing.

Could I get a full trace?

@aschmahmann (Contributor, Author) commented May 2, 2019

@Stebalien: The state during my <10 minute run (collected with curl 'localhost:5001/debug/pprof/goroutine?debug=2') is here: https://gist.github.com/aschmahmann/6b1c9bfb356c77276e1850ed2695c331

Happy to provide more logging info if you give me some direction.

Also, I suspect you are correct that this is related to dialing/searching the DHT (continuously searching for the long tail of peers): I created a new repo, copied the Identity (private key + PeerID) into my existing .ipfs repo (leaving everything else alone), and the timing is much better now. Happy to post the slow/unlucky private key if that helps.

@Stebalien (Member)

Yeah, it's stuck in the query runner. I'm going to hide tomorrow and finish a PR.

@Stebalien (Member)

Fixed. We've moved bootstrapping into the background.
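
A hedged sketch of the shape of that fix, reusing the hypothetical pubsubValueStore/contentRouter types from the earlier sketch (not the actual go-libp2p-pubsub-router patch): the Provide call moves into its own goroutine with an independent context, so PutValue returns as soon as the record is published.

// Reuses the illustrative types from the earlier sketch; not the
// actual go-libp2p-pubsub-router change.
func (p *pubsubValueStore) PutValueNonBlocking(ctx context.Context, key string, val []byte) error {
	go func() {
		// Bootstrap with an independent, long-lived context so a slow
		// DHT Provide no longer holds up the publish.
		bctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()
		if err := p.router.Provide(bctx, "floodsub:"+key); err != nil {
			fmt.Println("background pubsub bootstrap:", err)
		}
	}()
	// ... publish the record to the topic and return immediately ...
	return nil
}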
