NullPointerException occurred when ClusterServer is shut down and then started up #463

Closed
all4you opened this issue Jan 26, 2019 · 1 comment · Fixed by #467
Comments

all4you (Contributor) commented Jan 26, 2019

Issue Description

[bug1] I created one ClusterServer and two ClusterClients, and the clients connected to the server. After I shut the server down and started it up again, the NPE occurred.

[bug2] The total connection count is also wrong: it should be 2, but the Token Server list page shows 1, even though the Token Client list page shows both cluster clients as connected.

Describe what happened (or what feature you want)

The error stack trace is below:
Sat Jan 26 21:01:26 CST 2019 sun.misc.Launcher$AppClassLoader@18b4aac2 JM.Log:INFO Log root path: /Users/wanghui/logs/
Sat Jan 26 21:01:26 CST 2019 sun.misc.Launcher$AppClassLoader@18b4aac2 JM.Log:INFO Set nacos log path: /Users/wanghui/logs/nacos
21:01:26.764 [main] INFO com.alibaba.nacos.client.identify.CredentialWatcher - [appA] [] [] No credential found
21:01:27.450 [nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xf468a88d] REGISTERED
21:01:27.451 [nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xf468a88d] BIND: 0.0.0.0/0.0.0.0:11111
21:01:27.459 [nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xf468a88d, L:/0:0:0:0:0:0:0:0:11111] ACTIVE
21:01:28.711 [nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xf468a88d, L:/0:0:0:0:0:0:0:0:11111] READ: [id: 0xe2e5b58d, L:/192.168.0.104:11111 - R:/192.168.0.104:64840]
21:01:28.712 [nioEventLoopGroup-2-1] WARN io.netty.bootstrap.ServerBootstrap - Unknown channel option 'SO_TIMEOUT' for channel '[id: 0xe2e5b58d, L:/192.168.0.104:11111 - R:/192.168.0.104:64840]'
21:01:28.712 [nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xf468a88d, L:/0:0:0:0:0:0:0:0:11111] READ COMPLETE
21:01:28.717 [nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xf468a88d, L:/0:0:0:0:0:0:0:0:11111] READ: [id: 0x9f7bb1af, L:/192.168.0.104:11111 - R:/192.168.0.104:64841]
21:01:28.717 [nioEventLoopGroup-2-1] WARN io.netty.bootstrap.ServerBootstrap - Unknown channel option 'SO_TIMEOUT' for channel '[id: 0x9f7bb1af, L:/192.168.0.104:11111 - R:/192.168.0.104:64841]'
21:01:28.717 [nioEventLoopGroup-2-1] INFO io.netty.handler.logging.LoggingHandler - [id: 0xf468a88d, L:/0:0:0:0:0:0:0:0:11111] READ COMPLETE
21:01:28.742 [nioEventLoopGroup-3-1] WARN io.netty.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.lang.NullPointerException: null
at com.alibaba.csp.sentinel.cluster.server.connection.ConnectionManager.addConnection(ConnectionManager.java:97) ~[sentinel-cluster-server-default-1.4.1.jar:?]
at com.alibaba.csp.sentinel.cluster.server.handler.TokenServerHandler.handlePingRequest(TokenServerHandler.java:103) ~[sentinel-cluster-server-default-1.4.1.jar:?]
at com.alibaba.csp.sentinel.cluster.server.handler.TokenServerHandler.channelRead(TokenServerHandler.java:69) ~[sentinel-cluster-server-default-1.4.1.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [netty-codec-4.1.31.Final.jar:4.1.31.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297) [netty-codec-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [netty-codec-4.1.31.Final.jar:4.1.31.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297) [netty-codec-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:648) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:583) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:500) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:462) [netty-transport-4.1.31.Final.jar:4.1.31.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-common-4.1.31.Final.jar:4.1.31.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.31.Final.jar:4.1.31.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_152]
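
The top frame points at ConnectionManager.addConnection (line 97 of sentinel-cluster-server-default 1.4.1). Below is a minimal, purely illustrative sketch of how an NPE of this shape can appear after a restart; the class and map are hypothetical stand-ins, not Sentinel's actual ConnectionManager. It assumes the server keeps a namespace-keyed map of connection groups and that the entry for a namespace can be missing when a client's ping arrives right after the restart.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical, simplified stand-in for a connection registry; NOT Sentinel's code.
public class ConnectionRegistrySketch {

    // namespace -> set of "host:port" connection descriptors
    private static final Map<String, Set<String>> CONNECTIONS = new ConcurrentHashMap<>();

    // Unsafe variant: assumes the group for the namespace already exists.
    // If a shutdown cleared the map (or the group was never re-created after
    // a restart), get(namespace) returns null and add(...) throws an NPE
    // like the one in the stack trace above.
    static void addConnectionUnsafe(String namespace, String address) {
        CONNECTIONS.get(namespace).add(address); // NPE when the group is missing
    }

    // Defensive variant: lazily (re-)create the group, so a ping arriving
    // right after a restart cannot hit a missing entry.
    static void addConnectionSafe(String namespace, String address) {
        CONNECTIONS.computeIfAbsent(namespace, k -> ConcurrentHashMap.newKeySet())
                   .add(address);
    }

    public static void main(String[] args) {
        addConnectionSafe("appA", "192.168.0.104:64840");        // fine
        try {
            addConnectionUnsafe("appB", "192.168.0.104:64841");  // no group for "appB"
        } catch (NullPointerException e) {
            System.out.println("Reproduced the kind of NPE from the report: " + e);
        }
    }
}
```

Whatever the exact cause inside Sentinel, guarding the lookup or re-initializing the per-namespace state on startup is one plausible direction for a fix (see the PR referenced above, #467).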

Describe what you expected to happen

No NPE should be thrown

How to reproduce it (as minimally and precisely as possible)

  1. start a token server
  2. start two token clients and connect them to the token server
  3. shut down the token server
  4. restart the token server
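
For reference, a sketch of these steps in code is below. The class and setter names (SentinelDefaultTokenServer, ServerTransportConfig, ClusterServerConfigManager) are taken from the Sentinel cluster demos and should be treated as assumptions for your Sentinel version; the two token clients would run in separate processes.

```java
import com.alibaba.csp.sentinel.cluster.server.SentinelDefaultTokenServer;
import com.alibaba.csp.sentinel.cluster.server.config.ClusterServerConfigManager;
import com.alibaba.csp.sentinel.cluster.server.config.ServerTransportConfig;

public class TokenServerRestartRepro {
    public static void main(String[] args) throws Exception {
        // Listen on port 11111, matching the log above.
        ClusterServerConfigManager.loadGlobalTransportConfig(
                new ServerTransportConfig().setPort(11111).setIdleSeconds(600));

        SentinelDefaultTokenServer server = new SentinelDefaultTokenServer();
        server.start();            // step 1: start the token server

        // Step 2: start two token clients (separate processes) pointing at port 11111
        // and wait until both show as connected.
        Thread.sleep(30_000);

        server.stop();             // step 3: shut the token server down
        Thread.sleep(5_000);
        server.start();            // step 4: start it again; the clients' next ping
                                   // should then hit the NPE in ConnectionManager.
                                   // (The report restarted the server itself; restarting
                                   // the same instance in one JVM is only a sketch.)
    }
}
```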

Tell us your environment

OS:
Darwin 18.2.0 Darwin Kernel Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64
JDK:
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)

Anything else we need to know?

None

@sczyh30 sczyh30 added kind/bug Category issues or prs related to bug. area/cluster-flow Issues or PRs related to cluster flow control labels Jan 28, 2019
@sczyh30 sczyh30 added this to the 1.4.2 milestone Jan 28, 2019
@sczyh30 sczyh30 self-assigned this Jan 29, 2019
zhoushuai1119 commented:

I also get this error when I start the token-server. How was it resolved?
