2023/05/25 15:44:10.346368 juicefs[25278] <WARNING>: The latency to database is too high: 25.149877ms [sql.go:240]
2023-05-25 15:44:10,348 WARN juicefs.JuiceFileSystemImpl: 2023/05/25 15:44:10.346368 juicefs[25278] <WARNING>: The latency to database is too high: 25.149877ms [sql.go:240]
2023-05-25 15:44:10,565 WARN utils.NodesFetcher: fetch from:http://ds-3:8088/ws/v1/cluster/nodes/ failed, switch to another url
io.juicefs.shaded.org.json.JSONException: JSONObject["node"] not found.
at io.juicefs.shaded.org.json.JSONObject.get(JSONObject.java:566)
at io.juicefs.shaded.org.json.JSONObject.getJSONArray(JSONObject.java:760)
at io.juicefs.utils.YarnNodesFetcher.parseNodes(YarnNodesFetcher.java:60)
at io.juicefs.utils.NodesFetcher.getNodes(NodesFetcher.java:109)
at io.juicefs.utils.YarnNodesFetcher.getNodes(YarnNodesFetcher.java:54)
at io.juicefs.utils.NodesFetcher.fetchNodes(NodesFetcher.java:65)
at io.juicefs.JuiceFileSystemImpl.discoverNodes(JuiceFileSystemImpl.java:727)
at io.juicefs.JuiceFileSystemImpl.initCache(JuiceFileSystemImpl.java:690)
at io.juicefs.JuiceFileSystemImpl.initialize(JuiceFileSystemImpl.java:412)
at org.apache.hadoop.fs.FilterFileSystem.initialize(FilterFileSystem.java:98)
at io.juicefs.JuiceFileSystem.initialize(JuiceFileSystem.java:71)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:288)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:342)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:252)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:235)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:105)
at org.apache.hadoop.fs.shell.Command.run(Command.java:179)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:95)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
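For context on the `JSONException` above: YARN's `/ws/v1/cluster/nodes` REST endpoint normally wraps the node list as `{"nodes": {"node": [...]}}`, and `YarnNodesFetcher.parseNodes` looks up that inner `"node"` array. When the queried ResourceManager is standby (or the cluster has no live NodeManagers), the inner array can be absent, which matches the error seen here. A minimal Python sketch of the failure mode (the sample payloads are illustrative, not captured from this cluster):

```python
# Illustrative payloads: "healthy" mirrors the documented YARN response shape,
# "standby" mirrors a response missing the inner "node" array.
healthy = {"nodes": {"node": [{"nodeHostName": "ds-4", "state": "RUNNING"}]}}
standby = {"nodes": {}}

def parse_nodes(payload):
    """Rough analogue of YarnNodesFetcher.parseNodes: return host names,
    or None where org.json would throw JSONObject["node"] not found."""
    nodes = payload.get("nodes", {})
    if "node" not in nodes:
        return None  # the shaded org.json library throws JSONException here
    return [n["nodeHostName"] for n in nodes["node"]]

print(parse_nodes(healthy))  # ['ds-4']
print(parse_nodes(standby))  # None
```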
2023-05-25 15:44:10,582 WARN utils.NodesFetcher: java.net.ConnectException: Connection refused
2023-05-25 15:44:10,617 WARN utils.NodesFetcher: fetch from:http://ds-2:8088/ws/v1/cluster/nodes/ failed, switch to another url
io.juicefs.shaded.org.json.JSONException: JSONObject["node"] not found.
at io.juicefs.shaded.org.json.JSONObject.get(JSONObject.java:566)
at io.juicefs.shaded.org.json.JSONObject.getJSONArray(JSONObject.java:760)
at io.juicefs.utils.YarnNodesFetcher.parseNodes(YarnNodesFetcher.java:60)
at io.juicefs.utils.NodesFetcher.getNodes(NodesFetcher.java:109)
at io.juicefs.utils.YarnNodesFetcher.getNodes(YarnNodesFetcher.java:54)
at io.juicefs.utils.NodesFetcher.fetchNodes(NodesFetcher.java:65)
at io.juicefs.JuiceFileSystemImpl.discoverNodes(JuiceFileSystemImpl.java:727)
at io.juicefs.JuiceFileSystemImpl.initCache(JuiceFileSystemImpl.java:690)
at io.juicefs.JuiceFileSystemImpl.initialize(JuiceFileSystemImpl.java:412)
at org.apache.hadoop.fs.FilterFileSystem.initialize(FilterFileSystem.java:98)
at io.juicefs.JuiceFileSystem.initialize(JuiceFileSystem.java:71)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:288)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:342)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:252)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:235)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:105)
at org.apache.hadoop.fs.shell.Command.run(Command.java:179)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:95)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
2023-05-25 15:44:10,684 WARN utils.NodesFetcher: fetch from:http://ds-3:8088/ws/v1/cluster/nodes/ failed, switch to another url
io.juicefs.shaded.org.json.JSONException: JSONObject["node"] not found.
at io.juicefs.shaded.org.json.JSONObject.get(JSONObject.java:566)
at io.juicefs.shaded.org.json.JSONObject.getJSONArray(JSONObject.java:760)
at io.juicefs.utils.YarnNodesFetcher.parseNodes(YarnNodesFetcher.java:60)
at io.juicefs.utils.NodesFetcher.getNodes(NodesFetcher.java:109)
at io.juicefs.utils.YarnNodesFetcher.getNodes(YarnNodesFetcher.java:54)
at io.juicefs.utils.NodesFetcher.fetchNodes(NodesFetcher.java:65)
at io.juicefs.JuiceFileSystemImpl.discoverNodes(JuiceFileSystemImpl.java:727)
at io.juicefs.JuiceFileSystemImpl.initCache(JuiceFileSystemImpl.java:690)
at io.juicefs.JuiceFileSystemImpl.initialize(JuiceFileSystemImpl.java:412)
at io.juicefs.JuiceFileSystem.lambda$startTrashEmptier$1(JuiceFileSystem.java:87)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at io.juicefs.JuiceFileSystem.startTrashEmptier(JuiceFileSystem.java:85)
at io.juicefs.JuiceFileSystem.initialize(JuiceFileSystem.java:73)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:288)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:342)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:252)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:235)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:105)
at org.apache.hadoop.fs.shell.Command.run(Command.java:179)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:95)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
2023-05-25 15:44:10,686 WARN utils.NodesFetcher: java.net.ConnectException: Connection refused
2023-05-25 15:44:10,704 WARN utils.NodesFetcher: fetch from:http://ds-2:8088/ws/v1/cluster/nodes/ failed, switch to another url
io.juicefs.shaded.org.json.JSONException: JSONObject["node"] not found.
at io.juicefs.shaded.org.json.JSONObject.get(JSONObject.java:566)
at io.juicefs.shaded.org.json.JSONObject.getJSONArray(JSONObject.java:760)
at io.juicefs.utils.YarnNodesFetcher.parseNodes(YarnNodesFetcher.java:60)
at io.juicefs.utils.NodesFetcher.getNodes(NodesFetcher.java:109)
at io.juicefs.utils.YarnNodesFetcher.getNodes(YarnNodesFetcher.java:54)
at io.juicefs.utils.NodesFetcher.fetchNodes(NodesFetcher.java:65)
at io.juicefs.JuiceFileSystemImpl.discoverNodes(JuiceFileSystemImpl.java:727)
at io.juicefs.JuiceFileSystemImpl.initCache(JuiceFileSystemImpl.java:690)
at io.juicefs.JuiceFileSystemImpl.initialize(JuiceFileSystemImpl.java:412)
at io.juicefs.JuiceFileSystem.lambda$startTrashEmptier$1(JuiceFileSystem.java:87)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at io.juicefs.JuiceFileSystem.startTrashEmptier(JuiceFileSystem.java:85)
at io.juicefs.JuiceFileSystem.initialize(JuiceFileSystem.java:73)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:288)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:342)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:252)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:235)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:105)
at org.apache.hadoop.fs.shell.Command.run(Command.java:179)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:95)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
2023-05-25 15:44:10,711 INFO fs.TrashPolicyDefault: The configured checkpoint interval is 0 minutes. Using an interval of 0 minutes that is used for deletion instead
2023-05-25 15:44:10,711 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Found 2 items
drwxrwxrwx - hdfs supergroup 4096 2023-05-25 14:19 /tmp
drwxr-xr-x - hdfs supergroup 4096 2023-05-25 14:18 /user
What happened:
Running `hadoop fs -ls /` produces the exceptions shown above.
After changing `juicefs.discover-nodes-url` to `jfs://myjfs/etc/nodes` and uploading the nodes file, `hadoop fs -ls /` works normally, but running `hadoop fs -mkdir /flink` raises the same exception.
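For reference, the change described above corresponds to a `core-site.xml` entry like the following (a sketch: the property name and value are from the issue text; the comment about the default behavior is an assumption based on the ResourceManager URLs being queried in the logs):

```xml
<!-- Point JuiceFS node discovery at a static node list stored on JuiceFS,
     instead of the YARN ResourceManager REST API that fails in the logs. -->
<property>
  <name>juicefs.discover-nodes-url</name>
  <value>jfs://myjfs/etc/nodes</value>
</property>
```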
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?
Environment:
JuiceFS version (use `juicefs --version`) or Hadoop Java SDK version: self-compiled build, juicefs-hadoop-1.1-dev.jar
Hadoop: 3.3.3
Cloud provider or hardware configuration running JuiceFS:
OS (e.g. `cat /etc/os-release`):
Kernel (e.g. `uname -a`): Linux ds-1 3.10.0-1160.90.1.el7.x86_64 #1 SMP Thu May 4 15:21:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Object storage (cloud provider and region, or self maintained): Minio
Metadata engine info (version, cloud provider managed or self maintained): MySQL, mysql Ver 14.14 Distrib 5.7.42, for Linux (x86_64) using EditLine wrapper
Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage):
Others:
juiceFs
core-site.xml