ZooKeeper

ZooKeeper is a distributed, open-source coordination service for distributed applications. It is an open-source implementation of Google's Chubby and an important component of Hadoop and HBase. It provides consistency services for distributed applications, including configuration maintenance, naming, distributed synchronization, and group services.

I. Version Information

CentOS 6.7 (64-bit)

JDK 1.8.0_162 (64-bit)

zookeeper-3.4.11.tar.gz


II. ZooKeeper Cluster Installation

A cluster requires at least 3 servers and consists of one Leader and multiple Followers.


Installation directory: /home/zookeeper

1. Extract the archive

# tar -zxf zookeeper-3.4.11.tar.gz

2. Edit /home/zookeeper/zookeeper-3.4.11/conf/zoo.cfg; if it does not exist, copy zoo_sample.cfg first:

# cp zoo_sample.cfg zoo.cfg


Server configuration (see the sample configuration after this list), where:

dataDir=/home/zookeeper/data is the data storage location; the data directory must be created manually.

clientPort=2181 is the port clients use to connect to the ZooKeeper cluster.

    2888 is the port the cluster uses to synchronize data.

    3888 is the port used for leader election.
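
As a reference, a minimal zoo.cfg for a three-node ensemble might look like the following; the hostnames node1, node2, and node3 are assumptions and should be replaced with your own servers:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/zookeeper/data
clientPort=2181
# node1/node2/node3 are assumed hostnames; replace with your servers
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888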

3. In the directory specified by dataDir, create a file named myid (all lowercase) containing this server's ID, which must match its server.N entry; a sketch follows.
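
With the sample server list above, the myid files might be created like this (a sketch; the ID written on each machine must match its server.N line):

On node1:

# mkdir -p /home/zookeeper/data
# echo "1" > /home/zookeeper/data/myid

On node2 write 2 instead, and on node3 write 3.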


4. Starting ZooKeeper

[root@node1 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/zookeeper/zookeeper-3.4.11/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Start the other two nodes the same way, then check each node's status:

[root@node1 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/zookeeper/zookeeper-3.4.11/bin/../conf/zoo.cfg
Mode: follower
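
One of the three nodes is elected leader and reports Mode: leader instead. Which node that is depends on the election, so node3 below is only illustrative:

[root@node3 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/zookeeper/zookeeper-3.4.11/bin/../conf/zoo.cfg
Mode: leader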

5. Common Commands

Connect with the client

[root@node1 bin]# ./zkCli.sh
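
Run without arguments, zkCli.sh connects to localhost:2181. To connect to the whole ensemble from another machine, pass a connection string with -server; the hostnames here follow the assumed sample configuration above:

[root@node1 bin]# ./zkCli.sh -server node1:2181,node2:2181,node3:2181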

Exit

] quit

View all available commands (type help):

stat path [watch]
set path data [version]
ls path [watch]
delquota [-n|-b] path
ls2 path [watch]
setAcl path acl
setquota -n|-b val path
history 
redo cmdno
printwatches on|off
delete path [version]
sync path
listquota path
rmr path
get path [watch]
create [-s] [-e] path data acl
addauth scheme auth
quit 
getAcl path
close 
connect host:port

Create a node

] create /miselehe "www.miselehe.com"
Created /miselehe

Created nodes are persistent by default.

-s: create a sequential, auto-numbered node (persistent or ephemeral);

-e: create an ephemeral node;

Create sequential ephemeral nodes:

] create -s -e /mslh "miselehe"
Created /mslh0000000001
] create -s -e /mslh "miselehe1"
Created /mslh0000000002
] ls /
[miselehe, mslh0000000002, zookeeper, mslh0000000001]

Delete a node

] delete /mslh0000000001
] ls /                  
[miselehe, mslh0000000002, zookeeper]

Once the client exits and the session ends, ephemeral nodes are deleted automatically, as shown below.
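
A quick way to verify this (a sketch; the node name /tmpnode is only an example and the ls ordering is illustrative):

] create -e /tmpnode "temp"
Created /tmpnode
] quit
[root@node1 bin]# ./zkCli.sh
] ls /
[miselehe, zookeeper]

Note that /mslh0000000002 created earlier was also ephemeral, so it disappears as well once that session ends.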

delete can only remove a node that has no children; rmr deletes a node and all of its descendants recursively:

] create /server "server"
Created /server
] create /server/one "1"
Created /server/one
] create /server/two "2"
Created /server/two
] ls /server
[two, one]
] delete /server
Node not empty: /server
] rmr /server
] ls /server
Node does not exist: /server


List child nodes

] ls /
[miselehe, zookeeper]
] ls /miselehe
[]

Get detailed node information

] ls2 /miselehe
[]
cZxid = 0x100000002
ctime = Fri Dec 04 19:48:29 CST 2020
mZxid = 0x100000002
mtime = Fri Dec 04 19:48:29 CST 2020
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 16
numChildren = 0

Get node data

] get /miselehe
www.miselehe.com
cZxid = 0x100000002
ctime = Fri Dec 04 19:48:29 CST 2020
mZxid = 0x100000002
mtime = Fri Dec 04 19:48:29 CST 2020
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 16
numChildren = 0


Update node data

] set /miselehe "miselehe.com"
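
Each successful set advances mZxid, mtime, and dataVersion, which a subsequent get shows; the mZxid and mtime values below are illustrative:

] get /miselehe
miselehe.com
cZxid = 0x100000002
ctime = Fri Dec 04 19:48:29 CST 2020
mZxid = 0x100000009
mtime = Fri Dec 04 19:55:03 CST 2020
pZxid = 0x100000002
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 12
numChildren = 0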


Add a watch (register the watch from the client on server.1, then modify the node from the client on server.3.)

server.1 ] get /miselehe watch
server.3 ] set /miselehe "mslh"
server.1 ]
WATCHER::

WatchedEvent state:SyncConnected type:NodeDataChanged path:/miselehe

server.1 ] ls /miselehe watch
server.3 ] create /miselehe/one "1"
Created /miselehe/one
server.1 ]
WATCHER::

WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/miselehe

A watch set with get fires on data changes; a watch set with ls fires on changes to child nodes. Each watch fires only once, as shown below.
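
Because watches are one-shot, another set from server.3 prints nothing on server.1 until the watch is registered again with get ... watch; a sketch continuing the session above (the new values are arbitrary):

server.3 ] set /miselehe "mslh2"
server.1 ] get /miselehe watch
server.3 ] set /miselehe "mslh3"
server.1 ]
WATCHER::

WatchedEvent state:SyncConnected type:NodeDataChanged path:/miselehe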


Please credit the source when reposting: http://www.miselehe.com/article/view/16