A Big Data Platform Power Tool: Installing and Deploying Ambari

What Is Ambari

Ambari is itself a piece of distributed software, made up of two main parts: Ambari Server and Ambari Agent. In short, the user instructs the Ambari Agents, via the Ambari Server, to install the corresponding software; each Agent periodically reports the status of every software component on its machine back to the Ambari Server, and these status reports are ultimately presented in the Ambari GUI, so users can see the state of the cluster at a glance and carry out maintenance accordingly.
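Once a cluster is running, the same status information the Agents report is also exposed through Ambari's REST API, which is handy for scripting health checks. A minimal sketch, assuming the default admin/admin credentials, the default port 8080, and a placeholder server address:

# List registered hosts and their state as seen by the server
curl -s -u admin:admin http://<ambari-server>:8080/api/v1/hosts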

Ambari Installation and Deployment

Environment Preparation

Versions used: ambari-2.7.3 and HDP-3.1.0.0.

1. Download the installation packages (master node)

wget http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.7.3.0/ambari-2.7.3.0-centos7.tar.gz
wget http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.1.0.0/HDP-3.1.0.0-centos7-rpm.tar.gz
wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7/HDP-UTILS-1.1.0.22-centos7.tar.gz
wget http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.1.0.0/HDP-GPL-3.1.0.0-centos7-gpl.tar.gz

2. Configure the Java environment (all nodes)

# Download the JDK yourself
tar -zxvf jdk-8u181-linux-x64.tar.gz -C /usr/lib/jvm/
# Set the environment variables (e.g. in /etc/profile)
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_181
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
# Apply the environment variables
source /etc/profile
# Check the Java version
java -version

3. Set the hostname and hosts file (all nodes)

# Set the hostname
hostnamectl set-hostname hadoop17

# Set the hosts file
vim /etc/hosts   # comment out any other IP entries
172.16.100.15 hadoop15
172.16.100.16 hadoop16
172.16.100.17 hadoop17
172.16.100.1x hadoop1x
...

# Edit vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop15
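Before moving on, it is worth confirming that the names resolve; a quick sketch, assuming the three example hosts above:

# Each name should resolve via /etc/hosts and answer a ping
for h in hadoop15 hadoop16 hadoop17; do ping -c 1 $h; done
# Confirm the local hostname change took effect
hostname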

4. Clock synchronization (all nodes)

Every machine's clock must be synchronized.

# Install ntpdate
yum -y install ntp

# One-off clock sync; the machine must be able to reach the NTP server
ntpdate 10.211.0.101

# Cron job for periodic clock sync
crontab -e
*/15 * * * * ntpdate 10.211.0.101
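To check the offset against the NTP server without actually stepping the clock, ntpdate's query mode can be used (same example server as above):

# Query only; does not change the local clock
ntpdate -q 10.211.0.101
# Eyeball the wall-clock time across nodes (host list is an example)
for h in hadoop15 hadoop16 hadoop17; do ssh $h date; done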

5. Raise the open-file limits (all nodes)

vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
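The new limits apply to the next login session; a quick way to verify after logging in again:

ulimit -n   # open files, expect 65536
ulimit -u   # max user processes, expect 131072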

6. Disable the firewall (all nodes)

systemctl stop firewalld.service
systemctl disable firewalld.service
# Disable SELinux in its config file
vim /etc/selinux/config
SELINUX=disabled
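SELINUX=disabled only takes effect after a reboot; to stop enforcement immediately on the running system as well:

# Switch SELinux to permissive mode for the current boot
setenforce 0
# Confirm the current mode
getenforce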

7. Passwordless SSH login (master node)

# Configure passwordless login from hadoop15 to the other nodes; run this on hadoop15
ssh-keygen -t rsa
ssh-copy-id hadoop16   # repeat for the other 11 machines

# Verify that passwordless SSH works
ssh hadoop15 hostname;ssh hadoop16 hostname;ssh hadoop17 hostname ....
# Copy the private key (needed later)
cp ~/.ssh/id_rsa /data/soft

# Set the umask
sh -c "echo umask 0022 >> /etc/profile"
# Explanation: umask 022 makes newly created directories default to 755, i.e. rwxr-xr-x
# (owner: full access; group: read and execute; others: read and execute)
# Reference: https://www.cnblogs.com/walblog/articles/7903319.html
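Copying the key to eleven nodes one at a time is tedious; a minimal loop sketch, assuming the other nodes are named hadoop16 through hadoop26:

# Hostname range is an assumption; adjust to your cluster
for i in $(seq 16 26); do ssh-copy-id hadoop$i; done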

8. Install MySQL

# Install the MySQL repo; CentOS 7 does not ship one by default
rpm -Uvh http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm
yum install mysql-server -y
# Start MySQL
service mysqld start
service mysqld status
# Set the root password
mysqladmin -uroot password root
# Log in
mysql -uroot -proot
# Enable remote access
use mysql
update user set host = '%' where user = 'root';   # this statement may report an error; it can be ignored

# Note: after this change, local logins may keep failing even though remote logins work.
# The cause is an anonymous user (empty user name) with local-only access in mysql.user;
# it matches first at login time and blocks the root login.
select user,host from mysql.user;
# Deleting that user fixes it
drop user ''@localhost;
flush privileges;
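A quick sanity check that both local and remote logins now work, using the example master IP 172.16.100.15 from earlier:

# Local login
mysql -uroot -proot -e "select version();"
# Remote login, run from any other node
mysql -h 172.16.100.15 -uroot -proot -e "select user();"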

Setting up a local yum repository for offline installation

1. Install the httpd service (main server)

yum -y install httpd
service httpd restart
chkconfig httpd on

2. Put the downloaded packages under /var/www/html (main server)

[root@hadoop15 ~]$ mkdir /var/www/html/hdp   # create a directory under httpd's default root
[root@hadoop15 ~]$ cd /var/www/html/hdp      # enter the new directory
[hdfs@hadoop15 hdp]$ ll
total 20
drwxr-xr-x 3 root root 4096 Jul 28 12:20 ambari
drwxr-xr-x 3 ambari users 4096 Dec 11 2018 HDP
drwxr-xr-x 3 root root 4096 Jul 28 19:54 HDP-GPL
drwxr-xr-x 3 root root 4096 Jul 28 12:16 HDP-UTILS-1.1.0.22
drwxr-xr-x 2 root root 4096 Jul 28 12:27 repodata
# Upload the packages downloaded earlier, then unpack them
[root@hadoop15 hdp]$ tar -zxvf ambari-2.7.3.0-centos7.tar.gz
[root@hadoop15 hdp]$ tar -zxvf HDP-3.1.0.0-centos7-rpm.tar.gz
[root@hadoop15 hdp]$ tar -zxvf HDP-UTILS-1.1.0.22-centos7.tar.gz
[root@hadoop15 hdp]$ tar -zxvf HDP-GPL-3.1.0.0-centos7-gpl.tar.gz

If the system disk is small, you can change the path that /var/www/html points to:

Run vim /etc/httpd/conf/httpd.conf.
Find the DocumentRoot "/var/www/html" line (Apache's document root) and change /var/www/html to /data (or whichever directory you prefer).

Then find <Directory "/var/www/html"> (the section that defines that area) and change /var/www/html to /data as well.

Restart the httpd service.
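A minimal sketch of the two changed lines, assuming /data as the new document root:

# /etc/httpd/conf/httpd.conf (only the changed lines shown)
DocumentRoot "/data"
<Directory "/data">

# Restart to apply
service httpd restart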

3. Visit http://ip/hdp to check that it is accessible

If it works, you will see the unpacked installation files.

# Check that the repo is reachable
http://10.34.50.142/hdp/

4. Build the local repository

# Install the repo-building tools (main server)
[root@hadoop15 hdp]# yum install yum-utils createrepo yum-plugin-priorities -y
[root@hadoop15 hdp]# createrepo ./

5. Point the repo files at the local mirror (main server)

  • The ambari repo file
# 1. Open the repo file
[root@hadoop15 hdp]# vim ambari/centos7/2.7.3.0-139/ambari.repo
# 2. Edit the contents as follows
#VERSION_NUMBER=2.7.3.0-139
[ambari-2.7.3.0]
#json.url = http://public-repo-1.hortonworks.com/HDP/hdp_urlinfo.json
name=ambari Version - ambari-2.7.3.0
baseurl=http://172.16.100.15/hdp/ambari/centos7/2.7.3.0-139
gpgcheck=1
gpgkey=http://172.16.100.15/hdp/ambari/centos7/2.7.3.0-139/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
# 3. Copy the repo file into place
[root@hadoop15 hdp]# cp ambari/centos7/2.7.3.0-139/ambari.repo /etc/yum.repos.d/
  • The HDP repo file
# 1. Open the repo file
[root@hadoop15 hdp]# vim HDP/centos7/3.1.0.0-78/hdp.repo
# 2. Edit the contents as follows
#VERSION_NUMBER=3.1.0.0-78
[HDP-3.1.0.0]
name=HDP Version - HDP-3.1.0.0
baseurl=http://172.16.100.15/hdp/HDP/centos7/3.1.0.0-78
gpgcheck=1
gpgkey=http://172.16.100.15/hdp/HDP/centos7/3.1.0.0-78/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1


[HDP-UTILS-1.1.0.22]
name=HDP-UTILS Version - HDP-UTILS-1.1.0.22
baseurl=http://172.16.100.15/hdp/HDP-UTILS-1.1.0.22/HDP-UTILS/centos7/1.1.0.22/
gpgcheck=1
gpgkey=http://172.16.100.15/hdp/HDP-UTILS-1.1.0.22/HDP-UTILS/centos7/1.1.0.22/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
# 3. Copy the repo file into place
[root@hadoop15 hdp]# cp HDP/centos7/3.1.0.0-78/hdp.repo /etc/yum.repos.d/
  • Clear the yum cache
yum clean all
yum makecache
yum repolist
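If the repo files are correct, the local mirrors should now show up in the repo list:

# Expect the ambari and HDP repos to be listed and enabled
yum repolist enabled | grep -Ei 'ambari|hdp'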

6. Distribute the repo files to the worker nodes with a script

  • The xsync script
#!/bin/bash
# 1. Get the number of arguments; exit immediately if none were given
pcount=$#
if((pcount==0)); then
echo no args;
exit;
fi
# 2. Get the file name
p1=$1
fname=`basename $p1`
echo fname=$fname
# 3. Resolve the parent directory to an absolute path
pdir=`cd -P $(dirname $p1); pwd`
echo pdir=$pdir
# 4. Get the current user name
user=`whoami`
# 5. Loop over the nodes (adjust the host range and subnet to your cluster)
for((host=15; host<125; host++)); do
echo ------------------- hadoop$host --------------
rsync -rvl $pdir/$fname $user@10.0.14.$host:$pdir
done

# Make it executable: chmod 777 xsync
  • Distribute to the worker nodes
# Be sure to distribute the repo files from the yum.repos.d directory
[root@hadoop15 yum.repos.d]# xsync ambari.repo
[root@hadoop15 yum.repos.d]# xsync hdp.repo

Installing Ambari-server

  • On the master node
yum -y install ambari-server
  • Create the MySQL metadata databases for ambari, hive, oozie, ranger, rangerkms, and superset
# ambari
CREATE DATABASE ambari;
use ambari;
CREATE USER 'ambari'@'%' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
CREATE USER 'ambari'@'hadoop15' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'hadoop15';
FLUSH PRIVILEGES;
source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
show tables;
use mysql;
select Host,User,Password from user where user='ambari';

# hive
CREATE DATABASE hive;
use hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
CREATE USER 'hive'@'hadoop15' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hadoop15';
FLUSH PRIVILEGES;

# oozie
CREATE DATABASE oozie;
use oozie;
CREATE USER 'oozie'@'%' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'%';
CREATE USER 'oozie'@'localhost' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'localhost';
CREATE USER 'oozie'@'hadoop15' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'hadoop15';
FLUSH PRIVILEGES;

# ranger
DROP DATABASE IF EXISTS rangdb;
CREATE DATABASE rangdb;
use rangdb;
CREATE USER 'rangdb'@'%' IDENTIFIED BY 'rangdb';
GRANT ALL PRIVILEGES ON *.* TO 'rangdb'@'%';
CREATE USER 'rangdb'@'localhost' IDENTIFIED BY 'rangdb';
GRANT ALL PRIVILEGES ON *.* TO 'rangdb'@'localhost';
CREATE USER 'rangdb'@'hadoop15' IDENTIFIED BY 'rangdb';
GRANT ALL PRIVILEGES ON *.* TO 'rangdb'@'hadoop15';
FLUSH PRIVILEGES;

# rangerkms
DROP DATABASE IF EXISTS rangerkms;
CREATE DATABASE rangerkms;
use rangerkms;
CREATE USER 'rangerkms'@'%' IDENTIFIED BY 'rangerkms';
GRANT ALL PRIVILEGES ON *.* TO 'rangerkms'@'%';
CREATE USER 'rangerkms'@'localhost' IDENTIFIED BY 'rangerkms';
GRANT ALL PRIVILEGES ON *.* TO 'rangerkms'@'localhost';
CREATE USER 'rangerkms'@'hadoop15' IDENTIFIED BY 'rangerkms';
GRANT ALL PRIVILEGES ON *.* TO 'rangerkms'@'hadoop15';
FLUSH PRIVILEGES;

#superset
DROP DATABASE IF EXISTS superset;
CREATE DATABASE superset;
use superset;
CREATE USER 'superset'@'%' IDENTIFIED BY 'superset';
GRANT ALL PRIVILEGES ON *.* TO 'superset'@'%';
CREATE USER 'superset'@'localhost' IDENTIFIED BY 'superset';
GRANT ALL PRIVILEGES ON *.* TO 'superset'@'localhost';
CREATE USER 'superset'@'hadoop15' IDENTIFIED BY 'superset';
GRANT ALL PRIVILEGES ON *.* TO 'superset'@'hadoop15';
FLUSH PRIVILEGES;
  • Install the MySQL driver and connect MySQL to ambari-server
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.46.zip
mkdir /usr/share/java
# Unzip the download, then copy the jar from the extracted directory
cp <extracted-dir>/mysql-connector-java-5.1.46.jar /usr/share/java/mysql-connector-java.jar
cp /usr/share/java/mysql-connector-java.jar /var/lib/ambari-server/resources/mysql-jdbc-driver.jar
# Add the driver path to Ambari's configuration
vi /etc/ambari-server/conf/ambari.properties
server.jdbc.driver.path=/usr/share/java/mysql-connector-java.jar

Initializing and starting ambari-server

[root@hadoop15 ~]# ambari-server setup
The setup flow below is interactive; follow the prompts.
(1) Asked whether to customize settings. Enter: y
Customize user account for ambari-server daemon [y/n] (n)? y
(2) The ambari-server account.
Enter user account for ambari-server daemon (root):
Pressing Enter accepts the default root user.
If you enter a user that already exists, it shows:
Enter user account for ambari-server daemon (root):ambari
Adjusting ambari-server permissions and ownership...
(3) Check that the firewall is off
Adjusting ambari-server permissions and ownership...
Checking firewall...
WARNING: iptables is running. Confirm the necessary Ambari ports are accessible. Refer to the Ambari documentation for more details on ports.
OK to continue [y/n] (y)?
Press Enter.
(4) Configure the JDK. Enter: 3
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? y
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Oracle JDK 1.7 + Java Cryptography Extension (JCE) Policy Files 7
[3] Custom JDK
==============================================================================
Enter choice (1): 3
If you chose 3 (Custom JDK) above, you must supply JAVA_HOME; enter the path configured earlier: /usr/lib/jvm/jdk1.8.0_181
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If you plan to use Kerberos,please make sure JCE Unlimited Strength Jurisdiction Policy Files are valid on all hosts.
Path to JAVA_HOME: /usr/lib/jvm/jdk1.8.0_181
Validating JDK on Ambari Server...done.
Completing setup...
(5) Database configuration. Choose: y
Configuring database...
Enter advanced database configuration [y/n] (n)? y
(6) Choose the database type. Enter: 3
Configuring database...
==============================================================================
Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
[3] - MySQL
[4] - PostgreSQL
[5] - Microsoft SQL Server (Tech Preview)
[6] - SQL Anywhere
==============================================================================
Enter choice (3): 3
(7) Enter the database connection details. Press Enter to accept the default shown in parentheses, or type a new value. The password must match the one set for the ambari MySQL user earlier.
Hostname (localhost):
Port (3306):
Database name (ambari):
Username (ambari):
Enter Database Password (bigdata):ambarizk123
Re-Enter password: ambarizk123
(8) Load the Ambari database schema into the database
WARNING: Before starting Ambari Server, you must run the following DDL against the database to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
Proceed with configuring remote database connection properties [y/n] (y)?
[root@hadoop15 ~]# ambari-server start
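Before moving on, confirm the server actually came up; the status command and the server log are the first places to look:

ambari-server status
tail -n 50 /var/log/ambari-server/ambari-server.log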

Installing ambari-agent

  • Install on all nodes
yum install ambari-agent -y
  • Edit the configuration file
# Open the config file
vi /etc/ambari-agent/conf/ambari-agent.ini
# Set hostname to the control (master) node's machine name (an IP also works);
# every machine's agent must point at the master node.
[server]
hostname=hadoop15   # this is the only value that needs changing
url_port=8440
secured_url_port=8441
connect_retry_delay=10
max_reconnect_retry_delay=30
...
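With the config in place, start the agent on every node so it can register with the server:

ambari-agent start
ambari-agent status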

Accessing the ambari-server web UI

Default port 8080; Username: admin; Password: admin; http://ip:8080

Note: the IP is the master node's IP.

Installing the Hive component requires running the following first:

ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

Installing, configuring, and deploying the HDP cluster

  • Log in to the web UI

  • Configure the local repository: select the matching OS and delete the other entries

  • Configure the cluster nodes and the SSH private key

  • Select the components to install

  • Set each component's account and password, matching what was created in MySQL

  • Configure and test the database JDBC settings for hive, oozie, ranger, rangerkms, and superset

Installation complete

(Screenshot: ambari_bushu.jpg, the cluster after a successful deployment)

Reinstalling Ambari

1.1.1. Stop all components

Shut down every component in the cluster through Ambari; if a component refuses to stop, just kill -9 its process.

1.1.2. Stop ambari-server and ambari-agent

ambari-server stop
ambari-agent stop

1.1.3. Remove all Ambari components with yum

sudo yum remove -y hadoop_3* ranger* zookeeper* atlas-metadata* ambari* spark* slide* hive* oozie* pig* tez* hbase* knox* storm* accumulo* falcon* smartsense*

1.1.4. Delete the leftover files

Take special care here: double-check before deleting anything.

When Ambari installs a Hadoop cluster it creates a number of users; when tearing the cluster down, it is necessary to remove these users and delete the corresponding directories. Doing so avoids file-permission errors when the cluster runs again. In short, delete everything Ambari created itself, otherwise the reinstall will fail with all sorts of "file not found" errors.

sudo userdel oozie
sudo userdel hive
sudo userdel ambari-qa
sudo userdel flume
sudo userdel hdfs
sudo userdel knox
sudo userdel storm
sudo userdel mapred
sudo userdel hbase
sudo userdel tez
sudo userdel zookeeper
sudo userdel kafka
sudo userdel falcon
sudo userdel sqoop
sudo userdel yarn
sudo userdel hcat
sudo userdel atlas
sudo userdel spark
sudo userdel ams
sudo userdel zeppelin

sudo rm -rf /home/atlas
sudo rm -rf /home/accumulo
sudo rm -rf /home/hbase
sudo rm -rf /home/hive
sudo rm -rf /home/oozie
sudo rm -rf /home/storm
sudo rm -rf /home/yarn
sudo rm -rf /home/ambari-qa
sudo rm -rf /home/falcon
sudo rm -rf /home/hcat
sudo rm -rf /home/kafka
sudo rm -rf /home/mahout
sudo rm -rf /home/spark
sudo rm -rf /home/tez
sudo rm -rf /home/zookeeper
sudo rm -rf /home/flume
sudo rm -rf /home/hdfs
sudo rm -rf /home/knox
sudo rm -rf /home/mapred
sudo rm -rf /home/sqoop

# Be careful with the next three
sudo rm -rf /var/lib/ambari*
sudo rm -rf /usr/lib/ambari-*
sudo rm -rf /usr/lib/ams-hbase*


sudo rm -rf /etc/ambari-*
sudo rm -rf /etc/hadoop
sudo rm -rf /etc/hbase
sudo rm -rf /etc/hive*
sudo rm -rf /etc/sqoop
sudo rm -rf /etc/zookeeper
sudo rm -rf /etc/tez*
sudo rm -rf /etc/spark2
sudo rm -rf /etc/phoenix
sudo rm -rf /etc/kafka

sudo rm -rf /var/run/spark*
sudo rm -rf /var/run/hadoop*
sudo rm -rf /var/run/hbase
sudo rm -rf /var/run/zookeeper
sudo rm -rf /var/run/hive*
sudo rm -rf /var/run/sqoop
sudo rm -rf /var/run/ambari-*
sudo rm -rf /var/log/hadoop*
sudo rm -rf /var/log/hive*
sudo rm -rf /var/log/ambari-*
sudo rm -rf /var/log/hbase
sudo rm -rf /var/log/sqoop

sudo rm -rf /usr/hdp

sudo rm -rf /usr/bin/zookeeper-*
sudo rm -rf /usr/bin/yarn
sudo rm -rf /usr/bin/sqoop*
sudo rm -rf /usr/bin/ranger-admin-start
sudo rm -rf /usr/bin/ranger-admin-stop
sudo rm -rf /usr/bin/ranger-kms
sudo rm -rf /usr/bin/phoenix-psql
sudo rm -rf /usr/bin/phoenix-*
sudo rm -rf /usr/bin/mapred
sudo rm -rf /usr/bin/hive
sudo rm -rf /usr/bin/hiveserver2
sudo rm -rf /usr/bin/hbase
sudo rm -rf /usr/bin/hcat
sudo rm -rf /usr/bin/hdfs
sudo rm -rf /usr/bin/hadoop
sudo rm -rf /usr/bin/beeline

sudo rpm -qa | grep ambari      # list the remaining packages to remove
sudo rpm -e --nodeps <each item in the list>

sudo rpm -qa | grep zookeeper
sudo rpm -e --nodeps <each item in the list>
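The list-then-remove steps can be collapsed into one line per package family; a sketch, assuming the same grep patterns as above:

# Remove every matching package in one pass (-r skips rpm when the list is empty)
rpm -qa | grep ambari | xargs -r sudo rpm -e --nodeps
rpm -qa | grep zookeeper | xargs -r sudo rpm -e --nodeps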

1.1.5. Clean up the database

Drop the ambari database in MySQL.

drop database ambari;
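If the other service databases created earlier should also be rebuilt from scratch, they can be dropped the same way; this goes beyond the original cleanup list, so treat it as optional:

# Optional: also drop the service metadata databases (names from the setup above)
mysql -uroot -proot -e "DROP DATABASE IF EXISTS hive; DROP DATABASE IF EXISTS oozie; DROP DATABASE IF EXISTS rangdb; DROP DATABASE IF EXISTS rangerkms; DROP DATABASE IF EXISTS superset;"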

1.1.6. Reinstall Ambari

After the cleanup above, reinstalling Ambari and the Hadoop cluster (including HDFS, YARN + MapReduce2, ZooKeeper, Ambari Metrics, and Spark) succeeded.

Problems encountered

1. On reinstall, the hostname was not configured

vi /etc/ambari-agent/conf/ambari-agent.ini

2. The yum cache was not cleared

3. Files were not deleted completely