ClickHouse (Part 12: Pitfalls and Lessons)
Q1
DB::Exception: Cannot create table from metadata file /data/clickhouse/metadata/default/dwd_test.sql, error:
DB::Exception: The local set of parts of table default.dwd_test doesn’t look like the set of parts in ZooKeeper: 65.88 million rows of 85.04 million total rows in filesystem are suspicious. There are 545 unexpected parts with 65883643 rows (191 of them is not just-written with 65883643 rows), 0 missing parts (with 0 blocks).
A1
This is caused by DDL statements such as TRUNCATE or ALTER being run with ON CLUSTER, which occasionally leaves ZooKeeper out of sync.
Fix 1: delete the table's local data on the problematic node (rm -r /data/clickhouse/data/default/dwd_test), then restart ClickHouse; the replica will automatically re-sync the table's data. (Do not use this method if the table has no replica.)
Fix 2: run sudo -u clickhouse touch /data/clickhouse/flags/force_restore_data, then manually restore the problematic partitions.
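Before choosing a fix, the local/ZooKeeper mismatch can be inspected from SQL. A sketch using the system tables (table name taken from the error above):

```sql
-- List the parts ClickHouse sees on local disk for the affected table
SELECT name, active, rows
FROM system.parts
WHERE database = 'default' AND table = 'dwd_test'
ORDER BY name;

-- Check the replica's health and whether it went read-only
SELECT is_readonly, total_replicas, active_replicas
FROM system.replicas
WHERE database = 'default' AND table = 'dwd_test';
```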
Q2
Connected to ClickHouse server version 20.3.8 revision 54433.
Poco::Exception. Code: 1000, e.code() = 13, e.displayText() = Access to file denied: /home/qspace/.clickhouse-client-history (version 20.3.8.53 (official build))
A2
Create the file and grant it the needed permissions:
chown clickhouse:clickhouse /home/qspace/.clickhouse-client-history (create the file first if it does not exist)
Q3
Application: DB::Exception: Listen [::]:8124 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.5.2.7 (official build))
A3
IPv6 is not enabled on this machine, so only IPv4 works. In /etc/l, change <listen_host> to 0.0.0.0 or ::
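The corresponding fragment of the server configuration (assuming the stock config.xml layout) would look like:

```xml
<!-- Listen on all IPv4 interfaces only; use <listen_host>::</listen_host>
     instead once IPv6 is available on the host -->
<listen_host>0.0.0.0</listen_host>
```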
Q4
Code: 32, e.displayText() = DB::Exception: Received from hadoop8:9900. DB::Exception: Attempt to read after eof: Cannot parse Int32 from String, because value is too short. (version 20.3.8.53 (official build))
String-to-number conversion failed: some values are empty or contain non-numeric characters.
A4
Use the toUInt64OrZero function; values that fail to convert become 0.
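A quick illustration (ClickHouse also ships related variants such as toInt64OrZero and toUInt64OrNull):

```sql
-- toUInt64OrZero returns 0 instead of throwing on unparseable input
SELECT
    toUInt64OrZero('123') AS ok_value,   -- 123
    toUInt64OrZero('')    AS empty_str,  -- 0
    toUInt64OrZero('abc') AS bad_str;    -- 0
```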
Q5
Application: DB::Exception: Suspiciously many (125) broken parts to remove.: Cannot attach st
Code: 231. DB::Exception: Received from ck10:9000. DB::Exception: Suspiciously many (125) broken parts to remove…
A5
This is a metadata/data inconsistency caused by writes. First delete the data on disk, then restart the node and drop the local table; for a replicated table, also remove the replica's entry in ZooKeeper, then recreate the table. With replicated tables, the data will be re-synced from the other replicas.
Q6
Cannot execute replicated DDL query on leader.
A6
Distributed DDL statements can take a long time and time out before responding. Run them locally instead, or narrow the range of data they touch, e.g. change a full-table ALTER or OPTIMIZE to a specific partition.
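For example, narrowing the scope to one partition (table and partition names here are placeholders, not from the source):

```sql
-- Instead of a full-table statement such as:
--   OPTIMIZE TABLE default.dwd_test FINAL;
-- restrict it to a single partition:
OPTIMIZE TABLE default.dwd_test PARTITION '20200523' FINAL;

-- Likewise, prefer partition-scoped ALTERs:
ALTER TABLE default.dwd_test DROP PARTITION '20200520';
```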
Q7
Code: 76. DB::Exception: Received from 0.0.0.0:9900. DB::Exception: Cannot open file
/data/clickhouse/data/default/test/tmp_insert_20200523_55575_55575_0/f0560_deep_conversion_optimization_k2, errno: 24, strerror: Too many open files.
A7
Edit /etc/f and add:
clickhouse soft nofile 262144
clickhouse hard nofile 262144
Note that ulimit -n shows the limit of your current shell session, not what the ClickHouse process is actually running with — the process may still be on the system default — so do not be misled by that command. After restarting ClickHouse, find its PID and run cat /proc/${pid}/limits | grep open to verify the new limit took effect.
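The per-process check can be sketched like this (shown against the current process via /proc/self; for ClickHouse, substitute the server's PID, e.g. $(pidof clickhouse-server)):

```shell
# Print the soft and hard "open files" limits the process actually
# runs with, straight from /proc
grep "Max open files" /proc/self/limits
```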
Q8
DB::Exception: Too many parts(100).
A8
The following changes were made to the merge_tree settings in the server configuration:
<merge_tree>
<parts_to_delay_insert>300</parts_to_delay_insert>
<parts_to_throw_insert>600</parts_to_throw_insert>
<max_delay_to_insert>2</max_delay_to_insert>
<max_suspicious_broken_parts>5</max_suspicious_broken_parts>
</merge_tree>
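Raising these thresholds only buys time; the usual root cause of "too many parts" is many small, frequent inserts, so batching writes helps more. Part counts per partition can be watched with a query like this (table name is illustrative):

```sql
-- Count active parts per partition, worst offenders first
SELECT partition, count() AS parts
FROM system.parts
WHERE database = 'default' AND table = 'dwd_test' AND active
GROUP BY partition
ORDER BY parts DESC;
```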
Q9
Code: 117. DB::Exception: Received from hadoop8:9900. DB::Exception: Data directory for table already containing data parts - probably it was unclean DROP table or manual intervention. You must either clear directory by hand or use ATTACH TABLE instead of CREATE TABLE if you need to use that parts…
A9
When recreating the table, the data from the previous table is still on disk. If you need that data, change CREATE to ATTACH; if you do not, clear the table's directory on disk, and clean up any related entries in ZooKeeper as well.
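A sketch of the ATTACH route: the statement looks like CREATE TABLE but re-registers the existing parts instead of starting empty. The schema must match the on-disk data; the table definition below is purely illustrative.

```sql
-- Re-attach the leftover data instead of creating a fresh table
ATTACH TABLE default.dwd_test
(
    id UInt64,
    dt Date
)
ENGINE = MergeTree
PARTITION BY dt
ORDER BY id;
```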
Q10
Received exception from server (version 20.3.8):
Code: 999. DB::Exception: Received from hadoop7:9900. DB::Exception: Session expired (Session expired).
A10
Check the logs; this points to a ZooKeeper problem — the session between ClickHouse and ZooKeeper expired.
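If expirations recur (e.g. under GC pauses or network hiccups), one knob worth reviewing is the ZooKeeper session timeout in the server configuration; hostnames and values below are illustrative, not from the source:

```xml
<zookeeper>
    <node>
        <host>zk1</host>
        <port>2181</port>
    </node>
    <!-- 30000 ms is the ClickHouse default; raise cautiously -->
    <session_timeout_ms>30000</session_timeout_ms>
</zookeeper>
```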
