A reasonable interpretation of P (partition tolerance) in the CAP theorem
The explanations of Partition Tolerance in the CAP theorem that turn up in web searches are often inaccurate. Henry Robinson's article (no longer accessible) analysed the term and explained its importance in distributed systems of different scales.
Partition Tolerance is now usually rendered in Chinese as 分区容错性 ("partition fault tolerance"), but that wording is misleading: "partition" here really means being cut off, i.e. the system must tolerate some nodes becoming isolated from the rest.
The ‘CAP’ theorem is a hot topic in the design of distributed data storage systems. However, it’s often widely misused. In this post I hope to highlight why the common consistency, availability and partition tolerance: pick two formulation is inadequate for distributed systems. In fact, the lesson of the theorem is that the choice is almost always between sequential consistency and high availability.
It’s very common to invoke the CAP theorem when designing, or talking about designing, distributed data storage systems. The theorem, as commonly stated, gives system designers a choice between three competing guarantees:
Consistency – roughly meaning that all clients of a data store get responses to requests that ‘make sense’. For example, if Client A writes 1 then 2 to location X, Client B cannot read 2 followed by 1.
Availability – all operations on a data store eventually return successfully. We say that a data store is ‘available’ for, e.g., write operations.
Partition tolerance – if the network stops delivering messages between two sets of servers, will the system continue to work correctly?
This is often summarised as a single sentence: “consistency, availability, partition tolerance. Pick two.”. Short, snappy and useful.
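As a toy illustration of the consistency definition above (my own sketch, not part of the original article; the names WRITE_ORDER and reads_ok are invented), the snippet below checks whether a client's observed reads ever run backwards against the writer's order:

```python
# Toy sketch: Client A wrote 1 then 2 to location X; a consistent store must
# never let Client B observe 2 followed by 1.
WRITE_ORDER = [1, 2]

def reads_ok(observed):
    """True if the observed reads never move backwards in the write order."""
    last_index = -1
    for value in observed:
        idx = WRITE_ORDER.index(value)
        if idx < last_index:
            return False   # e.g. reading 2 and then 1 is the anomaly above
        last_index = idx
    return True

print(reads_ok([1, 2]))  # True  - allowed
print(reads_ok([2, 1]))  # False - forbidden by the consistency guarantee
```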
At least, that’s the conventional wisdom. Many modern distributed data stores, including those often caught under the ‘NoSQL’ net, pride themselves on offering availability and partition tolerance over strong consistency; the reasoning being that short periods of application misbehavior are less problematic than short periods of unavailability. Indeed, Dr. Michael Stonebraker posted an article on the ACM’s blog bemoaning the preponderance of systems that are choosing the ‘AP’ data point, and that consistency and availability are the two to choose. However, for the vast majority of systems, I contend that the choice is almost always between consistency and availability, and unavoidably so.
Dr. Stonebraker’s central thesis is that, since partitions are rare, we might simply sacrifice ‘partition-tolerance’ in favour of sequential consistency and availability – a model that is well suited to traditional transactional data processing and the maintenance of the good old ACID invariants of most relational databases. I want to illustrate why this is a misinterpretation of the CAP theorem.
We first need to get exactly what is meant by ‘partition tolerance’ straight. Dr. Stonebraker asserts that a system is partition tolerant if processing can continue in both partitions in the case of a network failure.
“If there is a network failure that splits the processing nodes into two groups that cannot talk to each other, then the goal would be to allow processing to continue in both subgroups.”
This is actually a very strong partition tolerance requirement. Digging into the history of the CAP theorem reveals some divergence from this definition.
Seth Gilbert and Professor Nancy Lynch provided both a formalisation and a proof of the CAP theorem in their 2002 SIGACT paper. We should defer to their definition of partition tolerance – if we are going to invoke CAP as a mathematical truth, we should formalize our foundations, otherwise we are building on very shaky ground. Gilbert and Lynch define partition tolerance as follows:
“The network will be allowed to lose arbitrarily many messages sent from one node to another”
Note that Gilbert and Lynch’s definition isn’t a property of a distributed application, but a property of the network in which it executes. This is often misunderstood: partition tolerance is not something we have a choice about designing into our systems. If you have a partition in your network, you lose either consistency (because you allow updates to both sides of the partition) or you lose availability (because you detect the error and shutdown the system until the error condition is resolved). Partition tolerance means simply developing a coping strategy by choosing which of the other system properties to drop. This is the real lesson of the CAP theorem – if you have a network that may drop messages, then you cannot have both availability and consistency, you must choose one. We should really be writing Possibility of Network Partitions => not(availability and consistency), but that’s not nearly so snappy.
Dr. Stonebraker’s definition of partition tolerance is actually a measure of availability – if a write may go to either partition, will it eventually be responded to? This is a very meaningful question for systems distributed across many geographic locations, but for the LAN case it is less common to have two partitions available for writes. However, it is encompassed by the requirement for availability that we already gave – if your system is available for writes at all times, then it is certainly available for writes during a network partition.
So what causes partitions? Two things, really. The first is obvious – a network failure, for example due to a faulty switch, can cause the network to partition. The other is less obvious, but fits with the definition from Gilbert and Lynch: machine failures, either hard or soft. In an asynchronous network, i.e. one where processing a message could take unbounded time, it is impossible to distinguish between machine failures and lost messages. Therefore a single machine failure partitions it from the rest of the network. A correlated failure of several machines partitions them all from the network. Not being able to receive a message is the same as the network not delivering it. In the face of sufficiently many machine failures, it is still impossible to maintain availability and consistency, not because two writes may go to separate partitions, but because the failure of an entire ‘quorum’ of servers may render some recent writes unreadable.
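To make the "machine failures look like partitions" point concrete, here is a small sketch of my own (not from the article): with only timeouts to go on, a crashed machine, a dropped packet and an arbitrarily slow peer are indistinguishable to the caller, so a single machine failure behaves exactly like a network partition.

```python
import socket

def appears_failed(host, port, timeout=1.0):
    """Return True if no TCP connection can be made within `timeout` seconds.
    In an asynchronous network this says nothing about *why*: a dead machine,
    a lost message or a very slow peer all produce the same observation."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False
    except OSError:
        return True
```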
This is why defining P as ‘allowing partitioned groups to remain available’ is misleading – machine failures are partitions, almost tautologously, and by definition cannot be available while they are failed. Yet, Dr. Stonebraker says that he would suggest choosing CA rather than P. This feels rather like we are invited to both have our cake and eat it. Not ‘choosing’ P is analogous to building a network that will never experience multiple correlated failures. This is unreasonable for a distributed system – precisely for all the valid reasons that are laid out in the CACM post about correlated failures, OS bugs and cluster disasters – so what a designer has to do is to decide between maintaining consistency and availability. Dr. Stonebraker tells us to choose consistency, in fact, because availability will unavoidably be impacted by large failure incidents. This is a legitimate design choice, and one that the traditional RDBMS lineage of systems has explored to its fullest, but it implicitly protects us neither from availability problems stemming from smaller failure incidents, nor from the high cost of maintaining sequential consistency.
When the scale of a system increases to many hundreds or thousands of machines, writing in such a way to allow consistency in the face of potential failures can become very expensive (you have to write to one more machine than failures you are prepared to tolerate at once). This kind of nuance is not captured by the CAP theorem: consistency is often much more expensive in terms of throughput or latency to maintain than availability. Systems such as ZooKeeper are explicitly sequentially consistent because there are few enough nodes in a cluster that the cost of writing to quorum is relatively small. The Hadoop Distributed File System (HDFS) also chooses consistency – three failed datanodes can render a file’s blocks unavailable if you are unlucky. Both systems are designed to work in real networks, however, where partitions and failures will occur, and when they do both systems will become unavailable, having made their choice between consistency and availability. That choice remains the unavoidable reality for distributed data stores.
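As a rough back-of-the-envelope sketch of that write cost (my own illustration, not from the article): to survive f simultaneous failures a write must reach at least f + 1 replicas, and majority-quorum systems in the ZooKeeper mould only acknowledge a write once a majority of the ensemble has it, so the per-write coordination cost grows with cluster size.

```python
def min_write_replicas(f):
    """Replicas a write must reach to stay readable after f simultaneous failures."""
    return f + 1

def majority_quorum(n):
    """Write quorum size for an n-node majority-based ensemble."""
    return n // 2 + 1

for n in (3, 5, 7, 101):
    print(f"{n:>4} nodes -> write quorum of {majority_quorum(n)}")
# 3 -> 2, 5 -> 3, 7 -> 4, 101 -> 51: the throughput/latency price of
# sequential consistency rises as the cluster grows.
```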
Below is my own understanding of CAP:
1. A
Availability, chiefly availability under high load, together with low-latency responses. In current system design this comes first: keep the service responsive as far as possible.
2. C
Consistency: strong consistency, or sequential consistency, or lagging eventual consistency. Each represents a further compromise on consistency that a system makes when it must also guarantee A and P.
3. P
Partition tolerance: keeping the system working when communication between nodes fails. Raising the requirement to tolerate partitions lowers what can be expected of availability or consistency: either the system is stopped for error recovery, or it keeps serving with weakened consistency.
In today's large-scale distributed systems the CAP trade-off is already clear: with a distributed architecture, P is unavoidable, and the business usually demands very high availability, so the requirement for strong consistency has to give way and relax to eventual consistency.
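As a closing sketch of that trade-off (hypothetical code of my own; the Replica class and its fields are invented for illustration): when a replica cannot reach its peers, a CP-style system refuses the request to preserve consistency, while an AP-style system answers from local state and reconciles later, i.e. eventual consistency.

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    mode: str                                    # "CP" or "AP"
    data: dict = field(default_factory=dict)
    pending: list = field(default_factory=list)  # writes awaiting reconciliation
    partitioned: bool = False                    # cut off from a quorum of peers?

    def write(self, key, value):
        if self.partitioned and self.mode == "CP":
            # choose consistency: refuse rather than risk divergent replicas
            raise RuntimeError("unavailable: cannot reach quorum")
        self.data[key] = value
        if self.partitioned:
            self.pending.append((key, value))    # sync to peers once healed
        return "ok"

ap = Replica(mode="AP", partitioned=True)
print(ap.write("x", 1))      # "ok" - stays available, consistency deferred
cp = Replica(mode="CP", partitioned=True)
# cp.write("x", 1)           # would raise: availability sacrificed for consistency
```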