Zookeeper Distributed Lock Example Java

For the interview: distributed lock principles and practice

In single-process application development, concurrency is usually synchronized with synchronized or Lock, which solve synchronization between threads. In a distributed cluster, however, threads run in different JVM processes, and data synchronization across JVMs needs a higher-level locking mechanism: the distributed lock.

The principle of fair and reentrant locks

The classic distributed lock is the reentrant fair lock. What is a reentrant fair lock? Explaining the concept and principle head-on is abstract and hard to follow, so let's start from a concrete example! Here is a simple analogy that makes it much easier.

The story takes place in an ancient town without running water. A village has a single well with excellent water, and the villagers scramble to draw from it. With only one well and many people, the fighting over water got so bad that heads were broken.

The problem had to be solved, so the village head racked his brains and finally came up with a plan: draw water by number. A well keeper was stationed beside the well to maintain order. The rules were simple:

(1) Before drawing water, take a number first;

(2) The person holding the number at the head of the queue draws water first;

(3) Those who arrive later take numbers one by one and queue up beside the well.

The water-drawing process is shown in Figure 10-3.

Figure 10-3: Queuing up to draw water

This queuing model is a lock model. The person with the number at the head of the queue has the right to draw water: a typical exclusive lock. Moreover, it is first come, first served: the person at the front draws first, and when done, the next in line gets a turn. That is fair, so it is a fair lock.

What is a reentrant lock?
Suppose water is drawn per household: once someone in a family has taken a number, other members of the same family who come to fetch water need not take another, as shown in Figure 10-4.

Figure 10-4: Members of the same family need not queue again

In Figure 10-4, a husband from the family ranked first has taken the number; when his wife arrives, she joins him at the head of the queue, the wife relying on the husband, so to speak. At position 2, a father is drawing water; when his son and daughter come to the well, they go straight to the second spot, the children relying on the father. In short, once a household holds a number, anyone from that family can reuse it directly instead of starting from the back.

In the story above, one number can be used to draw water several times: that is exactly the model of a reentrant lock. In the reentrant-lock model, an exclusive lock can be acquired multiple times by its holder; such a lock is called a reentrant lock.

The principle of ZooKeeper distributed locks

Having understood the classic fair reentrant lock, let's look at how a fair reentrant lock works in a distributed scenario. From the analysis above, one can already judge that ZooKeeper's ephemeral sequential nodes are a natural fit for implementing a distributed lock. Why?

(1) Every ZooKeeper node is a natural sequence dispenser.

When child nodes of the ephemeral sequential type (EPHEMERAL_SEQUENTIAL) are created under a parent node, each new child gets a sequence number appended to its name, and each generated number is the previously generated number plus one.

For example, take a number-dispensing node "/test/lock" as the parent, and create ephemeral sequential children sharing a common prefix under it; suppose the prefix is "/test/lock/seq-". The first child created will be /test/lock/seq-0000000000, the next /test/lock/seq-0000000001, and so on, as shown in Figure 10-5.

Figure 10-5: The natural numbering of ZooKeeper's ephemeral sequential nodes
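The zero-padded, ten-digit counter that ZooKeeper appends is what makes this ordering easy to work with: sorting the child names lexicographically is the same as sorting them numerically. A minimal plain-Java sketch of the naming scheme (the formatting mirrors ZooKeeper's %010d sequence counter; no live ensemble is needed, and the SeqNames class is just for illustration):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SeqNames {
    // ZooKeeper appends the sequence counter as a zero-padded 10-digit number
    static String name(String prefix, int counter) {
        return prefix + String.format("%010d", counter);
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList(
                name("seq-", 2), name("seq-", 0), name("seq-", 1));
        // lexicographic sort equals numeric sort, thanks to the zero padding
        Collections.sort(children);
        System.out.println(children);
        // [seq-0000000000, seq-0000000001, seq-0000000002]
    }
}
```

This is why, later in the chapter, the lock implementation can simply sort the child-name strings to find the queue order.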

(2) The increasing order of node numbers guarantees the fairness of the lock.

A ZooKeeper distributed lock first needs a parent node, preferably a persistent one (PERSISTENT type). Every thread that wants the lock then creates an ephemeral sequential node under it. Because ZK numbers the children in creation order, the sequence increases monotonically.

To guarantee fairness, a simple rule suffices: the node with the smallest number holds the lock. So before trying to occupy the lock, each thread first checks whether its own node is the smallest; if it is, it has acquired the lock.

(3) ZooKeeper's node-watch mechanism passes lock ownership along in order, efficiently.

Before competing for the lock, each thread creates its own ZNode; likewise, when releasing the lock, it deletes the ZNode it created. If, after creation, its node is not the smallest, the thread simply waits to be notified. Notified by whom? No one else is needed: it only waits on the ZNode immediately ahead of it. When that ZNode is deleted, a delete event fires; the current node receives it, and it is its turn to own the lock. The first notifies the second, the second notifies the third, passing the flower down the line to the beat of a drum.

ZooKeeper's node-watch mechanism implements this drum-and-flower relay perfectly. Concretely, every ZNode waiting to be notified just listens on (watches) the node directly in front of it, and thereby receives that node's delete event.
As soon as the previous node is deleted, the client re-checks whether its own node now has the smallest sequence number; if it does, it takes the lock.

In addition, ZooKeeper's internal session mechanism guarantees that when the client holding the lock loses its connection to the cluster, due to a network fault or any other reason, the lock is still released effectively. Once the client occupying the lock ZNode loses contact with the ZooKeeper ensemble, that ephemeral ZNode is deleted automatically; the node behind it receives the delete event and can take the lock. This is why the number-taking node should be created as an ephemeral ZNode.

(4) ZooKeeper's node-watch mechanism avoids the herd effect.

This head-to-tail arrangement, in which each node watches only the one in front of it, avoids the herd effect. The herd effect occurs when one node's deletion is watched by all nodes, which then all react at once, putting heavy pressure on the server. With ephemeral sequential nodes, when one node goes away, only the node behind it responds.
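To make the chain concrete, here is a small self-contained sketch of the predecessor-selection rule just described: given the sorted list of lock children, the client holding the smallest name owns the lock, and any other client watches only the entry immediately before its own (the nodeToWatch helper name is made up for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class PriorNode {
    /**
     * Returns null if myNode is the smallest child (the lock is acquired),
     * otherwise the name of the single node to watch: its immediate predecessor.
     */
    static String nodeToWatch(List<String> children, String myNode) {
        List<String> sorted = new ArrayList<>(children);
        Collections.sort(sorted);
        int index = sorted.indexOf(myNode);
        if (index <= 0) {
            return null; // first in the queue: the lock is ours
        }
        return sorted.get(index - 1); // watch only the immediate predecessor
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList(
                "seq-0000000002", "seq-0000000000", "seq-0000000001");
        System.out.println(nodeToWatch(children, "seq-0000000000")); // null
        System.out.println(nodeToWatch(children, "seq-0000000002")); // seq-0000000001
    }
}
```

Each waiter therefore registers exactly one watch, so a single deletion wakes a single client, which is precisely how the herd effect is avoided.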

Illustrated: the lock contention process

Let's walk through the whole process, and the principle behind it, of multiple clients acquiring and releasing a zk distributed lock.

First, suppose two clients are now competing for the same zk distributed lock. What does that look like?

If zk is unfamiliar, it is worth looking up some basic concepts first, such as zk's node types.

There is a lock in zk, and this lock is a node on zk. Both clients need to acquire this lock. How?

Assume client A takes the lead and issues a distributed-lock request to zk. The request uses a special zk concept called the "ephemeral sequential node".

Simply put, it creates a sequential node directly under the lock node "my_lock"; the sequential node carries a sequence number that zk maintains internally.

Client A issues a lock request

For example, when the first client creates a sequential node, zk names it something like xxx-000001. When the second client then creates one, zk might name it xxx-000002. Note that the trailing number increases by one each time, starting from 1; zk maintains this order.

So when client A makes its request first, it gets a sequential node. When client A issues the lock request, it first creates an ephemeral sequential node under the lock node; the long node name is generated by the Curator framework itself.

Its trailing number is "1": because client A was the first to make a request, the sequential node it receives is numbered "1".

After creating its sequential node, client A is not done yet: it queries all the child nodes under the lock node "my_lock", sorted by sequence number. At this point it gets roughly this collection:

    [ "xxx-000001" ]

Client A then makes the key check: hey, in this collection, is the sequential node I created the first one?

If so, I can take the lock! Because I was the first to create a sequential node, I am the first to try to acquire the distributed lock!

Bingo! The lock is acquired. That is the whole acquisition path for client A.

Client B joins the queue

Now, while client A holds the lock, suppose client B arrives and tries to lock. It does exactly the same thing: first it creates an ephemeral sequential node under the lock node "my_lock", whose name this time becomes something like:

    xxx-000002

Because client B is the second to create a sequential node, zk internally assigns it the sequence number "2".

Client B then runs the same lock-check logic: it queries all the child nodes under the lock node "my_lock", ordered by number. What it sees at this point is roughly:

    [ "xxx-000001", "xxx-000002" ]

At the same time it checks whether the sequential node it created is the first one in that collection.

Obviously not: the first one is the sequential node created by client A, the one whose number ends in "1". So client B's lock attempt fails!

Client B adds a watch on client A

After the failed attempt, client B uses the ZK API to add a listener to the sequential node immediately before its own; zk natively supports watching a given node.

(If the basic usage of zk is still unfamiliar, it is easy to look up.) Client B's own sequential node is:

    xxx-000002

And the sequential node just before it is this one:

    xxx-000001

that is, the sequential node created by client A!

So client B adds a listener to:

    xxx-000001

watching whether that node gets deleted.

Next, after client A has acquired the lock, it presumably executes some business logic and then releases the lock. What does releasing the lock look like?

It's very simple: client A deletes the sequential node it created in zk, namely:

    xxx-000001

After that node is deleted, zk notifies the listener registered on it, which is the watcher client B added earlier, in effect saying: brother, the node you were watching has been deleted; someone has released the lock.

At this point client B's listener senses that the previous sequential node has been deleted, that is, a client ahead of it has released the lock.

Client B grabs the lock

Client B is then prompted to try to acquire the lock again: it fetches the set of child nodes under "my_lock", which now looks like this:

    [ "xxx-000002" ]

The collection now contains only the sequential node created by client B!

Client B checks and finds that it is now the first sequential node in the set: bingo, time to lock! It takes the lock directly, runs its business code, and releases the lock when done.

Basic implementation of the distributed lock

Next, let's implement a distributed lock on top of ZooKeeper. First, define a lock interface, Lock. It is simple: just two abstract methods, one for locking and one for unlocking. The Lock interface looks like this:

    package com.crazymakercircle.zk.distributedLock;

    /**
     * create by Nean @ Crazy maker circle
     **/
    public interface Lock {
        /**
         * Acquire the lock
         * @return whether the lock was acquired
         */
        boolean lock() throws Exception;

        /**
         * Release the lock
         * @return whether the lock was released
         */
        boolean unlock();
    }

There are several key points in implementing the distributed-lock algorithm with ZooKeeper:

(1) A distributed lock is usually represented by a Znode; if the lock's Znode does not exist, create it first. Assume "/test/lock" represents the distributed lock to be created.

(2) All clients contending for the lock use the child-node list of the lock Znode: a client that wants to hold the lock creates an ephemeral sequential child under "/test/lock".

All these ephemeral sequential children should share a meaningful common prefix.

For example, with the child prefix "/test/lock/seq-", the child for the first lock contender is "/test/lock/seq-000000000", the child for the second is "/test/lock/seq-000000001", and so on.

Alternatively, with the child prefix "/test/lock/", the child for the first contender is "/test/lock/000000000", the child for the second is "/test/lock/000000001", and so on, which is also quite intuitive.

(3) How does a client determine whether it owns the lock?
Simple: after creating its child node, the client checks whether that child has the smallest sequence number in the current child list. If so, the lock is acquired; if not, it watches the previous Znode for changes and waits for the node ahead of it to release the lock.

(4) When the next node in the queue receives the change notification for the child ahead of it, it checks again whether its own child now has the smallest number in the current child list. If so, the lock is acquired; if not, it keeps watching until it gets the lock.

(5) Having acquired the lock, the client runs its business logic. When that is done, it deletes its own child node, thereby releasing the lock, so that the successor node receives the change notification and obtains the distributed lock.

Hands-on: implementing lock acquisition

The Lock interface's acquisition method is lock(). Its general flow: first try to acquire the lock; if that fails, wait; then try again.

1. lock() implementation

The lock() method is implemented as follows:

    package com.crazymakercircle.zk.distributedLock;

    import com.crazymakercircle.zk.ZKclient;
    import lombok.extern.slf4j.Slf4j;
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;

    import java.util.Collections;
    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    /**
     * create by Nean @ Crazy maker circle
     **/
    @Slf4j
    public class ZkLock implements Lock {
        // path of the ZkLock node
        private static final String ZK_PATH = "/test/lock";
        private static final String LOCK_PREFIX = ZK_PATH + "/";
        private static final long WAIT_TIME = 1000;
        // Zk client
        CuratorFramework client = null;

        private String locked_short_path = null;
        private String locked_path = null;
        private String prior_path = null;
        final AtomicInteger lockCount = new AtomicInteger(0);
        private Thread thread;

        public ZkLock() {
            ZKclient.instance.init();
            synchronized (ZKclient.instance) {
                if (!ZKclient.instance.isNodeExist(ZK_PATH)) {
                    ZKclient.instance.createNode(ZK_PATH, null);
                }
            }
            client = ZKclient.instance.getClient();
        }

        @Override
        public boolean lock() {
            // reentrancy: make sure the same thread can lock repeatedly
            synchronized (this) {
                if (lockCount.get() == 0) {
                    thread = Thread.currentThread();
                    lockCount.incrementAndGet();
                } else {
                    if (!thread.equals(Thread.currentThread())) {
                        return false;
                    }
                    lockCount.incrementAndGet();
                    return true;
                }
            }
            try {
                boolean locked = false;
                // first, try to acquire the lock
                locked = tryLock();
                if (locked) {
                    return true;
                }
                // if the attempt failed, wait and retry
                while (!locked) {
                    await();
                    // fetch the list of waiting child nodes
                    List<String> waiters = getWaiters();
                    // check whether the lock has now been acquired
                    if (checkLocked(waiters)) {
                        locked = true;
                    }
                }
                return true;
            } catch (Exception e) {
                e.printStackTrace();
                unlock();
            }
            return false;
        }

        // ... other methods omitted
    }

2. tryLock(): trying to acquire the lock

The tryLock method is the key piece; it does two important things:

(1) It creates an ephemeral sequential node and saves its own node path.

(2) It checks whether it is first in the queue. If so, the lock is acquired. If not, it finds the previous Znode and saves that node's path in prior_path.

The tryLock method is implemented as follows:

    /**
     * Try to acquire the lock
     * @return whether the lock was acquired
     * @throws Exception on error
     */
    private boolean tryLock() throws Exception {
        // create an ephemeral sequential Znode
        locked_path = ZKclient.instance
                .createEphemeralSeqNode(LOCK_PREFIX);
        // then fetch all child nodes
        List<String> waiters = getWaiters();
        if (null == locked_path) {
            throw new Exception("zk error");
        }
        // extract this client's queue number
        locked_short_path = getShortPath(locked_path);

        // check the list of waiting children: is this client first?
        if (checkLocked(waiters)) {
            return true;
        }
        // find this client's position in the queue
        int index = Collections.binarySearch(waiters, locked_short_path);
        if (index < 0) {
            // due to network jitter, the fetched child list may not contain this node
            throw new Exception("node not found: " + locked_short_path);
        }
        // the lock was not acquired: watch the previous node
        prior_path = ZK_PATH + "/" + waiters.get(index - 1);
        return false;
    }

    private String getShortPath(String locked_path) {
        int index = locked_path.lastIndexOf(ZK_PATH + "/");
        if (index >= 0) {
            index += ZK_PATH.length() + 1;
            return index <= locked_path.length() ? locked_path.substring(index) : "";
        }
        return null;
    }

After the ephemeral sequential node is created, its full path is stored in the locked_path member. A suffix is also cut off and stored in the locked_short_path member: a short path containing only the last segment of the full path. Why store the short path separately?
Because the child lists returned by the server contain only short paths, just the final segment. Storing its own short path makes the later comparisons convenient.

After creating its node, the client calls the checkLocked method to decide whether the lock was acquired. If it was, tryLock returns true. If not, the client must watch the previous node, so it finds that node's path and stores it in the prior_path member for later use by the waiting method await(). Before introducing await(), let's first look at the checkLocked method.

3. checkLocked(): checking whether the lock is held

The checkLocked() method decides whether the lock can be held. The rule is simple: is the node this client created in the first position of the child list fetched in the previous step?

(1) If so, this client can hold the lock: return true, the lock is acquired;

(2) If not, another thread already holds the lock: return false.

The checkLocked() method:

    private boolean checkLocked(List<String> waiters) {
        // sort the nodes by sequence number, ascending
        Collections.sort(waiters);
        // if this client's node is first, it holds the lock
        if (locked_short_path.equals(waiters.get(0))) {
            log.info("acquired the distributed lock, node: {}", locked_short_path);
            return true;
        }
        return false;
    }

The checkLocked method is simple: it sorts the list of all queued child nodes by name, ascending. The sort effectively orders by node number, that is, the 10-digit number at the end of each Znode path, since the prefixes are all identical. After sorting, it checks whether its own locked_short_path is in first position; if so, this client has the lock. If not, it returns false.

When checkLocked() returns false, the outer caller generally proceeds to await(), executing the wait logic that follows a failed lock grab.

4. await(): waiting for the previous node to release the lock

await() is also very simple: it listens for the delete event of the previous ZNode (the prior_path member). The code:

    private void await() throws Exception {
        if (null == prior_path) {
            throw new Exception("prior_path error");
        }
        final CountDownLatch latch = new CountDownLatch(1);

        // subscribe to the delete event of the node just ahead in the queue
        Watcher w = new Watcher() {
            @Override
            public void process(WatchedEvent watchedEvent) {
                System.out.println("watched event: " + watchedEvent);
                log.info("[WatchedEvent] node deleted");
                latch.countDown();
            }
        };
        client.getData().usingWatcher(w).forPath(prior_path);

        /*
        // alternative: subscribe via a TreeCache
        TreeCache treeCache = new TreeCache(client, prior_path);
        TreeCacheListener l = new TreeCacheListener() {
            @Override
            public void childEvent(CuratorFramework client,
                                   TreeCacheEvent event) throws Exception {
                ChildData data = event.getData();
                if (data != null) {
                    switch (event.getType()) {
                        case NODE_REMOVED:
                            log.debug("[TreeCache] node deleted, path={}, data={}",
                                    data.getPath(), data.getData());
                            latch.countDown();
                            break;
                        default:
                            break;
                    }
                }
            }
        };
        treeCache.getListenable().addListener(l);
        treeCache.start();
        */

        latch.await(WAIT_TIME, TimeUnit.SECONDS);
    }

The code adds a Watcher whose target is exactly the predecessor path saved earlier in the prior_path member. Only the node directly ahead is watched, not any other node, which improves efficiency. After registering the watch, it calls latch.await() and the thread enters the waiting state, until it is woken by the latch.countDown() in the watcher, or the wait times out.

Note

The code above uses CountDownLatch; its core principles and hands-on usage are covered in "Netty Zookeeper Redis High Concurrency Practice" and its companion volume "Java High Concurrency Core Programming (Volume 2)".

In the code above there are two ways to subscribe to the deletion of the previous node:

(1) a Watcher subscription;

(2) a TreeCache subscription.

The two behave almost the same. But the delete event here only needs to be observed once; there is no need to watch over and over, so a one-shot Watcher subscription is used. The TreeCache variant is left commented out in the source project, for reference only.

Once the node at prior_path is deleted, the thread wakes from its wait and enters another round of lock contention, until it acquires the lock and completes its business processing.

At this point the locking algorithm of the distributed lock is almost complete; what remains is making the lock reentrant.

5. Reentrancy implementation

What is reentrancy? Simply this: when the same thread re-enters the locked code, it can lock again.
Modify the earlier lock() method by adding the reentrancy check at the front. The code:

    @Override
    public boolean lock() {
        // reentrancy check
        synchronized (this) {
            if (lockCount.get() == 0) {
                thread = Thread.currentThread();
                lockCount.incrementAndGet();
            } else {
                if (!thread.equals(Thread.currentThread())) {
                    return false;
                }
                lockCount.incrementAndGet();
                return true;
            }
        }
        // ....
    }

To make the lock reentrant, the code adds a lock counter, lockCount, which counts repeated acquisitions. If the same thread locks again, the count is simply incremented and the method returns immediately, reporting success.

That completes lock(); next, releasing the lock.

Hands-on: implementing lock release

The Lock interface's unlock() method releases the lock. Releasing involves two main tasks:

(1) Decrement the reentrancy count; if the result is not 0, return immediately: the release succeeded.

(2) If the counter reaches 0, remove the watcher and delete the ephemeral Znode that was created.

The unlock() method:

    /**
     * Release the lock
     *
     * @return whether the lock was released
     */
    @Override
    public boolean unlock() {
        // only the thread that acquired the lock may release it
        if (!thread.equals(Thread.currentThread())) {
            return false;
        }
        // decrement the reentrancy count
        int newLockCount = lockCount.decrementAndGet();
        // the count must not go below 0
        if (newLockCount < 0) {
            throw new IllegalMonitorStateException(
                    "Lock count has gone negative for lock: " + locked_path);
        }
        // if the count is not yet 0, simply return
        if (newLockCount != 0) {
            return true;
        }
        // delete the ephemeral node
        try {
            if (ZKclient.instance.isNodeExist(locked_path)) {
                client.delete().forPath(locked_path);
            }
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
        return true;
    }

Note that, to keep things as thread-safe as possible, the reentrancy counter is not a plain int but java.util.concurrent's atomic type, AtomicInteger.

Hands-on: using the distributed lock

Write a test case to try ZkLock out. The code:

    @Test
    public void testLock() throws InterruptedException {
        for (int i = 0; i < 10; i++) {
            FutureTaskScheduler.add(() -> {
                // create a lock
                ZkLock lock = new ZkLock();
                lock.lock();
                // each thread increments the counter 10 times
                for (int j = 0; j < 10; j++) {
                    // increment the shared resource variable
                    count++;
                }
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                log.info("count = " + count);
                // release the lock
                lock.unlock();
            });
        }
        Thread.sleep(Integer.MAX_VALUE);
    }

The code above runs 10 concurrent tasks, each incrementing the counter 10 times. Running the test, the result is the expected 100. Without the lock, the result might well not be 100, because count is a shared variable and not thread-safe.

Note

For the core principles and practice of thread safety, see the companion volume "Java High Concurrency Core Programming (Volume 2)".

In principle, one ZkLock instance represents one lock and occupies one persistent Znode; a large number of distributed locks therefore needs a large number of distinct Znodes. To extend the code above into a multi-lock version, a simple transformation is needed, which is left as an exercise.
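One simple direction for that transformation: give each logical resource its own lock Znode by parameterizing the path. The sketch below only derives per-resource paths; the lockPathFor helper and the /test/locks base path are illustrative assumptions, not part of the original ZkLock:

```java
public class LockPaths {
    private static final String BASE = "/test/locks";

    // one parent Znode per logical resource, e.g. /test/locks/order-42
    static String lockPathFor(String resource) {
        // keep the name Znode-safe: '/' is the ZooKeeper path separator
        return BASE + "/" + resource.replace('/', '-');
    }

    public static void main(String[] args) {
        System.out.println(lockPathFor("order-42")); // /test/locks/order-42
    }
}
```

A multi-lock ZkLock would then accept such a path in its constructor instead of the fixed ZK_PATH constant, creating the parent node lazily exactly as the existing constructor does.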

Hands-on: Curator's InterProcessMutex reentrant lock

The main value of implementing the ZkLock distributed lock independently is learning the principles and basic development of distributed locks, and that's it. In actual development, if a distributed lock is needed, it is not advisable to build your own wheel: use one of the official distributed locks in the Curator client directly, such as its InterProcessMutex reentrant lock.

Here is a simple usage example of the InterProcessMutex reentrant lock. The code:

    @Test
    public void testzkMutex() throws InterruptedException {
        CuratorFramework client = ZKclient.instance.getClient();
        final InterProcessMutex zkMutex =
                new InterProcessMutex(client, "/mutex");
        for (int i = 0; i < 10; i++) {
            FutureTaskScheduler.add(() -> {
                try {
                    // acquire the mutex
                    zkMutex.acquire();
                    for (int j = 0; j < 10; j++) {
                        // increment the shared resource variable
                        count++;
                    }
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    log.info("count = " + count);
                    // release the mutex
                    zkMutex.release();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        Thread.sleep(Integer.MAX_VALUE);
    }

Advantages and disadvantages of ZooKeeper distributed locks

Summing up ZooKeeper distributed locks:

(1) Advantages: a ZooKeeper distributed lock (such as InterProcessMutex) effectively solves the distributed coordination and reentrancy problems, and is easy to use.

(2) Disadvantages: the performance of ZooKeeper distributed locks is not very high. Why?
Because every lock acquisition and release requires dynamically creating and destroying an ephemeral node. As is well known, in ZK node creation and deletion can only be executed by the Leader server, and the Leader must then replicate the data to the Follower machines. Such frequent network communication makes performance the conspicuous weak spot.

In short, in high-performance, high-concurrency scenarios, ZooKeeper distributed locks are not recommended. But thanks to ZooKeeper's high availability, in scenarios where concurrency is not too high, ZooKeeper distributed locks are a good choice.

Among current distributed-lock implementations, two mainstream, mature solutions stand out:

(1) Redis-based distributed locks;

(2) ZooKeeper-based distributed locks.

Their applicable scenarios are:

(1) ZooKeeper-based distributed locks suit scenarios that demand high reliability (high availability) with concurrency that is not too large;

(2) Redis-based distributed locks suit large concurrency and high performance, where reliability shortfalls can be remedied by other means.

In short, it is not a question of which is better, but of which fits the scenario.

Finally, a summary of this chapter: in distributed systems, ZooKeeper is an important coordination tool. This chapter introduced distributed naming services and the principle of distributed locks, together with reference implementations based on ZooKeeper. It is suggested that you work through the hands-on cases yourself; they are very useful both at the start of practical application and in interviews at large companies. In addition, Zookeeper is not the only mainstream distributed coordination middleware; there is also the well-known Etcd. In terms of learning, though, the two are similar in functional design and core principles: once Zookeeper is mastered, Etcd is easy to pick up.

Core content and source of the article

The core content of this article comes from the book "Netty Zookeeper Redis High Concurrency Practice".


Source: https://javamana.com/2021/03/20210324225429657t.html
