Redis series - Why is single-threaded Redis so fast?

Why is Redis single-threaded?

Before we get into that question, let's first look at the common overheads that come with multithreading:

1. Context switching

Even a single-core CPU can run multithreaded code: the CPU assigns each thread a time slice. Because a time slice is very short, usually a few tens of milliseconds, the CPU switches between threads constantly, which creates the impression that multiple threads are executing at the same time.

The CPU runs tasks in turn according to its time-slice scheduling: when the current task has used up a time slice, the CPU switches to the next one. Before switching, it saves the state of the current task so that the state can be restored the next time the task is scheduled. This save-and-restore of task state is a context switch.

When a new thread is switched in, the data it needs may not be in the current processor's caches, so a context switch also causes cache misses, and the thread runs more slowly right after being scheduled. This is why the scheduler gives every runnable thread a minimum execution time.
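To make this cost visible, here is a minimal Linux-only sketch (not part of the original discussion) that spawns a few threads which repeatedly yield the CPU and then reads the process's context-switch counters with getrusage(); the thread count and iteration count are arbitrary.

```c
/* Minimal Linux-only sketch: force context switches by yielding, then
 * read the process-wide switch counters. Build with: gcc -pthread ctx.c */
#include <stdio.h>
#include <pthread.h>
#include <sched.h>
#include <sys/resource.h>

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        sched_yield();              /* give up the CPU, inviting a switch */
    return NULL;
}

int main(void) {
    pthread_t tids[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(tids[i], NULL);

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);    /* counters cover the whole process */
    printf("voluntary switches:   %ld\n", ru.ru_nvcsw);
    printf("involuntary switches: %ld\n", ru.ru_nivcsw);
    return 0;
}
```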

2. Memory synchronization

Memory visibility problems; we will not go into them in detail here.
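As a rough illustration of what memory synchronization means, here is a minimal C11 sketch, assuming one producer thread and one consumer thread: the release/acquire pair on the flag is what guarantees the consumer actually sees the producer's write, and that guarantee is not free.

```c
/* Minimal visibility sketch: without the release/acquire pair on `ready`,
 * the consumer is not guaranteed to observe `payload = 42` even after it
 * sees ready == 1. */
#include <stdio.h>
#include <pthread.h>
#include <stdatomic.h>

static int payload;                      /* plain shared data */
static atomic_int ready = 0;             /* publication flag  */

static void *producer(void *arg) {
    (void)arg;
    payload = 42;                                            /* write data */
    atomic_store_explicit(&ready, 1, memory_order_release);  /* publish    */
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                                    /* spin-wait  */
    printf("payload = %d\n", payload);                       /* always 42  */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```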

3. Blocking

Access to shared resources must be protected with locks to keep the data consistent. When there is contention for a lock, the threads that lose the race are blocked.

 

In practice, if we adopt multithreading without a careful system design, the result usually looks like this: when we first increase the number of threads, system throughput rises; but as more and more threads are added, throughput grows more and more slowly and can even start to fall.

The key reason is that systems usually contain shared resources that multiple threads access at the same time, for example a shared data structure. When several threads want to modify that shared resource, extra mechanisms are needed to keep it correct, and those mechanisms add overhead. Without careful design, for example if everything is simply protected by a single coarse-grained mutex, the result is disappointing: even with more threads, most of them spend their time waiting to acquire the mutex before they can touch the shared resource. Parallel execution degenerates into serial execution, and throughput no longer grows with the number of threads; a sketch of this situation follows.
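Below is a minimal sketch of that coarse-grained-lock scenario, with a shared counter standing in for the shared data structure (the thread and iteration counts are arbitrary): every thread funnels through one global mutex, so the updates execute one at a time no matter how many threads we add.

```c
/* Minimal coarse-grained-lock sketch: the result is correct, but all the
 * "parallel" work is serialized behind a single global mutex. */
#include <stdio.h>
#include <pthread.h>

#define THREADS 8
#define OPS     1000000

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;          /* stands in for a shared structure */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < OPS; i++) {
        pthread_mutex_lock(&big_lock);   /* losers of the race block here */
        shared_counter++;                /* the only "real" work           */
        pthread_mutex_unlock(&big_lock);
    }
    return NULL;
}

int main(void) {
    pthread_t tids[THREADS];
    for (int i = 0; i < THREADS; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(tids[i], NULL);
    printf("counter = %ld\n", shared_counter);  /* correct, but serialized */
    return 0;
}
```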

Why is Redis so fast?

* Redis is memory-based: it works entirely in memory, so most requests are pure memory operations and therefore very fast. The data in memory is organized much like a HashMap, and the advantage of a HashMap is that lookups and updates take O(1) time (a minimal hash-lookup sketch appears after this list);

* Its data structures are simple and the operations on them are simple; the data structures in Redis are purpose-built;

* It is single-threaded, which avoids unnecessary context switches and race conditions: no CPU is wasted switching between processes or threads, there is no need to reason about all kinds of locks, no lock acquire and release operations, and no performance loss from potential deadlocks;

* Redis uses an I/O multiplexing mechanism. In Linux, I/O multiplexing means that one thread handles multiple I/O streams; this is the select/epoll mechanism we often hear about. In short, even though Redis runs only a single thread, this mechanism lets it keep many listening sockets and connected sockets open at the same time. The kernel watches all of these sockets for connection requests or data requests, and as soon as a request arrives it hands it to the Redis thread, so one Redis thread ends up serving many I/O streams (see the sketch below).
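First, the hash-lookup sketch promised in the first bullet: a toy chained hash table, purely illustrative and much simpler than Redis's real dict, showing why a key lookup is an O(1) operation on average. The hash function and bucket count here are arbitrary.

```c
/* Toy chained hash table: a GET is one hash plus a short bucket scan. */
#include <stdio.h>
#include <string.h>

#define BUCKETS 16

struct entry {
    const char *key;
    const char *val;
    struct entry *next;                         /* chain for colliding keys */
};

static struct entry *table[BUCKETS];

static unsigned hash(const char *s) {           /* toy hash function */
    unsigned h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % BUCKETS;
}

static const char *get(const char *key) {
    for (struct entry *e = table[hash(key)]; e; e = e->next)
        if (strcmp(e->key, key) == 0)
            return e->val;                      /* found in O(1) on average */
    return NULL;
}

int main(void) {
    static struct entry e = { "greeting", "hello", NULL };
    table[hash("greeting")] = &e;               /* roughly "SET greeting hello" */
    printf("%s\n", get("greeting"));            /* roughly "GET greeting"       */
    return 0;
}
```

And here is a minimal single-threaded epoll event loop, as an illustration of the multiplexing idea rather than Redis's actual event loop; the port number is arbitrary and error handling is omitted for brevity. One thread registers the listening socket and every client socket with epoll, and the kernel reports whichever of them are ready.

```c
/* Minimal single-threaded epoll echo loop: one thread, many sockets. */
#include <stdio.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>

#define MAX_EVENTS 64

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(6380);                /* arbitrary port */
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        /* Block until the kernel says one or more sockets are ready. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* New connection request: watch the client socket too. */
                int client = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
            } else {
                /* Data request: read it and echo it back, all on one thread. */
                char buf[4096];
                ssize_t len = read(fd, buf, sizeof(buf));
                if (len <= 0) { close(fd); continue; }   /* peer closed */
                write(fd, buf, (size_t)len);
            }
        }
    }
}
```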

 
