Preface
Although a programmer's day-to-day work may rarely touch the computer's low-level workings directly, that does not mean this knowledge is unimportant.
A deep understanding of the computer's underlying implementation helps you grasp how programs actually run, design more efficient architectures, and debug and diagnose problems. Understanding multithreading is especially important: today's program architectures demand concurrent processing, and knowing how to coordinate the division of work among threads while avoiding deadlocks, synchronization errors, and similar problems is a skill every programmer should have. For back-end engineers, a solid grounding in operating systems is a prerequisite for deeply understanding and implementing complex distributed systems.
Process vs. thread
The biggest difference between a process and a thread is that a process has its own address space, and the threads inside one process are not visible to other processes. That is, process A cannot directly read or write process B's memory simply by passing an address; communication between processes must go through inter-process communication (Inter-Process Communication, IPC). In contrast, threads of the same process can exchange information directly by passing addresses or using global variables.
In addition, the process is the operating system's basic unit of resource ownership and independent scheduling, and a process can contain multiple threads. Usually a program running on the operating system corresponds to one process. Switching between threads of the same process does not cause a process switch, whereas switching between threads of different processes, for example from a thread in one process to a thread in another, does. Compared with a process switch, a thread switch has much lower overhead, so combining threads and processes improves the overall efficiency of the system.
Threads can be divided into two categories:
One is user-level threads (user-level thread). For these, all thread-management work is done by the application, and the kernel is not aware of the threads' existence. After the application starts, the operating system assigns the program a process ID along with its memory space and other resources. The application usually begins running in a single thread, called the main thread. At some point during execution, it can create a new thread running in the same process by calling a function in a thread library.
User-level threads are very efficient because no switch into kernel space is needed, but the degree of concurrency they can achieve is limited.
The other is kernel-level threads (kernel-level thread). For these, all thread-management work is done by the kernel; the application contains no thread-management code and only calls the kernel's thread interface. The kernel maintains the process and each thread within it, and scheduling is done by the kernel on a per-thread basis. The benefit of kernel-level threads is that the kernel can assign different threads to different CPUs, achieving true parallel computation.
In practice, modern operating systems often combine the two: thread creation is done entirely in user space, and the application's multiple user-level threads are mapped onto some number of kernel-level threads, which is a compromise between the two approaches.
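As a concrete illustration, here is a minimal sketch of a main thread creating a second thread in the same process through a thread library, assuming a POSIX system with pthreads (compile with -pthread); on Linux, each pthread is backed by a kernel-level thread.

```c
#include <pthread.h>
#include <stdio.h>

/* Function executed by the newly created thread. */
static void *hello(void *arg) {
    printf("hello from thread %s\n", (const char *)arg);
    return NULL;
}

int main(void) {
    pthread_t tid;
    /* The main thread asks the thread library to create a worker
       thread that shares this process's address space. */
    pthread_create(&tid, NULL, hello, "worker");
    pthread_join(tid, NULL);   /* wait for the worker to finish */
    printf("main thread exiting\n");
    return 0;
}
```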
Context switching
On a single-core, single-threaded CPU, only one stream of CPU instructions can execute at a time. A context switch (Context Switch) is the mechanism by which the CPU's resources are handed from one process to another. From the user's point of view, the computer appears to run many processes in parallel, and this is precisely the result of the operating system performing context switches rapidly. During a switch, the operating system first saves the state of the current process (including pointers into its memory space, the instruction currently being executed, and so on), then loads the state of the next process and resumes executing it.
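One way to observe context switches from user space, assuming a Linux/POSIX system, is getrusage, which reports how many voluntary and involuntary switches the current process has undergone; this is only an observation aid, not the switching mechanism itself.

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        /* ru_nvcsw:  voluntary switches (the process blocked, e.g. waiting on I/O)
           ru_nivcsw: involuntary switches (the scheduler preempted the process)  */
        printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
    }
    return 0;
}
```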
System call
A system call (System Call) is the way a program requests a service from the system kernel. Such services may involve hardware (for example, accessing the hard disk), creating a new process, scheduling other processes, and so on. System calls are an important interface between a program and the operating system.
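A minimal sketch of issuing a system call, assuming Linux/glibc: write() is the thin library wrapper around the write system call, and syscall() invokes the same kernel service directly by number.

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const char msg[] = "hello from a system call\n";
    /* The usual way: the libc wrapper traps into the kernel for us. */
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    /* Equivalent: invoke the same kernel service by its number. */
    syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}
```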
Semaphore/Mutex
When a user creates multiple threads/processes, and different threads/processes read and write the same data at the same time, the result may be read/write errors or inconsistent data. In that case, locking is needed to control access to the critical section (critical section). A semaphore is initialized with a value that controls how many threads/processes may enter a critical section at the same time; any further threads/processes are blocked until someone releases the semaphore. A mutex is equivalent to a semaphore that admits only one thread/process. In addition, to meet practical needs, people have also implemented the read-write lock (read-write lock), which allows multiple readers (reader) to exist at the same time but at most one writer (writer) at any moment, and the writer cannot coexist with readers.
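A minimal sketch of guarding a critical section with a mutex, assuming POSIX threads (compile with -pthread): two threads increment a shared counter, and the lock ensures the increments never interleave incorrectly.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared counter and the mutex that protects it. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter the critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 200000 with the lock held */
    return 0;
}
```
Without the lock, the two unsynchronized read-modify-write sequences can interleave, and the final count would usually fall short of 200000.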
Deadlock
Introducing locks brings a new problem: deadlock (Deadlock).
A deadlock occurs when two or more threads/processes block one another so that none can continue running, and therefore none can release the locks the others are waiting for. For example, thread A holds lock A and tries to acquire lock B, while thread B holds lock B and tries to acquire lock A; they block each other and neither can proceed (a code sketch of exactly this situation follows the list below). The four conditions for deadlock are summarized as follows (deadlock occurs only when all four hold at the same time):
* Mutual exclusion: a resource can be used by only one process at a time, i.e. during that period the resource is owned by a single process. If another process requests the resource, the requesting process can only wait.
* Hold and wait: a process already holds at least one resource but requests a new one that is held by another process; the requesting process blocks, yet it keeps the resources it has already acquired.
* No preemption: a resource a process has acquired cannot be forcibly taken away by other processes before the process is finished with it; it can only be released voluntarily by the process that holds it.
* Circular wait: several processes form a head-to-tail circular chain, each waiting for a resource held by the next.
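A minimal sketch of the lock-ordering deadlock described above, assuming POSIX threads (compile with -pthread): each thread grabs one lock and then waits forever for the other, satisfying all four conditions at once.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Thread A: acquires lock_a first, then wants lock_b. */
static void *thread_a(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_a);
    sleep(1);                      /* give the other thread time to take lock_b */
    pthread_mutex_lock(&lock_b);   /* blocks forever: thread B holds lock_b */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

/* Thread B: acquires the locks in the opposite order. */
static void *thread_b(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_b);
    sleep(1);
    pthread_mutex_lock(&lock_a);   /* blocks forever: thread A holds lock_a */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t ta, tb;
    pthread_create(&ta, NULL, thread_a, NULL);
    pthread_create(&tb, NULL, thread_b, NULL);
    pthread_join(ta, NULL);        /* never returns: the two threads deadlock */
    pthread_join(tb, NULL);
    return 0;
}
```
Making every thread acquire the locks in one agreed global order (always lock_a before lock_b) breaks the circular-wait condition and prevents the deadlock.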
Bounded-buffer problem
The producer-consumer model is a common communication pattern: producers and consumers share a data buffer; producers write data into the buffer, and consumers read data out of the other end. For the buffer, both the empty case and the overflow (full) case must be handled. At the same time, access to this shared memory usually needs to be protected with a mutex lock. In the special case of exactly one producer and one consumer, a lock-free queue (lockless queue) can be designed so that the two threads can read and write the data directly and safely.
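A minimal bounded-buffer sketch, assuming POSIX threads and unnamed semaphores on Linux (compile with -pthread): two counting semaphores track the free and filled slots, handling the empty and full cases, while a mutex protects the buffer indices.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUF_SIZE 8
#define ITEMS    32

static int buf[BUF_SIZE];
static int in = 0, out = 0;

static sem_t empty_slots;               /* counts free slots, starts at BUF_SIZE */
static sem_t full_slots;                /* counts filled slots, starts at 0      */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty_slots);         /* block while the buffer is full  */
        pthread_mutex_lock(&lock);
        buf[in] = i;
        in = (in + 1) % BUF_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&full_slots);          /* signal: one more item available */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);          /* block while the buffer is empty */
        pthread_mutex_lock(&lock);
        int item = buf[out];
        out = (out + 1) % BUF_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&empty_slots);         /* signal: one more free slot      */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, BUF_SIZE);
    sem_init(&full_slots, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```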
Interprocess communication
When introducing processes, we mentioned that one process cannot directly read or write another process's data; communication between the two must go through inter-process communication. IPC mechanisms usually follow the producer-consumer model and have to provide both data exchange and synchronization.
(1) Shared memory + semaphore
Different processes exchange data by reading and writing a special region of shared memory managed by the operating system, and synchronize with one another through semaphores (a minimal sketch follows after item (2)).
(2) Message passing
A process registers a port with the operating system and listens on it for data; other processes write data directly to that port. This style of communication is closer to network communication. In fact, network communication is also a form of IPC; the processes simply live on different machines.
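A minimal sketch of approach (1), assuming Linux/POSIX shared memory and named semaphores (link with -pthread, and -lrt on older glibc). The names /demo_shm and /demo_sem are placeholders, and error handling is omitted for brevity.

```c
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Create (or open) a named shared-memory object and size it to one page. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);
    char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* A named semaphore (initial value 1) synchronizes access between processes. */
    sem_t *sem = sem_open("/demo_sem", O_CREAT, 0600, 1);

    /* Writer side: take the semaphore, write into shared memory, release it.
       Another process opening the same names can then read the string safely. */
    sem_wait(sem);
    strcpy(mem, "hello via shared memory");
    sem_post(sem);

    printf("wrote: %s\n", mem);

    sem_close(sem);
    munmap(mem, 4096);
    close(fd);
    shm_unlink("/demo_shm");
    sem_unlink("/demo_sem");
    return 0;
}
```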
Logical address / Physical address / virtual memory
The so-called logical address is the address seen by the computer's user (for example, a program developer). For instance, when you create an integer array of length 100, the operating system returns a logically contiguous space: a pointer to the memory address of the array's first element. Since an integer element occupies 4 bytes, the address of the second element is the starting address plus 4, and so on. In fact, the logical address is not necessarily where the element is really stored; the physical addresses of the array elements (their locations in the memory modules) need not be contiguous. The operating system simply uses address mapping to make the logical addresses appear contiguous, which better matches people's intuition.
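A small sketch of this point: the addresses a program sees for consecutive array elements are contiguous logical (virtual) addresses spaced by the element size, regardless of where the data physically resides.

```c
#include <stdio.h>

int main(void) {
    int a[100];    /* 100 integers, logically contiguous */
    for (int i = 0; i < 3; i++)
        printf("&a[%d] = %p\n", i, (void *)&a[i]);
    /* Successive addresses differ by sizeof(int) (typically 4 bytes). These are
       logical/virtual addresses; the MMU maps them to physical frames that do
       not have to be contiguous. */
    return 0;
}
```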
Another important concept is virtual memory. The operating system can read and write memory several orders of magnitude faster than the disk. However, memory is also comparatively expensive and cannot be extended on a large scale. Therefore, the operating system can move some rarely used data out of memory and store it in a relatively cheap disk cache, thereby effectively extending memory. The operating system can also use algorithms to predict which of the data held in the disk cache will soon be read or written, and read that data back into memory in advance. The virtual-memory space is much smaller than the disk, so even searching the virtual-memory space is faster than searching the disk directly. The only case slower than going to the disk is when the required data is in neither memory nor virtual memory and, in the end, has to be read straight from the hard disk. This is why memory and virtual memory should hold data that will be read and written repeatedly; otherwise the cache loses its purpose.
Modern computers contain a dedicated translation lookaside buffer (Translation Lookaside Buffer, TLB) that performs fast translation from virtual addresses to physical addresses. Two further concepts are related to memory/virtual memory:
(1) Resident set
When a process is running, the operating system does not load all of the process's data into memory at once; it loads only the part currently in use plus the data it expects will be used, while the rest may sit in virtual memory, the swap area, or the file system on disk. The part loaded into memory is the resident set.
(2) Thrashing
Because the resident set contains the data expected to be used, ideally the data a process needs will be loaded into the resident set step by step as it runs. In practice this is often not the case: whenever a needed memory page (page) is not in the resident set, the operating system must read the data from virtual memory or the hard disk, an event called a page fault (page fault). When the operating system spends a large amount of time handling page faults, the situation is called thrashing.
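One way to observe page-fault activity from user space, assuming a Linux/POSIX system, is getrusage, which separates minor faults (served without disk I/O) from major faults (a page had to be read from disk); a thrashing workload shows a rapidly growing major-fault count.

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        /* ru_minflt: page faults resolved without disk I/O
           ru_majflt: page faults that required reading a page from disk */
        printf("minor page faults: %ld\n", ru.ru_minflt);
        printf("major page faults: %ld\n", ru.ru_majflt);
    }
    return 0;
}
```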
File system
A UNIX-style file system manages files with a tree structure. Each node holds multiple pointers to the disk locations of the next level of nodes or files. A file node also carries the file's metadata (metadata), including its modification time, access permissions, and so on. Users' access rights are implemented through the access control list (Access Control List) and the capability list (Capability List). The former takes the file's perspective and records what each user may do with the file; the latter takes the user's perspective and records which files a user may operate on and with what permissions.
UNIX file permissions are divided into read, write, and execute, and users are grouped into the file's owner, the owner's group, and all other users. Permissions for these three groups of users can be set with commands.
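A minimal sketch of inspecting and setting the three permission groups programmatically, assuming a POSIX system; example.txt is a placeholder file name.

```c
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    const char *path = "example.txt";   /* placeholder file for illustration */
    struct stat st;
    if (stat(path, &st) == 0) {
        /* Print the owner/group/other read-write-execute bits, e.g. 644. */
        printf("permissions of %s: %o\n", path, (unsigned)(st.st_mode & 0777));
    }
    /* Grant read+write to the owner and read-only to group and others,
       the equivalent of `chmod 644 example.txt`. */
    chmod(path, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
    return 0;
}
```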
Real-time vs. time-sharing operating system
Operating systems can be divided into real-time operating systems (Real-Time System) and time-sharing operating systems (Time-Sharing System). Ordinary computers use time sharing: multiple processes/users share the CPU, which is where multitasking comes from. Scheduling among individual users/processes is not especially precise; if a process is blocked on a lock, it can simply be given more time later. A real-time operating system is different: software and hardware must meet strict deadlines, and a process that exceeds its deadline may be terminated outright. In such an operating system, every lock must be considered carefully.
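A minimal sketch of asking for real-time scheduling for the current process, assuming Linux; SCHED_FIFO normally requires root (or CAP_SYS_NICE), and the priority value 50 is an arbitrary choice for illustration.

```c
#include <sched.h>
#include <stdio.h>

int main(void) {
    /* Request the real-time FIFO policy; on a standard time-sharing desktop
       this fails with EPERM unless the process has sufficient privileges. */
    struct sched_param param = { .sched_priority = 50 };
    if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("now running under SCHED_FIFO\n");
    return 0;
}
```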
Compiler
Code written in a high-level language must be compiled before it can run. Compilation is performed by a compiler (Compiler) and is the process of turning program source code into binary machine code, which the computer can execute directly. During compilation, the compiler performs lexical analysis (Lexical Analysis), parsing (Parsing), intermediate code generation (Intermediate Code Generation), and other steps. The quality of the compiler directly affects the execution efficiency of the final code.