
03. The third design philosophy of the Go language: concurrency

Brother Huanxi · 2021-01-21 16:14:33


The principles behind Go's native concurrency

1) At the implementation level, Go supports concurrent execution and scheduling on multi-core hardware

When we talk about concurrent execution and scheduling, the first thing that comes to mind is how the operating system schedules processes and threads. The OS scheduler dispatches the threads in the system onto physical CPUs according to some algorithm. Traditional programming languages such as C and C++ build their concurrency on top of this operating-system scheduling: the program is responsible for creating threads (usually through libraries such as pthread), and the operating system is responsible for scheduling them. This traditional approach to concurrency has quite a few shortcomings:

Complexity

  • Easy to create, hard to exit: C developers know that creating a thread (for example with pthread) takes quite a few parameters, but that is acceptable. The hard part is exiting: is the thread detached, or does the parent thread need to join it? Do you need to set cancellation points inside the thread so that join can return cleanly?
  • Communication between concurrent units is difficult and error-prone: there are many mechanisms threads can use to communicate, but all of them are fairly complicated; once shared memory is involved you need all kinds of locks, and deadlocks become routine.
  • Thread stack size: should you keep the default, set it larger, or set it smaller?

Poor scalability

  • A thread is much cheaper than a process, but we still cannot create threads in huge numbers: each thread consumes a significant amount of resources, and the cost the operating system pays to switch between threads is not small either.
  • For many web services, since we cannot create a large number of threads, we must do network multiplexing on a small number of threads, i.e. use mechanisms such as epoll/kqueue/IoCompletionPort. Even with third-party libraries like libevent and libev to help, writing such programs is not easy: they end up full of callbacks, which places a heavy mental burden on programmers.

To solve these problems, Go uses lightweight, user-level threads, or coroutine-like constructs, which it calls goroutines. A goroutine consumes very few resources: the default stack size of each goroutine is 2 KB, and switching between goroutines does not require trapping into the operating system kernel, so the cost is very low. A single Go program can therefore create tens of thousands of concurrent goroutines. All Go code runs in goroutines, including the Go runtime itself. The component that schedules these goroutines onto the "CPU" according to some algorithm is called the goroutine scheduler.
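As a quick illustration of how cheap goroutines are, here is a minimal sketch (my own, not from the article) that starts 10,000 goroutines with a sync.WaitGroup and waits for all of them to finish; the squared-number "work" is just a placeholder.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 10000 // tens of thousands of goroutines are perfectly affordable

	var wg sync.WaitGroup
	results := make([]int, n)

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) { // "go + function call" starts a goroutine
			defer wg.Done()
			results[id] = id * id // placeholder work
		}(i)
	}

	wg.Wait() // returning from the function is how each goroutine exits
	fmt.Println("all", n, "goroutines finished, last result:", results[n-1])
}
```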

However, a Go program is just a user-level program as far as the operating system is concerned; all the OS sees are threads, and it does not even know that something called a goroutine exists. Scheduling goroutines so that they compete "fairly" for "CPU" resources inside a Go process is entirely Go's own responsibility, and that task falls on the Go runtime.
Go implements the G-P-M scheduling model together with a work-stealing algorithm; this model is still in use today, as shown in the figure below:

[Figure: the G-P-M scheduling model (coroutines.jpeg)]
G: represents a goroutine. It stores the goroutine's execution stack, state, task function, and other information; G objects are reusable.

P: represents a logical processor. The number of Ps determines the maximum number of Gs that can run in parallel (provided the number of physical CPU cores is >= the number of Ps). P's most important role is that it owns the various queues, linked lists, caches, and state needed to run Gs. Before a G can actually run, it must first be assigned to a P (placed into that P's local runq). From G's point of view, P is the "CPU" that runs it.

M: represents a real execution resource, and generally corresponds to an operating-system thread. From the goroutine scheduler's point of view, the real "CPU" is M; only when a P is bound to an M can the Gs in that P's runq actually run. The relationship between P and M is like the (N x M) mapping between user threads and kernel threads at the Linux scheduling level. Once an M is bound to a valid P, it enters a schedule loop: roughly, it fetches a G from the various queues and from P's local queue, switches onto G's execution stack, runs G's function, calls goexit to clean up, and returns to M, over and over again. M does not keep G's state, which is what allows a G to be scheduled across different Ms.
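The number of Ps can be inspected from user code. The short sketch below (my own illustration, not from the article) prints the number of logical CPUs, the current GOMAXPROCS value, which is effectively the number of Ps, and the number of live goroutines.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Println("logical CPUs:", runtime.NumCPU())

	// GOMAXPROCS(0) only queries the current setting, i.e. the number of Ps.
	fmt.Println("GOMAXPROCS (number of Ps):", runtime.GOMAXPROCS(0))

	// Number of goroutines currently alive (at least main's own goroutine).
	fmt.Println("goroutines:", runtime.NumGoroutine())
}
```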

2) Go provides developers with syntax elements and mechanisms that support concurrency

Let's first look at how programming languages designed and born in the single-core era, such as C, C++, and Java, support concurrency at the level of syntax elements and mechanisms:

  • Execution unit: the thread;
  • Creation and destruction: by calling library functions or object methods;
  • Communication between concurrent threads: mostly based on IPC mechanisms such as shared memory, sockets, and pipes; global variables protected against concurrent access are also used.

Compared with the traditional languages above, Go gives developers concurrency syntax elements and mechanisms that are built into the language itself:

  • Execution unit: the goroutine;
  • Creation and destruction: go + a function call creates a goroutine, and returning from that function is how the goroutine exits;
  • Communication between concurrent goroutines: messages are passed and synchronization is done through the built-in channel type, and select provides concurrency control over multiple channels (see the sketch after this list).
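To make these three elements concrete, here is a minimal sketch of my own (the channel names, the sleep durations, and the one-second timeout are assumptions for illustration, not from the article): two goroutines are started with go, each sends a result over its own channel, and select reacts to whichever channel is ready first, with a timeout as a third case.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	fast := make(chan string)
	slow := make(chan string)

	// "go + function call" creates the goroutines; returning ends them.
	go func() { time.Sleep(10 * time.Millisecond); fast <- "fast result" }()
	go func() { time.Sleep(50 * time.Millisecond); slow <- "slow result" }()

	for received := 0; received < 2; received++ {
		// select waits on multiple channels at once.
		select {
		case msg := <-fast:
			fmt.Println("got:", msg)
		case msg := <-slow:
			fmt.Println("got:", msg)
		case <-time.After(time.Second):
			fmt.Println("timed out")
			return
		}
	}
}
```

Note that each anonymous function simply returns after sending, which is all it takes for its goroutine to exit.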

By contrast, Go's native support for concurrency greatly reduces the mental burden on developers writing concurrent programs.

3) How the concurrency principle influences the way Go developers structure their programs

Because goroutines are cheap (relative to threads), the Go team encourages everyone to use goroutines to take full advantage of multi-core resources. But using goroutines does not by itself mean you are making full use of multiple cores, and using Go does not guarantee a well-designed concurrent program.
That is why Rob Pike once gave a talk titled "Concurrency is not parallelism", in which this father of the Go language explains the difference between concurrency and parallelism with pictures and examples. Rob Pike argues that:

  • Concurrency is about structure: it is a way of programming in which a program is decomposed into small pieces, each of which can execute independently; the pieces of a concurrent program usually cooperate with one another by communicating;
  • Parallelism is about execution: it means carrying out several computations at the same time.

The key point: concurrency is a way of designing the structure of a program, and it is what makes parallelism possible. That is still abstract, so let us borrow the "book-moving problem" from Rob Pike's talk to explain again what concurrency means. The problem asks us to design a scheme that lets gophers move a pile of obsolete language manuals to an incinerator and burn them as quickly as possible.
[Figure: the original, non-concurrent scheme — a single gopher does everything (coroutines1.png)]
This is clearly not a concurrent design: it does not decompose the problem at all, and everything is done sequentially, from start to finish, by a single gopher. Yet even though the scheme is not concurrent, we can still run it in parallel on multi-core hardware; we simply create a few more instances of the gopher routine (procedure):
[Figure: coroutines2.png]
[Figure: coroutines3.jpeg]
Compared with a concurrent scheme, however, this approach lacks the ability to scale out to parallel execution automatically. In the talk Rob Pike gives two concurrent schemes, that is, two ways of decomposing the problem. Both are correct; they differ only in the granularity of the decomposition.
[Figure: concurrent scheme 1 (coroutines5.png)]
Scheme 1 splits the work of the original single gopher routine across four gopher routines, each performing a different task, and each routine is simpler:

  • carry the books to the cart (loadBooksToCart);
  • move the cart to the incinerator (moveCartToIncinerator);
  • take the books out of the cart and put them into the incinerator (unloadBookIntoIncinerator);
  • return the empty cart (returnEmptyCart).

In theory, concurrent scheme 1 has four times the processing capacity of the original scheme, and the different gopher routines can execute in parallel on different processor cores, with no need to create new instances of the whole routine, as the original scheme had to, in order to achieve parallelism.
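The decomposition in scheme 1 maps naturally onto goroutines connected by channels. The sketch below is my own illustration of that idea rather than code from the article or the talk: the book and cart types, the cart capacity of three, and the stage signatures are all assumptions, and the returnEmptyCart stage is omitted to keep the example short.

```go
package main

import "fmt"

// book and cart are assumed types for this illustration.
type book struct{ title string }
type cart []book

// loadBooksToCart groups incoming books into carts of three.
func loadBooksToCart(books <-chan book, carts chan<- cart) {
	var c cart
	for b := range books {
		c = append(c, b)
		if len(c) == 3 {
			carts <- c
			c = nil
		}
	}
	if len(c) > 0 {
		carts <- c
	}
	close(carts)
}

// moveCartToIncinerator forwards full carts to the incinerator.
func moveCartToIncinerator(in <-chan cart, out chan<- cart) {
	for c := range in {
		out <- c
	}
	close(out)
}

// unloadBookIntoIncinerator burns every book in every arriving cart.
func unloadBookIntoIncinerator(carts <-chan cart) {
	for c := range carts {
		for _, b := range c {
			fmt.Println("burned:", b.title)
		}
	}
}

func main() {
	books := make(chan book)
	loaded := make(chan cart)
	arrived := make(chan cart)
	done := make(chan struct{})

	go loadBooksToCart(books, loaded)
	go moveCartToIncinerator(loaded, arrived)
	go func() { unloadBookIntoIncinerator(arrived); close(done) }()

	for i := 1; i <= 7; i++ {
		books <- book{title: fmt.Sprintf("manual %d", i)}
	}
	close(books)
	<-done
}
```

Because each stage only talks to its neighbours through channels, any stage that becomes a bottleneck could be scaled by starting more goroutines that read from and write to the same channels.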
[Figure: concurrent scheme 2, with a staging area (coroutines4.png)]
Scheme 2 adds a "staging area" and decomposes the problem at a finer granularity, with each gopher routine owning a single responsibility. Such a program still works correctly on a single-core processor (where its throughput may even be lower than that of the non-concurrent scheme), but as the number of processor cores grows, the concurrent design naturally raises processing capacity and throughput. The non-concurrent scheme, by contrast, can use only one core no matter how many are added; it cannot scale naturally, and all of this is determined by the structure of the program. The lesson: when designing the structure of a concurrent program, do not confine yourself to the processing capacity of the single-core case; aim for a structure that fully exploits multiple cores so that performance improves naturally.
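In Go, the "staging area" of scheme 2 can be modeled with a buffered channel, which decouples the producing gopher from the consuming gopher: the producer keeps working as long as the staging area is not full. A minimal sketch of that idea follows (my own, not the article's; the buffer size of 4 and the book count are arbitrary):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// The buffered channel acts as the staging area: up to 4 books
	// can pile up before the producing gopher has to wait.
	staging := make(chan string, 4)

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // consuming gopher: takes books from the staging area
		defer wg.Done()
		for b := range staging {
			fmt.Println("incinerated:", b)
		}
	}()

	for i := 1; i <= 10; i++ { // producing gopher: drops books off and keeps going
		staging <- fmt.Sprintf("manual %d", i)
	}
	close(staging)
	wg.Wait()
}
```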


Copyright notice
Author: [Brother Huanxi]. If you reprint this article, please include a link to the original. Thank you.
