Is Parallel Programming Hard, And, If So, What Can You Do About It?


Is Parallel Programming Hard, And, If So, What Can You Do About It?

First Edition, Release Candidate 8

Edited by:
Paul E. McKenney
Linux Technology Center
IBM Beaverton
[email protected]

February 25, 2014

Legal Statement

This work represents the views of the authors and does not necessarily represent the view of their employers.

Trademarks:
• IBM, zSeries, and PowerPC are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both.
• Linux is a registered trademark of Linus Torvalds.
• i386 is a trademark of Intel Corporation or its subsidiaries in the United States, other countries, or both.
• Other company, product, and service names may be trademarks or service marks of such companies.

The non-source-code text and images in this document are provided under the terms of the Creative Commons Attribution-Share Alike 3.0 United States license (http://creativecommons.org/licenses/by-sa/3.0/us/). In brief, you may use the contents of this document for any purpose, personal, commercial, or otherwise, so long as attribution to the authors is maintained. Likewise, the document may be modified, and derivative works and translations made available, so long as such modifications and derivations are offered to the public on equal terms as the non-source-code text and images in the original document.

Source code is covered by various versions of the GPL (http://www.gnu.org/licenses/gpl-2.0.html). Some of this code is GPLv2-only, as it derives from the Linux kernel, while other code is GPLv2-or-later. See the comment headers of the individual source files within the CodeSamples directory in the git archive (git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git) for the exact licenses. If you are unsure of the license for a given code fragment, you should assume GPLv2-only.

Combined work © 2005-2014 by Paul E. McKenney.

Contents

1 Introduction
  1.1 Historic Parallel Programming Difficulties
  1.2 Parallel Programming Goals
    1.2.1 Performance
    1.2.2 Productivity
    1.2.3 Generality
  1.3 Alternatives to Parallel Programming
    1.3.1 Multiple Instances of a Sequential Application
    1.3.2 Use Existing Parallel Software
    1.3.3 Performance Optimization
  1.4 What Makes Parallel Programming Hard?
    1.4.1 Work Partitioning
    1.4.2 Parallel Access Control
    1.4.3 Resource Partitioning and Replication
    1.4.4 Interacting With Hardware
    1.4.5 Composite Capabilities
    1.4.6 How Do Languages and Environments Assist With These Tasks?
  1.5 Guide to This Book
    1.5.1 Quick Quizzes
    1.5.2 Sample Source Code

2 Hardware and its Habits
  2.1 Overview
    2.1.1 Pipelined CPUs
    2.1.2 Memory References
    2.1.3 Atomic Operations
    2.1.4 Memory Barriers
    2.1.5 Cache Misses
    2.1.6 I/O Operations
  2.2 Overheads
    2.2.1 Hardware System Architecture
    2.2.2 Costs of Operations
  2.3 Hardware Free Lunch?
    2.3.1 3D Integration
    2.3.2 Novel Materials and Processes
    2.3.3 Light, Not Electrons
    2.3.4 Special-Purpose Accelerators
    2.3.5 Existing Parallel Software
  2.4 Software Design Implications

3 Tools of the Trade
  3.1 Scripting Languages
  3.2 POSIX Multiprocessing
    3.2.1 POSIX Process Creation and Destruction
    3.2.2 POSIX Thread Creation and Destruction
    3.2.3 POSIX Locking
    3.2.4 POSIX Reader-Writer Locking
  3.3 Atomic Operations
  3.4 Linux-Kernel Equivalents to POSIX Operations
  3.5 The Right Tool for the Job: How to Choose?

4 Counting
  4.1 Why Isn't Concurrent Counting Trivial?
  4.2 Statistical Counters
    4.2.1 Design
    4.2.2 Array-Based Implementation
    4.2.3 Eventually Consistent Implementation
    4.2.4 Per-Thread-Variable-Based Implementation
    4.2.5 Discussion
  4.3 Approximate Limit Counters
    4.3.1 Design
    4.3.2 Simple Limit Counter Implementation
    4.3.3 Simple Limit Counter Discussion
    4.3.4 Approximate Limit Counter Implementation
    4.3.5 Approximate Limit Counter Discussion
  4.4 Exact Limit Counters
    4.4.1 Atomic Limit Counter Implementation
    4.4.2 Atomic Limit Counter Discussion
    4.4.3 Signal-Theft Limit Counter Design
    4.4.4 Signal-Theft Limit Counter Implementation
    4.4.5 Signal-Theft Limit Counter Discussion
  4.5 Applying Specialized Parallel Counters
  4.6 Parallel Counting Discussion

5 Partitioning and Synchronization Design
  5.1 Partitioning Exercises
    5.1.1 Dining Philosophers Problem
    5.1.2 Double-Ended Queue
    5.1.3 Partitioning Example Discussion
  5.2 Design Criteria
  5.3 Synchronization Granularity
    5.3.1 Sequential Program
    5.3.2 Code Locking
    5.3.3 Data Locking
    5.3.4 Data Ownership
    5.3.5 Locking Granularity and Performance
  5.4 Parallel Fastpath
    5.4.1 Reader/Writer Locking
    5.4.2 Hierarchical Locking
    5.4.3 Resource Allocator Caches
  5.5 Beyond Partitioning
    5.5.1 Work-Queue Parallel Maze Solver
    5.5.2 Alternative Parallel Maze Solver
    5.5.3 Performance Comparison I
    5.5.4 Alternative Sequential Maze Solver
    5.5.5 Performance Comparison II
    5.5.6 Future Directions and Conclusions
  5.6 Partitioning, Parallelism, and Optimization

6 Locking
  6.1 Staying Alive
    6.1.1 Deadlock
    6.1.2 Livelock and Starvation
    6.1.3 Unfairness
    6.1.4 Inefficiency
  6.2 Types of Locks
    6.2.1 Exclusive Locks
    6.2.2 Reader-Writer Locks
    6.2.3 Beyond Reader-Writer Locks
    6.2.4 Scoped Locking
  6.3 Locking Implementation Issues
    6.3.1 Sample Exclusive-Locking Implementation Based on Atomic Exchange
    6.3.2 Other Exclusive-Locking Implementations
  6.4 Lock-Based Existence Guarantees
  6.5 Locking: Hero or Villain?
    6.5.1 Locking For Applications: Hero!
    6.5.2 Locking For Parallel Libraries: Just Another Tool
    6.5.3 Locking For Parallelizing Sequential Libraries: Villain!
  6.6 Summary

7 Data Ownership
  7.1 Multiple Processes
  7.2 Partial Data Ownership and pthreads
  7.3 Function Shipping
  7.4 Designated Thread
  7.5 Privatization
  7.6 Other Uses of Data Ownership

8 Deferred Processing
  8.1 Reference Counting
    8.1.1 Implementation of Reference-Counting Categories
    8.1.2 Hazard Pointers
    8.1.3 Linux Primitives Supporting Reference Counting
    8.1.4 Counter Optimizations
  8.2 Sequence Locks
  8.3 Read-Copy Update (RCU)
    8.3.1 Introduction to RCU
    8.3.2 RCU Fundamentals
    8.3.3 RCU Usage
    8.3.4 RCU Linux-Kernel API
    8.3.5 "Toy" RCU Implementations
    8.3.6 RCU Exercises
  8.4 Which to Choose?
  8.5 What About Updates?

9 Data Structures
  9.1 Motivating Application
  9.2 Partitionable Data Structures
    9.2.1 Hash-Table Design
    9.2.2 Hash-Table Implementation
    9.2.3 Hash-Table Performance
  9.3 Read-Mostly Data Structures
    9.3.1 RCU-Protected Hash Table Implementation
    9.3.2 RCU-Protected Hash Table Performance
    9.3.3 RCU-Protected Hash Table Discussion
  9.4 Non-Partitionable Data Structures
    9.4.1 Resizable Hash Table Design
    9.4.2 Resizable Hash Table Implementation
    9.4.3 Resizable Hash Table Discussion
    9.4.4 Other Resizable Hash Tables
  9.5 Other Data Structures
  9.6 Micro-Optimization
    9.6.1 Specialization
    9.6.2 Bits and Bytes
    9.6.3 Hardware Considerations
  9.7 Summary

10 Validation
  10.1 Introduction
    10.1.1 Where Do Bugs Come From?
    10.1.2 Required Mindset
    10.1.3 When Should Validation Start?
    10.1.4 The Open Source Way
  10.2 Tracing
  10.3 Assertions
  10.4 Static Analysis
  10.5 Code Review
    10.5.1 Inspection
    10.5.2 Walkthroughs
    10.5.3 Self-Inspection
  10.6 Probability and Heisenbugs
    10.6.1 Statistics for Discrete Testing
    10.6.2 Abusing Statistics for Discrete Testing
    10.6.3 Statistics for Continuous Testing
    10.6.4 Hunting Heisenbugs
  10.7 Performance Estimation
    10.7.1 Benchmarking
    10.7.2 Profiling
    10.7.3 Differential Profiling
    10.7.4 Microbenchmarking
    10.7.5 Isolation
    10.7.6 Detecting Interference
  10.8 Summary

11 Formal Verification
  11.1 What are Promela and Spin?
  11.2 Promela Example: Non-Atomic Increment
  11.3 Promela Example: Atomic Increment
    11.3.1 Combinatorial Explosion
  11.4 How to Use Promela
    11.4.1 Promela Peculiarities
    11.4.2 Promela Coding Tricks
  11.5 Promela Example: Locking
  11.6 Promela Example: QRCU
    11.6.1 Running the QRCU Example
    11.6.2 How Many Readers and Updaters Are Really Needed?
    11.6.3 Alternative Approach: Proof of Correctness
    11.6.4 Alternative Approach: More Capable Tools
    11.6.5 Alternative Approach: Divide and Conquer
  11.7 Promela Parable: dynticks and Preemptible RCU
    11.7.1 Introduction to Preemptible RCU and dynticks
    11.7.2 Validating Preemptible RCU and dynticks
    11.7.3 Lessons (Re)Learned
  11.8 Simplicity Avoids Formal Verification
    11.8.1 State Variables for Simplified Dynticks Interface
    11.8.2 Entering and Leaving Dynticks-Idle Mode
    11.8.3 NMIs From Dynticks-Idle Mode
    11.8.4 Interrupts From Dynticks-Idle Mode
    11.8.5 Checking For Dynticks Quiescent States
    11.8.6 Discussion
  11.9 Formal Verification and Memory Ordering
    11.9.1 Anatomy of a Litmus Test
    11.9.2 What Does This Litmus Test Mean?
    11.9.3 Running a Litmus Test
    11.9.4 CPPMEM Discussion
  11.10 Summary

12 Putting It All Together
  12.1 Counter Conundrums
    12.1.1 Counting Updates
    12.1.2 Counting Lookups
  12.2 RCU Rescues
    12.2.1 RCU and Per-Thread-Variable-Based Statistical Counters
    12.2.2 RCU and Counters for Removable I/O Devices
    12.2.3 Array and Length
    12.2.4 Correlated Fields
  12.3 Hashing Hassles
    12.3.1 Correlated Data Elements
    12.3.2 Update-Friendly Hash-Table Traversal

13 Advanced Synchronization
  13.1 Avoiding Locks
  13.2 Memory Barriers
    13.2.1 Memory Ordering and Memory Barriers
    13.2.2 If B Follows A, and C Follows B, Why Doesn't C Follow A?
    13.2.3 Variables Can Have More Than One Value
    13.2.4 What Can You Trust?
    13.2.5 Review of Locking Implementations
    13.2.6 A Few Simple Rules
    13.2.7 Abstract Memory Access Model
    13.2.8 Device Operations
    13.2.9 Guarantees
    13.2.10 What Are Memory Barriers?
    13.2.11 Locking Constraints
    13.2.12 Memory-Barrier Examples
    13.2.13 The Effects of the CPU Cache
    13.2.14 Where Are Memory Barriers Needed?
  13.3 Non-Blocking Synchronization
    13.3.1 Simple NBS
    13.3.2 NBS Discussion

14 Ease of Use
  14.1 What is Easy?
  14.2 Rusty Scale for API Design
  14.3 Shaving the Mandelbrot Set

15 Conflicting Visions of the Future
  15.1 The Future of CPU Technology Ain't What it Used to Be
    15.1.1 Uniprocessor Über Alles
    15.1.2 Multithreaded Mania
    15.1.3 More of the Same
    15.1.4 Crash Dummies Slamming into the Memory Wall
  15.2 Transactional Memory
    15.2.1 Outside World
    15.2.2 Process Modification
    15.2.3 Synchronization
    15.2.4 Discussion
  15.3 Hardware Transactional Memory
    15.3.1 HTM Benefits WRT Locking
    15.3.2 HTM Weaknesses WRT Locking
    15.3.3 HTM Weaknesses WRT Locking When Augmented
    15.3.4 Where Does HTM Best Fit In?
    15.3.5 Potential Game Changers
    15.3.6 Conclusions
  15.4 Functional Programming for Parallelism

A Important Questions
  A.1 What Does "After" Mean?
  A.2 What Time Is It?

B Synchronization Primitives
  B.1 Organization and Initialization
    B.1.1 smp_init()
  B.2 Thread Creation, Destruction, and Control
    B.2.1 create_thread()
    B.2.2 smp_thread_id()
    B.2.3 for_each_thread()
    B.2.4 for_each_running_thread()
    B.2.5 wait_thread()
    B.2.6 wait_all_threads()
    B.2.7 Example Usage
  B.3 Locking
    B.3.1 spin_lock_init()
    B.3.2 spin_lock()
    B.3.3 spin_trylock()
    B.3.4 spin_unlock()
    B.3.5 Example Usage
  B.4 Per-Thread Variables
    B.4.1 DEFINE_PER_THREAD()
    B.4.2 DECLARE_PER_THREAD()
    B.4.3 per_thread()
    B.4.4 __get_thread_var()
    B.4.5 init_per_thread()
    B.4.6 Usage Example
  B.5 Performance

C Why Memory Barriers?
  C.1 Cache Structure
  C.2 Cache-Coherence Protocols
    C.2.1 MESI States
    C.2.2 MESI Protocol Messages
    C.2.3 MESI State Diagram
    C.2.4 MESI Protocol Example
  C.3 Stores Result in Unnecessary Stalls
    C.3.1 Store Buffers
    C.3.2 Store Forwarding
    C.3.3 Store Buffers and Memory Barriers
  C.4 Store Sequences Result in Unnecessary Stalls
    C.4.1 Invalidate Queues
    C.4.2 Invalidate Queues and Invalidate Acknowledge
    C.4.3 Invalidate Queues and Memory Barriers
  C.5 Read and Write Memory Barriers
  C.6 Example Memory-Barrier Sequences
    C.6.1 Ordering-Hostile Architecture
    C.6.2 Example 1
    C.6.3 Example 2
    C.6.4 Example 3
  C.7 Memory-Barrier Instructions For Specific CPUs
    C.7.1 Alpha
    C.7.2 AMD64
    C.7.3 ARMv7-A/R
    C.7.4 IA64
    C.7.5 PA-RISC
    C.7.6 POWER / PowerPC
    C.7.7 SPARC RMO, PSO, and TSO
    C.7.8 x86
    C.7.9 zSeries
  C.8 Are Memory Barriers Forever?
  C.9 Advice to Hardware Designers

D Read-Copy Update Implementations
  D.1 Sleepable RCU Implementation
    D.1.1 SRCU Implementation Strategy
    D.1.2 SRCU API and Usage
    D.1.3 Implementation
    D.1.4 SRCU Summary
  D.2 Hierarchical RCU Overview
    D.2.1 Review of RCU Fundamentals
    D.2.2 Brief Overview of Classic RCU Implementation
    D.2.3 RCU Desiderata
    D.2.4 Towards a More Scalable RCU Implementation
    D.2.5 Towards a Greener RCU Implementation
    D.2.6 State Machine
    D.2.7 Use Cases
    D.2.8 Testing
    D.2.9 Conclusion
  D.3 Hierarchical RCU Code Walkthrough
    D.3.1 Data Structures and Kernel Parameters
    D.3.2 External Interfaces
    D.3.3 Initialization
    D.3.4 CPU Hotplug
    D.3.5 Miscellaneous Functions
    D.3.6 Grace-Period-Detection Functions
    D.3.7 Dyntick-Idle Functions
    D.3.8 Forcing Quiescent States
    D.3.9 CPU-Stall Detection
    D.3.10 Possible Flaws and Changes
  D.4 Preemptible RCU
    D.4.1 Conceptual RCU
    D.4.2 Overview of Preemptible RCU Algorithm
    D.4.3 Validation of Preemptible RCU

E Read-Copy Update in Linux
  E.1 RCU Usage Within Linux
  E.2 RCU Evolution
    E.2.1 2.6.27 Linux Kernel
    E.2.2 2.6.28 Linux Kernel
    E.2.3 2.6.29 Linux Kernel
    E.2.4 2.6.31 Linux Kernel
    E.2.5 2.6.32 Linux Kernel
    E.2.6 2.6.33 Linux Kernel
    E.2.7 2.6.34 Linux Kernel
    E.2.8 2.6.35 Linux Kernel
    E.2.9 2.6.36 Linux Kernel
    E.2.10 2.6.37 Linux Kernel
    E.2.11 2.6.38 Linux Kernel
    E.2.12 2.6.39 Linux Kernel
    E.2.13 3.0 Linux Kernel
    E.2.14 3.1 Linux Kernel
    E.2.15 3.2 Linux Kernel
    E.2.16 3.3 Linux Kernel
    E.2.17 3.4 Linux Kernel
    E.2.18 3.5 Linux Kernel
    E.2.19 3.6 Linux Kernel
    E.2.20 3.7 Linux Kernel
    E.2.21 3.8 Linux Kernel
    E.2.22 3.9 Linux Kernel
    E.2.23 3.10 Linux Kernel
    E.2.24 3.11 Linux Kernel
    E.2.25 3.12 Linux Kernel
    E.2.26 3.13 Linux Kernel

F Answers to Quick Quizzes
  F.1 Introduction
  F.2 Hardware and its Habits
  F.3 Tools of the Trade
  F.4 Counting
  F.5 Partitioning and Synchronization Design
  F.6 Locking
  F.7 Data Ownership
  F.8 Deferred Processing
  F.9 Data Structures
  F.10 Validation
  F.11 Formal Verification
  F.12 Putting It All Together
  F.13 Advanced Synchronization
  F.14 Ease of Use
  F.15 Conflicting Visions of the Future
  F.16 Important Questions
  F.17 Synchronization Primitives
  F.18 Why Memory Barriers?
  F.19 Read-Copy Update Implementations

G Glossary and Bibliography

H Credits
  H.1 Authors
  H.2 Reviewers
  H.3 Machine Owners
  H.4 Original Publications
  H.5 Figure Credits
  H.6 Other Support

Preface

The purpose of this book is to help you understand how to program shared-memory parallel machines without risking your sanity. (Or, perhaps more accurately, without much greater risk to your sanity than that incurred by non-parallel programming. Which, come to think of it, might not be saying all that much. Either way, Appendix A discusses some important questions whose answers are less intuitive in parallel programming than in sequential programming.) By describing the algorithms and designs that have worked well in the past, we hope to help you avoid at least some of the pitfalls that have beset parallel-programming projects. But you should think of this book as a foundation on which to build, rather than as a completed cathedral. Your mission, if you choose to accept, is to help make further progress in the exciting field of parallel programming—progress that should in time render this book obsolete. Parallel programming is not as hard as some say, and we hope that this book makes your parallel-programming projects easier and more fun.

In short, where parallel programming once focused on science, research, and grand-challenge projects, it is quickly becoming an engineering discipline. We therefore examine the specific tasks required for parallel programming and describe how they may be most effectively handled. In some surprisingly common special cases, they can even be automated.

This book is written in the hope that presenting the engineering discipline underlying successful parallel-programming projects will free a new generation of parallel hackers from the need to slowly and painstakingly reinvent old wheels, enabling them to instead focus their energy and creativity on new frontiers. We sincerely hope that parallel programming brings you at least as much fun, excitement, and challenge as it has brought to us!

Chapter 1: Introduction

Parallel programming has earned a reputation as one of the most difficult areas a hacker can tackle.
Papers and textbooks warn of the perils of deadlock, livelock, race conditions, non-determinism, Amdahl's-Law limits to scaling, and excessive realtime latencies. And these perils are quite real; we authors have accumulated uncounted years of experience dealing with them, and all of the emotional scars, grey hairs, and hair loss that go with such experiences.

However, new technologies that are difficult to use at introduction invariably become easier over time. For example, the once-rare ability to drive a car is now commonplace in many countries. This dramatic change came about for two basic reasons: (1) cars became cheaper and more readily available, so that more people had the opportunity to learn to drive, and (2) cars became easier to operate due to automatic transmissions, automatic chokes, automatic starters, greatly improved reliability, and a host of other technological improvements.

The same is true of a host of other technologies, including computers. It is no longer necessary to operate a keypunch in order to program. Spreadsheets allow most non-programmers to get results from their computers that would have required a team of specialists a few decades ago. Perhaps the most compelling example is web-surfing and content creation, which since the early 2000s has been easily done by untrained, uneducated people using various now-commonplace social-networking tools. As recently as 1968, such content creation was a far-out research project [Eng68], described at the time as "like a UFO landing on the White House lawn" [Gri00].

Therefore, if you wish to argue that parallel programming will remain as difficult as it is currently perceived by many to be, it is you who bears the burden of proof, keeping in mind the many centuries of counter-examples in a variety of fields of endeavor.

1.1 Historic Parallel Programming Difficulties

As indicated by its title, this book takes a different approach. Rather than complain about the difficulty of parallel programming, it instead examines the reasons why parallel programming is difficult, and then works to help the reader to overcome these difficulties. As will be seen, these difficulties have fallen into several categories, including:

1. The historic high cost and relative rarity of parallel systems.
2. The typical researcher's and practitioner's lack of experience with parallel systems.
3. The paucity of publicly accessible parallel code.
4. The lack of a widely understood engineering discipline of parallel programming.
5. The high overhead of communication relative to that of processing, even in tightly coupled shared-memory computers.

Many of these historic difficulties are well on the way to being overcome. First, over the past few decades, the cost of parallel systems has decreased from many multiples of that of a house to a fraction of that of a bicycle, courtesy of Moore's Law. Papers calling out the advantages of multicore CPUs were published as early as 1996 [ONH+96]. IBM introduced simultaneous multi-threading into its high-end POWER family in 2000, and multicore in 2001. Intel introduced hyperthreading into its commodity Pentium line in November 2000, and both AMD and Intel introduced dual-core CPUs in 2005. Sun followed with the multicore/multi-threaded Niagara in late 2005. In fact, by 2008, it was becoming difficult to find a single-CPU desktop system, with single-core CPUs being relegated to netbooks and embedded devices. By 2012, even smartphones were starting to sport multiple CPUs.
Second, the advent of low-cost and readily available multicore systems means that the once-rare experience of parallel programming is now available to almost all researchers and practitioners. In fact, parallel systems are now well within the budget of students and hobbyists. We can therefore expect greatly increased levels of invention and innovation surrounding parallel systems, and that increased familiarity will over time make the once prohibitively expensive field of parallel programming much more friendly and commonplace.

Third, in the 20th century, large systems of highly parallel software were almost always closely guarded proprietary secrets. In happy contrast, the 21st century has seen numerous open-source (and thus publicly available) parallel software projects, including the Linux kernel [Tor03c], database systems [Pos08, MS08], and message-passing systems [The08, UoC08]. This book will draw primarily from the Linux kernel, but will provide much material suitable for user-level applications.

Fourth, even though the large-scale parallel-programming projects of the 1980s and 1990s were almost all proprietary projects, these projects have seeded the community with a cadre of developers who understand the engineering discipline required to develop production-quality parallel code. A major purpose of this book is to present this engineering discipline.

Unfortunately, the fifth difficulty, the high cost of communication relative to that of processing, remains largely in force. Although this difficulty has been receiving increasing attention during the new millennium, according to Stephen Hawking, the finite speed of light and the atomic nature of matter are likely to limit progress in this area [Gar07, Moo03]. Fortunately, this difficulty has been in force since the late 1980s, so that the aforementioned engineering discipline has evolved practical and effective strategies for handling it. In addition, hardware designers are increasingly aware of these issues, so perhaps future hardware will be more friendly to parallel software, as discussed in Section 2.3.

Quick Quiz 1.1: Come on now!!! Parallel programming has been known to be exceedingly hard for many decades. You seem to be hinting that it is not so hard. What sort of game are you playing?

However, even though parallel programming might not be as hard as is commonly advertised, it is often more work than is sequential programming.

Quick Quiz 1.2: How could parallel programming ever be as easy as sequential programming?

It therefore makes sense to consider alternatives to parallel programming. However, it is not possible to reasonably consider parallel-programming alternatives without understanding parallel-programming goals. This topic is addressed in the next section.

1.2 Parallel Programming Goals

The three major goals of parallel programming (over and above those of sequential programming) are as follows:

1. Performance.
2. Productivity.
3. Generality.

Quick Quiz 1.3: Oh, really??? What about correctness, maintainability, robustness, and so on?

Quick Quiz 1.4: And if correctness, maintainability, and robustness don't make the list, why do productivity and generality?

Quick Quiz 1.5: Given that parallel programs are much harder to prove correct than are sequential programs, again, shouldn't correctness really be on the list?

Quick Quiz 1.6: What about just having fun?

Each of these goals is elaborated upon in the following sections.
1.2.1 Performance

Performance is the primary goal behind most parallel-programming effort. After all, if performance is not a concern, why not do yourself a favor: Just write sequential code, and be happy? It will very likely be easier and you will probably get done much more quickly.

Quick Quiz 1.7: Are there no cases where parallel programming is about something other than performance?

Note that "performance" is interpreted quite broadly here, including scalability (performance per CPU) and efficiency (for example, performance per watt).

That said, the focus of performance has shifted from hardware to parallel software. This change in focus is due to the fact that, although Moore's Law continues to deliver increases in transistor density, it has ceased to provide the traditional single-threaded performance increases. This can be seen in Figure 1.1, which shows that writing single-threaded code and simply waiting a year or two for the CPUs to catch up may no longer be an option. Given the recent trends on the part of all major manufacturers towards multicore/multithreaded systems, parallelism is the way to go for those wanting to avail themselves of the full performance of their systems.

Figure 1.1: MIPS/Clock-Frequency Trend for Intel CPUs

(This plot shows clock frequencies for newer CPUs theoretically capable of retiring one or more instructions per clock, and MIPS, millions of instructions per second, usually from the old Dhrystone benchmark, for older CPUs requiring multiple clocks to execute even the simplest instruction. The reason for shifting between these two measures is that the newer CPUs' ability to retire multiple instructions per clock is typically limited by memory-system performance. Furthermore, the benchmarks commonly used on the older CPUs are obsolete, and it is difficult to run the newer benchmarks on systems containing the old CPUs, in part because it is hard to find working instances of the old CPUs.)

Even so, the first goal is performance rather than scalability, especially given that the easiest way to attain linear scalability is to reduce the performance of each CPU [Tor01]. Given a four-CPU system, which would you prefer? A program that provides 100 transactions per second on a single CPU, but does not scale at all? Or a program that provides 10 transactions per second on a single CPU, but scales perfectly? The first program seems like a better bet, though the answer might change if you happened to be one of the lucky few with access to a 32-CPU system.
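Working the arithmetic explicitly (a hypothetical calculation using only the throughput figures above, writing n for the number of CPUs and assuming perfectly linear scaling):

\[
T_{\mathrm{scalable}}(n) = 10\,n \ \mathrm{transactions/s}, \qquad
T_{\mathrm{scalable}}(4) = 40 < 100, \qquad
T_{\mathrm{scalable}}(32) = 320 > 100,
\]

so the perfectly scaling program overtakes the non-scalable one only on systems with more than ten CPUs.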
That said, just because you have multiple CPUs is not necessarily in and of itself a reason to use them all, especially given the recent decreases in price of multi-CPU systems. The key point to understand is that parallel programming is primarily a performance optimization, and, as such, it is one potential optimization of many. If your program is fast enough as currently written, there is no reason to optimize, either by parallelizing it or by applying any of a number of potential sequential optimizations. (Of course, if you are a hobbyist whose primary interest is writing parallel software, that is more than enough reason to parallelize whatever software you are interested in.) By the same token, if you are looking to apply parallelism as an optimization to a sequential program, then you will need to compare parallel algorithms to the best sequential algorithms. This may require some care, as far too many publications ignore the sequential case when analyzing the performance of parallel algorithms.

1.2.2 Productivity

Quick Quiz 1.8: Why all this prattling on about non-technical issues??? And not just any non-technical issue, but productivity of all things? Who cares?

Productivity has become increasingly important in recent decades. To see this, consider that the price of early computers was tens of millions of dollars at a time when engineering salaries were but a few thousand dollars a year. If dedicating a team of ten engineers to such a machine would improve its performance, even by only 10%, then their salaries would be repaid many times over.

One such machine was the CSIRAC, the oldest still-intact stored-program computer, put into operation in 1949 [Mus04, Mel06]. Because this machine was built before the transistor era, it was constructed of 2,000 vacuum tubes, ran with a clock frequency of 1kHz, consumed 30kW of power, and weighed more than three metric tons. Given that this machine had but 768 words of RAM, it is safe to say that it did not suffer from the productivity issues that often plague today's large-scale software projects.

Today, it would be quite difficult to purchase a machine with so little computing power. Perhaps the closest equivalents are 8-bit embedded microprocessors exemplified by the venerable Z80 [Wik08], but even the old Z80 had a CPU clock frequency more than 1,000 times faster than the CSIRAC. The Z80 CPU had 8,500 transistors, and could be purchased in 2008 for less than $2 US per unit in 1,000-unit quantities. In stark contrast to the CSIRAC, software-development costs are anything but insignificant for the Z80.

The CSIRAC and the Z80 are two points in a long-term trend, as can be seen in Figure 1.2. This figure plots an approximation to computational power per die over the past three decades, showing a consistent four-order-of-magnitude increase. Note that the advent of multicore CPUs has permitted this increase to continue unabated despite the clock-frequency wall encountered in 2003.

Figure 1.2: MIPS per Die for Intel CPUs

One of the inescapable consequences of the rapid decrease in the cost of hardware is that software productivity becomes increasingly important. It is no longer sufficient merely to make efficient use of the hardware: It is now necessary to make extremely efficient use of software developers as well. This has long been the case for sequential hardware, but parallel hardware has become a low-cost commodity only recently. Therefore, only recently has high productivity become critically important when creating parallel software.

Quick Quiz 1.9: Given how cheap parallel hardware has become, how can anyone afford to pay people to program it?

Perhaps at one time, the sole purpose of parallel software was performance.
Now, however, productivity is gaining the spotlight.

1.2.3 Generality

One way to justify the high cost of developing parallel software is to strive for maximal generality. All else being equal, the cost of a more-general software artifact can be spread over more users than that of a less-general one. Unfortunately, generality often comes at the cost of performance, productivity, or both. To see this, consider the following popular parallel programming environments:

C/C++ "Locking Plus Threads": This category, which includes POSIX Threads (pthreads) [Ope97], Windows Threads, and numerous operating-system kernel environments, offers excellent performance (at least within the confines of a single SMP system) and also offers good generality. Pity about the relatively low productivity.

Java: This general-purpose and inherently multithreaded programming environment is widely believed to offer much higher productivity than C or C++, courtesy of the automatic garbage collector and the rich set of class libraries. However, its performance, though greatly improved in the early 2000s, lags that of C and C++.

MPI: This Message Passing Interface [MPI08] powers the largest scientific and technical computing clusters in the world and offers unparalleled performance and scalability. In theory, it is general purpose, but it is mainly used for scientific and technical computing. Its productivity is believed by many to be even lower than that of C/C++ "locking plus threads" environments.

OpenMP: This set of compiler directives can be used to parallelize loops. It is thus quite specific to this task, and this specificity often limits its performance. It is, however, much easier to use than MPI or C/C++ "locking plus threads."

SQL: Structured Query Language [Int92] is specific to relational database queries. However, its performance is quite good as measured by the Transaction Processing Performance Council (TPC) benchmark results [Tra01]. Productivity is excellent; in fact, this parallel programming environment enables people to make good use of a large parallel system despite having little or no knowledge of parallel programming concepts.

The nirvana of parallel programming environments, one that offers world-class performance, productivity, and generality, simply does not yet exist. Until such a nirvana appears, it will be necessary to make engineering tradeoffs among performance, productivity, and generality. One such tradeoff is shown in Figure 1.3, which shows how productivity becomes increasingly important at the upper layers of the system stack, while performance and generality become increasingly important at the lower layers of the system stack. The huge development costs incurred at the lower layers must be spread over equally huge numbers of users (hence the importance of generality), and performance lost in lower layers cannot easily be recovered further up the stack. In the upper layers of the stack, there might be very few users for a given specific application, in which case productivity concerns are paramount. This explains the tendency towards "bloatware" further up the stack: extra hardware is often cheaper than the extra developers. This book is intended for developers working near the bottom of the stack, where performance and generality are of great concern.

Figure 1.3: Software Layers and Performance, Productivity, and Generality

It is important to note that a tradeoff between productivity and generality has existed for centuries in many fields. For but one example, a nailgun is more productive than a hammer for driving nails, but in contrast to the nailgun, a hammer can be used for many things besides driving nails. It should therefore be no surprise to see similar tradeoffs appear in the field of parallel computing. This tradeoff is shown schematically in Figure 1.4. Here, users 1, 2, 3, and 4 have specific jobs that they need the computer to help them with. The most productive possible language or environment for a given user is one that simply does that user's job, without requiring any programming, configuration, or other setup.

Figure 1.4: Tradeoff Between Productivity and Generality

Quick Quiz 1.10: This is a ridiculously unachievable ideal! Why not focus on something that is achievable in practice?

Unfortunately, a system that does the job required by user 1 is unlikely to do user 2's job. In other words, the most productive languages and environments are domain-specific, and thus by definition lacking generality. Another option is to tailor a given programming language or environment to the hardware system (for example, low-level languages such as assembly, C, C++, or Java) or to some abstraction (for example, Haskell, Prolog, or Snobol), as is shown by the circular region near the center of Figure 1.4. These languages can be considered to be general in the sense that they are equally ill-suited to the jobs required by users 1, 2, 3, and 4. In other words, their generality is purchased at the expense of decreased productivity when compared to domain-specific languages and environments. Worse yet, a language that is tailored to a given abstraction is also likely to suffer from performance and scalability problems unless and until someone figures out how to efficiently map that abstraction to real hardware.

With the three often-conflicting parallel-programming goals of performance, productivity, and generality in mind, it is now time to look into avoiding these conflicts by considering alternatives to parallel programming.

1.3 Alternatives to Parallel Programming

In order to properly consider alternatives to parallel programming, you must first decide on what exactly you expect the parallelism to do for you. As seen in Section 1.2, the primary goals of parallel programming are performance, productivity, and generality. Because this book is intended for developers working on performance-critical code near the bottom of the software stack, the remainder of this section focuses primarily on performance improvement.

It is important to keep in mind that parallelism is but one way to improve performance.
Other well-known approaches include the following, in roughly increasing order of difficulty: 1. Run multiple instances of a sequential application. 2. Make the application use existing parallel software. 3. Apply performance optimization to the serial application. These approaches are covered in the following sections. 1.3.1 Multiple Instances of a Sequential Application Running multiple instances of a sequential application can allow you to do parallel programming without actually doing parallel programming. There are a large number of ways to approach this, depending on the structure of the application. If your program is analyzing a large number of different scenarios, or is analyzing a large number of independent data sets, one easy and effective approach is to create a single sequential program that carries out a single analysis, then use any of a number of  scripting environments (for example the  bash  shell) to run a number of instances of  that sequential program in parallel. In some cases, this approach can be easily extended to a cluster of machines. This approach may seem like cheating, and in fact some denigrate such programs as “embarrassingly parallel”. And in fact, this approach does have some potential 8 disadvantages, including increased memory consumption, waste of CPU cycles recom- puting common intermediate results, and increased copying of data. However, it is often extremely productive, garnering extreme performance gains with little or no added effort. 1.3.2 Use Existing Parallel Software There is no longer any shortage of parallel software environments that can present a single-threaded programming environment, including relational databases [ Dat82 ], web-application servers, and map-reduce environments. For example, a common design provides a separate program for each user, each of which generates SQL programs. Theseper-userSQLprogramsarerunconcurrentlyagainstacommonrelationaldatabase, which automatically runs the users’ queries concurrently. The per-user programs are responsible only for the user interface, with the relational database taking full responsi- bility for the difficult issues surrounding parallelism and persistence. Taking this approach often sacrifices some performance, at least when compared to carefully hand-coding a fully parallel application. However, such sacrifice is often  justified given the huge reduction in development effort required. 1.3.3 Performance Optimization Up through the early 2000s, CPU performance was doubling every 18 months. In such an environment, it is often much more important to create new functionality than to do careful performance optimization. Now that Moore’s Law is “only” increasing transistor density instead of increasing both transistor density and per-transistor performance, it might be a good time to rethink the importance of performance optimization. After all, new hardware generations no longer bring significant single-threaded performance improvements. Furthermore, many performance optimizations can also conserve energy. From this viewpoint, parallel programming is but another performance optimization, albeit one that is becoming much more attractive as parallel systems become cheaper and more readily available. However, it is wise to keep in mind that the speedup available from parallelism is limited to roughly the number of CPUs. In contrast, the speedup available from traditional single-threaded software optimizations can be much larger. 
For example, replacing a long linked list with either a hash table or a search tree can improve performance by many orders of magnitude. This highly optimized single-threaded program might run much faster than its unoptimized parallel counterpart, making parallelization unnecessary. Of course, a highly optimized parallel program would be even better, give or take the added development effort required.

Furthermore, different programs might have different performance bottlenecks. For example, if your program spends most of its time waiting on data from your disk drive, using multiple CPUs will probably just increase the time wasted waiting for the disks. In fact, if the program was reading from a single large file laid out sequentially on a rotating disk, parallelizing your program might well make it a lot slower due to the added seek overhead. You should instead optimize the data layout so that the file can be smaller (thus faster to read), split the file into chunks which can be accessed in parallel from different drives, cache frequently accessed data in main memory, or, if possible, reduce the amount of data that must be read.

Quick Quiz 1.11: What other bottlenecks might prevent additional CPUs from providing additional performance?

Figure 1.5: Categories of Tasks Required of Parallel Programmers (work partitioning, parallel access control, resource partitioning and replication, and interacting with hardware)

Parallelism can be a powerful optimization technique, but it is not the only such technique, nor is it appropriate for all situations. Of course, the easier it is to parallelize your program, the more attractive parallelization becomes as an optimization. Parallelization has a reputation of being quite difficult, which leads to the question "exactly what makes parallel programming so difficult?"

1.4 What Makes Parallel Programming Hard?

It is important to note that the difficulty of parallel programming is as much a human-factors issue as it is a set of technical properties of the parallel programming problem. We do need human beings to be able to tell parallel systems what to do, otherwise known as programming. But parallel programming involves two-way communication, with a program's performance and scalability being the communication from the machine to the human. In short, the human writes a program telling the computer what to do, and the computer critiques this program via the resulting performance and scalability. Therefore, appeals to abstractions or to mathematical analyses will often be of severely limited utility.

In the Industrial Revolution, the interface between human and machine was evaluated by human-factor studies, then called time-and-motion studies. Although there have been a few human-factor studies examining parallel programming [ENS05, ES05, HCS+05, SS94], these studies have been extremely narrowly focused, and hence unable to demonstrate any general results. Furthermore, given that the normal range of programmer productivity spans more than an order of magnitude, it is unrealistic to expect an affordable study to be capable of detecting (say) a 10% difference in productivity. Although the multiple-order-of-magnitude differences that such studies can reliably detect are extremely valuable, the most impressive improvements tend to be based on a long series of 10% improvements. We must therefore take a different approach.
One such approach is to carefully consider the tasks that parallel programmers must undertake that are not required of sequential programmers. We can then evaluate how well a given programming language or environment assists the developer with these tasks. These tasks fall into the four categories shown in Figure  1.5,  each of which is covered in the following sections. 10 1.4.1 Work Partitioning Work partitioning is absolutely required for parallel execution: if there is but one “glob” of work, then it can be executed by at most one CPU at a time, which is by definition sequential execution. However, partitioning the code requires great care. For example, uneven partitioning can result in sequential execution once the small partitions have completed [ Amd67 ]. In less extreme cases, load balancing can be used to fully utilize available hardware and restore performance and scalabilty. Although partitioning can greatly improve performance and scalability, it can also increase complexity. For exmample, partitioning can complicate handling of global errors and events: A parallel program may need to carry out non-trivial synchronization in order to safely process such global events. More generally, each partition requires some sort of communication: After all, if a given thread did not communicate at all, it would have no effect and would thus not need to be executed. However, because communication incurs overhead, careless partitioning choices can result in severe performance degradation. Furthermore, the number of concurrent threads must often be controlled, as each such thread occupies common resources, for example, space in CPU caches. If too many threads are permitted to execute concurrently, the CPU caches will overflow, resulting in high cache miss rate, which in turn degrades performance. Conversely, large numbers of threads are often required to overlap computation and I/O so as to fully utilize I/O devices. Quick Quiz 1.12:  Other than CPU cache capacity, what might require limiting the number of concurrent threads? Finally, permitting threads to execute concurrently greatly increases the program’s state space, which can make the program difficult to understand and debug, degrading productivity. All else being equal, smaller state spaces having more regular structure are more easily understood, but this is a human-factors statement as much as it is a technical or mathematical statement. Good parallel designs might have extremely large state spaces, but nevertheless be easy to understand due to their regular structure, while poor designs can be impenetrable despite having a comparatively small state space. The best designs exploit embarrassing parallelism, or transform the problem to one having an embarrassingly parallel solution. In either case, “embarrassingly parallel” is in fact an embarrassment of riches. The current state of the art enumerates good designs; more work is required to make more general judgments on state-space size and structure. 1.4.2 Parallel Access Control Given a single-threaded sequential program, that single thread has full access to all of  the program’s resources. These resources are most often in-memory data structures, but can be CPUs, memory (including caches), I/O devices, computational accelerators, files, and much else besides. The first parallel-access-control issue is whether the form of the access to a given resource depends on that resource’s location. 
For example, in many message-passing environments, local-variable access is via expressions and assignments, while remote- variable access uses an entirely different syntax, usually involving messaging. The POSIX Threads environment [ Ope97 ], Structured Query Language (SQL)  [ Int92 ], and partitioned global address-space (PGAS) environments such as Universal Parallel C (UPC) [ EGCD03 ] offer implicit access, while Message Passing Interface (MPI)  [ MPI08 ] offers explicit access because access to remote data requires explicit messaging. 11 The other parallel-access-control issue is how threads coordinate access to the re- sources. This coordination is carried out by the very large number of synchronization mechanisms provided by various parallel languages and environments, including mes- sage passing, locking, transactions, reference counting, explicit timing, shared atomic variables, and data ownership. Many traditional parallel-programming concerns such as deadlock, livelock, and transaction rollback stem from this coordination. This frame- work can be elaborated to include comparisons of these synchronization mechanisms, for example locking vs. transactional memory  [ MMW07 ] , but such elaboration is beyond the scope of this section. 1.4.3 Resource Partitioning and Replication The most effective parallel algorithms and systems exploit resource parallelism, so much so that it is usually wise to begin parallelization by partitioning your write-intensive resources and replicating frequently accessed read-mostly resources. The resource in question is most frequently data, which might be partitioned over computer systems, mass-storage devices, NUMA nodes, CPU cores (or dies or hardware threads), pages, cache lines, instances of synchronization primitives, or critical sections of code. For example, partitioning over locking primitives is termed “data locking”  [BK85 ]. Resource partitioning is frequently application dependent. For example, numerical applications frequently partition matrices by row, column, or sub-matrix, while com- mercial applications frequently partition write-intensive data structures and replicate read-mostly data structures. Thus, a commercial application might assign the data for a given customer to a given few computer out of a large cluster. An application might statically partition data, or dynamically change the partitioning over time. Resource partitioning is extremely effective, but it can be quite challenging for complex multilinked data structures. 1.4.4 Interacting With Hardware Hardware interaction is normally the domain of the operating system, the compiler, libraries, or other software-environment infrastructure. However, developers working with novel hardware features and components will often need to work directly with such hardware. In addition, direct access to the hardware can be required when squeezing the last drop of performance out of a given system. In this case, the developer may need to tailor or configure the application to the cache geometry, system topology, or interconnect protocol of the target hardware. In some cases, hardware may be considered to be a resource which is subject to partitioning or access control, as described in the previous sections. 1.4.5 Composite Capabilities Although these four capabilities are fundamental, good engineering practice uses com- posites of these capabilities. 
For example, the data-parallel approach first partitions the data so as to minimize the need for inter-partition communication, partitions the code accordingly, and finally maps data partitions and threads so as to maximize throughput while minimizing inter-thread communication, as shown in Figure 1.6. The developer can then consider each partition separately, greatly reducing the size of the relevant state space, in turn increasing productivity. Even though some problems are non-partitionable, clever transformations into forms permitting partitioning can sometimes greatly enhance both performance and scalability [Met99].

Figure 1.6: Ordering of Parallel-Programming Tasks

1.4.6 How Do Languages and Environments Assist With These Tasks?

Although many environments require the developer to deal manually with these tasks, there are long-standing environments that bring significant automation to bear. The poster child for these environments is SQL, many implementations of which automatically parallelize single large queries and also automate concurrent execution of independent queries and updates.

These four categories of tasks must be carried out in all parallel programs, but that of course does not necessarily mean that the developer must manually carry out these tasks. We can expect to see ever-increasing automation of these four tasks as parallel systems continue to become cheaper and more readily available.

Quick Quiz 1.13: Are there any other obstacles to parallel programming?

1.5 Guide to This Book

This book is not a collection of optimal algorithms with tiny areas of applicability; instead, it is a handbook of widely applicable and heavily used techniques. Of course, we could not resist the urge to include some of our favorites that have not (yet!) passed the test of time (what author could?), but we have nonetheless gritted our teeth and banished our darlings to appendices. Perhaps in time, some of them will see enough use that we can promote them into the main body of the text.

1.5.1 Quick Quizzes

"Quick quizzes" appear throughout this book, and the answers may be found in Appendix F starting on page 583. Some of them are based on material in which that quick quiz appears, but others require you to think beyond that section, and, in some cases, beyond the realm of current knowledge. As with most endeavors, what you get out of this book is largely determined by what you are willing to put into it. Therefore, readers who make a genuine effort to solve a quiz before looking at the answer find their effort repaid handsomely with increased understanding of parallel programming.

Quick Quiz 1.14: Where are the answers to the Quick Quizzes found?

Quick Quiz 1.15: Some of the Quick Quiz questions seem to be from the viewpoint of the reader rather than the author. Is that really the intent?

Quick Quiz 1.16: These Quick Quizzes just are not my cup of tea. What do you recommend?

Figure 1.7: Creating an Up-To-Date PDF
  git clone git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git
  cd perfbook
  make
  evince perfbook.pdf

Figure 1.8: Generating an Updated PDF
  git remote update
  git checkout origin/master
  make
  evince perfbook.pdf

1.5.2 Sample Source Code

This book discusses its fair share of source code, and in many cases this source code may be found in the CodeSamples directory of this book's git tree.
For example, on UNIX systems, you should be able to type:

  find CodeSamples -name rcu_rcpls.c -print

to locate the file rcu_rcpls.c, which is called out in Section 8.3.5. Other types of systems have well-known ways of locating files by filename.

The source to this book may be found in the git archive at git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git, and git itself is available as part of most mainstream Linux distributions. To create and display a current LaTeX source tree of this book, use the list of Linux commands shown in Figure 1.7. In some environments, the evince that displays perfbook.pdf may need to be replaced, for example, with acroread. The git clone command need only be used the first time you create a PDF; subsequently, you can run the commands shown in Figure 1.8 to pull in any updates and generate an updated PDF. The commands in Figure 1.8 must be run within the perfbook directory created by the commands shown in Figure 1.7.

PDFs of this book are sporadically posted at http://kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html and at http://www.rdrop.com/users/paulmck/perfbook/.

Chapter 2 Hardware and its Habits

Most people have an intuitive understanding that passing messages between systems is considerably more expensive than performing simple calculations within the confines of a single system. However, it is not always so clear that communicating among threads within the confines of a single shared-memory system can also be quite expensive. This chapter therefore looks at the cost of synchronization and communication within a shared-memory system. These few pages can do no more than scratch the surface of shared-memory parallel hardware design; readers desiring more detail would do well to start with a recent edition of Hennessy and Patterson's classic text [HP95].

Quick Quiz 2.1: Why should parallel programmers bother learning low-level properties of the hardware? Wouldn't it be easier, better, and more general to remain at a higher level of abstraction?

2.1 Overview

Careless reading of computer-system specification sheets might lead one to believe that CPU performance is a footrace on a clear track, as illustrated in Figure 2.1, where the race always goes to the swiftest.

Figure 2.1: CPU Performance at its Best

Although there are a few CPU-bound benchmarks that approach the ideal shown in Figure 2.1, the typical program more closely resembles an obstacle course than a race track. This is because the internal architecture of CPUs has changed dramatically over the past few decades, courtesy of Moore's Law. These changes are described in the following sections.

2.1.1 Pipelined CPUs

In the early 1980s, the typical microprocessor fetched an instruction, decoded it, and executed it, typically taking at least three clock cycles to complete one instruction before proceeding to the next. In contrast, the CPU of the late 1990s and early 2000s would be executing many instructions simultaneously, using a deep "pipeline" to control the flow of instructions internally to the CPU. These modern hardware features can greatly improve performance, as illustrated by Figure 2.2.

Achieving full performance with a CPU having a long pipeline requires highly predictable control flow through the program. Suitable control flow can be provided by a program that executes primarily in tight loops, for example, arithmetic on large matrices or vectors.
The CPU can then correctly predict that the branch at the end of the loop will be taken in almost all cases, allowing the pipeline to be kept full and the CPU to execute at full speed. However, given a program with many loops with small loop counts, or given an object-oriented program with many virtual objects that can reference many different real objects, all with different implementations for frequently invoked member functions, then it is difficult or even impossible for the CPU to predict where a given branch might lead. The CPU must then either stall waiting for execution to proceed far enough to know for certain where the branch will lead, or guess — and, in the face of programs with unpredictable control flow, frequently guess wrong. Wrong guesses can be very expensive because the CPU must discard the results of any instructions that were executed speculatively based on the wrong guess. In addition, regardless of whether the CPU stalls or guesses, the pipeline will empty and have to be refilled, leading to stalls that can drastically reduce performance, as fancifully depicted in Figure 2.3.

Figure 2.2: CPUs Old and New
Figure 2.3: CPU Meets a Pipeline Flush

Unfortunately, pipeline flushes are not the only hazards in the obstacle course that modern CPUs must run. The next section covers the hazards of referencing memory.

2.1.2 Memory References

In the 1980s, it often took less time for a microprocessor to load a value from memory than it did to execute an instruction. In 2006, a microprocessor might be capable of executing hundreds or even thousands of instructions in the time required to access memory. This disparity is due to the fact that Moore's Law has increased CPU performance at a much greater rate than it has increased memory performance, in part due to the rate at which memory sizes have grown. For example, a typical 1970s minicomputer might have 4KB (yes, kilobytes, not megabytes, let alone gigabytes) of main memory, with single-cycle access. 1 In 2008, CPU designers still can construct a 4KB memory with single-cycle access, even on systems with multi-GHz clock frequencies. And in fact they frequently do construct such memories, but they now call them "level-0 caches".

Although the large caches found on modern microprocessors can do quite a bit to help combat memory-access latencies, these caches require highly predictable data-access patterns to successfully hide memory latencies. Unfortunately, common operations, such as traversing a linked list, have extremely unpredictable memory-access patterns — after all, if the pattern was predictable, us software types would not bother with the pointers, right? Therefore, as shown in Figure 2.4, memory references are often severe obstacles for modern CPUs.

1 It is only fair to add that each of these single cycles consumed no less than 1.6 microseconds.

Figure 2.4: CPU Meets a Memory Reference

Thus far, we have only been considering obstacles that can arise during a given CPU's execution of single-threaded code. Multi-threading presents additional obstacles to the CPU, as described in the following sections.

2.1.3 Atomic Operations

One such obstacle is atomic operations. The whole idea of an atomic operation in some sense conflicts with the piece-at-a-time assembly-line operation of a CPU pipeline.
To hardware designers' credit, modern CPUs use a number of extremely clever tricks to make such operations look atomic even though they are in fact being executed piece-at-a-time, but even so, there are cases where the pipeline must be delayed or even flushed in order to permit a given atomic operation to complete correctly. The resulting effect on performance is depicted in Figure 2.5.

Figure 2.5: CPU Meets an Atomic Operation

Unfortunately, atomic operations usually apply only to single elements of data. Because many parallel algorithms require that ordering constraints be maintained between updates of multiple data elements, most CPUs provide memory barriers. These memory barriers also serve as performance-sapping obstacles, as described in the next section.

Quick Quiz 2.2: What types of machines would allow atomic operations on multiple data elements?

Fortunately, CPU designers have focused heavily on atomic operations, so that as of early 2012 they have greatly reduced (but by no means eliminated) their overhead.

2.1.4 Memory Barriers

Memory barriers will be considered in more detail in Section 13.2 and Appendix C. In the meantime, consider the following simple lock-based critical section:

  spin_lock(&mylock);
  a = a + 1;
  spin_unlock(&mylock);

If the CPU were not constrained to execute these statements in the order shown, the effect would be that the variable "a" would be incremented without the protection of "mylock", which would certainly defeat the purpose of acquiring it. To prevent such destructive reordering, locking primitives contain either explicit or implicit memory barriers. Because the whole purpose of these memory barriers is to prevent reorderings that the CPU would otherwise undertake in order to increase performance, memory barriers almost always reduce performance, as depicted in Figure 2.6.

Figure 2.6: CPU Meets a Memory Barrier

As with atomic operations, CPU designers have been working hard to reduce memory-barrier overhead, and have made substantial progress.

2.1.5 Cache Misses

An additional multi-threading obstacle to CPU performance is the "cache miss". As noted earlier, modern CPUs sport large caches in order to reduce the performance penalty that would otherwise be incurred due to high memory latencies. However, these caches are actually counter-productive for variables that are frequently shared among CPUs. This is because when a given CPU wishes to modify the variable, it is most likely the case that some other CPU has modified it recently. In this case, the variable will be in that other CPU's cache, but not in this CPU's cache, which will therefore incur an expensive cache miss (see Section C.1 for more detail). Such cache misses form a major obstacle to CPU performance, as shown in Figure 2.7.

Figure 2.7: CPU Meets a Cache Miss

Quick Quiz 2.3: So have CPU designers also greatly reduced the overhead of cache misses?

2.1.6 I/O Operations

A cache miss can be thought of as a CPU-to-CPU I/O operation, and as such is one of the cheapest I/O operations available. I/O operations involving networking, mass storage, or (worse yet) human beings pose much greater obstacles than the internal obstacles called out in the prior sections, as illustrated by Figure 2.8.

Figure 2.8: CPU Waits for I/O Completion

This is one of the differences between shared-memory and distributed-system parallelism: shared-memory parallel programs must normally deal with no obstacle worse than a cache miss, while a distributed parallel program will typically incur the larger network communication latencies.
In both cases, the relevant latencies can be thought of as a cost of communication—a cost that would be absent in a sequential program. Therefore, the ratio of the overhead of the communication to that of the actual work being performed is a key design parameter. A major goal of parallel hardware design is to reduce this ratio as needed to achieve the relevant performance and scalability goals. In turn, as will be seen in Chapter 5, a major goal of parallel software design is to reduce the frequency of expensive operations like communications cache misses.

Of course, it is one thing to say that a given operation is an obstacle, and quite another to show that the operation is a significant obstacle. This distinction is discussed in the following sections.

2.2 Overheads

This section presents actual overheads of the obstacles to performance listed out in the previous section. However, it is first necessary to get a rough view of hardware system architecture, which is the subject of the next section.

2.2.1 Hardware System Architecture

Figure 2.9 shows a rough schematic of an eight-core computer system. Each die has a pair of CPU cores, each with its cache, as well as an interconnect allowing the pair of CPUs to communicate with each other. The system interconnect in the middle of the diagram allows the four dies to communicate, and also connects them to main memory.

Data moves through this system in units of "cache lines", which are power-of-two fixed-size aligned blocks of memory, usually ranging from 32 to 256 bytes in size. When a CPU loads a variable from memory to one of its registers, it must first load the cacheline containing that variable into its cache. Similarly, when a CPU stores a value from one of its registers into memory, it must not only load the cacheline containing that variable into its cache, but must also ensure that no other CPU has a copy of that cacheline.

For example, if CPU 0 were to perform a compare-and-swap (CAS) operation on a variable whose cacheline resided in CPU 7's cache, the following over-simplified sequence of events might ensue:

1. CPU 0 checks its local cache, and does not find the cacheline.
2. The request is forwarded to CPU 0's and 1's interconnect, which checks CPU 1's local cache, and does not find the cacheline.
3. The request is forwarded to the system interconnect, which checks with the other three dies, learning that the cacheline is held by the die containing CPU 6 and 7.
4. The request is forwarded to CPU 6's and 7's interconnect, which checks both CPUs' caches, finding the value in CPU 7's cache.
5. CPU 7 forwards the cacheline to its interconnect, and also flushes the cacheline from its cache.
6. CPU 6's and 7's interconnect forwards the cacheline to the system interconnect.
7. The system interconnect forwards the cacheline to CPU 0's and 1's interconnect.
8. CPU 0's and 1's interconnect forwards the cacheline to CPU 0's cache.
9. CPU 0 can now perform the CAS operation on the value in its cache.

Quick Quiz 2.4: This is a simplified sequence of events? How could it possibly be any more complex?

Quick Quiz 2.5: Why is it necessary to flush the cacheline from CPU 7's cache?

2.2.2 Costs of Operations

The overheads of some common operations important to parallel programs are displayed in Table 2.1. This system's clock period rounds to 0.6ns.
Although it is not unusual for modern microprocessors to be able to retire multiple instructions per clock period, the operations will be normalized to a full clock period in the third column, labeled "Ratio".

Figure 2.9: System Hardware Architecture (eight CPUs on four two-CPU dies, each CPU with its own cache, joined by a system interconnect to memory; the speed-of-light round-trip distance in vacuum for a 1.8GHz clock period is 8cm)

Table 2.1: Performance of Synchronization Mechanisms on 4-CPU 1.8GHz AMD Opteron 844 System

  Operation              Cost (ns)        Ratio
  Clock period                 0.6          1.0
  Best-case CAS               37.9         63.2
  Best-case lock              65.6        109.3
  Single cache miss          139.5        232.5
  CAS cache miss             306.0        510.0
  Comms Fabric             3,000        5,000
  Global Comms       130,000,000  216,000,000

The first thing to note about this table is the large values of many of the ratios. The best-case CAS operation consumes almost forty nanoseconds, a duration more than sixty times that of the clock period. Here, "best case" means that the same CPU now performing the CAS operation on a given variable was the last CPU to operate on this variable, so that the corresponding cache line is already held in that CPU's cache. Similarly, the best-case lock operation (a "round trip" pair consisting of a lock acquisition followed by a lock release) consumes more than sixty nanoseconds, or more than one hundred clock cycles. Again, "best case" means that the data structure representing the lock is already in the cache belonging to the CPU acquiring and releasing the lock. The lock operation is more expensive than CAS because it requires two atomic operations on the lock data structure.

An operation that misses the cache consumes almost one hundred and forty nanoseconds, or more than two hundred clock cycles. The code used for this cache-miss measurement passes the cache line back and forth between a pair of CPUs, so this cache miss is satisfied not from memory, but rather from the other CPU's cache. A CAS operation, which must look at the old value of the variable as well as store a new value, consumes over three hundred nanoseconds, or more than five hundred clock cycles. Think about this a bit. In the time required to do one CAS operation, the CPU could have executed more than five hundred normal instructions. This should demonstrate the limitations not only of fine-grained locking, but of any other synchronization mechanism relying on fine-grained global agreement.

Quick Quiz 2.6: Surely the hardware designers could be persuaded to improve this situation! Why have they been content with such abysmal performance for these single-instruction operations?

Figure 2.10: Hardware and Software: On Same Side

I/O operations are even more expensive. A high performance (and expensive!) communications fabric, such as InfiniBand or any number of proprietary interconnects, has a latency of roughly three microseconds, during which time five thousand instructions might have been executed. Standards-based communications networks often require some sort of protocol processing, which further increases the latency. Of course, geographic distance also increases latency, with the theoretical speed-of-light latency around the world coming to roughly 130 milliseconds, or more than 200 million clock cycles.

Quick Quiz 2.7: These numbers are insanely large! How can I possibly get my head around them?
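One way to get a feel for these numbers is to measure a few of them on your own hardware. The following stand-alone C program is a minimal sketch, not part of the book's CodeSamples: it times a tight loop of best-case CAS operations on a variable that stays in the measuring CPU's cache and reports the per-operation cost in nanoseconds. The iteration count, the use of GCC's __sync builtins, and the timing approach are illustrative assumptions, and the result includes the loop overhead, so treat the output as a rough figure rather than a precise measurement.

  /* casbench.c: rough per-operation cost of a best-case CAS.
   * Illustrative sketch only.  Build with: gcc -O2 casbench.c -o casbench
   * (add -lrt on older glibc versions that keep clock_gettime() in librt).
   */
  #include <stdio.h>
  #include <time.h>

  #define NSAMPLES 100000000UL

  int main(void)
  {
          static unsigned long counter = 0;  /* stays in this CPU's cache: "best case" */
          struct timespec start, end;
          unsigned long i;
          double ns;

          clock_gettime(CLOCK_MONOTONIC, &start);
          for (i = 0; i < NSAMPLES; i++) {
                  /* GCC/Clang builtin compare-and-swap; every CAS succeeds here
                   * because counter always equals i at this point. */
                  (void)__sync_val_compare_and_swap(&counter, i, i + 1);
          }
          clock_gettime(CLOCK_MONOTONIC, &end);

          ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
          printf("best-case CAS: %.1f ns/op over %lu ops\n", ns / NSAMPLES, NSAMPLES);
          return 0;
  }

Numbers obtained this way vary from system to system, but they generally confirm the order-of-magnitude gap between the clock period and even the friendliest atomic operation.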
In short, hardware and software engineers are really fighting on the same side, trying to make computers go fast despite the best efforts of the laws of physics, as fancifully depicted in Figure  2.10  where our data stream is trying its best to exceed the speed of light. The next section discusses some of the things that the hardware engineers might (or might not) be able to do. Software’s contribution to this fight is outlined in the remaining chapters of this book. 2.3 Hardware Free Lunch? The major reason that concurrency has been receiving so much focus over the past few years is the end of Moore’s-Law induced single-threaded performance increases (or “free lunch” [ Sut08 ]), as shown in Figure  1.1  on page  4.  This section briefly surveys a few ways that hardware designers might be able to bring back some form of the “free lunch”. However, the preceding section presented some substantial hardware obstacles to exploiting concurrency. One severe physical limitation that hardware designers face is the finite speed of light. As noted in Figure  2.9  on page  23,  light can travel only about an 8-centimeters round trip in a vacuum during the duration of a 1.8 GHz clock period. This distance drops to about 3 centimeters for a 5 GHz clock. Both of these distances are relatively small compared to the size of a modern computer system. To make matters even worse, electrons in silicon move from three to thirty times more slowly than does light in a vacuum, and common clocked logic constructs run still more slowly, for example, a memory reference may need to wait for a local cache lookup to complete before the request may be passed on to the rest of the system. Furthermore, 24 1.5 cm 3 cm 70 um Figure 2.11: Latency Benefit of 3D Integration relatively low speed and high power drivers are required to move electrical signals from one silicon die to another, for example, to communicate between a CPU and main memory. Quick Quiz 2.8:  But individual electrons don’t move anywhere near that fast, even in conductors!!! The electron drift velocity in a conductor under the low voltages found in semiconductors is on the order of only one  millimeter   per second. What gives??? There are nevertheless some technologies (both hardware and software) that might help improve matters: 1. 3D integration, 2. Novel materials and processes, 3. Substituting light for electrons, 4. Special-purpose accelerators, and 5. Existing parallel software. Each of these is described in one of the following sections. 2.3.1 3D Integration 3-dimensional integration (3DI) is the practice of bonding very thin silicon dies to each other in a vertical stack. This practice provides potential benefits, but also poses significant fabrication challenges [ Kni08] . Perhaps the most important benefit of 3DI is decreased path length through the system, as shown in Figure  2.11 . A 3-centimeter silicon die is replaced with a stack of  four 1.5-centimeter dies, in theory decreasing the maximum path through the system by a factor of two, keeping in mind that each layer is quite thin. In addition, given proper attention to design and placement, long horizontal electrical connections (which are both slow and power hungry) can be replaced by short vertical electrical connections, which are both faster and more power efficient. 
However, delays due to levels of clocked logic will not be decreased by 3D in- tegration, and significant manufacturing, testing, power-supply, and heat-dissipation problems must be solved for 3D integration to reach production while still delivering on its promise. The heat-dissipation problems might be solved using semiconductors based on diamond, which is a good conductor for heat, but an electrical insulator. That said, it remains difficult to grow large single diamond crystals, to say nothing of slicing them 25 into wafers. In addition, it seems unlikely that any of these technologies will be able to deliver the exponential increases to which some people have become accustomed. That said, they may be necessary steps on the path to the late Jim Gray’s “smoking hairy golf  balls”  [Gra02 ]. 2.3.2 Novel Materials and Processes Stephen Hawking is said to have claimed that semiconductor manufacturers have but two fundamental problems: (1) the finite speed of light and (2) the atomic nature of  matter [ Gar07 ]. It is possible that semiconductor manufacturers are approaching these limits, but there are nevertheless a few avenues of research and development focused on working around these fundamental limits. One workaround for the atomic nature of matter are so-called “high-K dielectric” materials, which allow larger devices to mimic the electrical properties of infeasibly small devices. These materials pose some severe fabrication challenges, but nevertheless may help push the frontiers out a bit farther. Another more-exotic workaround stores multiple bits in a single electron, relying on the fact that a given electron can exist at a number of energy levels. It remains to be seen if this particular approach can be made to work reliably in production semiconductor devices. Another proposed workaround is the “quantum dot” approach that allows much smaller device sizes, but which is still in the research stage. 2.3.3 Light, Not Electrons Although the speed of light would be a hard limit, the fact is that semiconductor devices are limited by the speed of electrons rather than that of light, given that electrons in semiconductor materials move at between 3% and 30% of the speed of light in a vacuum. The use of copper connections on silicon devices is one way to increase the speed of  electrons, and it is quite possible that additional advances will push closer still to the actual speed of light. In addition, there have been some experiments with tiny optical fibers as interconnects within and between chips, based on the fact that the speed of  light in glass is more than 60% of the speed of light in a vacuum. One obstacle to such optical fibers is the inefficiency conversion between electricity and light and vice versa, resulting in both power-consumption and heat-dissipation problems. That said, absent some fundamental advances in the field of physics, any exponential increases in the speed of data flow will be sharply limited by the actual speed of light in a vacuum. 2.3.4 Special-Purpose Accelerators A general-purpose CPU working on a specialized problem is often spending significant time and energy doing work that is only tangentially related to the problem at hand. For example, when taking the dot product of a pair of vectors, a general-purpose CPU will normally use a loop (possibly unrolled) with a loop counter. 
Decoding the instructions, incrementing the loop counter, testing this counter, and branching back to the top of the loop are in some sense wasted effort: the real goal is instead to multiply corresponding elements of the two vectors. Therefore, a specialized piece of hardware designed specifically to multiply vectors could get the job done more quickly and with less energy consumed. 26 This is in fact the motivation for the vector instructions present in many commodity microprocessors. Because these instructions operate on multiple data items simultane- ously, they would permit a dot product to be computed with less instruction-decode and loop overhead. Similarly, specialized hardware can more efficiently encrypt and decrypt, compress and decompress, encode and decode, and many other tasks besides. Unfortunately, this efficiency does not come for free. A computer system incorporating this specialized hardware will contain more transistors, which will consume some power even when not in use. Software must be modified to take advantage of this specialized hardware, and this specialized hardware must be sufficiently generally useful that the high up-front hardware-design costs can be spread over enough users to make the specialized hardware affordable. In part due to these sorts of economic considerations, specialized hardware has thus far appeared only for a few application areas, including graphics processing (GPUs), vector processors (MMX, SSE, and VMX instructions), and, to a lesser extent, encryption. Unlike the server and PC arena, smartphones have long used a wide variety of  hardware accelerators. These hardware accelerators are often used for media decoding, so much so that a high-end MP3 player might be able to play audio for several minutes— with its CPU fully powered off the entire time. The purpose of these accelerators is to improve energy efficiency and thus extend battery life: special purpose hardware can often compute more efficiently than can a general-purpose CPU. This is another example of the principle called out in Section  1.2.3:  Generality is almost never free. Nevertheless, given the end of Moore’s-Law-induced single-threaded performance increases, it seems safe to predict that there will be an increasing variety of special- purpose hardware going forward. 2.3.5 Existing Parallel Software Although multicore CPUs seem to have taken the computing industry by surprise, the fact remains that shared-memory parallel computer systems have been commercially available for more than a quarter century. This is more than enough time for significant parallel software to make its appearance, and it indeed has. Parallel operating systems are quite commonplace, as are parallel threading libraries, parallel relational database management systems, and parallel numerical software. Use of existing parallel software can go a long ways towards solving any parallel-software crisis we might encounter. Perhaps the most common example is the parallel relational database management system. It is not unusual for single-threaded programs, often written in high-level scripting languages, to access a central relational database concurrently. In the resulting highly parallel system, only the database need actually deal directly with parallelism. A very nice trick when it works! 2.4 Software Design Implications The values of the ratios in Table  2.1  are critically important, as they limit the efficiency of a given parallel application. 
To see this, suppose that the parallel application uses CAS operations to communicate among threads. These CAS operations will typically involve a cache miss, that is, assuming that the threads are communicating primarily with each other rather than with themselves. Suppose further that the unit of work corresponding to each CAS communication operation takes 300ns, which is sufficient time to compute several floating-point transcendental functions. Then about half of the execution time will be consumed by the CAS communication operations! This in turn means that a two-CPU system running such a parallel program would run no faster than a sequential implementation running on a single CPU.

The situation is even worse in the distributed-system case, where a single communications operation might take as long as thousands or even millions of floating-point operations. This illustrates how important it is for communications operations to be extremely infrequent and to enable very large quantities of processing.

Quick Quiz 2.9: Given that distributed-systems communication is so horribly expensive, why does anyone bother with them?

The lesson should be quite clear: parallel algorithms must be explicitly designed to run nearly independent threads. The less frequently the threads communicate, whether by atomic operations, locks, or explicit messages, the better the application's performance and scalability will be. In short, achieving excellent parallel performance and scalability means striving for embarrassingly parallel algorithms and implementations, whether by careful choice of data structures and algorithms, use of existing parallel applications and environments, or transforming the problem into one for which an embarrassingly parallel solution exists.

Quick Quiz 2.10: OK, if we are going to have to apply distributed-programming techniques to shared-memory parallel programs, why not just always use these distributed techniques and dispense with shared memory?

So, to sum up:

1. The good news is that multicore systems are inexpensive and readily available.
2. More good news: The overhead of many synchronization operations is much lower than it was on parallel systems from the early 2000s.
3. The bad news is that the overhead of cache misses is still high, especially on large systems.

The remainder of this book describes ways of handling this bad news. Chapter 3 will cover some of the low-level tools used for parallel programming, Chapter 4 will investigate problems and solutions to parallel counting, and Chapter 5 will discuss design disciplines that promote performance and scalability.

Chapter 3 Tools of the Trade

This chapter provides a brief introduction to some basic tools of the parallel-programming trade, focusing mainly on those available to user applications running on operating systems similar to Linux. Section 3.1 begins with scripting languages, Section 3.2 describes the multi-process parallelism supported by the POSIX API and touches on POSIX threads, and finally, Section 3.3 describes atomic operations.

Please note that this chapter provides but a brief introduction. More detail is available from the references cited, and more information on how best to use these tools will be provided in later chapters.

3.1 Scripting Languages

The Linux shell scripting languages provide simple but effective ways of managing parallelism. For example, suppose that you had a program compute_it that you needed to run twice with two different sets of arguments.
This can be accomplished using UNIX shell scripting as follows:

  1 compute_it 1 > compute_it.1.out &
  2 compute_it 2 > compute_it.2.out &
  3 wait
  4 cat compute_it.1.out
  5 cat compute_it.2.out

Lines 1 and 2 launch two instances of this program, redirecting their output to two separate files, with the & character directing the shell to run the two instances of the program in the background. Line 3 waits for both instances to complete, and lines 4 and 5 display their output. The resulting execution is as shown in Figure 3.1: the two instances of compute_it execute in parallel, wait completes after both of them do, and then the two instances of cat execute sequentially.

Figure 3.1: Execution Diagram for Parallel Shell Execution

Quick Quiz 3.1: But this silly shell script isn't a real parallel program! Why bother with such trivia???

Quick Quiz 3.2: Is there a simpler way to create a parallel shell script? If so, how? If not, why not?

For another example, the make software-build scripting language provides a -j option that specifies how much parallelism should be introduced into the build process. For example, typing make -j4 when building a Linux kernel specifies that up to four parallel compiles be carried out concurrently.

It is hoped that these simple examples convince you that parallel programming need not always be complex or difficult.

Quick Quiz 3.3: But if script-based parallel programming is so easy, why bother with anything else?

3.2 POSIX Multiprocessing

This section scratches the surface of the POSIX environment, including pthreads [Ope97], as this environment is readily available and widely implemented. Section 3.2.1 provides a glimpse of the POSIX fork() and related primitives, Section 3.2.2 touches on thread creation and destruction, Section 3.2.3 gives a brief overview of POSIX locking, and, finally, Section 3.4 presents the analogous operations within the Linux kernel.

3.2.1 POSIX Process Creation and Destruction

Processes are created using the fork() primitive, may be destroyed using the kill() primitive, and may destroy themselves using the exit() primitive. A process executing a fork() primitive is said to be the "parent" of the newly created process. A parent may wait on its children using the wait() primitive.

Please note that the examples in this section are quite simple. Real-world applications using these primitives might need to manipulate signals, file descriptors, shared memory segments, and any number of other resources. In addition, some applications need to take specific actions if a given child terminates, and might also need to be concerned with the reason that the child terminated. These concerns can of course add substantial complexity to the code. For more information, see any of a number of textbooks on the subject [Ste92].

If fork() succeeds, it returns twice, once for the parent and again for the child. The value returned from fork() allows the caller to tell the difference, as shown in Figure 3.2 (forkjoin.c). Line 1 executes the fork() primitive, and saves its return value in local variable pid. Line 2 checks to see if pid is zero, in which case, this is the child, which continues on to execute line 3.
As noted earlier, the child may terminate via the exit() primitive. Otherwise, this is the parent, which checks for an error return from the fork() primitive on line 4, and prints an error and exits on lines 5-7 if so. Otherwise, the fork() has executed successfully, and the parent therefore executes line 9 with the variable pid containing the process ID of the child.

Figure 3.2: Using the fork() Primitive
   1 pid = fork();
   2 if (pid == 0) {
   3   /* child */
   4 } else if (pid < 0) {
   5   /* parent, upon error */
   6   perror("fork");
   7   exit(-1);
   8 } else {
   9   /* parent, pid == child ID */
  10 }

The parent process may use the wait() primitive to wait for its children to complete. However, use of this primitive is a bit more complicated than its shell-script counterpart, as each invocation of wait() waits for but one child process. It is therefore customary to wrap wait() into a function similar to the waitall() function shown in Figure 3.3 (api-pthread.h), with this waitall() function having semantics similar to the shell-script wait command.

Figure 3.3: Using the wait() Primitive
   1 void waitall(void)
   2 {
   3   int pid;
   4   int status;
   5
   6   for (;;) {
   7     pid = wait(&status);
   8     if (pid == -1) {
   9       if (errno == ECHILD)
  10         break;
  11       perror("wait");
  12       exit(-1);
  13     }
  14   }
  15 }

Each pass through the loop spanning lines 6-15 waits on one child process. Line 7 invokes the wait() primitive, which blocks until a child process exits, and returns that child's process ID. If the process ID is instead -1, this indicates that the wait() primitive was unable to wait on a child. If so, line 9 checks for the ECHILD errno, which indicates that there are no more child processes, so that line 10 exits the loop. Otherwise, lines 11 and 12 print an error and exit.

Quick Quiz 3.4: Why does this wait() primitive need to be so complicated? Why not just make it work like the shell-script wait does?

It is critically important to note that the parent and child do not share memory. This is illustrated by the program shown in Figure 3.4 (forkjoinvar.c), in which the child sets a global variable x to 1 on line 6, prints a message on line 7, and exits on line 8. The parent continues at line 14, where it waits on the child, and on line 15 finds that its copy of the variable x is still zero.

Figure 3.4: Processes Created Via fork() Do Not Share Memory
   1 int x = 0;
   2 int pid;
   3
   4 pid = fork();
   5 if (pid == 0) { /* child */
   6   x = 1;
   7   printf("Child process set x=1\n");
   8   exit(0);
   9 }
  10 if (pid < 0) { /* parent, upon error */
  11   perror("fork");
  12   exit(-1);
  13 }
  14 waitall();
  15 printf("Parent process sees x=%d\n", x);

The output is thus as follows:

Child process set x=1
Parent process sees x=0

Figure 3.5: Threads Created Via pthread_create() Share Memory
   1 int x = 0;
   2
   3 void *mythread(void *arg)
   4 {
   5   x = 1;
   6   printf("Child process set x=1\n");
   7   return NULL;
   8 }
   9
  10 int main(int argc, char *argv[])
  11 {
  12   pthread_t tid;
  13   void *vp;
  14
  15   if (pthread_create(&tid, NULL,
  16                      mythread, NULL) != 0) {
  17     perror("pthread_create");
  18     exit(-1);
  19   }
  20   if (pthread_join(tid, &vp) != 0) {
  21     perror("pthread_join");
  22     exit(-1);
  23   }
  24   printf("Parent process sees x=%d\n", x);
  25   return 0;
  26 }

Quick Quiz 3.5: Isn't there a lot more to fork() and wait() than discussed here?

The finest-grained parallelism requires shared memory, and this is covered in Section 3.2.2. That said, shared-memory parallelism can be significantly more complex than fork-join parallelism.
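Strictly speaking, processes created via fork() can be made to share memory, but only by asking for it explicitly, for example, by setting up a MAP_SHARED mapping before the fork. The following sketch is not from the book's CodeSamples; it is an illustrative variant of the Figure 3.4 example in which the child's store becomes visible to the parent through an anonymous shared mapping:

  /* forkshared.c: sharing memory across fork() via an anonymous
   * MAP_SHARED mapping.  Illustrative sketch, not from CodeSamples.
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/mman.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
          int *xp;
          int pid;

          /* One shared int, visible to both parent and child. */
          xp = mmap(NULL, sizeof(*xp), PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_ANONYMOUS, -1, 0);
          if (xp == MAP_FAILED) {
                  perror("mmap");
                  exit(-1);
          }
          *xp = 0;

          pid = fork();
          if (pid == 0) {                 /* child */
                  *xp = 1;
                  printf("Child process set x=1\n");
                  exit(0);
          }
          if (pid < 0) {                  /* parent, upon error */
                  perror("fork");
                  exit(-1);
          }
          wait(NULL);                     /* parent: wait for the child */
          printf("Parent process sees x=%d\n", *xp);
          return 0;
  }

Unlike the global variable in Figure 3.4, the child's store is visible to the parent here, so the second line of output reads "Parent process sees x=1". That said, applications wanting this degree of sharing more often use threads, as described next.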
3.2.2 POSIX Thread Creation and Destruction To create a thread within an existing process, invoke the  pthread_create()  primi- tive, for example, as shown on lines 15 and 16 of Figure  3.5  ( pcreate.c ). The first argument is a pointer to a  pthread_t  in which to store the ID of the thread to be created, the second  NULL  argument is a pointer to an optional  pthread_attr_t , the third argument is the function (in this case,  mythread()  that is to be invoked 32 by the new thread, and the last  NULL  argument is the argument that will be passed to mythread . In this example,  mythread()  simply returns, but it could instead call  pthread_  exit() . Quick Quiz 3.6:  If the  mythread()  function in Figure  3.5  can simply return, why bother with  pthread_exit() ? The  pthread_join()  primitive, shown on line 20, is analogous to the fork-join wait()  primitive. It blocks until the thread specified by the  tid  variable completes execution, either by invoking  pthread_exit()  or by returning from the thread’s top-level function. The thread’s exit value will be stored through the pointer passed as the second argument to  pthread_join() . The thread’s exit value is either the value passed to  pthread_exit()  or the value returned by the thread’s top-level function, depending on how the thread in question exits. The program shown in Figure  3.5  produces output as follows, demonstrating that memory is in fact shared between the two threads: Child process set x=1 Parent process sees x=1 Note that this program carefully makes sure that only one of the threads stores a value to variable  x  at a time. Any situation in which one thread might be storing a value to a given variable while some other thread either loads from or stores to that same variable is termed a “data race”. Because the C language makes no guarantee that the results of a data race will be in any way reasonable, we need some way of safely accessing and modifying data concurrently, such as the locking primitives discussed in the following section. Quick Quiz 3.7:  If the C language makes no guarantees in presence of a data race, then why does the Linux kernel have so many data races? Are you trying to tell me that the Linux kernel is completely broken??? 3.2.3 POSIX Locking The POSIX standard allows the programmer to avoid data races via “POSIX locking”. POSIX locking features a number of primitives, the most fundamental of which are pthread_mutex_lock()  and  pthread_mutex_unlock() . These primitives operate on locks, which are of type pthread_mutex_t . These locks may be declared statically and initialized with  PTHREAD_MUTEX_INITIALIZER , or they may be allocated dynamically and initialized using the  pthread_mutex_init()  primitive. The demonstration code in this section will take the former course. The  pthread_mutex_lock()  primitive “acquires” the specified lock, and the pthread_mutex_unlock()  “releases” the specified lock. Because these are “ex- clusive” locking primitives, only one thread at a time may “hold” a given lock at a given time. For example, if a pair of threads attempt to acquire the same lock concurrently, one of the pair will be “granted” the lock first, and the other will wait until the first thread releases the lock. Quick Quiz 3.8:  What if I want several threads to hold the same lock at the same time? This exclusive-locking property is demonstrated using the code shown in Figure  3.6 ( lock.c ). 
Line 1 defines and initializes a POSIX lock named  lock_a , while line 2 similarly defines and initializes a lock named  lock_b . Line 3 defines and initializes a shared variable  x . 33 1 pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; 2 pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; 3 int x = 0; 4 5 void  * lock_reader(void  * arg) 6 { 7 int i; 8 int newx = -1; 9 int oldx = -1; 10 pthread_mutex_t  * pmlp = (pthread_mutex_t  * )arg; 11 12 if (pthread_mutex_lock(pmlp) != 0) { 13 perror("lock_reader:pthread_mutex_lock"); 14 exit(-1); 15 } 16 for (i = 0; i < 100; i++) { 17 newx = ACCESS_ONCE(x); 18 if (newx != oldx) { 19 printf("lock_reader(): x = %d", newx); 20 } 21 oldx = newx; 22 poll(NULL, 0, 1); 23 } 24 if (pthread_mutex_unlock(pmlp) != 0) { 25 perror("lock_reader:pthread_mutex_unlock"); 26 exit(-1); 27 } 28 return NULL; 29 } 30 31 void  * lock_writer(void  * arg) 32 { 33 int i; 34 pthread_mutex_t  * pmlp = (pthread_mutex_t  * )arg; 35 36 if (pthread_mutex_lock(pmlp) != 0) { 37 perror("lock_reader:pthread_mutex_lock"); 38 exit(-1); 39 } 40 for (i = 0; i < 3; i++) { 41 ACCESS_ONCE(x)++; 42 poll(NULL, 0, 5); 43 } 44 if (pthread_mutex_unlock(pmlp) != 0) { 45 perror("lock_reader:pthread_mutex_unlock"); 46 exit(-1); 47 } 48 return NULL; 49 } Figure 3.6: Demonstration of Exclusive Locks 34 1 printf("Creating two threads using same lock:"); 2 if (pthread_create(&tid1, NULL, 3 lock_reader, &lock_a) != 0) { 4 perror("pthread_create"); 5 exit(-1); 6 } 7 if (pthread_create(&tid2, NULL, 8 lock_writer, &lock_a) != 0) { 9 perror("pthread_create"); 10 exit(-1); 11 } 12 if (pthread_join(tid1, &vp) != 0) { 13 perror("pthread_join"); 14 exit(-1); 15 } 16 if (pthread_join(tid2, &vp) != 0) { 17 perror("pthread_join"); 18 exit(-1); 19 } Figure 3.7: Demonstration of Same Exclusive Lock Lines 5-28 defines a function  lock_reader()  which repeatedly reads the shared variable  x  while holding the lock specified by  arg . Line 10 casts  arg  to a pointer to a pthread_mutex_t , asrequiredbythe pthread_mutex_lock() and pthread_  mutex_unlock()  primitives. Quick Quiz 3.9:  Why not simply make the argument to  lock_reader()  on line 5 of Figure  3.6  be a pointer to a  pthread_mutex_t ? Lines 12-15 acquire the specified  pthread_mutex_t , checking for errors and exiting the program if any occur. Lines 16-23 repeatedly check the value of   x , printing the new value each time that it changes. Line 22 sleeps for one millisecond, which allows this demonstration to run nicely on a uniprocessor machine. Line 24-27 release the  pthread_mutex_t , again checking for errors and exiting the program if any occur. Finally, line 28 returns  NULL , again to match the function type required by pthread_create() . Quick Quiz 3.10:  Writing four lines of code for each acquisition and release of a pthread_mutex_t  sure seems painful! Isn’t there a better way? Lines 31-49 of Figure  3.6  shows  lock_writer() , which periodically update the shared variable  x  while holding the specified  pthread_mutex_t . As with lock_reader() , line 34 casts  arg  to a pointer to  pthread_mutex_t , lines 36- 39 acquires the specified lock, and lines 44-47 releases it. While holding the lock, lines 40-43 increment the shared variable x , sleeping for five milliseconds between each increment. Finally, lines 44-47 release the lock. Figure 3.7 showsacodefragmentthatruns lock_reader() and lock_writer() as thread using the same lock, namely,  lock_a . 
Lines 2-6 create a thread running lock_reader(), and then lines 7-11 create a thread running lock_writer(). Lines 12-19 wait for both threads to complete. The output of this code fragment is as follows:

Creating two threads using same lock:
lock_reader(): x = 0

Because both threads are using the same lock, the lock_reader() thread cannot see any of the intermediate values of x produced by lock_writer() while holding the lock.

Quick Quiz 3.11: Is "x = 0" the only possible output from the code fragment shown in Figure 3.7? If so, why? If not, what other output could appear, and why?

Figure 3.8 shows a similar code fragment, but this time using different locks: lock_a for lock_reader() and lock_b for lock_writer().

  1 printf("Creating two threads w/different locks:\n");
  2 x = 0;
  3 if (pthread_create(&tid1, NULL,
  4                    lock_reader, &lock_a) != 0) {
  5   perror("pthread_create");
  6   exit(-1);
  7 }
  8 if (pthread_create(&tid2, NULL,
  9                    lock_writer, &lock_b) != 0) {
 10   perror("pthread_create");
 11   exit(-1);
 12 }
 13 if (pthread_join(tid1, &vp) != 0) {
 14   perror("pthread_join");
 15   exit(-1);
 16 }
 17 if (pthread_join(tid2, &vp) != 0) {
 18   perror("pthread_join");
 19   exit(-1);
 20 }

Figure 3.8: Demonstration of Different Exclusive Locks

The output of this code fragment is as follows:

Creating two threads w/different locks:
lock_reader(): x = 0
lock_reader(): x = 1
lock_reader(): x = 2
lock_reader(): x = 3

Because the two threads are using different locks, they do not exclude each other, and can run concurrently. The lock_reader() function can therefore see the intermediate values of x stored by lock_writer().

Quick Quiz 3.12: Using different locks could cause quite a bit of confusion, what with threads seeing each others' intermediate states. So should well-written parallel programs restrict themselves to using a single lock in order to avoid this kind of confusion?

Quick Quiz 3.13: In the code shown in Figure 3.8, is lock_reader() guaranteed to see all the values produced by lock_writer()? Why or why not?

Quick Quiz 3.14: Wait a minute here!!! Figure 3.7 didn't initialize shared variable x, so why does it need to be initialized in Figure 3.8?

Although there is quite a bit more to POSIX exclusive locking, these primitives provide a good start and are in fact sufficient in a great many situations. The next section takes a brief look at POSIX reader-writer locking.

3.2.4 POSIX Reader-Writer Locking

The POSIX API provides a reader-writer lock, which is represented by a pthread_rwlock_t. As with pthread_mutex_t, pthread_rwlock_t may be statically initialized via PTHREAD_RWLOCK_INITIALIZER or dynamically initialized via the pthread_rwlock_init() primitive.
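For example, a minimal hedged sketch (not taken from the book's CodeSamples; names invented for illustration) of the two initialization styles might look like this:

  #include <pthread.h>
  #include <stdlib.h>

  /* Static initialization. */
  pthread_rwlock_t rwl_static = PTHREAD_RWLOCK_INITIALIZER;

  /* Dynamic initialization, for example, for a lock embedded in a
     heap-allocated structure.  The second argument is an optional
     pthread_rwlockattr_t; NULL selects the default attributes. */
  pthread_rwlock_t rwl_dynamic;

  void init_rwl_dynamic(void)
  {
          if (pthread_rwlock_init(&rwl_dynamic, NULL) != 0)
                  abort();
  }

Either form of lock may then be passed to the read-acquisition and write-acquisition primitives described next.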
The pthread_rwlock_rdlock() primitive read-acquires the specified pthread_rwlock_t, the pthread_rwlock_wrlock() primitive write-acquires it, and the pthread_rwlock_unlock() primitive releases it. Only a single thread may write-hold a given pthread_rwlock_t at any given time, but multiple threads may read-hold a given pthread_rwlock_t, at least while there is no thread currently write-holding it.

As you might expect, reader-writer locks are designed for read-mostly situations. In these situations, a reader-writer lock can provide greater scalability than can an exclusive lock because the exclusive lock is by definition limited to a single thread holding the lock at any given time, while the reader-writer lock permits an arbitrarily large number of readers to concurrently hold the lock. However, in practice, we need to know how much additional scalability is provided by reader-writer locks.

Figure 3.9 (rwlockscale.c) shows one way of measuring reader-writer lock scalability.

  1 pthread_rwlock_t rwl = PTHREAD_RWLOCK_INITIALIZER;
  2 int holdtime = 0;
  3 int thinktime = 0;
  4 long long *readcounts;
  5 int nreadersrunning = 0;
  6
  7 #define GOFLAG_INIT 0
  8 #define GOFLAG_RUN 1
  9 #define GOFLAG_STOP 2
 10 char goflag = GOFLAG_INIT;
 11
 12 void *reader(void *arg)
 13 {
 14   int i;
 15   long long loopcnt = 0;
 16   long me = (long)arg;
 17
 18   __sync_fetch_and_add(&nreadersrunning, 1);
 19   while (ACCESS_ONCE(goflag) == GOFLAG_INIT) {
 20     continue;
 21   }
 22   while (ACCESS_ONCE(goflag) == GOFLAG_RUN) {
 23     if (pthread_rwlock_rdlock(&rwl) != 0) {
 24       perror("pthread_rwlock_rdlock");
 25       exit(-1);
 26     }
 27     for (i = 1; i < holdtime; i++) {
 28       barrier();
 29     }
 30     if (pthread_rwlock_unlock(&rwl) != 0) {
 31       perror("pthread_rwlock_unlock");
 32       exit(-1);
 33     }
 34     for (i = 1; i < thinktime; i++) {
 35       barrier();
 36     }
 37     loopcnt++;
 38   }
 39   readcounts[me] = loopcnt;
 40   return NULL;
 41 }

Figure 3.9: Measuring Reader-Writer Lock Scalability

Line 1 shows the definition and initialization of the reader-writer lock, line 2 shows the holdtime argument controlling the time each thread holds the reader-writer lock, line 3 shows the thinktime argument controlling the time between the release of the reader-writer lock and the next acquisition, line 4 defines the readcounts array into which each reader thread places the number of times it acquired the lock, and line 5 defines the nreadersrunning variable, which determines when all reader threads have started running.

Lines 7-10 define goflag, which synchronizes the start and the end of the test. This variable is initially set to GOFLAG_INIT, then set to GOFLAG_RUN after all the reader threads have started, and finally set to GOFLAG_STOP to terminate the test run.

[Figure 3.10: Reader-Writer Lock Scalability — critical-section performance (normalized to ideal) versus number of CPUs (threads), with traces for the ideal case and for holdtime values of 1K, 10K, 100K, 1M, 10M, and 100M.]

Lines 12-41 define reader(), which is the reader thread. Line 18 atomically increments the nreadersrunning variable to indicate that this thread is now running, and lines 19-21 wait for the test to start. The ACCESS_ONCE() primitive forces the compiler to fetch goflag on each pass through the loop; the compiler would otherwise be within its rights to assume that the value of goflag would never change.

Quick Quiz 3.15: Instead of using ACCESS_ONCE() everywhere, why not just declare goflag as volatile on line 10 of Figure 3.9?
Quick Quiz 3.16: ACCESS_ONCE() only affects the compiler, not the CPU. Don't we also need memory barriers to make sure that the change in goflag's value propagates to the CPU in a timely fashion in Figure 3.9?

Quick Quiz 3.17: Would it ever be necessary to use ACCESS_ONCE() when accessing a per-thread variable, for example, a variable declared using the gcc __thread storage class?

The loop spanning lines 22-38 carries out the performance test. Lines 23-26 acquire the lock, lines 27-29 hold the lock for the specified duration (and the barrier() directive prevents the compiler from optimizing the loop out of existence), lines 30-33 release the lock, and lines 34-36 wait for the specified duration before re-acquiring the lock. Line 37 counts this lock acquisition.

Line 39 moves the lock-acquisition count to this thread's element of the readcounts[] array, and line 40 returns, terminating this thread.

Figure 3.10 shows the results of running this test on a 64-core Power-5 system with two hardware threads per core for a total of 128 software-visible CPUs. The thinktime parameter was zero for all these tests, and the holdtime parameter was set to values ranging from one thousand ("1K" on the graph) to 100 million ("100M" on the graph). The actual value plotted is:

    L_N / (N L_1)                                                  (3.1)

where N is the number of threads, L_N is the number of lock acquisitions by N threads, and L_1 is the number of lock acquisitions by a single thread. Given ideal hardware and software scalability, this value will always be 1.0.

As can be seen in the figure, reader-writer locking scalability is decidedly non-ideal, especially for smaller sizes of critical sections. To see why read-acquisition can be so slow, consider that all the acquiring threads must update the pthread_rwlock_t data structure. Therefore, if all 128 executing threads attempt to read-acquire the reader-writer lock concurrently, they must update this underlying pthread_rwlock_t one at a time. One lucky thread might do so almost immediately, but the least-lucky thread must wait for all the other 127 threads to do their updates. This situation will only get worse as you add CPUs.

Quick Quiz 3.18: Isn't comparing against single-CPU throughput a bit harsh?

Quick Quiz 3.19: But 1,000 instructions is not a particularly small size for a critical section. What do I do if I need a much smaller critical section, for example, one containing only a few tens of instructions?

Quick Quiz 3.20: In Figure 3.10, all of the traces other than the 100M trace deviate gently from the ideal line. In contrast, the 100M trace breaks sharply from the ideal line at 64 CPUs. In addition, the spacing between the 100M trace and the 10M trace is much smaller than that between the 10M trace and the 1M trace. Why does the 100M trace behave so much differently than the other traces?

Quick Quiz 3.21: Power-5 is several years old, and new hardware should be faster. So why should anyone worry about reader-writer locks being slow?

Despite these limitations, reader-writer locking is quite useful in many cases, for example when the readers must do high-latency file or network I/O. There are alternatives, some of which will be presented in Chapters 4 and 8.

3.3 Atomic Operations

Given that Figure 3.10 shows that the overhead of reader-writer locking is most severe for the smallest critical sections, it would be nice to have some other way to protect the tiniest of critical sections. One such way is to use atomic operations.
We have seen one atomic operation already, in the form of the __sync_fetch_and_add() primitive on line 18 of Figure 3.9. This primitive atomically adds the value of its second argument to the value referenced by its first argument, returning the old value (which was ignored in this case). If a pair of threads concurrently execute __sync_fetch_and_add() on the same variable, the resulting value of the variable will include the result of both additions.

The gcc compiler offers a number of additional atomic operations, including __sync_fetch_and_sub(), __sync_fetch_and_or(), __sync_fetch_and_and(), __sync_fetch_and_xor(), and __sync_fetch_and_nand(), all of which return the old value. If you instead need the new value, you can use the __sync_add_and_fetch(), __sync_sub_and_fetch(), __sync_or_and_fetch(), __sync_and_and_fetch(), __sync_xor_and_fetch(), and __sync_nand_and_fetch() primitives.

Quick Quiz 3.22: Is it really necessary to have both sets of primitives?

The classic compare-and-swap operation is provided by a pair of primitives, __sync_bool_compare_and_swap() and __sync_val_compare_and_swap(). Both of these primitives atomically update a location to a new value, but only if its prior value was equal to the specified old value. The first variant returns 1 if the operation succeeded and 0 if it failed, for example, if the prior value was not equal to the specified old value. The second variant returns the prior value of the location, which, if equal to the specified old value, indicates that the operation succeeded. Either of these compare-and-swap operations is "universal" in the sense that any atomic operation on a single location can be implemented in terms of compare-and-swap, though the earlier operations are often more efficient where they apply. The compare-and-swap operation is also capable of serving as the basis for a wider set of atomic operations, though the more elaborate of these often suffer from complexity, scalability, and performance problems [Her90].

The __sync_synchronize() primitive issues a "memory barrier", which constrains both the compiler's and the CPU's ability to reorder operations, as discussed in Section 13.2. In some cases, it is sufficient to constrain the compiler's ability to reorder operations, while allowing the CPU free rein, in which case the barrier() primitive may be used, as it in fact was on line 28 of Figure 3.9. In some cases, it is only necessary to ensure that the compiler avoids optimizing away a given memory access, in which case the ACCESS_ONCE() primitive may be used, as it was on line 17 of Figure 3.6. These last two primitives are not provided directly by gcc, but may be implemented straightforwardly as follows:

  #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
  #define barrier() __asm__ __volatile__("": : :"memory")

Quick Quiz 3.23: Given that these atomic operations will often be able to generate single atomic instructions that are directly supported by the underlying instruction set, shouldn't they be the fastest possible way to get things done?

3.4 Linux-Kernel Equivalents to POSIX Operations

Unfortunately, threading operations, locking primitives, and atomic operations were in reasonably wide use long before the various standards committees got around to them. As a result, there is considerable variation in how these operations are supported.
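As an illustration of the universality claim in the preceding section, here is a hedged sketch (not from the book's CodeSamples; the function and variable names are invented) of an "atomic maximum" operation built from __sync_val_compare_and_swap() and the ACCESS_ONCE() macro defined above:

  /* Atomically ensure that *loc is at least newval.  This is a sketch of the
     usual compare-and-swap retry loop: read the current value, compute the
     desired new value, and retry if some other thread changed *loc in the
     meantime. */
  static void atomic_max(int *loc, int newval)
  {
          int old;

          do {
                  old = ACCESS_ONCE(*loc);
                  if (old >= newval)
                          return;  /* Some thread already stored a value at least this large. */
          } while (__sync_val_compare_and_swap(loc, old, newval) != old);
  }

Any other single-location atomic operation can be constructed along these same retry-loop lines, although, as noted above, the dedicated __sync primitives are often more efficient where they apply.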
It is still quite common to find these operations implemented in assembly language, either for historical reasons or to obtain better performance in specialized circumstances. For example, the gcc __sync_ family of primitives all provide memory-ordering semantics, motivating many developers to create their own implementations for situations where the memory-ordering semantics are not required. Therefore, Table 3.1 on page 41 provides a rough mapping from the POSIX and gcc primitives to those used in the Linux kernel. Exact mappings are not always available; for example, the Linux kernel has a wide variety of locking primitives, while gcc has a number of atomic operations that are not directly available in the Linux kernel. Of course, on the one hand, user-level code does not need the Linux kernel's wide array of locking primitives, while on the other hand, gcc's atomic operations can be emulated reasonably straightforwardly using cmpxchg().

Quick Quiz 3.24: What happened to the Linux-kernel equivalents to fork() and join()?

Thread Management:
  pthread_t                       -> struct task_struct
  pthread_create()                -> kthread_create()
  pthread_exit()                  -> kthread_should_stop() (rough)
  pthread_join()                  -> kthread_stop() (rough)
  poll(NULL, 0, 5)                -> schedule_timeout_interruptible()

POSIX Locking:
  pthread_mutex_t                 -> spinlock_t (rough), struct mutex
  PTHREAD_MUTEX_INITIALIZER       -> DEFINE_SPINLOCK(), DEFINE_MUTEX()
  pthread_mutex_lock()            -> spin_lock() (and friends), mutex_lock() (and friends)
  pthread_mutex_unlock()          -> spin_unlock() (and friends), mutex_unlock()

POSIX Reader-Writer Locking:
  pthread_rwlock_t                -> rwlock_t (rough), struct rw_semaphore
  PTHREAD_RWLOCK_INITIALIZER      -> DEFINE_RWLOCK(), DECLARE_RWSEM()
  pthread_rwlock_rdlock()         -> read_lock() (and friends), down_read() (and friends)
  pthread_rwlock_unlock()         -> read_unlock() (and friends), up_read()
  pthread_rwlock_wrlock()         -> write_lock() (and friends), down_write() (and friends)
  pthread_rwlock_unlock()         -> write_unlock() (and friends), up_write()

Atomic Operations:
  C scalar types                  -> atomic_t, atomic64_t
  __sync_fetch_and_add()          -> atomic_add_return(), atomic64_add_return()
  __sync_fetch_and_sub()          -> atomic_sub_return(), atomic64_sub_return()
  __sync_val_compare_and_swap()   -> cmpxchg()
  __sync_lock_test_and_set()      -> xchg() (rough)
  __sync_synchronize()            -> smp_mb()

Table 3.1: Mapping from POSIX to Linux-Kernel Primitives

3.5 The Right Tool for the Job: How to Choose?

As a rough rule of thumb, use the simplest tool that will get the job done. If you can, simply program sequentially. If that is insufficient, try using a shell script to mediate parallelism. If the resulting shell-script fork()/exec() overhead (about 480 microseconds for a minimal C program on an Intel Core Duo laptop) is too large, try using the C-language fork() and wait() primitives. If the overhead of these primitives (about 80 microseconds for a minimal child process) is still too large, then you might need to use the POSIX threading primitives, choosing the appropriate locking and/or atomic-operation primitives. If the overhead of the POSIX threading primitives (typically sub-microsecond) is too great, then the primitives introduced in Chapter 8 may be required. Always remember that inter-process communication and message-passing can be good alternatives to shared-memory multithreaded execution.

Quick Quiz 3.25: Wouldn't the shell normally use vfork() rather than fork()?

Of course, the actual overheads will depend not only on your hardware, but most critically on the manner in which you use the primitives.
Therefore, it is necessary to make the right design choices as well as the correct choice of individual primitives, as is discussed at length in subsequent chapters.

Chapter 4: Counting

Counting is perhaps the simplest and most natural thing a computer can do. However, counting efficiently and scalably on a large shared-memory multiprocessor can be quite challenging. Furthermore, the simplicity of the underlying concept of counting allows us to explore the fundamental issues of concurrency without the distractions of elaborate data structures or complex synchronization primitives. Counting therefore provides an excellent introduction to parallel programming.

This chapter covers a number of special cases for which there are simple, fast, and scalable counting algorithms. But first, let us find out how much you already know about concurrent counting.

Quick Quiz 4.1: Why on earth should efficient and scalable counting be hard? After all, computers have special hardware for the sole purpose of doing counting, addition, subtraction, and lots more besides, don't they???

Quick Quiz 4.2: Network-packet counting problem. Suppose that you need to collect statistics on the number of networking packets (or total number of bytes) transmitted and/or received. Packets might be transmitted or received by any CPU on the system. Suppose further that this large machine is capable of handling a million packets per second, and that there is a systems-monitoring package that reads out the count every five seconds. How would you implement this statistical counter?

Quick Quiz 4.3: Approximate structure-allocation limit problem. Suppose that you need to maintain a count of the number of structures allocated in order to fail any allocations once the number of structures in use exceeds a limit (say, 10,000). Suppose further that these structures are short-lived, that the limit is rarely exceeded, and that a "sloppy" approximate limit is acceptable.

Quick Quiz 4.4: Exact structure-allocation limit problem. Suppose that you need to maintain a count of the number of structures allocated in order to fail any allocations once the number of structures in use exceeds an exact limit (again, say 10,000). Suppose further that these structures are short-lived, and that the limit is rarely exceeded, that there is almost always at least one structure in use, and suppose further still that it is necessary to know exactly when this counter reaches zero, for example, in order to free up some memory that is not required unless there is at least one structure in use.

Quick Quiz 4.5: Removable I/O device access-count problem. Suppose that you need to maintain a reference count on a heavily used removable mass-storage device, so that you can tell the user when it is safe to remove the device. This device follows the usual removal procedure where the user indicates a desire to remove the device, and the system tells the user when it is safe to do so.

The remainder of this chapter will develop answers to these questions. Section 4.1 asks why counting on multicore systems isn't trivial, and Section 4.2 looks into ways of solving the network-packet counting problem. Section 4.3 investigates the approximate structure-allocation limit problem, while Section 4.4 takes on the exact structure-allocation limit problem. Section 4.5 discusses how to use the various specialized parallel counters introduced in the preceding sections. Finally, Section 4.6 concludes the chapter with performance measurements.
Sections 4.1 and 4.2 contain introductory material, while the remaining sections are more appropriate for advanced students.

4.1 Why Isn't Concurrent Counting Trivial?

Let's start with something simple, for example, the straightforward use of arithmetic shown in Figure 4.1 (count_nonatomic.c).

  1 long counter = 0;
  2
  3 void inc_count(void)
  4 {
  5   counter++;
  6 }
  7
  8 long read_count(void)
  9 {
 10   return counter;
 11 }

Figure 4.1: Just Count!

Here, we have a counter on line 1, we increment it on line 5, and we read out its value on line 10. What could be simpler? This approach has the additional advantage of being blazingly fast if you are doing lots of reading and almost no incrementing, and on small systems, the performance is excellent.

There is just one large fly in the ointment: this approach can lose counts. On my dual-core laptop, a short run invoked inc_count() 100,014,000 times, but the final value of the counter was only 52,909,118. Although approximate values do have their place in computing, accuracies far greater than 50% are almost always necessary.

Quick Quiz 4.6: But doesn't the ++ operator produce an x86 add-to-memory instruction? And won't the CPU cache cause this to be atomic?

Quick Quiz 4.7: The 8-figure accuracy on the number of failures indicates that you really did test this. Why would it be necessary to test such a trivial program, especially when the bug is easily seen by inspection?

The straightforward way to count accurately is to use atomic operations, as shown in Figure 4.2 (count_atomic.c).

  1 atomic_t counter = ATOMIC_INIT(0);
  2
  3 void inc_count(void)
  4 {
  5   atomic_inc(&counter);
  6 }
  7
  8 long read_count(void)
  9 {
 10   return atomic_read(&counter);
 11 }

Figure 4.2: Just Count Atomically!

Line 1 defines an atomic variable, line 5 atomically increments it, and line 10 reads it out. Because this is atomic, it keeps perfect count. However, it is slower: on an Intel Core Duo laptop, it is about six times slower than non-atomic increment when a single thread is incrementing, and more than ten times slower if two threads are incrementing.[1]

[1] Interestingly enough, a pair of threads non-atomically incrementing a counter will cause the counter to increase more quickly than a pair of threads atomically incrementing the counter. Of course, if your only goal is to make the counter increase quickly, an easier approach is to simply assign a large value to the counter. Nevertheless, there is likely to be a role for algorithms that use carefully relaxed notions of correctness in order to gain greater performance and scalability [And91, ACMS03, Ung11].

[Figure 4.3: Atomic Increment Scalability on Nehalem — time per increment (nanoseconds) versus number of CPUs/threads.]

This poor performance should not be a surprise, given the discussion in Chapter 2, nor should it be a surprise that the performance of atomic increment gets slower as the number of CPUs and threads increases, as shown in Figure 4.3. In this figure, the horizontal dashed line resting on the x axis is the ideal performance that would be achieved by a perfectly scalable algorithm: with such an algorithm, a given increment would incur the same overhead that it would in a single-threaded program. Atomic increment of a single global variable is clearly decidedly non-ideal, and gets worse as you add CPUs.

Quick Quiz 4.8: Why doesn't the dashed line on the x axis meet the diagonal line at x = 1?

Quick Quiz 4.9: But atomic increment is still pretty fast.
And incrementing a single variable in a tight loop sounds pretty unrealistic to me; after all, most of the program's execution should be devoted to actually doing work, not accounting for the work it has done! Why should I care about making this go faster?

For another perspective on global atomic increment, consider Figure 4.4. In order for each CPU to get a chance to increment a given global variable, the cache line containing that variable must circulate among all the CPUs, as shown by the red arrows. Such circulation will take significant time, resulting in the poor performance seen in Figure 4.3, which might be thought of as shown in Figure 4.5.

[Figure 4.4: Data Flow For Global Atomic Increment — eight CPUs, each with its own cache, connected by interconnects to a system interconnect and memory; the cache line holding the global variable must circulate among all of the CPUs.]

[Figure 4.5: Waiting to Count]

The following sections discuss high-performance counting, which avoids the delays inherent in such circulation.

Quick Quiz 4.10: But why can't CPU designers simply ship the addition operation to the data, avoiding the need to circulate the cache line containing the global variable being incremented?

4.2 Statistical Counters

This section covers the common special case of statistical counters, where the count is updated extremely frequently and the value is read out rarely, if ever. These will be used to solve the network-packet counting problem posed in Quick Quiz 4.2.

4.2.1 Design

Statistical counting is typically handled by providing a counter per thread (or CPU, when running in the kernel), so that each thread updates its own counter. The aggregate value of the counters is read out by simply summing up all of the threads' counters, relying on the commutative and associative properties of addition. This is an example of the Data Ownership pattern that will be introduced in Section 5.3.4.

Quick Quiz 4.11: But doesn't the fact that C's "integers" are limited in size complicate things?

4.2.2 Array-Based Implementation

One way to provide per-thread variables is to allocate an array with one element per thread (presumably cache aligned and padded to avoid false sharing).

Quick Quiz 4.12: An array??? But doesn't that limit the number of threads?

Such an array can be wrapped into per-thread primitives, as shown in Figure 4.6 (count_stat.c).

  1 DEFINE_PER_THREAD(long, counter);
  2
  3 void inc_count(void)
  4 {
  5   __get_thread_var(counter)++;
  6 }
  7
  8 long read_count(void)
  9 {
 10   int t;
 11   long sum = 0;
 12
 13   for_each_thread(t)
 14     sum += per_thread(counter, t);
 15   return sum;
 16 }

Figure 4.6: Array-Based Per-Thread Statistical Counters

Line 1 defines an array containing a set of per-thread counters of type long named, creatively enough, counter.

Lines 3-6 show a function that increments the counters, using the __get_thread_var() primitive to locate the currently running thread's element of the counter array. Because this element is modified only by the corresponding thread, non-atomic increment suffices.

Lines 8-16 show a function that reads out the aggregate value of the counter, using the for_each_thread() primitive to iterate over the list of currently running threads, and using the per_thread() primitive to fetch the specified thread's counter.
Because the hardware can fetch and store a properly aligned long atomically, and because gcc is kind enough to make use of this capability, normal loads suffice, and no special atomic instructions are required.

Quick Quiz 4.13: What other choice does gcc have, anyway???

Quick Quiz 4.14: How does the per-thread counter variable in Figure 4.6 get initialized?

Quick Quiz 4.15: How is the code in Figure 4.6 supposed to permit more than one counter?

This approach scales linearly with increasing number of updater threads invoking inc_count(). As is shown by the green arrows on each CPU in Figure 4.7, the reason for this is that each CPU can make rapid progress incrementing its thread's variable, without any expensive cross-system communication. As such, this section solves the network-packet counting problem presented at the beginning of this chapter.

[Figure 4.7: Data Flow For Per-Thread Increment — the same eight-CPU system, but with each CPU incrementing a counter held in its own cache, so that no data need flow across the system interconnect.]

Quick Quiz 4.16: The read operation takes time to sum up the per-thread values, and during that time, the counter could well be changing. This means that the value returned by read_count() in Figure 4.6 will not necessarily be exact. Assume that the counter is being incremented at rate r counts per unit time, and that read_count()'s execution consumes ∆ units of time. What is the expected error in the return value?

However, this excellent update-side scalability comes at great read-side expense for large numbers of threads. The next section shows one way to reduce read-side expense while still retaining the update-side scalability.

4.2.3 Eventually Consistent Implementation

One way to retain update-side scalability while greatly improving read-side performance is to weaken consistency requirements. The counting algorithm in the previous section is guaranteed to return a value between the value that an ideal counter would have taken on near the beginning of read_count()'s execution and that near the end of read_count()'s execution. Eventual consistency [Vog09] provides a weaker guarantee: in the absence of calls to inc_count(), calls to read_count() will eventually return an accurate count.

We exploit eventual consistency by maintaining a global counter. However, updaters only manipulate their per-thread counters. A separate thread is provided to transfer counts from the per-thread counters to the global counter. Readers simply access the value of the global counter. If updaters are active, the value used by the readers will be out of date; however, once updates cease, the global counter will eventually converge on the true value, and hence this approach qualifies as eventually consistent.

The implementation is shown in Figure 4.8 (count_stat_eventual.c). Lines 1-2 show the per-thread variable and the global variable that track the counter's value, and line 3 shows stopflag, which is used to coordinate termination (for the case where we want to terminate the program with an accurate counter value). The inc_count() function shown on lines 5-8 is similar to its counterpart in Figure 4.6.
48 1 DEFINE_PER_THREAD(unsigned long, counter); 2 unsigned long global_count; 3 int stopflag; 4 5 void inc_count(void) 6 { 7 ACCESS_ONCE(__get_thread_var(counter))++; 8 } 9 10 unsigned long read_count(void) 11 { 12 return ACCESS_ONCE(global_count); 13 } 14 15 void  * eventual(void  * arg) 16 { 17 int t; 18 int sum; 19 20 while (stopflag < 3) { 21 sum = 0; 22 for_each_thread(t) 23 sum += ACCESS_ONCE(per_thread(counter, t)); 24 ACCESS_ONCE(global_count) = sum; 25 poll(NULL, 0, 1); 26 if (stopflag) { 27 smp_mb(); 28 stopflag++; 29 } 30 } 31 return NULL; 32 } 33 34 void count_init(void) 35 { 36 thread_id_t tid; 37 38 if (pthread_create(&tid, NULL, eventual, NULL)) { 39 perror("count_init:pthread_create"); 40 exit(-1); 41 } 42 } 43 44 void count_cleanup(void) 45 { 46 stopflag = 1; 47 while (stopflag < 3) 48 poll(NULL, 0, 1); 49 smp_mb(); 50 } Figure 4.8: Array-Based Per-Thread Eventually Consistent Counters 49 The  read_count()  function shown on lines 10-13 simply returns the value of the global_count  variable. However, the  count_init()  function on lines 34-42 creates the  eventual() thread shown on lines 15-32, which cycles through all the threads, summing the per- thread local  counter  and storing the sum to the  global_count  variable. The eventual()  thread waits an arbitrarily chosen one millisecond between passes. The count_cleanup()  function on lines 44-50 coordinates termination. This approach gives extremely fast counter read-out while still supporting linear counter-update performance. However, this excellent read-side performance and update- side scalability comes at the cost of the additional thread running  eventual() . Quick Quiz 4.17:  Why doesn’t  inc_count()  in Figure  4.8  need to use atomic instructions? After all, we now have multiple threads accessing the per-thread counters! Quick Quiz 4.18:  Won’t the single global thread in the function  eventual()  of  Figure  4.8  be just as severe a bottleneck as a global lock would be? Quick Quiz 4.19:  Won’t the estimate returned by  read_count()  in Figure  4.8 become increasingly inaccurate as the number of threads rises? Quick Quiz 4.20:  Given that in the eventually-consistent algorithm shown in Figure  4.8  both reads and updates have extremely low overhead and are extremely scalable, why would anyone bother with the implementation described in Section  4.2.2, given its costly read-side code? 4.2.4 Per-Thread-Variable-Based Implementation Fortunately, gcc provides an  __thread  storage class that provides per-thread storage. This can be used as shown in Figure  4.9  ( count_end.c ) to implement a statistical counter that not only scales, but that also incurs little or no performance penalty to incrementers compared to simple non-atomic increment. Lines 1-4 define needed variables:  counter  is the per-thread counter variable, the counterp[]  array allows threads to access each others’ counters,  finalcount  ac- cumulates the total as individual threads exit, and  final_mutex  coordinates between threads accumulating the total value of the counter and exiting threads. Quick Quiz 4.21:  Why do we need an explicit array to find the other threads’ counters? Why doesn’t gcc provide a  per_thread()  interface, similar to the Linux kernel’s  per_cpu()  primitive, to allow threads to more easily access each others’ per-thread variables? The  inc_count()  function used by updaters is quite simple, as can be seen on lines 6-9. The  read_count()  function used by readers is a bit more complex. 
Line 16 acquires a lock to exclude exiting threads, and line 21 releases it. Line 17 initializes the sum to the count accumulated by those threads that have already exited, and lines 18-20 sum the counts being accumulated by threads currently running. Finally, line 22 returns the sum. Quick Quiz 4.22:  Doesn’t the check for  NULL  on line 19 of Figure  4.9  add extra branch mispredictions? Why not have a variable set permanently to zero, and point unused counter-pointers to that variable rather than setting them to  NULL ? Quick Quiz 4.23:  Why on earth do we need something as heavyweight as a  lock  guarding the summation in the function  read_count()  in Figure  4.9 ? 50 1 long __thread counter = 0; 2 long  * counterp[NR_THREADS] = { NULL }; 3 long finalcount = 0; 4 DEFINE_SPINLOCK(final_mutex); 5 6 void inc_count(void) 7 { 8 counter++; 9 } 10 11 long read_count(void) 12 { 13 int t; 14 long sum; 15 16 spin_lock(&final_mutex); 17 sum = finalcount; 18 for_each_thread(t) 19 if (counterp[t] != NULL) 20 sum +=  * counterp[t]; 21 spin_unlock(&final_mutex); 22 return sum; 23 } 24 25 void count_register_thread(void) 26 { 27 int idx = smp_thread_id(); 28 29 spin_lock(&final_mutex); 30 counterp[idx] = &counter; 31 spin_unlock(&final_mutex); 32 } 33 34 void count_unregister_thread(int nthreadsexpected) 35 { 36 int idx = smp_thread_id(); 37 38 spin_lock(&final_mutex); 39 finalcount += counter; 40 counterp[idx] = NULL; 41 spin_unlock(&final_mutex); 42 } Figure 4.9: Per-Thread Statistical Counters 51 Lines 25-32 show the  count_register_thread()  function, which must be called by each thread before its first use of this counter. This function simply sets up this thread’s element of the  counterp[]  array to point to its per-thread  counter variable. Quick Quiz 4.24:  Why on earth do we need to acquire the lock in  count_  register_thread()  in Figure  4.9 ? It is a single properly aligned machine-word store to a location that no other thread is modifying, so it should be atomic anyway, right? Lines 34-42 show the  count_unregister_thread()  function, which must be called prior to exit by each thread that previously called  count_register_  thread() . Line 38 acquires the lock, and line 41 releases it, thus excluding any callsto read_count() aswellasothercallsto count_unregister_thread() . Line 39 adds this thread’s  counter  to the global  finalcount , and then line 40 NULL s out its  counterp[]  array entry. A subsequent call to  read_count()  will see the exiting thread’s count in the global  finalcount , and will skip the exiting thread when sequencing through the  counterp[]  array, thus obtaining the correct total. This approach gives updaters almost exactly the same performance as a non-atomic add, and also scales linearly. On the other hand, concurrent reads contend for a single global lock, and therefore perform poorly and scale abysmally. However, this is not a problem for statistical counters, where incrementing happens often and readout happens almost never. Of course, this approach is considerably more complex than the array- based scheme, due to the fact that a given thread’s per-thread variables vanish when that thread exits. Quick Quiz 4.25:  Fine, but the Linux kernel doesn’t have to acquire a lock when reading out the aggregate value of per-CPU counters. So why should user-space code need to do this??? 4.2.5 Discussion These three implementations show that it is possible to obtain uniprocessor performance for statistical counters, despite running on a parallel machine. 
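For readers who would like to experiment outside the book's CodeSamples framework, the following hedged sketch (names and structure invented for illustration, loosely following the approach of Figure 4.9) shows the same Data Ownership pattern using only gcc's __thread storage class and the pthread primitives introduced in Chapter 3:

  #include <pthread.h>

  #define NR_THREADS 128

  static __thread long counter = 0;      /* Each thread's private count. */
  static long *counterp[NR_THREADS];     /* Registry of per-thread counters. */
  static long finalcount = 0;            /* Counts from already-exited threads. */
  static pthread_mutex_t final_mutex = PTHREAD_MUTEX_INITIALIZER;

  void inc_count(void)                   /* Updater fastpath: no atomics, no locks. */
  {
          counter++;
  }

  long read_count(void)                  /* Reader slowpath: sum everything under the lock. */
  {
          long sum;
          int t;

          pthread_mutex_lock(&final_mutex);
          sum = finalcount;
          for (t = 0; t < NR_THREADS; t++)
                  if (counterp[t] != NULL)
                          sum += *counterp[t];
          pthread_mutex_unlock(&final_mutex);
          return sum;
  }

  void count_register_thread(int idx)    /* Call from each thread before counting. */
  {
          pthread_mutex_lock(&final_mutex);
          counterp[idx] = &counter;
          pthread_mutex_unlock(&final_mutex);
  }

  void count_unregister_thread(int idx)  /* Call from each thread before it exits. */
  {
          pthread_mutex_lock(&final_mutex);
          finalcount += counter;
          counterp[idx] = NULL;
          pthread_mutex_unlock(&final_mutex);
  }

As with Figure 4.9, updates are as cheap as a non-atomic increment, while readers pay for a lock acquisition and a full scan; the explicit thread-index parameter simply stands in for the book's smp_thread_id() helper.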
Quick Quiz 4.26: What fundamental difference is there between counting packets and counting the total number of bytes in the packets, given that the packets vary in size?

Quick Quiz 4.27: Given that the reader must sum all the threads' counters, this could take a long time given large numbers of threads. Is there any way that the increment operation can remain fast and scalable while allowing readers to also enjoy reasonable performance and scalability?

Given what has been presented in this section, you should now be able to answer the Quick Quiz about statistical counters for networking near the beginning of this chapter.

4.3 Approximate Limit Counters

Another special case of counting involves limit-checking. For example, as noted in the approximate structure-allocation limit problem in Quick Quiz 4.3, suppose that you need to maintain a count of the number of structures allocated in order to fail any allocations once the number of structures in use exceeds a limit, in this case, 10,000. Suppose further that these structures are short-lived, that this limit is rarely exceeded, and that this limit is approximate in that it is OK to exceed it sometimes by some bounded amount (see Section 4.4 if you instead need the limit to be exact).

4.3.1 Design

One possible design for limit counters is to divide the limit of 10,000 by the number of threads, and give each thread a fixed pool of structures. For example, given 100 threads, each thread would manage its own pool of 100 structures. This approach is simple, and in some cases works well, but it does not handle the common case where a given structure is allocated by one thread and freed by another [MS93]. On the one hand, if a given thread takes credit for any structures it frees, then the thread doing most of the allocating runs out of structures, while the threads doing most of the freeing have lots of credits that they cannot use. On the other hand, if freed structures are credited to the CPU that allocated them, it will be necessary for CPUs to manipulate each others' counters, which will require expensive atomic instructions or other means of communicating between threads.[2]

[2] That said, if each structure will always be freed by the same CPU (or thread) that allocated it, then this simple partitioning approach works extremely well.

In short, for many important workloads, we cannot fully partition the counter. Given that partitioning the counters was what brought the excellent update-side performance for the three schemes discussed in Section 4.2, this might be grounds for some pessimism. However, the eventually consistent algorithm presented in Section 4.2.3 provides an interesting hint. Recall that this algorithm kept two sets of books, a per-thread counter variable for updaters and a global_count variable for readers, with an eventual() thread that periodically updated global_count to be eventually consistent with the values of the per-thread counter. The per-thread counter perfectly partitioned the counter value, while global_count kept the full value.

For limit counters, we can use a variation on this theme, in that we partially partition the counter. For example, each of four threads could have a per-thread counter, but each could also have a per-thread maximum value (call it countermax).

But then what happens if a given thread needs to increment its counter, but counter is equal to its countermax? The trick here is to move half of that thread's counter value to a globalcount, then increment counter. For example, if a given thread's counter and countermax variables were both equal to 10, we do the following:

1. Acquire a global lock.
2. Add five to globalcount.

3. To balance out the addition, subtract five from this thread's counter.

4. Release the global lock.

5. Increment this thread's counter, resulting in a value of six.

Although this procedure still requires a global lock, that lock need only be acquired once for every five increment operations, greatly reducing that lock's level of contention. We can reduce this contention as low as we wish by increasing the value of countermax. However, the corresponding penalty for increasing the value of countermax is reduced accuracy of globalcount. To see this, note that on a four-CPU system, if countermax is equal to ten, globalcount will be in error by at most 40 counts. In contrast, if countermax is increased to 100, globalcount might be in error by as much as 400 counts.

This raises the question of just how much we care about globalcount's deviation from the aggregate value of the counter, where this aggregate value is the sum of globalcount and each thread's counter variable. The answer to this question depends on how far the aggregate value is from the counter's limit (call it globalcountmax). The larger the difference between these two values, the larger countermax can be without risk of exceeding the globalcountmax limit. This means that the value of a given thread's countermax variable can be set based on this difference. When far from the limit, the countermax per-thread variables are set to large values to optimize for performance and scalability, while when close to the limit, these same variables are set to small values to minimize the error in the checks against the globalcountmax limit.

This design is an example of parallel fastpath, which is an important design pattern in which the common case executes with no expensive instructions and no interactions between threads, but where occasional use is also made of a more conservatively designed (and higher overhead) global algorithm. This design pattern is covered in more detail in Section 5.4.

4.3.2 Simple Limit Counter Implementation

Figure 4.10 shows both the per-thread and global variables used by this implementation.

  1 unsigned long __thread counter = 0;
  2 unsigned long __thread countermax = 0;
  3 unsigned long globalcountmax = 10000;
  4 unsigned long globalcount = 0;
  5 unsigned long globalreserve = 0;
  6 unsigned long *counterp[NR_THREADS] = { NULL };
  7 DEFINE_SPINLOCK(gblcnt_mutex);

Figure 4.10: Simple Limit Counter Variables

The per-thread counter and countermax variables are the corresponding thread's local counter and the upper bound on that counter, respectively. The globalcountmax variable on line 3 contains the upper bound for the aggregate counter, and the globalcount variable on line 4 is the global counter. The sum of globalcount and each thread's counter gives the aggregate value of the overall counter. The globalreserve variable on line 5 is the sum of all of the per-thread countermax variables. The relationship among these variables is shown by Figure 4.11 (see also the sketch following this list):

1. The sum of globalcount and globalreserve must be less than or equal to globalcountmax.

2. The sum of all threads' countermax values must be less than or equal to globalreserve.

3. Each thread's counter must be less than or equal to that thread's countermax.
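To make these invariants concrete, here is a hedged sketch of a debug check; it assumes a hypothetical countermaxp[] array (mirroring counterp[], maintained in count_register_thread(), and not present in count_lim.c) so that the second invariant can be checked from outside the owning threads:

  #include <assert.h>

  /* Hypothetical: one pointer to each thread's countermax variable, maintained
     in the same way that counterp[] is maintained below. */
  extern unsigned long *countermaxp[NR_THREADS];

  /* Check the Figure 4.11 invariants.  The caller must hold gblcnt_mutex so
     that the global variables and the registry arrays are stable. */
  static void check_limit_counter_invariants(void)
  {
          unsigned long reservesum = 0;
          int t;

          /* Invariant 1. */
          assert(globalcount + globalreserve <= globalcountmax);

          for_each_thread(t) {
                  if (countermaxp[t] == NULL)
                          continue;
                  reservesum += *countermaxp[t];          /* Accumulate for invariant 2. */
                  assert(*counterp[t] <= *countermaxp[t]); /* Invariant 3, per thread. */
          }
          assert(reservesum <= globalreserve);             /* Invariant 2. */
  }

This is only a sketch: the real implementation never needs such a check, precisely because every code path that modifies these variables preserves the three invariants.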
Each element of the counterp[] array references the corresponding thread's counter variable, and, finally, the gblcnt_mutex spinlock guards all of the global variables; in other words, no thread is permitted to access or modify any of the global variables unless it has acquired gblcnt_mutex.

[Figure 4.11: Simple Limit Counter Variable Relationships — globalcountmax spans globalcount, globalreserve, and the remaining headroom; globalreserve is in turn made up of the per-thread countermax values (countermax 0-3), each of which bounds the corresponding per-thread counter (counter 0-3).]

Figure 4.12 shows the add_count(), sub_count(), and read_count() functions (count_lim.c).

  1 int add_count(unsigned long delta)
  2 {
  3   if (countermax - counter >= delta) {
  4     counter += delta;
  5     return 1;
  6   }
  7   spin_lock(&gblcnt_mutex);
  8   globalize_count();
  9   if (globalcountmax -
 10       globalcount - globalreserve < delta) {
 11     spin_unlock(&gblcnt_mutex);
 12     return 0;
 13   }
 14   globalcount += delta;
 15   balance_count();
 16   spin_unlock(&gblcnt_mutex);
 17   return 1;
 18 }
 19
 20 int sub_count(unsigned long delta)
 21 {
 22   if (counter >= delta) {
 23     counter -= delta;
 24     return 1;
 25   }
 26   spin_lock(&gblcnt_mutex);
 27   globalize_count();
 28   if (globalcount < delta) {
 29     spin_unlock(&gblcnt_mutex);
 30     return 0;
 31   }
 32   globalcount -= delta;
 33   balance_count();
 34   spin_unlock(&gblcnt_mutex);
 35   return 1;
 36 }
 37
 38 unsigned long read_count(void)
 39 {
 40   int t;
 41   unsigned long sum;
 42
 43   spin_lock(&gblcnt_mutex);
 44   sum = globalcount;
 45   for_each_thread(t)
 46     if (counterp[t] != NULL)
 47       sum += *counterp[t];
 48   spin_unlock(&gblcnt_mutex);
 49   return sum;
 50 }

Figure 4.12: Simple Limit Counter Add, Subtract, and Read

Quick Quiz 4.28: Why does Figure 4.12 provide add_count() and sub_count() instead of the inc_count() and dec_count() interfaces shown in Section 4.2?

Lines 1-18 show add_count(), which adds the specified value delta to the counter. Line 3 checks to see if there is room for delta on this thread's counter, and, if so, line 4 adds it and line 6 returns success. This is the add_count() fastpath, and it does no atomic operations, references only per-thread variables, and should not incur any cache misses.

Quick Quiz 4.29: What is with the strange form of the condition on line 3 of Figure 4.12? Why not the following more intuitive form of the fastpath?

  3 if (counter + delta <= countermax) {
  4   counter += delta;
  5   return 1;
  6 }

If the test on line 3 fails, we must access global variables, and thus must acquire gblcnt_mutex on line 7, which we release on line 11 in the failure case or on line 16 in the success case. Line 8 invokes globalize_count(), shown in Figure 4.13, which clears the thread-local variables, adjusting the global variables as needed, thus simplifying global processing. (But don't take my word for it, try coding it yourself!)

Lines 9 and 10 check to see if addition of delta can be accommodated, with the meaning of the expression preceding the less-than sign shown in Figure 4.11 as the difference in height of the two red (leftmost) bars. If the addition of delta cannot be accommodated, then line 11 (as noted earlier) releases gblcnt_mutex and line 12 returns indicating failure. Otherwise, we take the slowpath.
Line 14 adds delta to globalcount, and then line 15 invokes balance_count() (shown in Figure 4.13) in order to update both the global and the per-thread variables. This call to balance_count() will usually set this thread's countermax to re-enable the fastpath. Line 16 then releases gblcnt_mutex (again, as noted earlier), and, finally, line 17 returns indicating success.

Quick Quiz 4.30: Why does globalize_count() zero the per-thread variables, only to later call balance_count() to refill them in Figure 4.12? Why not just leave the per-thread variables non-zero?

Lines 20-36 show sub_count(), which subtracts the specified delta from the counter. Line 22 checks to see if the per-thread counter can accommodate this subtraction, and, if so, line 23 does the subtraction and line 24 returns success. These lines form sub_count()'s fastpath, and, as with add_count(), this fastpath executes no costly operations.

If the fastpath cannot accommodate subtraction of delta, execution proceeds to the slowpath on lines 26-35. Because the slowpath must access global state, line 26 acquires gblcnt_mutex, which is released either by line 29 (in case of failure) or by line 34 (in case of success). Line 27 invokes globalize_count(), shown in Figure 4.13, which again clears the thread-local variables, adjusting the global variables as needed. Line 28 checks to see if the counter can accommodate subtracting delta, and, if not, line 29 releases gblcnt_mutex (as noted earlier) and line 30 returns failure.

Quick Quiz 4.31: Given that globalreserve counted against us in add_count(), why doesn't it count for us in sub_count() in Figure 4.12?

Quick Quiz 4.32: Suppose that one thread invokes add_count() shown in Figure 4.12, and then another thread invokes sub_count(). Won't sub_count() return failure even though the value of the counter is non-zero?

If, on the other hand, line 28 finds that the counter can accommodate subtracting delta, we complete the slowpath. Line 32 does the subtraction and then line 33 invokes balance_count() (shown in Figure 4.13) in order to update both global and per-thread variables (hopefully re-enabling the fastpath). Then line 34 releases gblcnt_mutex, and line 35 returns success.

Quick Quiz 4.33: Why have both add_count() and sub_count() in Figure 4.12? Why not simply pass a negative number to add_count()?

Lines 38-50 show read_count(), which returns the aggregate value of the counter. It acquires gblcnt_mutex on line 43 and releases it on line 48, excluding global operations from add_count() and sub_count(), and, as we will see, also excluding thread creation and exit. Line 44 initializes local variable sum to the value of globalcount, and then the loop spanning lines 45-47 sums the per-thread counter variables. Line 49 then returns the sum.

Figure 4.13 shows a number of utility functions used by the add_count(), sub_count(), and read_count() primitives shown in Figure 4.12.

Lines 1-7 show globalize_count(), which zeros the current thread's per-thread counters, adjusting the global variables appropriately.
It is important to note that this function does not change the aggregate value of the counter, but instead changes how the counter's current value is represented.

  1 static void globalize_count(void)
  2 {
  3   globalcount += counter;
  4   counter = 0;
  5   globalreserve -= countermax;
  6   countermax = 0;
  7 }
  8
  9 static void balance_count(void)
 10 {
 11   countermax = globalcountmax -
 12                globalcount - globalreserve;
 13   countermax /= num_online_threads();
 14   globalreserve += countermax;
 15   counter = countermax / 2;
 16   if (counter > globalcount)
 17     counter = globalcount;
 18   globalcount -= counter;
 19 }
 20
 21 void count_register_thread(void)
 22 {
 23   int idx = smp_thread_id();
 24
 25   spin_lock(&gblcnt_mutex);
 26   counterp[idx] = &counter;
 27   spin_unlock(&gblcnt_mutex);
 28 }
 29
 30 void count_unregister_thread(int nthreadsexpected)
 31 {
 32   int idx = smp_thread_id();
 33
 34   spin_lock(&gblcnt_mutex);
 35   globalize_count();
 36   counterp[idx] = NULL;
 37   spin_unlock(&gblcnt_mutex);
 38 }

Figure 4.13: Simple Limit Counter Utility Functions

Line 3 adds the thread's counter variable to globalcount, and line 4 zeroes counter. Similarly, line 5 subtracts the per-thread countermax from globalreserve, and line 6 zeroes countermax. It is helpful to refer to Figure 4.11 when reading both this function and balance_count(), which is next.

Lines 9-19 show balance_count(), which is roughly speaking the inverse of globalize_count(). This function's job is to set the current thread's countermax variable to the largest value that avoids the risk of the counter exceeding the globalcountmax limit. Changing the current thread's countermax variable of course requires corresponding adjustments to counter, globalcount and globalreserve, as can be seen by referring back to Figure 4.11. By doing this, balance_count() maximizes use of add_count()'s and sub_count()'s low-overhead fastpaths. As with globalize_count(), balance_count() is not permitted to change the aggregate value of the counter.

Lines 11-13 compute this thread's share of that portion of globalcountmax that is not already covered by either globalcount or globalreserve, and assign the computed quantity to this thread's countermax. Line 14 makes the corresponding adjustment to globalreserve. Line 15 sets this thread's counter to the middle of the range from zero to countermax. Line 16 checks to see whether globalcount can in fact accommodate this value of counter, and, if not, line 17 decreases counter accordingly. Finally, in either case, line 18 makes the corresponding adjustment to globalcount.

[Figure 4.14: Schematic of Globalization and Balancing — three configurations of globalcount, globalreserve, and the per-thread counter ("c 0" through "c 3") and countermax ("cm 0" through "cm 3") values: the initial state, the state after thread 0 calls globalize_count(), and the state after thread 0 calls balance_count().]

Quick Quiz 4.34: Why set counter to countermax / 2 in line 15 of Figure 4.13? Wouldn't it be simpler to just take countermax counts?
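To make the arithmetic concrete, consider a hedged worked example, with values chosen purely for illustration. Suppose that globalcountmax is 10,000, that globalcount is 1,000, that globalreserve is 500, that four threads are online, and that the current thread has just called globalize_count(), so that its counter and countermax are both zero. Then balance_count() proceeds as follows:

  countermax    = (10000 - 1000 - 500) / 4 = 2125
  globalreserve = 500 + 2125 = 2625
  counter       = 2125 / 2 = 1062
  counter (1062) > globalcount (1000), so counter = 1000
  globalcount   = 1000 - 1000 = 0

The invariants of Figure 4.11 still hold afterwards: globalcount plus globalreserve is 2,625, which does not exceed globalcountmax, and this thread's counter (1,000) does not exceed its countermax (2,125).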
It is helpful to look at a schematic depicting how the relationship of the counters changes with the execution of first globalize_count() and then balance_count(), as shown in Figure 4.14. Time advances from left to right, with the leftmost configuration roughly that of Figure 4.11. The center configuration shows the relationship of these same counters after globalize_count() is executed by thread 0. As can be seen from the figure, thread 0's counter ("c 0" in the figure) is added to globalcount, while the value of globalreserve is reduced by this same amount. Both thread 0's counter and its countermax ("cm 0" in the figure) are reduced to zero. The other three threads' counters are unchanged. Note that this change did not affect the overall value of the counter, as indicated by the bottommost dotted line connecting the leftmost and center configurations. In other words, the sum of globalcount and the four threads' counter variables is the same in both configurations. Similarly, this change did not affect the sum of globalcount and globalreserve, as indicated by the upper dotted line.

The rightmost configuration shows the relationship of these counters after balance_count() is executed, again by thread 0. One-quarter of the remaining count, denoted by the vertical line extending up from all three configurations, is added to thread 0's countermax and half of that to thread 0's counter. The amount added to thread 0's counter is also subtracted from globalcount in order to avoid changing the overall value of the counter (which is again the sum of globalcount and the four threads' counter variables), again as indicated by the lowermost of the two dotted lines connecting the center and rightmost configurations. The globalreserve variable is also adjusted so that this variable remains equal to the sum of the four threads' countermax variables. Because thread 0's counter is less than its countermax, thread 0 can once again increment the counter locally.

Quick Quiz 4.35: In Figure 4.14, even though a quarter of the remaining count up to the limit is assigned to thread 0, only an eighth of the remaining count is consumed, as indicated by the uppermost dotted line connecting the center and the rightmost configurations. Why is that?

Lines 21-28 show count_register_thread(), which sets up state for newly created threads. This function simply installs a pointer to the newly created thread's counter variable into the corresponding entry of the counterp[] array under the protection of gblcnt_mutex.

Finally, lines 30-38 show count_unregister_thread(), which tears down state for a soon-to-be-exiting thread. Line 34 acquires gblcnt_mutex and line 37 releases it. Line 35 invokes globalize_count() to clear out this thread's counter state, and line 36 clears this thread's entry in the counterp[] array.

4.3.3 Simple Limit Counter Discussion

This type of counter is quite fast when aggregate values are near zero, with some overhead due to the comparison and branch in both add_count()'s and sub_count()'s fastpaths. However, the use of a per-thread countermax reserve means that add_count() can fail even when the aggregate value of the counter is nowhere near globalcountmax. Similarly, sub_count() can fail even when the aggregate value of the counter is nowhere near zero. In many cases, this is unacceptable.
Even if the  globalcountmax  is intended to be an approximate limit, there is usually a limit to exactly how much approximation can be tolerated. One way to limit the degree of approximation is to impose an upper limit 60 1 unsigned long __thread counter = 0; 2 unsigned long __thread countermax = 0; 3 unsigned long globalcountmax = 10000; 4 unsigned long globalcount = 0; 5 unsigned long globalreserve = 0; 6 unsigned long  * counterp[NR_THREADS] = { NULL }; 7 DEFINE_SPINLOCK(gblcnt_mutex); 8 #define MAX_COUNTERMAX 100 Figure 4.15: Approximate Limit Counter Variables 1 static void balance_count(void) 2 { 3 countermax = globalcountmax - 4 globalcount - globalreserve; 5 countermax /= num_online_threads(); 6 if (countermax > MAX_COUNTERMAX) 7 countermax = MAX_COUNTERMAX; 8 globalreserve += countermax; 9 counter = countermax / 2; 10 if (counter > globalcount) 11 counter = globalcount; 12 globalcount -= counter; 13 } Figure 4.16: Approximate Limit Counter Balancing on the value of the per-thread  countermax  instances. This task is undertaken in the next section. 4.3.4 Approximate Limit Counter Implementation Because this implementation ( count_lim_app.c )  is quite similar to that in the previous section (Figures  4.10,  4.12,  and  4.13) , only the changes are shown here. Figure  4.15  is identical to Figure  4.10,  with the addition of   MAX_COUNTERMAX , which sets the maximum permissible value of the per-thread  countermax  variable. Similarly, Figure  4.16  is identical to the  balance_count()  function in Fig- ure  4.13,  with the addition of lines 6 and 7, which enforce the  MAX_COUNTERMAX limit on the per-thread  countermax  variable. 4.3.5 Approximate Limit Counter Discussion These changes greatly reduce the limit inaccuracy seen in the previous version, but present another problem: any given value of  MAX_COUNTERMAX will cause a workload- dependent fraction of accesses to fall off the fastpath. As the number of threads increase, non-fastpath execution will become both a performance and a scalability problem. However, we will defer this problem and turn instead to counters with exact limits. 4.4 Exact Limit Counters To solve the exact structure-allocation limit problem noted in Quick Quiz 4.4, we need a limit counter that can tell exactly when its limits are exceeded. One way of implementing such a limit counter is to cause threads that have reserved counts to give them up. One 61 1 atomic_t __thread ctrandmax = ATOMIC_INIT(0); 2 unsigned long globalcountmax = 10000; 3 unsigned long globalcount = 0; 4 unsigned long globalreserve = 0; 5 atomic_t  * counterp[NR_THREADS] = { NULL }; 6 DEFINE_SPINLOCK(gblcnt_mutex); 7 #define CM_BITS (sizeof(atomic_t)  *  4) 8 #define MAX_COUNTERMAX ((1 << CM_BITS) - 1) 9 10 static void 11 split_ctrandmax_int(int cami, int  * c, int  * cm) 12 { 13  * c = (cami >> CM_BITS) & MAX_COUNTERMAX; 14  * cm = cami & MAX_COUNTERMAX; 15 } 16 17 static void 18 split_ctrandmax(atomic_t  * cam, int  * old, 19 int  * c, int  * cm) 20 { 21 unsigned int cami = atomic_read(cam); 22 23  * old = cami; 24 split_ctrandmax_int(cami, c, cm); 25 } 26 27 static int merge_ctrandmax(int c, int cm) 28 { 29 unsigned int cami; 30 31 cami = (c << CM_BITS) | cm; 32 return ((int)cami); 33 } Figure 4.17: Atomic Limit Counter Variables and Access Functions way to do this is to use atomic instructions. Of course, atomic instructions will slow down the fastpath, but on the other hand, it would be silly not to at least give them a try. 
4.4.1 Atomic Limit Counter Implementation

Unfortunately, if one thread is to safely remove counts from another thread, both threads will need to atomically manipulate that thread's counter and countermax variables. The usual way to do this is to combine these two variables into a single variable, for example, given a 32-bit variable, using the high-order 16 bits to represent counter and the low-order 16 bits to represent countermax.

Quick Quiz 4.36: Why is it necessary to atomically manipulate the thread's counter and countermax variables as a unit? Wouldn't it be good enough to atomically manipulate them individually?

The variables and access functions for a simple atomic limit counter are shown in Figure 4.17 (count_lim_atomic.c). The counter and countermax variables in earlier algorithms are combined into the single variable ctrandmax shown on line 1, with counter in the upper half and countermax in the lower half. This variable is of type atomic_t, which has an underlying representation of int.

Lines 2-6 show the definitions for globalcountmax, globalcount, globalreserve, counterp, and gblcnt_mutex, all of which take on roles similar to their counterparts in Figure 4.15. Line 7 defines CM_BITS, which gives the number of bits in each half of ctrandmax, and line 8 defines MAX_COUNTERMAX, which gives the maximum value that may be held in either half of ctrandmax.

Quick Quiz 4.37: In what way does line 7 of Figure 4.17 violate the C standard?

Lines 10-15 show the split_ctrandmax_int() function, which, when given the underlying int from the atomic_t ctrandmax variable, splits it into its counter (c) and countermax (cm) components. Line 13 isolates the most-significant half of this int, placing the result as specified by argument c, and line 14 isolates the least-significant half of this int, placing the result as specified by argument cm.

Lines 17-25 show the split_ctrandmax() function, which picks up the underlying int from the specified variable on line 21, stores it as specified by the old argument on line 23, and then invokes split_ctrandmax_int() to split it on line 24.

Quick Quiz 4.38: Given that there is only one ctrandmax variable, why bother passing in a pointer to it on line 18 of Figure 4.17?

Lines 27-33 show the merge_ctrandmax() function, which can be thought of as the inverse of split_ctrandmax(). Line 31 merges the counter and countermax values passed in c and cm, respectively, and returns the result.

Quick Quiz 4.39: Why does merge_ctrandmax() in Figure 4.17 return an int rather than storing directly into an atomic_t?

Figure 4.18 shows the add_count(), sub_count(), and read_count() functions.

Lines 1-32 show add_count(), whose fastpath spans lines 8-15, with the remainder of the function being the slowpath. Lines 8-14 of the fastpath form a compare-and-swap (CAS) loop, with the atomic_cmpxchg() primitives on lines 13-14 performing the actual CAS. Line 9 splits the current thread's ctrandmax variable into its counter (in c) and countermax (in cm) components, while placing the underlying int into old. Line 10 checks whether the amount delta can be accommodated locally (taking care to avoid integer overflow), and if not, line 11 transfers to the slowpath. Otherwise, line 12 combines an updated counter value with the original countermax value into new.
The  atomic_cmpxchg()  primitive on lines 13-14 then atomically compares this thread’s ctrandmax variable to old , updating its value to  new  if the comparison succeeds. If the comparison succeeds, line 15 returns success, otherwise, execution continues in the loop at line 9. Quick Quiz 4.40:  Yecch! Why the ugly  goto  on line 11 of Figure  4.18 ? Haven’t you heard of the  break  statement??? Quick Quiz 4.41:  Why would the  atomic_cmpxchg()  primitive at lines 13-14 of Figure  4.18  ever fail? After all, we picked up its old value on line 9 and have not changed it! Lines 16-31 of Figure  4.18  show  add_count() ’s slowpath, which is protected by  gblcnt_mutex , which is acquired on line 17 and released on lines 24 and 30. Line 18 invokes globalize_count() , which moves this thread’s state to the global counters. Lines 19-20 check whether the  delta  value can be accommodated by the current global state, and, if not, line 21 invokes  flush_local_count()  to flush all threads’ local state to the global counters, and then lines 22-23 recheck whether delta  can be accommodated. If, after all that, the addition of   delta  still cannot be accommodated, then line 24 releases  gblcnt_mutex  (as noted earlier), and then line 25 returns failure. Otherwise, line 28 adds  delta  to the global counter, line 29 spreads counts to the local state if appropriate, line 30 releases  gblcnt_mutex  (again, as noted earlier), and finally, line 31 returns success. 63 1 int add_count(unsigned long delta) 2 { 3 int c; 4 int cm; 5 int old; 6 int new; 7 8 do { 9 split_ctrandmax(&ctrandmax, &old, &c, &cm); 10 if (delta > MAX_COUNTERMAX || c + delta > cm) 11 goto slowpath; 12 new = merge_ctrandmax(c + delta, cm); 13 } while (atomic_cmpxchg(&ctrandmax, 14 old, new) != old); 15 return 1; 16 slowpath: 17 spin_lock(&gblcnt_mutex); 18 globalize_count(); 19 if (globalcountmax - globalcount - 20 globalreserve < delta) { 21 flush_local_count(); 22 if (globalcountmax - globalcount - 23 globalreserve < delta) { 24 spin_unlock(&gblcnt_mutex); 25 return 0; 26 } 27 } 28 globalcount += delta; 29 balance_count(); 30 spin_unlock(&gblcnt_mutex); 31 return 1; 32 } 33 34 int sub_count(unsigned long delta) 35 { 36 int c; 37 int cm; 38 int old; 39 int new; 40 41 do { 42 split_ctrandmax(&ctrandmax, &old, &c, &cm); 43 if (delta > c) 44 goto slowpath; 45 new = merge_ctrandmax(c - delta, cm); 46 } while (atomic_cmpxchg(&ctrandmax, 47 old, new) != old); 48 return 1; 49 slowpath: 50 spin_lock(&gblcnt_mutex); 51 globalize_count(); 52 if (globalcount < delta) { 53 flush_local_count(); 54 if (globalcount < delta) { 55 spin_unlock(&gblcnt_mutex); 56 return 0; 57 } 58 } 59 globalcount -= delta; 60 balance_count(); 61 spin_unlock(&gblcnt_mutex); 62 return 1; 63 } Figure 4.18: Atomic Limit Counter Add and Subtract 64 1 unsigned long read_count(void) 2 { 3 int c; 4 int cm; 5 int old; 6 int t; 7 unsigned long sum; 8 9 spin_lock(&gblcnt_mutex); 10 sum = globalcount; 11 for_each_thread(t) 12 if (counterp[t] != NULL) { 13 split_ctrandmax(counterp[t], &old, &c, &cm); 14 sum += c; 15 } 16 spin_unlock(&gblcnt_mutex); 17 return sum; 18 } Figure 4.19: Atomic Limit Counter Read Lines 34-63 of Figure  4.18  show  sub_count() , which is structured similarly to add_count() , having a fastpath on lines 41-48 and a slowpath on lines 49-62. A line-by-line analysis of this function is left as an exercise to the reader. Figure  4.19  shows read_count() . Line 9 acquires gblcnt_mutex and line 16 releases it. 
Line 10 initializes local variable sum to the value of globalcount, and the loop spanning lines 11-15 adds the per-thread counters to this sum, isolating each per-thread counter using split_ctrandmax() on line 13. Finally, line 17 returns the sum.

 1 static void globalize_count(void)
 2 {
 3   int c;
 4   int cm;
 5   int old;
 6
 7   split_ctrandmax(&ctrandmax, &old, &c, &cm);
 8   globalcount += c;
 9   globalreserve -= cm;
10   old = merge_ctrandmax(0, 0);
11   atomic_set(&ctrandmax, old);
12 }
13
14 static void flush_local_count(void)
15 {
16   int c;
17   int cm;
18   int old;
19   int t;
20   int zero;
21
22   if (globalreserve == 0)
23     return;
24   zero = merge_ctrandmax(0, 0);
25   for_each_thread(t)
26     if (counterp[t] != NULL) {
27       old = atomic_xchg(counterp[t], zero);
28       split_ctrandmax_int(old, &c, &cm);
29       globalcount += c;
30       globalreserve -= cm;
31     }
32 }

Figure 4.20: Atomic Limit Counter Utility Functions 1

Figures 4.20 and 4.21 show the utility functions globalize_count(), flush_local_count(), balance_count(), count_register_thread(), and count_unregister_thread(). The code for globalize_count() is shown on lines 1-12 of Figure 4.20, and is similar to that of previous algorithms, with the addition of line 7, which is now required to split out counter and countermax from ctrandmax.

The code for flush_local_count(), which moves all threads' local counter state to the global counter, is shown on lines 14-32. Line 22 checks to see if the value of globalreserve permits any per-thread counts, and, if not, line 23 returns. Otherwise, line 24 initializes local variable zero to a combined zeroed counter and countermax. The loop spanning lines 25-31 sequences through each thread. Line 26 checks to see if the current thread has counter state, and, if so, lines 27-30 move that state to the global counters. Line 27 atomically fetches the current thread's state while replacing it with zero. Line 28 splits this state into its counter (in local variable c) and countermax (in local variable cm) components. Line 29 adds this thread's counter to globalcount, while line 30 subtracts this thread's countermax from globalreserve.

Quick Quiz 4.42: What stops a thread from simply refilling its ctrandmax variable immediately after flush_local_count() on line 14 of Figure 4.20 empties it?

Quick Quiz 4.43: What prevents concurrent execution of the fastpath of either atomic_add() or atomic_sub() from interfering with the ctrandmax variable while flush_local_count() is accessing it on line 27 of Figure 4.20?

Lines 1-22 of Figure 4.21 show the code for balance_count(), which refills the calling thread's local ctrandmax variable. This function is quite similar to that of the preceding algorithms, with changes required to handle the merged ctrandmax variable. Detailed analysis of the code is left as an exercise for the reader, as it is with the count_register_thread() function starting on line 24 and the count_unregister_thread() function starting on line 33.

Quick Quiz 4.44: Given that the atomic_set() primitive does a simple store to the specified atomic_t, how can line 21 of balance_count() in Figure 4.21 work correctly in face of concurrent flush_local_count() updates to this variable?

The next section qualitatively evaluates this design.
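The exact limit counter is intended to be used the way the structure-allocation problem of Quick Quiz 4.4 requires: reserve a count before allocating, release it after freeing. The following hedged sketch shows one way a caller might wrap the add_count()/sub_count() API around malloc() and free(); the struct name, the function names, and the error-handling policy are all invented for illustration, and each thread is assumed to have already called count_register_thread().

#include <stdlib.h>

/* Prototypes from the limit-counter implementation, for example
 * count_lim_atomic.c. */
int add_count(unsigned long delta);
int sub_count(unsigned long delta);

struct myobj {          /* hypothetical structure guarded by the limit */
        int data;
};

/* Allocate a structure only if the limit counter permits it; returns
 * NULL if the limit has been reached or if malloc() fails. */
struct myobj *myobj_alloc(void)
{
        struct myobj *p;

        if (!add_count(1))        /* reserve one count, fail at the limit */
                return NULL;
        p = malloc(sizeof(*p));
        if (p == NULL)
                sub_count(1);     /* return the count on allocation failure */
        return p;
}

/* Free a structure and release its count. */
void myobj_free(struct myobj *p)
{
        free(p);
        sub_count(1);
}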
4.4.2 Atomic Limit Counter Discussion

This is the first implementation that actually allows the counter to be run all the way to either of its limits, but it does so at the expense of adding atomic operations to the fastpaths, which slow down the fastpaths significantly on some systems. Although some workloads might tolerate this slowdown, it is worthwhile looking for algorithms with better read-side performance. One such algorithm uses a signal handler to steal counts from other threads. Because signal handlers run in the context of the signaled thread, atomic operations are not necessary, as shown in the next section.

Quick Quiz 4.45: But signal handlers can be migrated to some other CPU while running. Doesn't this possibility require that atomic instructions and memory barriers are required to reliably communicate between a thread and a signal handler that interrupts that thread?

 1 static void balance_count(void)
 2 {
 3   int c;
 4   int cm;
 5   int old;
 6   unsigned long limit;
 7
 8   limit = globalcountmax - globalcount -
 9     globalreserve;
10   limit /= num_online_threads();
11   if (limit > MAX_COUNTERMAX)
12     cm = MAX_COUNTERMAX;
13   else
14     cm = limit;
15   globalreserve += cm;
16   c = cm / 2;
17   if (c > globalcount)
18     c = globalcount;
19   globalcount -= c;
20   old = merge_ctrandmax(c, cm);
21   atomic_set(&ctrandmax, old);
22 }
23
24 void count_register_thread(void)
25 {
26   int idx = smp_thread_id();
27
28   spin_lock(&gblcnt_mutex);
29   counterp[idx] = &ctrandmax;
30   spin_unlock(&gblcnt_mutex);
31 }
32
33 void count_unregister_thread(int nthreadsexpected)
34 {
35   int idx = smp_thread_id();
36
37   spin_lock(&gblcnt_mutex);
38   globalize_count();
39   counterp[idx] = NULL;
40   spin_unlock(&gblcnt_mutex);
41 }

Figure 4.21: Atomic Limit Counter Utility Functions 2

[Figure 4.22: Signal-Theft State Machine. States: IDLE, REQ, ACK, and READY; transitions are labeled "need flush", "no count", "counting", "!counting", "flushed", and "done counting".]

4.4.3 Signal-Theft Limit Counter Design

Even though per-thread state will now be manipulated only by the corresponding thread, there will still need to be synchronization with the signal handlers. This synchronization is provided by the state machine shown in Figure 4.22. The state machine starts out in the IDLE state, and when add_count() or sub_count() find that the combination of the local thread's count and the global count cannot accommodate the request, the corresponding slowpath sets each thread's theft state to REQ (unless that thread has no count, in which case it transitions directly to READY). Only the slowpath, which holds the gblcnt_mutex lock, is permitted to transition from the IDLE state, as indicated by the green color.3 The slowpath then sends a signal to each thread, and the corresponding signal handler checks the corresponding thread's theft and counting variables. If the theft state is not REQ, then the signal handler is not permitted to change the state, and therefore simply returns. Otherwise, if the counting variable is set, indicating that the current thread's fastpath is in progress, the signal handler sets the theft state to ACK, otherwise to READY.

If the theft state is ACK, only the fastpath is permitted to change the theft state, as indicated by the blue color. When the fastpath completes, it sets the theft state to READY.

Once the slowpath sees a thread's theft state is READY, the slowpath is permitted to steal that thread's count. The slowpath then sets that thread's theft state to IDLE.

Quick Quiz 4.46: In Figure 4.22, why is the REQ theft state colored red?
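The prose above can be condensed into a small transition table. The sketch below models the state machine of Figure 4.22 as plain C functions, one per actor (slowpath, signal handler, and fastpath). It is a standalone illustration of the rules just described, not code from count_lim_sig.c, and the function names are invented for the example.

#include <assert.h>

enum theft_state { THEFT_IDLE, THEFT_REQ, THEFT_ACK, THEFT_READY };

/* Slowpath (holding gblcnt_mutex): request a theft, or mark a
 * thread holding no count as READY immediately. */
static enum theft_state slowpath_request(int thread_has_count)
{
        return thread_has_count ? THEFT_REQ : THEFT_READY;
}

/* Signal handler: acknowledge if the fastpath is running, otherwise
 * the count may be stolen right away. */
static enum theft_state handler_ack(enum theft_state s, int counting)
{
        if (s != THEFT_REQ)
                return s;               /* not permitted to change the state */
        return counting ? THEFT_ACK : THEFT_READY;
}

/* Fastpath: once it finishes, complete a pending acknowledgment. */
static enum theft_state fastpath_done(enum theft_state s)
{
        return s == THEFT_ACK ? THEFT_READY : s;
}

/* Slowpath: after stealing a READY thread's count, return it to IDLE. */
static enum theft_state slowpath_steal(enum theft_state s)
{
        return s == THEFT_READY ? THEFT_IDLE : s;
}

int main(void)
{
        enum theft_state s;

        s = slowpath_request(1);        /* IDLE -> REQ */
        s = handler_ack(s, 1);          /* handler during fastpath: REQ -> ACK */
        s = fastpath_done(s);           /* fastpath completes: ACK -> READY */
        s = slowpath_steal(s);          /* slowpath steals: READY -> IDLE */
        assert(s == THEFT_IDLE);
        return 0;
}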
Quick Quiz 4.47:  In Figure  4.22,  what is the point of having separate REQ and ACK  theft  states? Why not simplify the state machine by collapsing them into a single REQACK state? Then whichever of the signal handler or the fastpath gets there first could set the state to READY. 3 For those with black-and-white versions of this book, IDLE and READY are green, REQ is red, and ACK is blue. 68 1 #define THEFT_IDLE 0 2 #define THEFT_REQ 1 3 #define THEFT_ACK 2 4 #define THEFT_READY 3 5 6 int __thread theft = THEFT_IDLE; 7 int __thread counting = 0; 8 unsigned long __thread counter = 0; 9 unsigned long __thread countermax = 0; 10 unsigned long globalcountmax = 10000; 11 unsigned long globalcount = 0; 12 unsigned long globalreserve = 0; 13 unsigned long  * counterp[NR_THREADS] = { NULL }; 14 unsigned long  * countermaxp[NR_THREADS] = { NULL }; 15 int  * theftp[NR_THREADS] = { NULL }; 16 DEFINE_SPINLOCK(gblcnt_mutex); 17 #define MAX_COUNTERMAX 100 Figure 4.23: Signal-Theft Limit Counter Data 4.4.4 Signal-Theft Limit Counter Implementation Figure  4.23  ( count_lim_sig.c ) shows the data structures used by the signal-theft based counter implementation. Lines 1-7 define the states and values for the per-thread theft state machine described in the preceding section. Lines 8-17 are similar to earlier implementations, with the addition of lines 14 and 15 to allow remote access to a thread’s  countermax  and  theft  variables, respectively. Figure  4.24  shows the functions responsible for migrating counts between per-thread variables and the global variables. Lines 1-7 shows  globalize_count() , which is identical to earlier implementations. Lines 9-19 shows  flush_local_count_  sig() , which is the signal handler used in the theft process. Lines 11 and 12 check to see if the  theft  state is REQ, and, if not returns without change. Line 13 executes a memory barrier to ensure that the sampling of the theft variable happens before any change to that variable. Line 14 sets the  theft  state to ACK, and, if line 15 sees that this thread’s fastpaths are not running, line 16 sets the  theft  state to READY. Quick Quiz 4.48:  In Figure  4.24  function  flush_local_count_sig() , why are there  ACCESS_ONCE()  wrappers around the uses of the  theft  per-thread vari- able? Lines 21-49 shows  flush_local_count() , which is called from the slowpath to flush all threads’ local counts. The loop spanning lines 26-34 advances the  theft state for each thread that has local count, and also sends that thread a signal. Line 27 skips any non-existent threads. Otherwise, line 28 checks to see if the current thread holds any local count, and, if not, line 29 sets the thread’s  theft  state to READY and line 30 skips to the next thread. Otherwise, line 32 sets the thread’s  theft  state to REQ and line 33 sends the thread a signal. Quick Quiz 4.49:  In Figure  4.24,  why is it safe for line 28 to directly access the other thread’s  countermax  variable? Quick Quiz 4.50:  In Figure  4.24 , why doesn’t line 33 check for the current thread sending itself a signal? Quick Quiz 4.51:  The code in Figure  4.24 , works with gcc and POSIX. What would be required to make it also conform to the ISO C standard? The loop spanning lines 35-48 waits until each thread reaches READY state, then steals that thread’s count. Lines 36-37 skip any non-existent threads, and the loop spanning lines 38-42 wait until the current thread’s  theft  state becomes READY. 
69 1 static void globalize_count(void) 2 { 3 globalcount += counter; 4 counter = 0; 5 globalreserve -= countermax; 6 countermax = 0; 7 } 8 9 static void flush_local_count_sig(int unused) 10 { 11 if (ACCESS_ONCE(theft) != THEFT_REQ) 12 return; 13 smp_mb(); 14 ACCESS_ONCE(theft) = THEFT_ACK; 15 if (!counting) { 16 ACCESS_ONCE(theft) = THEFT_READY; 17 } 18 smp_mb(); 19 } 20 21 static void flush_local_count(void) 22 { 23 int t; 24 thread_id_t tid; 25 26 for_each_tid(t, tid) 27 if (theftp[t] != NULL) { 28 if ( * countermaxp[t] == 0) { 29 ACCESS_ONCE( * theftp[t]) = THEFT_READY; 30 continue; 31 } 32 ACCESS_ONCE( * theftp[t]) = THEFT_REQ; 33 pthread_kill(tid, SIGUSR1); 34 } 35 for_each_tid(t, tid) { 36 if (theftp[t] == NULL) 37 continue; 38 while (ACCESS_ONCE( * theftp[t]) != THEFT_READY) { 39 poll(NULL, 0, 1); 40 if (ACCESS_ONCE( * theftp[t]) == THEFT_REQ) 41 pthread_kill(tid, SIGUSR1); 42 } 43 globalcount +=  * counterp[t]; 44  * counterp[t] = 0; 45 globalreserve -=  * countermaxp[t]; 46  * countermaxp[t] = 0; 47 ACCESS_ONCE( * theftp[t]) = THEFT_IDLE; 48 } 49 } 50 51 static void balance_count(void) 52 { 53 countermax = globalcountmax - 54 globalcount - globalreserve; 55 countermax /= num_online_threads(); 56 if (countermax > MAX_COUNTERMAX) 57 countermax = MAX_COUNTERMAX; 58 globalreserve += countermax; 59 counter = countermax / 2; 60 if (counter > globalcount) 61 counter = globalcount; 62 globalcount -= counter; 63 } Figure 4.24: Signal-Theft Limit Counter Value-Migration Functions 70 1 int add_count(unsigned long delta) 2 { 3 int fastpath = 0; 4 5 counting = 1; 6 barrier(); 7 if (countermax - counter >= delta && 8 ACCESS_ONCE(theft) <= THEFT_REQ) { 9 counter += delta; 10 fastpath = 1; 11 } 12 barrier(); 13 counting = 0; 14 barrier(); 15 if (ACCESS_ONCE(theft) == THEFT_ACK) { 16 smp_mb(); 17 ACCESS_ONCE(theft) = THEFT_READY; 18 } 19 if (fastpath) 20 return 1; 21 spin_lock(&gblcnt_mutex); 22 globalize_count(); 23 if (globalcountmax - globalcount - 24 globalreserve < delta) { 25 flush_local_count(); 26 if (globalcountmax - globalcount - 27 globalreserve < delta) { 28 spin_unlock(&gblcnt_mutex); 29 return 0; 30 } 31 } 32 globalcount += delta; 33 balance_count(); 34 spin_unlock(&gblcnt_mutex); 35 return 1; 36 } Figure 4.25: Signal-Theft Limit Counter Add Function Line 39 blocks for a millisecond to avoid priority-inversion problems, and if line 40 determines that the thread’s signal has not yet arrived, line 41 resends the signal. Execution reaches line 43 when the thread’s  theft  state becomes READY, so lines 43- 46 do the thieving. Line 47 then sets the thread’s  theft  state back to IDLE. Quick Quiz 4.52:  In Figure  4.24 , why does line 41 resend the signal? Lines 51-63 show  balance_count() , which is similar to that of earlier exam- ples. Figure  4.25  shows the  add_count()  function. The fastpath spans lines 5-20, and the slowpath lines 21-35. Line 5 sets the per-thread  counting  variable to 1 so that any subsequent signal handlers interrupting this thread will set the  theft  state to ACK rather than READY, allowing this fastpath to complete properly. Line 6 prevents the compiler from reordering any of the fastpath body to precede the setting of   counting . Lines 7 and 8 check to see if the per-thread data can accommodate the  add_count() and if there is no ongoing theft in progress, and if so line 9 does the fastpath addition and line 10 notes that the fastpath was taken. 
In either case, line 12 prevents the compiler from reordering the fastpath body to follow line 13, which permits any subsequent signal handlers to undertake theft. Line 14 again disables compiler reordering, and then line 15 checks to see if the signal handler deferred the  theft  state-change to READY, and, if so, line 16 executes a memory barrier to ensure that any CPU that sees line 17 setting state to READY also sees the effects of line 9. If the fastpath addition at line 9 was executed, then line 20 returns 71 38 int sub_count(unsigned long delta) 39 { 40 int fastpath = 0; 41 42 counting = 1; 43 barrier(); 44 if (counter >= delta && 45 ACCESS_ONCE(theft) <= THEFT_REQ) { 46 counter -= delta; 47 fastpath = 1; 48 } 49 barrier(); 50 counting = 0; 51 barrier(); 52 if (ACCESS_ONCE(theft) == THEFT_ACK) { 53 smp_mb(); 54 ACCESS_ONCE(theft) = THEFT_READY; 55 } 56 if (fastpath) 57 return 1; 58 spin_lock(&gblcnt_mutex); 59 globalize_count(); 60 if (globalcount < delta) { 61 flush_local_count(); 62 if (globalcount < delta) { 63 spin_unlock(&gblcnt_mutex); 64 return 0; 65 } 66 } 67 globalcount -= delta; 68 balance_count(); 69 spin_unlock(&gblcnt_mutex); 70 return 1; 71 } Figure 4.26: Signal-Theft Limit Counter Subtract Function 72 1 unsigned long read_count(void) 2 { 3 int t; 4 unsigned long sum; 5 6 spin_lock(&gblcnt_mutex); 7 sum = globalcount; 8 for_each_thread(t) 9 if (counterp[t] != NULL) 10 sum +=  * counterp[t]; 11 spin_unlock(&gblcnt_mutex); 12 return sum; 13 } Figure 4.27: Signal-Theft Limit Counter Read Function success. Otherwise, we fall through to the slowpath starting at line 21. The structure of the slowpath is similar to those of earlier examples, so its analysis is left as an exercise to the reader. Similarly, the structure of  sub_count() on Figure  4.26  is the same as that of   add_count() , so the analysis of   sub_count()  is also left as an exercise for the reader, as is the analysis of   read_count()  in Figure  4.27. Lines 1-12 of Figure  4.28  show  count_init() , which set up  flush_local_  count_sig()  as the signal handler for SIGUSR1 , enabling the pthread_kill() calls in  flush_local_count()  to invoke  flush_local_count_sig() . The code for thread registry and unregistry is similar to that of earlier examples, so its analysis is left as an exercise for the reader. 4.4.5 Signal-Theft Limit Counter Discussion The signal-theft implementation runs more than twice as fast as the atomic implementa- tion on my Intel Core Duo laptop. Is it always preferable? The signal-theft implementation would be vastly preferable on Pentium-4 systems, given their slow atomic instructions, but the old 80386-based Sequent Symmetry sys- tems would do much better with the shorter path length of the atomic implementation. However, this increased update-side performance comes at the prices of higher read-side overhead: Those POSIX signals are not free. If ultimate performance is of the essence, you will need to measure them both on the system that your application is to be deployed on. Quick Quiz 4.53:  Not only are POSIX signals slow, sending one to each thread simply does not scale. What would you do if you had (say) 10,000 threads and needed the read side to be fast? This is but one reason why high-quality APIs are so important: they permit imple- mentations to be changed as required by ever-changing hardware performance charac- teristics. Quick Quiz 4.54:  What if you want an exact limit counter to be exact only for its lower limit, but to allow the upper limit to be inexact? 
4.5 Applying Specialized Parallel Counters Although the exact limit counter implementations in Section  4.4  can be very useful, they are not much help if the counter’s value remains near zero at all times, as it might when 73 1 void count_init(void) 2 { 3 struct sigaction sa; 4 5 sa.sa_handler = flush_local_count_sig; 6 sigemptyset(&sa.sa_mask); 7 sa.sa_flags = 0; 8 if (sigaction(SIGUSR1, &sa, NULL) != 0) { 9 perror("sigaction"); 10 exit(-1); 11 } 12 } 13 14 void count_register_thread(void) 15 { 16 int idx = smp_thread_id(); 17 18 spin_lock(&gblcnt_mutex); 19 counterp[idx] = &counter; 20 countermaxp[idx] = &countermax; 21 theftp[idx] = &theft; 22 spin_unlock(&gblcnt_mutex); 23 } 24 25 void count_unregister_thread(int nthreadsexpected) 26 { 27 int idx = smp_thread_id(); 28 29 spin_lock(&gblcnt_mutex); 30 globalize_count(); 31 counterp[idx] = NULL; 32 countermaxp[idx] = NULL; 33 theftp[idx] = NULL; 34 spin_unlock(&gblcnt_mutex); 35 } Figure 4.28: Signal-Theft Limit Counter Initialization Functions 74 counting the number of outstanding accesses to an I/O device. The high overhead of  such near-zero counting is especially painful given that we normally don’t care how many references there are. As noted in the removable I/O device access-count problem posed by Quick Quiz 4.5, the number of accesses is irrelevant except in those rare cases when someone is actually trying to remove the device. One simple solution to this problem is to add a large “bias” (for example, one billion) to the counter in order to ensure that the value is far enough from zero that the counter can operate efficiently. When someone wants to remove the device, this bias is subtracted from the counter value. Counting the last few accesses will be quite inefficient, but the important point is that the many prior accesses will have been counted at full speed. Quick Quiz 4.55:  What else had you better have done when using a biased counter? Although a biased counter can be quite helpful and useful, it is only a partial solution to the removable I/O device access-count problem called out on page  43.  When attempting to remove a device, we must not only know the precise number of current I/O accesses, we also need to prevent any future accesses from starting. One way to accomplish this is to read-acquire a reader-writer lock when updating the counter, and to write-acquire that same reader-writer lock when checking the counter. Code for doing I/O might be as follows: 1 read_lock(&mylock); 2 if (removing) { 3 read_unlock(&mylock); 4 cancel_io(); 5 } else { 6 add_count(1); 7 read_unlock(&mylock); 8 do_io(); 9 sub_count(1); 10 } Line 1 read-acquires the lock, and either line 3 or 7 releases it. Line 2 checks to see if the device is being removed, and, if so, line 3 releases the lock and line 4 cancels the I/O, or takes whatever action is appropriate given that the device is to be removed. Otherwise, line 6 increments the access count, line 7 releases the lock, line 8 performs the I/O, and line 9 decrements the access count. Quick Quiz 4.56:  This is ridiculous! We are  read  -acquiring a reader-writer lock to update  the counter? What are you playing at??? The code to remove the device might be as follows: 1 write_lock(&mylock); 2 removing = 1; 3 sub_count(mybias); 4 write_unlock(&mylock); 5 while (read_count() != 0) { 6 poll(NULL, 0, 1); 7 } 8 remove_device(); Line 1 write-acquires the lock and line 4 releases it. 
Line 2 notes that the device is being removed, and the loop spanning lines 5-7 waits for any I/O operations to complete. Finally, line 8 does any additional processing needed to prepare for device removal.

Quick Quiz 4.57: What other issues would need to be accounted for in a real system?

Table 4.1: Statistical Counter Performance on Power-6

  Algorithm               Section   Updates   Reads (1 Core)   Reads (32 Cores)
  count_stat.c            4.2.2     11.5 ns   408 ns           409 ns
  count_stat_eventual.c   4.2.3     11.6 ns   1 ns             1 ns
  count_end.c             4.2.4     6.3 ns    389 ns           51,200 ns
  count_end_rcu.c         12.2.1    5.7 ns    354 ns           501 ns

Table 4.2: Limit Counter Performance on Power-6

  Algorithm               Section   Exact?   Updates   Reads (1 Core)   Reads (64 Cores)
  count_lim.c             4.3.2     N        3.6 ns    375 ns           50,700 ns
  count_lim_app.c         4.3.4     N        11.7 ns   369 ns           51,000 ns
  count_lim_atomic.c      4.4.1     Y        51.4 ns   427 ns           49,400 ns
  count_lim_sig.c         4.4.4     Y        10.2 ns   370 ns           54,000 ns

4.6 Parallel Counting Discussion

This chapter has presented the reliability, performance, and scalability problems with traditional counting primitives. The C-language ++ operator is not guaranteed to function reliably in multithreaded code, and atomic operations to a single variable neither perform nor scale well. This chapter has also presented a number of counting algorithms that perform and scale extremely well in certain special cases.

Table 4.1 shows the performance of the four parallel statistical counting algorithms. All four algorithms provide near-perfect linear scalability for updates. The per-thread-variable implementation (count_stat.c) is significantly faster on updates than the array-based implementation (count_end.c), but is slower at reads, and suffers severe lock contention when there are many parallel readers. This contention can be addressed using the deferred-processing techniques introduced in Chapter 8, as shown on the count_end_rcu.c row of Table 4.1. Deferred processing also shines on the count_stat_eventual.c row, courtesy of eventual consistency.

Quick Quiz 4.58: On the count_stat.c row of Table 4.1, we see that the update side scales linearly with the number of threads. How is that possible given that the more threads there are, the more per-thread counters must be summed up?

Quick Quiz 4.59: Even on the last row of Table 4.1, the read-side performance of these statistical counter implementations is pretty horrible. So why bother with them?

Table 4.2 shows the performance of the parallel limit-counting algorithms. Exact enforcement of the limits incurs a substantial performance penalty, although on this 4.7GHz Power-6 system that penalty can be reduced by substituting read-side signals for update-side atomic operations. All of these implementations suffer from read-side lock contention in the face of concurrent readers.

Quick Quiz 4.60: Given the performance data shown in Table 4.2, we should always prefer update-side signals over read-side atomic operations, right?

Quick Quiz 4.61: Can advanced techniques be applied to address the lock contention for readers seen in Table 4.2?

The fact that these algorithms only work well in their respective special cases might be considered a major problem with parallel programming in general. After all, the C-language ++ operator works just fine in single-threaded code, and not just for special cases, but in general, right?

This line of reasoning does contain a grain of truth, but is in essence misguided. The problem is not parallelism as such, but rather scalability.
To understand this, first consider the C-language ++ operator. The fact is that it does not work in general, only for a restricted range of numbers. If you need to deal with 1,000-digit decimal numbers, the C-language ++ operator will not work for you.

Quick Quiz 4.62: The ++ operator works just fine for 1,000-digit numbers! Haven't you heard of operator overloading???

This problem is not specific to arithmetic. Suppose you need to store and query data. Should you use an ASCII file? XML? A relational database? A linked list? A dense array? A B-tree? A radix tree? Or one of the plethora of other data structures and environments that permit data to be stored and queried? It depends on what you need to do, how fast you need it done, and how large your data set is.

Similarly, if you need to count, your solution will depend on how large the numbers you need to work with are, how many CPUs need to be manipulating a given number concurrently, how the number is to be used, and what level of performance and scalability you will need.

Nor is this problem specific to software. The design for a bridge meant to allow people to walk across a small brook might be as simple as a single wooden plank. But you would probably not use a plank to span the kilometers-wide mouth of the Columbia River, nor would such a design be advisable for bridges carrying concrete trucks. In short, just as bridge design must change with increasing span and load, so must software design change as the number of CPUs increases.

The examples in this chapter have shown that an important tool permitting large numbers of CPUs to be brought to bear is partitioning. The counters might be fully partitioned, as in the statistical counters discussed in Section 4.2, or partially partitioned as in the limit counters discussed in Sections 4.3 and 4.4. Partitioning in general will be considered in far greater depth in Chapter 5, and partial parallelization in particular in Section 5.4, where it is called parallel fastpath.

Quick Quiz 4.63: But if we are going to have to partition everything, why bother with shared-memory multithreading? Why not just partition the problem completely and run as multiple processes, each in its own address space?

The partially partitioned counting algorithms used locking to guard the global data, and locking is the subject of Chapter 6. In contrast, the partitioned data tended to be fully under the control of the corresponding thread, so that no synchronization whatsoever was required. This data ownership will be introduced in Section 5.3.4 and discussed in more detail in Chapter 7.

Finally, the eventually consistent statistical counter discussed in Section 4.2.3 showed how deferring activity (in that case, updating the global counter) can provide substantial performance and scalability benefits. Chapter 8 will examine a number of additional ways that deferral can improve performance, scalability, and even real-time response.

Summarizing the summary:

1.  Partitioning promotes performance and scalability.

2.  Partial partitioning, that is, partitioning applied only to common code paths, works almost as well.

3.  Partial partitioning can be applied to code (as in Section 4.2's statistical counters' partitioned updates and non-partitioned reads), but also across time (as in Section 4.3's and Section 4.4's limit counters running fast when far from the limit, but slowly when close to the limit).
4.  Read-only code paths should remain read-only: Spurious synchronization writes to shared memory kill performance and scalability, as seen in the count_end.c row of Table 4.1.

5.  Judicious use of delay promotes performance and scalability, as seen in Section 4.2.3.

6.  Parallel performance and scalability is usually a balancing act: Beyond a certain point, optimizing some code paths will degrade others. The count_stat.c and count_end_rcu.c rows of Table 4.1 illustrate this point.

7.  Different levels of performance and scalability will affect algorithm and data-structure design, as do a large number of other factors. Figure 4.3 illustrates this point: Atomic increment might be completely acceptable for a two-CPU system, but be completely inadequate for an eight-CPU system.

In short, as noted at the beginning of this chapter, the simplicity of the concepts underlying counting has allowed us to explore many fundamental concurrency issues without the distraction of elaborate data structures or complex synchronization primitives. Later chapters dig more deeply into these fundamental issues.

Chapter 5
Partitioning and Synchronization Design

This chapter describes how to design software to take advantage of the multiple CPUs that are increasingly appearing in commodity systems. It does this by presenting a number of idioms, or "design patterns" [Ale79, GHJV95, SSRB00], that can help you balance performance, scalability, and response time. As noted in earlier chapters, the most important decision you will make when creating parallel software is how to carry out the partitioning. Correctly partitioned problems lead to simple, scalable, and high-performance solutions, while poorly partitioned problems result in slow and complex solutions. This chapter will help you design partitioning into your code. The word "design" is very important: You should partition first and code second. Reversing this order often leads to poor performance and scalability along with great frustration.

To this end, Section 5.1 presents partitioning exercises, Section 5.2 reviews partitionability design criteria, Section 5.3 discusses selecting an appropriate synchronization granularity, Section 5.4 gives an overview of important parallel-fastpath designs that provide speed and scalability in the common case with a simpler but less-scalable fallback "slow path" for unusual situations, and finally Section 5.5 takes a brief look beyond partitioning.

5.1 Partitioning Exercises

This section uses a pair of exercises (the classic Dining Philosophers problem and a double-ended queue) to demonstrate the value of partitioning.

5.1.1 Dining Philosophers Problem

Figure 5.1 shows a diagram of the classic Dining Philosophers problem [Dij71]. This problem features five philosophers who do nothing but think and eat a "very difficult kind of spaghetti" which requires two forks to eat. A given philosopher is permitted to use only the forks to his or her immediate right and left, and once a philosopher picks up a fork, he or she will not put it down until sated.1

1 Readers who have difficulty imagining a food that requires two forks are invited to instead think in terms of chopsticks.

[Figure 5.1: Dining Philosophers Problem]
[Figure 5.2: Partial Starvation Is Also Bad]

The object is to construct an algorithm that, quite literally, prevents starvation. One starvation scenario would be if all of the philosophers picked up their leftmost forks simultaneously.
Because none of them would put down their fork until after they ate, and because none of them may pick up their second fork until at least one has finished eating, they all starve. Please note that it is not sufficient to allow at least one philosopher to eat. As Figure 5.2 shows, starvation of even a few of the philosophers is to be avoided.

Dijkstra's solution used a global semaphore, which works fine assuming negligible communications delays, an assumption that became invalid in the late 1980s or early 1990s.2 Therefore, recent solutions number the forks as shown in Figure 5.3. Each philosopher picks up the lowest-numbered fork next to his or her plate, then picks up the highest-numbered fork. The philosopher sitting in the uppermost position in the diagram thus picks up the leftmost fork first, then the rightmost fork, while the rest of the philosophers instead pick up their rightmost fork first. Because two of the philosophers will attempt to pick up fork 1 first, and because only one of those two philosophers will succeed, there will be five forks available to four philosophers. At least one of these four will be guaranteed to have two forks, and thus be able to proceed eating.

2 It is all too easy to denigrate Dijkstra from the viewpoint of the year 2012, more than 40 years after the fact. If you still feel the need to denigrate Dijkstra, my advice is to publish something, wait 40 years, and then see how your words stood the test of time.

[Figure 5.3: Dining Philosophers Problem, Textbook Solution]

This general technique of numbering resources and acquiring them in numerical order is heavily used as a deadlock-prevention technique. However, it is easy to imagine a sequence of events that will result in only one philosopher eating at a time even though all are hungry:

1. P2 picks up fork 1, preventing P1 from taking a fork.
2. P3 picks up fork 2.
3. P4 picks up fork 3.
4. P5 picks up fork 4.
5. P5 picks up fork 5 and eats.
6. P5 puts down forks 4 and 5.
7. P4 picks up fork 4 and eats.

In short, this algorithm can result in only one philosopher eating at a given time, even when all five philosophers are hungry, despite the fact that there are more than enough forks for two philosophers to eat concurrently. Please think about ways of partitioning the Dining Philosophers Problem before reading further.

[Figure 5.4: Dining Philosophers Problem, Partitioned]

One approach is shown in Figure 5.4, which includes four philosophers rather than five to better illustrate the partition technique. Here the upper and rightmost philosophers share a pair of forks, while the lower and leftmost philosophers share another pair of forks. If all philosophers are simultaneously hungry, at least two will always be able to eat concurrently. In addition, as shown in the figure, the forks can now be bundled so that the pair are picked up and put down simultaneously, simplifying the acquisition and release algorithms.

Quick Quiz 5.1: Is there a better solution to the Dining Philosophers Problem?

This is an example of "horizontal parallelism" [Inm85] or "data parallelism", so named because there is no dependency among the pairs of philosophers. In a horizontally parallel data-processing system, a given item of data would be processed by only one of a replicated set of software components.

Quick Quiz 5.2: And in just what sense can this "horizontal parallelism" be said to be "horizontal"?
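The numbered-fork discipline of Figure 5.3 is easy to express in code. The following pthreads sketch illustrates only that textbook solution (lowest-numbered fork first, then the higher-numbered one); the think/eat placeholders, the iteration count, and the zero-based fork numbering are assumptions made for the example.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NR_PHILOSOPHERS 5

static pthread_mutex_t fork_mutex[NR_PHILOSOPHERS];   /* forks 0..4 */

static void *philosopher(void *arg)
{
        long id = (long)arg;
        int left = id;                                 /* adjacent forks */
        int right = (id + 1) % NR_PHILOSOPHERS;
        int first = left < right ? left : right;       /* lowest number first */
        int second = left < right ? right : left;
        int i;

        for (i = 0; i < 3; i++) {
                usleep(1000);                          /* think */
                pthread_mutex_lock(&fork_mutex[first]);
                pthread_mutex_lock(&fork_mutex[second]);
                printf("philosopher %ld eats with forks %d and %d\n",
                       id, first, second);
                pthread_mutex_unlock(&fork_mutex[second]);
                pthread_mutex_unlock(&fork_mutex[first]);
        }
        return NULL;
}

int main(void)
{
        pthread_t tid[NR_PHILOSOPHERS];
        long i;

        for (i = 0; i < NR_PHILOSOPHERS; i++)
                pthread_mutex_init(&fork_mutex[i], NULL);
        for (i = 0; i < NR_PHILOSOPHERS; i++)
                pthread_create(&tid[i], NULL, philosopher, (void *)i);
        for (i = 0; i < NR_PHILOSOPHERS; i++)
                pthread_join(tid[i], NULL);
        return 0;
}

Because every philosopher acquires the lower-numbered fork first, no cycle of waiters can form, so deadlock is impossible, but, as the event sequence above shows, this does nothing to prevent long stretches during which only one philosopher eats.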
5.1.2 Double-Ended Queue A double-ended queue is a data structure containing a list of elements that may be inserted or removed from either end [ Knu73 ] . It has been claimed that a lock-based implementation permitting concurrent operations on both ends of the double-ended queue is difficult [ Gro07 ]. This section shows how a partitioning design strategy can result in a reasonably simple implementation, looking at three general approaches in the following sections. 5.1.2.1 Left- and Right-Hand Locks One seemingly straightforward approach would be to use a doubly linked list with a left-hand lock for left-hand-end enqueue and dequeue operations along with a right-hand lock for right-hand-end operations, as shown in Figure  5.5.  However, the problem with this approach is that the two locks’ domains must overlap when there are fewer than 83 Figure 5.5: Double-Ended Queue With Left- and Right-Hand Locks Lock L DEQ L Lock R DEQ R Figure 5.6: Compound Double-Ended Queue four elements on the list. This overlap is due to the fact that removing any given element affects not only that element, but also its left- and right-hand neighbors. These domains are indicated by color in the figure, with blue with downward stripes indicating the domain of the left-hand lock, red with upward stripes indicating the domain of the right-hand lock, and purple (with no stripes) indicating overlapping domains. Although it is possible to create an algorithm that works this way, the fact that it has no fewer than five special cases should raise a big red flag, especially given that concurrent activity at the other end of the list can shift the queue from one special case to another at any time. It is far better to consider other designs. 5.1.2.2 Compound Double-Ended Queue One way of forcing non-overlapping lock domains is shown in Figure  5.6.  Two separate double-ended queues are run in tandem, each protected by its own lock. This means that elements must occasionally be shuttled from one of the double-ended queues to the other, in which case both locks must be held. A simple lock hierarchy may be used to avoid deadlock, for example, always acquiring the left-hand lock before acquiring the right-hand lock. This will be much simpler than applying two locks to the same double-ended queue, as we can unconditionally left-enqueue elements to the left-hand 84 Lock 0 DEQ 0DEQ 1 Lock 1 DEQ 2 Lock 2 DEQ 3 Lock 3 Index R Lock R Lock L Index L Figure 5.7: Hashed Double-Ended Queue queue and right-enqueue elements to the right-hand queue. The main complication arises when dequeuing from an empty queue, in which case it is necessary to: 1. If holding the right-hand lock, release it and acquire the left-hand lock. 2. Acquire the right-hand lock. 3. Rebalance the elements across the two queues. 4. Remove the required element if there is one. 5. Release both locks. Quick Quiz 5.3:  In this compound double-ended queue implementation, what should be done if the queue has become non-empty while releasing and reacquiring the lock? The rebalancing operation might well shuttle a given element back and forth between the two queues, wasting time and possibly requiring workload-dependent heuristics to obtain optimal performance. Although this might well be the best approach in some cases, it is interesting to try for an algorithm with greater determinism. 5.1.2.3 Hashed Double-Ended Queue One of the simplest and most effective ways to deterministically partition a data structure is to hash it. 
It is possible to trivially hash a double-ended queue by assigning each element a sequence number based on its position in the list, so that the first element left-enqueued into an empty queue is numbered zero and the first element right-enqueued into an empty queue is numbered one. A series of elements left-enqueued into an otherwise-idle queue would be assigned decreasing numbers (-1, -2, -3, ...), while a series of elements right-enqueued into an otherwise-idle queue would be assigned increasing numbers (2, 3, 4, ...). A key point is that it is not necessary to actually represent a given element's number, as this number will be implied by its position in the queue.

Given this approach, we assign one lock to guard the left-hand index, one to guard the right-hand index, and one lock for each hash chain. Figure 5.7 shows the resulting data structure given four hash chains. Note that the lock domains do not overlap, and that deadlock is avoided by acquiring the index locks before the chain locks, and by never acquiring more than one lock of each type (index or chain) at a time.

[Figure 5.8: Hashed Double-Ended Queue After Insertions]

Each hash chain is itself a double-ended queue, and in this example, each holds every fourth element. The uppermost portion of Figure 5.8 shows the state after a single element ("R1") has been right-enqueued, with the right-hand index having been incremented to reference hash chain 2. The middle portion of this same figure shows the state after three more elements have been right-enqueued. As you can see, the indexes are back to their initial states (see Figure 5.7), however, each hash chain is now non-empty. The lower portion of this figure shows the state after three additional elements have been left-enqueued and an additional element has been right-enqueued.

From the last state shown in Figure 5.8, a left-dequeue operation would return element "L-2" and leave the left-hand index referencing hash chain 2, which would then contain only a single element ("R2"). In this state, a left-enqueue running concurrently with a right-enqueue would result in lock contention, but the probability of such contention can be reduced to arbitrarily low levels by using a larger hash table.

[Figure 5.9: Hashed Double-Ended Queue With 12 Elements]

Figure 5.9 shows how 12 elements would be organized in a four-hash-bucket parallel double-ended queue. Each underlying single-lock double-ended queue holds a one-quarter slice of the full parallel double-ended queue.

1 struct pdeq {
2   spinlock_t llock;
3   int lidx;
4   spinlock_t rlock;
5   int ridx;
6   struct deq bkt[DEQ_N_BKTS];
7 };

Figure 5.10: Lock-Based Parallel Double-Ended Queue Data Structure

Figure 5.10 shows the corresponding C-language data structure, assuming an existing struct deq that provides a trivially locked double-ended-queue implementation. This data structure contains the left-hand lock on line 2, the left-hand index on line 3, the right-hand lock on line 4 (which is cache-aligned in the actual implementation), the right-hand index on line 5, and, finally, the hashed array of simple lock-based double-ended queues on line 6. A high-performance implementation would of course use padding or special alignment directives to avoid false sharing.
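The enqueue and dequeue functions shown next in Figure 5.11 advance these indexes using moveleft() and moveright() helpers, whose definitions are not reproduced in this excerpt. A plausible sketch, assuming that DEQ_N_BKTS is a power of two so that masking implements the wrap-around, is the following; it is a guess at their shape, not the actual lockhdeq.c code.

/* Hypothetical index-advance helpers for the hashed double-ended queue:
 * step one hash bucket to the left or right, wrapping around the
 * DEQ_N_BKTS buckets. */
#define DEQ_N_BKTS 4                    /* must be a power of two here */

static int moveleft(int idx)
{
        return (idx - 1) & (DEQ_N_BKTS - 1);
}

static int moveright(int idx)
{
        return (idx + 1) & (DEQ_N_BKTS - 1);
}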
Figure  5.11  ( lockhdeq.c ) shows the implementation of the enqueue and de- queue functions . 3 Discussion will focus on the left-hand operations, as the right-hand operations are trivially derived from them. Lines 1-13 show  pdeq_pop_l() , which left-dequeues and returns an element if  possible, returning  NULL  otherwise. Line 6 acquires the left-hand spinlock, and line 7 computes the index to be dequeued from. Line 8 dequeues the element, and, if line 9 finds the result to be non- NULL , line 10 records the new left-hand index. Either way, line 11 releases the lock, and, finally, line 12 returns the element if there was one, or NULL  otherwise. Lines 29-38 shows  pdeq_push_l() , which left-enqueues the specified element. Line 33 acquires the left-hand lock, and line 34 picks up the left-hand index. Line 35 left- enqueues the specified element onto the double-ended queue indexed by the left-hand index. Line 36 then updates the left-hand index and line 37 releases the lock. As noted earlier, the right-hand operations are completely analogous to their left- handed counterparts, so their analysis is left as an exercise for the reader. Quick Quiz 5.4:  Is the hashed double-ended queue a good solution? Why or why not? 3 One could easily create a polymorphic implementation in any number of languages, but doing so is left as an exercise for the reader. 87 1 struct cds_list_head  * pdeq_pop_l(struct pdeq  * d) 2 { 3 struct cds_list_head  * e; 4 int i; 5 6 spin_lock(&d->llock); 7 i = moveright(d->lidx); 8 e = deq_pop_l(&d->bkt[i]); 9 if (e != NULL) 10 d->lidx = i; 11 spin_unlock(&d->llock); 12 return e; 13 } 14 15 struct cds_list_head  * pdeq_pop_r(struct pdeq  * d) 16 { 17 struct cds_list_head  * e; 18 int i; 19 20 spin_lock(&d->rlock); 21 i = moveleft(d->ridx); 22 e = deq_pop_r(&d->bkt[i]); 23 if (e != NULL) 24 d->ridx = i; 25 spin_unlock(&d->rlock); 26 return e; 27 } 28 29 void pdeq_push_l(struct cds_list_head  * e, struct pdeq  * d) 30 { 31 int i; 32 33 spin_lock(&d->llock); 34 i = d->lidx; 35 deq_push_l(e, &d->bkt[i]); 36 d->lidx = moveleft(d->lidx); 37 spin_unlock(&d->llock); 38 } 39 40 void pdeq_push_r(struct cds_list_head  * e, struct pdeq  * d) 41 { 42 int i; 43 44 spin_lock(&d->rlock); 45 i = d->ridx; 46 deq_push_r(e, &d->bkt[i]); 47 d->ridx = moveright(d->ridx); 48 spin_unlock(&d->rlock); 49 } Figure 5.11: Lock-Based Parallel Double-Ended Queue Implementation 88 5.1.2.4 Compound Double-Ended Queue Revisited This section revisits the compound double-ended queue, using a trivial rebalancing scheme that moves all the elements from the non-empty queue to the now-empty queue. Quick Quiz 5.5:  Move  all  the elements to the queue that became empty? In what possible universe is this brain-dead solution in any way optimal??? In contrast to the hashed implementation presented in the previous section, the compound implementation will build on a sequential implementation of a double-ended queue that uses neither locks nor atomic operations. Figure  5.12  shows the implementation. Unlike the hashed implementation, this compound implementation is asymmetric, so that we must consider the  pdeq_pop_  l()  and  pdeq_pop_r()  implementations separately. Quick Quiz 5.6:  Why can’t the compound parallel double-ended queue implemen- tation be symmetric? The  pdeq_pop_l()  implementation is shown on lines 1-16 of the figure. Line 5 acquires the left-hand lock, which line 14 releases. 
Line 6 attempts to left-dequeue an element from the left-hand underlying double-ended queue, and, if successful, skips lines 8-13 to simply return this element. Otherwise, line 8 acquires the right-hand lock, line 9 left-dequeues an element from the right-hand queue, and line 10 moves any remaining elements on the right-hand queue to the left-hand queue, line 11 initializes the right-hand queue, and line 12 releases the right-hand lock. The element, if any, that was dequeued on line 10 will be returned. The  pdeq_pop_r()  implementation is shown on lines 18-38 of the figure. As before, line 22 acquires the right-hand lock (and line 36 releases it), and line 23 attempts to right-dequeue an element from the right-hand queue, and, if successful, skips lines 24- 35 to simply return this element. However, if line 24 determines that there was no element to dequeue, line 25 releases the right-hand lock and lines 26-27 acquire both locks in the proper order. Line 28 then attempts to right-dequeue an element from the right-hand list again, and if line 29 determines that this second attempt has failed, line 30 right-dequeues an element from the left-hand queue (if there is one available), line 31 moves any remaining elements from the left-hand queue to the right-hand queue, and line 32 initializes the left-hand queue. Either way, line 34 releases the left-hand lock. Quick Quiz 5.7:  Why is it necessary to retry the right-dequeue operation on line 28 of Figure  5.12 ? Quick Quiz 5.8:  Surely the left-hand lock must  sometimes  be available!!! So why is it necessary that line 25 of Figure  5.12  unconditionally release the right-hand lock? The  pdeq_push_l()  implementation is shown on lines 40-47 of Figure  5.12 . Line 44 acquires the left-hand spinlock, line 45 left-enqueues the element onto the left-hand queue, and finally line 46 releases the lock. The  pdeq_enqueue_r() implementation (shown on lines 49-56) is quite similar. 5.1.2.5 Double-Ended Queue Discussion The compound implementation is somewhat more complex than the hashed variant presented in Section  5.1.2.3,  but is still reasonably simple. Of course, a more intelligent rebalancing scheme could be arbitrarily complex, but the simple scheme shown here has been shown to perform well compared to software alternatives  [ DCW + 11 ] and even compared to algorithms using hardware assist [ DLM + 10 ]. Nevertheless, the best we can hope for from such a scheme is 2x scalability, as at most two threads can be holding the dequeue’s locks concurrently. 
This limitation also applies to algorithms based on 89 1 struct cds_list_head  * pdeq_pop_l(struct pdeq  * d) 2 { 3 struct cds_list_head  * e; 4 5 spin_lock(&d->llock); 6 e = deq_pop_l(&d->ldeq); 7 if (e == NULL) { 8 spin_lock(&d->rlock); 9 e = deq_pop_l(&d->rdeq); 10 cds_list_splice(&d->rdeq.chain, &d->ldeq.chain); 11 CDS_INIT_LIST_HEAD(&d->rdeq.chain); 12 spin_unlock(&d->rlock); 13 } 14 spin_unlock(&d->llock); 15 return e; 16 } 17 18 struct cds_list_head  * pdeq_pop_r(struct pdeq  * d) 19 { 20 struct cds_list_head  * e; 21 22 spin_lock(&d->rlock); 23 e = deq_pop_r(&d->rdeq); 24 if (e == NULL) { 25 spin_unlock(&d->rlock); 26 spin_lock(&d->llock); 27 spin_lock(&d->rlock); 28 e = deq_pop_r(&d->rdeq); 29 if (e == NULL) { 30 e = deq_pop_r(&d->ldeq); 31 cds_list_splice(&d->ldeq.chain, &d->rdeq.chain); 32 CDS_INIT_LIST_HEAD(&d->ldeq.chain); 33 } 34 spin_unlock(&d->llock); 35 } 36 spin_unlock(&d->rlock); 37 return e; 38 } 39 40 void pdeq_push_l(struct cds_list_head  * e, struct pdeq  * d) 41 { 42 int i; 43 44 spin_lock(&d->llock); 45 deq_push_l(e, &d->ldeq); 46 spin_unlock(&d->llock); 47 } 48 49 void pdeq_push_r(struct cds_list_head  * e, struct pdeq  * d) 50 { 51 int i; 52 53 spin_lock(&d->rlock); 54 deq_push_r(e, &d->rdeq); 55 spin_unlock(&d->rlock); 56 } Figure 5.12: Compound Parallel Double-Ended Queue Implementation 90 non-blocking synchronization, such as the compare-and-swap-based dequeue algorithm of Michael [ Mic03] . 4 In fact, as noted by Dice et al. [ DLM + 10 ], an unsynchronized single-threaded double-ended queue significantly outperforms any of the parallel implementations they studied. Therefore, the key point is that there can be significant overhead enqueuing to or dequeuing from a shared queue, regardless of implementation. This should come as no surprise given the material in Chapter  2,  given the strict FIFO nature of these queues. Furthermore, these strict FIFO queues are strictly FIFO only with respect to  lin- earization points  [ HW90 ] 5 that are not visible to the caller, in fact, in these examples, the linearization points are buried in the lock-based critical sections. These queues are not strictly FIFO with respect to (say) the times at which the individual operations started  [ HKLP12 ]. This indicates that the strict FIFO property is not all that valuable in concurrent programs, and in fact, Kirsch et al. present less-strict queues that provide improved performance and scalability  [ KLP12 ] . 6 All that said, if you are pushing all the data used by your concurrent program through a single queue, you really need to rethink your overall design. 5.1.3 Partitioning Example Discussion The optimal solution to the dining philosophers problem given in the answer to the Quick Quiz in Section  5.1.1  is an excellent example of “horizontal parallelism” or “data parallelism”. The synchronization overhead in this case is nearly (or even exactly) zero. In contrast, the double-ended queue implementations are examples of “vertical parallelism” or “pipelining”, given that data moves from one thread to another. The tighter coordination required for pipelining in turn requires larger units of work to obtain a given level of efficiency. Quick Quiz 5.9:  The tandem double-ended queue runs about twice as fast as the hashed double-ended queue, even when I increase the size of the hash table to an insanely large number. Why is that? Quick Quiz 5.10:  Is there a significantly better way of handling concurrency for double-ended queues? 
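For readers who would like to experiment with the double-ended queues discussed above, the following sketch shows how a caller might exercise the compound implementation of Figure 5.12. The pdeq_init() initializer, the element type, and the container_of_() macro are illustrative assumptions, not code from the CodeSamples tree, which may differ in detail.

#include <stddef.h>
#include <stdio.h>

/* Recover the enclosing element from its embedded cds_list_head. */
#define container_of_(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct myelem {
        int value;
        struct cds_list_head list;   /* linkage field expected by the deque */
};

void pdeq_example(struct pdeq *d)
{
        struct myelem a = { .value = 42 };
        struct cds_list_head *p;

        pdeq_init(d);                /* hypothetical initializer */
        pdeq_push_l(&a.list, d);     /* enqueue on the left */
        p = pdeq_pop_r(d);           /* dequeue from the right */
        if (p != NULL)
                printf("dequeued %d\n",
                       container_of_(p, struct myelem, list)->value);
}

Callers enqueue and dequeue pointers to the embedded list heads, recovering the enclosing element afterwards, exactly as the Linux-kernel-style list primitives in the figures do.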
These two examples show just how powerful partitioning can be in devising parallel algorithms. Section  5.3.5  looks briefly at a third example, matrix multiply. However, all three of these examples beg for more and better design criteria for parallel programs, a topic taken up in the next section. 5.2 Design Criteria One way to obtain the best performance and scalability is to simply hack away until you converge on the best possible parallel program. Unfortunately, if your program is other than microscopically tiny, the space of possible parallel programs is so huge that 4 This paper is interesting in that it showed that special double-compare-and-swap (DCAS) instructions are not needed for lock-free implementations of double-ended queues. Instead, the common compare-and-swap (e.g., x86 cmpxchg) suffices. 5 In short, a linearization point is a single point within a given function where that function can be said to have taken effect. In this lock-based implementation, the linearization points can be said to be anywhere within the critical section that does the work. 6 Nir Shavit produced relaxed stacks for roughly the same reasons [ Sha11 ] . This situation leads some to believe that the linearization points are useful to theorists rather than developers, and leads others to wonder to what extent the designers of such data structures and algorithms were considering the needs of their users. 91 convergence is not guaranteed in the lifetime of the universe. Besides, what exactly is the “best possible parallel program”? After all, Section  1.2  called out no fewer than three parallel-programming goals of performance, productivity, and generality, and the best possible performance will likely come at a cost in terms of productivity and generality. We clearly need to be able to make higher-level choices at design time in order to arrive at an acceptably good parallel program before that program becomes obsolete. However, more detailed design criteria are required to actually produce a real-world design, a task taken up in this section. This being the real world, these criteria often conflict to a greater or lesser degree, requiring that the designer carefully balance the resulting tradeoffs. As such, these criteria may be thought of as the “forces” acting on the design, with particularly good tradeoffs between these forces being called “design patterns”  [ Ale79 , GHJV95] . The design criteria for attaining the three parallel-programming goals are speedup, contention, overhead, read-to-write ratio, and complexity: Speedup:  As noted in Section  1.2,  increased performance is the major reason to go to all of the time and trouble required to parallelize it. Speedup is defined to be the ratio of the time required to run a sequential version of the program to the time required to run a parallel version. Contention:  If more CPUs are applied to a parallel program than can be kept busy by that program, the excess CPUs are prevented from doing useful work by contention. This may be lock contention, memory contention, or a host of other performance killers. Work-to-Synchronization Ratio:  A uniprocessor, single-threaded, non-preemptible, and non-interruptible 7 version of a given parallel program would not need any synchronization primitives. 
Therefore, any time consumed by these primitives (including communication cache misses as well as message latency, locking primitives, atomic instructions, and memory barriers) is overhead that does not contribute directly to the useful work that the program is intended to accomplish. Note that the important measure is the relationship between the synchroniza- tion overhead and the overhead of the code in the critical section, with larger critical sections able to tolerate greater synchronization overhead. The work-to- synchronization ratio is related to the notion of synchronization efficiency. Read-to-Write Ratio:  A data structure that is rarely updated may often be replicated rather than partitioned, and furthermore may be protected with asymmetric syn- chronization primitives that reduce readers’ synchronization overhead at the expense of that of writers, thereby reducing overall synchronization overhead. Corresponding optimizations are possible for frequently updated data structures, as discussed in Chapter  4. Complexity:  A parallel program is more complex than an equivalent sequential pro- gram because the parallel program has a much larger state space than does the sequential program, although these larger state spaces can in some cases be easily understood given sufficient regularity and structure. A parallel programmer must 7 Either by masking interrupts or by being oblivious to them. 92 consider synchronization primitives, messaging, locking design, critical-section identification, and deadlock in the context of this larger state space. This greater complexity often translates to higher development and maintenance costs. Therefore, budgetary constraints can limit the number and types of modifi- cations made to an existing program, since a given degree of speedup is worth only so much time and trouble. Worse yet, added complexity can actually  reduce performance and scalability. Therefore, beyond a certain point, there may be potential sequential optimizations that are cheaper and more effective than parallelization. As noted in Section  1.2.1, parallelization is but one performance optimization of many, and is furthermore an optimization that applies most readily to CPU-based bottlenecks. These criteria will act together to enforce a maximum speedup. The first three criteria are deeply interrelated, so the remainder of this section analyzes these interrelationships . 8 Note that these criteria may also appear as part of the requirements specification. For example, speedup may act as a relative desideratum (“the faster, the better”) or as an absolute requirement of the workload (“the system must support at least 1,000,000 web hits per second”). Classic design pattern languages describe relative desiderata as forces and absolute requirements as context. An understanding of the relationships between these design criteria can be very helpful when identifying appropriate design tradeoffs for a parallel program. 1.  The less time a program spends in critical sections, the greater the potential speedup. This is a consequence of Amdahl’s Law  [ Amd67 ] and of the fact that only one CPU may execute within a given critical section at a given time. More specifically, the fraction of time that the program spends in a given exclusive critical section must be much less than the reciprocal of the number of CPUs for the actual speedup to approach the number of CPUs. 
For example, a program running on 10 CPUs must spend much less than one tenth of its time in the most-restrictive critical section if it is to scale at all well.
2. Contention effects will consume the excess CPU and/or wallclock time should the actual speedup be less than the number of available CPUs. The larger the gap between the number of CPUs and the actual speedup, the less efficiently the CPUs will be used. Similarly, the greater the desired efficiency, the smaller the achievable speedup.
3. If the available synchronization primitives have high overhead compared to the critical sections that they guard, the best way to improve speedup is to reduce the number of times that the primitives are invoked (perhaps by batching critical sections, using data ownership, using asymmetric primitives (see Section 8), or by moving toward a more coarse-grained design such as code locking).
4. If the critical sections have high overhead compared to the primitives guarding them, the best way to improve speedup is to increase parallelism by moving to reader/writer locking, data locking, asymmetric primitives, or data ownership.
5. If the critical sections have high overhead compared to the primitives guarding them and the data structure being guarded is read much more often than modified, the best way to increase parallelism is to move to reader/writer locking or asymmetric primitives.
6. Many changes that improve SMP performance, for example, reducing lock contention, also improve real-time latencies [McK05d].
8 A real-world parallel system will be subject to many additional design criteria, such as data-structure layout, memory size, memory-hierarchy latencies, bandwidth limitations, and I/O issues.
Quick Quiz 5.11: Don't all these problems with critical sections mean that we should just always use non-blocking synchronization [Her90], which don't have critical sections?
5.3 Synchronization Granularity
Figure 5.13 gives a pictorial view of different levels of synchronization granularity, each of which is described in one of the following sections. These sections focus primarily on locking, but similar granularity issues arise with all forms of synchronization.
Figure 5.13: Design Patterns and Lock Granularity (a diagram relating the Sequential Program, Code Locking, Data Locking, and Data Ownership patterns via Partition, Batch, Own, and Disown transitions)
5.3.1 Sequential Program
If the program runs fast enough on a single processor, and has no interactions with other processes, threads, or interrupt handlers, you should remove the synchronization primitives and spare yourself their overhead and complexity. Some years back, there were those who would argue that Moore's Law would eventually force all programs into this category. However, as can be seen in Figure 5.14, the exponential increase in single-threaded performance halted in about 2003. Therefore, increasing performance will increasingly require parallelism.9 The debate as to whether this new trend will result in single chips with thousands of CPUs will not be settled soon, but given that Paul is typing this sentence on a dual-core laptop, the age of SMP does seem to be upon us. It is also important to note that Ethernet bandwidth is continuing to grow, as shown in Figure 5.15.
9 This plot shows clock frequencies for newer CPUs theoretically capable of retiring one or more instructions per clock, and MIPS for older CPUs requiring multiple clocks to execute even the simplest instruction. The reason for taking this approach is that the newer CPUs' ability to retire multiple instructions per clock is typically limited by memory-system performance.
This growth will motivate multithreaded servers in order to handle the communications load.
Figure 5.14: MIPS/Clock-Frequency Trend for Intel CPUs (log-scale plot of CPU clock frequency/MIPS versus year, roughly 1975-2015)
Figure 5.15: Ethernet Bandwidth vs. Intel x86 CPU Performance (log-scale plot of relative performance versus year for Ethernet and for x86 CPUs)
Please note that this does not mean that you should code each and every program in a multi-threaded manner. Again, if a program runs quickly enough on a single processor, spare yourself the overhead and complexity of SMP synchronization primitives. The simplicity of the hash-table lookup code in Figure 5.16 underscores this point.10 A key point is that speedups due to parallelism are normally limited to the number of CPUs. In contrast, speedups due to sequential optimizations, for example, careful choice of data structure, can be arbitrarily large.
10 The examples in this section are taken from Hart et al. [HMB06], adapted for clarity by gathering related code from multiple files.
1 struct hash_table
2 {
3   long nbuckets;
4   struct node **buckets;
5 };
6
7 typedef struct node {
8   unsigned long key;
9   struct node *next;
10 } node_t;
11
12 int hash_search(struct hash_table *h, long key)
13 {
14   struct node *cur;
15
16   cur = h->buckets[key % h->nbuckets];
17   while (cur != NULL) {
18     if (cur->key >= key) {
19       return (cur->key == key);
20     }
21     cur = cur->next;
22   }
23   return 0;
24 }
Figure 5.16: Sequential-Program Hash Table Search
On the other hand, if you are not in this happy situation, read on!
5.3.2 Code Locking
Code locking is quite simple due to the fact that it uses only global locks.11 It is especially easy to retrofit an existing program to use code locking in order to run it on a multiprocessor. If the program has only a single shared resource, code locking will even give optimal performance. However, many of the larger and more complex programs require much of the execution to occur in critical sections, which in turn causes code locking to sharply limit their scalability.
11 If your program instead has locks in data structures, or, in the case of Java, uses classes with synchronized instances, you are instead using "data locking", described in Section 5.3.3.
Therefore, you should use code locking on programs that spend only a small fraction of their execution time in critical sections or from which only modest scaling is required. In these cases, code locking will provide a relatively simple program that is very similar to its sequential counterpart, as can be seen in Figure 5.17. However, note that the simple return of the comparison in hash_search() in Figure 5.16 has now become three statements due to the need to release the lock before returning.
Unfortunately, code locking is particularly prone to "lock contention", where multiple CPUs need to acquire the lock concurrently. SMP programmers who have taken care of groups of small children (or groups of older people who are acting like children)
will immediately recognize the danger of having only one of something, as illustrated in Figure 5.18. One solution to this problem, named "data locking", is described in the next section.
1 spinlock_t hash_lock;
2
3 struct hash_table
4 {
5   long nbuckets;
6   struct node **buckets;
7 };
8
9 typedef struct node {
10   unsigned long key;
11   struct node *next;
12 } node_t;
13
14 int hash_search(struct hash_table *h, long key)
15 {
16   struct node *cur;
17   int retval;
18
19   spin_lock(&hash_lock);
20   cur = h->buckets[key % h->nbuckets];
21   while (cur != NULL) {
22     if (cur->key >= key) {
23       retval = (cur->key == key);
24       spin_unlock(&hash_lock);
25       return retval;
26     }
27     cur = cur->next;
28   }
29   spin_unlock(&hash_lock);
30   return 0;
31 }
Figure 5.17: Code-Locking Hash Table Search
Figure 5.18: Lock Contention
5.3.3 Data Locking
Many data structures may be partitioned, with each partition of the data structure having its own lock. Then the critical sections for each part of the data structure can execute in parallel, although only one instance of the critical section for a given part could be executing at a given time. You should use data locking when contention must be reduced, and where synchronization overhead is not limiting speedups. Data locking reduces contention by distributing the instances of the overly-large critical section across multiple data structures, for example, maintaining per-hash-bucket critical sections in a hash table, as shown in Figure 5.19. The increased scalability again results in a slight increase in complexity in the form of an additional data structure, the struct bucket.
In contrast with the contentious situation shown in Figure 5.18, data locking helps promote harmony, as illustrated by Figure 5.20, and in parallel programs this almost always translates into increased performance and scalability. For this reason, data locking was heavily used by Sequent in both its DYNIX and DYNIX/ptx operating systems [BK85, Inm85, Gar90, Dov90, MD92, MG92, MS93].
However, as those who have taken care of small children can again attest, even providing enough to go around is no guarantee of tranquillity. The analogous situation can arise in SMP programs. For example, the Linux kernel maintains a cache of files and directories (called "dcache"). Each entry in this cache has its own lock, but the entries corresponding to the root directory and its direct descendants are much more likely to be traversed than are more obscure entries. This can result in many CPUs contending for the locks of these popular entries, resulting in a situation not unlike that shown in Figure 5.21.
In many cases, algorithms can be designed to reduce the instance of data skew, and in some cases eliminate it entirely (as appears to be possible with the Linux kernel's dcache [MSS04]). Data locking is often used for partitionable data structures such as hash tables, as well as in situations where multiple entities are each represented by an instance of a given data structure. The task list in version 2.6.17 of the Linux kernel is an example of the latter, each task structure having its own proc_lock.
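As a companion to the data-locked search of Figure 5.19, an insertion function using the same per-bucket locking discipline might look like the following sketch. The hash_table, bucket, and node types are those of Figure 5.19, and because the searches in Figures 5.16-5.19 stop at the first key greater than or equal to the one sought, the sketch keeps each bucket's list sorted. Duplicate-key handling and memory allocation are omitted, and this code is an illustration rather than part of the CodeSamples tree.

/* Insert np into the bucket selected by np->key, holding only that
 * bucket's lock.  Types as in Figure 5.19. */
void hash_insert(struct hash_table *h, node_t *np)
{
        struct bucket *bp;
        node_t **nextp;

        bp = h->buckets[np->key % h->nbuckets];
        spin_lock(&bp->bucket_lock);   /* contention limited to this bucket */
        nextp = &bp->list_head;
        while (*nextp != NULL && (*nextp)->key < np->key)
                nextp = &(*nextp)->next;   /* keep the list sorted by key,
                                            * as hash_search() expects */
        np->next = *nextp;
        *nextp = np;
        spin_unlock(&bp->bucket_lock);
}

Insertions into different buckets proceed in parallel, which is precisely the point of data locking.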
A key challenge with data locking on dynamically allocated structures is ensuring that the structure remains in existence while the lock is being acquired. The code in Figure  5.19  finesses this challenge by placing the locks in the statically allocated hash buckets, which are never freed. However, this trick would not work if the hash table were resizeable, so that the locks were now dynamically allocated. In this case, there would need to be some means to prevent the hash bucket from being freed during the time that its lock was being acquired. Quick Quiz 5.12:  What are some ways of preventing a structure from being freed while its lock is being acquired? 5.3.4 Data Ownership Data ownership partitions a given data structure over the threads or CPUs, so that each thread/CPU accesses its subset of the data structure without any synchronization overhead whatsoever. However, if one thread wishes to access some other thread’s data, the first thread is unable to do so directly. Instead, the first thread must communicate with the second thread, so that the second thread performs the operation on behalf of  98 1 struct hash_table 2 { 3 long nbuckets; 4 struct bucket  ** buckets; 5 }; 6 7 struct bucket { 8 spinlock_t bucket_lock; 9 node_t  * list_head; 10 }; 11 12 typedef struct node { 13 unsigned long key; 14 struct node  * next; 15 } node_t; 16 17 int hash_search(struct hash_table  * h, long key) 18 { 19 struct bucket  * bp; 20 struct node  * cur; 21 int retval; 22 23 bp = h->buckets[key % h->nbuckets]; 24 spin_lock(&bp->bucket_lock); 25 cur = bp->list_head; 26 while (cur != NULL) { 27 if (cur->key >= key) { 28 retval = (cur->key == key); 29 spin_unlock(&bp->bucket_lock); 30 return retval; 31 } 32 cur = cur->next; 33 } 34 spin_unlock(&bp->bucket_lock); 35 return 0; 36 } Figure 5.19: Data-Locking Hash Table Search 99 Figure 5.20: Data Locking the first, or, alternatively, migrates the data to the first thread. Data ownership might seem arcane, but it is used very frequently: 1.  Any variables accessible by only one CPU or thread (such as  auto  variables in C and C++) are owned by that CPU or process. 2.  An instance of a user interface owns the corresponding user’s context. It is very common for applications interacting with parallel database engines to be written as if they were entirely sequential programs. Such applications own the user interface and his current action. Explicit parallelism is thus confined to the database engine itself. 3.  Parametric simulations are often trivially parallelized by granting each thread ownership of a particular region of the parameter space. There are also computing frameworks designed for this type of problem [ UoC08 ]. If there is significant sharing, communication between the threads or CPUs can result in significant complexity and overhead. Furthermore, if the most-heavily used data happens to be that owned by a single CPU, that CPU will be a “hot spot”, sometimes with results resembling that shown in Figure  5.21.  However, in situations where no sharing is required, data ownership achieves ideal performance, and with code that can be as simple as the sequential-program case shown in Figure  5.16.  Such situations are often referred to as “embarrassingly parallel”, and, in the best case, resemble the situation previously shown in Figure  5.20. Another important instance of data ownership occurs when the data is read-only, in which case, all threads can “own” it via replication. Data ownership will be presented in more detail in Chapter  7. 
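As a small concrete illustration of data ownership (the parametric-simulation case in item 3 above), the following sketch gives each thread sole ownership of a contiguous slice of the parameter space, so the threads need no synchronization at all until they are joined. The evaluate() function and the parameter counts are hypothetical, and this book's own CodeSamples use its own thread API rather than raw pthreads.

#include <pthread.h>

#define NPARAMS  1000000
#define NTHREADS 4

static double results[NPARAMS];      /* each slot written by exactly one thread */

static double evaluate(int i)        /* hypothetical per-parameter computation */
{
        return i * 0.5;
}

static void *worker(void *arg)
{
        long id = (long)arg;
        int lo = id * (NPARAMS / NTHREADS);
        int hi = (id + 1) * (NPARAMS / NTHREADS);

        for (int i = lo; i < hi; i++)
                results[i] = evaluate(i);   /* touches only this thread's slice */
        return NULL;
}

int main(void)
{
        pthread_t tid[NTHREADS];

        for (long i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, worker, (void *)i);
        for (long i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);
        return 0;
}

Because no two threads ever touch the same element of results[], there is no locking, no atomic operations, and no cache-line bouncing in the main computation.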
Figure 5.21: Data Locking and Skew
5.3.5 Locking Granularity and Performance
This section looks at locking granularity and performance from a mathematical synchronization-efficiency viewpoint. Readers who are uninspired by mathematics might choose to skip this section.
The approach is to use a crude queueing model for the efficiency of synchronization mechanisms that operate on a single shared global variable, based on an M/M/1 queue. M/M/1 queueing models are based on an exponentially distributed "inter-arrival rate" λ and an exponentially distributed "service rate" µ. The inter-arrival rate λ can be thought of as the average number of synchronization operations per second that the system would process if the synchronization were free; in other words, λ is an inverse measure of the overhead of each non-synchronization unit of work. For example, if each unit of work was a transaction, and if each transaction took one millisecond to process, excluding synchronization overhead, then λ would be 1,000 transactions per second.
The service rate µ is defined similarly, but for the average number of synchronization operations per second that the system would process if the overhead of each transaction was zero, and ignoring the fact that CPUs must wait on each other to complete their synchronization operations; in other words, µ is an inverse measure of the per-operation synchronization overhead in the absence of contention. For example, suppose that each synchronization operation involves an atomic increment instruction, and that a computer system is able to do an atomic increment every 25 nanoseconds on each CPU to a private variable.12 The value of µ is therefore about 40,000,000 atomic increments per second.
12 Of course, if there are 8 CPUs all incrementing the same shared variable, then each CPU must wait at least 175 nanoseconds for each of the other CPUs to do its increment before consuming an additional 25 nanoseconds doing its own increment. In actual fact, the wait will be longer due to the need to move the variable from one CPU to another.
Of course, the value of λ increases with increasing numbers of CPUs, as each CPU is capable of processing transactions independently (again, ignoring synchronization):
$\lambda = n \lambda_0$   (5.1)
where n is the number of CPUs and λ0 is the transaction-processing capability of a single CPU. Note that the expected time for a single CPU to execute a single transaction is 1/λ0.
Because the CPUs have to "wait in line" behind each other to get their chance to increment the single shared variable, we can use the M/M/1 queueing-model expression for the expected total waiting time:
$T = \frac{1}{\mu - \lambda}$   (5.2)
Substituting the above value of λ:
$T = \frac{1}{\mu - n \lambda_0}$   (5.3)
Now, the efficiency is just the ratio of the time required to process a transaction in the absence of synchronization (1/λ0) to the time required including synchronization (T + 1/λ0):
$e = \frac{1/\lambda_0}{T + 1/\lambda_0}$   (5.4)
Substituting the above value for T and simplifying:
$e = \frac{\frac{\mu}{\lambda_0} - n}{\frac{\mu}{\lambda_0} - (n - 1)}$   (5.5)
But the value of µ/λ0 is just the ratio of the time required to process the transaction (absent synchronization overhead) to that of the synchronization overhead itself (absent contention). If we call this ratio f, we have:
$e = \frac{f - n}{f - (n - 1)}$   (5.6)
Figure 5.22 plots the synchronization efficiency e as a function of the number of CPUs/threads n for a few values of the overhead ratio f.
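To get a feel for Equation 5.6, it helps to evaluate it at a few points; the specific values of f and n below are chosen purely for illustration:
$f = 100,\ n = 10:\quad e = \frac{100 - 10}{100 - 9} = \frac{90}{91} \approx 0.99$
$f = 100,\ n = 50:\quad e = \frac{50}{51} \approx 0.98$
$f = 100,\ n = 90:\quad e = \frac{10}{11} \approx 0.91$
$f = 100,\ n = 99:\quad e = \frac{1}{2} = 0.5$
Efficiency thus stays high as long as n is small compared to f, but collapses as n approaches f, which is the sharp drop-off visible in the traces of Figure 5.22.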
For example, again using the 25-nanosecond atomic increment, the f = 10 line corresponds to each CPU attempting an atomic increment every 250 nanoseconds, and the f = 100 line corresponds to each CPU attempting an atomic increment every 2.5 microseconds, which in turn corresponds to several thousand instructions. Given that each trace drops off sharply with increasing numbers of CPUs or threads, we can conclude that synchronization mechanisms based on atomic manipulation of a single global shared variable will not scale well if used heavily on current commodity hardware. This is a mathematical depiction of the forces leading to the parallel counting algorithms that were discussed in Chapter 4.
Figure 5.22: Synchronization Efficiency (synchronization efficiency versus number of CPUs/threads, with one trace for each of f = 10, 25, 50, 75, and 100)
The concept of efficiency is useful even in cases having little or no formal synchronization. Consider for example a matrix multiply, in which the columns of one matrix are multiplied (via "dot product") by the rows of another, resulting in an entry in a third matrix. Because none of these operations conflict, it is possible to partition the columns of the first matrix among a group of threads, with each thread computing the corresponding columns of the result matrix. The threads can therefore operate entirely independently, with no synchronization overhead whatsoever, as is done in matmul.c. One might therefore expect a parallel matrix multiply to have a perfect efficiency of 1.0.
However, Figure 5.23 tells a different story, especially for a 64-by-64 matrix multiply, which never gets above an efficiency of about 0.7, even when running single-threaded. The 512-by-512 matrix multiply's efficiency is measurably less than 1.0 on as few as 10 threads, and even the 1024-by-1024 matrix multiply deviates noticeably from perfection at a few tens of threads. Nevertheless, this figure clearly demonstrates the performance and scalability benefits of batching: If you must incur synchronization overhead, you may as well get your money's worth.
Figure 5.23: Matrix Multiply Efficiency (matrix multiply efficiency versus number of CPUs/threads, with one trace for each of the 64, 128, 256, 512, and 1024 matrix sizes)
Quick Quiz 5.13: How can a single-threaded 64-by-64 matrix multiply possibly have an efficiency of less than 1.0? Shouldn't all of the traces in Figure 5.23 have efficiency of exactly 1.0 when running on only one thread?
Given these inefficiencies, it is worthwhile to look into more-scalable approaches such as the data locking described in Section 5.3.3 or the parallel-fastpath approach discussed in the next section.
Quick Quiz 5.14: How are data-parallel techniques going to help with matrix multiply? It is already data parallel!!!
5.4 Parallel Fastpath
Fine-grained (and therefore usually higher-performance) designs are typically more complex than are coarser-grained designs. In many cases, most of the overhead is incurred by a small fraction of the code [Knu73]. So why not focus effort on that small fraction?
This is the idea behind the parallel-fastpath design pattern: aggressively parallelize the common-case code path without incurring the complexity that would be required to aggressively parallelize the entire algorithm. You must understand not only the specific algorithm you wish to parallelize, but also the workload that the algorithm will be subjected to. Great creativity and design effort are often required to construct a parallel fastpath.
Parallel fastpath combines different patterns (one for the fastpath, one elsewhere) and is therefore a template pattern. The following instances of parallel fastpath occur often enough to warrant their own patterns, as depicted in Figure 5.24:
1. Reader/Writer Locking (described below in Section 5.4.1).
2. Read-copy update (RCU), which may be used as a high-performance replacement for reader/writer locking, is introduced in Section 8.3, and will not be discussed further in this chapter.
3. Hierarchical Locking ([McK96a]), which is touched upon in Section 5.4.2.
4. Resource Allocator Caches ([McK96a, MS93]). See Section 5.4.3 for more detail.
Figure 5.24: Parallel-Fastpath Design Patterns (a diagram grouping Reader/Writer Locking, RCU, Hierarchical Locking, and Allocator Caches under the Parallel Fastpath pattern)
5.4.1 Reader/Writer Locking
If synchronization overhead is negligible (for example, if the program uses coarse-grained parallelism with large critical sections), and if only a small fraction of the critical sections modify data, then allowing multiple readers to proceed in parallel can greatly increase scalability. Writers exclude both readers and each other. There are many implementations of reader-writer locking, including the POSIX implementation described in Section 3.2.4. Figure 5.25 shows how the hash search might be implemented using reader-writer locking.
Reader/writer locking is a simple instance of asymmetric locking. Snaman [ST87] describes a more ornate six-mode asymmetric locking design used in several clustered systems. Locking in general and reader-writer locking in particular is described extensively in Chapter 6.
5.4.2 Hierarchical Locking
The idea behind hierarchical locking is to have a coarse-grained lock that is held only long enough to work out which fine-grained lock to acquire. Figure 5.26 shows how our hash-table search might be adapted to do hierarchical locking, but also shows the great weakness of this approach: we have paid the overhead of acquiring a second lock, but we only hold it for a short time. In this case, the data-locking approach would be simpler and would likely perform better.
Quick Quiz 5.15: In what situation would hierarchical locking work well?
5.4.3 Resource Allocator Caches
This section presents a simplified schematic of a parallel fixed-block-size memory allocator. More detailed descriptions may be found in the literature [MG92, MS93, BA01, MSK01] or in the Linux kernel [Tor03c].
5.4.3.1 Parallel Resource Allocation Problem
The basic problem facing a parallel memory allocator is the tension between the need to provide extremely fast memory allocation and freeing in the common case and the need to efficiently distribute memory in the face of unfavorable allocation and freeing patterns.
To see this tension, consider a straightforward application of data ownership to this problem: simply carve up memory so that each CPU owns its share.
For example, suppose that a system with two CPUs has two gigabytes of memory (such as the one that 105 1 rwlock_t hash_lock; 2 3 struct hash_table 4 { 5 long nbuckets; 6 struct node  ** buckets; 7 }; 8 9 typedef struct node { 10 unsigned long key; 11 struct node  * next; 12 } node_t; 13 14 int hash_search(struct hash_table  * h, long key) 15 { 16 struct node  * cur; 17 int retval; 18 19 read_lock(&hash_lock); 20 cur = h->buckets[key % h->nbuckets]; 21 while (cur != NULL) { 22 if (cur->key >= key) { 23 retval = (cur->key == key); 24 read_unlock(&hash_lock); 25 return retval; 26 } 27 cur = cur->next; 28 } 29 read_unlock(&hash_lock); 30 return 0; 31 } Figure 5.25: Reader-Writer-Locking Hash Table Search I am typing on right now). We could simply assign each CPU one gigabyte of memory, and allow each CPU to access its own private chunk of memory, without the need for locking and its complexities and overheads. Unfortunately, this simple scheme breaks down if an algorithm happens to have CPU 0 allocate all of the memory and CPU 1 the free it, as would happen in a simple producer-consumer workload. The other extreme, code locking, suffers from excessive lock contention and over- head [ MS93 ]. 5.4.3.2 Parallel Fastpath for Resource Allocation The commonly used solution uses parallel fastpath with each CPU owning a modest cache of blocks, and with a large code-locked shared pool for additional blocks. To prevent any given CPU from monopolizing the memory blocks, we place a limit on the number of blocks that can be in each CPU’s cache. In a two-CPU system, the flow of  memory blocks will be as shown in Figure  5.27 : when a given CPU is trying to free a block when its pool is full, it sends blocks to the global pool, and, similarly, when that CPU is trying to allocate a block when its pool is empty, it retrieves blocks from the global pool. 5.4.3.3 Data Structures The actual data structures for a “toy” implementation of allocator caches are shown in Figure  5.28 . The “Global Pool” of Figure  5.27  is implemented by  globalmem  of  type  struct globalmempool , and the two CPU pools by the per-CPU variable percpumem  of type  percpumempool . Both of these data structures have arrays 106 1 struct hash_table 2 { 3 long nbuckets; 4 struct bucket  ** buckets; 5 }; 6 7 struct bucket { 8 spinlock_t bucket_lock; 9 node_t  * list_head; 10 }; 11 12 typedef struct node { 13 spinlock_t node_lock; 14 unsigned long key; 15 struct node  * next; 16 } node_t; 17 18 int hash_search(struct hash_table  * h, long key) 19 { 20 struct bucket  * bp; 21 struct node  * cur; 22 int retval; 23 24 bp = h->buckets[key % h->nbuckets]; 25 spin_lock(&bp->bucket_lock); 26 cur = bp->list_head; 27 while (cur != NULL) { 28 if (cur->key >= key) { 29 spin_lock(&cur->node_lock); 30 spin_unlock(&bp->bucket_lock); 31 retval = (cur->key == key); 32 spin_unlock(&cur->node_lock); 33 return retval; 34 } 35 cur = cur->next; 36 } 37 spin_unlock(&bp->bucket_lock); 38 return 0; 39 } Figure 5.26: Hierarchical-Locking Hash Table Search 107 CPU 0 Pool (Owned by CPU 0) CPU 1 Pool (Owned by CPU 1) Global Pool (Code Locked) Allocate/Free      O     v     e     r      f      l     o     w      E     m     p      t     y  O     v     e     r      f      l     o     w      E     m     p      t     y Figure 5.27: Allocator Cache Schematic of pointers to blocks in their  pool  fields, which are filled from index zero upwards. Thus, if   globalmem.pool[3]  is  NULL , then the remainder of the array from index 4 up must also be NULL. 
The  cur  fields contain the index of the highest-numbered full element of the  pool  array, or -1 if all elements are empty. All elements from globalmem.pool[0]  through  globalmem.pool[globalmem.cur]  must be full, and all the rest must be empty . 13 1 #define TARGET_POOL_SIZE 3 2 #define GLOBAL_POOL_SIZE 40 3 4 struct globalmempool { 5 spinlock_t mutex; 6 int cur; 7 struct memblock  * pool[GLOBAL_POOL_SIZE]; 8 } globalmem; 9 10 struct percpumempool { 11 int cur; 12 struct memblock  * pool[2  *  TARGET_POOL_SIZE]; 13 }; 14 15 DEFINE_PER_THREAD(struct percpumempool, percpumem); Figure 5.28: Allocator-Cache Data Structures The operation of the pool data structures is illustrated by Figure  5.29,  with the six boxes representing the array of pointers making up the  pool  field, and the number preceding them representing the  cur  field. The shaded boxes represent non- NULL pointers, while the empty boxes represent  NULL  pointers. An important, though po- tentially confusing, invariant of this data structure is that the  cur  field is always one smaller than the number of non- NULL  pointers. 13 Both pool sizes ( TARGET_POOL_SIZE  and  GLOBAL_POOL_SIZE ) are unrealistically small, but this small size makes it easier to single-step the program in order to get a feel for its operation. 108 −1 (Empty) 0 1 2 3 4 5 Figure 5.29: Allocator Pool Schematic 5.4.3.4 Allocation Function The allocation function  memblock_alloc()  may be seen in Figure  5.30 . Line 7 picks up the current thread’s per-thread pool, and line 8 check to see if it is empty. If so, lines 9-16 attempt to refill it from the global pool under the spinlock acquired on line 9 and released on line 16. Lines 10-14 move blocks from the global to the per-thread pool until either the local pool reaches its target size (half full) or the global pool is exhausted, and line 15 sets the per-thread pool’s count to the proper value. In either case, line 18 checks for the per-thread pool still being empty, and if not, lines 19-21 remove a block and return it. Otherwise, line 23 tells the sad tale of memory exhaustion. 1 struct memblock  * memblock_alloc(void) 2 { 3 int i; 4 struct memblock  * p; 5 struct percpumempool  * pcpp; 6 7 pcpp = &__get_thread_var(percpumem); 8 if (pcpp->cur < 0) { 9 spin_lock(&globalmem.mutex); 10 for (i = 0; i < TARGET_POOL_SIZE && 11 globalmem.cur >= 0; i++) { 12 pcpp->pool[i] = globalmem.pool[globalmem.cur]; 13 globalmem.pool[globalmem.cur--] = NULL; 14 } 15 pcpp->cur = i - 1; 16 spin_unlock(&globalmem.mutex); 17 } 18 if (pcpp->cur >= 0) { 19 p = pcpp->pool[pcpp->cur]; 20 pcpp->pool[pcpp->cur--] = NULL; 21 return p; 22 } 23 return NULL; 24 } Figure 5.30: Allocator-Cache Allocator Function 109 5.4.3.5 Free Function Figure  5.31  shows the memory-block free function. Line 6 gets a pointer to this thread’s pool, and line 7 checks to see if this per-thread pool is full. If so, lines 8-15 empty half of the per-thread pool into the global pool, with lines 8 and 14 acquiring and releasing the spinlock. Lines 9-12 implement the loop moving blocks from the local to the global pool, and line 13 sets the per-thread pool’s count to the proper value. In either case, line 16 then places the newly freed block into the per-thread pool. 
1 void memblock_free(struct memblock *p)
2 {
3   int i;
4   struct percpumempool *pcpp;
5
6   pcpp = &__get_thread_var(percpumem);
7   if (pcpp->cur >= 2 * TARGET_POOL_SIZE - 1) {
8     spin_lock(&globalmem.mutex);
9     for (i = pcpp->cur; i >= TARGET_POOL_SIZE; i--) {
10       globalmem.pool[++globalmem.cur] = pcpp->pool[i];
11       pcpp->pool[i] = NULL;
12     }
13     pcpp->cur = i;
14     spin_unlock(&globalmem.mutex);
15   }
16   pcpp->pool[++pcpp->cur] = p;
17 }
Figure 5.31: Allocator-Cache Free Function
5.4.3.6 Performance
Rough performance results14 are shown in Figure 5.32, running on a dual-core Intel x86 running at 1GHz (4300 bogomips per CPU) with at most six blocks allowed in each CPU's cache. In this micro-benchmark, each thread repeatedly allocates a group of blocks and then frees all the blocks in that group, with the number of blocks in the group being the "allocation run length" displayed on the x-axis. The y-axis shows the number of successful allocation/free pairs per microsecond; failed allocations are not counted. The "X"s are from a two-thread run, while the "+"s are from a single-threaded run.
14 This data was not collected in a statistically meaningful way, and therefore should be viewed with great skepticism and suspicion. Good data-collection and -reduction practice is discussed in Chapter 10. That said, repeated runs gave similar results, and these results match more careful evaluations of similar algorithms.
Figure 5.32: Allocator Cache Performance (successful allocations/frees per microsecond versus allocation run length)
Note that run lengths up to six scale linearly and give excellent performance, while run lengths greater than six show poor performance and almost always also show negative scaling. It is therefore quite important to size TARGET_POOL_SIZE sufficiently large, which fortunately is usually quite easy to do in actual practice [MSK01], especially given today's large memories. For example, in most systems, it is quite reasonable to set TARGET_POOL_SIZE to 100, in which case allocations and frees are guaranteed to be confined to per-thread pools at least 99% of the time.
As can be seen from the figure, the situations where the common-case data ownership applies (run lengths up to six) provide greatly improved performance compared to the cases where locks must be acquired. Avoiding synchronization in the common case will be a recurring theme through this book.
Quick Quiz 5.16: In Figure 5.32, there is a pattern of performance rising with increasing run length in groups of three samples, for example, for run lengths 10, 11, and 12. Why?
Quick Quiz 5.17: Allocation failures were observed in the two-thread tests at run lengths of 19 and greater. Given the global-pool size of 40 and the per-thread target pool size s of three, number of threads n equal to two, and assuming that the per-thread pools are initially empty with none of the memory in use, what is the smallest allocation run length m at which failures can occur? (Recall that each thread repeatedly allocates m blocks of memory, and then frees the m blocks of memory.) Alternatively, given n threads each with pool size s, and where each thread repeatedly first allocates m blocks of memory and then frees those m blocks, how large must the global pool size be?
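For concreteness, the allocation-run-length micro-benchmark described above boils down to each thread executing a loop of roughly the following shape. This is a sketch of the test pattern rather than the actual benchmark harness; in particular, the bound on the run length and the handling of failed allocations are assumptions.

#define MAX_RUN_LEN 32   /* assumed upper bound on the run length */

void run_length_test(int runlen, long nrounds)
{
        struct memblock *blocks[MAX_RUN_LEN];
        long round;
        int i;

        for (round = 0; round < nrounds; round++) {
                for (i = 0; i < runlen; i++)
                        blocks[i] = memblock_alloc();   /* may be NULL on exhaustion */
                for (i = runlen - 1; i >= 0; i--)
                        if (blocks[i] != NULL)          /* failed allocations not counted */
                                memblock_free(blocks[i]);
        }
}

With runlen no greater than TARGET_POOL_SIZE, every allocation and free stays within the calling thread's own pool, which is the fast, lock-free region of Figure 5.32.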
5.4.3.7 Real-World Design
The toy parallel resource allocator was quite simple, but real-world designs expand on this approach in a number of ways.
First, real-world allocators are required to handle a wide range of allocation sizes, as opposed to the single size shown in this toy example. One popular way to do this is to offer a fixed set of sizes, spaced so as to balance external and internal fragmentation, such as in the late-1980s BSD memory allocator [MK88]. Doing this would mean that the "globalmem" variable would need to be replicated on a per-size basis, and that the associated lock would similarly be replicated, resulting in data locking rather than the toy program's code locking.
Second, production-quality systems must be able to repurpose memory, meaning that they must be able to coalesce blocks into larger structures, such as pages [MS93]. This coalescing will also need to be protected by a lock, which again could be replicated on a per-size basis.
Third, coalesced memory must be returned to the underlying memory system, and pages of memory must also be allocated from the underlying memory system. The locking required at this level will depend on that of the underlying memory system, but could well be code locking. Code locking can often be tolerated at this level, because this level is so infrequently reached in well-designed systems [MSK01].
Despite this real-world design's greater complexity, the underlying idea is the same: repeated application of parallel fastpath, as shown in Table 5.1.
Level               Locking          Purpose
Per-thread pool     Data ownership   High-speed allocation
Global block pool   Data locking     Distributing blocks among threads
Coalescing          Data locking     Combining blocks into pages
System memory       Code locking     Memory from/to system
Table 5.1: Schematic of Real-World Parallel Allocator
5.5 Beyond Partitioning
This chapter has discussed how data partitioning can be used to design simple linearly scalable parallel programs. Section 5.3.4 hinted at the possibilities of data replication, which will be used to great effect in Section 8.3.
The main goal of applying partitioning and replication is to achieve linear speedups, in other words, to ensure that the total amount of work required does not increase significantly as the number of CPUs or threads increases. A problem that can be solved via partitioning and/or replication, resulting in linear speedups, is embarrassingly parallel. But can we do better?
To answer this question, let us examine the solution of labyrinths and mazes. Of course, labyrinths and mazes have been objects of fascination for millennia [Wik12], so it should come as no surprise that they are generated and solved using computers, including biological computers [Ada11], GPGPUs [Eri08], and even discrete hardware [KFC11]. Parallel solution of mazes is sometimes used as a class project in universities [ETH11, Uni10] and as a vehicle to demonstrate the benefits of parallel-programming frameworks [Fos10].
Common advice is to use a parallel work-queue algorithm (PWQ) [ETH11, Fos10]. This section evaluates this advice by comparing PWQ against a sequential algorithm (SEQ) and also against an alternative parallel algorithm, in all cases solving randomly generated square mazes.
Section  5.5.1  discusses PWQ, Section  5.5.2  discusses an alternative parallel algorithm, Section  5.5.3  analyzes its anomalous performance, Sec- tion  5.5.4  derives an improved sequential algorithm from the alternative parallel algo- rithm, Section  5.5.5  makes further performance comparisons, and finally Section  5.5.6 presents future directions and concluding remarks. 5.5.1 Work-Queue Parallel Maze Solver PWQ is based on SEQ, which is shown in Figure  5.33  ( maze_seq.c ). The maze is represented by a 2D array of cells and a linear-array-based work queue named ->visited . Line 7 visits the initial cell, and each iteration of the loop spanning lines 8- 21 traverses passages headed by one cell. The loop spanning lines 9-13 scans the 112 1 int maze_solve(maze  * mp, cell sc, cell ec) 2 { 3 cell c = sc; 4 cell n; 5 int vi = 0; 6 7 maze_try_visit_cell(mp, c, c, &n, 1); 8 for (;;) { 9 while (!maze_find_any_next_cell(mp, c, &n)) { 10 if (++vi >= mp->vi) 11 return 0; 12 c = mp->visited[vi].c; 13 } 14 do { 15 if (n == ec) { 16 return 1; 17 } 18 c = n; 19 } while (maze_find_any_next_cell(mp, c, &n)); 20 c = mp->visited[vi].c; 21 } 22 } Figure 5.33: SEQ Pseudocode ->visited[]  array for a visited cell with an unvisited neighbor, and the loop span- ning lines 14-19 traverses one fork of the submaze headed by that neighbor. Line 20 initializes for the next pass through the outer loop. The pseudocode for  maze_try_visit_cell()  is shown on lines 1-12 of Fig- ure  5.34.  Line 4 checks to see if cells  c  and  n  are adjacent and connected, while line 5 checks to see if cell  n  has not yet been visited. The  celladdr()  function returns the address of the specified cell. If either check fails, line 6 returns failure. Line 7 indicates the next cell, line 8 records this cell in the next slot of the  ->visited[]  array, line 9 indicates that this slot is now full, and line 10 marks this cell as visited and also records the distance from the maze start. Line 11 then returns success. The pseudocode for  maze_find_any_next_cell()  is shown on lines 14- 28 of the figure ( maze.c ). Line 17 picks up the current cell’s distance plus 1, while lines19, 21, 23, and25checkthecellineachdirection, andlines20, 22, 24, and26return true if the corresponding cell is a candidate next cell. The  prevcol() ,  nextcol() , prevrow() , and nextrow() each do the specified array-index-conversion operation. If none of the cells is a candidate, line 27 returns false. The path is recorded in the maze by counting the number of cells from the starting point, as shown in Figure  5.35,  where the starting cell is in the upper left and the ending cell is in the lower right. Starting at the ending cell and following consecutively decreasing cell numbers traverses the solution. The parallel work-queue solver is a straightforward parallelization of the algorithm shown in Figures  5.33  and  5.34.  Line 10 of Figure  5.33  must use fetch-and-add, and the local variable  vi  must be shared among the various threads. Lines 5 and 10 of  Figure  5.34  must be combined into a CAS loop, with CAS failure indicating a loop in the maze. Lines 8-9 of this figure must use fetch-and-add to arbitrate concurrent attempts to record cells in the  ->visited[]  array. 
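To make the preceding modifications concrete, a cell-visiting function for PWQ might look like the following sketch, here using GCC's __sync atomic builtins and assuming that the cell type is an integer wide enough to hold the VISITED bit and the distance field, as in Figure 5.34. The actual PWQ code in the CodeSamples tree may differ in detail.

/* Sketch of the PWQ cell-visit step: the visited check and the marking
 * of line 10 of Figure 5.34 are combined into a single CAS loop, and a
 * fetch-and-add claims a slot in the shared ->visited[] array. */
int pwq_try_visit_cell(struct maze *mp, cell c, cell t, cell *n, int d)
{
        cell oldc;
        int vi;

        if (!maze_cells_connected(mp, c, t))
                return 0;
        do {
                oldc = ACCESS_ONCE(*celladdr(mp, t));
                if (oldc & VISITED)
                        return 0;               /* lost the race: already visited */
        } while (!__sync_bool_compare_and_swap(celladdr(mp, t), oldc,
                                               oldc | VISITED | d));
        *n = t;
        vi = __sync_fetch_and_add(&mp->vi, 1);  /* claim a ->visited[] slot */
        mp->visited[vi] = t;
        return 1;
}

A CAS failure means some other thread marked the cell first, which is how PWQ detects that a given passage has already been claimed.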
This approach does provide significant speedups on a dual-CPU Lenovo ™ W500 running at 2.53GHz, as shown in Figure  5.36,  which shows the cumulative distribution functions (CDFs) for the solution times of the two algorithms, based on the solution of  500 different square 500-by-500 randomly generated mazes. The substantial overlap of  113 1 int maze_try_visit_cell(struct maze  * mp, cell c, cell t, 2 cell  * n, int d) 3 { 4 if (!maze_cells_connected(mp, c, t) || 5 ( * celladdr(mp, t) & VISITED)) 6 return 0; 7  * n = t; 8 mp->visited[mp->vi] = t; 9 mp->vi++; 10  * celladdr(mp, t) |= VISITED | d; 11 return 1; 12 } 13 14 int maze_find_any_next_cell(struct maze  * mp, cell c, 15 cell  * n) 16 { 17 int d = ( * celladdr(mp, c) & DISTANCE) + 1; 18 19 if (maze_try_visit_cell(mp, c, prevcol(c), n, d)) 20 return 1; 21 if (maze_try_visit_cell(mp, c, nextcol(c), n, d)) 22 return 1; 23 if (maze_try_visit_cell(mp, c, prevrow(c), n, d)) 24 return 1; 25 if (maze_try_visit_cell(mp, c, nextrow(c), n, d)) 26 return 1; 27 return 0; 28 } Figure 5.34: SEQ Helper Pseudocode 2 2 3 1 3 3 4 5 4 Figure 5.35: Cell-Number Solution Tracking the projection of the CDFs onto the x-axis will be addressed in Section  5.5.3. Interestingly enough, the sequential solution-path tracking works unchanged for the parallel algorithm. However, this uncovers a significant weakness in the parallel algorithm: At most one thread may be making progress along the solution path at any given time. This weakness is addressed in the next section. 5.5.2 Alternative Parallel Maze Solver Youthful maze solvers are often urged to start at both ends, and this advice has been repeated more recently in the context of automated maze solving [ Uni10] . This advice amounts to partitioning, which has been a powerful parallelization strategy in the context of parallel programming for both operating-system kernels [ BK85 ,  Inm85 ] and applications  [ Pat10 ] . This section applies this strategy, using two child threads that start at opposite ends of the solution path, and takes a brief look at the performance and scalability consequences. The partitioned parallel algorithm (PART), shown in Figure  5.37  ( maze_part.c ), 114  0  0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1  0 20 40 60 80 100 120 140      r      o        b      a        b        i        l        i       t      y CDF of Solution Time (ms) SEQ PWQ Figure 5.36: CDF of Solution Times For SEQ and PWQ 1 int maze_solve_child(maze  * mp, cell  * visited, cell sc) 2 { 3 cell c; 4 cell n; 5 int vi = 0; 6 7 myvisited = visited; myvi = &vi; 8 c = visited[vi]; 9 do { 10 while (!maze_find_any_next_cell(mp, c, &n)) { 11 if (visited[++vi].row < 0) 12 return 0; 13 if (ACCESS_ONCE(mp->done)) 14 return 1; 15 c = visited[vi]; 16 } 17 do { 18 if (ACCESS_ONCE(mp->done)) 19 return 1; 20 c = n; 21 } while (maze_find_any_next_cell(mp, c, &n)); 22 c = visited[vi]; 23 } while (!ACCESS_ONCE(mp->done)); 24 return 1; 25 } Figure 5.37: Partitioned Parallel Solver Pseudocode is similar to SEQ, but has a few important differences. First, each child thread has its own  visited  array, passed in by the parent as shown on line 1, which must be initialized to all [-1,-1]. Line 7 stores a pointer to this array into the per-thread variable myvisited  to allow access by helper functions, and similarly stores a pointer to the local visit index. Second, the parent visits the first cell on each child’s behalf, which the child retrieves on line 8. 
Third, the maze is solved as soon as one child locates a cell that has been visited by the other child. When maze_try_visit_cell() detects this, it sets a ->done field in the maze structure. Fourth, each child must therefore periodically check the ->done field, as shown on lines 13, 18, and 23. The ACCESS_ONCE() primitive must disable any compiler optimizations that might combine consecutive loads or that might reload the value. A C++1x volatile relaxed load suffices [Bec11]. Finally, the maze_find_any_next_cell() function must use compare-and-swap to mark a cell as visited; however, no constraints on ordering are required beyond those provided by thread creation and join.
The pseudocode for maze_find_any_next_cell() is identical to that shown in Figure 5.34, but the pseudocode for maze_try_visit_cell() differs, and is shown in Figure 5.38. Lines 8-9 check to see if the cells are connected, returning failure if not. The loop spanning lines 11-18 attempts to mark the new cell visited. Line 13 checks to see if it has already been visited, in which case line 16 returns failure, but only after line 14 checks to see if we have encountered the other thread, in which case line 15 indicates that the solution has been located. Line 19 updates to the new cell, lines 20 and 21 update this thread's visited array, and line 22 returns success.
1 int maze_try_visit_cell(struct maze *mp, int c, int t,
2                         int *n, int d)
3 {
4   cell_t t;
5   cell_t *tp;
6   int vi;
7
8   if (!maze_cells_connected(mp, c, t))
9     return 0;
10   tp = celladdr(mp, t);
11   do {
12     t = ACCESS_ONCE(*tp);
13     if (t & VISITED) {
14       if ((t & TID) != mytid)
15         mp->done = 1;
16       return 0;
17     }
18   } while (!CAS(tp, t, t | VISITED | myid | d));
19   *n = t;
20   vi = (*myvi)++;
21   myvisited[vi] = t;
22   return 1;
23 }
Figure 5.38: Partitioned Parallel Helper Pseudocode
Performance testing revealed a surprising anomaly, shown in Figure 5.39. The median solution time for PART (17 milliseconds) is more than four times faster than that of SEQ (79 milliseconds), despite running on only two threads. The next section analyzes this anomaly.
Figure 5.39: CDF of Solution Times For SEQ, PWQ, and PART (cumulative probability versus solution time in milliseconds for the three solvers)
5.5.3 Performance Comparison I
The first reaction to a performance anomaly is to check for bugs. Although the algorithms were in fact finding valid solutions, the plot of CDFs in Figure 5.39 assumes independent data points. This is not the case: The performance tests randomly generate a maze, and then run all solvers on that maze. It therefore makes sense to plot the CDF of the ratios of solution times for each generated maze, as shown in Figure 5.40, greatly reducing the CDFs' overlap. This plot reveals that for some mazes, PART is more than forty times faster than SEQ. In contrast, PWQ is never more than about two times faster than SEQ.
Figure 5.40: CDF of SEQ/PWQ and SEQ/PART Solution-Time Ratios (cumulative probability versus speedup relative to SEQ, log scale)
Figure 5.41: Reason for Small Visit Percentages
A forty-times speedup on two threads demands explanation. After all, this is not merely embarrassingly parallel, where partitionability means that adding threads does not increase the overall computational cost.
It is instead  humiliatingly parallel : Adding threads significantly reduces the overall computational cost, resulting in large algorithmic superlinear speedups. Further investigation showed that PART sometimes visited fewer than 2% of the maze’s cells, while SEQ and PWQ never visited fewer than about 9%. The reason for this difference is shown by Figure  5.41.  If the thread traversing the solution from the upper left reaches the circle, the other thread cannot reach the upper-right portion of  the maze. Similarly, if the other thread reaches the square, the first thread cannot reach the lower-left portion of the maze. Therefore, PART will likely visit a small fraction of  the non-solution-path cells. In short, the superlinear speedups are due to threads getting 117  0  20  40  60  80  100  120  1  0 10 20 30 40 50 6 0 70  80 90 100    S   o    l   u    t    i   o   n    T    i   m   e    (   m   s    ) Percent of Maze Cells Visited SEQ PART PWQ Figure 5.42: Correlation Between Visit Percentage and Solution Time Figure 5.43: PWQ Potential Contention Points in each others’ way. This is a sharp contrast with decades of experience with parallel programming, where workers have struggled to keep threads  out   of each others’ way. Figure  5.42  confirms a strong correlation between cells visited and solution time for all three methods. The slope of PART’s scatterplot is smaller than that of SEQ, indicating that PART’s pair of threads visits a given fraction of the maze faster than can SEQ’s single thread. PART’s scatterplot is also weighted toward small visit percentages, confirming that PART does less total work, hence the observed humiliating parallelism. The fraction of cells visited by PWQ is similar to that of SEQ. In addition, PWQ’s solution time is greater than that of PART, even for equal visit fractions. The reason for this is shown in Figure  5.43,  which has a red circle on each cell with more than two neighbors. Each such cell can result in contention in PWQ, because one thread can enter but two threads can exit, which hurts performance, as noted earlier in this chapter. In contrast, PART can incur such contention but once, namely when the solution is located. Of course, SEQ never contends. Although PART’s speedup is impressive, we should not neglect sequential optimiza- tions. Figure  5.44  shows that SEQ, when compiled with -O3, is about twice as fast as unoptimized PWQ, approaching the performance of unoptimized PART. Compiling all three algorithms with -O3 gives results similar to (albeit faster than) those shown in Figure  5.40,  except that PWQ provides almost no speedup compared to SEQ, in keeping with Amdahl’s Law [ Amd67 ]. However, if the goal is to double performance compared to unoptimized SEQ, as opposed to achieving optimality, compiler optimizations are 118  0  0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1  0.1 1 10 100      r      o        b      a        b        i        l        i       t      y CDF of Speedup Relative to SEQ PWQ PART SEQ -O3 Figure 5.44: Effect of Compiler Optimization (-O3)  0  0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1  0.1 1 10 100      r      o        b      a        b        i        l        i       t      y CDF of Speedup Relative to SEQ (-O3) PWQ PART COPART Figure 5.45: Partitioned Coroutines quite attractive. Cache alignment and padding often improves performance by reducing false sharing. However, for these maze-solution algorithms, aligning and padding the maze-cell array degrades  performance by up to 42% for 1000x1000 mazes. 
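For reference, the alignment and padding in question is the difference between a packed cell array and one in which each cell is padded out to a full cache line. The following is a minimal sketch using hypothetical type and macro names rather than anything from the maze-solver sources; the assumed cache-line size of 64 bytes is likewise only an illustration.

#include <stdint.h>

#define CACHE_LINE_SIZE 64 /* assumed, not measured */

/* Packed layout: sixteen 4-byte cells share each 64-byte cache line,
 * so a solver touches far fewer cache lines per cell visited. */
typedef uint32_t packed_cell_t;

/* Padded layout: one cell per cache line.  This eliminates false
 * sharing between threads working on adjacent cells, but multiplies
 * the memory footprint and the number of cache lines touched. */
struct padded_cell {
	uint32_t val;
	uint8_t pad[CACHE_LINE_SIZE - sizeof(uint32_t)];
} __attribute__((aligned(CACHE_LINE_SIZE)));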
Cache locality is more important than avoiding false sharing, especially for large mazes. For smaller 20-by- 20 or 50-by-50 mazes, aligning and padding can produce up to a 40% performance improvement for PART, but for these small sizes, SEQ performs better anyway because there is insufficient time for PART to make up for the overhead of thread creation and destruction. In short, the partitioned parallel maze solver is an interesting example of an algo- rithmic superlinear speedup. If “algorithmic superlinear speedup” causes cognitive dissonance, please proceed to the next section. 5.5.4 Alternative Sequential Maze Solver The presence of algorithmic superlinear speedups suggests simulating parallelism via co-routines, for example, manually switching context between threads on each pass 119  0  2  4  6  8  10  1  10 100 1000    S   p   e   e    d   u   p    R   e    l   a    t    i   v   e    t   o    S    E    Q      (   -    O    3    ) Maze Size P WQ PART Figure 5.46: Varying Maze Size vs. SEQ  0  0.2  0.4  0.6  0.8  1  1.2  1.4  1.6  1.  10 100 1000    S   p   e   e    d   u   p    R   e    l   a    t    i   v   e    t   o    C    O    P    A    R    T    (   -    O    3    ) Maze Size PWQ PART Figure 5.47: Varying Maze Size vs. COPART through the main do-while loop in Figure  5.37 . This context switching is straightforward because the context consists only of the variables  c  and  vi : Of the numerous ways to achieve the effect, this is a good tradeoff between context-switch overhead and visit percentage. As can be seen in Figure  5.45,  this coroutine algorithm (COPART) is quite effective, with the performance on one thread being within about 30% of PART on two threads ( maze_2seq.c ). 5.5.5 Performance Comparison II Figures  5.46  and  5.47  show the effects of varying maze size, comparing both PWQ and PART running on two threads against either SEQ or COPART, respectively, with 90%-confidence error bars. PART shows superlinear scalability against SEQ and modest scalability against COPART for 100-by-100 and larger mazes. PART exceeds theoretical energy-efficiency breakeven against COPART at roughly the 200-by-200 maze size, given that power consumption rises as roughly the square of the frequency for high frequencies  [ Mud00 ], so that 1.4x scaling on two threads consumes the same energy as a single thread at equal solution speeds. In contrast, PWQ shows poor scalability against 120  0  0.5  1  1.5  2  2.5  3  .  1 2 3 4 5 6 7 8    M   e   a   n    S   p   e   e    d   u   p    R   e    l   a    t    i   v   e    t   o    C    O    P    A    R    T    (   -    O    3    ) Number of Threads PWQ PART Figure 5.48: Mean Speedup vs. Number of Threads, 1000x1000 Maze both SEQ and COPART unless unoptimized: Figures  5.46  and  5.47  were generated using -O3. Figure  5.48  shows the performance of PWQ and PART relative to COPART. For PART runs with more than two threads, the additional threads were started evenly spaced along the diagonal connecting the starting and ending cells. Simplified link-state routing  [ BG87 ] was used to detect early termination on PART runs with more than two threads (the solution is flagged when a thread is connected to both beginning and end). PWQ performs quite poorly, but PART hits breakeven at two threads and again at five threads, achieving modest speedups beyond five threads. Theoretical energy efficiency breakeven is within the 90% confidence interval for seven and eight threads. 
The reasons for the peak at two threads are (1) the lower complexity of termination detection in the two-thread case and (2) the fact that there is a lower probability of the third and subsequent threads making useful forward progress: Only the first two threads are guaranteed to start on the solution line. This disappointing performance compared to the results in Figure 5.47 is due to the less-tightly integrated hardware available in the larger and older Xeon® system running at 2.66GHz.

5.5.6 Future Directions and Conclusions

Much future work remains. First, this section applied only one technique used by human maze solvers. Others include following walls to exclude portions of the maze and choosing internal starting points based on the locations of previously traversed paths. Second, different choices of starting and ending points might favor different algorithms. Third, although placement of the PART algorithm's first two threads is straightforward, there are any number of placement schemes for the remaining threads. Optimal placement might well depend on the starting and ending points. Fourth, study of unsolvable mazes and cyclic mazes is likely to produce interesting results. Fifth, the lightweight C++11 atomic operations might improve performance. Sixth, it would be interesting to compare the speedups for three-dimensional mazes (or of even higher-order mazes). Finally, for mazes, humiliating parallelism indicated a more-efficient sequential implementation using coroutines. Do humiliatingly parallel algorithms always lead to more-efficient sequential implementations, or are there inherently humiliatingly parallel algorithms for which coroutine context-switch overhead overwhelms the speedups?
This section demonstrated and analyzed parallelization of maze-solution algorithms. A conventional work-queue-based algorithm did well only when compiler optimizations were disabled, suggesting that some prior results obtained using high-level/overhead languages will be invalidated by advances in optimization. This section gave a clear example where approaching parallelism as a first-class optimization technique rather than as a derivative of a sequential algorithm paves the way for an improved sequential algorithm. High-level design-time application of parallelism is likely to be a fruitful field of study. This section took the problem of solving mazes from mildly scalable to humiliatingly parallel and back again. It is hoped that this experience will motivate work on parallelism as a first-class design-time whole-application optimization technique, rather than as a grossly suboptimal after-the-fact micro-optimization to be retrofitted into existing programs.

5.6 Partitioning, Parallelism, and Optimization

Most important, although this chapter has demonstrated that applying parallelism at the design level gives excellent results, this final section shows that this is not enough. For search problems such as maze solution, this section has shown that search strategy is even more important than parallel design. Yes, for this particular type of maze, intelligently applying parallelism identified a superior search strategy, but this sort of luck is no substitute for a clear focus on search strategy itself.
As noted back in Section 1.2, parallelism is but one potential optimization of many. A successful design needs to focus on the most important optimization. Much though I might wish to claim otherwise, that optimization might or might not be parallelism.
However, for the many cases where parallelism is the right optimization, the next section covers that synchronization workhorse, locking. 122 Chapter 6 Locking In recent concurrency research, the role of villain is often played by locking. In many papers and presentations, locking stands accused of promoting deadlocks, convoying, starvation, unfairness, data races, and all manner of other concurrency sins. Interestingly enough, the role of workhorse in production-quality shared-memory parallel software is played by, you guessed it, locking. This chapter will look into this dichotomy between villain and hero, as fancifully depicted in Figures  6.1  and Figure  6.2. There are a number of reasons behind this Jekyll-and-Hyde dichotomy: 1.  Many of locking’s sins have pragmatic design solutions that work well in most cases, for example: (a) Use of lock hierarchies to avoid deadlock. (b)  Deadlock-detection tools, for example, the Linux kernel’s lockdep facil- ity [ Cor06a ]. (c)  Locking-friendly data structures, such as arrays, hash tables, and radix trees, which will be covered in Chapter  9. 2.  Some of locking’s sins are problems only at high levels of contention, levels reached only by poorly designed programs. 3.  Some of locking’s sins are avoided by using other synchronization mechanisms in concert with locking. These other mechanisms include statistical counters (see Chapter  4 ), reference counters (see Section  8.1 ), hazard pointers (see Sec- tion  8.1.2 ), sequence-locking readers (see Section  8.2 ) RCU (see Section  8.3 ), and simple non-blocking data structures (see Section  13.3 ). 4.  Until quite recently, almost all large shared-memory parallel programs were developed in secret, so that it was difficult for most researchers to learn of these pragmatic solutions. 5.  Locking works extremely well for some software artifacts and extremely poorly for others. Developers who have worked on artifacts for which locking works well can be expected to have a much more positive opinion of locking than those who have worked on artifacts for which locking works poorly, as will be discussed in Section  6.5 . 123 Figure 6.1: Locking: Villain or Slob? 6.  All good stories need a villain, and locking has a long and honorable history serving as a research-paper whipping boy. Quick Quiz 6.1:  Just how can serving as a whipping boy be considered to be in any way honorable??? This chapter will give an overview of a number of ways to avoid locking’s more serious sins. 6.1 Staying Alive Given that locking stands accused of deadlock and starvation, one important concern for shared-memory parallel developers is simply staying alive. The following sections therefore cover deadlock, livelock, starvation, unfairness, and inefficiency. 6.1.1 Deadlock Deadlock occurs when each of a group of threads is holding at least one lock while at the same time waiting on a lock held by a member of that same group. Without some sort of external intervention, deadlock is forever. No thread can acquire the lock it is waiting on until that lock is released by the thread holding it, but the thread holding it cannot release it until the holding thread acquires the lock that it is waiting on. We can create a directed-graph representation of a deadlock scenario with nodes for threads and locks, as shown in Figure  6.3.  An arrow from a lock to a thread indicates that the thread holds the lock, for example, Thread B holds Locks 2 and 4. 
An arrow from a thread to a lock indicates that the thread is waiting on the lock, for example, Thread B is waiting on Lock 3. 124 Figure 6.2: Locking: Workhorse or Hero? Lock 1 Thread ALock 2 Thread B Lock 3 Thread CLock 4 Figure 6.3: Deadlock Cycle A deadlock scenario will always contain at least one deadlock cycle. In Figure  6.3, this cycle is Thread B, Lock 3, Thread C, Lock 4, and back to Thread B. Quick Quiz 6.2:  But the definition of deadlock only said that each thread was holding at least one lock and waiting on another lock that was held by some thread. How do you know that there is a cycle? Although there are some software environments such as database systems that can repair an existing deadlock, this approach requires either that one of the threads be killed or that a lock be forcibly stolen from one of the threads. This killing and forcible stealing can be appropriate for transactions, but is often problematic for kernel and application-level use of locking: dealing with the resulting partially updated structures can be extremely complex, hazardous, and error-prone. Kernels and applications therefore work to avoid deadlocks rather than to recover from them. There are a number of deadlock-avoidance strategies, including locking hierarchies (Section  6.1.1.1 ), local locking hierarchies (Section  6.1.1.2) , layered locking 125 hierarchies (Section  6.1.1.3 ), strategies for dealing with APIs containing pointers to locks (Section  6.1.1.4 ), conditional locking (Section  6.1.1.5 ), acquiring all needed locks first (Section  6.1.1.6 ), single-lock-at-a-time designs (Section  6.1.1.7 ), and strategies for signal/interrupt handlers (Section  6.1.1.8) . Although there is no deadlock-avoidance strategy that works perfectly for all situations, there is a good selection of deadlock- avoidance tools to choose from. 6.1.1.1 Locking Hierarchies Locking hierarchies order the locks and prohibit acquiring locks out of order. In Figure  6.3 , we might order the locks numerically, so that a thread was forbidden from acquiring a given lock if it already held a lock with the same or a higher number. Thread B has violated this hierarchy because it is attempting to acquire Lock 3 while holding Lock 4, which permitted the deadlock to occur. Again, to apply a locking hierarchy, order the locks and prohibit out-of-order lock acquisition. In large program, it is wise to use tools to enforce your locking hierarchy [ Cor06a ]. 6.1.1.2 Local Locking Hierarchies However, the global nature of locking hierarchies make them difficult to apply to library functions. After all, the program using a given library function has not even been written yet, so how can the poor library-function implementor possibly hope to adhere to the yet-to-be-written program’s locking hierarchy? One special case that is fortunately the common case is when the library function does not invoke any of the caller’s code. In this case, the caller’s locks will never be acquired while holding any of the library’s locks, so that there cannot be a deadlock cycle containing locks from both the library and the caller. Quick Quiz 6.3:  Are there any exceptions to this rule, so that there really could be a deadlock cycle containing locks from both the library and the caller, even given that the library code never invokes any of the caller’s functions? But suppose that a library function does invoke the caller’s code. For example, the  qsort()  function invokes a caller-provided comparison function. 
A concurrent implementation of qsort() likely uses locking, which might result in deadlock in the perhaps-unlikely case where the comparison function is a complicated function that itself acquires locks. How can the library function avoid deadlock?
The golden rule in this case is "release all locks before invoking unknown code." To follow this rule, the qsort() function must release all locks before invoking the comparison function.
Quick Quiz 6.4: But if qsort() releases all its locks before invoking the comparison function, how can it protect against races with other qsort() threads?
To see the benefits of local locking hierarchies, compare Figures 6.4 and 6.5. In both figures, application functions foo() and bar() invoke qsort() while holding locks A and B, respectively. Because this is a parallel implementation of qsort(), it acquires lock C. Function foo() passes function cmp() to qsort(), and cmp() acquires lock B. Function bar() passes a simple integer-comparison function (not shown) to qsort(), and this simple function does not acquire any locks.

Figure 6.4: Without Local Locking Hierarchy for qsort()

Figure 6.5: Local Locking Hierarchy for qsort()

Now, if qsort() holds Lock C while calling cmp() in violation of the golden release-all-locks rule above, as shown in Figure 6.4, deadlock can occur. To see this, suppose that one thread invokes foo() while a second thread concurrently invokes bar(). The first thread will acquire lock A and the second thread will acquire lock B. If the first thread's call to qsort() acquires lock C, then it will be unable to acquire lock B when it calls cmp(). But the first thread holds lock C, so the second thread's call to qsort() will be unable to acquire it, and thus unable to release lock B, resulting in deadlock.
In contrast, if qsort() releases lock C before invoking the comparison function (which is unknown code from qsort()'s perspective), then deadlock is avoided as shown in Figure 6.5.
If each module releases all locks before invoking unknown code, then deadlock is avoided if each module separately avoids deadlock. This rule therefore greatly simplifies deadlock analysis and greatly improves modularity.

Figure 6.6: Layered Locking Hierarchy for qsort()

6.1.1.3 Layered Locking Hierarchies

Unfortunately, it might not be possible for qsort() to release all of its locks before invoking the comparison function. In this case, we cannot construct a local locking hierarchy by releasing all locks before invoking unknown code. However, we can instead construct a layered locking hierarchy, as shown in Figure 6.6. Here, the cmp() function uses a new lock D that is acquired after all of locks A, B, and C, avoiding deadlock. We therefore have three layers to the global deadlock hierarchy, the first containing locks A and B, the second containing lock C, and the third containing lock D.
Please note that it is not typically possible to mechanically change cmp() to use the new lock D. Quite the opposite: It is often necessary to make profound design-level modifications. Nevertheless, the effort required for such modifications is normally a small price to pay in order to avoid deadlock.
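In code, the local-locking-hierarchy rule of Figure 6.5 boils down to the shape sketched below. This is a hedged illustration only: lib_lock, lib_apply(), and the callback type are hypothetical stand-ins rather than the book's qsort() example, and a real library would also need to revalidate any state that other threads might have changed while the lock was dropped (which is the subject of Quick Quiz 6.4).

#include <pthread.h>

/* Hypothetical library-internal lock, playing the role of lock C. */
static pthread_mutex_t lib_lock = PTHREAD_MUTEX_INITIALIZER;

/* Apply a caller-supplied callback to an item, following the
 * "release all locks before invoking unknown code" rule. */
void lib_apply(void *item, void (*cb)(void *))
{
	pthread_mutex_lock(&lib_lock);
	/* ... library-internal processing while holding lib_lock ... */
	pthread_mutex_unlock(&lib_lock); /* drop the lock: cb() is unknown code */

	cb(item); /* may acquire the caller's own locks without risking deadlock */

	pthread_mutex_lock(&lib_lock);
	/* ... revalidate library state, which may have changed meanwhile ... */
	pthread_mutex_unlock(&lib_lock);
}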
For another example where releasing all locks before invoking unknown code is impractical, imagine an iterator over a linked list, as shown in Figure  6.7  ( locked_  list.c ). The  list_start()  function acquires a lock on the list and returns the first element (if there is one), and  list_next()  either returns a pointer to the next element in the list or releases the lock and returns  NULL  if the end of the list has been reached. Figure  6.8  shows how this list iterator may be used. Lines 1-4 define the  list_  ints  element containing a single integer, and lines 6-17 show how to iterate over the list. Line 11 locks the list and fetches a pointer to the first element, line 13 provides a pointer to our enclosing list_ints structure, line 14 prints the corresponding integer, and line 15 moves to the next element. This is quite simple, and hides all of the locking. 128 1 struct locked_list { 2 spinlock_t s; 3 struct list_head h; 4 }; 5 6 struct list_head  * list_start(struct locked_list  * lp) 7 { 8 spin_lock(&lp->s); 9 return list_next(lp, &lp->h); 10 } 11 12 struct list_head  * list_next(struct locked_list  * lp, 13 struct list_head  * np) 14 { 15 struct list_head  * ret; 16 17 ret = np->next; 18 if (ret == &lp->h) { 19 spin_unlock(&lp->s); 20 ret = NULL; 21 } 22 return ret; 23 } Figure 6.7: Concurrent List Iterator 1 struct list_ints { 2 struct list_head n; 3 int a; 4 }; 5 6 void list_print(struct locked_list  * lp) 7 { 8 struct list_head  * np; 9 struct list_ints  * ip; 10 11 np = list_start(lp); 12 while (np != NULL) { 13 ip = list_entry(np, struct list_ints, n); 14 printf("t%d", ip->a); 15 np = list_next(lp, np); 16 } 17 } Figure 6.8: Concurrent List Iterator Usage 129 1 spin_lock(&lock2); 2 layer_2_processing(pkt); 3 nextlayer = layer_1(pkt); 4 spin_lock(&nextlayer->lock1); 5 layer_1_processing(pkt); 6 spin_unlock(&lock2); 7 spin_unlock(&nextlayer->lock1); Figure 6.9: Protocol Layering and Deadlock That is, the locking remains hidden as long as the code processing each list element does not itself acquire a lock that is held across some other call to  list_start()  or list_next() , which results in deadlock. We can avoid the deadlock by layering the locking hierarchy to take the list-iterator locking into account. This layered approach can be extended to an arbitrarily large number of layers, but each added layer increases the complexity of the locking design. Such increases in complexity are particularly inconvenient for some types of object-oriented designs, in which control passes back and forth among a large group of objects in an undisciplined manner . 1 This mismatch between the habits of object-oriented design and the need to avoid deadlock is an important reason why parallel programming is perceived by some to be so difficult. Some alternatives to highly layered locking hierarchies are covered in Chapter  8. 6.1.1.4 Locking Hierarchies and Pointers to Locks Althought there are some exceptions, an external API containing a pointer to a lock is very often a misdesigned API. Handing an internal lock to some other software component is after all the antithesis of information hiding, which is in turn a key design principle. Quick Quiz 6.5:  Name one common exception where it is perfectly reasonable to pass a pointer to a lock into a function. One exception is functions that hand off some entity, where the caller’s lock must be held until the handoff is complete, but where the lock must be released before the function returns. 
One example of such a function is the POSIX  pthread_cond_  wait()  function, where passing an pointer to a  pthread_mutex_t  prevents hangs due to lost wakeups. Quick Quiz 6.6:  Doesn’t the fact that pthread_cond_wait() first releases the mutex and then re-acquires it eliminate the possibility of deadlock? In short, if you find yourself exporting an API with a pointer to a lock as an argument or the return value, do youself a favor and carefully reconsider your API design. It might well be the right thing to do, but experience indicates that this is unlikely. 6.1.1.5 Conditional Locking But suppose that there is no reasonable locking hierarchy. This can happen in real life, for example, in layered network protocol stacks where packets flow in both directions. In the networking case, it might be necessary to hold the locks from both layers when passing a packet from one layer to another. Given that packets travel both up and down the protocol stack, this is an excellent recipe for deadlock, as illustrated in Figure  6.9. 1 One name for this is “object-oriented spaghetti code.” 130 1 retry: 2 spin_lock(&lock2); 3 layer_2_processing(pkt); 4 nextlayer = layer_1(pkt); 5 if (!spin_trylock(&nextlayer->lock1)) { 6 spin_unlock(&lock2); 7 spin_lock(&nextlayer->lock1); 8 spin_lock(&lock2); 9 if (layer_1(pkt) != nextlayer) { 10 spin_unlock(&nextlayer->lock1); 11 spin_unlock(&lock2); 12 goto retry; 13 } 14 } 15 layer_1_processing(pkt); 16 spin_unlock(&lock2); 17 spin_unlock(&nextlayer->lock1); Figure 6.10: Avoiding Deadlock Via Conditional Locking Here, a packet moving down the stack towards the wire must acquire the next layer’s lock out of order. Given that packets moving up the stack away from the wire are acquiring the locks in order, the lock acquisition in line 4 of the figure can result in deadlock. One way to avoid deadlocks in this case is to impose a locking hierarchy, but when it is necessary to acquire a lock out of order, acquire it conditionally, as shown in Figure  6.10 . Instead of unconditionally acquiring the layer-1 lock, line 5 conditionally acquires the lock using the  spin_trylock()  primitive. This primitive acquires the lock immediately if the lock is available (returning non-zero), and otherwise returns zero without acquiring the lock. If   spin_trylock()  was successful, line 15 does the needed layer-1 processing. Otherwise, line 6 releases the lock, and lines 7 and 8 acquire them in the correct order. Unfortunately, there might be multiple networking devices on the system (e.g., Ethernet and WiFi), so that the  layer_1()  function must make a routing decision. This decision might change at any time, especially if the system is mobile . 2 Therefore, line 9 must recheck the decision, and if it has changed, must release the locks and start over. Quick Quiz 6.7:  Can the transformation from Figure  6.9  to Figure  6.10  be applied universally? Quick Quiz 6.8:  But the complexity in Figure  6.10  is well worthwhile given that it avoids deadlock, right? 6.1.1.6 Acquire Needed Locks First In an important special case of conditional locking all needed locks are acquired before any processing is carried out. In this case, processing need not be idempotent: if it turns out to be impossible to acquire a given lock without first releasing one that was already acquired, just release all the locks and try again. Only once all needed locks are held will any processing be carried out. However, this procedure can result in  livelock  , which will be discussed in Sec- tion  6.1.2 . 
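A minimal sketch of this acquire-everything-first discipline appears below, using two hypothetical pthread mutexes rather than any locks from the text; note that the retry loop is precisely where the livelock mentioned above can arise.

#include <pthread.h>

/* Hypothetical locks protecting two resources that must be updated together. */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void update_both(void)
{
	/* Phase 1: acquire all needed locks, releasing everything and
	 * retrying if any conditional acquisition fails. */
	for (;;) {
		pthread_mutex_lock(&lock_a);
		if (pthread_mutex_trylock(&lock_b) == 0)
			break; /* both locks now held */
		pthread_mutex_unlock(&lock_a); /* release all and try again */
	}

	/* Phase 2: carry out the processing only now that all locks are held. */
	/* ... update the data protected by lock_a and lock_b ... */

	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
}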
Quick Quiz 6.9: When using the "acquire needed locks first" approach described in Section 6.1.1.6, how can livelock be avoided?
2 And, in contrast to the 1900s, mobility is the common case.
A related approach, two-phase locking [BHG87], has seen long production use in transactional database systems. In the first phase of a two-phase locking transaction, locks are acquired but not released. Once all needed locks have been acquired, the transaction enters the second phase, where locks are released, but not acquired. This locking approach allows databases to provide serializability guarantees for their transactions, in other words, to guarantee that all of the values seen and produced by the transactions are consistent with some global ordering of all the transactions. Many such systems rely on the ability to abort transactions, although this can be simplified by avoiding making any changes to shared data until all needed locks are acquired. Livelock and deadlock are issues in such systems, but practical solutions may be found in any of a number of database textbooks.

6.1.1.7 Single-Lock-at-a-Time Designs

In some cases, it is possible to avoid nesting locks, thus avoiding deadlock. For example, if a problem is perfectly partitionable, a single lock may be assigned to each partition. Then a thread working on a given partition need only acquire the one corresponding lock. Because no thread ever holds more than one lock at a time, deadlock is impossible. However, there must be some mechanism to ensure that the needed data structures remain in existence during the time that neither lock is held. One such mechanism is discussed in Section 6.4, and several others are presented in Chapter 8.

6.1.1.8 Signal/Interrupt Handlers

Deadlocks involving signal handlers are often quickly dismissed by noting that it is not legal to invoke pthread_mutex_lock() from within a signal handler [Ope97]. However, it is possible (though almost always unwise) to hand-craft locking primitives that can be invoked from signal handlers. Besides which, almost all operating-system kernels permit locks to be acquired from within interrupt handlers, which are the kernel analog to signal handlers.
The trick is to block signals (or disable interrupts, as the case may be) when acquiring any lock that might be acquired within an interrupt handler. Furthermore, if holding such a lock, it is illegal to attempt to acquire any lock that is ever acquired outside of a signal handler without blocking signals.
Quick Quiz 6.10: Why is it illegal to acquire a Lock A that is acquired outside of a signal handler without blocking signals while holding a Lock B that is acquired within a signal handler?
If a lock is acquired by the handlers for several signals, then each and every one of these signals must be blocked whenever that lock is acquired, even when that lock is acquired within a signal handler.
Quick Quiz 6.11: How can you legally block signals within a signal handler?
Unfortunately, blocking and unblocking signals can be expensive in some operating systems, notably including Linux, so performance concerns often mean that locks acquired in signal handlers are only acquired in signal handlers, and that lockless synchronization mechanisms are used to communicate between application code and signal handlers. Or that signal handlers are avoided completely except for handling fatal errors.
Quick Quiz 6.12: If acquiring locks in signal handlers is such a bad idea, why even discuss ways of making it safe?
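To make the signal-blocking discipline described above concrete, the following hedged sketch shows the application-side sequence. The handler_safe_lock() and handler_safe_unlock() functions and the choice of SIGUSR1 are hypothetical; as noted above, the lock itself would have to be a hand-crafted, handler-safe primitive rather than a POSIX mutex.

#include <pthread.h>
#include <signal.h>

/* Hypothetical hand-crafted lock that is also acquired from the
 * SIGUSR1 handler. */
extern void handler_safe_lock(void);
extern void handler_safe_unlock(void);

void update_data_shared_with_handler(void)
{
	sigset_t set, oldset;

	/* Block the signal so its handler cannot run (and self-deadlock)
	 * while this thread holds the lock. */
	sigemptyset(&set);
	sigaddset(&set, SIGUSR1);
	pthread_sigmask(SIG_BLOCK, &set, &oldset);

	handler_safe_lock();
	/* ... update data shared with the signal handler ... */
	handler_safe_unlock();

	/* Restore the previous signal mask. */
	pthread_sigmask(SIG_SETMASK, &oldset, NULL);
}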
132 1 void thread1(void) 2 { 3 retry: 4 spin_lock(&lock1); 5 do_one_thing(); 6 if (!spin_trylock(&lock2)) { 7 spin_unlock(&lock1); 8 goto retry; 9 } 10 do_another_thing(); 11 spin_unlock(&lock2); 12 spin_unlock(&lock1); 13 } 14 15 void thread2(void) 16 { 17 retry: 18 spin_lock(&lock2); 19 do_a_third_thing(); 20 if (!spin_trylock(&lock1)) { 21 spin_unlock(&lock2); 22 goto retry; 23 } 24 do_a_fourth_thing(); 25 spin_unlock(&lock1); 26 spin_unlock(&lock2); 27 } Figure 6.11: Abusing Conditional Locking 6.1.1.9 Discussion There are a large number of deadlock-avoidance strategies available to the shared- memory parallel programmer, but there are sequential programs for which none of them is a good fit. This is one of the reasons that expert programmers have more than one tool in their toolbox: locking is a powerful concurrency tool, but there are jobs better addressed with other tools. Quick Quiz 6.13:  Given an object-oriented application that passes control freely among a group of objects such that there is no straightforward locking hierarchy , 3 layered or otherwise, how can this application be parallelized? Nevertheless, the strategies described in this section have proven quite useful in many settings. 6.1.2 Livelock and Starvation Although conditional locking can be an effective deadlock-avoidance mechanism, it can be abused. Consider for example the beautifully symmetric example shown in Figure  6.11 . This example’s beauty hides an ugly livelock. To see this, consider the following sequence of events: 1. Thread 1 acquires  lock1  on line 4, then invokes  do_one_thing() . 2. Thread 2 acquires  lock2  on line 18, then invokes  do_a_third_thing() . 3.  Thread 1 attempts to acquire  lock2  on line 6, but fails because Thread 2 holds it. 3 Also known as “object-oriented spaghetti code.” 133 1 void thread1(void) 2 { 3 unsigned int wait = 1; 4 retry: 5 spin_lock(&lock1); 6 do_one_thing(); 7 if (!spin_trylock(&lock2)) { 8 spin_unlock(&lock1); 9 sleep(wait); 10 wait = wait << 1; 11 goto retry; 12 } 13 do_another_thing(); 14 spin_unlock(&lock2); 15 spin_unlock(&lock1); 16 } 17 18 void thread2(void) 19 { 20 unsigned int wait = 1; 21 retry: 22 spin_lock(&lock2); 23 do_a_third_thing(); 24 if (!spin_trylock(&lock1)) { 25 spin_unlock(&lock2); 26 sleep(wait); 27 wait = wait << 1; 28 goto retry; 29 } 30 do_a_fourth_thing(); 31 spin_unlock(&lock1); 32 spin_unlock(&lock2); 33 } Figure 6.12: Conditional Locking and Exponential Backoff  4.  Thread 2 attempts to acquire  lock1  on line 20, but fails because Thread 1 holds it. 5. Thread 1 releases  lock1  on line 7, then jumps to  retry  at line 3. 6. Thread 2 releases  lock2  on line 21, and jumps to  retry  at line 17. 7. The livelock dance repeats from the beginning. Quick Quiz 6.14:  How can the livelock shown in Figure  6.11  be avoided? Livelock can be thought of as an extreme form of starvation where a group of threads starve, rather than just one of them . 4 Livelock and starvation are serious issues in software transactional memory imple- mentations, and so the concept of   contention manager   has been introduced to encapsu- late these issues. In the case of locking, simple exponential backoff can often address livelock and starvation. The idea is to introduce exponentially increasing delays before each retry, as shown in Figure  6.12 . Quick Quiz 6.15:  What problems can you spot in the code in Figure  6.12 ? 
However, for better results, the backoff should be bounded, and even better high-contention results have been obtained via queued locking [And90], which is discussed more in Section 6.3.2. Of course, best of all is to use a good parallel design so that lock contention remains low.
4 Try not to get too hung up on the exact definitions of terms like livelock, starvation, and unfairness. Anything that causes a group of threads to fail to make adequate forward progress is a problem that needs to be fixed, regardless of what name you choose for it.

Figure 6.13: System Architecture and Lock Unfairness

6.1.3 Unfairness

Unfairness can be thought of as a less-severe form of starvation, where a subset of threads contending for a given lock are granted the lion's share of the acquisitions. This can happen on machines with shared caches or NUMA characteristics, for example, as shown in Figure 6.13. If CPU 0 releases a lock that all the other CPUs are attempting to acquire, the interconnect shared between CPUs 0 and 1 means that CPU 1 will have an advantage over CPUs 2-7. Therefore, CPU 1 will likely acquire the lock. If CPU 1 holds the lock long enough for CPU 0 to be requesting the lock by the time CPU 1 releases it and vice versa, the lock can shuttle between CPUs 0 and 1, bypassing CPUs 2-7.
Quick Quiz 6.16: Wouldn't it be better just to use a good parallel design so that lock contention was low enough to avoid unfairness?

6.1.4 Inefficiency

Locks are implemented using atomic instructions and memory barriers, and often involve cache misses. As we saw in Chapter 2, these instructions are quite expensive, roughly two orders of magnitude greater overhead than simple instructions. This can be a serious problem for locking: If you protect a single instruction with a lock, you will increase the overhead by a factor of one hundred. Even assuming perfect scalability, one hundred CPUs would be required to keep up with a single CPU executing the same code without locking.
This situation underscores the synchronization-granularity tradeoff discussed in Section 5.3, especially Figure 5.22: Too coarse a granularity will limit scalability, while too fine a granularity will result in excessive synchronization overhead.
That said, once a lock is held, the data protected by that lock can be accessed by the lock holder without interference. Acquiring a lock might be expensive, but once held, the CPU's caches are an effective performance booster, at least for large critical sections.
Quick Quiz 6.17: How might the lock holder be interfered with?

6.2 Types of Locks

There are a surprising number of types of locks, more than this short chapter can possibly do justice to. The following sections discuss exclusive locks (Section 6.2.1), reader-writer locks (Section 6.2.2), multi-role locks (Section 6.2.3), and scoped locking (Section 6.2.4).

6.2.1 Exclusive Locks

Exclusive locks are what they say they are: only one thread may hold the lock at a time. The holder of such a lock thus has exclusive access to all data protected by that lock, hence the name.
Of course, this all assumes that this lock is held across all accesses to data purportedly protected by the lock.
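For example, the following hedged sketch (the counter and its lock are hypothetical, not from any particular program) shows what "held across all accesses" means in practice: even the read-only path takes the lock, and a single unprotected access elsewhere would silently void the exclusion guarantee.

#include <pthread.h>

/* Hypothetical shared counter protected by an exclusive lock. */
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long counter;

void counter_inc(void)
{
	pthread_mutex_lock(&counter_lock);
	counter++;
	pthread_mutex_unlock(&counter_lock);
}

unsigned long counter_read(void)
{
	unsigned long ret;

	pthread_mutex_lock(&counter_lock); /* readers must take the lock too */
	ret = counter;
	pthread_mutex_unlock(&counter_lock);
	return ret;
}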
Although there are some tools that can help, the ultimate responsibility for ensuring that the lock is acquired in all necessary code paths rests with the developer.
Quick Quiz 6.18: Does it ever make sense to have an exclusive lock acquisition immediately followed by a release of that same lock, that is, an empty critical section?

6.2.2 Reader-Writer Locks

Reader-writer locks [CHP71] permit any number of readers to hold the lock concurrently on the one hand or a single writer to hold the lock on the other. In theory, then, reader-writer locks should allow excellent scalability for data that is read often and written rarely. In practice, the scalability will depend on the reader-writer lock implementation.
The classic reader-writer lock implementation involves a set of counters and flags that are manipulated atomically. This type of implementation suffers from the same problem as does exclusive locking for short critical sections: The overhead of acquiring and releasing the lock is about two orders of magnitude greater than the overhead of a simple instruction. Of course, if the critical section is long enough, the overhead of acquiring and releasing the lock becomes negligible. However, because only one thread at a time can be manipulating the lock, the required critical-section size increases with the number of CPUs.
It is possible to design a reader-writer lock that is much more favorable to readers through use of per-thread exclusive locks [HW92]. To read, a thread acquires only its own lock. To write, a thread acquires all locks. In the absence of writers, each reader incurs only atomic-instruction and memory-barrier overhead, with no cache misses, which is quite good for a locking primitive. Unfortunately, writers must incur cache misses as well as atomic-instruction and memory-barrier overhead—multiplied by the number of threads.
In short, reader-writer locks can be quite useful in a number of situations, but each type of implementation does have its drawbacks. The canonical use case for reader-writer locking involves very long read-side critical sections, preferably measured in hundreds of microseconds or even milliseconds.

6.2.3 Beyond Reader-Writer Locks

Reader-writer locks and exclusive locks differ in their admission policy: exclusive locks allow at most one holder, while reader-writer locks permit an arbitrary number of read-holders (but only one write-holder). There is a very large number of possible admission policies, one of which is that of the VAX/VMS distributed lock manager (DLM) [ST87], which is shown in Table 6.1. Blank cells indicate compatible modes, while cells containing "X" indicate incompatible modes.

Table 6.1: VAX/VMS Distributed Lock Manager Policy

                          NL    CR    CW    PR    PW    EX
  Null (Not Held, NL)
  Concurrent Read (CR)                                   X
  Concurrent Write (CW)                     X     X      X
  Protected Read (PR)                 X           X      X
  Protected Write (PW)                X     X     X      X
  Exclusive (EX)                X     X     X     X      X

The VAX/VMS DLM uses six modes. For purposes of comparison, exclusive locks use two modes (not held and held), while reader-writer locks use three modes (not held, read held, and write held).
The first mode is null, or not held.
This mode is compatible with all other modes, which is to be expected: If a thread is not holding a lock, it should not prevent any other thread from acquiring that lock. The second mode is concurrent read, which is compatible with every other mode ex- cept for exclusive. The concurrent-read mode might be used to accumulate approximate statistics on a data structure, while permitting updates to proceed concurrently. The third mode is concurrent write, which is compatible with null, concurrent read, and concurrent write. The concurrent-write mode might be used to update approximate statistics, while still permitting reads and concurrent updates to proceed concurrently. The fourth mode is protected read, which is compatible with null, concurrent read, and protected read. The protected-read mode might be used to obtain a consistent snapshot of the data structure, while permitting reads but not updates to proceed concur- rently. The fifth mode is protected write, which is compatible with null and concurrent read. The protected-write mode might be used to carry out updates to a data structure that could interfere with protected readers but which could be tolerated by concurrent readers. The sixth and final mode is exclusive, which is compatible only with null. The exclusive mode is used when it is necessary to exclude all other accesses. It is interesting to note that exclusive locks and reader-writer locks can be emulated by the VAX/VMS DLM. Exclusive locks would use only the null and exclusive modes, while reader-writer locks might use the null, protected-read, and protected-write modes. Quick Quiz 6.19:  Is there any other way for the VAX/VMS DLM to emulate a reader-writer lock? Although the VAX/VMS DLM policy has seen widespread production use for dis- tributed databases, it does not appear to be used much in shared-memory applications. 137 One possible reason for this is that the greater communication overheads of distributed databases can hide the greater overhead of the VAX/VMS DLM’s more-complex admis- sion policy. Nevertheless, the VAX/VMS DLM is an interesting illustration of just how flexible the concepts behind locking can be. It also serves as a very simple introduction to the locking schemes used by modern DBMSes, which can have more than thirty locking modes, compared to VAX/VMS’s six. 6.2.4 Scoped Locking The locking primitives discussed thus far require explicit acquisition and release prim- itives, for example,  spin_lock()  and  spin_unlock() , respectively. Another approach is to use the object-oriented “resource allocation is initialization” (RAII) pattern [ ES90 ] . 5 This pattern is often applied to auto variables in languages like C++, where the corresponding  constructor   is invoked upon entry to the object’s scope, and the corresponding  destructor   is invoked upon exit from that scope. This can be applied to locking by having the constructor acquire the lock and the destructor free it. This approach can be quite useful, in fact in 1990 I was convinced that it was the only type of locking that was needed . 6 One very nice property of RAII locking is that you don’t need to carefully release the lock on each and every code path that exits that scope, a property that can eliminate a troublesome set of bugs. However, RAII locking also has a dark side. RAII makes it quite difficult to encapsulate lock acquisition and release, for example, in iterators. 
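For readers more familiar with C than C++, the following hedged sketch shows one way to approximate RAII-style scoped locking in GNU C, using the compiler's cleanup attribute as a stand-in for a destructor. The lock and function names are hypothetical, and the attribute is a GCC/Clang extension rather than standard C.

#include <pthread.h>

static pthread_mutex_t my_lock = PTHREAD_MUTEX_INITIALIZER;

/* Cleanup handler: invoked automatically when the guarded variable
 * goes out of scope, releasing the lock on every exit path. */
static void guard_release(pthread_mutex_t **lp)
{
	pthread_mutex_unlock(*lp);
}

static pthread_mutex_t *guard_acquire(pthread_mutex_t *lp)
{
	pthread_mutex_lock(lp);
	return lp;
}

void scoped_update(void)
{
	pthread_mutex_t *guard __attribute__((cleanup(guard_release))) =
		guard_acquire(&my_lock);

	/* ... critical section: early returns still release my_lock ... */
	(void)guard;
}

The difficulty with iterators discussed next applies to this emulation just as much as to C++ RAII.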
In many iterator implementations, you would like to acquire the lock in the iterator's "start" function and release it in the iterator's "stop" function. RAII locking instead requires that the lock acquisition and release take place in the same level of scoping, making such encapsulation difficult or even impossible.
RAII locking also prohibits overlapping critical sections, because scopes must nest. This prohibition makes it difficult or impossible to express a number of useful constructs, for example, locking trees that mediate between multiple concurrent attempts to assert an event. Of an arbitrarily large group of concurrent attempts, only one need succeed, and the best strategy for the remaining attempts is for them to fail as quickly and painlessly as possible. Otherwise, lock contention becomes pathological on large systems (where "large" is many hundreds of CPUs).
Example data structures (taken from the Linux kernel's implementation of RCU) are shown in Figure 6.14. Here, each CPU is assigned a leaf rcu_node structure, and each rcu_node structure has a pointer to its parent (named, oddly enough, ->parent), up to the root rcu_node structure, which has a NULL ->parent pointer. The number of child rcu_node structures per parent can vary, but is typically 32 or 64. Each rcu_node structure also contains a lock named ->fqslock.

Figure 6.14: Locking Hierarchy

The general approach is a tournament, where a given CPU conditionally acquires its leaf rcu_node structure's ->fqslock, and, if successful, attempts to acquire that of the parent, then releases that of the child. In addition, at each level, the CPU checks a global gp_flags variable, and if this variable indicates that some other CPU has asserted the event, the first CPU drops out of the competition. This acquire-then-release sequence continues until either the gp_flags variable indicates that someone else won the tournament, one of the attempts to acquire an ->fqslock fails, or the root rcu_node structure's ->fqslock has been acquired.
5 Though more clearly expressed at http://www.stroustrup.com/bs_faq2.html#finally .
6 My later work with parallelism at Sequent Computer Systems very quickly disabused me of this misguided notion.
Simplified code to implement this is shown in Figure 6.15. The purpose of this function is to mediate between CPUs that have concurrently detected a need to invoke the do_force_quiescent_state() function. At any given time, it only makes sense for one instance of do_force_quiescent_state() to be active, so if there are multiple concurrent callers, we need at most one of them to actually invoke do_force_quiescent_state(), and we need the rest to (as quickly and painlessly as possible) give up and leave.
To this end, each pass through the loop spanning lines 7-15 attempts to advance up one level in the rcu_node hierarchy. If the gp_flags variable is already set (line 8) or if the attempt to acquire the current rcu_node structure's ->fqslock is unsuccessful (line 9), then local variable ret is set to 1.
If line 10 sees that local variable  rnp_old  is non- NULL , meaning that we hold  rnp_old ’s  ->fqs_lock , line 11 releases this lock (but only after the attempt has been made to acquire the parent rcu_node structure’s ->fqslock ). If line 12 sees that either line 8 or 9 saw a reason to give up, line 13 returns to the caller. Otherwise, we must have acquired the current rcu_node structure’s ->fqslock , so line 14 saves a pointer to this structure in local variable  rnp_old  in preparation for the next pass through the loop. If control reaches line 16, we won the tournament, and now holds the root  rcu_  node  structure’s  ->fqslock . If line 16 still sees that the global variable  gp_flags is zero, line 17 sets  gp_flags  to one, line 18 invokes  do_force_quiescent_  state() , and line 19 resets  gp_flags  back to zero. Either way, line 21 releases the root  rcu_node  structure’s  ->fqslock . Quick Quiz 6.20:  The code in Figure  6.15  is ridiculously complicated! Why not conditionally acquire a single global lock? Quick Quiz 6.21:  Wait a minute! If we “win” the tournament on line 16 of Fig- 139 1 void force_quiescent_state(struct rcu_node  * rnp_leaf) 2 { 3 int ret; 4 struct rcu_node  * rnp = rnp_leaf; 5 struct rcu_node  * rnp_old = NULL; 6 7 for (; rnp != NULL; rnp = rnp->parent) { 8 ret = (ACCESS_ONCE(gp_flags)) || 9 !raw_spin_trylock(&rnp->fqslock); 10 if (rnp_old != NULL) 11 raw_spin_unlock(&rnp_old->fqslock); 12 if (ret) 13 return; 14 rnp_old = rnp; 15 } 16 if (ACCESS_ONCE(gp_flags)) { 17 ACCESS_ONCE(gp_flags) = 1; 18 do_force_quiescent_state(); 19 ACCESS_ONCE(gp_flags) = 0; 20 } 21 raw_spin_unlock(&rnp_old->fqslock); 22 } Figure 6.15: Conditional Locking to Reduce Contention ure  6.15 , we get to do all the work of   do_force_quiescent_state() . Exactly how is that a win, really? This function illustrates the not-uncommon pattern of hierarchical locking. This pattern is quite difficult to implement using RAII locking, just like the interator encapsu- lation noted earlier, and so the lock/unlock primitives will be needed for the foreseeable future. 6.3 Locking Implementation Issues Developers are almost always best-served by using whatever locking primitives are provided by the system, for example, the POSIX pthread mutex locks [ Ope97 ,  But97 ] . Nevertheless, studying sample implementations can be helpful, as can considering the challenges posed by extreme workloads and environments. 6.3.1 Sample Exclusive-Locking Implementation Based on Atomic Exchange This section reviews the implementation shown in Figure  6.16 . The data structure for this lock is just an  int , as shown on line 1, but could be any integral type. The initial value of this lock is zero, meaning “unlocked”, as shown on line 2. Quick Quiz 6.22:  Why not rely on the C language’s default initialization of zero instead of using the explicit initializer shown on line 2 of Figure  6.16 ? Lock acquisition is carried out by the  xchg_lock()  function shown on lines 4-9. This function uses a nested loop, with the outer loop repeatedly atomically exchanging the value of the lock with the value one (meaning “locked”). If the old value was already the value one (in other words, someone else already holds the lock), then the inner loop (lines 7-8) spins until the lock is available, at which point the outer loop makes another attempt to acquire the lock. Quick Quiz 6.23:  Why bother with the inner loop on lines 7-8 of Figure  6.16 ? Why not simply repeatedly do the atomic exchange operation on line 6? 
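The xchg() primitive assumed by Figure 6.16 is not defined in this excerpt. One plausible mapping, shown here purely as a hedged sketch rather than as the book's actual CodeSamples definition, is onto the GCC/Clang __atomic builtins:

/* Atomically store v into *p and return the old value.  The
 * sequentially consistent ordering is a conservative choice; a
 * production lock would document the ordering it actually needs. */
static inline int xchg(int *p, int v)
{
	return __atomic_exchange_n(p, v, __ATOMIC_SEQ_CST);
}

With such a definition, xchg_lock() in Figure 6.16 acquires the lock when the exchange returns zero, and xchg_unlock() releases it by exchanging zero back in.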
140 1 typedef int xchglock_t; 2 #define DEFINE_XCHG_LOCK(n) xchglock_t n = 0 3 4 void xchg_lock(xchglock_t  * xp) 5 { 6 while (xchg(xp, 1) == 1) { 7 while ( * xp == 1) 8 continue; 9 } 10 } 11 12 void xchg_unlock(xchglock_t  * xp) 13 { 14 (void)xchg(xp, 0); 15 } Figure 6.16: Sample Lock Based on Atomic Exchange Lock release is carried out by the xchg_unlock() function shown on lines 12-15. Line 14 atomically exchanges the value zero (“unlocked”) into the lock, thus marking it as having been released. Quick Quiz 6.24:  Why not simply store zero into the lock word on line 14 of  Figure  6.16 ? This lock is a simple example of a test-and-set lock  [ SR84 ], but very similar mecha- nisms have been used extensively as pure spinlocks in production. 6.3.2 Other Exclusive-Locking Implementations There are a great many other possible implementations of locking based on atomic instructions, many of which are reviewed by Mellor-Crummey and Scott [ MCS91 ]. These implementations represent different points in a multi-dimensional design trade- off [ McK96b ]. For example, the atomic-exchange-based test-and-set lock presented in the previous section works well when contention is low and has the advantage of small memory footprint. It avoids giving the lock to threads that cannot use it, but as a result can suffer from unfairness or even starvation at high contention levels. Incontrast, ticketlock [ MCS91 ], whichisusedintheLinuxkernel, avoidsunfairness at high contention levels, but as a consequence of its first-in-first-out discipline can grant the lock to a thread that is currently unable to use it, for example, due to being preempted, interrupted, or otherwise out of action. However, it is important to avoid getting too worried about the possibility of preemption and interruption, given that this preemption and interruption might just as well happen just after the lock was acquired . 7 All locking implementations where waiters spin on a single memory location, including both test-and-set locks and ticket locks, suffer from performance problems at high contention levels. The problem is that the thread releasing the lock must update the value of the corresponding memory location. At low contention, this is not a problem: The corresponding cache line is very likely still local to and writeable by the thread holding the lock. In contrast, at high levels of contention, each thread attempting to acquire the lock will have a read-only copy of the cache line, and the lock holder will need to invalidate all such copies before it can carry out the update that releases the lock. In general, the more CPUs and threads there are, the greater the overhead incurred when 7 Besides, the best way of handling high lock contention is to avoid it in the first place! However, there are some situation where high lock contention is the lesser of the available evils, and in any case, studying schemes that deal with high levels of contention is good mental exercise. 141 releasing the lock under conditions of high contention. This negative scalability has motivated a number of different queued-lock implemen- tations [ And90 ,  GT90 ,  MCS91 ,  WKS94 ,  Cra93 ,  MLH94 ,  TS93 ] . Queued locks avoid high cache-invalidation overhead by assigning each thread a queue element. These queue elements are linked together into a queue that governs the order that the lock will be granted to the waiting threads. 
The key point is that each thread spins on its own queue element, so that the lock holder need only invalidate the first element from the next thread’s CPU’s cache. This arrangement greatly reduces the overhead of lock handoff at high levels of contention. More recent queued-lock implementations also take the system’s architecture into account, preferentially granting locks locally, while also taking steps to avoid starva- tion  [ SSVM02 ,  RH03 ,  RH02 ,  JMRR02 ,  MCM02 ]. Many of these can be thought of as analogous to the elevator algorithms traditionally used in scheduling disk I/O. Unfortunately, the same scheduling logic that improves the efficiency of queued locks at high contention also increases their overhead at low contention. Beng-Hong Lim and Anant Agarwal therefore combined a simple test-and-set lock with a queued lock, using the test-and-set lock at low levels of contention and switching to the queued lock at high levels of contention [ LA94 ], thus getting low overhead at low levels of contention and getting fairness and high throughput at high levels of contention. Browning et al. took a similar approach, but avoided the use of a separate flag, so that the test-and- set fast path uses the same sequence of instructions that would be used in a simple test-and-set lock [ BMMM05] . This approach has been used in production. Another issue that arises at high levels of contention is when the lock holder is delayed, especially when the delay is due to preemption, which can result in  priority inversion , where a low-priority thread holds a lock, but is preempted by a medium priority CPU-bound thread, which results in a high-priority process blocking while attempting to acquire the lock. The result is that the CPU-bound medium-priority process is preventing the high-priority process from running. One solution is  priority inheritance  [ LR80 ], which has been widely used for real-time computing  [ SRL90 , Cor06b ], despite some lingering controversy over this practice  [Yod04 ,  Loc02 ]. Another way to avoid priority inversion is to prevent preemption while a lock is held. Because preventing preemption while locks are held also improves throughput, most proprietary UNIX kernels offer some form of scheduler-conscious synchronization mechanism [ KWS97 ], largely due to the efforts of a certain sizable database vendor. These mechanisms usually take the form of a hint that preemption would be inappro- priate. These hints frequently take the form of a bit set in a particular machine register, which enables extremely low per-lock-acquisition overhead for these mechanisms. In contrast, Linux avoids these hints, instead getting similar results from a mechanism called  futexes  [ FRK02,  Mol06,  Ros06,  Dre11 ]. Interestingly enough, atomic instructions are not strictly needed to implement locks [ Dij65 ,  Lam74 ] . An excellent exposition of the issues surrounding locking imple- mentations based on simple loads and stores may be found in Herlihy’s and Shavit’s textbook  [ HS08 ]. The main point echoed here is that such implementations currently have little practical application, although a careful study of them can be both entertaining and enlightening. Nevertheless, with one exception described below, such study is left as an exercise for the reader. Gamsa et al. [ GKAS99 ,  Section 5.3] describe a token-based mechanism in which a token circulates among the CPUs. When the token reaches a given CPU, it has exclusive access to anything protected by that token. 
There are any number of schemes that may 142 be used to implement the token-based mechanism, for example: 1.  Maintain a per-CPU flag, which is initially zero for all but one CPU. When a CPU’s flag is non-zero, it holds the token. When it finishes with the token, it zeroes its flag and sets the flag of the next CPU to one (or to any other non-zero value). 2.  Maintain a per-CPU counter, which is initially set to the corresponding CPU’s number, which we assume to range from zero to  N  − 1 , where  N   is the number of CPUs in the system. When a CPU’s counter is greater than that of the next CPU (taking counter wrap into account), the first CPU holds the token. When it is finished with the token, it sets the next CPU’s counter to a value one greater than its own counter. Quick Quiz 6.25:  How can you tell if one counter is greater than another, while accounting for counter wrap? Quick Quiz 6.26:  Which is better, the counter approach or the flag approach? This lock is unusual in that a given CPU cannot necessarily acquire it immediately, even if no other CPU is using it at the moment. Instead, the CPU must wait until the token comes around to it. This is useful in cases where CPUs need periodic access to the critical section, but can tolerate variances in token-circulation rate. Gamsa et al. [ GKAS99 ] used it to implement a variant of read-copy update (see Section  8.3 ), but it could also be used to protect periodic per-CPU operations such as flushing per-CPU caches used by memory allocators [ MS93 ], garbage-collecting per-CPU data structures, or flushing per-CPU data to shared storage (or to mass storage, for that matter). As increasing numbers of people gain familiarity with parallel hardware and paral- lelize increasing amounts of code, we can expect more special-purpose locking primi- tives to appear. Nevertheless, you should carefully consider this important safety tip: Use the standard synchronization primitives whenever humanly possible. The big ad- vantage of the standard synchronization primitives over roll-your-own efforts is that the standard primitives are typically  much  less bug-prone. 8 6.4 Lock-Based Existence Guarantees A key challenge in parallel programming is to provide  existence guarantees  [ GKAS99 ] , so that attempts to access a given object can rely on that object being in existence throughout a given access attempt. In some cases, existence guarantees are implicit: 1.  Global variables and static local variables in the base module will exist as long as the application is running. 2.  Global variables and static local variables in a loaded module will exist as long as that module remains loaded. 3.  A module will remain loaded as long as at least one of its functions has an active instance. 4.  A given function instance’s on-stack variables will exist until that instance returns. 8 And yes, I have done at least my share of roll-your-own synchronization primitives. However, you will notice that my hair is much greyer than it was before I started doing that sort of work. Coincidence? Maybe. But are you  really  willing to risk your own hair turning prematurely grey? 143 1 int delete(int key) 2 { 3 int b; 4 struct element  * p; 5 6 b = hashfunction(key); 7 p = hashtable[b]; 8 if (p == NULL || p->key != key) 9 return 0; 10 spin_lock(&p->lock); 11 hashtable[b] = NULL; 12 spin_unlock(&p->lock); 13 kfree(p); 14 return 1; 15 } Figure 6.17: Per-Element Locking Without Existence Guarantees 5.  
If you are executing within a given function or have been called (directly or indirectly) from that function, then the given function has an active instance. These implicit existence guarantees are straightforward, though bugs involving implicit existence guarantees really can happen. Quick Quiz 6.27:  How can relying on implicit existence guarantees result in a bug? But the more interesting—and troublesome—guarantee involves heap memory: A dynamically allocated data structure will exist until it is freed. The problem to be solved is to synchronize the freeing of the structure with concurrent accesses to that same structure. One way to do this is with  explicit guarantees , such as locking. If a given structure may only be freed while holding a given lock, then holding that lock guarantees that structure’s existence. But this guarantee depends on the existence of the lock itself. One straightforward way to guarantee the lock’s existence is to place the lock in a global variable, but global locking has the disadvantage of limiting scalability. One way of providing scalability that improves as the size of the data structure increases is to place a lock in each element of the structure. Unfortunately, putting the lock that is to protect a data element in the data element itself is subject to subtle race conditions, as shown in Figure  6.17. Quick Quiz 6.28:  What if the element we need to delete is not the first element of  the list on line 8 of Figure  6.17 ? Quick Quiz 6.29:  What race condition can occur in Figure  6.17 ? One way to fix this example is to use a hashed set of global locks, so that each hash bucket has its own lock, as shown in Figure  6.18.  This approach allows acquiring the proper lock (on line 9) before gaining a pointer to the data element (on line 10). Although this approach works quite well for elements contained in a single partitionable data structure such as the hash table shown in the figure, it can be problematic if a given data element can be a member of multiple hash tables or given more-complex data structures such as trees or graphs. These problems can be solved, in fact, such solutions form the basis of lock-based software transactional memory implementa- tions [ ST95 ,  DSS06 ]. However, Chapter  8  describes simpler—and faster—ways of  providing existence guarantees. 144 1 int delete(int key) 2 { 3 int b; 4 struct element  * p; 5 spinlock_t  * sp; 6 7 b = hashfunction(key); 8 sp = &locktable[b]; 9 spin_lock(sp); 10 p = hashtable[b]; 11 if (p == NULL || p->key != key) { 12 spin_unlock(sp); 13 return 0; 14 } 15 hashtable[b] = NULL; 16 spin_unlock(sp); 17 kfree(p); 18 return 1; 19 } Figure 6.18: Per-Element Locking With Lock-Based Existence Guarantees 6.5 Locking: Hero or Villain? As is often the case in real life, locking can be either hero or villain, depending on how it is used and on the problem at hand. In my experience, those writing whole applications are happy with locking, those writing parallel libraries are less happy, and those parallelizing existing sequential libraries are extremely unhappy. The following sections discuss some reasons for these differences in viewpoints. 6.5.1 Locking For Applications: Hero! When writing an entire application (or entire kernel), developers have full control of the design, including the synchronization design. 
Assuming that the design makes good use of partitioning, as discussed in Chapter  5,  locking can be an extremely effective synchronization mechanism, as demonstrated by the heavy use of locking in production- quality parallel software. Nevertheless, although such software usually bases most of its synchronization design on locking, such software also almost always makes use of other synchroniza- tion mechanisms, including special counting algorithms (Chapter  4 ), data ownership (Chapter  7 ), reference counting (Section  8.1) , sequence locking (Section  8.2) , and read-copy update (Section  8.3 ). In addition, practitioners use tools for deadlock detec- tion  [ Cor06a ] , lock acquisition/release balancing [ Cor04 ] , cache-miss analysis  [ The11 ] , hardware-counter-based profiling [ EGMdB11,  The12 ], and many more besides. Given careful design, use of a good combination of synchronization mechanisms, and good tooling, locking works quite well for applications and kernels. 6.5.2 Locking For Parallel Libraries: Just Another Tool Unlike applications and kernels, the designer of a library cannot know the locking design of the code that the library will be interacting with. In fact, that code might not be written for years to come. Library designers therefore have less control and must exercise more care when laying out their synchronization design. Deadlock is of course of particular concern, and the techniques discussed in Sec- tion  6.1.1  need to be applied. One popular deadlock-avoidance strategy is therefore 145 to ensure that the library’s locks are independent subtrees of the enclosing program’s locking hierarchy. However, this can be harder than it looks. One complication was discussed in Section  6.1.1.2,  namely when library functions call into application code, with qsort() ’s comparison-function argument being a case in point. Another complication is the interaction with signal handlers. If an application signal handler is invoked from a signal received within the library function, deadlock can ensue just as surely as if the library function had called the signal handler directly. A final complication occurs for those library functions that can be used between a fork()  /  exec()  pair, for example, due to use of the  system()  function. In this case, if your library function was holding a lock at the time of the  fork() , then the child process will begin life with that lock held. Because the thread that will release the lock is running in the parent but not the child, if the child calls your library function, deadlock will ensue. The following strategies may be used to avoid deadlock problems in these cases: 1. Don’t use either callbacks or signals. 2. Don’t acquire locks from within callbacks or signal handlers. 3. Let the caller control synchronization. 4. Parameterize the library API to delegate locking to caller. 5. Explicitly avoid callback deadlocks. 6. Explicitly avoid signal-handler deadlocks. Each of these strategies is discussed in on of the following sections. 6.5.2.1 Use Neither Callbacks Nor Signals If a library function avoids callbacks and the application as a whole avoids signals, then any locks acquired by that library function will be leaves of the locking-hierarchy tree. This arrangement avoids deadlock, as discussed in Section  6.1.1.1.  
Although this strategy works extremely well where it applies, there are some applications that must use signal handlers, and there are some library functions (such as the  qsort() function discussed in Section  6.1.1.2)  that require callbacks. The strategy described in the next section can often be used in these cases. 6.5.2.2 Avoid Locking in Callbacks and Signal Handlers If neither callbacks nor signal handlers acquire locks, then they cannot be involved in deadlock cycles, which allows straightforward locking hierarchies to once again consider library functions to be leaves on the locking-hierarchy tree. This strategy works very well for most uses of   qsort , whose callbacks usually simply compare the two values passed in to them. This strategy also works wonderfully for many signal handlers, especially given that acquiring locks from within signal handlers is generally frowned upon [ Gro01 ] , 9 but can fail if the application needs to manipulate complex data structures from a signal handler. Here are some ways to avoid acquiring locks in signal handlers even if complex data structures must be manipulated: 9 But the standard’s words do not stop clever coders from creating their own home-brew locking primitives from atomic operations. 146 1.  Use simple data structures based on non-blocking synchronization, as will be discussed in Section  13.3.1 . 2.  If the data structures are too complex for reasonable use of non-blocking syn- chronization, create a queue that allows non-blocking enqueue operations. In the signal handler, instead of manipulating the complex data structure, add an element to the queue describing the required change. A separate thread can then remove elements from the queue and carry out the required changes using normal locking. There are a number of readily available implementations of concurrent queues [ KLP12,  Des09,  MS96 ]. This strategy should be enforced with occasional manual or (preferably) automated inspections of callbacks and signal handlers. When carrying out these inspections, be wary of clever coders who might have (unwisely) created home-brew locks from atomic operations. 6.5.2.3 Caller Controls Synchronization Let the caller control synchronization. This works extremely well when the library functions are operating on independent caller-visible instances of a data structure, each of which may be synchronized separately. For example, if the library functions operate on a search tree, and if the application needs a large number of independent search trees, then the application can associate a lock with each tree. The application then acquires and releases locks as needed, so that the library need not be aware of parallelism at all. Instead, the application controls the parallelism, so that locking can work very well, as was discussed in Section  6.5.1. However, this strategy fails if the library implements a data structure that requires internal concurrency, for example, a hash table or a parallel sort. In this case, the library absolutely must control its own synchronization. 6.5.2.4 Parameterize Library Synchronization The idea here is to add arguments to the library’s API to specify which locks to acquire, how to acquire and release them, or both. 
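For example, a library might expose an API along the following lines. This is a hypothetical sketch rather than any existing library's interface; all of the names are illustrative, and a pthread mutex merely stands in for whatever lock the caller prefers:

#include <pthread.h>

struct lib_sync_ops {
  void (*acquire)(void *lock);  /* Invoked wherever the library needs exclusion. */
  void (*release)(void *lock);
  void *lock;                   /* Caller-supplied lock, opaque to the library. */
};

struct lib_counter {
  long value;
};

/* The caller decides how to synchronize; the library decides where. */
static long lib_counter_add(struct lib_counter *c, long delta,
                            const struct lib_sync_ops *sync)
{
  long newval;

  sync->acquire(sync->lock);
  newval = (c->value += delta);
  sync->release(sync->lock);
  return newval;
}

/* One possible caller-side implementation, which could equally well
 * block signals around the lock acquisition. */
static void my_acquire(void *lock)
{
  pthread_mutex_lock(lock);
}

static void my_release(void *lock)
{
  pthread_mutex_unlock(lock);
}

The application would then initialize a struct lib_sync_ops with my_acquire, my_release, and a pointer to a lock chosen to fit its own locking hierarchy.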
This strategy allows the application to take on the global task of avoiding deadlock by specifying which locks to acquire (by passing in pointers to the locks in question) and how to acquire them (by passing in pointers to lock acquisition and release functions), but also allows a given library function to control its own concurrency by deciding where the locks should be acquired and released. In particular, this strategy allows the lock acquisition and release functions to block signals as needed without the library code needing to be concerned with of which signals need to be blocked by which locks. The separation of concerns used by this strategy can be quite effective, but in some cases the strategies laid out in the following sections can work better. That said, passing explicit pointers to locks to external APIs must be very carefully considered, as discussed in Section  6.1.1.4.  Although this practice is sometimes the right thing to do, you should do yourself a favor by looking into alternative designs first. 6.5.2.5 Explicitly Avoid Callback Deadlocks The basic rule behind this strategy was discussed in Section  6.1.1.2:  “Release all locks before invoking unknown code.” This is usually the best approach because it allows 147 the application to ignore the library’s locking hierarchy: the library remains a leaf or isolated subtree of the application’s overall locking hierarchy. In cases where it is not possible to release all locks before invoking unknown code, the layered locking hierarchies described in Section  6.1.1.3  can work well. For example, if the unknown code is a signal handler, this implies that the library function block signals across all lock acquisitions, which can be complex and slow. Therefore, in cases where signal handlers (probably unwisely) acquire locks, the strategies in the next section may prove helpful. 6.5.2.6 Explicitly Avoid Signal-Handler Deadlocks Signal-handler deadlocks can be explicitly avoided as follows: 1.  If the application invokes the library function from within a signal handler, then that signal must be blocked every time that the library function is invoked from outside of a signal handler. 2.  If the application invokes the library function while holding a lock acquired within a given signal handler, then that signal must be blocked every time that the library function is called outside of a signal handler. These rules can be enforced by using tools similar to the Linux kernel’s lockdep lock dependency checker [ Cor06a ]. One of the great strengths of lockdep is that it is not fooled by human intuition [ Ros11 ]. 6.5.2.7 Library Functions Used Between fork() and exec() As noted earlier, if a thread executing a library function is holding a lock at the time that some other thread invokes  fork() , the fact that the parent’s memory is copied to create the child means that this lock will be born held in the child’s context. The thread that will release this lock is running in the parent, but not in the child, which means that the child’s copy of this lock will never be released. Therefore, any attempt on the part of the child to invoke that same library function will result in deadlock. One approach to this problem would be to have the library function check to see if  the owner of the lock is still running, and if not, “breaking” the lock by re-initializing and then acquiring it. However, this approach has a couple of vulnerabilities: 1.  
The data structures protected by that lock are likely to be in some intermedi- ate state, so that naively breaking the lock might result in arbitrary memory corruption. 2.  If the child creates additional threads, two threads might break the lock concur- rently, with the result that both threads believe they own the lock. This could again result in arbitrary memory corruption. The  atfork()  function is provided to help deal with these situations. The idea is to register a triplet of functions, one to be called by the parent before the  fork() , one to be called by the parent after the  fork() , and one to be called by the child after the fork() . Appropriate cleanups can then be carried out at these three points. Be warned, however, that coding of   atfork()  handlers is quite subtle in general. The cases where  atfork()  works best are cases where the data structure in question can simply be re-initialized by the child. 148 6.5.2.8 Parallel Libraries: Discussion Regardless of the strategy used, the description of the library’s API must include a clear description of that strategy and how the caller should interact with that strategy. In short, constructing parallel libraries using locking is possible, but not as easy as constructing a parallel application. 6.5.3 Locking For Parallelizing Sequential Libraries: Villain! With the advent of readily available low-cost multicore systems, a common task is parallelizing an existing library that was designed with only single-threaded use in mind. This all-to-common disregard for parallelism can result in a library API that is severely flawed from a parallel-programming viewpoint. Candidate flaws include: 1. Implicit prohibition of partitioning. 2. Callback functions requiring locking. 3. Object-oriented spaghetti code. These flaws and the consequences for locking are discussed in the following sections. 6.5.3.1 Partitioning Prohibited Suppose that you were writing a single-threaded hash-table implementation. It is easy and fast to maintain an exact count of the total number of items in the hash table, and also easy and fast to return this exact count on each addition and deletion operation. So why not? One reason is that exact counters do not perform or scale well on multicore systems, as was seen in Chapter  4.  As a result, the parallelized implementation of the hash table will not perform or scale well. So what can be done about this? One approach is to return an approximate count, using one of the algorithms from Chapter  4 . Another approach is to drop the element count altogether. Either way, it will be necessary to inspect uses of the hash table to see why the addition and deletion operations need the exact count. Here are a few possibilities: 1.  Determining when to resize the hash table. In this case, an approximate count should work quite well. It might also be useful to trigger the resizing operation from the length of the longest chain, which can be computed and maintained in a nicely partitioned per-chain manner. 2.  Producing an estimate of the time required to traverse the entire hash table. An approximate count works well in this case, also. 3.  For diagnostic purposes, for example, to check for items being lost when trans- ferring them to and from the hash table. This clearly requires an exact count. However, given that this usage is diagnostic in nature, it might suffice to maintain the lengths of the hash chains, then to infrequently sum them up while locking out addition and deletion operations. 
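As a rough illustration of the partitioned counting mentioned in the first and last of these possibilities, each bucket might carry its own count, protected by that bucket's lock. The names and structure below are illustrative only, and do not correspond to any particular hash-table implementation:

#include <pthread.h>

#define NBUCKETS 1024

struct element;  /* Defined elsewhere; only pointers are needed here. */

struct bucket {
  pthread_mutex_t lock;
  struct element *head;
  unsigned long nelems;  /* Length of this chain, updated under ->lock. */
};

static struct bucket hashtable[NBUCKETS];

static void hashtable_init(void)
{
  int i;

  for (i = 0; i < NBUCKETS; i++)
    pthread_mutex_init(&hashtable[i].lock, NULL);
}

/* Called by the addition and deletion code with b->lock held. */
static void bucket_count_add(struct bucket *b, long delta)
{
  b->nelems += delta;
}

/* Sum of the per-bucket counts: each value is exact when sampled, but the
 * total can be stale by the time the loop finishes, so this is an
 * approximate count.  A diagnostic-quality exact count would instead hold
 * all bucket locks across the entire loop, locking out additions and
 * deletions while summing. */
static unsigned long table_count_approx(void)
{
  unsigned long sum = 0;
  int i;

  for (i = 0; i < NBUCKETS; i++) {
    pthread_mutex_lock(&hashtable[i].lock);
    sum += hashtable[i].nelems;
    pthread_mutex_unlock(&hashtable[i].lock);
  }
  return sum;
}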
149 It turns out that there is now a strong theoretical basis for some of the constraints that performance and scalability place on a parallel library’s APIs  [ AGH + 11a ,  AGH + 11b , McK11b ] . Anyone designing a parallel library needs to pay close attention to those constraints. Although it is all too easy to blame locking for what are really problems due to a concurrency-unfriendly API, doing so is not helpful. On the other hand, one has little choice but to sympathize with the hapless developer who made this choice in (say) 1985. It would have been a rare and courageous developer to anticipate the need for parallelism at that time, and it would have required an even more rare combination of  brilliance and luck to actually arrive at a good parallel-friendly API. Times change, and code must change with them. That said, there might be a huge number of users of a popular library, in which case an incompatible change to the API would be quite foolish. Adding a parallel-friendly API to complement the existing heavily used sequential-only API is probably the best course of action in this situation. Nevertheless, human nature being what it is, we can expect our hapless developer to be more likely to complain about locking than about his or her own poor (though understandable) API design choices. 6.5.3.2 Deadlock-Prone Callbacks Sections  6.1.1.2,  6.1.1.3,  and  6.5.2  described how undisciplined use of callbacks can result in locking woes. These sections also described how to design your library function to avoid these problems, but it is unrealistic to expect a 1990s programmer with no experience in parallel programming to have followed such a design. Therefore, someone attempting to parallelize an existing callback-heavy single-threaded library will likely have many opportunities to curse locking’s villainy. If there are a very large number of uses of a callback-heavy library, it may be wise to again add a parallel-friendly API to the library in order to allow existing users to convert their code incrementally. Alternatively, some advocate use of transactional memory in these cases. While the jury is still out on transactional memory, Section  15.2  discusses its strengths and weaknesses. It is important to note that hardware transactional memory (discussed in Section  15.3 ) cannot help here unless the hardware transactional memory implementation provides forward-progress guarantees, which few do. Other alternatives that appear to be quite practical (if less heavily hyped) include the methods discussed in Sections  6.1.1.5,  and  6.1.1.6 , as well as those that will be discussed in Chapters  7  and  8. 6.5.3.3 Object-Oriented Spaghetti Code Object-oriented programming went mainstream sometime in the 1980s or 1990s, and as a result there is a huge amount of object-oriented code in production, much of it single-threaded. Although object orientation can be a valuable software technique, undisciplined use of objects can easily result in object-oriented spaghetti code. In object- oriented spaghetti code, control flits from object to object in an essentially random manner, making the code hard to understand and even harder, and perhaps impossible, to accommodate a locking hierarchy. Although many might argue that such code should be cleaned up in any case, such things are much easier to say than to do. 
If you are tasked with parallelizing such a beast, you can reduce the number of opportunities to curse locking by using the techniques described in Sections  6.1.1.5,  and  6.1.1.6,  as well as those that will be discussed in Chapters  7  and  8.  This situation appears to be the use case that inspired transactional 150 memory, so it might be worth a try as well. That said, the choice of synchronization mechanism should be made in light of the hardware habits discussed in Chapter  2.  After all, if the overhead of the synchronization mechanism is orders of magnitude more than that of the operations being protected, the results are not going to be pretty. And that leads to a question well worth asking in these situations: Should the code remain sequential? For example, perhaps parallelism should be introduced at the process level rather than the thread level. In general, if a task is proving extremely hard, it is worth some time spent thinking about not only alternative ways to accomplish that particular task, but also alternative tasks that might better solve the problem at hand. 6.6 Summary Locking is perhaps the most widely used and most generally useful synchronization tool. However, it works best when designed into an application or library from the beginning. Given the large quantity of pre-existing single-threaded code that might need to one day run in parallel, locking should therefore not be the only tool in your parallel-programming toolbox. The next few chapters will discuss other tools, and how they can best be used in concert with locking and with each other. 151 152 Chapter 7 Data Ownership One of the simplest ways to avoid the synchronization overhead that comes with locking is to parcel the data out among the threads (or, in the case of kernels, CPUs) so that a given piece of data is accessed and modified by only one of the threads. This approach is used extremely heavily, in fact, it is one usage pattern that even novices use almost instinctively. In fact, it is used so heavily that this chapter will not introduce any new examples, but will instead recycle examples from previous chapters. Quick Quiz 7.1:  What form of data ownership is extremely difficult to avoid when creating shared-memory parallel programs (for example, using pthreads) in C or C++? There are a number of approaches to data ownership. Section  7.1  presents the logical extreme in data ownership, where each thread has its own private address space. Section  7.2  looks at the opposite extreme, where the data is shared, but different threads own different access rights to the data. Section  7.3  describes function shipping, which is a way of allowing other threads to have indirect access to data owned by a particular thread. Section  7.4  describes how designated threads can be assigned ownership of a specified function and the related data. Section  7.5  discusses improving performance by transforming algorithms with shared data to instead use data ownership. Finally, Section  7.6  lists a few software environments that feature data ownership as a first-class citizen. 7.1 Multiple Processes Section  3.1  introduced the following example: 1 compute_it 1 > compute_it.1.out & 2 compute_it 2 > compute_it.2.out & 3 wait 4 cat compute_it.1.out 5 cat compute_it.2.out This example runs two instances of the  compute_it  program in parallel, as separate processes that do not share memory. Therefore, all data in a given process is owned by that process, so that almost the entirety of data in the above example is owned. 
This approach almost entirely eliminates synchronization overhead. The resulting combination of extreme simplicity and optimal performance is obviously quite 153 attractive. Quick Quiz 7.2:  What synchronization remains in the example shown in Sec- tion  7.1 ? Quick Quiz 7.3:  Is there any shared data in the example shown in Section  7.1 ? This same pattern can be written in C as well as in  sh , as illustrated by Figures  3.2 and  3.3 . The next section discusses use of data ownership in shared-memory parallel pro- grams. 7.2 Partial Data Ownership and pthreads Chapter  4  makes heavy use of data ownership, but adds a twist. Threads are not allowed to modify data owned by other threads, but they are permitted to read it. In short, the use of shared memory allows more nuanced notions of ownership and access rights. For example, consider the per-thread statistical counter implementation shown in Figure  4.9  on page  51.  Here,  inc_count()  updates only the corresponding thread’s instance of   counter , while  read_count()  accesses, but does not modify, all threads’ instances of   counter . Quick Quiz 7.4:  Does it ever make sense to have partial data ownership where each thread reads only its own instance of a per-thread variable, but writes to other threads’ instances? Pure data ownership is also both common and useful, for example, the per-thread memory-allocator caches discussed in Section  5.4.3  starting on page  105 . In this algorithm, each thread’s cache is completely private to that thread. 7.3 Function Shipping The previous section described a weak form of data ownership where threads reached out to other threads’ data. This can be thought of as bringing the data to the functions that need it. An alternative approach is to send the functions to the data. Such an approach is illustrated in Section  4.4.3  beginning on page  68,  in particular the  flush_local_count_sig()  and  flush_local_count()  functions in Figure  4.24  on page  70 . The  flush_local_count_sig()  function is a signal handler that acts as the shipped function. The  pthread_kill()  function in  flush_local_count() sends the signal—shipping the function—and then waits until the shipped function executes. This shipped function has the not-unusual added complication of needing to interact with any concurrently executing  add_count()  or  sub_count()  functions (see Figure  4.25  on page  71  and Figure  4.26  on page  72 ). Quick Quiz 7.5:  What mechanisms other than POSIX signals may be used for function shipping? 7.4 Designated Thread The earlier sections describe ways of allowing each thread to keep its own copy or its own portion of the data. In contrast, this section describes a functional-decomposition approach, where a special designated thread owns the rights to the data that is required 154 to do its job. The eventually consistent counter implementation described in Sec- tion  4.2.3  provides an example. This implementation has a designated thread that runs the  eventual()  function shown on lines 15-32 of Figure  4.8 . This  eventual() thread periodically pulls the per-thread counts into the global counter, so that accesses to the global counter will, as the name says, eventually converge on the actual value. Quick Quiz 7.6:  But none of the data in the  eventual()  function shown on lines 15-32 of Figure  4.8  is actually owned by the  eventual()  thread! In just what way is this data ownership??? 
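To make this division of ownership concrete, here is a rough sketch in the spirit of the eventual() approach described above. It is not the code from Figure 4.8; the names are illustrative, and C11 atomics stand in for the book's primitives:

#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

#define NR_THREADS 8

static _Atomic unsigned long per_thread_count[NR_THREADS]; /* One slot per worker. */
static _Atomic unsigned long global_count;                 /* Owned by the designated thread. */

/* Each worker updates only its own slot, so no atomic read-modify-write is
 * needed; relaxed accesses merely keep the compiler from caching or tearing
 * the values. */
static void inc_count(int tid)
{
  unsigned long tmp;

  tmp = atomic_load_explicit(&per_thread_count[tid], memory_order_relaxed);
  atomic_store_explicit(&per_thread_count[tid], tmp + 1, memory_order_relaxed);
}

/* Readers use the global counter, which can lag the true value slightly. */
static unsigned long read_count(void)
{
  return atomic_load_explicit(&global_count, memory_order_relaxed);
}

/* The designated thread, started once via pthread_create(), owns
 * global_count and periodically refreshes it from the per-thread slots. */
static void *eventual_thread(void *arg)
{
  unsigned long sum;
  int i;

  (void)arg;
  for (;;) {
    sum = 0;
    for (i = 0; i < NR_THREADS; i++)
      sum += atomic_load_explicit(&per_thread_count[i], memory_order_relaxed);
    atomic_store_explicit(&global_count, sum, memory_order_relaxed);
    usleep(1000);  /* Bounds how stale read_count() can be. */
  }
  return NULL;
}

Each worker owns its slot of per_thread_count for updates, while the designated thread owns global_count, so no locks are required.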
7.5 Privatization One way of improving the performance and scalability of a shared-memory parallel program is to transform it so as to convert shared data to private data that is owned by a particular thread. An excellent example of this is shown in the answer to one of the Quick Quizzes in Section  5.1.1,  which uses privatization to produce a solution to the Dining Philosophers problem with much better performance and scalability than that of the standard textbook solution. The original problem has five philosophers sitting around the table with one fork between each adjacent pair of philosophers, which permits at most two philosophers to eat concurrently. We can trivially privatize this problem by providing an additional five forks, so that each philosopher has his or her own private pair of forks. This allows all five philosophers to eat concurrently, and also offers a considerable reduction in the spread of certain types of disease. In other cases, privatization imposes costs. For example, consider the simple limit counter shown in Figure  4.12  on page  56 . This is an example of an algorithm where threads can read each others’ data, but are only permitted to update their own data. A quick review of the algorithm shows that the only cross-thread accesses are in the summation loop in  read_count() . If this loop is eliminated, we move to the more-efficient pure data ownership, but at the cost of a less-accurate result from read_count() . Quick Quiz 7.7:  Is it possible to obtain greater accuracy while still maintaining full privacy of the per-thread data? In short, privatization is a powerful tool in the parallel programmer’s toolbox, but it must nevertheless be used with care. Just like every other synchronization primitive, it has the potential to increase complexity while decreasing performance and scalability. 7.6 Other Uses of Data Ownership Data ownership works best when the data can be partitioned so that there is little or no need for cross thread access or update. Fortunately, this situation is reasonably common, and in a wide variety of parallel-programming environments. Examples of data ownership include: 1. All message-passing environments, such as MPI [ MPI08]  and BOINC [ UoC08] . 2. Map-reduce  [Jac08 ]. 3.  Client-server systems, including RPC, web services, and pretty much any system with a back-end database server. 155 4. Shared-nothing database systems. 5. Fork-join systems with separate per-process address spaces. 6. Process-based parallelism, such as the Erlang language. 7.  Private variables, for example, C-language on-stack auto variables, in threaded environments. Data ownership is perhaps the most underappreciated synchronization mechanism in existence. When used properly, it delivers unrivaled simplicity, performance, and scalability. Perhaps its simplicity costs it the respect that it deserves. Hopefully a greater appreciation for the subtlety and power of data ownership will lead to greater level of  respect, to say nothing of leading to greater performance and scalability coupled with reduced complexity. 156 Chapter 8 Deferred Processing The strategy of deferring work goes back before the dawn of recorded history. It has occasionally been derided as procrastination or even as sheer laziness. However, in the last few decades workers have recognized this strategy’s value in simplifying and streamlining parallel algorithms [ KL80 ,  Mas92 ]. Believe it or not, “laziness” in parallel programming often outperforms and scales better than does industriousness! 
General approaches to such work deferral include reference counting, sequence locking, and RCU.

8.1 Reference Counting

Reference counting tracks the number of references to a given object in order to prevent that object from being prematurely freed. Although this is a conceptually simple technique, many devils hide in the details. After all, if the object was not subject to premature disposal, there would be no need for the reference counter in the first place. But if the object can be disposed of, what prevents disposal during the reference-acquisition process itself? There are a number of possible answers to this question, including:

1. A lock residing outside of the object must be held while manipulating the reference count.

2. The object is created with a non-zero reference count, and new references may be acquired only when the current value of the reference counter is non-zero. If a thread does not have a reference to a given object, it may obtain one with the help of another thread that already has a reference.

3. An existence guarantee is provided for the object, preventing it from being freed while some other entity might be attempting to acquire a reference. Existence guarantees are often provided by automatic garbage collectors, and, as will be seen in Section 8.3, by RCU.

4. A type-safety guarantee is provided for the object. An additional identity check must be performed once the reference is acquired. Type-safety guarantees can be provided by special-purpose memory allocators, for example, by the SLAB_DESTROY_BY_RCU feature within the Linux kernel, as will be seen in Section 8.3.

                                Release Synchronization
  Acquisition
  Synchronization       Locking   Reference Counting   RCU
  Locking                  -             CAM           CA
  Reference Counting       A             AM            A
  RCU                      CA            MCA           CA

Table 8.1: Reference Counting and Synchronization Mechanisms

Of course, any mechanism that provides existence guarantees by definition also provides type-safety guarantees. This section will therefore group the last two answers together under the rubric of RCU, leaving us with three general categories of reference-acquisition protection: locking, reference counting, and RCU.

Quick Quiz 8.1: Why not implement reference-acquisition using a simple compare-and-swap operation that only acquires a reference if the reference counter is non-zero?

Given that the key reference-counting issue is synchronization between acquisition of a reference and freeing of the object, we have nine possible combinations of mechanisms, as shown in Table 8.1. This table divides reference-counting mechanisms into the following broad categories:

1. Simple counting with neither atomic operations, memory barriers, nor alignment constraints (“-”).

2. Atomic counting without memory barriers (“A”).

3. Atomic counting, with memory barriers required only on release (“AM”).

4. Atomic counting with a check combined with the atomic acquisition operation, and with memory barriers required only on release (“CAM”).

5. Atomic counting with a check combined with the atomic acquisition operation (“CA”).

6. Atomic counting with a check combined with the atomic acquisition operation, and with memory barriers also required on acquisition (“MCA”).

However, because all Linux-kernel atomic operations that return a value are defined to contain memory barriers, all release operations contain memory barriers, and all checked acquisition operations also contain memory barriers.
Therefore, cases “CA” and “MCA” are equivalent to “CAM”, so that there are sections below for only the first four cases:  “-” , “A”, “AM”, and “CAM”. The Linux primitives that support reference counting are presented in Section  8.1.3.  Later sections cite optimizations that can improve performance if reference acquisition and release is very frequent, and the reference count need be checked for zero only very rarely. 8.1.1 Implementation of Reference-Counting Categories Simple counting protected by locking ( “-” ) is described in Section  8.1.1.1,  atomic count- ing with no memory barriers (“A”) is described in Section  8.1.1.2  atomic counting with acquisition memory barrier (“AM”) is described in Section  8.1.1.3,  and atomic counting with check and release memory barrier (“CAM”) is described in Section  8.1.1.4. 158 8.1.1.1 Simple Counting Simple counting, with neither atomic operations nor memory barriers, can be used when the reference-counter acquisition and release are both protected by the same lock. In this case, it should be clear that the reference count itself may be manipulated non- atomically, because the lock provides any necessary exclusion, memory barriers, atomic instructions, and disabling of compiler optimizations. This is the method of choice when the lock is required to protect other operations in addition to the reference count, but where a reference to the object must be held after the lock is released. Figure  8.1  shows a simple API that might be used to implement simple non-atomic reference counting – although simple reference counting is almost always open-coded instead. 1 struct sref { 2 int refcount; 3 }; 4 5 void sref_init(struct sref  * sref) 6 { 7 sref->refcount = 1; 8 } 9 10 void sref_get(struct sref  * sref) 11 { 12 sref->refcount++; 13 } 14 15 int sref_put(struct sref  * sref, 16 void ( * release)(struct sref  * sref)) 17 { 18 WARN_ON(release == NULL); 19 WARN_ON(release == (void ( * )(struct sref  * ))kfree); 20 21 if (--sref->refcount == 0) { 22 release(sref); 23 return 1; 24 } 25 return 0; 26 } Figure 8.1: Simple Reference-Count API 8.1.1.2 Atomic Counting Simple atomic counting may be used in cases where any CPU acquiring a reference must already hold a reference. This style is used when a single CPU creates an object for its own private use, but must allow other CPU, tasks, timer handlers, or I/O completion handlers that it later spawns to also access this object. Any CPU that hands the object off must first acquire a new reference on behalf of the recipient object. In the Linux kernel, the  kref  primitives are used to implement this style of reference counting, as shown in Figure  8.2. Atomic counting is required because locking is not used to protect all reference- count operations, which means that it is possible for two different CPUs to concurrently manipulate the reference count. If normal increment and decrement were used, a pair of CPUs might both fetch the reference count concurrently, perhaps both obtaining the value “3”. If both of them increment their value, they will both obtain “4”, and both will store this value back into the counter. Since the new value of the counter should instead be “5”, one of the two increments has been lost. Therefore, atomic operations must be used both for counter increments and for counter decrements. 159 If releases are guarded by locking or RCU, memory barriers are  not   required, but for different reasons. 
In the case of locking, the locks provide any needed memory barriers (and disabling of compiler optimizations), and the locks also prevent a pair of  releases from running concurrently. In the case of RCU, cleanup must be deferred until all currently executing RCU read-side critical sections have completed, and any needed memory barriers or disabling of compiler optimizations will be provided by the RCU infrastructure. Therefore, if two CPUs release the final two references concurrently, the actual cleanup will be deferred until both CPUs exit their RCU read-side critical sections. Quick Quiz 8.2:  Why isn’t it necessary to guard against cases where one CPU acquires a reference just after another CPU releases the last reference? 1 struct kref { 2 atomic_t refcount; 3 }; 4 5 void kref_init(struct kref  * kref) 6 { 7 atomic_set(&kref->refcount, 1); 8 } 9 10 void kref_get(struct kref  * kref) 11 { 12 WARN_ON(!atomic_read(&kref->refcount)); 13 atomic_inc(&kref->refcount); 14 } 15 16 static inline int 17 kref_sub(struct kref  * kref, unsigned int count, 18 void ( * release)(struct kref  * kref)) 19 { 20 WARN_ON(release == NULL); 21 22 if (atomic_sub_and_test((int) count, 23 &kref->refcount)) { 24 release(kref); 25 return 1; 26 } 27 return 0; 28 } Figure 8.2: Linux Kernel kref API The  kref  structure itself, consisting of a single atomic data item, is shown in lines 1-3 of Figure  8.2.  The kref_init() function on lines 5-8 initializes the counter to the value “1”. Note that the  atomic_set()  primitive is a simple assignment, the name stems from the data type of   atomic_t  rather than from the operation. The kref_init()  function must be invoked during object creation, before the object has been made available to any other CPU. The  kref_get()  function on lines 10-14 unconditionally atomically increments the counter. The  atomic_inc()  primitive does not necessarily explicitly disable compiler optimizations on all platforms, but the fact that the  kref  primitives are in a separate module and that the Linux kernel build process does no cross-module optimizations has the same effect. The  kref_put()  function on lines 16-28 atomically decrements the counter, and if the result is zero, line 24 invokes the specified  release()  function and line 24 returns, informing the caller that  release()  was invoked. Otherwise,  kref_put() 160 returns zero, informing the caller that  release()  was not called. Quick Quiz 8.3:  Suppose that just after the  atomic_sub_and_test()  on line 22 of Figure  8.2  is invoked, that some other CPU invokes  kref_get() . Doesn’t this result in that other CPU now having an illegal reference to a released object? Quick Quiz 8.4:  Suppose that  kref_sub()  returns zero, indicating that the release()  function was not invoked. Under what conditions can the caller rely on the continued existence of the enclosing object? 8.1.1.3 Atomic Counting With Release Memory Barrier This style of reference is used in the Linux kernel’s networking layer to track the destination caches that are used in packet routing. The actual implementation is quite a bit more involved; this section focuses on the aspects of   struct dst_entry reference-count handling that matches this use case, shown in Figure  8.3. 
 1 static inline
 2 struct dst_entry *dst_clone(struct dst_entry *dst)
 3 {
 4   if (dst)
 5     atomic_inc(&dst->__refcnt);
 6   return dst;
 7 }
 8
 9 static inline
10 void dst_release(struct dst_entry *dst)
11 {
12   if (dst) {
13     WARN_ON(atomic_read(&dst->__refcnt) < 1);
14     smp_mb__before_atomic_dec();
15     atomic_dec(&dst->__refcnt);
16   }
17 }

Figure 8.3: Linux Kernel dst_clone API

The dst_clone() primitive may be used if the caller already has a reference to the specified dst_entry, in which case it obtains another reference that may be handed off to some other entity within the kernel. Because a reference is already held by the caller, dst_clone() need not execute any memory barriers. The act of handing the dst_entry to some other entity might or might not require a memory barrier, but if such a memory barrier is required, it will be embedded in the mechanism used to hand the dst_entry off.

The dst_release() primitive may be invoked from any environment, and the caller might well reference elements of the dst_entry structure immediately prior to the call to dst_release(). The dst_release() primitive therefore contains a memory barrier on line 14 preventing both the compiler and the CPU from misordering accesses.

Please note that the programmer making use of dst_clone() and dst_release() need not be aware of the memory barriers, only of the rules for using these two primitives.

8.1.1.4 Atomic Counting With Check and Release Memory Barrier

Consider a situation where the caller must be able to acquire a new reference to an object to which it does not already hold a reference. The fact that initial reference-count acquisition can now run concurrently with reference-count release adds further complications. Suppose that a reference-count release finds that the new value of the reference count is zero, signalling that it is now safe to clean up the reference-counted object. We clearly cannot allow a reference-count acquisition to start after such clean-up has commenced, so the acquisition must include a check for a zero reference count. This check must be part of the atomic increment operation, as shown below.

Quick Quiz 8.5: Why can't the check for a zero reference count be made in a simple “if” statement with an atomic increment in its “then” clause?

The Linux kernel's fget() and fput() primitives use this style of reference counting. Simplified versions of these functions are shown in Figure 8.4.

 1 struct file *fget(unsigned int fd)
 2 {
 3   struct file *file;
 4   struct files_struct *files = current->files;
 5
 6   rcu_read_lock();
 7   file = fcheck_files(files, fd);
 8   if (file) {
 9     if (!atomic_inc_not_zero(&file->f_count)) {
10       rcu_read_unlock();
11       return NULL;
12     }
13   }
14   rcu_read_unlock();
15   return file;
16 }
17
18 struct file *
19 fcheck_files(struct files_struct *files, unsigned int fd)
20 {
21   struct file *file = NULL;
22   struct fdtable *fdt = rcu_dereference((files)->fdt);
23
24   if (fd < fdt->max_fds)
25     file = rcu_dereference(fdt->fd[fd]);
26   return file;
27 }
28
29 void fput(struct file *file)
30 {
31   if (atomic_dec_and_test(&file->f_count))
32     call_rcu(&file->f_u.fu_rcuhead, file_free_rcu);
33 }
34
35 static void file_free_rcu(struct rcu_head *head)
36 {
37   struct file *f;
38
39   f = container_of(head, struct file, f_u.fu_rcuhead);
40   kmem_cache_free(filp_cachep, f);
41 }

Figure 8.4: Linux Kernel fget/fput API

Line 4 of fget() fetches the pointer to the current process's file-descriptor table, which might well be shared with other processes.
Line 6 invokes  rcu_read_  lock() , which enters an RCU read-side critical section. The callback function from any subsequent call_rcu() primitive will be deferred until a matching rcu_read_  unlock()  is reached (line 10 or 14 in this example). Line 7 looks up the file structure corresponding to the file descriptor specified by the  fd  argument, as will be described later. If there is an open file corresponding to the specified file descriptor, then line 9 attempts to atomically acquire a reference count. If it fails to do so, lines 10-11 exit the 162 RCU read-side critical section and report failure. Otherwise, if the attempt is successful, lines 14-15 exit the read-side critical section and return a pointer to the file structure. The  fcheck_files()  primitive is a helper function for  fget() . It uses the rcu_dereference()  primitive to safely fetch an RCU-protected pointer for later dereferencing (this emits a memory barrier on CPUs such as DEC Alpha in which data dependencies do not enforce memory ordering). Line 22 uses  rcu_dereference() to fetch a pointer to this task’s current file-descriptor table, and line 24 checks to see if the specified file descriptor is in range. If so, line 25 fetches the pointer to the file structure, again using the  rcu_dereference()  primitive. Line 26 then returns a pointer to the file structure or  NULL  in case of failure. The  fput()  primitive releases a reference to a file structure. Line 31 atomically decrements the reference count, and, if the result was zero, line 32 invokes the  call_  rcu()  primitives in order to free up the file structure (via the  file_free_rcu() function specified in  call_rcu() ’s second argument), but only after all currently- executing RCU read-side critical sections complete. The time period required for all currently-executing RCU read-side critical sections to complete is termed a “grace period”. Note that the  atomic_dec_and_test()  primitive contains a memory barrier. This memory barrier is not necessary in this example, since the structure cannot be destroyed until the RCU read-side critical section completes, but in Linux, all atomic operations that return a result must by definition contain memory barriers. Once the grace period completes, the  file_free_rcu()  function obtains a pointer to the file structure on line 39, and frees it on line 40. This approach is also used by Linux’s virtual-memory system, see  get_page_  unless_zero()  and  put_page_testzero()  for page structures as well as try_to_unuse()  and  mmput()  for memory-map structures. 8.1.2 Hazard Pointers All of the reference-counting mechanisms discussed in the previous section require some other mechanism to prevent the data element from being deleted while the reference count is being acquired. This other mechanism might be a pre-existing reference held on that data element, locking, RCU, or atomic operations, but all of them either degrade performance and scalability or restrict use cases. One way of avoiding these problems is to implement the reference counters inside out, that is, rather than incrementing an integer stored in the data element, instead store a pointer to that data element in per-CPU (or per-thread) lists. Each element of these lists is called a  hazard pointer   [ Mic04 ] . 1 The value of a given data element’s “virtual reference counter” can then be obtained by counting the number of hazard pointers referencing that element. 
Therefore, if that element has been rendered inaccessible to readers, and there are no longer any hazard pointers referencing it, that element may safely be freed. Of course, this means that hazard-pointer acquisition must be carried out quite care- fully in order to avoid destructive races with concurrent deletion. One implementation is shown in Figure  8.5,  which shows  hp_store()  on lines 1-13 and  hp_erase() on lines 15-20. The  smp_mb()  primitive will be described in detail in Section  13.2, but may be ignored for the purposes of this brief overview. The  hp_store()  function records a hazard pointer at  hp  for the data element whose pointer is referenced by  p , while checking for concurrent modifications. If a 1 Also independently invented by others  [HLM02] . 163 1 int hp_store(void  ** p, void  ** hp) 2 { 3 void  * tmp; 4 5 tmp = ACCESS_ONCE( * p); 6 ACCESS_ONCE( * hp) = tmp; 7 smp_mb(); 8 if (tmp != ACCESS_ONCE( * p) || 9 tmp == HAZPTR_POISON) { 10 ACCESS_ONCE( * hp) = NULL; 11 return 0; 12 } 13 return 1; 14 } 15 16 void hp_erase(void  ** hp) 17 { 18 smp_mb(); 19 ACCESS_ONCE( * hp) = NULL; 20 hp_free(hp); 21 } Figure 8.5: Hazard-Pointer Storage and Erasure concurrent modification occurred,  hp_store()  refuses to record a hazard pointer, and returns zero to indicate that the caller must restart its traversal from the beginning. Otherwise,  hp_store()  returns one to indicate that it successfully recorded a hazard pointer for the data element. Quick Quiz 8.6:  Why does  hp_store()  in Figure  8.5  take a double indirection to the data element? Why not  void  *  instead of   void  ** ? Quick Quiz 8.7:  Why does  hp_store() ’s caller need to restart its traversal from the beginning in case of failure? Isn’t that inefficient for large data structures? Quick Quiz 8.8:  Given that papers on hazard pointers use the bottom bits of each pointer to mark deleted elements, what is up with  HAZPTR_POISON ? Because algorithms using hazard pointers might be restarted at any step of their traversal through the data structure, such algorithms must typically take care to avoid making any changes to the data structure until after they have acquired all relevant hazard pointers. Quick Quiz 8.9:  But don’t these restrictions on hazard pointers also apply to other forms of reference counting? In exchange for these restrictions, hazard pointers offer excellent performance and scalability for readers. Performance comparisons with other mechanisms may be found in Chapter  9  and in other publications [ HMBW07,  McK13,  Mic04 ]. 8.1.3 Linux Primitives Supporting Reference Counting The Linux-kernel primitives used in the above examples are summarized in the following list. •  atomic_t  Type definition for 32-bit quantity to be manipulated atomically. •  void atomic_dec(atomic_t  * var);  Atomically decrements the refer- enced variable without necessarily issuing a memory barrier or disabling compiler optimizations. •  int atomic_dec_and_test(atomic_t  * var); Atomicallydecrements the referenced variable, returning  true  (non-zero) if the result is zero. Issues a 164 memory barrier and disables compiler optimizations that might otherwise move memory references across this primitive. •  void atomic_inc(atomic_t  * var);  Atomically increments the refer- enced variable without necessarily issuing a memory barrier or disabling compiler optimizations. 
• int atomic_inc_not_zero(atomic_t *var); Atomically increments the referenced variable, but only if the value is non-zero, and returns true (non-zero) if the increment occurred. Issues a memory barrier and disables compiler optimizations that might otherwise move memory references across this primitive.

• int atomic_read(atomic_t *var); Returns the integer value of the referenced variable. This is not an atomic operation, and it does not issue any memory-barrier instructions. Instead of thinking of it as “an atomic read,” think of it as “a normal read from an atomic variable.”

• void atomic_set(atomic_t *var, int val); Sets the value of the referenced atomic variable to “val”. This is not an atomic operation, and it neither issues memory barriers nor disables compiler optimizations. Instead of thinking of it as “an atomic set,” think of it as “a normal set of an atomic variable.”

• void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *head)); Invokes func(head) some time after all currently executing RCU read-side critical sections complete; however, the call_rcu() primitive itself returns immediately. Note that head is normally a field within an RCU-protected data structure, and that func is normally a function that frees up this data structure. The time interval between the invocation of call_rcu() and the invocation of func is termed a “grace period”. Any interval of time containing a grace period is itself a grace period.

• type *container_of(p, type, f); Given a pointer p to a field f within a structure of the specified type, returns a pointer to the structure.

• void rcu_read_lock(void); Marks the beginning of an RCU read-side critical section.

• void rcu_read_unlock(void); Marks the end of an RCU read-side critical section. RCU read-side critical sections may be nested.

• void smp_mb__before_atomic_dec(void); Issues a memory barrier and disables code-motion compiler optimizations only if the platform's atomic_dec() primitive does not already do so.

• struct rcu_head A data structure used by the RCU infrastructure to track objects awaiting a grace period. This is normally included as a field within an RCU-protected data structure.

Quick Quiz 8.10: An atomic_read() and an atomic_set() that are non-atomic? Is this some kind of bad joke???

Figure 8.6: Reader And Uncooperative Sequence Lock

8.1.4 Counter Optimizations

In some cases where increments and decrements are common, but checks for zero are rare, it makes sense to maintain per-CPU or per-task counters, as was discussed in Chapter 4. See Appendix D.1 for an example of this technique applied to RCU. This approach eliminates the need for atomic instructions or memory barriers on the increment and decrement primitives, but still requires that code-motion compiler optimizations be disabled. In addition, primitives such as synchronize_srcu() that check for the aggregate reference count reaching zero can be quite slow. This underscores the fact that these techniques are designed for situations where the references are frequently acquired and released, but where it is rarely necessary to check for a zero reference count.

However, it is usually the case that use of reference counts requires writing (often atomically) to a data structure that is otherwise read only. In this case, reference counts are imposing expensive cache misses on readers.

Quick Quiz 8.11: But hazard pointers don't write to the data structure!
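As a rough illustration of the split-counter approach described in this section (and emphatically not of the actual SRCU implementation), acquisition and release might touch only the invoking thread's counter, with the rarely needed zero check summing across all threads. All names here are illustrative:

#define NR_THREADS 64

struct split_ref {
  long count[NR_THREADS];  /* Each slot is written only by its owning thread. */
};

/* Acquisition and release touch only the caller's slot: no atomic
 * read-modify-write instructions and no memory barriers, just relaxed
 * accesses to keep the compiler from caching or tearing the values. */
static inline void split_ref_acquire(struct split_ref *r, int tid)
{
  long tmp = __atomic_load_n(&r->count[tid], __ATOMIC_RELAXED);

  __atomic_store_n(&r->count[tid], tmp + 1, __ATOMIC_RELAXED);
}

static inline void split_ref_release(struct split_ref *r, int tid)
{
  long tmp = __atomic_load_n(&r->count[tid], __ATOMIC_RELAXED);

  __atomic_store_n(&r->count[tid], tmp - 1, __ATOMIC_RELAXED);
}

/* The slow path: the sum is meaningful only after new acquisitions have been
 * excluded by some other means, which is where the real cost and complexity
 * of primitives such as synchronize_srcu() comes from. */
static long split_ref_sum(struct split_ref *r)
{
  long sum = 0;
  int i;

  for (i = 0; i < NR_THREADS; i++)
    sum += __atomic_load_n(&r->count[i], __ATOMIC_RELAXED);
  return sum;
}

Even so, each acquisition and release still writes to memory, albeit to a thread-local location, and the zero check remains expensive.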
It is therefore worthwhile to look into synchronization mechanisms that do not require readers to do writes at all. One such synchronization mechanism, sequence locks, is covered in the next section.

8.2 Sequence Locks

Sequence locks are used in the Linux kernel for read-mostly data that must be seen in a consistent state by readers. However, unlike reader-writer locking, readers do not exclude writers. Instead, like hazard pointers, sequence locks force readers to retry an operation if they detect activity from a concurrent writer. As can be seen from Figure 8.6, it is important to design code using sequence locks so that readers very rarely need to retry.

Quick Quiz 8.12: Why isn't this sequence-lock discussion in Chapter 6, you know, the one on locking?

The key component of sequence locking is the sequence number, which has an even value in the absence of writers and an odd value if there is an update in progress. Readers can then snapshot the value before and after each access. If either snapshot has an odd value, or if the two snapshots differ, there has been a concurrent update, and the reader must discard the results of the access and then retry it. Readers use the read_seqbegin() and read_seqretry() functions, as shown in Figure 8.7, when accessing data protected by a sequence lock. Writers must increment the value before and after each update, and only one writer is permitted at a given time. Writers use the write_seqlock() and write_sequnlock() functions, as shown in Figure 8.8, when updating data protected by a sequence lock.

Sequence-lock-protected data can have an arbitrarily large number of concurrent readers, but only one writer at a time. Sequence locking is used in the Linux kernel to protect calibration quantities used for timekeeping. It is also used in pathname traversal to detect concurrent rename operations.

Quick Quiz 8.13: Can you use sequence locks as the only synchronization mechanism protecting a linked list supporting concurrent addition, deletion, and search?

A simple implementation of sequence locks is shown in Figure 8.9 (seqlock.h). The seqlock_t data structure is shown on lines 1-4, and contains the sequence number along with a lock to serialize writers. Lines 6-10 show seqlock_init(), which, as the name indicates, initializes a seqlock_t.

Lines 12-22 show read_seqbegin(), which begins a sequence-lock read-side critical section. Line 17 takes a snapshot of the sequence counter, and line 18 orders this snapshot operation before the caller's critical section. Line 19 checks to see if the snapshot is odd, indicating that there is a concurrent writer, and, if so, line 20 jumps back to the beginning. Otherwise, line 21 returns the value of the snapshot, which the caller will pass to a later call to read_seqretry().

Quick Quiz 8.14: Why bother with the check on line 19 of read_seqbegin() in Figure 8.9? Given that a new writer could begin at any time, why not simply incorporate the check into line 31 of read_seqretry()?

Lines 24-32 show read_seqretry(), which returns true if there was at least one writer since the time of the corresponding call to read_seqbegin(), in which case the caller must retry its access. Line 29 orders the caller's prior critical section before line 30's fetch of the new snapshot of the sequence counter. Finally, line 31 checks whether the sequence counter has changed, in other words, whether there has been a writer, and returns true if so.

Quick Quiz 8.15: Why is the smp_mb() on line 29 of Figure 8.9 needed?
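To make the reader and writer patterns of Figures 8.7 and 8.8 concrete, the following minimal sketch protects a pair of timekeeping calibration values. The struct clock_calib, the calib and calib_seqlock variables, and both functions are hypothetical rather than taken from the Linux kernel, and seqlock_init(&calib_seqlock) is assumed to have been invoked before first use.

    struct clock_calib {
            unsigned long mult;
            unsigned long shift;
    };
    static struct clock_calib calib;
    static seqlock_t calib_seqlock;        /* Initialized via seqlock_init(). */

    static void read_calib(struct clock_calib *snap)
    {
            unsigned long seq;

            do {
                    seq = read_seqbegin(&calib_seqlock);
                    *snap = calib;                 /* Read-side access. */
            } while (read_seqretry(&calib_seqlock, seq));
    }

    static void update_calib(unsigned long mult, unsigned long shift)
    {
            write_seqlock(&calib_seqlock);         /* Serializes writers. */
            calib.mult = mult;
            calib.shift = shift;
            write_sequnlock(&calib_seqlock);
    }

Readers obtain a consistent mult/shift pair without ever writing to shared memory, but they must be prepared to loop if an update intervenes.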
Quick Quiz 8.16:  Can’t weaker memory barriers be used in the code in Figure  8.9 ? 1 do { 2 seq = read_seqbegin(&test_seqlock); 3 / *  read-side access.  * / 4 } while (read_seqretry(&test_seqlock, seq)); Figure 8.7: Sequence-Locking Reader 1 write_seqlock(&test_seqlock); 2 / *  Update  * / 3 write_sequnlock(&test_seqlock); Figure 8.8: Sequence-Locking Writer 167 1 typedef struct { 2 unsigned long seq; 3 spinlock_t lock; 4 } seqlock_t; 5 6 static void seqlock_init(seqlock_t  * slp) 7 { 8 slp->seq = 0; 9 spin_lock_init(&slp->lock); 10 } 11 12 static unsigned long read_seqbegin(seqlock_t  * slp) 13 { 14 unsigned long s; 15 16 repeat: 17 s = ACCESS_ONCE(slp->seq); 18 smp_mb(); 19 if (unlikely(s & 1)) 20 goto repeat; 21 return s; 22 } 23 24 static int read_seqretry(seqlock_t  * slp, 25 unsigned long oldseq) 26 { 27 unsigned long s; 28 29 smp_mb(); 30 s = ACCESS_ONCE(slp->seq); 31 return s != oldseq; 32 } 33 34 static void write_seqlock(seqlock_t  * slp) 35 { 36 spin_lock(&slp->lock); 37 ++slp->seq; 38 smp_mb(); 39 } 40 41 static void write_sequnlock(seqlock_t  * slp) 42 { 43 smp_mb(); 44 ++slp->seq; 45 spin_unlock(&slp->lock); 46 } Figure 8.9: Sequence-Locking Implementation 168 Quick Quiz 8.17:  What prevents sequence-locking updaters from starving readers? Lines 34-39 show  write_seqlock() , which simply acquires the lock, incre- ments the sequence number, and executes a memory barrier to ensure that this in- crement is ordered before the caller’s critical section. Lines 41-46 show  write_  sequnlock() , which executes a memory barrier to ensure that the caller’s critical section is ordered before the increment of the sequence number on line 44, then releases the lock. Quick Quiz 8.18:  What if something else serializes writers, so that the lock is not needed? Quick Quiz 8.19:  Why isn’t  seq  on line 2 of Figure  8.9  unsigned  rather than unsigned long ? After all, if   unsigned  is good enough for the Linux kernel, shouldn’t it be good enough for everyone? Both the read-side and write-side critical sections of a sequence lock can be thought of as transactions, and sequence locking therefore can be thought of as a limited form of transactional memory, which will be discussed in Section  15.2.  The limitations of  sequence locking are: (1) Sequence locking restricts updates and (2) sequence locking does not permit traversal of pointers to objects that might be freed by updaters. These limitations are of course overcome by transactional memory, but can also be overcome by combining other synchronization primitives with sequence locking. Sequence locks allow writers to defer readers, but not vice versa. This can result in unfairness and even starvation in writer-heavy workloads. On the other hand, in the absence of writers, sequence-lock readers are reasonably fast and scale linearly. It is only human to want the best of both worlds: fast readers without the possibility of read-side failure, let alone starvation. In addition, it would also be nice to overcome sequence locking’s limitations with pointers. The following section presents a synchronization mechanism with exactly these proporties. 8.3 Read-Copy Update (RCU) This section covers RCU from a number of different perspectives. 
Section  8.3.1  provides the classic introduction to RCU, Section  8.3.2  covers fundamental RCU concepts, Section  8.3.3  introduces some common uses of RCU, Section  8.3.4  presents the Linux- kernel API, Section  8.3.5  covers a sequence of “toy” implementations of user-level RCU, and finally Section  8.3.6  provides some RCU exercises. 8.3.1 Introduction to RCU Suppose that you are writing a parallel real-time program that needs to access data that is subject to gradual change, perhaps due to changes in temperature, humidity, and barometric pressure. The real-time response constraints on this program are so severe that it is not permissible to spin or block, thus ruling out locking, nor is it permissible to use a retry loop, thus ruling out sequence locks. Fortunately, the temperature and pressure are normally controlled, so that a default hard-coded set of data is usually sufficient. However, the temperature, humidity, and pressure occasionally deviate too far from the defaults, and in such situations it is necessary to provide data that replaces the defaults. Because the temperature, humidity, and pressure change gradually, providing 169 gptr kmalloc() −>a=? −>b=? −>c=? gptr initialization −>a=1 −>b=2 −>c=3 gptr gptr = p; /*almost*/ −>a=1 −>b=2 −>c=3 gptr p p p (1) (2) (3) (4) Figure 8.10: Insertion With Concurrent Readers the updated values is not a matter of urgency, though it must happen within a few minutes. The program is to use a global pointer imaginatively named  gptr  that is normally NULL , which indicates that the default values are to be used. Otherwise, gptr points to a structure providing values imaginatively named  a ,  b , and  c  that are to be used in the real-time calculations. How can we safely provide updated values when needed without impeding real-time readers? A classic approach is shown in Figure  8.10.  The first row shows the default state, with  gptr  equal to  NULL . In the second row, we have allocated a structure which is uninitialized, as indicated by the question marks. In the third row, we have initialized the structure. Next, we assign gptr to reference this new element . 2 On modern general- purpose systems, this assignment is atomic in the sense that concurrent readers will see either a  NULL  pointer or a pointer to the new structure  p , but not some mash-up containing bits from both values. Each reader is therefore guaranteed to either get the default value of   NULL  or to get the newly installed non-default values, but either way each reader will see a consistent result. Even better, readers need not use any expensive synchronization primitives, so this approach is quite suitable for real-time use . 3 2 On many computer systems, simple assignment is insufficient due to interference from both the compiler and the CPU. These issues will be covered in Section  8.3.2. 3 Again, on many computer systems, additional work is required to prevent interference from the compiler, 170 Readers? A B C (1) ea ers 1 Version A C B (2) Readers? 2 Versions A C B (3) 1 Versions A C (4) 1 Versions wait for readers free() list_del() /*almost*/ Figure 8.11: Deletion From Linked List With Concurrent Readers But sooner or later, it will be necessary to remove data that is being referenced by concurrent readers. Let us move to a more complex example where we are removing an element from a linked list, as shown in Figure  8.11.  This list initially contains elements A , B , and C , and we need to remove element B . 
First, we use list_del() to carry out the removal,4 at which point all new readers will see element B as having been deleted from the list. However, there might be old readers still referencing this element. Once all these old readers have finished, we can safely free element B, resulting in the situation shown at the bottom of the figure. But how can we tell when the readers are finished?

3 (cont.) and, on DEC Alpha systems, the CPU as well. This will be covered in Section 8.3.2.
4 And yet again, this approximates reality, which will be expanded on in Section 8.3.2.

It is tempting to consider a reference-counting scheme, but Figure 4.3 in Chapter 4 shows that this can also result in long delays, just as can the locking and sequence-locking approaches that we already rejected.

Let's consider the logical extreme where the readers do absolutely nothing to announce their presence. This approach clearly allows optimal performance for readers (after all, free is a very good price), but leaves open the question of how the updater can possibly determine when all the old readers are done. We clearly need some additional constraints if we are to provide a reasonable answer to this question.

One constraint that fits well with some types of real-time operating systems (as well as some operating-system kernels) is to consider the case where threads are not subject to preemption. In such non-preemptible environments, each thread runs until it explicitly and voluntarily blocks. This means that an infinite loop without blocking will render a CPU useless for any other purpose from the start of the infinite loop onwards.5 Non-preemptibility also requires that threads be prohibited from blocking while holding spinlocks. Without this prohibition, all CPUs might be consumed by threads spinning attempting to acquire a spinlock held by a blocked thread. The spinning threads will not relinquish their CPUs until they acquire the lock, but the thread holding the lock cannot possibly release it until one of the spinning threads relinquishes a CPU. This is a classic deadlock situation.

5 In contrast, an infinite loop in a preemptible environment might be preempted. This infinite loop might still waste considerable CPU time, but the CPU in question would nevertheless be able to do other work.

Let us impose this same constraint on reader threads traversing the linked list: such threads are not allowed to block until after completing their traversal.

Returning to the second row of Figure 8.11, where the updater has just completed executing list_del(), imagine that CPU 0 executes a context switch. Because readers are not permitted to block while traversing the linked list, we are guaranteed that all prior readers that might have been running on CPU 0 will have completed. Extending this line of reasoning to the other CPUs, once each CPU has been observed executing a context switch, we are guaranteed that all prior readers have completed, and that there are no longer any reader threads referencing element B. The updater can then safely free element B, resulting in the state shown at the bottom of Figure 8.11.

Figure 8.12: Waiting for Pre-Existing Readers

A schematic of this approach is shown in Figure 8.12, with time advancing from the top of the figure to the bottom. Although production-quality implementations of this approach can be quite complex, a toy implementation is exceedingly simple:

    for_each_online_cpu(cpu)
            run_on(cpu);

The for_each_online_cpu() primitive iterates over all CPUs, and the run_on() function causes the current thread to execute on the specified CPU, which forces the destination CPU to execute a context switch. Therefore, once the for_each_online_cpu() has completed, each CPU has executed a context switch, which in turn guarantees that all pre-existing reader threads have completed.
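Packaged as a function, this toy wait-for-readers primitive might look as follows. This is only a sketch for the non-preemptible environment assumed above, using the run_on() helper described in the preceding paragraph; it is emphatically not a production-quality synchronize_rcu().

    void synchronize_rcu(void)
    {
            int cpu;

            /* Running on each CPU in turn forces a context switch on each CPU,
             * after which all pre-existing readers must have completed. */
            for_each_online_cpu(cpu)
                    run_on(cpu);
    }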
Please note that this approach is not production quality. Correct handling of a number of corner cases and the need for a number of powerful optimizations mean that production-quality implementations have significant additional complexity. In addition, RCU implementations for preemptible environments require that readers actually do something. However, this simple non-preemptible approach is conceptually complete, and forms a good initial basis for understanding the RCU fundamentals covered in the following section.

8.3.2 RCU Fundamentals

Authors: Paul E. McKenney and Jonathan Walpole

Read-copy update (RCU) is a synchronization mechanism that was added to the Linux kernel in October of 2002. RCU achieves scalability improvements by allowing reads to occur concurrently with updates. In contrast with conventional locking primitives that ensure mutual exclusion among concurrent threads regardless of whether they be readers or updaters, or with reader-writer locks that allow concurrent reads but not in the presence of updates, RCU supports concurrency between a single updater and multiple readers. RCU ensures that reads are coherent by maintaining multiple versions of objects and ensuring that they are not freed up until all pre-existing read-side critical sections complete. RCU defines and uses efficient and scalable mechanisms for publishing and reading new versions of an object, and also for deferring the collection of old versions. These mechanisms distribute the work among read and update paths in such a way as to make read paths extremely fast. In some cases (non-preemptible kernels), RCU's read-side primitives have zero overhead.

Quick Quiz 8.20: But doesn't Section 8.2's seqlock also permit readers and updaters to get work done concurrently?

This leads to the question "what exactly is RCU?", and perhaps also to the question "how can RCU possibly work?" (or, not infrequently, the assertion that RCU cannot possibly work). This document addresses these questions from a fundamental viewpoint; later installments look at them from usage and from API viewpoints. This last installment also includes a list of references.

RCU is made up of three fundamental mechanisms, the first being used for insertion, the second being used for deletion, and the third being used to allow readers to tolerate concurrent insertions and deletions. Section 8.3.2.1 describes the publish-subscribe mechanism used for insertion, Section 8.3.2.2 describes how waiting for pre-existing RCU readers enables deletion, and Section 8.3.2.3 discusses how maintaining multiple versions of recently updated objects permits concurrent insertions and deletions. Finally, Section 8.3.2.4 summarizes RCU fundamentals.

8.3.2.1 Publish-Subscribe Mechanism

One key attribute of RCU is the ability to safely scan data, even though that data is being modified concurrently. To provide this ability for concurrent insertion, RCU uses what can be thought of as a publish-subscribe mechanism.
For example, consider an initially  NULL  global pointer  gp  that is to be modified to point to a newly allocated and 173 1 struct foo { 2 int a; 3 int b; 4 int c; 5 }; 6 struct foo  * gp = NULL; 7 8 / *  . . .  * / 9 10 p = kmalloc(sizeof( * p), GFP_KERNEL); 11 p->a = 1; 12 p->b = 2; 13 p->c = 3; 14 gp = p; Figure 8.13: Data Structure Publication (Unsafe) initialized data structure. The code fragment shown in Figure  8.13  (with the addition of  appropriate locking) might be used for this purpose. Unfortunately, there is nothing forcing the compiler and CPU to execute the last four assignment statements in order. If the assignment to gp happens before the initialization of   p  fields, then concurrent readers could see the uninitialized values. Memory barriers are required to keep things ordered, but memory barriers are notoriously difficult to use. We therefore encapsulate them into a primitive  rcu_assign_pointer()  that has publication semantics. The last four lines would then be as follows: 1 p->a = 1; 2 p->b = 2; 3 p->c = 3; 4 rcu_assign_pointer(gp, p); The  rcu_assign_pointer()  would  publish  the new structure, forcing both the compiler and the CPU to execute the assignment to  gp  after   the assignments to the fields referenced by  p . However, it isnot sufficient toonlyenforceorderingat theupdater, asthereader must enforce proper ordering as well. Consider for example the following code fragment: 1 p = gp; 2 if (p != NULL) { 3 do_something_with(p->a, p->b, p->c); 4 } Although this code fragment might well seem immune to misordering, unfortunately, the DEC Alpha CPU  [ McK05a ,  McK05b ] and value-speculation compiler optimizations can, believe it or not, cause the values of   p->a ,  p->b , and  p->c  to be fetched before the value of   p . This is perhaps easiest to see in the case of value-speculation compiler optimizations, where the compiler guesses the value of   p  fetches  p->a ,  p->b , and p->c  then fetches the actual value of   p  in order to check whether its guess was correct. This sort of optimization is quite aggressive, perhaps insanely so, but does actually occur in the context of profile-driven optimization. Clearly, we need to prevent this sort of skullduggery on the part of both the compiler and the CPU. The  rcu_dereference()  primitive uses whatever memory-barrier instructions and compiler directives are required for this purpose : 6 6 In the Linux kernel,  rcu_dereference()  is implemented via a volatile cast, and, on DEC Alpha, a memory barrier instruction. In the C11 and C++11 standards,  memory_order_consume  is intended to provide longer-term support for  rcu_dereference() , but no compilers implement this natively yet. (They instead strengthen  memory_order_consume  to  memory_order_acquire , thus emitting a 174 next next next next prev prev prev prev A B C Figure 8.14: Linux Circular Linked List A B C Figure 8.15: Linux Linked List Abbreviated 1 rcu_read_lock(); 2 p = rcu_dereference(gp); 3 if (p != NULL) { 4 do_something_with(p->a, p->b, p->c); 5 } 6 rcu_read_unlock(); The  rcu_dereference()  primitive can thus be thought of as  subscribing  to a given value of the specified pointer, guaranteeing that subsequent dereference opera- tions will see any initialization that occurred before the corresponding  rcu_assign_  pointer()  operation that published that pointer. The  rcu_read_lock()  and rcu_read_unlock()  calls are absolutely required: they define the extent of the RCU read-side critical section. 
Their purpose is explained in Section  8.3.2.2,  however, they never spin or block, nor do they prevent the  list_add_rcu()  from executing concurrently. In fact, in non- CONFIG_PREEMPT  kernels, they generate absolutely no code. Although  rcu_assign_pointer()  and  rcu_dereference()  can in the- ory be used to construct any conceivable RCU-protected data structure, in practice it is often better to use higher-level constructs. Therefore, the  rcu_assign_pointer() and  rcu_dereference()  primitives have been embedded in special RCU vari- ants of Linux’s list-manipulation API. Linux has two variants of doubly linked list, the circular  struct list_head  and the linear  struct hlist_head  /  struct hlist_node  pair. The former is laid out as shown in Figure  8.14,  where the green (leftmost) boxes represent the list header and the blue (rightmost three) boxes represent the elements in the list. This notation is cumbersome, and will therefore be abbreviated as shown in Figure  8.15,  which shows only the non-header (blue) elements. Adapting the pointer-publish example for the linked list results in the code shown in Figure  8.16 . Line 15 must be protected by some synchronization mechanism (most commonly some sort of lock) to prevent multiple  list_add_rcu()  instances from executing concurrently. However, such synchronization does not prevent this  list_add() instance from executing concurrently with RCU readers. Subscribing to an RCU-protected list is straightforward: needless memory-barrier instruction on weakly ordered systems.) 175 1 struct foo { 2 struct list_head  * list; 3 int a; 4 int b; 5 int c; 6 }; 7 LIST_HEAD(head); 8 9 / *  . . .  * / 10 11 p = kmalloc(sizeof( * p), GFP_KERNEL); 12 p->a = 1; 13 p->b = 2; 14 p->c = 3; 15 list_add_rcu(&p->list, &head); Figure 8.16: RCU Data Structure Publication next next next prev prev prev first A B C Figure 8.17: Linux Linear Linked List 1 rcu_read_lock(); 2 list_for_each_entry_rcu(p, head, list) { 3 do_something_with(p->a, p->b, p->c); 4 } 5 rcu_read_unlock(); The  list_add_rcu()  primitive publishes an entry, inserting it at the head of the specified list, guaranteeing that the corresponding list_for_each_entry_rcu() invocation will properly subscribe to this same entry. Quick Quiz 8.21:  What prevents the  list_for_each_entry_rcu()  from gettingasegfaultifithappenstoexecuteatexactlythesametimeasthe list_add_rcu() ? Linux’s other doubly linked list, the hlist, is a linear list, which means that it needs only one pointer for the header rather than the two required for the circular list, as shown in Figure  8.17.  Thus, use of hlist can halve the memory consumption for the hash-bucket arrays of large hash tables. As before, this notation is cumbersome, so hlists will be abbreviated in the same way lists are, as shown in Figure  8.15. Publishing a new element to an RCU-protected hlist is quite similar to doing so for the circular list, as shown in Figure  8.18. As before, line 15 must be protected by some sort of synchronization mechanism, for example, a lock. Subscribing to an RCU-protected hlist is also similar to the circular list: 1 rcu_read_lock(); 2 hlist_for_each_entry_rcu(p, q, head, list) { 3 do_something_with(p->a, p->b, p->c); 4 } 5 rcu_read_unlock(); QuickQuiz8.22:  Whydoweneedtopasstwopointersinto hlist_for_each_entry_rcu() when only one is needed for  list_for_each_entry_rcu() ? 176 1 struct foo { 2 struct hlist_node  * list; 3 int a; 4 int b; 5 int c; 6 }; 7 HLIST_HEAD(head); 8 9 / *  . . .  
* / 10 11 p = kmalloc(sizeof( * p), GFP_KERNEL); 12 p->a = 1; 13 p->b = 2; 14 p->c = 3; 15 hlist_add_head_rcu(&p->list, &head); Figure 8.18: RCU  hlist  Publication Category Publish Retract Subscribe Pointers  rcu_assign_pointer() rcu_assign_pointer(..., NULL) rcu_dereference() Lists list_add_rcu() list_add_tail_rcu() list_replace_rcu() list_del_rcu() list_for_each_entry_rcu() Hlists hlist_add_after_rcu() hlist_add_before_rcu() hlist_add_head_rcu() hlist_replace_rcu() hlist_del_rcu() hlist_for_each_entry_rcu() Table 8.2: RCU Publish and Subscribe Primitives The set of RCU publish and subscribe primitives are shown in Table  8.2 , along with additional primitives to “unpublish”, or retract. Notethatthe list_replace_rcu() , list_del_rcu() , hlist_replace_  rcu() , and  hlist_del_rcu()  APIs add a complication. When is it safe to free up the data element that was replaced or removed? In particular, how can we possibly know when all the readers have released their references to that data element? These questions are addressed in the following section. 8.3.2.2 Wait For Pre-Existing RCU Readers to Complete In its most basic form, RCU is a way of waiting for things to finish. Of course, there are a great many other ways of waiting for things to finish, including reference counts, reader-writer locks, events, and so on. The great advantage of RCU is that it can wait for each of (say) 20,000 different things without having to explicitly track each and every one of them, and without having to worry about the performance degradation, scalability limitations, complex deadlock scenarios, and memory-leak hazards that are inherent in schemes using explicit tracking. In RCU’s case, the things waited on are called “RCU read-side critical sections”. An RCU read-side critical section starts with an  rcu_read_lock()  primitive, and ends with a corresponding  rcu_read_unlock()  primitive. RCU read-side critical sections can be nested, and may contain pretty much any code, as long as that code does not explicitly block or sleep (although a special form of RCU called SRCU  [McK06b] does permit general sleeping in SRCU read-side critical sections). If you abide by these conventions, you can use RCU to wait for  any  desired piece of code to complete. RCU accomplishes this feat by indirectly determining when these other things have finished [ McK07g, McK07a ], as is described in detail in Appendix  D. 177 Reader Reader Reader Reader Reader Reader Reader Reader Grace Period Extends as Needed Reader Removal Reclamation Time Figure 8.19: Readers and RCU Grace Period 1 struct foo { 2 struct list_head  * list; 3 int a; 4 int b; 5 int c; 6 }; 7 LIST_HEAD(head); 8 9 / *  . . .  * / 10 11 p = search(head, key); 12 if (p == NULL) { 13 / *  Take appropriate action, unlock, & return.  * / 14 } 15 q = kmalloc(sizeof( * p), GFP_KERNEL); 16  * q =  * p; 17 q->b = 2; 18 q->c = 3; 19 list_replace_rcu(&p->list, &q->list); 20 synchronize_rcu(); 21 kfree(p); Figure 8.20: Canonical RCU Replacement Example In particular, as shown in Figure  8.19 , RCU is a way of waiting for pre-existing RCU read-side critical sections to completely finish, including memory operations executed by those critical sections. However, note that RCU read-side critical sections that begin after the beginning of a given grace period can and will extend beyond the end of that grace period. The following pseudocode shows the basic form of algorithms that use RCU to wait for readers: 1. Make a change, for example, replace an element in a linked list. 2.  
Wait for all pre-existing RCU read-side critical sections to completely finish (for example, by using the  synchronize_rcu()  primitive). The key observation here is that subsequent RCU read-side critical sections have no way to gain a reference to the newly removed element. 3. Clean up, for example, free the element that was replaced above. The code fragment shown in Figure  8.20,  adapted from those in Section  8.3.2.1, demonstrates this process, with field  a  being the search key. 178 Lines 19, 20, and 21 implement the three steps called out above. Lines 16-19 gives RCU (“read-copy update”) its name: while permitting concurrent  reads , line 16  copies and lines 17-19 do an  update . As discussed in Section  8.3.1,  the  synchronize_rcu()  primitive can be quite simple (see Section  8.3.5  for additional “toy” RCU implementations). However, production-quality implementations must deal with difficult corner cases and also incor- porate powerful optimizations, both of which result in significant complexity. Although it is good to know that there is a simple conceptual implementation of  synchronize_  rcu() , other questions remain. For example, what exactly do RCU readers see when traversing a concurrently updated list? This question is addressed in the following section. 8.3.2.3 Maintain Multiple Versions of Recently Updated Objects This section demonstrates how RCU maintains multiple versions of lists to accommodate synchronization-free readers. Two examples are presented showing how an element that might be referenced by a given reader must remain intact while that reader remains in its RCU read-side critical section. The first example demonstrates deletion of a list element, and the second example demonstrates replacement of an element. Example 1: Maintaining Multiple Versions During Deletion  We can now revisit the deletion example from Section  8.3.1,  but now with the benefit of a firm understanding of the fundamental concepts underlying RCU. To begin this new version of the deletion example, we will modify lines 11-21 in Figure  8.20  to read as follows: 1 p = search(head, key); 2 if (p != NULL) { 3 list_del_rcu(&p->list); 4 synchronize_rcu(); 5 kfree(p); 6 } This code will update the list as shown in Figure  8.21.  The triples in each element represent the values of fields a , b , and c , respectively. The red-shaded elements indicate that RCU readers might be holding references to them, so in the initial state at the top of the diagram, all elements are shaded red. Please note that we have omitted the backwards pointers and the link from the tail of the list to the head for clarity. After the  list_del_rcu()  on line 3 has completed, the  5,6,7  element has been removed from the list, as shown in the second row of Figure  8.21.  Since readers do not synchronize directly with updaters, readers might be concurrently scanning this list. These concurrent readers might or might not see the newly removed element, depending on timing. However, readers that were delayed (e.g., due to interrupts, ECC memory errors, or, in  CONFIG_PREEMPT_RT  kernels, preemption) just after fetching a pointer to the newly removed element might see the old version of the list for quite some time after the removal. Therefore, we now have two versions of the list, one with element 5,6,7  and one without. The  5,6,7  element is now shaded yellow, indicating that old readers might still be referencing it, but that new readers cannot obtain a reference to it. 
Please note that readers are not permitted to maintain references to element  5,6,7 afterexitingfromtheirRCUread-sidecriticalsections. Therefore, oncethe synchronize_  rcu()  on line 4 completes, so that all pre-existing readers are guaranteed to have completed, there can be no more readers referencing this element, as indicated by its 179 list_del_rcu() synchronize_rcu() kfree() 1,2,3 5,6,7 11,4,8 1,2,3 11,4,8 1,2,3 5,6,7 11,4,8 1,2,3 5,6,7 11,4,8 Figure 8.21: RCU Deletion From Linked List green shading on the third row of Figure  8.21.  We are thus back to a single version of  the list. At this point, the  5,6,7  element may safely be freed, as shown on the final row of Figure  8.21 . At this point, we have completed the deletion of element  5,6,7 . The following section covers replacement. Example 2: Maintaining Multiple Versions During Replacement  To start the re- placement example, here are the last few lines of the example shown in Figure  8.20: 1 q = kmalloc(sizeof( * p), GFP_KERNEL); 2  * q =  * p; 3 q->b = 2; 4 q->c = 3; 5 list_replace_rcu(&p->list, &q->list); 6 synchronize_rcu(); 7 kfree(p); The initial state of the list, including the pointer  p , is the same as for the deletion example, as shown on the first row of Figure  8.22 . As before, the triples in each element represent the values of fields  a ,  b , and  c , respectively. The red-shaded elements might be referenced by readers, and because readers do not synchronize directly with updaters, readers might run concurrently with this entire replacement process. Please note that we again omit the backwards pointers and the link from the tail of the list to the head for clarity. The following text describes how to replace the  5,6,7  element with  5,2,3  in such a way that any given reader sees one of these two values. Line 1  kmalloc() s a replacement element, as follows, resulting in the state as 180 1,2,3 5,6,7 11,4,8 Update 5,2,3 5,6,7 1,2,3 11,4,8 list_replace_rcu() 5,2,3 5,6,7 1,2,3 11,4,8 5,2,3 5,6,7 1,2,3 11,4,8 kfree() 1,2,3 5,2,3 11,4,8 Copy 5,6,7 5,6,7 1,2,3 11,4,8 Allocate ?,?,? 5,6,7 1,2,3 11,4,8 synchronize_rcu() Figure 8.22: RCU Replacement in Linked List 181 shown in the second row of Figure  8.22.  At this point, no reader can hold a reference to the newly allocated element (as indicated by its green shading), and it is uninitialized (as indicated by the question marks). Line 2 copies the old element to the new one, resulting in the state as shown in the third row of Figure  8.22 . The newly allocated element still cannot be referenced by readers, but it is now initialized. Line 3 updates  q->b  to the value “2”, and line 4 updates  q->c  to the value “3”, as shown on the fourth row of Figure  8.22. Now, line 5 does the replacement, so that the new element is finally visible to readers, and hence is shaded red, as shown on the fifth row of Figure  8.22.  At this point, as shown below, we have two versions of the list. Pre-existing readers might see the 5,6,7  element (which is therefore now shaded yellow), but new readers will instead see the  5,2,3  element. But any given reader is guaranteed to see some well-defined list. After the  synchronize_rcu()  on line 6 returns, a grace period will have elapsed, and so all reads that started before the  list_replace_rcu()  will have completed. In particular, any readers that might have been holding references to the 5,6,7  element are guaranteed to have exited their RCU read-side critical sections, and are thus prohibited from continuing to hold a reference. 
Therefore, there can no longer be any readers holding references to the old element, as indicated its green shading in the sixth row of Figure  8.22.  As far as the readers are concerned, we are back to having a single version of the list, but with the new element in place of the old. After the  kfree()  on line 7 completes, the list will appear as shown on the final row of Figure  8.22. Despite the fact that RCU was named after the replacement case, the vast majority of RCU usage within the Linux kernel relies on the simple deletion case shown in Section  8.3.2.3. Discussion  These examples assumed that a mutex was held across the entire update operation, which would mean that there could be at most two versions of the list active at a given time. Quick Quiz 8.23:  How would you modify the deletion example to permit more than two versions of the list to be active? Quick Quiz 8.24:  How many RCU versions of a given list can be active at any given time? This sequence of events shows how RCU updates use multiple versions to safely carry out changes in presence of concurrent readers. Of course, some algorithms cannot gracefully handle multiple versions. There are techniques for adapting such algorithms to RCU [ McK04 ], but these are beyond the scope of this section. 8.3.2.4 Summary of RCU Fundamentals This section has described the three fundamental components of RCU-based algorithms: 1. a publish-subscribe mechanism for adding new data, 2. a way of waiting for pre-existing RCU readers to finish, and 3.  a discipline of maintaining multiple versions to permit change without harming or unduly delaying concurrent RCU readers. 182 Mechanism RCU Replaces Section Reader-writer locking Section  8.3.3.1 Restricted reference-counting mechanism Section  8.3.3.2 Bulk reference-counting mechanism Section  8.3.3.3 Poor man’s garbage collector Section  8.3.3.4 Existence Guarantees Section  8.3.3.5 Type-Safe Memory Section  8.3.3.6 Wait for things to finish Section  8.3.3.7 Table 8.3: RCU Usage Quick Quiz 8.25:  How can RCU updaters possibly delay RCU readers, given that the  rcu_read_lock()  and  rcu_read_unlock()  primitives neither spin nor block? These three RCU components allow data to be updated in face of concurrent readers, and can be combined in different ways to implement a surprising variety of different types of RCU-based algorithms, some of which are described in the following section. 8.3.3 RCU Usage This section answers the question “what is RCU?” from the viewpoint of the uses to which RCU can be put. Because RCU is most frequently used to replace some existing mechanism, we look at it primarily in terms of its relationship to such mechanisms, as listed in Table  8.3.  Following the sections listed in this table, Section  8.3.3.8  provides a summary. 8.3.3.1 RCU is a Reader-Writer Lock Replacement Perhaps the most common use of RCU within the Linux kernel is as a replacement for reader-writer locking in read-intensive situations. Nevertheless, this use of RCU was not immediately apparent to me at the outset, in fact, I chose to implement a lightweight reader-writer lock  [ HW92 ] 7 before implementing a general-purpose RCU implementation back in the early 1990s. Each and every one of the uses I envisioned for the lightweight reader-writer lock was instead implemented using RCU. In fact, it was more than three years before the lightweight reader-writer lock saw its first use. Boy, did I feel foolish! 
The key similarity between RCU and reader-writer locking is that both have read- side critical sections that can execute in parallel. In fact, in some cases, it is possible to mechanically substitute RCU API members for the corresponding reader-writer lock API members. But first, why bother? Advantages of RCU include performance, deadlock immunity, and realtime latency. There are, of course, limitations to RCU, including the fact that readers and updaters run concurrently, that low-priority RCU readers can block high-priority threads waiting for a grace period to elapse, and that grace-period latencies can extend for many milliseconds. These advantages and limitations are discussed in the following sections. Performance  The read-side performance advantages of RCU over reader-writer lock- ing are shown in Figure  8.23. Quick Quiz 8.26:  WTF? How the heck do you expect me to believe that RCU 7 Similar to  brlock  in the 2.4 Linux kernel and to  lglock  in more recent Linux kernels. 183  1e-05  1e-04  0.001  0.01  0.1  1  10  100  1000  10000  0 2 4 6 8 10 12 14 16    O   v   e   r    h   e   a    d    (   n   a   n   o   s   e   c   o   n    d   s    ) Number of CPUs rcu rwlock Figure 8.23: Performance Advantage of RCU Over Reader-Writer Locking has a 100-femtosecond overhead when the clock period at 3GHz is more than 300  picoseconds ? Note that reader-writer locking is orders of magnitude slower than RCU on a single CPU, and is almost two  additional  orders of magnitude slower on 16 CPUs. In contrast, RCU scales quite well. In both cases, the error bars span a single standard deviation in either direction. A more moderate view may be obtained from a CONFIG_PREEMPT kernel, though RCU still beats reader-writer locking by between one and three orders of magnitude, as shown in Figure  8.24.  Note the high variability of reader-writer locking at larger numbers of CPUs. The error bars span a single standard deviation in either direction. Ofcourse, thelowperformanceofreader-writerlockinginFigure 8.24 isexaggerated by the unrealistic zero-length critical sections. The performance advantages of RCU become less significant as the overhead of the critical section increases, as shown in Figure  8.25  for a 16-CPU system, in which the y-axis represents the sum of the overhead of the read-side primitives and that of the critical section. Quick Quiz 8.27:  Why does both the variability and overhead of rwlock decrease as the critical-section overhead increases? However, this observation must be tempered by the fact that a number of system calls (and thus any RCU read-side critical sections that they contain) can complete within a few microseconds. In addition, as is discussed in the next section, RCU read-side primitives are almost entirely deadlock-immune. Deadlock Immunity  Although RCU offers significant performance advantages for read-mostly workloads, one of the primary reasons for creating RCU in the first place was in fact its immunity to read-side deadlocks. This immunity stems from the fact that RCU read-side primitives do not block, spin, or even do backwards branches, so that their execution time is deterministic. It is therefore impossible for them to participate in 184  1  10  100  1000  10000  0 2 4 6 8 10 12 14 16    O   v   e   r    h   e   a    d    (   n   a   n   o   s   e   c   o   n    d   s    ) Number of CPUs rcu rwlock Figure 8.24: Performance Advantage of Preemptible RCU Over Reader-Writer Locking a deadlock cycle. 
Quick Quiz 8.28:  Is there an exception to this deadlock immunity, and if so, what sequence of events could lead to deadlock? An interesting consequence of RCU’s read-side deadlock immunity is that it is possible to unconditionally upgrade an RCU reader to an RCU updater. Attempting to do such an upgrade with reader-writer locking results in deadlock. A sample code fragment that does an RCU read-to-update upgrade follows: 1 rcu_read_lock(); 2 list_for_each_entry_rcu(p, &head, list_field) { 3 do_something_with(p); 4 if (need_update(p)) { 5 spin_lock(my_lock); 6 do_update(p); 7 spin_unlock(&my_lock); 8 } 9 } 10 rcu_read_unlock(); Note that  do_update()  is executed under the protection of the lock  and   under RCU read-side protection. Another interesting consequence of RCU’s deadlock immunity is its immunity to a large class of priority inversion problems. For example, low-priority RCU readers cannot prevent a high-priority RCU updater from acquiring the update-side lock. Similarly, a low-priority RCU updater cannot prevent high-priority RCU readers from entering an RCU read-side critical section. Quick Quiz 8.29:  Immunity to both deadlock and priority inversion??? Sounds too good to be true. Why should I believe that this is even possible? Realtime Latency  Because RCU read-side primitives neither spin nor block, they offer excellent realtime latencies. In addition, as noted earlier, this means that they are immune to priority inversion involving the RCU read-side primitives and locks. However, RCU is susceptible to more subtle priority-inversion scenarios, for exam- 185  0  2000  4000  6000  8000  10000  12000  0 2 4 6 8 10    O   v   e   r    h   e   a    d    (   n   a   n   o   s   e   c   o   n    d   s    ) Critical-Section Duration (microseconds) rcu rwlock Figure 8.25: Comparison of RCU to Reader-Writer Locking as Function of Critical- Section Duration RCU reader rwlock reader rwlock reader rwlock reader RCU reader RCU reader RCU reader RCU reader RCU reader spinrwlock writer RCU updater spin spin spin Update Received rwlock reader rwlock reader rwlock reader RCU reader RCU reader RCU reader Time Figure 8.26: Response Time of RCU vs. Reader-Writer Locking ple, a high-priority process blocked waiting for an RCU grace period to elapse can be blocked by low-priority RCU readers in -rt kernels. This can be solved by using RCU priority boosting [ McK07d,  GMTW08] . RCU Readers and Updaters Run Concurrently  Because RCU readers never spin nor block, and because updaters are not subject to any sort of rollback or abort semantics, RCU readers and updaters must necessarily run concurrently. This means that RCU readers might access stale data, and might even see inconsistencies, either of which can render conversion from reader-writer locking to RCU non-trivial. However, in a surprisingly large number of situations, inconsistencies and stale data are not problems. The classic example is the networking routing table. Because routing updates can take considerable time to reach a given system (seconds or even minutes), 186 the system will have been sending packets the wrong way for quite some time when the update arrives. It is usually not a problem to continue sending updates the wrong way for a few additional milliseconds. Furthermore, because RCU updaters can make changes without waiting for RCU readers to finish, the RCU readers might well see the change more quickly than would batch-fair reader-writer-locking readers, as shown in Figure  8.26 . 
Once the update is received, the rwlock writer cannot proceed until the last reader completes, and subsequent readers cannot proceed until the writer completes. However, these subsequent readers are guaranteed to see the new value, as indicated by the green shading. In contrast, RCU readers and updaters do not block each other, which permits the RCU readers to see the updated values sooner. Of course, because their execution overlaps that of the RCU updater,  all  of the RCU readers might well see updated values, including the three readers that started before the update. Nevertheless only the RCU readers with green shading are  guaranteed   to see the updated values, again, as indicated by the green shading. Reader-writer locking and RCU simply provide different guarantees. With reader- writer locking, any reader that begins after the writer begins is guaranteed to see new values, and any reader that attempts to begin while the writer is spinning might or might not see new values, depending on the reader/writer preference of the rwlock implementation in question. In contrast, with RCU, any reader that begins after the updater completes is guaranteed to see new values, and any reader that completes after the updater begins might or might not see new values, depending on timing. The key point here is that, although reader-writer locking does indeed guarantee consistency within the confines of the computer system, there are situations where this consistency comes at the price of increased  inconsistency  with the outside world. In other words, reader-writer locking obtains internal consistency at the price of silently stale data with respect to the outside world. Nevertheless, there are situations where inconsistency and stale data within the confines of the system cannot be tolerated. Fortunately, there are a number of approaches that avoid inconsistency and stale data  [ McK04 ,  ACMS03 ] , and some methods based on reference counting are discussed in Section  8.1. Low-Priority RCU Readers Can Block High-Priority Reclaimers  In Realtime RCU[ GMTW08 ](seeSection D.4 ), SRCU[ McK06b ](seeSection D.1 ), orQRCU [ McK07f  ] (see Section  11.6 ), each of which is described in the final installment of this series, a preempted reader will prevent a grace period from completing, even if a high-priority task is blocked waiting for that grace period to complete. Realtime RCU can avoid this problem by substituting  call_rcu()  for  synchronize_rcu()  or by using RCU priority boosting [ McK07d ,  GMTW08 ] , which is still in experimental status as of early 2008. It might become necessary to augment SRCU and QRCU with priority boosting, but not before a clear real-world need is demonstrated. RCU Grace Periods Extend for Many Milliseconds  With the exception of QRCU and several of the “toy” RCU implementations described in Section  8.3.5,  RCU grace periods extend for multiple milliseconds. Although there are a number of techniques to render such long delays harmless, including use of the asynchronous interfaces where available ( call_rcu()  and  call_rcu_bh() ), this situation is a major reason for the rule of thumb that RCU be used in read-mostly situations. 187 Comparison of Reader-Writer Locking and RCU Code  In the best case, the con- version from reader-writer locking to RCU is quite simple, as shown in Figures  8.27, 8.28,  and  8.29,  all taken from Wikipedia [ MPA + 06 ]. 
1 struct el { 1 struct el { 2 struct list_head lp; 2 struct list_head lp; 3 long key; 3 long key; 4 spinlock_t mutex; 4 spinlock_t mutex; 5 int data; 5 int data; 6 / *  Other data fields  * / 6 / *  Other data fields  * / 7 }; 7 }; 8 DEFINE_RWLOCK(listmutex); 8 DEFINE_SPINLOCK(listmutex); 9 LIST_HEAD(head); 9 LIST_HEAD(head); Figure 8.27: Converting Reader-Writer Locking to RCU: Data 1 int search(long key, int  * result) 1 int search(long key, int  * result) 2 { 2 { 3 struct el  * p; 3 struct el  * p; 4 4 5 read_lock(&listmutex); 5 rcu_read_lock(); 6 list_for_each_entry(p, &head, lp) { 6 list_for_each_entry_rcu(p, &head, lp) { 7 if (p->key == key) { 7 if (p->key == key) { 8  * result = p->data; 8  * result = p->data; 9 read_unlock(&listmutex); 9 rcu_read_unlock(); 10 return 1; 10 return 1; 11 } 11 } 12 } 12 } 13 read_unlock(&listmutex); 13 rcu_read_unlock(); 14 return 0; 14 return 0; 15 } 15 } Figure 8.28: Converting Reader-Writer Locking to RCU: Search 1 int delete(long key) 1 int delete(long key) 2 { 2 { 3 struct el  * p; 3 struct el  * p; 4 4 5 write_lock(&listmutex); 5 spin_lock(&listmutex); 6 list_for_each_entry(p, &head, lp) { 6 list_for_each_entry(p, &head, lp) { 7 if (p->key == key) { 7 if (p->key == key) { 8 list_del(&p->lp); 8 list_del_rcu(&p->lp); 9 write_unlock(&listmutex); 9 spin_unlock(&listmutex); 10 synchronize_rcu(); 10 kfree(p); 11 kfree(p); 11 return 1; 12 return 1; 12 } 13 } 13 } 14 } 14 write_unlock(&listmutex); 15 spin_unlock(&listmutex); 15 return 0; 16 return 0; 16 } 17 } Figure 8.29: Converting Reader-Writer Locking to RCU: Deletion More-elaborate cases of replacing reader-writer locking with RCU are beyond the scope of this document. 8.3.3.2 RCU is a Restricted Reference-Counting Mechanism Because grace periods are not allowed to complete while there is an RCU read-side critical section in progress, the RCU read-side primitives may be used as a restricted 188  1  10  100  1000  10000  0 2 4 6 8 10 12 14 16    O   v   e   r    h   e   a    d    (   n   a   n   o   s   e   c   o   n    d   s    ) Number of CPUs rcu refcnt Figure 8.30: Performance of RCU vs. Reference Counting reference-counting mechanism. For example, consider the following code fragment: 1 rcu_read_lock(); / *  acquire reference.  * / 2 p = rcu_dereference(head); 3 / *  do something with p.  * / 4 rcu_read_unlock(); / *  release reference.  * / The  rcu_read_lock()  primitive can be thought of as acquiring a reference to  p , because a grace period starting after the  rcu_dereference()  assigns to  p cannot possibly end until after we reach the matching  rcu_read_unlock() . This reference-counting scheme is restricted in that we are not allowed to block in RCU read-side critical sections, nor are we permitted to hand off an RCU read-side critical section from one task to another. Regardless of these restrictions, the following code can safely delete  p : 1 spin_lock(&mylock); 2 p = head; 3 rcu_assign_pointer(head, NULL); 4 spin_unlock(&mylock); 5 / *  Wait for all references to be released.  * / 6 synchronize_rcu(); 7 kfree(p); The assignment to  head  prevents any future references to  p  from being acquired, and the  synchronize_rcu()  waits for any previously acquired references to be released. Quick Quiz 8.30:  But wait! This is exactly the same code that might be used when thinking of RCU as a replacement for reader-writer locking! What gives? Of course, RCU can also be combined with traditional reference counting, as has been discussed on LKML and as summarized in Section  8.1 . 
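A minimal sketch of such a combination appears below: the RCU read-side critical section guarantees that the element cannot be freed out from under the reader while a conventional reference is being acquired. The struct foo, its refcnt field, the global pointer gp, and foo_get() are hypothetical, and the updater side, which must initialize refcnt and drop its own reference only after removing the element, is not shown.

    struct foo {
            atomic_t refcnt;
            /* Other data fields. */
    };
    static struct foo *gp;           /* Hypothetical RCU-protected global pointer. */

    struct foo *foo_get(void)
    {
            struct foo *p;

            rcu_read_lock();
            p = rcu_dereference(gp);
            if (p != NULL && !atomic_inc_not_zero(&p->refcnt))
                    p = NULL;        /* Element is being deleted; caller must handle failure. */
            rcu_read_unlock();
            return p;
    }

Once foo_get() returns a non-NULL pointer, the caller holds a conventional reference and may block, something that is forbidden within the RCU read-side critical section itself.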
But why bother? Again, part of the answer is performance, as shown in Figure  8.30, again showing data taken on a 16-CPU 3GHz Intel x86 system. Quick Quiz 8.31:  Why the dip in refcnt overhead near 6 CPUs? And, as with reader-writer locking, the performance advantages of RCU are most 189  0  2000  4000  6000  8000  10000  12000  0 2 4 6 8 10    O   v   e   r    h   e   a    d    (   n   a   n   o   s   e   c   o   n    d   s    ) Critical-Section Duration (microseconds) rcu refcnt Figure 8.31: Response Time of RCU vs. Reference Counting pronounced for short-duration critical sections, as shown Figure  8.31  for a 16-CPU system. In addition, as with reader-writer locking, many system calls (and thus any RCU read-side critical sections that they contain) complete in a few microseconds. However, the restrictions that go with RCU can be quite onerous. For example, in many cases, the prohibition against sleeping while in an RCU read-side critical section would defeat the entire purpose. The next section looks at ways of addressing this problem, while also reducing the complexity of traditional reference counting, at least in some cases. 8.3.3.3 RCU is a Bulk Reference-Counting Mechanism As noted in the preceding section, traditional reference counters are usually associated with a specific data structure, or perhaps a specific group of data structures. However, maintaining a single global reference counter for a large variety of data structures typically results in bouncing the cache line containing the reference count. Such cache- line bouncing can severely degrade performance. In contrast, RCU’s light-weight read-side primitives permit extremely frequent read- side usage with negligible performance degradation, permitting RCU to be used as a “bulk reference-counting” mechanism with little or no performance penalty. Situations where a reference must be held by a single task across a section of code that blocks may be accommodated with Sleepable RCU (SRCU) [ McK06b ] . This fails to cover the not-uncommon situation where a reference is “passed” from one task to another, for example, when a reference is acquired when starting an I/O and released in the corresponding completion interrupt handler. (In principle, this could be handled by the SRCU implementation, but in practice, it is not yet clear whether this is a good tradeoff.) Of course, SRCU brings restrictions of its own, namely that the return value from srcu_read_lock()  be passed into the corresponding  srcu_read_unlock() , and that no SRCU primitives be invoked from hardware interrupt handlers or from non-maskable interrupt (NMI) handlers. The jury is still out as to how much of a 190 1 int delete(int key) 2 { 3 struct element  * p; 4 int b; 5 6 b = hashfunction(key); 7 rcu_read_lock(); 8 p = rcu_dereference(hashtable[b]); 9 if (p == NULL || p->key != key) { 10 rcu_read_unlock(); 11 return 0; 12 } 13 spin_lock(&p->lock); 14 if (hashtable[b] == p && p->key == key) { 15 rcu_read_unlock(); 16 rcu_assign_pointer(hashtable[b], NULL); 17 spin_unlock(&p->lock); 18 synchronize_rcu(); 19 kfree(p); 20 return 1; 21 } 22 spin_unlock(&p->lock); 23 rcu_read_unlock(); 24 return 0; 25 } Figure 8.32: Existence Guarantees Enable Per-Element Locking problem is presented by these restrictions, and as to how they can best be handled. 8.3.3.4 RCU is a Poor Man’s Garbage Collector A not-uncommon exclamation made by people first learning about RCU is “RCU is sort of like a garbage collector!”. 
This exclamation has a large grain of truth, but it can also be misleading. Perhaps the best way to think of the relationship between RCU and automatic garbage collectors (GCs) is that RCU resembles a GC in that the  timing  of collection is automatically determined, but that RCU differs from a GC in that: (1) the programmer must manually indicate when a given data structure is eligible to be collected, and (2) the programmer must manually mark the RCU read-side critical sections where references might legitimately be held. Despite these differences, the resemblance does go quite deep, and has appeared in at least one theoretical analysis of RCU. Furthermore, the first RCU-like mechanism I am aware of used a garbage collector to handle the grace periods. Nevertheless, a better way of thinking of RCU is described in the following section. 8.3.3.5 RCU is a Way of Providing Existence Guarantees Gamsa et al. [ GKAS99 ]  discuss existence guarantees and describe how a mechanism resemblingRCUcanbeusedtoprovidetheseexistenceguarantees (seesection5onpage 7 of the PDF), and Section  6.4  discusses how to guarantee existence via locking, along with the ensuing disadvantages of doing so. The effect is that if any RCU-protected data element is accessed within an RCU read-side critical section, that data element is guaranteed to remain in existence for the duration of that RCU read-side critical section. Figure  8.32  demonstrates how RCU-based existence guarantees can enable per- element locking via a function that deletes an element from a hash table. Line 6 computes a hash function, and line 7 enters an RCU read-side critical section. If line 9 191 finds that the corresponding bucket of the hash table is empty or that the element present is not the one we wish to delete, then line 10 exits the RCU read-side critical section and line 11 indicates failure. Quick Quiz 8.32:  What if the element we need to delete is not the first element of  the list on line 9 of Figure  8.32 ? Otherwise, line 13 acquires the update-side spinlock, and line 14 then checks that the element is still the one that we want. If so, line 15 leaves the RCU read-side critical section, line 16 removes it from the table, line 17 releases the lock, line 18 waits for all pre-existing RCU read-side critical sections to complete, line 19 frees the newly removed element, and line 20 indicates success. If the element is no longer the one we want, line 22 releases the lock, line 23 leaves the RCU read-side critical section, and line 24 indicates failure to delete the specified key. Quick Quiz 8.33:  Why is it OK to exit the RCU read-side critical section on line 15 of Figure  8.32  before releasing the lock on line 17? Quick Quiz 8.34:  Why not exit the RCU read-side critical section on line 23 of  Figure  8.32  before releasing the lock on line 22? Alert readers will recognize this as only a slight variation on the original “RCU is a way of waiting for things to finish” theme, which is addressed in Section  8.3.3.7. They might also note the deadlock-immunity advantages over the lock-based existence guarantees discussed in Section  6.4. 8.3.3.6 RCU is a Way of Providing Type-Safe Memory A number of lockless algorithms do not require that a given data element keep the same identity through a given RCU read-side critical section referencing it—but only if that data element retains the same type. 
In other words, these lockless algorithms can tolerate a given data element being freed and reallocated as the same type of structure while they are referencing it, but must prohibit a change in type. This guarantee, called "type-safe memory" in academic literature [GC96], is weaker than the existence guarantees in the previous section, and is therefore quite a bit harder to work with. Type-safe memory algorithms in the Linux kernel make use of slab caches, specially marking these caches with SLAB_DESTROY_BY_RCU so that RCU is used when returning a freed-up slab to system memory. This use of RCU guarantees that any in-use element of such a slab will remain in that slab, thus retaining its type, for the duration of any pre-existing RCU read-side critical sections.

Quick Quiz 8.35: But what if there is an arbitrarily long series of RCU read-side critical sections in multiple threads, so that at any point in time there is at least one thread in the system executing in an RCU read-side critical section? Wouldn't that prevent any data from a SLAB_DESTROY_BY_RCU slab ever being returned to the system, possibly resulting in OOM events?

These algorithms typically use a validation step that checks to make sure that the newly referenced data structure really is the one that was requested [LS86, Section 2.5]. These validation checks require that portions of the data structure remain untouched by the free-reallocate process. Such validation checks are usually very hard to get right, and can hide subtle and difficult bugs.

Therefore, although type-safety-based lockless algorithms can be extremely helpful in a very few difficult situations, you should instead use existence guarantees where possible. Simpler is after all almost always better!

8.3.3.7 RCU is a Way of Waiting for Things to Finish

As noted in Section 8.3.2, an important component of RCU is a way of waiting for RCU readers to finish. One of RCU's great strengths is that it allows you to wait for each of thousands of different things to finish without having to explicitly track each and every one of them, and without having to worry about the performance degradation, scalability limitations, complex deadlock scenarios, and memory-leak hazards that are inherent in schemes that use explicit tracking.

In this section, we will show how synchronize_sched()'s read-side counterparts (which include anything that disables preemption, along with hardware operations and primitives that disable interrupts) permit you to implement interactions with non-maskable interrupt (NMI) handlers that would be quite difficult if using locking. This approach has been called "Pure RCU" [McK04], and it is used in a number of places in the Linux kernel. The basic form of such "Pure RCU" designs is as follows:

1. Make a change, for example, to the way that the OS reacts to an NMI.

2. Wait for all pre-existing read-side critical sections to completely finish (for example, by using the synchronize_sched() primitive). The key observation here is that subsequent RCU read-side critical sections are guaranteed to see whatever change was made.

3. Clean up, for example, return status indicating that the change was successfully made.

The remainder of this section presents example code adapted from the Linux kernel. In this example, the timer_stop function uses synchronize_sched() to ensure that all in-flight NMI notifications have completed before freeing the associated resources. A simplified version of this code is shown in Figure 8.33.
  1 struct profile_buffer {
  2   long size;
  3   atomic_t entry[0];
  4 };
  5 static struct profile_buffer *buf = NULL;
  6
  7 void nmi_profile(unsigned long pcvalue)
  8 {
  9   struct profile_buffer *p = rcu_dereference(buf);
 10
 11   if (p == NULL)
 12     return;
 13   if (pcvalue >= p->size)
 14     return;
 15   atomic_inc(&p->entry[pcvalue]);
 16 }
 17
 18 void nmi_stop(void)
 19 {
 20   struct profile_buffer *p = buf;
 21
 22   if (p == NULL)
 23     return;
 24   rcu_assign_pointer(buf, NULL);
 25   synchronize_sched();
 26   kfree(p);
 27 }

Figure 8.33: Using RCU to Wait for NMIs to Finish

Lines 1-4 define a profile_buffer structure, containing a size and an indefinite array of entries. Line 5 defines a pointer to a profile buffer, which is presumably initialized elsewhere to point to a dynamically allocated region of memory.

Lines 7-16 define the nmi_profile() function, which is called from within an NMI handler. As such, it cannot be preempted, nor can it be interrupted by a normal interrupt handler; however, it is still subject to delays due to cache misses, ECC errors, and cycle stealing by other hardware threads within the same core. Line 9 gets a local pointer to the profile buffer using the rcu_dereference() primitive to ensure memory ordering on DEC Alpha, and lines 11 and 12 exit from this function if there is no profile buffer currently allocated, while lines 13 and 14 exit from this function if the pcvalue argument is out of range. Otherwise, line 15 increments the profile-buffer entry indexed by the pcvalue argument. Note that storing the size with the buffer guarantees that the range check matches the buffer, even if a large buffer is suddenly replaced by a smaller one.

Lines 18-27 define the nmi_stop() function, where the caller is responsible for mutual exclusion (for example, holding the correct lock). Line 20 fetches a pointer to the profile buffer, and lines 22 and 23 exit the function if there is no buffer. Otherwise, line 24 NULLs out the profile-buffer pointer (using the rcu_assign_pointer() primitive to maintain memory ordering on weakly ordered machines), and line 25 waits for an RCU Sched grace period to elapse, in particular, waiting for all non-preemptible regions of code, including NMI handlers, to complete. Once execution continues at line 26, we are guaranteed that any instance of nmi_profile() that obtained a pointer to the old buffer has returned. It is therefore safe to free the buffer, in this case using the kfree() primitive.

Quick Quiz 8.36: Suppose that the nmi_profile() function was preemptible. What would need to change to make this example work correctly?

In short, RCU makes it easy to dynamically switch among profile buffers (you just try doing this efficiently with atomic operations, or at all with locking!). However, RCU is normally used at a higher level of abstraction, as was shown in the previous sections.

8.3.3.8 RCU Usage Summary

At its core, RCU is nothing more nor less than an API that provides:

1. a publish-subscribe mechanism for adding new data,

2. a way of waiting for pre-existing RCU readers to finish, and

3. a discipline of maintaining multiple versions to permit change without harming or unduly delaying concurrent RCU readers.

That said, it is possible to build higher-level constructs on top of RCU, including the reader-writer-locking, reference-counting, and existence-guarantee constructs listed in the earlier sections.
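As a concrete illustration of these three elements, consider the following minimal sketch of an updater and a reader sharing a single RCU-protected pointer. The gptr pointer, the struct foo type, and the update_foo()/read_foo() functions are hypothetical, and serialization of concurrent updaters is omitted; the sketch simply shows rcu_assign_pointer() publishing a new version, rcu_dereference() subscribing to the current version within a read-side critical section, and synchronize_rcu() waiting for pre-existing readers before the old version is freed.

  struct foo {
    int a;
  };
  struct foo *gptr;                    /* hypothetical RCU-protected pointer */

  void update_foo(int new_a)           /* updaters must be serialized elsewhere */
  {
    struct foo *newp;
    struct foo *oldp;

    newp = kmalloc(sizeof(*newp), GFP_KERNEL);
    if (newp == NULL)
      return;
    newp->a = new_a;
    oldp = gptr;
    rcu_assign_pointer(gptr, newp);    /* publish the new version */
    synchronize_rcu();                 /* wait for pre-existing readers */
    kfree(oldp);                       /* now safe to reclaim the old version */
  }

  int read_foo(void)
  {
    struct foo *p;
    int a = -1;

    rcu_read_lock();                   /* begin read-side critical section */
    p = rcu_dereference(gptr);         /* subscribe to the current version */
    if (p != NULL)
      a = p->a;
    rcu_read_unlock();                 /* end read-side critical section */
    return a;
  }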
Furthermore, I have no doubt that the Linux community will continue to find interesting new uses for RCU, as well as for any of a number of other synchronization primitives.

In the meantime, Figure 8.34 shows some rough rules of thumb on where RCU is most helpful.

(Figure 8.34: RCU Areas of Applicability. The figure arranges four workload classes from most to least RCU-friendly: "Read-Mostly, Stale & Inconsistent Data OK" (RCU Works Great!!!); "Read-Mostly, Need Consistent Data" (RCU Works Well); "Read-Write, Need Consistent Data" (RCU Might Be OK...); and "Update-Mostly, Need Consistent Data" (RCU is Very Unlikely to be the Right Tool For The Job, But it Can: (1) Provide Existence Guarantees For Update-Friendly Mechanisms, (2) Provide Wait-Free Read-Side Primitives for Real-Time Use).)

As shown in the blue box, RCU works best if you have read-mostly data where stale and inconsistent data is permissible (but see below for more information on stale and inconsistent data). The canonical example of this case in the Linux kernel is routing tables. Because it may have taken many seconds or even minutes for the routing updates to propagate across the Internet, the system has been sending packets the wrong way for quite some time. Having some small probability of continuing to send some of them the wrong way for a few more milliseconds is almost never a problem.

If you have a read-mostly workload where consistent data is required, RCU works well, as shown by the green box. One example of this case is the Linux kernel's mapping from user-level System-V semaphore IDs to the corresponding in-kernel data structures. Semaphores tend to be used far more frequently than they are created and destroyed, so this mapping is read-mostly. However, it would be erroneous to perform a semaphore operation on a semaphore that has already been deleted. This need for consistency is handled by using the lock in the in-kernel semaphore data structure, along with a "deleted" flag that is set when deleting a semaphore. If a user ID maps to an in-kernel data structure with the "deleted" flag set, the data structure is ignored, so that the user ID is flagged as invalid. (A minimal sketch of this pattern appears below.)

Although this requires that the readers acquire a lock for the data structure representing the semaphore itself, it allows them to dispense with locking for the mapping data structure. The readers therefore locklessly traverse the tree used to map from ID to data structure, which in turn greatly improves performance, scalability, and real-time response.

As indicated by the yellow box, RCU can also be useful for read-write workloads where consistent data is required, although usually in conjunction with a number of other synchronization primitives. For example, the directory-entry cache in recent Linux kernels uses RCU in conjunction with sequence locks, per-CPU locks, and per-data-structure locks to allow lockless traversal of pathnames in the common case. Although RCU can be very beneficial in this read-write case, such use is often more complex than that of the read-mostly cases.

Finally, as indicated by the red box, update-mostly workloads requiring consistent data are rarely good places to use RCU, though there are some exceptions [DMS+12]. In addition, as noted in Section 8.3.3.6, within the Linux kernel, the SLAB_DESTROY_BY_RCU slab-allocator flag provides type-safe memory to RCU readers, which can greatly simplify non-blocking synchronization and other lockless algorithms.
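As promised above, here is a minimal sketch of the deleted-flag pattern used for the read-mostly-but-consistent case. The struct sem_kern type, the id_to_sem() mapping function, and all field names are hypothetical, and the sketch assumes that deletion marks ->deleted and removes the structure from the mapping while holding ->lock, deferring the actual free until a grace period has elapsed. The RCU read-side critical section protects the lockless ID-to-structure mapping, while the per-structure lock and the ->deleted flag provide the required consistency.

  struct sem_kern {
    spinlock_t lock;
    int deleted;
    /* ... other fields ... */
  };

  /* Hypothetical: look up a semaphore by ID and return it locked, or NULL. */
  struct sem_kern *sem_lock_by_id(int id)
  {
    struct sem_kern *sem;

    rcu_read_lock();
    sem = id_to_sem(id);            /* lockless, RCU-protected mapping (hypothetical) */
    if (sem == NULL) {
      rcu_read_unlock();
      return NULL;
    }
    spin_lock(&sem->lock);
    if (sem->deleted) {             /* deleted while we were looking it up? */
      spin_unlock(&sem->lock);
      rcu_read_unlock();
      return NULL;
    }
    rcu_read_unlock();              /* ->lock now keeps the deleter at bay */
    return sem;                     /* caller must eventually release sem->lock */
  }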
In short, RCU is an API that includes a publish-subscribe mechanism for adding new data, a way of waiting for pre-existing RCU readers to finish, and a discipline of maintaining multiple versions to allow updates to avoid harming or unduly delaying concurrent RCU readers. This RCU API is best suited for read-mostly situations, especially if stale and inconsistent data can be tolerated by the application.

8.3.4 RCU Linux-Kernel API

This section looks at RCU from the viewpoint of its Linux-kernel API. Section 8.3.4.1 presents RCU's wait-to-finish APIs, Section 8.3.4.2 presents RCU's publish-subscribe and version-maintenance APIs, and Section 8.3.4.3 shows where these APIs may be used. Finally, Section 8.3.4.4 presents concluding remarks.

8.3.4.1 RCU has a Family of Wait-to-Finish APIs

The most straightforward answer to "what is RCU" is that RCU is an API used in the Linux kernel, as summarized by Tables 8.4 and 8.5, which show the wait-for-RCU-readers portions of the non-sleepable and sleepable APIs, respectively, and by Table 8.6, which shows the publish/subscribe portions of the API.

If you are new to RCU, you might consider focusing on just one of the columns in Table 8.4, each of which summarizes one member of the Linux kernel's RCU API family. For example, if you are primarily interested in understanding how RCU is used in the Linux kernel, "RCU Classic" would be the place to start, as it is used most frequently. On the other hand, if you want to understand RCU for its own sake, "SRCU" has the simplest API. You can always come back for the other columns later. If you are already familiar with RCU, these tables can serve as a useful reference.

Quick Quiz 8.37: Why do some of the cells in Table 8.4 have exclamation marks ("!")?

The "RCU Classic" column corresponds to the original RCU implementation, in which RCU read-side critical sections are delimited by rcu_read_lock() and rcu_read_unlock(), which may be nested. The corresponding synchronous update-side primitives, synchronize_rcu(), along with its synonym synchronize_net(), wait for any currently executing RCU read-side critical sections to complete. The length of this wait is known as a "grace period". The asynchronous update-side primitive, call_rcu(), invokes a specified function with a specified argument after a subsequent grace period. For example, call_rcu(p,f); will result in the "RCU callback" f(p) being invoked after a subsequent grace period. There are situations, such as when unloading a Linux-kernel module that uses call_rcu(), when it is necessary to wait for all outstanding RCU callbacks to complete [McK07e]. The rcu_barrier() primitive does this job. Note that the more recent hierarchical RCU [McK08a] implementation described in Sections D.2 and D.3 also adheres to "RCU Classic" semantics.

Finally, RCU may be used to provide type-safe memory [GC96], as described in Section 8.3.3.6. In the context of RCU, type-safe memory guarantees that a given data element will not change type during any RCU read-side critical section that accesses it. To make use of RCU-based type-safe memory, pass SLAB_DESTROY_BY_RCU to kmem_cache_create(). It is important to note that SLAB_DESTROY_BY_RCU will in no way prevent kmem_cache_alloc() from immediately reallocating memory that was just now freed via kmem_cache_free()! In fact, the SLAB_DESTROY_BY_RCU-protected data structure just returned by rcu_dereference might be freed and reallocated an arbitrarily large number of times, even when under the protection of rcu_read_lock(). Instead, SLAB_DESTROY_BY_RCU operates by preventing kmem_cache_free() from returning a completely freed-up slab of data structures to the system until after an RCU grace period elapses. In short, although the data element might be freed and reallocated arbitrarily often, at least its type will remain the same.
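For example, a cache providing such type-safe memory might be created along the following lines. This is only a hedged sketch: the cache name and the struct foo type are hypothetical, and the exact kmem_cache_create() argument list has varied across kernel versions, so treat the details as illustrative rather than definitive.

  struct foo {
    spinlock_t lock;
    int key;
    /* ... */
  };

  static struct kmem_cache *foo_cache;

  /* Objects from this cache retain their type (struct foo) for the duration of
   * any pre-existing RCU read-side critical section: a freed object may be
   * immediately reallocated as another struct foo, but its slab is not returned
   * to the system until a grace period has elapsed. */
  static int foo_cache_init(void)
  {
    foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
                                  0, SLAB_DESTROY_BY_RCU, NULL);
    return foo_cache ? 0 : -ENOMEM;
  }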
Table 8.4: RCU Wait-to-Finish APIs

RCU Classic
  Purpose: Original
  Availability: 2.5.43
  Read-side primitives: rcu_read_lock() !, rcu_read_unlock() !
  Update-side primitives (synchronous): synchronize_rcu(), synchronize_net()
  Update-side primitives (asynchronous/callback): call_rcu() !
  Update-side primitives (wait for callbacks): rcu_barrier()
  Type-safe memory: SLAB_DESTROY_BY_RCU
  Read side constraints: No blocking
  Read side overhead: Preempt disable/enable (free on non-PREEMPT)
  Asynchronous update-side overhead: sub-microsecond
  Grace-period latency: 10s of milliseconds
  Non-PREEMPT_RT implementation: RCU Classic
  PREEMPT_RT implementation: Preemptible RCU

RCU BH
  Purpose: Prevent DDoS attacks
  Availability: 2.6.9
  Read-side primitives: rcu_read_lock_bh(), rcu_read_unlock_bh()
  Update-side primitives (synchronous): (none)
  Update-side primitives (asynchronous/callback): call_rcu_bh()
  Update-side primitives (wait for callbacks): rcu_barrier_bh()
  Read side constraints: No irq enabling
  Read side overhead: BH disable/enable
  Asynchronous update-side overhead: sub-microsecond
  Grace-period latency: 10s of milliseconds
  Non-PREEMPT_RT implementation: RCU BH
  PREEMPT_RT implementation: Realtime RCU

RCU Sched
  Purpose: Wait for preempt-disable regions, hardirqs, & NMIs
  Availability: 2.6.12
  Read-side primitives: preempt_disable(), preempt_enable() (and friends)
  Update-side primitives (synchronous): synchronize_sched()
  Update-side primitives (asynchronous/callback): call_rcu_sched()
  Update-side primitives (wait for callbacks): rcu_barrier_sched()
  Read side constraints: No blocking
  Read side overhead: Preempt disable/enable (free on non-PREEMPT)
  Asynchronous update-side overhead: sub-microsecond
  Grace-period latency: 10s of milliseconds
  Non-PREEMPT_RT implementation: RCU Classic
  PREEMPT_RT implementation: Forced Schedule on all CPUs

Realtime RCU
  Purpose: Realtime response
  Availability: 2.6.26
  Read-side primitives: rcu_read_lock(), rcu_read_unlock()
  Update-side primitives (synchronous): synchronize_rcu(), synchronize_net()
  Update-side primitives (asynchronous/callback): call_rcu()
  Update-side primitives (wait for callbacks): rcu_barrier()
  Type-safe memory: SLAB_DESTROY_BY_RCU
  Read side constraints: Only preemption and lock acquisition
  Read side overhead: Simple instructions, irq disable/enable
  Grace-period latency: 10s of milliseconds
  Non-PREEMPT_RT implementation: Preemptible RCU
  PREEMPT_RT implementation: Realtime RCU

Table 8.5: Sleepable RCU Wait-to-Finish APIs

SRCU
  Purpose: Sleeping readers
  Availability: 2.6.19
  Read-side primitives: srcu_read_lock(), srcu_read_unlock()
  Update-side primitives (synchronous): synchronize_srcu()
  Update-side primitives (asynchronous/callback): N/A
  Update-side primitives (wait for callbacks): N/A
  Read side constraints: No synchronize_srcu()
  Read side overhead: Simple instructions, preempt disable/enable
  Asynchronous update-side overhead: N/A
  Grace-period latency: 10s of milliseconds
  Non-PREEMPT_RT implementation: SRCU
  PREEMPT_RT implementation: SRCU

QRCU
  Purpose: Sleeping readers and fast grace periods
  Availability: (not yet accepted into the Linux kernel)
  Read-side primitives: qrcu_read_lock(), qrcu_read_unlock()
  Update-side primitives (synchronous): synchronize_qrcu()
  Update-side primitives (asynchronous/callback): N/A
  Update-side primitives (wait for callbacks): N/A
  Read side constraints: No synchronize_qrcu()
  Read side overhead: Atomic increment and decrement of shared variable
  Asynchronous update-side overhead: N/A
  Grace-period latency: 10s of nanoseconds in absence of readers
  Non-PREEMPT_RT implementation: N/A
  PREEMPT_RT implementation: N/A

Quick Quiz 8.38: How do you prevent a huge number of RCU read-side critical sections from indefinitely blocking a synchronize_rcu() invocation?

Quick Quiz 8.39: The synchronize_rcu() API waits for all pre-existing interrupt handlers to complete, right?

In the "RCU BH" column, rcu_read_lock_bh() and rcu_read_unlock_bh() delimit RCU read-side critical sections, and call_rcu_bh() invokes the specified function and argument after a subsequent grace period. Note that RCU BH does not have a synchronous synchronize_rcu_bh() interface, though one could easily be added if required.
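As a rough sketch of how the RCU BH members of the family fit together, consider the following hypothetical softirq-time configuration lookup. The struct conf type, the cur_conf pointer, and the function names are all made up for illustration: the reader uses rcu_read_lock_bh() and rcu_read_unlock_bh() to delimit its critical section, while the updater publishes a new structure and uses call_rcu_bh() to defer freeing the old one until a BH grace period has elapsed.

  struct conf {
    struct rcu_head rcu;
    int setting;
  };
  static struct conf *cur_conf;            /* hypothetical RCU-BH-protected pointer */

  static void conf_free(struct rcu_head *head)
  {
    kfree(container_of(head, struct conf, rcu));
  }

  /* Reader, for example called from softirq context. */
  int conf_get_setting(void)
  {
    struct conf *p;
    int setting = 0;

    rcu_read_lock_bh();
    p = rcu_dereference(cur_conf);
    if (p != NULL)
      setting = p->setting;
    rcu_read_unlock_bh();
    return setting;
  }

  /* Updater: callers serialize among themselves, for example with a lock. */
  void conf_update(struct conf *newp)
  {
    struct conf *oldp = cur_conf;

    rcu_assign_pointer(cur_conf, newp);
    if (oldp != NULL)
      call_rcu_bh(&oldp->rcu, conf_free);  /* free after a BH grace period */
  }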
Quick Quiz 8.40: What happens if you mix and match? For example, suppose you use rcu_read_lock() and rcu_read_unlock() to delimit RCU read-side critical sections, but then use call_rcu_bh() to post an RCU callback?

Quick Quiz 8.41: Hardware interrupt handlers can be thought of as being under the protection of an implicit rcu_read_lock_bh(), right?

In the "RCU Sched" column, anything that disables preemption acts as an RCU read-side critical section, and synchronize_sched() waits for the corresponding RCU grace period. This RCU API family was added in the 2.6.12 kernel, which split the old synchronize_kernel() API into the current synchronize_rcu() (for RCU Classic) and synchronize_sched() (for RCU Sched). Note that RCU Sched did not originally have an asynchronous call_rcu_sched() interface, but one was added in 2.6.26. In accordance with the quasi-minimalist philosophy of the Linux community, APIs are added on an as-needed basis.

Quick Quiz 8.42: What happens if you mix and match RCU Classic and RCU Sched?

Quick Quiz 8.43: In general, you cannot rely on synchronize_sched() to wait for all pre-existing interrupt handlers, right?

The "Realtime RCU" column has the same API as does RCU Classic, the only difference being that RCU read-side critical sections may be preempted and may block while acquiring spinlocks. The design of Realtime RCU is described elsewhere [McK07a].

Quick Quiz 8.44: Why do both SRCU and QRCU lack asynchronous call_srcu() or call_qrcu() interfaces?

The "SRCU" column in Table 8.5 displays a specialized RCU API that permits general sleeping in RCU read-side critical sections (see Appendix D.1 for more details). Of course, use of synchronize_srcu() in an SRCU read-side critical section can result in self-deadlock, so should be avoided. SRCU differs from earlier RCU implementations in that the caller allocates an srcu_struct for each distinct SRCU usage. This approach prevents SRCU read-side critical sections from blocking unrelated synchronize_srcu() invocations. In addition, in this variant of RCU, srcu_read_lock() returns a value that must be passed into the corresponding srcu_read_unlock().

The "QRCU" column presents an RCU implementation with the same API structure as SRCU, but optimized for extremely low-latency grace periods in the absence of readers, as described elsewhere [McK07f]. As with SRCU, use of synchronize_qrcu() in a QRCU read-side critical section can result in self-deadlock, so should be avoided. Although QRCU has not yet been accepted into the Linux kernel, it is worth mentioning given that it is the only kernel-level RCU implementation that can boast deep sub-microsecond grace-period latencies.

Quick Quiz 8.45: Under what conditions can synchronize_srcu() be safely used within an SRCU read-side critical section?

The Linux kernel currently has a surprising number of RCU APIs and implementations. There is some hope of reducing this number, evidenced by the fact that a given build of the Linux kernel currently has at most three implementations behind four APIs (given that RCU Classic and Realtime RCU share the same API). However, careful inspection and analysis will be required, just as would be required in order to eliminate one of the many locking APIs.

The various RCU APIs are distinguished by the forward-progress guarantees that their RCU read-side critical sections must provide, and also by their scope, as follows:
1. RCU BH: read-side critical sections must guarantee forward progress against everything except for NMI and interrupt handlers, but not including software-interrupt (softirq) handlers. RCU BH is global in scope.

2. RCU Sched: read-side critical sections must guarantee forward progress against everything except for NMI and irq handlers, including softirq handlers. RCU Sched is global in scope.

3. RCU (both classic and real-time): read-side critical sections must guarantee forward progress against everything except for NMI handlers, irq handlers, softirq handlers, and (in the real-time case) higher-priority real-time tasks. RCU is global in scope.

4. SRCU and QRCU: read-side critical sections need not guarantee forward progress unless some other task is waiting for the corresponding grace period to complete, in which case these read-side critical sections should complete in no more than a few seconds (and preferably much more quickly).8 SRCU's and QRCU's scope is defined by the use of the corresponding srcu_struct or qrcu_struct, respectively.

8 Thanks to James Bottomley for urging me to this formulation, as opposed to simply saying that there are no forward-progress guarantees.

In other words, SRCU and QRCU compensate for their extremely weak forward-progress guarantees by permitting the developer to restrict their scope.

8.3.4.2 RCU has Publish-Subscribe and Version-Maintenance APIs

Fortunately, the RCU publish-subscribe and version-maintenance primitives shown in the following table apply to all of the variants of RCU discussed above. This commonality can in some cases allow more code to be shared, which certainly reduces the API proliferation that would otherwise occur. The original purpose of the RCU publish-subscribe APIs was to bury memory barriers into these APIs, so that Linux kernel programmers could use RCU without needing to become expert on the memory-ordering models of each of the 20+ CPU families that Linux supports [Spr01].

Table 8.6: RCU Publish-Subscribe and Version Maintenance APIs (each entry gives the primitive, the kernel version in which it became available, and its overhead)

List traversal
  list_for_each_entry_rcu() (2.5.59): Simple instructions (memory barrier on Alpha)
List update
  list_add_rcu() (2.5.44): Memory barrier
  list_add_tail_rcu() (2.5.44): Memory barrier
  list_del_rcu() (2.5.44): Simple instructions
  list_replace_rcu() (2.6.9): Memory barrier
  list_splice_init_rcu() (2.6.21): Grace-period latency
Hlist traversal
  hlist_for_each_entry_rcu() (2.6.8): Simple instructions (memory barrier on Alpha)
Hlist update
  hlist_add_after_rcu() (2.6.14): Memory barrier
  hlist_add_before_rcu() (2.6.14): Memory barrier
  hlist_add_head_rcu() (2.5.64): Memory barrier
  hlist_del_rcu() (2.5.64): Simple instructions
  hlist_replace_rcu() (2.6.15): Memory barrier
Pointer traversal
  rcu_dereference() (2.6.9): Simple instructions (memory barrier on Alpha)
Pointer update
  rcu_assign_pointer() (2.6.10): Memory barrier

The first pair of categories operate on Linux struct list_head lists, which are circular, doubly-linked lists. The list_for_each_entry_rcu() primitive traverses an RCU-protected list in a type-safe manner, while also enforcing memory ordering for situations where a new list element is inserted into the list concurrently with traversal. On non-Alpha platforms, this primitive incurs little or no performance penalty compared to list_for_each_entry().
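For instance, a reader and an updater for an RCU-protected list might be sketched as follows. The struct myelem type, the mylist list, and the function names are hypothetical; the reader combines rcu_read_lock() with list_for_each_entry_rcu(), while the updater (which must supply its own mutual exclusion) uses list_add_rcu() and list_del_rcu(), waiting for a grace period before freeing a removed element.

  struct myelem {
    struct list_head list;
    int key;
    int data;
  };

  LIST_HEAD(mylist);                     /* hypothetical RCU-protected list */

  /* Reader: copy out the data associated with key, if present. */
  int my_lookup(int key, int *data)
  {
    struct myelem *p;
    int found = 0;

    rcu_read_lock();
    list_for_each_entry_rcu(p, &mylist, list) {
      if (p->key == key) {
        *data = p->data;                 /* copy out while still protected */
        found = 1;
        break;
      }
    }
    rcu_read_unlock();
    return found;
  }

  /* Updaters serialize with each other, for example via a lock (not shown). */
  void my_add(struct myelem *p)
  {
    list_add_rcu(&p->list, &mylist);     /* publish the new element */
  }

  void my_del(struct myelem *p)
  {
    list_del_rcu(&p->list);              /* unpublish the element */
    synchronize_rcu();                   /* wait for pre-existing readers */
    kfree(p);
  }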
The list_add_rcu(), list_add_tail_rcu(), and list_replace_rcu() primitives are analogous to their non-RCU counterparts, but incur the overhead of an additional memory barrier on weakly-ordered machines. The list_del_rcu() primitive is also analogous to its non-RCU counterpart, but oddly enough is very slightly faster due to the fact that it poisons only the prev pointer rather than both the prev and next pointers as list_del() must do. Finally, the list_splice_init_rcu() primitive is similar to its non-RCU counterpart, but incurs a full grace-period latency. The purpose of this grace period is to allow RCU readers to finish their traversal of the source list before completely disconnecting it from the list header; failure to do this could prevent such readers from ever terminating their traversal.

Quick Quiz 8.46: Why doesn't list_del_rcu() poison both the next and prev pointers?

The second pair of categories operate on Linux's struct hlist_head, which is a linear linked list. One advantage of struct hlist_head over struct list_head is that the former requires only a single-pointer list header, which can save significant memory in large hash tables. The struct hlist_head primitives in the table relate to their non-RCU counterparts in much the same way as do the struct list_head primitives.

The final pair of categories operate directly on pointers, and are useful for creating RCU-protected non-list data structures, such as RCU-protected arrays and trees. The rcu_assign_pointer() primitive ensures that any prior initialization remains ordered before the assignment to the pointer on weakly ordered machines. Similarly, the rcu_dereference() primitive ensures that subsequent code dereferencing the pointer will see the effects of initialization code prior to the corresponding rcu_assign_pointer() on Alpha CPUs. On non-Alpha CPUs, rcu_dereference() documents which pointer dereferences are protected by RCU.

Quick Quiz 8.47: Normally, any pointer subject to rcu_dereference() must always be updated using rcu_assign_pointer(). What is an exception to this rule?

Quick Quiz 8.48: Are there any downsides to the fact that these traversal and update primitives can be used with any of the RCU API family members?

8.3.4.3 Where Can RCU's APIs Be Used?

Figure 8.35 shows which APIs may be used in which in-kernel environments.

(Figure 8.35: RCU API Usage Constraints. The figure maps the NMI, IRQ, and process-context environments against rcu_read_lock(), rcu_read_unlock(), rcu_dereference(), RCU list traversal, RCU list mutation, rcu_assign_pointer(), call_rcu(), and synchronize_rcu().)

The RCU read-side primitives may be used in any environment, including NMI; the RCU mutation and asynchronous grace-period primitives may be used in any environment other than NMI; and, finally, the RCU synchronous grace-period primitives may be used only in process context. The RCU list-traversal primitives include list_for_each_entry_rcu(), hlist_for_each_entry_rcu(), etc. Similarly, the RCU list-mutation primitives include list_add_rcu(), hlist_del_rcu(), etc.
Note that primitives from other families of RCU may be substituted, for example, srcu_read_lock() may be used in any context in which rcu_read_lock() may be used.

8.3.4.4 So, What is RCU Really?

At its core, RCU is nothing more nor less than an API that supports publication and subscription for insertions, waiting for all RCU readers to complete, and maintenance of multiple versions. That said, it is possible to build higher-level constructs on top of RCU, including the reader-writer-locking, reference-counting, and existence-guarantee constructs listed in the earlier sections. Furthermore, I have no doubt that the Linux community will continue to find interesting new uses for RCU, just as they do for any of a number of synchronization primitives throughout the kernel.

Of course, a more-complete view of RCU would also include all of the things you can do with these APIs.

However, for many people, a complete view of RCU must include sample RCU implementations. The next section therefore presents a series of "toy" RCU implementations of increasing complexity and capability.

8.3.5 "Toy" RCU Implementations

The toy RCU implementations in this section are designed not for high performance, practicality, or any kind of production use,9 but rather for clarity. Nevertheless, you will need a thorough understanding of Chapters 1, 2, 3, 5, and 8 for even these toy RCU implementations to be easily understandable.

9 However, production-quality user-level RCU implementations are available [Des09].

This section provides a series of RCU implementations in order of increasing sophistication, from the viewpoint of solving the existence-guarantee problem. Section 8.3.5.1 presents a rudimentary RCU implementation based on simple locking, while Sections 8.3.5.3 through 8.3.5.9 present a series of simple RCU implementations based on locking, reference counters, and free-running counters. Finally, Section 8.3.5.10 provides a summary and a list of desirable RCU properties.

8.3.5.1 Lock-Based RCU

Perhaps the simplest RCU implementation leverages locking, as shown in Figure 8.36 (rcu_lock.h and rcu_lock.c). In this implementation, rcu_read_lock() acquires a global spinlock, rcu_read_unlock() releases it, and synchronize_rcu() acquires it then immediately releases it.

Because synchronize_rcu() does not return until it has acquired (and released) the lock, it cannot return until all prior RCU read-side critical sections have completed, thus faithfully implementing RCU semantics. Of course, only one RCU reader may be in its read-side critical section at a time, which almost entirely defeats the purpose of RCU. In addition, the lock operations in rcu_read_lock() and rcu_read_unlock() are extremely heavyweight, with read-side overhead ranging from about 100 nanoseconds on a single Power5 CPU up to more than 17 microseconds on a 64-CPU system. Worse yet, these same lock operations permit rcu_read_lock() to participate in deadlock cycles.
  1 static void rcu_read_lock(void)
  2 {
  3   spin_lock(&rcu_gp_lock);
  4 }
  5
  6 static void rcu_read_unlock(void)
  7 {
  8   spin_unlock(&rcu_gp_lock);
  9 }
 10
 11 void synchronize_rcu(void)
 12 {
 13   spin_lock(&rcu_gp_lock);
 14   spin_unlock(&rcu_gp_lock);
 15 }

Figure 8.36: Lock-Based RCU Implementation

Furthermore, in the absence of recursive locks, RCU read-side critical sections cannot be nested, and, finally, although concurrent RCU updates could in principle be satisfied by a common grace period, this implementation serializes grace periods, preventing grace-period sharing.

Quick Quiz 8.49: Why wouldn't any deadlock in the RCU implementation in Figure 8.36 also be a deadlock in any other RCU implementation?

Quick Quiz 8.50: Why not simply use reader-writer locks in the RCU implementation in Figure 8.36 in order to allow RCU readers to proceed in parallel?

It is hard to imagine this implementation being useful in a production setting, though it does have the virtue of being implementable in almost any user-level application. Furthermore, similar implementations having one lock per CPU or using reader-writer locks have been used in production in the 2.4 Linux kernel. A modified version of this one-lock-per-CPU approach, but instead using one lock per thread, is described in the next section.

8.3.5.2 Per-Thread Lock-Based RCU

Figure 8.37 (rcu_lock_percpu.h and rcu_lock_percpu.c) shows an implementation based on one lock per thread. The rcu_read_lock() and rcu_read_unlock() functions acquire and release, respectively, the current thread's lock. The synchronize_rcu() function acquires and releases each thread's lock in turn. Therefore, all RCU read-side critical sections running when synchronize_rcu() starts must have completed before synchronize_rcu() can return.

This implementation does have the virtue of permitting concurrent RCU readers, and does avoid the deadlock condition that can arise with a single global lock. Furthermore, the read-side overhead, though high at roughly 140 nanoseconds, remains at about 140 nanoseconds regardless of the number of CPUs. However, the update-side overhead ranges from about 600 nanoseconds on a single Power5 CPU up to more than 100 microseconds on 64 CPUs.

Quick Quiz 8.51: Wouldn't it be cleaner to acquire all the locks, and then release them all in the loop from lines 15-18 of Figure 8.37? After all, with this change, there would be a point in time when there were no readers, simplifying things greatly.

Quick Quiz 8.52: Is the implementation shown in Figure 8.37 free from deadlocks? Why or why not?

Quick Quiz 8.53: Isn't one advantage of the RCU algorithm shown in Figure 8.37 that it uses only primitives that are widely available, for example, in POSIX pthreads?

This approach could be useful in some situations, given that a similar approach was used in the Linux 2.4 kernel [MM00].

The counter-based RCU implementation described next overcomes some of the shortcomings of the lock-based implementation.

8.3.5.3 Simple Counter-Based RCU

A slightly more sophisticated RCU implementation is shown in Figure 8.38 (rcu_rcg.h and rcu_rcg.c). This implementation makes use of a global reference counter rcu_refcnt defined on line 1. The rcu_read_lock() primitive atomically increments this counter, then executes a memory barrier to ensure that the RCU read-side critical section is ordered after the atomic increment. Similarly, rcu_read_unlock() executes a memory barrier to confine the RCU read-side critical section, then atomically decrements the counter.
The synchronize_rcu() primitive spins waiting for the reference counter to reach zero, surrounded by memory barriers. The poll() on line 19 merely provides pure delay, and from a pure RCU-semantics point of view could be omitted. Again, once synchronize_rcu() returns, all prior RCU read-side critical sections are guaranteed to have completed.

  1 static void rcu_read_lock(void)
  2 {
  3   spin_lock(&__get_thread_var(rcu_gp_lock));
  4 }
  5
  6 static void rcu_read_unlock(void)
  7 {
  8   spin_unlock(&__get_thread_var(rcu_gp_lock));
  9 }
 10
 11 void synchronize_rcu(void)
 12 {
 13   int t;
 14
 15   for_each_running_thread(t) {
 16     spin_lock(&per_thread(rcu_gp_lock, t));
 17     spin_unlock(&per_thread(rcu_gp_lock, t));
 18   }
 19 }

Figure 8.37: Per-Thread Lock-Based RCU Implementation

  1 atomic_t rcu_refcnt;
  2
  3 static void rcu_read_lock(void)
  4 {
  5   atomic_inc(&rcu_refcnt);
  6   smp_mb();
  7 }
  8
  9 static void rcu_read_unlock(void)
 10 {
 11   smp_mb();
 12   atomic_dec(&rcu_refcnt);
 13 }
 14
 15 void synchronize_rcu(void)
 16 {
 17   smp_mb();
 18   while (atomic_read(&rcu_refcnt) != 0) {
 19     poll(NULL, 0, 10);
 20   }
 21   smp_mb();
 22 }

Figure 8.38: RCU Implementation Using Single Global Reference Counter

  1 DEFINE_SPINLOCK(rcu_gp_lock);
  2 atomic_t rcu_refcnt[2];
  3 atomic_t rcu_idx;
  4 DEFINE_PER_THREAD(int, rcu_nesting);
  5 DEFINE_PER_THREAD(int, rcu_read_idx);

Figure 8.39: RCU Global Reference-Count Pair Data

In happy contrast to the lock-based implementation shown in Section 8.3.5.1, this implementation allows parallel execution of RCU read-side critical sections. In happy contrast to the per-thread lock-based implementation shown in Section 8.3.5.2, it also allows them to be nested. In addition, the rcu_read_lock() primitive cannot possibly participate in deadlock cycles, as it never spins nor blocks.

Quick Quiz 8.54: But what if you hold a lock across a call to synchronize_rcu(), and then acquire that same lock within an RCU read-side critical section?

However, this implementation still has some serious shortcomings. First, the atomic operations in rcu_read_lock() and rcu_read_unlock() are still quite heavyweight, with read-side overhead ranging from about 100 nanoseconds on a single Power5 CPU up to almost 40 microseconds on a 64-CPU system. This means that the RCU read-side critical sections have to be extremely long in order to get any real read-side parallelism. On the other hand, in the absence of readers, grace periods elapse in about 40 nanoseconds, many orders of magnitude faster than production-quality implementations in the Linux kernel.

Quick Quiz 8.55: How can the grace period possibly elapse in 40 nanoseconds when synchronize_rcu() contains a 10-millisecond delay?

Second, if there are many concurrent rcu_read_lock() and rcu_read_unlock() operations, there will be extreme memory contention on rcu_refcnt, resulting in expensive cache misses. Both of these first two shortcomings largely defeat a major purpose of RCU, namely to provide low-overhead read-side synchronization primitives.

Finally, a large number of RCU readers with long read-side critical sections could prevent synchronize_rcu() from ever completing, as the global counter might never reach zero. This could result in starvation of RCU updates, which is of course unacceptable in production settings.

Quick Quiz 8.56: Why not simply make rcu_read_lock() wait when a concurrent synchronize_rcu() has been waiting too long in the RCU implementation in Figure 8.38? Wouldn't that prevent synchronize_rcu() from starving?
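To make the use of such toy implementations concrete, the following is a minimal sketch of the sort of reader and updater that might exercise them. The gptr pointer, struct foo, and the function names are hypothetical; the sketch assumes whichever rcu_read_lock(), rcu_read_unlock(), and synchronize_rcu() definitions are under test, and notes where production code would instead use rcu_dereference() and rcu_assign_pointer() for their memory-ordering guarantees.

  struct foo { int a; };
  struct foo *gptr = NULL;            /* hypothetical shared, RCU-protected pointer */

  /* Reader: may run concurrently with updaters. */
  int reader_once(void)
  {
    struct foo *p;
    int a = -1;

    rcu_read_lock();
    p = gptr;                         /* rcu_dereference() in production code */
    if (p != NULL)
      a = p->a;
    rcu_read_unlock();
    return a;
  }

  /* Updater: assumes updaters are serialized, e.g., by a lock not shown here. */
  void updater_once(struct foo *newp)
  {
    struct foo *oldp = gptr;

    smp_mb();                         /* order initialization of *newp before publication */
    gptr = newp;                      /* rcu_assign_pointer() in production code */
    synchronize_rcu();                /* wait for pre-existing readers to finish */
    free(oldp);
  }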
Therefore, it is still hard to imagine this implementation being useful in a production setting, though it has a bit more potential than the lock-based mechanism, for example, as an RCU implementation suitable for a high-stress debugging environment. The next section describes a variation on the reference-counting scheme that is more favorable to writers.

8.3.5.4 Starvation-Free Counter-Based RCU

Figure 8.40 (rcu_rcgp.h) shows the read-side primitives of an RCU implementation that uses a pair of reference counters (rcu_refcnt[]), along with a global index that selects one counter out of the pair (rcu_idx), a per-thread nesting counter rcu_nesting, a per-thread snapshot of the global index (rcu_read_idx), and a global lock (rcu_gp_lock), which are themselves shown in Figure 8.39.

  1 static void rcu_read_lock(void)
  2 {
  3   int i;
  4   int n;
  5
  6   n = __get_thread_var(rcu_nesting);
  7   if (n == 0) {
  8     i = atomic_read(&rcu_idx);
  9     __get_thread_var(rcu_read_idx) = i;
 10     atomic_inc(&rcu_refcnt[i]);
 11   }
 12   __get_thread_var(rcu_nesting) = n + 1;
 13   smp_mb();
 14 }
 15
 16 static void rcu_read_unlock(void)
 17 {
 18   int i;
 19   int n;
 20
 21   smp_mb();
 22   n = __get_thread_var(rcu_nesting);
 23   if (n == 1) {
 24     i = __get_thread_var(rcu_read_idx);
 25     atomic_dec(&rcu_refcnt[i]);
 26   }
 27   __get_thread_var(rcu_nesting) = n - 1;
 28 }

Figure 8.40: RCU Read-Side Using Global Reference-Count Pair

Design  It is the two-element rcu_refcnt[] array that provides the freedom from starvation. The key point is that synchronize_rcu() is only required to wait for pre-existing readers. If a new reader starts after a given instance of synchronize_rcu() has already begun execution, then that instance of synchronize_rcu() need not wait on that new reader. At any given time, when a given reader enters its RCU read-side critical section via rcu_read_lock(), it increments the element of the rcu_refcnt[] array indicated by the rcu_idx variable. When that same reader exits its RCU read-side critical section via rcu_read_unlock(), it decrements whichever element it incremented, ignoring any possible subsequent changes to the rcu_idx value.

This arrangement means that synchronize_rcu() can avoid starvation by complementing the value of rcu_idx, as in rcu_idx = !rcu_idx. Suppose that the old value of rcu_idx was zero, so that the new value is one. New readers that arrive after the complement operation will increment rcu_refcnt[1], while the old readers that previously incremented rcu_refcnt[0] will decrement rcu_refcnt[0] when they exit their RCU read-side critical sections. This means that the value of rcu_refcnt[0] will no longer be incremented, and thus will be monotonically decreasing.10 This means that all that synchronize_rcu() need do is wait for the value of rcu_refcnt[0] to reach zero.

10 There is a race condition that this "monotonically decreasing" statement ignores. This race condition will be dealt with by the code for synchronize_rcu(). In the meantime, I suggest suspending disbelief.

With this background, we are ready to look at the implementation of the actual primitives.

  1 void synchronize_rcu(void)
  2 {
  3   int i;
  4
  5   smp_mb();
  6   spin_lock(&rcu_gp_lock);
  7   i = atomic_read(&rcu_idx);
  8   atomic_set(&rcu_idx, !i);
  9   smp_mb();
 10   while (atomic_read(&rcu_refcnt[i]) != 0) {
 11     poll(NULL, 0, 10);
 12   }
 13   smp_mb();
 14   atomic_set(&rcu_idx, i);
 15   smp_mb();
 16   while (atomic_read(&rcu_refcnt[!i]) != 0) {
 17     poll(NULL, 0, 10);
 18   }
 19   spin_unlock(&rcu_gp_lock);
 20   smp_mb();
 21 }

Figure 8.41: RCU Update Using Global Reference-Count Pair
Implementation  The rcu_read_lock() primitive atomically increments the member of the rcu_refcnt[] pair indexed by rcu_idx, and keeps a snapshot of this index in the per-thread variable rcu_read_idx. The rcu_read_unlock() primitive then atomically decrements whichever counter of the pair that the corresponding rcu_read_lock() incremented. However, because only one value of rcu_idx is remembered per thread, additional measures must be taken to permit nesting. These additional measures use the per-thread rcu_nesting variable to track nesting.

To make all this work, line 6 of rcu_read_lock() in Figure 8.40 picks up the current thread's instance of rcu_nesting, and if line 7 finds that this is the outermost rcu_read_lock(), then lines 8-10 pick up the current value of rcu_idx, save it in this thread's instance of rcu_read_idx, and atomically increment the selected element of rcu_refcnt. Regardless of the value of rcu_nesting, line 12 increments it. Line 13 executes a memory barrier to ensure that the RCU read-side critical section does not bleed out before the rcu_read_lock() code.

Similarly, the rcu_read_unlock() function executes a memory barrier at line 21 to ensure that the RCU read-side critical section does not bleed out after the rcu_read_unlock() code. Line 22 picks up this thread's instance of rcu_nesting, and if line 23 finds that this is the outermost rcu_read_unlock(), then lines 24 and 25 pick up this thread's instance of rcu_read_idx (saved by the outermost rcu_read_lock()) and atomically decrement the selected element of rcu_refcnt. Regardless of the nesting level, line 27 decrements this thread's instance of rcu_nesting.

Figure 8.41 (rcu_rcpg.c) shows the corresponding synchronize_rcu() implementation. Lines 6 and 19 acquire and release rcu_gp_lock in order to prevent more than one concurrent instance of synchronize_rcu(). Lines 7-8 pick up the value of rcu_idx and complement it, respectively, so that subsequent instances of rcu_read_lock() will use a different element of rcu_refcnt[] than did preceding instances. Lines 10-12 then wait for the prior element of rcu_refcnt[] to reach zero, with the memory barrier on line 9 ensuring that the check of rcu_refcnt[] is not reordered to precede the complementing of rcu_idx. Lines 13-18 repeat this process, and line 20 ensures that any subsequent reclamation operations are not reordered to precede the checking of rcu_refcnt.

Quick Quiz 8.57: Why the memory barrier on line 5 of synchronize_rcu() in Figure 8.41 given that there is a spin-lock acquisition immediately after?

Quick Quiz 8.58: Why is the counter flipped twice in Figure 8.41? Shouldn't a single flip-and-wait cycle be sufficient?

This implementation avoids the update-starvation issues that could occur in the single-counter implementation shown in Figure 8.38.

Discussion  There are still some serious shortcomings. First, the atomic operations in rcu_read_lock() and rcu_read_unlock() are still quite heavyweight. In fact, they are more complex than those of the single-counter variant shown in Figure 8.38, with the read-side primitives consuming about 150 nanoseconds on a single Power5 CPU and almost 40 microseconds on a 64-CPU system.
The update-side synchronize_rcu() primitive is more costly as well, ranging from about 200 nanoseconds on a single Power5 CPU to more than 40 microseconds on a 64-CPU system. This means that the RCU read-side critical sections have to be extremely long in order to get any real read-side parallelism.

Second, if there are many concurrent rcu_read_lock() and rcu_read_unlock() operations, there will be extreme memory contention on the rcu_refcnt elements, resulting in expensive cache misses. This further extends the RCU read-side critical-section duration required to provide parallel read-side access. These first two shortcomings defeat the purpose of RCU in most situations.

Third, the need to flip rcu_idx twice imposes substantial overhead on updates, especially if there are large numbers of threads.

Finally, despite the fact that concurrent RCU updates could in principle be satisfied by a common grace period, this implementation serializes grace periods, preventing grace-period sharing.

Quick Quiz 8.59: Given that atomic increment and decrement are so expensive, why not just use non-atomic increment on line 10 and a non-atomic decrement on line 25 of Figure 8.40?

Despite these shortcomings, one could imagine this variant of RCU being used on small tightly coupled multiprocessors, perhaps as a memory-conserving implementation that maintains API compatibility with more complex implementations. However, it would not likely scale well beyond a few CPUs.

The next section describes yet another variation on the reference-counting scheme that provides greatly improved read-side performance and scalability.

8.3.5.5 Scalable Counter-Based RCU

Figure 8.43 (rcu_rcpl.h) shows the read-side primitives of an RCU implementation that uses per-thread pairs of reference counters. This implementation is quite similar to that shown in Figure 8.40, the only difference being that rcu_refcnt is now a per-thread array (as shown in Figure 8.42). As with the algorithm in the previous section, use of this two-element array prevents readers from starving updaters.

  1 DEFINE_SPINLOCK(rcu_gp_lock);
  2 DEFINE_PER_THREAD(int [2], rcu_refcnt);
  3 atomic_t rcu_idx;
  4 DEFINE_PER_THREAD(int, rcu_nesting);
  5 DEFINE_PER_THREAD(int, rcu_read_idx);

Figure 8.42: RCU Per-Thread Reference-Count Pair Data

  1 static void rcu_read_lock(void)
  2 {
  3   int i;
  4   int n;
  5
  6   n = __get_thread_var(rcu_nesting);
  7   if (n == 0) {
  8     i = atomic_read(&rcu_idx);
  9     __get_thread_var(rcu_read_idx) = i;
 10     __get_thread_var(rcu_refcnt)[i]++;
 11   }
 12   __get_thread_var(rcu_nesting) = n + 1;
 13   smp_mb();
 14 }
 15
 16 static void rcu_read_unlock(void)
 17 {
 18   int i;
 19   int n;
 20
 21   smp_mb();
 22   n = __get_thread_var(rcu_nesting);
 23   if (n == 1) {
 24     i = __get_thread_var(rcu_read_idx);
 25     __get_thread_var(rcu_refcnt)[i]--;
 26   }
 27   __get_thread_var(rcu_nesting) = n - 1;
 28 }

Figure 8.43: RCU Read-Side Using Per-Thread Reference-Count Pair

  1 static void flip_counter_and_wait(int i)
  2 {
  3   int t;
  4
  5   atomic_set(&rcu_idx, !i);
  6   smp_mb();
  7   for_each_thread(t) {
  8     while (per_thread(rcu_refcnt, t)[i] != 0) {
  9       poll(NULL, 0, 10);
 10     }
 11   }
 12   smp_mb();
 13 }
 14
 15 void synchronize_rcu(void)
 16 {
 17   int i;
 18
 19   smp_mb();
 20   spin_lock(&rcu_gp_lock);
 21   i = atomic_read(&rcu_idx);
 22   flip_counter_and_wait(i);
 23   flip_counter_and_wait(!i);
 24   spin_unlock(&rcu_gp_lock);
 25   smp_mb();
 26 }

Figure 8.44: RCU Update Using Per-Thread Reference-Count Pair
One benefit of the per-thread rcu_refcnt[] array is that the rcu_read_lock() and rcu_read_unlock() primitives no longer perform atomic operations.

Quick Quiz 8.60: Come off it! We can see the atomic_read() primitive in rcu_read_lock()!!! So why are you trying to pretend that rcu_read_lock() contains no atomic operations???

Figure 8.44 (rcu_rcpl.c) shows the implementation of synchronize_rcu(), along with a helper function named flip_counter_and_wait(). The synchronize_rcu() function resembles that shown in Figure 8.41, except that the repeated counter flip is replaced by a pair of calls on lines 22 and 23 to the new helper function.

The new flip_counter_and_wait() function updates the rcu_idx variable on line 5, executes a memory barrier on line 6, then lines 7-11 spin on each thread's prior rcu_refcnt element, waiting for it to go to zero. Once all such elements have gone to zero, it executes another memory barrier on line 12 and returns.

This RCU implementation imposes important new requirements on its software environment, namely, (1) that it be possible to declare per-thread variables, (2) that these per-thread variables be accessible from other threads, and (3) that it is possible to enumerate all threads. These requirements can be met in almost all software environments, but often result in fixed upper bounds on the number of threads. More-complex implementations might avoid such bounds, for example, by using expandable hash tables. Such implementations might dynamically track threads, for example, by adding them on their first call to rcu_read_lock(). (A minimal sketch of one way to meet these requirements appears at the end of this section.)

Quick Quiz 8.61: Great, if we have N threads, we can have 2N ten-millisecond waits (one set per flip_counter_and_wait() invocation), and even that assumes that we wait only once for each thread. Don't we need the grace period to complete much more quickly?

This implementation still has several shortcomings. First, the need to flip rcu_idx twice imposes substantial overhead on updates, especially if there are large numbers of threads.

Second, synchronize_rcu() must now examine a number of variables that increases linearly with the number of threads, imposing substantial overhead on applications with large numbers of threads.

Third, as before, although concurrent RCU updates could in principle be satisfied by a common grace period, this implementation serializes grace periods, preventing grace-period sharing.

Finally, as noted in the text, the need for per-thread variables and for enumerating threads may be problematic in some software environments.

  1 DEFINE_SPINLOCK(rcu_gp_lock);
  2 DEFINE_PER_THREAD(int [2], rcu_refcnt);
  3 long rcu_idx;
  4 DEFINE_PER_THREAD(int, rcu_nesting);
  5 DEFINE_PER_THREAD(int, rcu_read_idx);

Figure 8.45: RCU Read-Side Using Per-Thread Reference-Count Pair and Shared Update Data

That said, the read-side primitives scale very nicely, requiring about 115 nanoseconds regardless of whether running on a single-CPU or a 64-CPU Power5 system. As noted above, the synchronize_rcu() primitive does not scale, ranging in overhead from almost a microsecond on a single Power5 CPU up to almost 200 microseconds on a 64-CPU system. This implementation could conceivably form the basis for a production-quality user-level RCU implementation.

The next section describes an algorithm permitting more efficient concurrent RCU updates.
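As promised above, here is a minimal sketch, assuming a POSIX-threads environment, of one way to provide the per-thread variables and thread enumeration that this style of implementation requires. All of the names (thread_slot, NR_THREADS, register_thread(), and the two macros) are hypothetical, error handling is omitted, and the fixed-size array illustrates the fixed upper bound on the number of threads mentioned above.

  #include <pthread.h>

  #define NR_THREADS 128                    /* hypothetical fixed upper bound */

  struct thread_slot {
    int in_use;                             /* slot claimed by a running thread? */
    int rcu_refcnt[2];                      /* the per-thread data itself */
  };
  static struct thread_slot thread_slots[NR_THREADS];
  static __thread int my_slot = -1;         /* this thread's index into the array */
  static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Called once per thread, for example from its first rcu_read_lock(). */
  void register_thread(void)
  {
    int i;

    pthread_mutex_lock(&slot_lock);
    for (i = 0; i < NR_THREADS; i++) {
      if (!thread_slots[i].in_use) {
        thread_slots[i].in_use = 1;
        my_slot = i;
        break;
      }
    }
    pthread_mutex_unlock(&slot_lock);
  }

  /* Analog of __get_thread_var(rcu_refcnt): this thread's counter pair. */
  #define get_my_refcnt()  (thread_slots[my_slot].rcu_refcnt)

  /* Analog of for_each_thread(t): enumerate all registered slots. */
  #define for_each_registered_slot(i) \
    for ((i) = 0; (i) < NR_THREADS; (i)++) \
      if (thread_slots[(i)].in_use)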
8.3.5.6 Scalable Counter-Based RCU With Shared Grace Periods

Figure 8.46 (rcu_rcpls.h) shows the read-side primitives for an RCU implementation using per-thread reference count pairs, as before, but permitting updates to share grace periods. The main difference from the earlier implementation shown in Figure 8.43 is that rcu_idx is now a long that counts freely, so that line 8 of Figure 8.46 must mask off the low-order bit. We also switched from using atomic_read() and atomic_set() to using ACCESS_ONCE(). The data is also quite similar, as shown in Figure 8.45, with rcu_idx now being a long instead of an atomic_t.

  1 static void rcu_read_lock(void)
  2 {
  3   int i;
  4   int n;
  5
  6   n = __get_thread_var(rcu_nesting);
  7   if (n == 0) {
  8     i = ACCESS_ONCE(rcu_idx) & 0x1;
  9     __get_thread_var(rcu_read_idx) = i;
 10     __get_thread_var(rcu_refcnt)[i]++;
 11   }
 12   __get_thread_var(rcu_nesting) = n + 1;
 13   smp_mb();
 14 }
 15
 16 static void rcu_read_unlock(void)
 17 {
 18   int i;
 19   int n;
 20
 21   smp_mb();
 22   n = __get_thread_var(rcu_nesting);
 23   if (n == 1) {
 24     i = __get_thread_var(rcu_read_idx);
 25     __get_thread_var(rcu_refcnt)[i]--;
 26   }
 27   __get_thread_var(rcu_nesting) = n - 1;
 28 }

Figure 8.46: RCU Read-Side Using Per-Thread Reference-Count Pair and Shared Update

Figure 8.47 (rcu_rcpls.c) shows the implementation of synchronize_rcu() and its helper function flip_counter_and_wait(). These are similar to those in Figure 8.44. The differences in flip_counter_and_wait() include:

1. Line 6 uses ACCESS_ONCE() instead of atomic_set(), and increments rather than complementing.

2. A new line 7 masks the counter down to its bottom bit.

The changes to synchronize_rcu() are more pervasive:

1. There is a new oldctr local variable that captures the pre-lock-acquisition value of rcu_idx on line 23.

2. Line 26 uses ACCESS_ONCE() instead of atomic_read().

3. Lines 27-30 check to see if at least three counter flips were performed by other threads while the lock was being acquired, and, if so, releases the lock, does a memory barrier, and returns. In this case, there were two full waits for the counters to go to zero, so those other threads already did all the required work.

4. At lines 33-34, flip_counter_and_wait() is only invoked a second time if there were fewer than two counter flips while the lock was being acquired. On the other hand, if there were two counter flips, some other thread did one full wait for all the counters to go to zero, so only one more is required.

With this approach, if an arbitrarily large number of threads invoke synchronize_rcu() concurrently, with one CPU for each thread, there will be a total of only three waits for counters to go to zero.

Despite the improvements, this implementation of RCU still has a few shortcomings. First, as before, the need to flip rcu_idx twice imposes substantial overhead on updates, especially if there are large numbers of threads.

Second, each updater still acquires rcu_gp_lock, even if there is no work to be done. This can result in a severe scalability limitation if there are large numbers of concurrent updates. Section D.4 shows one way to avoid this in a production-quality real-time implementation of RCU for the Linux kernel.

Third, this implementation requires per-thread variables and the ability to enumerate threads, which again can be problematic in some software environments.

Finally, on 32-bit machines, a given update thread might be preempted long enough for the rcu_idx counter to overflow.
This could cause such a thread to force an unnecessary pair of counter flips. However, even if each grace period took only one microsecond, the offending thread would need to be preempted for more than an hour, in which case an extra pair of counter flips is likely the least of your worries.

  1 static void flip_counter_and_wait(int ctr)
  2 {
  3   int i;
  4   int t;
  5
  6   ACCESS_ONCE(rcu_idx) = ctr + 1;
  7   i = ctr & 0x1;
  8   smp_mb();
  9   for_each_thread(t) {
 10     while (per_thread(rcu_refcnt, t)[i] != 0) {
 11       poll(NULL, 0, 10);
 12     }
 13   }
 14   smp_mb();
 15 }
 16
 17 void synchronize_rcu(void)
 18 {
 19   int ctr;
 20   int oldctr;
 21
 22   smp_mb();
 23   oldctr = ACCESS_ONCE(rcu_idx);
 24   smp_mb();
 25   spin_lock(&rcu_gp_lock);
 26   ctr = ACCESS_ONCE(rcu_idx);
 27   if (ctr - oldctr >= 3) {
 28     spin_unlock(&rcu_gp_lock);
 29     smp_mb();
 30     return;
 31   }
 32   flip_counter_and_wait(ctr);
 33   if (ctr - oldctr < 2)
 34     flip_counter_and_wait(ctr + 1);
 35   spin_unlock(&rcu_gp_lock);
 36   smp_mb();
 37 }

Figure 8.47: RCU Shared Update Using Per-Thread Reference-Count Pair

  1 DEFINE_SPINLOCK(rcu_gp_lock);
  2 long rcu_gp_ctr = 0;
  3 DEFINE_PER_THREAD(long, rcu_reader_gp);
  4 DEFINE_PER_THREAD(long, rcu_reader_gp_snap);

Figure 8.48: Data for Free-Running Counter Using RCU

As with the implementation described in Section 8.3.5.3, the read-side primitives scale extremely well, incurring roughly 115 nanoseconds of overhead regardless of the number of CPUs. The synchronize_rcu() primitive is still expensive, ranging from about one microsecond up to about 16 microseconds. This is nevertheless much cheaper than the roughly 200 microseconds incurred by the implementation in Section 8.3.5.5. So, despite its shortcomings, one could imagine this RCU implementation being used in production in real-life applications.

Quick Quiz 8.62: All of these toy RCU implementations have either atomic operations in rcu_read_lock() and rcu_read_unlock(), or synchronize_rcu() overhead that increases linearly with the number of threads. Under what circumstances could an RCU implementation enjoy light-weight implementations for all three of these primitives, all having deterministic (O(1)) overheads and latencies?

Referring back to Figure 8.46, we see that there is one global-variable access and no fewer than four accesses to thread-local variables. Given the relatively high cost of thread-local accesses on systems implementing POSIX threads, it is tempting to collapse the three thread-local variables into a single structure, permitting rcu_read_lock() and rcu_read_unlock() to access their thread-local data with a single thread-local-storage access. However, an even better approach would be to reduce the number of thread-local accesses to one, as is done in the next section.

8.3.5.7 RCU Based on Free-Running Counter

Figure 8.49 (rcu.h and rcu.c) shows an RCU implementation based on a single global free-running counter that takes on only even-numbered values, with data shown in Figure 8.48. The resulting rcu_read_lock() implementation is extremely straightforward. Lines 3 and 4 simply add one to the global free-running rcu_gp_ctr variable and store the resulting odd-numbered value into the rcu_reader_gp per-thread variable. Line 5 executes a memory barrier to prevent the content of the subsequent RCU read-side critical section from "leaking out".

The rcu_read_unlock() implementation is similar. Line 10 executes a memory barrier, again to prevent the prior RCU read-side critical section from "leaking out".
Lines 11 and 12 then copy the rcu_gp_ctr global variable to the rcu_reader_gp per-thread variable, leaving this per-thread variable with an even-numbered value so that a concurrent instance of   synchronize_rcu()  will know to ignore it. Quick Quiz 8.63:  If any even value is sufficient to tell  synchronize_rcu()  to ignore a given task, why don’t lines 10 and 11 of Figure  8.49  simply assign zero to rcu_reader_gp ? Thus, synchronize_rcu() could wait for all of the per-thread rcu_reader_  gp variables to take on even-numbered values. However, it is possible to do much better than that because  synchronize_rcu()  need only wait on  pre-existing  RCU read- side critical sections. Line 19 executes a memory barrier to prevent prior manipulations of RCU-protected data structures from being reordered (by either the CPU or the compiler) to follow the increment on line 21. Line 20 acquires the  rcu_gp_lock 214 1 static void rcu_read_lock(void) 2 { 3 __get_thread_var(rcu_reader_gp) = 4 ACCESS_ONCE(rcu_gp_ctr) + 1; 5 smp_mb(); 6 } 7 8 static void rcu_read_unlock(void) 9 { 10 smp_mb(); 11 __get_thread_var(rcu_reader_gp) = 12 ACCESS_ONCE(rcu_gp_ctr); 13 } 14 15 void synchronize_rcu(void) 16 { 17 int t; 18 19 smp_mb(); 20 spin_lock(&rcu_gp_lock); 21 ACCESS_ONCE(rcu_gp_ctr) += 2; 22 smp_mb(); 23 for_each_thread(t) { 24 while ((per_thread(rcu_reader_gp, t) & 0x1) && 25 ((per_thread(rcu_reader_gp, t) - 26 ACCESS_ONCE(rcu_gp_ctr)) < 0)) { 27 poll(NULL, 0, 10); 28 } 29 } 30 spin_unlock(&rcu_gp_lock); 31 smp_mb(); 32 } Figure 8.49: Free-Running Counter Using RCU (and line 30 releases it) in order to prevent multiple  synchronize_rcu()  instances from running concurrently. Line 21 then increments the global  rcu_gp_ctr  variable by two, so that all pre-existing RCU read-side critical sections will have corresponding per-thread  rcu_reader_gp  variables with values less than that of   rcu_gp_ctr , modulo the machine’s word size. Recall also that threads with even-numbered values of   rcu_reader_gp  are not in an RCU read-side critical section, so that lines 23-29 scan the  rcu_reader_gp  values until they all are either even (line 24) or are greater than the global  rcu_gp_ctr  (lines 25-26). Line 27 blocks for a short period of time to wait for a pre-existing RCU read-side critical section, but this can be replaced with a spin-loop if grace-period latency is of the essence. Finally, the memory barrier at line 31 ensures that any subsequent destruction will not be reordered into the preceding loop. Quick Quiz 8.64:  Why are the memory barriers on lines 19 and 31 of Figure  8.49 needed? Aren’t the memory barriers inherent in the locking primitives on lines 20 and 30 sufficient? This approach achieves much better read-side performance, incurring roughly 63 nanoseconds of overhead regardless of the number of Power5 CPUs. Updates incur more overhead, ranging from about 500 nanoseconds on a single Power5 CPU to more than 100  microseconds  on 64 such CPUs. Quick Quiz 8.65:  Couldn’t the update-side batching optimization described in Section  8.3.5.6  be applied to the implementation shown in Figure  8.49 ? This implementation suffers from some serious shortcomings in addition to the high update-side overhead noted earlier. First, it is no longer permissible to nest RCU read- side critical sections, a topic that is taken up in the next section. 
Second, if a reader is preempted at line 3 of Figure  8.49  after fetching from  rcu_gp_ctr  but before storing 215 1 DEFINE_SPINLOCK(rcu_gp_lock); 2 #define RCU_GP_CTR_SHIFT 7 3 #define RCU_GP_CTR_BOTTOM_BIT (1 << RCU_GP_CTR_SHIFT) 4 #define RCU_GP_CTR_NEST_MASK (RCU_GP_CTR_BOTTOM_BIT - 1) 5 long rcu_gp_ctr = 0; 6 DEFINE_PER_THREAD(long, rcu_reader_gp); Figure 8.50: Data for Nestable RCU Using a Free-Running Counter to  rcu_reader_gp , and if the  rcu_gp_ctr  counter then runs through more than half but less than all of its possible values, then  synchronize_rcu()  will ignore the subsequent RCU read-side critical section. Third and finally, this implementation requires that the enclosing software environment be able to enumerate threads and maintain per-thread variables. Quick Quiz 8.66:  Is the possibility of readers being preempted in lines 3-4 of  Figure  8.49  a real problem, in other words, is there a real sequence of events that could lead to failure? If not, why not? If so, what is the sequence of events, and how can the failure be addressed? 8.3.5.8 Nestable RCU Based on Free-Running Counter Figure  8.51  ( rcu_nest.h  and  rcu_nest.c ) show an RCU implementation based on a single global free-running counter, but that permits nesting of RCU read-side critical sections. This nestability is accomplished by reserving the low-order bits of the global  rcu_gp_ctr  to count nesting, using the definitions shown in Figure  8.50.  This is a generalization of the scheme in Section  8.3.5.7,  which can be thought of as having a single low-order bit reserved for counting nesting depth. Two C-preprocessor macros are used to arrange this, RCU_GP_CTR_NEST_MASK and RCU_GP_CTR_BOTTOM_BIT . These are related:  RCU_GP_CTR_NEST_MASK=RCU_GP_CTR_BOTTOM_BIT-1 . The  RCU_GP_CTR_BOTTOM_BIT  macro contains a single bit that is positioned just above the bits reserved for counting nesting, and the  RCU_GP_CTR_NEST_MASK  has all one bits covering the region of   rcu_gp_ctr  used to count nesting. Obviously, these two C-preprocessor macros must reserve enough of the low-order bits of the counter to permit the maximum required nesting of RCU read-side critical sections, and this implementation reserves seven bits, for a maximum RCU read-side critical-section nesting depth of 127, which should be well in excess of that needed by most applications. The resulting  rcu_read_lock()  implementation is still reasonably straightfor- ward. Line 6 places a pointer to this thread’s instance of   rcu_reader_gp  into the local variable  rrgp , minimizing the number of expensive calls to the pthreads thread- local-state API. Line 7 records the current value of   rcu_reader_gp  into another local variable  tmp , and line 8 checks to see if the low-order bits are zero, which would indicate that this is the outermost  rcu_read_lock() . If so, line 9 places the global rcu_gp_ctr  into  tmp  because the current value previously fetched by line 7 is likely to be obsolete. In either case, line 10 increments the nesting depth, which you will recall is stored in the seven low-order bits of the counter. Line 11 stores the updated counter back into this thread’s instance of   rcu_reader_gp , and, finally, line 12 executes a memory barrier to prevent the RCU read-side critical section from bleeding out into the code preceding the call to  rcu_read_lock() . 
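To see how the nesting depth and the grace-period count share the counter, consider the following small standalone sketch, which is not part of rcu_nest.h. It uses the definitions from Figure 8.50 to decompose a sample rcu_reader_gp value; the sample value itself is purely illustrative.

  #include <stdio.h>

  #define RCU_GP_CTR_SHIFT 7
  #define RCU_GP_CTR_BOTTOM_BIT (1 << RCU_GP_CTR_SHIFT)
  #define RCU_GP_CTR_NEST_MASK (RCU_GP_CTR_BOTTOM_BIT - 1)

  int main(void)
  {
    long rrgp = 3 * RCU_GP_CTR_BOTTOM_BIT + 2;  /* illustrative snapshot */

    /* Low-order seven bits hold the nesting depth (2 in this example). */
    printf("nesting depth: %ld\n", rrgp & RCU_GP_CTR_NEST_MASK);
    /* Remaining bits hold the grace-period count (3 in this example). */
    printf("grace periods: %ld\n", rrgp >> RCU_GP_CTR_SHIFT);
    return 0;
  }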
In other words, this implementation of  rcu_read_lock() picks up a copy of the global  rcu_gp_ctr  unless the current invocation of   rcu_read_lock()  is nested 216 1 static void rcu_read_lock(void) 2 { 3 long tmp; 4 long  * rrgp; 5 6 rrgp = &__get_thread_var(rcu_reader_gp); 7 tmp =  * rrgp; 8 if ((tmp & RCU_GP_CTR_NEST_MASK) == 0) 9 tmp = ACCESS_ONCE(rcu_gp_ctr); 10 tmp++; 11  * rrgp = tmp; 12 smp_mb(); 13 } 14 15 static void rcu_read_unlock(void) 16 { 17 long tmp; 18 19 smp_mb(); 20 __get_thread_var(rcu_reader_gp)--; 21 } 22 23 void synchronize_rcu(void) 24 { 25 int t; 26 27 smp_mb(); 28 spin_lock(&rcu_gp_lock); 29 ACCESS_ONCE(rcu_gp_ctr) += 30 RCU_GP_CTR_BOTTOM_BIT; 31 smp_mb(); 32 for_each_thread(t) { 33 while (rcu_gp_ongoing(t) && 34 ((per_thread(rcu_reader_gp, t) - 35 rcu_gp_ctr) < 0)) { 36 poll(NULL, 0, 10); 37 } 38 } 39 spin_unlock(&rcu_gp_lock); 40 smp_mb(); 41 } Figure 8.51: Nestable RCU Using a Free-Running Counter 217 1 DEFINE_SPINLOCK(rcu_gp_lock); 2 long rcu_gp_ctr = 0; 3 DEFINE_PER_THREAD(long, rcu_reader_qs_gp); Figure 8.52: Data for Quiescent-State-Based RCU within an RCU read-side critical section, in which case it instead fetches the contents of  the current thread’s instance of   rcu_reader_gp . Either way, it increments whatever value it fetched in order to record an additional nesting level, and stores the result in the current thread’s instance of   rcu_reader_gp . Interestingly enough, the implementation of   rcu_read_unlock()  is identical to that shown in Section  8.3.5.7.  Line 19 executes a memory barrier in order to prevent the RCU read-side critical section from bleeding out into code following the call to  rcu_read_unlock() , and line 20 decrements this thread’s instance of   rcu_  reader_gp , which has the effect of decrementing the nesting count contained in rcu_reader_gp ’s low-order bits. Debugging versions of this primitive would check (before decrementing!) that these low-order bits were non-zero. The implementation of   synchronize_rcu()  is quite similar to that shown in Section  8.3.5.7.  There are two differences. The first is that lines 29 and 30 adds  RCU_  GP_CTR_BOTTOM_BIT  to the global  rcu_gp_ctr  instead of adding the constant “2”, and the second is that the comparison on line 33 has been abstracted out to a separate function, where it checks the bit indicated by  RCU_GP_CTR_BOTTOM_BIT  instead of unconditionally checking the low-order bit. This approach achieves read-side performance almost equal to that shown in Sec- tion  8.3.5.7,  incurring roughly 65 nanoseconds of overhead regardless of the number of  Power5CPUs. Updatesagainincurmoreoverhead, rangingfromabout600nanoseconds on a single Power5 CPU to more than 100  microseconds  on 64 such CPUs. Quick Quiz 8.67:  Why not simply maintain a separate per-thread nesting-level variable, as was done in previous section, rather than having all this complicated bit manipulation? This implementation suffers from the same shortcomings as does that of Sec- tion  8.3.5.7 , except that nesting of RCU read-side critical sections is now permitted. In addition, on 32-bit systems, this approach shortens the time required to overflow the global rcu_gp_ctr variable. The following section shows one way to greatly increase the time required for overflow to occur, while greatly reducing read-side overhead. Quick Quiz 8.68:  Given the algorithm shown in Figure  8.51,  how could you double the time required to overflow the global  rcu_gp_ctr ? 
Quick Quiz 8.69:  Again, given the algorithm shown in Figure  8.51 , is counter overflow fatal? Why or why not? If it is fatal, what can be done to fix it? 8.3.5.9 RCU Based on Quiescent States Figure  8.53  ( rcu_qs.h )  shows the read-side primitives used to construct a user-level implementation of RCU based on quiescent states, with the data shown in Figure  8.52. As can be seen from lines 1-7 in the figure, the  rcu_read_lock()  and  rcu_  read_unlock()  primitives do nothing, and can in fact be expected to be inlined and optimized away, as they are in server builds of the Linux kernel. This is due to the fact that quiescent-state-based RCU implementations  approximate  the extents of RCU read-side critical sections using the aforementioned quiescent states, which contains 218 1 static void rcu_read_lock(void) 2 { 3 } 4 5 static void rcu_read_unlock(void) 6 { 7 } 8 9 rcu_quiescent_state(void) 10 { 11 smp_mb(); 12 __get_thread_var(rcu_reader_qs_gp) = 13 ACCESS_ONCE(rcu_gp_ctr) + 1; 14 smp_mb(); 15 } 16 17 static void rcu_thread_offline(void) 18 { 19 smp_mb(); 20 __get_thread_var(rcu_reader_qs_gp) = 21 ACCESS_ONCE(rcu_gp_ctr); 22 smp_mb(); 23 } 24 25 static void rcu_thread_online(void) 26 { 27 rcu_quiescent_state(); 28 } Figure 8.53: Quiescent-State-Based RCU Read Side calls to  rcu_quiescent_state() , shown from lines 9-15 in the figure. Threads entering extended quiescent states (for example, when blocking) may instead use the thread_offline() and thread_online() APIs to mark the beginning and the end, respectively, of such an extended quiescent state. As such, thread_online() is analogous to  rcu_read_lock()  and  thread_offline()  is analogous to  rcu_  read_unlock() . These two functions are shown on lines 17-28 in the figure. In either case, it is illegal for a quiescent state to appear within an RCU read-side critical section. In  rcu_quiescent_state() , line 11 executes a memory barrier to prevent any code prior to the quiescent state (including possible RCU read-side critical sections) from being reordered into the quiescent state. Lines 12-13 pick up a copy of the global rcu_gp_ctr , using  ACCESS_ONCE()  to ensure that the compiler does not employ any optimizations that would result in rcu_gp_ctr being fetched more than once, and then adds one to the value fetched and stores it into the per-thread  rcu_reader_qs_  gp  variable, so that any concurrent instance of   synchronize_rcu()  will see an odd-numbered value, thus becoming aware that a new RCU read-side critical section has started. Instances of   synchronize_rcu()  that are waiting on older RCU read-side critical sections will thus know to ignore this new one. Finally, line 14 executes a memory barrier, which prevents subsequent code (including a possible RCU read-side critical section) from being re-ordered with the lines 12-13. Quick Quiz 8.70:  Doesn’t the additional memory barrier shown on line 14 of  Figure  8.53 , greatly increase the overhead of   rcu_quiescent_state ? Some applications might use RCU only occasionally, but use it very heavily when they do use it. Such applications might choose to use  rcu_thread_online() when starting to use RCU and  rcu_thread_offline()  when no longer using RCU. 
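For example, such a thread's main loop might announce quiescent states and bracket its blocking with the offline/online primitives roughly as in the following sketch. Only the rcu_*() calls come from Figure 8.53; do_one_unit_of_work(), work_available(), and wait_for_work() are hypothetical placeholders.

  #include <stddef.h>
  #include "rcu_qs.h"             /* primitives from Figure 8.53 */

  extern void do_one_unit_of_work(void);  /* may read RCU-protected data */
  extern int work_available(void);
  extern void wait_for_work(void);        /* may block for a long time */

  static void *worker_loop(void *arg)
  {
    for (;;) {
      rcu_read_lock();            /* generates no code in this implementation */
      do_one_unit_of_work();
      rcu_read_unlock();

      rcu_quiescent_state();      /* no references held at this point */

      if (!work_available()) {
        rcu_thread_offline();     /* begin extended quiescent state */
        wait_for_work();
        rcu_thread_online();      /* end extended quiescent state */
      }
    }
    return NULL;
  }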
The time between a call to rcu_thread_offline() and a subsequent call to rcu_thread_online() is an extended quiescent state, so that RCU will not expect explicit quiescent states to be registered during this time.

The rcu_thread_offline() function simply sets the per-thread rcu_reader_qs_gp variable to the current value of rcu_gp_ctr, which has an even-numbered value. Any concurrent instances of synchronize_rcu() will thus know to ignore this thread.

Quick Quiz 8.71: Why are the two memory barriers on lines 19 and 22 of Figure 8.53 needed?

The rcu_thread_online() function simply invokes rcu_quiescent_state(), thus marking the end of the extended quiescent state.

1 void synchronize_rcu(void)
2 {
3   int t;
4
5   smp_mb();
6   spin_lock(&rcu_gp_lock);
7   rcu_gp_ctr += 2;
8   smp_mb();
9   for_each_thread(t) {
10    while (rcu_gp_ongoing(t) &&
11           ((per_thread(rcu_reader_qs_gp, t) -
12             rcu_gp_ctr) < 0)) {
13      poll(NULL, 0, 10);
14    }
15  }
16  spin_unlock(&rcu_gp_lock);
17  smp_mb();
18 }

Figure 8.54: RCU Update Side Using Quiescent States

Figure 8.54 (rcu_qs.c) shows the implementation of synchronize_rcu(), which is quite similar to that of the preceding sections.

This implementation has blazingly fast read-side primitives, with an rcu_read_lock()-rcu_read_unlock() round trip incurring an overhead of roughly 50 picoseconds. The synchronize_rcu() overhead ranges from about 600 nanoseconds on a single-CPU Power5 system up to more than 100 microseconds on a 64-CPU system.

Quick Quiz 8.72: To be sure, the clock frequencies of Power systems in 2008 were quite high, but even a 5GHz clock frequency is insufficient to allow loops to be executed in 50 picoseconds! What is going on here?

However, this implementation requires that each thread either invoke rcu_quiescent_state() periodically or invoke rcu_thread_offline() for extended quiescent states. The need to invoke these functions periodically can make this implementation difficult to use in some situations, such as for certain types of library functions.

Quick Quiz 8.73: Why would the fact that the code is in a library make any difference for how easy it is to use the RCU implementation shown in Figures 8.53 and 8.54?

Quick Quiz 8.74: But what if you hold a lock across a call to synchronize_rcu(), and then acquire that same lock within an RCU read-side critical section? This should be a deadlock, but how can a primitive that generates absolutely no code possibly participate in a deadlock cycle?

In addition, this implementation does not permit concurrent calls to synchronize_rcu() to share grace periods. That said, one could easily imagine a production-quality RCU implementation based on this version of RCU.

8.3.5.10 Summary of Toy RCU Implementations

If you made it this far, congratulations! You should now have a much clearer understanding not only of RCU itself, but also of the requirements of enclosing software environments and applications. Those wishing an even deeper understanding are invited to read Appendix D, which presents some RCU implementations that have seen extensive use in production.

The preceding sections listed some desirable properties of the various RCU primitives. The following list is provided for easy reference for those wishing to create a new RCU implementation.
1. There must be read-side primitives (such as rcu_read_lock() and rcu_read_unlock()) and grace-period primitives (such as synchronize_rcu() and call_rcu()), such that any RCU read-side critical section in existence at the start of a grace period has completed by the end of the grace period.

2. RCU read-side primitives should have minimal overhead. In particular, expensive operations such as cache misses, atomic instructions, memory barriers, and branches should be avoided.

3. RCU read-side primitives should have O(1) computational complexity to enable real-time use. (This implies that readers run concurrently with updaters.)

4. RCU read-side primitives should be usable in all contexts (in the Linux kernel, they are permitted everywhere except in the idle loop). An important special case is that RCU read-side primitives be usable within an RCU read-side critical section, in other words, that it be possible to nest RCU read-side critical sections.

5. RCU read-side primitives should be unconditional, with no failure returns. This property is extremely important, as failure checking increases complexity and complicates testing and validation.

6. Any operation other than a quiescent state (and thus a grace period) should be permitted in an RCU read-side critical section. In particular, irrevocable operations such as I/O should be permitted.

7. It should be possible to update an RCU-protected data structure while executing within an RCU read-side critical section.

8. Both RCU read-side and update-side primitives should be independent of memory allocator design and implementation, in other words, the same RCU implementation should be able to protect a given data structure regardless of how the data elements are allocated and freed.

9. RCU grace periods should not be blocked by threads that halt outside of RCU read-side critical sections. (But note that most quiescent-state-based implementations violate this desideratum.)

Quick Quiz 8.75: Given that grace periods are prohibited within RCU read-side critical sections, how can an RCU data structure possibly be updated while in an RCU read-side critical section?

Column headings: Existence Guarantee; Updates and Readers Progress Concurrently; Read-Side Overhead; Bulk Reference; Low Memory Footprint; Unconditional Acquisition; Non-Blocking Updates.
Reference Counting: Y; ++ → atomic (†); Y; ?
Hazard Pointers: Y; MB (†); Y; Y
Sequence Locks: Y; 2 MB (‡); N/A; N/A
RCU: Y; 0 → 2 MB; Y; Y
† Incurred on each element traversed on each retry. ‡ Incurred on each retry.
atomic: Atomic operation. MB: Memory barrier.
Table 8.7: Which Deferred Technique to Choose?

8.3.6 RCU Exercises

This section is organized as a series of Quick Quizzes that invite you to apply RCU to a number of examples earlier in this book. The answer to each Quick Quiz gives some hints, and also contains a pointer to a later section where the solution is explained at length.
The rcu_read_lock() , rcu_read_unlock() , rcu_dereference() , rcu_assign_pointer() , and  synchronize_rcu()  primitives should suffice for most of these exercises. Quick Quiz 8.76:  The statistical-counter implementation shown in Figure  4.9 ( count_end.c ) used a global lock to guard the summation in  read_count() , which resulted in poor performance and negative scalability. How could you use RCU to provide  read_count()  with excellent performance and good scalability. (Keep in mind that  read_count() ’s scalability will necessarily be limited by its need to scan all threads’ counters.) Quick Quiz 8.77:  Section  4.5  showed a fanciful pair of code fragments that dealt with counting I/O accesses to removable devices. These code fragments suffered from high overhead on the fastpath (starting an I/O) due to the need to acquire a reader-writer lock. How would you use RCU to provide excellent performance and scalability? (Keep in mind that the performance of the common-case first code fragment that does I/O accesses is much more important than that of the device-removal code fragment.) 8.4 Which to Choose? Table  8.7  provides some rough rules of thumb that can help you choose among the four deferred-processing techniques presented in this chapter. As shown in the “Existence Guarantee” column, if you need existence guarantees for linked data elements, you must use reference counting, hazard pointers, or RCU. Sequence locks do not provide existence guarantees, instead providing detection of  222 updates, retrying any read-side critical sections that do encounter an update. Of course, as shown in the “Updates and Readers Progress Concurrently” column, this detection of updates implies that sequence locking does not permit updates and readers to make forward progress concurrently. After all, preventing such forward progress is the whole point of using sequence locking in the first place! This situation points the way to using sequence locking in conjunction with reference counting, hazard pointers, or RCU in order to provide both existence guarantees and update detection. In fact, the Linux kernel combines RCU and sequence locking in this manner during pathname lookup. The “Read-Side Overhead” column gives a rough sense of the read-side overhead of  these techniques. The overhead of reference counting can vary widely. At the low end, a simple non-atomic increment suffices, at least in the case where the reference is acquired under the protection of a lock that must acquired for other reasons. At the high end, a fully ordered atomic operation is required. Reference counting incurs this overhead on each and every data element traversed. Hazard pointers incur the overhead of a memory barrier for each data element traversed, and sequence locks incur the overhead of a pair of memory barriers for each attempt to execute the critical section. The overhead of  RCU implemntations vary from nothing to that of a pair of memory barriers for each read-side critical section, thus providing RCU with the best performance, particularly for read-side critical sections that traverse many data elements. The “Bulk Reference” column indicates that only RCU is capable of acquiring multiple references with constant overhead. The entry for sequence locks is “N/A” because, again, sequence locks detect updates rather than acquiring references. Quick Quiz 8.78:  But can’t both reference counting and hazard pointers can also acquire a reference to multiple data elements with constant overhead? 
A single reference count can cover multiple data elements, right? The“LowMemoryFootprint”columnindicateswhichtechniquesenjoylowmemory footprint. This column ends up being the mirror image of the “Bulk Reference” column: The ability to acquire references on large numbers of data elements implies that all these data elements must persist, which in turn implies a large memory footprint in some cases. For example, one thread might delete a large number of data elements while another thread concurrently executes a long RCU read-side critical section. Because the read-side critical section could potentially retain a reference to any of the newly deleted data elements, all those data elements must be retained for the full duration of  that critical section. In contrast, reference counting and hazard pointers would retain only those specific data elements actually referenced by concurrent readers. However, this low-memory-footprint advantage comes at a price, as shown in the “Unconditional Acquisition” column. To see this, imagine a large linked data structure in which a reference-counting or hazard-pointer reader (call it Thread A) holds a reference to an isolated data element in the middle of that structure. Consider the following sequence of events: 1.  Thread B removes the data element referenced by Thread A. Because of this reference, the data element cannot yet be freed. 2.  ThreadBremovesallthedataelementsadjacenttotheonereferencedbyThreadA. Because there are no references held for these data elements, they are all imme- diately freed. Because Thread A’s data element has already been removed, its outgoing pointers are not updated. 223 3.  All of Thread A’s data element’s outgoing pointers now reference the freelist, and therefore cannot safely be traversed. 4.  The reference-counting or hazard-pointer implementation therefore has no choice but to fail any attempt by Thread A to acquire a reference via any of the pointers emanating from its data element. In short, any defered-processing technique that offers precise tracking of references must also be prepared to fail attempts to acquire references. Therefore, RCU’s memory- footprint disadvantage implies an ease-of-use advantage, namely that RCU readers need not deal with acquisition failure. This tension between memory footprint, precise tracking, and acquisition failures is sometimes resolved within the Linux kernel by combining use of RCU and reference counters. RCU is used for short-lived references, which means that RCU read-side critical sections can be short. These short RCU read-side critical sections in turn mean that the corresponding RCU grace periods can also be short, limiting the memory footprint. For the fewdata elements that need longer-lived references, reference counting is used. This means that the complexity of reference-acquisition failure only needs to be dealt with for those few data elements: The bulk of the reference acquisitions are unconditional, courtesy of RCU. Finally, the “Non-Blocking Updates” column shows that hazard pointers can pro- vide non-blocking updates [ Mic04 ,  HLM02 ]. Reference counting might or might not, depending on the implementation. However, sequence locking cannot provide non- blocking updates, courtesy of its update-side lock. RCU updaters must wait on readers, which also rules out fully non-blocking updates. 
However, there are situations in which the only blocking operation is a wait to free memory, which results in an situation that, for many purposes, is as good as non-blocking  [DMS + 12 ]. As more experience is gained using these techniques, both separately and in combi- nation, the rules of thumb laid out in this section will need to be refined. However, this section does reflect the current state of the art. 8.5 What About Updates? The deferred-processing techniques called out in this chapter are most directly applicable to read-mostly situations, which begs the question “But what about updates?” After all, increasing the performance and scalability of readers is all well and good, but it is only natural to also want great performance and scalability for writers. We have already seen one situation featuring high performance and scalability for writers, namely the counting algorithms surveyed in Chapter  4.  These algorithms featured partially partitioned data structures so that updates can can operate locally, while the more-expensive reads must sum across the entire data structure. Silas Boyd- Wickhizer has generalized this notion to produce OpLog, which he has applied to Linux- kernel pathname lookup, VM reverse mappings, and the  stat()  system call [ BW14 ] . Another approach, called “Disruptor,” is designed for applications that process high-volume streams of input data. The approach is to rely on single-producer-single- consumer FIFO queues, minimizing the need for synchronization  [ Sut13 ] . For Java applications, Disruptor also has the virtue of minimizing use of the garbage collector. And of course, where feasible, fully partitioned or “sharded” systems provide excellent performance and scalability, as noted in Chapter  5. 224 The next chapter will look at updates in the context of several types of data struc- tures. 225 226 Chapter 9 Data Structures Efficient access to data is critically important, so that discussions of algorithms include timecomplexityoftherelateddatastructures[ CLRS01 ] . However, forparallelprograms, measures of time complexity must also include concurrency effects. These effects can be overwhelmingly large, as shown in Chapter  2,  which means that concurrent data structure designs must focus as much on concurrency as they do on sequential time complexity. Section  9.1  presents a motivating application that will be used to evaluate the data structures presented in this chapter. As discussed in Chapter  5,  an excellent way to achieve high scalability is partitioning. This points the way to partitionable data structures, a topic taken up by Section  9.2 . Chapter  8  described how deferring some actions can greatly improve both performance and scalability. Section  8.3  in particular showed how to tap the awesome power of  procrastination in pursuit of performance and scalability, a topic taken up by Section  9.3 . Notalldatastructuresarepartitionable. Section 9.4 looksatamildlynon-partitionable example data structure. This section shows how to split it into read-mostly and parti- tionable portions, enabling a fast and scalable implementation. Because this chapter cannot delve into the details of every concurrent data structure that has ever been used Section  9.5  provides a brief survey of the most common and important ones. 
Although the best performance and scalability results design rather than after-the-fact micro-optimization, it is nevertheless the case that micro-optimization has an important place in achieving the absolute best possible performance and scalability. This topic is therefore taken up in Section  9.6. Finally, Section  9.7  presents a summary of this chapter. 9.1 Motivating Application We will use the Schrödinger’s Zoo application to evaluate performance  [ McK13 ]. Schrödinger has a zoo containing a large number of animals, and he would like to track them using an in-memory database with each animal in the zoo represented by a data item in this database. Each animal has a unique name that is used as a key, with a variety of data tracked for each animal. Births, captures, and purchases result in insertions, while deaths, releases, and sales result in deletions. Because Schrödinger’s zoo contains a large quantity of short-lived 227 animals, including mice and insects, the database must be able to support a high update rate. Those interested in Schrödinger’s animals can query them, however, Schrödinger has noted extremely high rates of queries for his cat, so much so that he suspects that his mice might be using the database to check up on their nemesis. This means that Schödinger’s application must be able to support a high rate of queries to a single data element. Please keep this application in mind as various data structures are presented. 9.2 Partitionable Data Structures There are a huge number of data structures in use today, so much so that there are multiple textbooks covering them. This small section focuses on a single data structure, namely the hash table. This focused approach allows a much deeper investigation of  how concurrency interacts with data structures, and also focuses on a data structure that is heavily used in practice. Section  9.2.1  overviews of the design, and Section  9.2.2 presents the implementation. Finally, Section  9.2.3  discusses the resulting performance and scalability. 9.2.1 Hash-Table Design Chapter  5  emphasized the need to apply partitioning in order to attain respectable performance and scalability, so partitionability must be a first-class criterion when selecting data structures. This criterion is well satisfied by that workhorse of parallelism, the hash table. Hash tables are conceptually simple, consisting of an array of   hash buckets . A  hash function  maps from a given element’s  key  to the hash bucket that this element will be stored in. Each hash bucket therefore heads up a linked list of elements, called a  hash chain . When properly configured, these hash chains will be quite short, permitting a hash table to access the element with a given key extremely efficiently. Quick Quiz 9.1:  But there are many types of hash tables, of which the chained hash tables described here are but one type. Why the focus on chained hash tables? In addition, each bucket can be given its own lock, so that elements in different bucketsofthehashtablemaybeadded, deleted, andlookedupcompletelyindependently. A large hash table containing a large number of elements therefore offers excellent scalability. 9.2.2 Hash-Table Implementation Figure  9.1  ( hash_bkt.c ) shows a set of data structures used in a simple fixed-sized hash table using chaining and per-hash-bucket locking, and Figure  9.2  diagrams how they fit together. 
The  hashtab  structure (lines 11-14 in Figure  9.1 ) contains four ht_bucket  structures (lines 6-9 in Figure  9.1) , with the  ->bt_nbuckets  field controlling the number of buckets. Each such bucket contains a list header  ->htb_  head  and a lock ->htb_lock . The list headers chain  ht_elem  structures (lines 1-4 in Figure  9.1 ) through their  ->hte_next  fields, and each  ht_elem  structure also caches the corresponding element’s hash value in the  ->hte_hash  field. The  ht_  elem  structure would be included in the larger structure being placed in the hash table, and this larger structure might contain a complex key. 228 1 struct ht_elem { 2 struct cds_list_head hte_next; 3 unsigned long hte_hash; 4 }; 5 6 struct ht_bucket { 7 struct cds_list_head htb_head; 8 spinlock_t htb_lock; 9 }; 10 11 struct hashtab { 12 unsigned long ht_nbuckets; 13 struct ht_bucket ht_bkt[0]; 14 }; Figure 9.1: Hash-Table Data Structures struct hashtab −>ht_nbuckets = 4 −>ht_bkt[3] −>htb_head −>htb_lock −>ht_bkt[2] −>htb_head −>htb_lock −>ht_bkt[1] −>htb_head −>htb_lock −>ht_bkt[0] −>htb_head −>htb_lock −>hte_next −>hte_hash −>hte_next −>hte_hash −>hte_next −>hte_hash struct ht_elem struct ht_elem struct ht_elem Figure 9.2: Hash-Table Data-Structure Diagram The diagram shown in Figure  9.2  has bucket 0 with two elements and bucket 2 with one. Figure  9.3  shows mapping and locking functions. Lines 1 and 2 show the macro HASH2BKT() , which maps from a hash value to the corresponding  ht_bucket structure. This macro uses a simple modulus: if more aggressive hashing is required, the caller needs to implement it when mapping from key to hash value. The remaining two functions acquire and release the  ->htb_lock  corresponding to the specified hash value. Figure  9.4  shows  hashtab_lookup() , which returns a pointer to the element with the specified hash and key if it exists, or  NULL  otherwise. This function takes both a hash value and a pointer to the key because this allows users of this function to use arbitrary keys and arbitrary hash functions, with the key-comparison function passed in via  cmp() , in a manner similar to  qsort() . Line 11 maps from the hash value to a pointer to the corresponding hash bucket. Each pass through the loop spanning line 12-19 examines one element of the bucket’s hash chain. Line 15 checks to see if the hash values match, and if not, line 16 proceeds to the next element. Line 17 checks to see if the actual key matches, and if so, line 18 returns a pointer to the matching element. If no element matches, line 20 returns  NULL . 
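As a concrete illustration of this calling convention, the following sketch looks up an animal by name in the bucket-locked table. The zoo_he structure and the zoo_hash(), zoo_cmp(), and zoo_exists() functions are hypothetical, and would need to be compiled where the static locking functions of Figure 9.3 are visible; only the hashtab_*() primitives and struct ht_elem come from Figures 9.1, 9.3, and 9.4.

  #include <string.h>
  /* Assumes the declarations from hash_bkt.c (Figures 9.1 and 9.3-9.4). */

  /* Hypothetical element type: ht_elem is placed first so that a
   * struct ht_elem pointer can simply be cast back to struct zoo_he. */
  struct zoo_he {
    struct ht_elem zhe_e;
    char zhe_name[32];          /* key: the animal's name */
  };

  /* Hypothetical hash and key-comparison functions. */
  static unsigned long zoo_hash(const char *name)
  {
    unsigned long h = 0;

    while (*name != '\0')
      h = h * 31 + (unsigned char)*name++;
    return h;
  }

  static int zoo_cmp(struct ht_elem *htep, void *key)
  {
    return strcmp(((struct zoo_he *)htep)->zhe_name, key) == 0;
  }

  /* Returns non-zero if an animal with the given name is present.
   * The element may be used only while the bucket lock is held. */
  int zoo_exists(struct hashtab *htp, char *name)
  {
    unsigned long hash = zoo_hash(name);
    int found;

    hashtab_lock(htp, hash);
    found = hashtab_lookup(htp, hash, name, zoo_cmp) != NULL;
    hashtab_unlock(htp, hash);
    return found;
  }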
Quick Quiz 9.2:  But isn’t the double comparison on lines 15-18 in Figure  9.4 229 1 #define HASH2BKT(htp, h) 2 (&(htp)->ht_bkt[h % (htp)->ht_nbuckets]) 3 4 static void hashtab_lock(struct hashtab  * htp, 5 unsigned long hash) 6 { 7 spin_lock(&HASH2BKT(htp, hash)->htb_lock); 8 } 9 10 static void hashtab_unlock(struct hashtab  * htp, 11 unsigned long hash) 12 { 13 spin_unlock(&HASH2BKT(htp, hash)->htb_lock); 14 } Figure 9.3: Hash-Table Mapping and Locking 1 struct ht_elem  * 2 hashtab_lookup(struct hashtab  * htp, 3 unsigned long hash, 4 void  * key, 5 int ( * cmp)(struct ht_elem  * htep, 6 void  * key)) 7 { 8 struct ht_bucket  * htb; 9 struct ht_elem  * htep; 10 11 htb = HASH2BKT(htp, hash); 12 cds_list_for_each_entry(htep, 13 &htb->htb_head, 14 hte_next) { 15 if (htep->hte_hash != hash) 16 continue; 17 if (cmp(htep, key)) 18 return htep; 19 } 20 return NULL; 21 } Figure 9.4: Hash-Table Lookup inefficient in the case where the key fits into an unsigned long? Figure  9.5  shows the hashtab_add() and hashtab_del() functions that add and delete elements from the hash table, respectively. The hashtab_add() function simply sets the element’s hash value on line 6, then adds it to the corresponding bucket on lines 7 and 8. The  hashtab_del()  function simply removes the specified element from whatever hash chain it is on, courtesy of the doubly linked nature of the hash-chain lists. Before calling either of these two functions, the caller is required to ensure that no other thread is accessing or modifying this same bucket, for example, by invoking  hashtab_lock()  beforehand. Figure  9.6  shows  hashtab_alloc()  and  hashtab_free() , which do hash- table allocation and freeing, respectively. Allocation begins on lines 7-9 with allocation of the underlying memory. If line 10 detects that memory has been exhausted, line 11 returns  NULL  to the caller. Otherwise, line 12 initializes the number of buckets, and the loop spanning lines 13-16 initializes the buckets themselves, including the chain list header on line 14 and the lock on line 15. Finally, line 17 returns a pointer to the newly allocated hash table. The  hashtab_free()  function on lines 20-23 is straightforward. 230 1 void 2 hashtab_add(struct hashtab  * htp, 3 unsigned long hash, 4 struct ht_elem  * htep) 5 { 6 htep->hte_hash = hash; 7 cds_list_add(&htep->hte_next, 8 &HASH2BKT(htp, hash)->htb_head); 9 } 10 11 void hashtab_del(struct ht_elem  * htep) 12 { 13 cds_list_del_init(&htep->hte_next); 14 } Figure 9.5: Hash-Table Modification 1 struct hashtab  * 2 hashtab_alloc(unsigned long nbuckets) 3 { 4 struct hashtab  * htp; 5 int i; 6 7 htp = malloc(sizeof( * htp) + 8 nbuckets  * 9 sizeof(struct ht_bucket)); 10 if (htp == NULL) 11 return NULL; 12 htp->ht_nbuckets = nbuckets; 13 for (i = 0; i < nbuckets; i++) { 14 CDS_INIT_LIST_HEAD(&htp->ht_bkt[i].htb_head); 15 spin_lock_init(&htp->ht_bkt[i].htb_lock); 16 } 17 return htp; 18 } 19 20 void hashtab_free(struct hashtab  * htp) 21 { 22 free(htp); 23 } Figure 9.6: Hash-Table Allocation and Free 9.2.3 Hash-Table Performance The performance results for an eight-CPU 2GHz Intel ® Xeon ® system using a bucket- locked hash table with 1024 buckets are shown in Figure  9.7.  The performance does scale nearly linearly, but is not much more than half of the ideal performance level, even at only eight CPUs. Part of this shortfall is due to the fact that the lock acquisitions and releases incur no cache misses on a single CPU, but do incur misses on two or more CPUs. 
And things only get worse with larger numbers of CPUs, as can be seen in Figure 9.8. We do not need an additional line to show ideal performance: The performance for nine CPUs and beyond is worse than abysmal. This clearly underscores the dangers of extrapolating performance from a modest number of CPUs.

Of course, one possible reason for the collapse in performance might be that more hash buckets are needed. After all, we did not pad each hash bucket to a full cache line, so there are a number of hash buckets per cache line. It is possible that the resulting cache-thrashing comes into play at nine CPUs. This is of course easy to test by increasing the number of hash buckets.

Figure 9.7: Read-Only Hash-Table Performance For Schrödinger's Zoo (total lookups per millisecond versus number of CPUs/threads, with an ideal trace for comparison)

Quick Quiz 9.3: Instead of simply increasing the number of hash buckets, wouldn't it be better to cache-align the existing hash buckets?

However, as can be seen in Figure 9.9, although increasing the number of buckets does increase performance somewhat, scalability is still abysmal. In particular, we still see a sharp dropoff at nine CPUs and beyond. Furthermore, going from 8192 buckets to 16,384 buckets produced almost no increase in performance. Clearly something else is going on.

The problem is that this is a multi-socket system, with CPUs 0-7 and 32-39 mapped to the first socket as shown in Table 9.1. Test runs confined to the first eight CPUs therefore perform quite well, but tests that involve socket 0's CPUs 0-7 as well as socket 1's CPU 8 incur the overhead of passing data across socket boundaries. This can severely degrade performance, as was discussed in Section 2.2.1. In short, large multi-socket systems require good locality of reference in addition to full partitioning.

Quick Quiz 9.4: Given the negative scalability of the Schrödinger's Zoo application across sockets, why not just run multiple copies of the application, with each copy having a subset of the animals and confined to run on a single socket?

One key property of the Schrödinger's-zoo runs discussed thus far is that they are all read-only. This makes the performance degradation due to lock-acquisition-induced cache misses all the more painful. Even though we are not updating the underlying hash table itself, we are still paying the price for writing to memory. Of course, if the hash table was never going to be updated, we could dispense entirely with mutual exclusion. This approach is quite straightforward and is left as an exercise for the reader. But even with the occasional update, avoiding writes avoids cache misses, and allows the read-mostly data to be replicated across all the caches, which in turn promotes locality of reference.

The next section therefore examines optimizations that can be carried out in read-mostly cases where updates are rare, but could happen at any time.
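As an aside, the cache-alignment alternative raised in Quick Quiz 9.3 might look roughly like the following sketch, which is not part of hash_bkt.c and which assumes a 64-byte cache line.

  /* Sketch only: padding and aligning each bucket to an assumed
   * 64-byte cache line so that two buckets never share a line.
   * The line size would need to match the target CPU. */
  struct ht_bucket_aligned {
    struct cds_list_head htb_head;
    spinlock_t htb_lock;
  } __attribute__((__aligned__(64)));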
Figure 9.8: Read-Only Hash-Table Performance For Schrödinger's Zoo, 60 CPUs (total lookups per millisecond versus number of CPUs/threads)

Socket  Core
0       0  1  2  3  4  5  6  7   32 33 34 35 36 37 38 39
1       8  9  10 11 12 13 14 15  40 41 42 43 44 45 46 47
2       16 17 18 19 20 21 22 23  48 49 50 51 52 53 54 55
3       24 25 26 27 28 29 30 31  56 57 58 59 60 61 62 63
Table 9.1: NUMA Topology of System Under Test

9.3 Read-Mostly Data Structures

Although partitioned data structures can offer excellent scalability, NUMA effects can result in severe degradations of both performance and scalability. In addition, the need for readers to exclude writers can degrade performance in read-mostly situations.

However, we can achieve both performance and scalability by using RCU, which was introduced in Section 8.3. Similar results can be achieved using hazard pointers (hazptr.c) [Mic04], which will be included in the performance results shown in this section [McK13].

9.3.1 RCU-Protected Hash Table Implementation

For an RCU-protected hash table with per-bucket locking, updaters use locking exactly as described in Section 9.2, but readers use RCU. The data structures remain as shown in Figure 9.1, and the HASH2BKT(), hashtab_lock(), and hashtab_unlock() functions remain as shown in Figure 9.3. However, readers use the lighter-weight concurrency control embodied by hashtab_lock_lookup() and hashtab_unlock_lookup() shown in Figure 9.10.

Figure 9.9: Read-Only Hash-Table Performance For Schrödinger's Zoo, Varying Buckets (total lookups per millisecond versus number of CPUs/threads, for 1024, 2048, 4096, 8192, and 16384 buckets)

1 static void hashtab_lock_lookup(struct hashtab *htp,
2                                 unsigned long hash)
3 {
4   rcu_read_lock();
5 }
6
7 static void hashtab_unlock_lookup(struct hashtab *htp,
8                                   unsigned long hash)
9 {
10  rcu_read_unlock();
11 }

Figure 9.10: RCU-Protected Hash-Table Read-Side Concurrency Control

Figure 9.11 shows hashtab_lookup() for the RCU-protected per-bucket-locked hash table. This is identical to that in Figure 9.4 except that cds_list_for_each_entry() is replaced by cds_list_for_each_entry_rcu(). Both of these primitives sequence down the hash chain referenced by htb->htb_head, but cds_list_for_each_entry_rcu() also correctly enforces memory ordering in case of concurrent insertion. This is an important difference between these two hash-table implementations: Unlike the pure per-bucket-locked implementation, the RCU-protected implementation allows lookups to run concurrently with insertions and deletions, and RCU-aware primitives like cds_list_for_each_entry_rcu() are required to correctly handle this added concurrency. Note also that hashtab_lookup()'s caller must be within an RCU read-side critical section, for example, the caller must invoke hashtab_lock_lookup() before invoking hashtab_lookup() (and of course invoke hashtab_unlock_lookup() some time afterwards).

Quick Quiz 9.5: But if elements in a hash table can be deleted concurrently with lookups, doesn't that mean that a lookup could return a reference to a data element that was deleted immediately after it was looked up?
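A reader of this RCU-protected table might therefore be structured as in the following sketch, which uses a hypothetical zoo_he element type (here given an age field) along with the hypothetical zoo_hash() and zoo_cmp() helpers from the earlier bucket-locked sketch; only the hashtab_*() calls come from Figures 9.3, 9.10, and 9.11.

  /* Hypothetical element type for the RCU-protected table. */
  struct zoo_he {
    struct ht_elem zhe_e;
    char zhe_name[32];
    int zhe_age;
  };

  /* Returns the animal's age, or -1 if it is not in the table.
   * Runs concurrently with insertions and deletions. */
  int zoo_animal_age(struct hashtab *htp, char *name)
  {
    unsigned long hash = zoo_hash(name);
    struct ht_elem *htep;
    int age = -1;

    hashtab_lock_lookup(htp, hash);   /* enter RCU read-side critical section */
    htep = hashtab_lookup(htp, hash, name, zoo_cmp);
    if (htep != NULL)
      age = ((struct zoo_he *)htep)->zhe_age;
    hashtab_unlock_lookup(htp, hash); /* htep must not be used after this */
    return age;
  }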
1 struct ht_elem
2 *hashtab_lookup(struct hashtab *htp,
3                 unsigned long hash,
4                 void *key,
5                 int (*cmp)(struct ht_elem *htep,
6                            void *key))
7 {
8   struct ht_bucket *htb;
9   struct ht_elem *htep;
10
11  htb = HASH2BKT(htp, hash);
12  cds_list_for_each_entry_rcu(htep,
13                              &htb->htb_head,
14                              hte_next) {
15    if (htep->hte_hash != hash)
16      continue;
17    if (cmp(htep, key))
18      return htep;
19  }
20  return NULL;
21 }

Figure 9.11: RCU-Protected Hash-Table Lookup

1 void
2 hashtab_add(struct hashtab *htp,
3             unsigned long hash,
4             struct ht_elem *htep)
5 {
6   htep->hte_hash = hash;
7   cds_list_add_rcu(&htep->hte_next,
8                    &HASH2BKT(htp, hash)->htb_head);
9 }
10
11 void hashtab_del(struct ht_elem *htep)
12 {
13  cds_list_del_rcu(&htep->hte_next);
14 }

Figure 9.12: RCU-Protected Hash-Table Modification

Figure 9.12 shows hashtab_add() and hashtab_del(), both of which are quite similar to their counterparts in the non-RCU hash table shown in Figure 9.5. The hashtab_add() function uses cds_list_add_rcu() instead of cds_list_add() in order to ensure proper ordering when an element is added to the hash table at the same time that it is being looked up. The hashtab_del() function uses cds_list_del_rcu() instead of cds_list_del_init() to allow for the case where an element is looked up just before it is deleted. Unlike cds_list_del_init(), cds_list_del_rcu() leaves the forward pointer intact, so that hashtab_lookup() can traverse to the newly deleted element's successor. Of course, after invoking hashtab_del(), the caller must wait for an RCU grace period (e.g., by invoking synchronize_rcu()) before freeing or otherwise reusing the memory for the newly deleted element.

9.3.2 RCU-Protected Hash Table Performance

Figure 9.13 shows the read-only performance of RCU-protected and hazard-pointer-protected hash tables against the previous section's per-bucket-locked implementation. As you can see, both RCU and hazard pointers achieve near-ideal performance and scalability despite the larger numbers of threads and the NUMA effects. Results from a globally locked implementation are also shown, and as expected the results are even worse than those of the per-bucket-locked implementation. RCU does slightly better than hazard pointers, but the difference is not readily visible in this log-scale plot.

Figure 9.13: Read-Only RCU-Protected Hash-Table Performance For Schrödinger's Zoo (total lookups per millisecond versus number of CPUs/threads, log-log scale, with global, bucket, ideal, and combined RCU/hazptr traces)

Figure 9.14 shows the same data on a linear scale. This drops the global-locking trace into the x-axis, but allows the relative performance of RCU and hazard pointers to be more readily discerned. Both show a change in slope at 32 CPUs, and this is due to hardware multithreading. At 32 and fewer CPUs, each thread has a core to itself. In this regime, RCU does better than does hazard pointers because hazard pointers's read-side memory barriers result in dead time within the core. In short, RCU is better able to utilize a core from a single hardware thread than is hazard pointers.

This situation changes above 32 CPUs. Because RCU is using more than half of each core's resources from a single hardware thread, RCU gains relatively little benefit from the second hardware thread in each core.
The slope of hazard pointers’s trace also decreases at 32 CPUs, but less dramatically, because the second hardware thread is able to fill in the time that the first hardware thread is stalled due to memory-barrier latency. As we will see in later sections, hazard pointers’s second-hardware-thread advantage depends on the workload. As noted earlier, Schrödinger is surprised by the popularity of his cat [ Sch35 ], but recognizes the need to reflect this popularity in his design. Figure  9.15  shows the results of 60-CPU runs, varying the number of CPUs that are doing nothing but looking up the cat. Both RCU and hazard pointers respond well to this challenge, but bucket locking scales negatively, eventually performing even worse than global locking. This should not be a surprise because if all CPUs are doing nothing but looking up the cat, the lock corresponding to the cat’s bucket is for all intents and purposes a global lock. This cat-only benchmark illustrates one potential problem with fully partitioned sharding approaches. Only the CPUs associated with the cat’s partition is able to access the cat, limiting the cat-only throughput. Of course, a great many applications have good load-spreading properties, and for these applications sharding works quite well. 236  0  100000  200000  300000  400000  500000  600000  700000  800000  900000  0 10 20 30 40 50 60    T   o    t   a    l    L   o   o    k   u   p   s   p   e   r    M    i    l    l    i   s   e   c   o   n    d Number of CPUs/Threads bucket hazptr ideal RCU Figure 9.14: Read-Only RCU-Protected Hash-Table Performance For Schrödinger’s Zoo, Linear Scale However, sharding does not handle “hot spots” very well, with the hot spot exemplified by Schrödinger’s cat being but one case in point. Of course, if we were only ever going to read the data, we would not need any concurrency control to begin with. Figure  9.16  therefore shows the effect of updates. At the extreme left-hand side of this graph, all 60 CPUs are doing lookups, while to the right all 60 CPUs are doing updates. For all four implementations, the number of  lookups per millisecond decreases as the number of updating CPUs increases, of course reaching zero lookups per millisecond when all 60 CPUs are updating. RCU does well relative to hazard pointers due to the fact that hazard pointers’s read-side memory barriers incur greater overhead in the presence of updates. It therefore seems likely that modern hardware heavily optimizes memory-barrier execution, greatly reducing memory-barrier overhead in the read-only case. Where Figure  9.16  showed the effect of increasing update rates on lookups, Fig- ure  9.17  shows the effect of increasing update rates on the updates themselves. Hazard pointers and RCU start off with a significant advantage because, unlike bucket locking, readers do not exclude updaters. However, as the number of updating CPUs increases, update-side overhead starts to make its presence known, first for RCU and then for hazard pointers. Of course, all three of these implementations fare much better than does global locking. Of course, it is quite possible that the differences in lookup performance is affected by the differences in update rates. One way to check this is to artificially throttle the update rates of per-bucket locking and hazard pointers to match that of RCU. Doing so does not significantly improve the lookup performace of per-bucket locking, nor does it close the gap between hazard pointers and RCU. 
However, removing hazard pointers's read-side memory barriers (thus resulting in an unsafe implementation of hazard pointers) does nearly close the gap between hazard pointers and RCU. Although this unsafe hazard-pointer implementation will usually be reliable enough for benchmarking purposes, it is absolutely not recommended for production use.

Quick Quiz 9.6: The dangers of extrapolating from eight CPUs to 60 CPUs were made quite clear in Section 9.2.3. But why should extrapolating up from 60 CPUs be any safer?

Figure 9.15: Read-Side Cat-Only RCU-Protected Hash-Table Performance For Schrödinger's Zoo at 60 CPUs (cat lookups per millisecond versus number of CPUs/threads looking up the cat, with global, bucket, hazptr, and RCU traces)

9.3.3 RCU-Protected Hash Table Discussion

One consequence of the RCU and hazard-pointer implementations is that a pair of concurrent readers might disagree on the state of the cat. For example, one of the readers might have fetched the pointer to the cat's data structure just before it was removed, while another reader might have fetched this same pointer just afterwards. The first reader would then believe that the cat was alive, while the second reader would believe that the cat was dead.

Of course, this situation is completely fitting for Schrödinger's cat, but it turns out that it is quite reasonable for normal non-quantum cats as well. The reason for this is that it is impossible to determine exactly when an animal is born or dies.

To see this, let's suppose that we detect a cat's death by heartbeat. This raises the question of exactly how long we should wait after the last heartbeat before declaring death. It is clearly ridiculous to wait only one millisecond, because then a healthy living cat would have to be declared dead, and then resurrected, more than once every second. It is equally ridiculous to wait a full month, because by that time the poor cat's death would have made itself very clearly known via olfactory means.

Because an animal's heart can stop for some seconds and then start up again, there is a tradeoff between timely recognition of death and probability of false alarms. It is quite possible that a pair of veterinarians might disagree on the time to wait between the last heartbeat and the declaration of death. For example, one veterinarian might declare death thirty seconds after the last heartbeat, while another might insist on waiting a full minute. In this case, the two veterinarians would disagree on the state of the cat for the second period of thirty seconds following the last heartbeat, as fancifully depicted in Figure 9.18.
Consistency with the outside world is therefore of paramount importance. However, as we saw in Figure  8.26  on page  186 , increased internal consistency can come at the expense of external consistency. Techniques such as RCU and hazard pointers give up some degree of internal consistency to attain improved external consistency. In short, internal consistency is not a natural part of all problem domains, and often incurs great expense in terms of performance, scalability, external consistency, or all of  the above. 9.4 Non-Partitionable Data Structures Fixed-size hash tables are perfectly partitionable, but resizable hash tables pose parti- tioning challenges when growing or shrinking, as fancifully depicted in Figure  9.19 . However, it turns out that it is possible to construct high-performance scalable RCU- protected hash tables, as described in the following sections. 9.4.1 Resizable Hash Table Design In happy contrast to the situation in the early 2000s, there are now no fewer than three different types of scalable RCU-protected hash tables. The first (and simplest) was developed for the Linux kernel by Herbert Xu [ Xu10 ], and is described in the following sections. The other two are covered briefly in Section  9.4.4. The key insight behind the first hash-table implementation is that each data element 239  10  100  1000  10000  100000  1 10 100    U   p    d   a    t   e   s   p   e   r    M    i    l    l    i   s   e   c   o   n    d Number of CPUs Doing Updates global bucket RCU hazptr Figure 9.17: Update-Side RCU-Protected Hash-Table Performance For Schrödinger’s Zoo at 60 CPUs Figure 9.18: Even Veterinarians Disagree! can have two sets of list pointers, with one set currently being used by RCU readers (as well as by non-RCU updaters) and the other being used to construct a new resized hash table. This approach allows lookups, insertions, and deletions to all run concurrently with a resize operation (as well as with each other). The resize operation proceeds as shown in Figures  9.20 - 9.23 , with the initial two- bucket state shown in Figure  9.20  and with time advancing from figure to figure. The initial state uses the zero-index links to chain the elements into hash buckets. A four- bucket array is allocated, and the one-index links are used to chain the elements into these four new hash buckets. This results in state (b) shown in Figure  9.21,  with readers still using the original two-bucket array. The new four-bucket array is exposed to readers and then a grace-period operation waits for all readers, resulting in state (c), shown in Figure  9.22.  In this state, all readers 240 Figure 9.19: Partitioning Problems Figure 9.20: Growing a Double-List Hash Table, State (a) are using the new four-bucket array, which means that the old two-bucket array may now be freed, resulting in state (d), shown in Figure  9.23 . This design leads to a relatively straightforward implementation, which is the subject of the next section. 9.4.2 Resizable Hash Table Implementation Resizing is accomplished by the classic approach of inserting a level of indirection, in this case, the ht structure shown on lines 12-25 of Figure  9.24.  The hashtab structure shown on lines 27-30 contains only a pointer to the current  ht  structure along with a spinlock that is used to serialize concurrent attempts to resize the hash table. 
If we were to use a traditional lock- or atomic-operation-based implementation, this  hashtab structure could become a severe bottleneck from both performance and scalability viewpoints. However, because resize operations should be relatively infrequent, we should be able to make good use of RCU. The  ht  structure represents a specific size of the hash table, as specified by the ->ht_nbuckets  field on line 13. The size is stored in the same structure containing the array of buckets ( ->ht_bkt[]  on line 24) in order to avoid mismatches between the size and the array. The  ->ht_resize_cur  field on line 14 is equal to -1 unless a resize operation is in progress, in which case it indicates the index of the bucket whose 241 Figure 9.21: Growing a Double-List Hash Table, State (b) Figure 9.22: Growing a Double-List Hash Table, State (c) elements are being inserted into the new hash table, which is referenced by the  ->ht_  new  field on line 15. If there is no resize operation in progress,  ->ht_new  is  NULL . Thus, a resize operation proceeds by allocating a new ht structure and referencing it via the  ->ht_new  pointer, then advancing  ->ht_resize_cur  through the old table’s buckets. When all the elements have been added to the new table, the new table is linked into the  hashtab  structure’s  ->ht_cur  field. Once all old readers have completed, the old hash table’s  ht  structure may be freed. The  ->ht_idx  field on line 16 indicates which of the two sets of list pointers are being used by this instantiation of the hash table, and is used to index the  ->hte_  next[]  array in the  ht_bucket  structure on line 3. The  ->ht_hash_private ,  ->ht_cmp() ,  ->ht_gethash() , and  ->ht_  getkey()  fields on lines 17-23 collectively define the per-element key and the hash function. The  ->ht_hash_private  allows the hash function to be per- turbed [ McK90a ,  McK90b ,  McK91 ], which can be used to avoid denial-of-service attacks based on statistical estimation of the parameters used in the hash function. The ->ht_cmp()  function compares a specified key with that of the specified element, the  ->ht_gethash()  calculates the specified key’s hash, and  ->ht_getkey() extracts the key from the enclosing data element. The ht_bucket structure is the same as before, and the ht_elem structure differs 242 Figure 9.23: Growing a Double-List Hash Table, State (d) 1 struct ht_elem { 2 struct rcu_head rh; 3 struct cds_list_head hte_next[2]; 4 unsigned long hte_hash; 5 }; 6 7 struct ht_bucket { 8 struct cds_list_head htb_head; 9 spinlock_t htb_lock; 10 }; 11 12 struct ht { 13 long ht_nbuckets; 14 long ht_resize_cur; 15 struct ht  * ht_new; 16 int ht_idx; 17 void  * ht_hash_private; 18 int ( * ht_cmp)(void  * hash_private, 19 struct ht_elem  * htep, 20 void  * key); 21 long ( * ht_gethash)(void  * hash_private, 22 void  * key); 23 void  * ( * ht_getkey)(struct ht_elem  * htep); 24 struct ht_bucket ht_bkt[0]; 25 }; 26 27 struct hashtab { 28 struct ht  * ht_cur; 29 spinlock_t ht_lock; 30 }; Figure 9.24: Resizable Hash-Table Data Structures from that of previous implementations only in providing a two-element array of list pointer sets in place of the prior single set of list pointers. In a fixed-sized hash table, bucket selection is quite straightforward: Simply trans- form the hash value to the corresponding bucket index. In contrast, when resizing, it is also necessary to determine which of the old and new sets of buckets to select from. 
If  the bucket that would be selected from the old table has already been distributed into the new table, then the bucket should be selected from the new table. Conversely, if the bucket that would be selected from the old table has not yet been distributed, then the bucket should be selected from the old table. BucketselectionisshowninFigure 9.25,  whichshows ht_get_bucket_single() on lines 1-8 and  ht_get_bucket()  on lines 10-24. The  ht_get_bucket_  single()  function returns a reference to the bucket corresponding to the specified key in the specified hash table, without making any allowances for resizing. It also stores the hash value corresponding to the key into the location referenced by parameter 243 1 static struct ht_bucket  * 2 ht_get_bucket_single(struct ht  * htp, 3 void  * key, long  * b) 4 { 5  * b = htp->ht_gethash(htp->ht_hash_private, 6 key) % htp->ht_nbuckets; 7 return &htp->ht_bkt[ * b]; 8 } 9 10 static struct ht_bucket  * 11 ht_get_bucket(struct ht  ** htp, void  * key, 12 long  * b, int  * i) 13 { 14 struct ht_bucket  * htbp; 15 16 htbp = ht_get_bucket_single( * htp, key, b); 17 if ( * b <= ( * htp)->ht_resize_cur) { 18  * htp = ( * htp)->ht_new; 19 htbp = ht_get_bucket_single( * htp, key, b); 20 } 21 if (i) 22  * i = ( * htp)->ht_idx; 23 return htbp; 24 } Figure 9.25: Resizable Hash-Table Bucket Selection b  on lines 5 and 6. Line 7 then returns a reference to the corresponding bucket. The  ht_get_bucket()  function handles hash-table selection, invoking  ht_  get_bucket_single() on line 16 to select the bucket corresponding to the hash in the current hash table, storing the hash value through parameter  b . If line 17 determines that the table is being resized and that line 16’s bucket has already been distributed across the new hash table, then line 18 selects the new hash table and line 19 selects the bucket corresponding to the hash in the new hash table, again storing the hash value through parameter  b . Quick Quiz 9.7:  The code in Figure  9.25  computes the hash twice! Why this blatant inefficiency? If line 21 finds that parameter  i  is non- NULL , then line 22 stores the pointer-set index for the selected hash table. Finally, line 23 returns a reference to the selected hash bucket. Quick Quiz 9.8:  How does the code in Figure  9.25  protect against the resizing process progressing past the selected bucket? Thisimplementationof  ht_get_bucket_single() and ht_get_bucket() will permit lookups and modifications to run concurrently with a resize operation. Read-side concurrency control is provided by RCU as was shown in Figure  9.10,  but theupdate-sideconcurrency-controlfunctions hashtab_lock_mod() and hashtab_  unlock_mod()  must now deal with the possibility of a concurrent resize operation as shown in Figure  9.26 . The hashtab_lock_mod() spans lines 1-19 in the figure. Line 9 enters an RCU read-side critical section to prevent the data structures from being freed during the traversal, line 10 acquires a reference to the current hash table, and then line 11 obtains a reference to the bucket in this hash table corresponding to the key. Line 12 acquires that bucket’s lock, which will prevent any concurrent resizing operation from distributing that bucket, though of course it will have no effect if the resizing operation has already distributed this bucket. 
Line 13 then checks to see if a concurrent resize operation has already distributed this bucket across the new hash table, and if not, line 14 returns with 244 1 void hashtab_lock_mod(struct hashtab  * htp_master, 2 void  * key) 3 { 4 long b; 5 struct ht  * htp; 6 struct ht_bucket  * htbp; 7 struct ht_bucket  * htbp_new; 8 9 rcu_read_lock(); 10 htp = rcu_dereference(htp_master->ht_cur); 11 htbp = ht_get_bucket_single(htp, key, &b); 12 spin_lock(&htbp->htb_lock); 13 if (b > htp->ht_resize_cur) 14 return; 15 htp = htp->ht_new; 16 htbp_new = ht_get_bucket_single(htp, key, &b); 17 spin_lock(&htbp_new->htb_lock); 18 spin_unlock(&htbp->htb_lock); 19 } 20 21 void hashtab_unlock_mod(struct hashtab  * htp_master, 22 void  * key) 23 { 24 long b; 25 struct ht  * htp; 26 struct ht_bucket  * htbp; 27 28 htp = rcu_dereference(htp_master->ht_cur); 29 htbp = ht_get_bucket(&htp, key, &b, NULL); 30 spin_unlock(&htbp->htb_lock); 31 rcu_read_unlock(); 32 } Figure 9.26: Resizable Hash-Table Update-Side Concurrency Control the selected hash bucket’s lock held (and also within an RCU read-side critical section). Otherwise, a concurrent resize operation has already distributed this bucket, so line 15 proceeds to the new hash table and line 16 selects the bucket corresponding to the key. Finally, line 17 acquires the bucket’s lock and line 18 releases the lock for the old hash table’s bucket. Once again,  hashtab_lock_mod()  exits within an RCU read-side critical section. Quick Quiz 9.9:  The code in Figures  9.25  and  9.26  compute the hash and execute the bucket-selection logic twice for updates! Why this blatant inefficiency? The hashtab_unlock_mod() functionreleasesthelockacquiredby hashtab_  lock_mod() . Line 28 picks up the current hash table, and then line 29 invokes ht_get_bucket()  in order to gain a reference to the bucket that corresponds to the key—and of course this bucket might well in a new hash table. Line 30 releases the bucket’s lock and finally line 31 exits the RCU read-side critical section. Quick Quiz 9.10:  Suppose that one thread is inserting an element into the new hash table during a resize operation. What prevents this insertion to be lost due to a subsequent resize operation completing before the insertion does? Now that we have bucket selection and concurrency control in place, we are ready to search and update our resizable hash table. The  hashtab_lookup() ,  hashtab_  add() , and  hashtab_del()  functions shown in Figure  9.27 . The  hashtab_lookup()  function on lines 1-21 of the figure does hash lookups. Line 11 fetches the current hash table and line 12 obtains a reference to the bucket corresponding to the specified key. This bucket will be located in a new resized hash table when a resize operation has progressed past the bucket in the old hash table that contained the desired data element. 
Note that line 12 also passes back the index that 245 1 struct ht_elem  * 2 hashtab_lookup(struct hashtab  * htp_master, 3 void  * key) 4 { 5 long b; 6 int i; 7 struct ht  * htp; 8 struct ht_elem  * htep; 9 struct ht_bucket  * htbp; 10 11 htp = rcu_dereference(htp_master->ht_cur); 12 htbp = ht_get_bucket(&htp, key, &b, &i); 13 cds_list_for_each_entry_rcu(htep, 14 &htbp->htb_head, 15 hte_next[i]) { 16 if (htp->ht_cmp(htp->ht_hash_private, 17 htep, key)) 18 return htep; 19 } 20 return NULL; 21 } 22 23 void 24 hashtab_add(struct hashtab  * htp_master, 25 struct ht_elem  * htep) 26 { 27 long b; 28 int i; 29 struct ht  * htp; 30 struct ht_bucket  * htbp; 31 32 htp = rcu_dereference(htp_master->ht_cur); 33 htbp = ht_get_bucket(&htp, htp->ht_getkey(htep), 34 &b, &i); 35 cds_list_add_rcu(&htep->hte_next[i], 36 &htbp->htb_head); 37 } 38 39 void 40 hashtab_del(struct hashtab  * htp_master, 41 struct ht_elem  * htep) 42 { 43 long b; 44 int i; 45 struct ht  * htp; 46 struct ht_bucket  * htbp; 47 48 htp = rcu_dereference(htp_master->ht_cur); 49 htbp = ht_get_bucket(&htp, htp->ht_getkey(htep), 50 &b, &i); 51 cds_list_del_rcu(&htep->hte_next[i]); 52 } Figure 9.27: Resizable Hash-Table Access Functions will be used to select the correct set of pointers from the pair in each element. The loop spanning lines 13-19 searches the bucket, so that if line 16 detects a match, line 18 returns a pointer to the enclosing data element. Otherwise, if there is no match, line 20 returns  NULL  to indicate failure. Quick Quiz 9.11:  In the  hashtab_lookup()  function in Figure  9.27,  the code carefully finds the right bucket in the new hash table if the element to be looked up has already been distributed by a concurrent resize operation. This seems wasteful for RCU-protected lookups. Why not just stick with the old hash table in this case? The  hashtab_add()  function on lines 23-37 of the figure adds new data el- ements to the hash table. Lines 32-34 obtain a pointer to the hash bucket corre- 246 sponding to the key (and provide the index), as before, and line 35 adds the new element to the table. The caller is required to handle concurrency, for example, by invoking  hashtab_lock_mod()  before the call to  hashtab_add()  and invok- ing hashtab_unlock_mod() afterwards. These two concurrency-control functions will correctly synchronize with a concurrent resize operation: If the resize operation has already progressed beyond the bucket that this data element would have been added to, then the element is added to the new table. The  hashtab_del()  function on lines 39-52 of the figure removes an existing element from the hash table. Lines 48-50 provide the bucket and index as before, and line 51 removes the specified element. As with  hashtab_add() , the caller is respon- sible for concurrency control and this concurrency control suffices for synchronizing with a concurrent resize operation. Quick Quiz 9.12:  The  hashtab_del()  function in Figure  9.27  does not always remove the element from the old hash table. Doesn’t this mean that readers might access this newly removed element after it has been freed? The actual resizing itself is carried out by  hashtab_resize , shown in Fig- ure  9.28  on page  248.  Line 17 conditionally acquires the top-level  ->ht_lock , and if this acquisition fails, line 18 returns  -EBUSY  to indicate that a resize is already in progress. Otherwise, line 19 picks up a reference to the current hash table, and lines 21- 24 allocate a new hash table of the desired size. 
If a new set of hash/key functions has been specified, these are used for the new table; otherwise, those of the old table are preserved. If line 25 detects memory-allocation failure, line 26 releases ->ht_lock and line 27 returns a failure indication.

Line 29 starts the bucket-distribution process by installing a reference to the new table into the ->ht_new field of the old table. Line 30 ensures that all readers who are not aware of the new table complete before the resize operation continues. Line 31 picks up the current table's index and stores its inverse to the new hash table, thus ensuring that the two hash tables avoid overwriting each other's linked lists.

Each pass through the loop spanning lines 33-44 distributes the contents of one of the old hash table's buckets into the new hash table. Line 34 picks up a reference to the old table's current bucket, line 35 acquires that bucket's spinlock, and line 36 updates ->ht_resize_cur to indicate that this bucket is being distributed.

Quick Quiz 9.13:  In the hashtab_resize() function in Figure 9.28, what guarantees that the update to ->ht_new on line 29 will be seen as happening before the update to ->ht_resize_cur on line 36 from the perspective of hashtab_lookup(), hashtab_add(), and hashtab_del()?

Each pass through the loop spanning lines 37-42 adds one data element from the current old-table bucket to the corresponding new-table bucket, holding the new-table bucket's lock during the add operation. Finally, line 43 releases the old-table bucket lock.

Execution reaches line 45 once all old-table buckets have been distributed across the new table. Line 45 installs the newly created table as the current one, and line 46 waits for all old readers (who might still be referencing the old table) to complete. Then line 47 releases the resize-serialization lock, line 48 frees the old hash table, and finally line 49 returns success. 1 int hashtab_resize(struct hashtab  * htp_master, 2 unsigned long nbuckets, void  * hash_private, 3 int ( * cmp)(void  * hash_private, struct ht_elem  * htep, void  * key), 4 long ( * gethash)(void  * hash_private, void  * key), 5 void  * ( * getkey)(struct ht_elem  * htep)) 6 { 7 struct ht  * htp; 8 struct ht  * htp_new; 9 int i; 10 int idx; 11 struct ht_elem  * htep; 12 struct ht_bucket  * htbp; 13 struct ht_bucket  * htbp_new; 14 unsigned long hash; 15 long b; 16 17 if (!spin_trylock(&htp_master->ht_lock)) 18 return -EBUSY; 19 htp = htp_master->ht_cur; 20 htp_new = ht_alloc(nbuckets, 21 hash_private ? hash_private : htp->ht_hash_private, 22 cmp ? cmp : htp->ht_cmp, 23 gethash ? gethash : htp->ht_gethash, 24 getkey ?
getkey : htp->ht_getkey); 25 if (htp_new == NULL) { 26 spin_unlock(&htp_master->ht_lock); 27 return -ENOMEM; 28 } 29 htp->ht_new = htp_new; 30 synchronize_rcu(); 31 idx = htp->ht_idx; 32 htp_new->ht_idx = !idx; 33 for (i = 0; i < htp->ht_nbuckets; i++) { 34 htbp = &htp->ht_bkt[i]; 35 spin_lock(&htbp->htb_lock); 36 htp->ht_resize_cur = i; 37 cds_list_for_each_entry(htep, &htbp->htb_head, hte_next[idx]) { 38 htbp_new = ht_get_bucket_single(htp_new, htp_new->ht_getkey(htep), &b); 39 spin_lock(&htbp_new->htb_lock); 40 cds_list_add_rcu(&htep->hte_next[!idx], &htbp_new->htb_head); 41 spin_unlock(&htbp_new->htb_lock); 42 } 43 spin_unlock(&htbp->htb_lock); 44 } 45 rcu_assign_pointer(htp_master->ht_cur, htp_new); 46 synchronize_rcu(); 47 spin_unlock(&htp_master->ht_lock); 48 free(htp); 49 return 0; 50 } Figure 9.28: Resizable Hash-Table Resizing 9.4.3 Resizable Hash Table Discussion Figure  9.29  compares resizing hash tables to their fixed-sized counterparts for 2048, 16,384, and 131,072 elements in the hash table. The figure shows three traces for each element count, one for a fixed-size 1024-bucket hash table, another for a fixed-size 2048-bucket hash table, and a third for a resizable hash table that shifts back and forth between 1024 and 2048 buckets, with a one-millisecond pause between each resize operation. The uppermost three traces are for the 2048-element hash table. The upper trace corresponds to the 2048-bucket fixed-size hash table, the middle trace to the 1024- bucket fixed-size hash table, and the lower trace to the resizable hash table. In this case, the short hash chains cause normal lookup overhead to be so low that the overhead 248  100  1000  10000  100000  1e+06  1e+07  1 10 100    L   o   o    k   u   p   s   p   e   r    M    i    l    l    i   s   e   c   o   n    d Number of CPUs/Threads 2048 16,384 131,072 Figure 9.29: Overhead of Resizing Hash Tables of resizing dominates. Nevertheless, the larger fixed-size hash table has a significant performance advantage, so that resizing can be quite beneficial, at least given sufficient time between resizing operations: One millisecond is clearly too short a time. The middle three traces are for the 16,384-element hash table. Again, the upper trace corresponds to the 2048-bucket fixed-size hash table, but the middle trace now corresponds to the resizable hash table and the lower trace to the 1024-bucket fixed-size hash table. However, the performance difference between the resizable and the 1024- bucket hash table is quite small. One consequence of the eight-fold increase in number of elements (and thus also in hash-chain length) is that incessant resizing is now no worse than maintaining a too-small hash table. The lower three traces are for the 131,072-element hash table. The upper trace corresponds to the 2048-bucket fixed-size hash table, the middle trace to the resizable hash table, and the lower trace to the 1024-bucket fixed-size hash table. In this case, longer hash chains result in higher lookup overhead, so that this lookup overhead dominates that of resizing the hash table. However, the performance of all three approaches at the 131,072-element level is more than an order of magnitude worse than that at the 2048-element level, suggesting that the best strategy would be a single 64-fold increase in hash-table size. The key point from this data is that the RCU-protected resizable hash table performs and scales almost as well as does its fixed-size counterpart. 
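For concreteness, the following sketch shows how a caller might combine the update-side locking of Figure 9.26 with the access functions of Figure 9.27. The zoo_animal structure, the "hashtab.h" header name, and the assumption that the table was created with string-key hash and comparison helpers are illustrative only; they are not part of the implementation shown in the figures.

#include <stdlib.h>
#include <urcu.h>
#include "hashtab.h"    /* assumed header for the code in Figures 9.24-9.28 */

struct zoo_animal {
        char name[32];
        struct ht_elem he;      /* embedded as in Figure 9.24 */
};

/* Readers need only an RCU read-side critical section. */
int zoo_animal_exists(struct hashtab *htp_master, char *name)
{
        int ret;

        rcu_read_lock();
        ret = hashtab_lookup(htp_master, name) != NULL;
        rcu_read_unlock();
        return ret;
}

/* Updaters bracket each modification with the resize-aware bucket lock. */
void zoo_add(struct hashtab *htp_master, struct zoo_animal *ap)
{
        hashtab_lock_mod(htp_master, ap->name);
        hashtab_add(htp_master, &ap->he);
        hashtab_unlock_mod(htp_master, ap->name);
}

void zoo_del_and_free(struct hashtab *htp_master, struct zoo_animal *ap)
{
        hashtab_lock_mod(htp_master, ap->name);
        hashtab_del(htp_master, &ap->he);
        hashtab_unlock_mod(htp_master, ap->name);
        synchronize_rcu();      /* wait for concurrent readers before freeing */
        free(ap);
}

Note that lookups run under RCU alone, which is what allows them to proceed even while a resize operation is relinking every element in the table.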
The performance during an actual resize operation of course suffers somewhat due to the cache misses caused by the updates to each element's pointers, and this effect is most pronounced when the hash table's bucket lists are short. This indicates that hash tables should be resized by substantial amounts, and that hysteresis should be applied to prevent performance degradation due to too-frequent resize operations. In memory-rich environments, hash-table sizes should furthermore be increased much more aggressively than they are decreased.

Another key point is that although the hashtab structure is non-partitionable, it is also read-mostly, which suggests the use of RCU. Given that the performance and scalability of this resizable hash table are very nearly those of RCU-protected fixed-size hash tables, we must conclude that this approach was quite successful.

Finally, it is important to note that insertions, deletions, and lookups can proceed concurrently with a resize operation. This concurrency is critically important when resizing large hash tables, especially for applications that must meet severe response-time constraints.

Of course, the ht_elem structure's pair of pointer sets does impose some memory overhead, which is taken up in the next section.

9.4.4 Other Resizable Hash Tables

One shortcoming of the resizable hash table described earlier in this section is memory consumption. Each data element has two pairs of linked-list pointers rather than just one. Is it possible to create an RCU-protected resizable hash table that makes do with just one pair? It turns out that the answer is "yes." Josh Triplett et al. [TMW11] produced a relativistic hash table that incrementally splits and combines corresponding hash chains so that readers always see valid hash chains at all points during the resizing operation. This incremental splitting and combining relies on the fact that it is harmless for a reader to see a data element that should be in some other hash chain: When this happens, the reader will simply ignore the extraneous data element due to key mismatches.

The process of shrinking a relativistic hash table by a factor of two is shown in Figure 9.30, in this case shrinking a two-bucket hash table into a one-bucket hash table, otherwise known as a linear list. This process works by coalescing pairs of buckets in the old larger hash table into single buckets in the new smaller hash table. For this process to work correctly, we clearly need to constrain the hash functions for the two tables. One such constraint is to use the same underlying hash function for both tables, but to throw out the low-order bit when shrinking from large to small. For example, the old two-bucket hash table would use the two top bits of the value, while the new one-bucket hash table could use the top bit of the value. In this way, a given pair of adjacent even and odd buckets in the old large hash table can be coalesced into a single bucket in the new small hash table, while still having a single hash value cover all of the elements in that single bucket.

The initial state is shown at the top of the figure, with time advancing from top to bottom, starting with initial state (a). The shrinking process begins by allocating the new smaller array of buckets, and having each bucket of this new smaller array reference the first element of one of the buckets of the corresponding pair in the old large hash table, resulting in state (b).
Then the two hash chains are linked together, resulting in state (c). In this state, readers looking up an even-numbered element see no change, and readers looking up elements 1 and 3 likewise see no change. However, readers looking up some other odd number will also traverse elements 0 and 2. This is harmless because any odd number will compare not-equal to these two elements. There is some performance loss, but on the other hand, this is exactly the same performance loss that will be experienced once the new small hash table is fully in place. Next, the new small hash table is made accessible to readers, resulting in state (d). Note that older readers might still be traversing the old large hash table, so in this state both hash tables are in use. The next step is to wait for all pre-existing readers to complete, resulting in state (e). In this state, all readers are using the new small hash table, so that the old large hash 250 Figure 9.30: Shrinking a Relativistic Hash Table table’s buckets may be freed, resulting in the final state (f). Growing a relativistic hash table reverses the shrinking process, but requires more grace-period steps, as shown in Figure  9.31.  The initial state (a) is at the top of this figure, with time advancing from top to bottom. We start by allocating the new large two-bucket hash table, resulting in state (b). Note that each of these new buckets references the first element destined for that bucket. These new buckets are published to readers, resulting in state (c). After a grace-period operation, all readers are using the new large hash table, resulting in state (d). In this state, only those readers traversing the even-values hash bucket traverse element 0, which is therefore now colored white. At this point, the old small hash buckets may be freed, although many implemen- tations use these old buckets to track progress “unzipping” the list of items into their respective new buckets. The last even-numbered element in the first consecutive run of such elements now has its pointer-to-next updated to reference the following even- numbered element. After a subsequent grace-period operation, the result is state (e). The vertical arrow indicates the next element to be unzipped, and element 1 is now colored black to indicate that only those readers traversing the odd-values hash bucket may reach it. Next, the last odd-numbered element in the first consecutive run of such elements 251 Figure 9.31: Growing a Relativistic Hash Table now has its pointer-to-next updated to reference the following odd-numbered element. After a subsequent grace-period operation, the result is state (f). A final unzipping operation (including a grace-period operation) results in the final state (g). In short, the relativistic hash table reduces the number of per-element list pointers at the expense of additional grace periods incurred during resizing. These additional grace periods are usually not a problem because insertions, deletions, and lookups may proceed concurrently with a resize operation. It turns out that it is possible to reduce the per-element memory overhead from a pair of pointers to a single pointer, while still retaining  O ( 1 )  deletions. This is accomplished by augmenting split-order list [ SS06 ] with RCU protection  [ Des09 ,  MDJ13a ]. The data elements in the hash table are arranged into a single sorted linked list, with each hash bucket referencing the first element in that bucket. 
Elements are deleted by setting low-order bits in their pointer-to-next fields, and these elements are removed from the list by later traversals that encounter them. This RCU-protected split-order list is complex, but offers lock-free progress guaran- tees for all insertion, deletion, and lookup operations. Such guarantees can be important 252 in real-time applications. An implementation is available from recent versions of the userspace RCU library [ Des09 ]. 9.5 Other Data Structures The preceding sections have focused on data structures that enhance concurrency due to partitionability (Section  9.2) , efficient handling of read-mostly access patterns (Sec- tion  9.3) , or application of read-mostly techniques to avoid non-partitionability (Sec- tion  9.4) . This section gives a brief review of other data structures. One of the hash table’s greatest advantages for parallel use is that it is fully parti- tionable, at least while not being resized. One way of preserving the partitionability and the size independence is to use a radix tree, which is also called a trie. Tries partition the search key, using each successive key partition to traverse the next level of the trie. As such, a trie can be thought of as a set of nested hash tables, thus providing the required partitionability. One disadvantage of tries is that a sparse key space can result in inefficient use of memory. There are a number of compression techniques that may be used to work around this disadvantage, including hashing the key value to a smaller keyspace before the traversal  [ON06] . Radix trees are heavily used in practice, including in the Linux kernel  [Pig06] . One important special case of both a hash table and a trie is what is perhaps the oldest of data structures, the array and its multi-dimensional counterpart, the matrix. The fully partitionable nature of matrices is exploited heavily in concurrent numerical algorithms. Self-balancing trees are heavily used in sequential code, with AVL trees and red- black trees being perhaps the most well-known examples [ CLRS01 ]. Early attempts to parallelize AVL trees were complex and not necessarily all that efficient  [ Ell80 ], how- ever, more recent work on red-black trees provides better performance and scalability by using RCU for readers and hashed arrays of locks 1 to protect reads and updates, respectively [ HW11 ,  HW13 ]. It turns out that red-black trees rebalance aggressively, which works well for sequential programs, but not necessarily so well for parallel use. Recent work has therefore made use of RCU-protected “bonsai trees” that rebalance less aggressively [ CKZ12 ] , trading off optimal tree depth to gain more efficient concurrent updates. Concurrent skip lists lend themselves well to RCU readers, and in fact represents an early academic use of a technique resembling RCU [ Pug90 ]. Concurrent double-ended queues were discussed in Section  5.1.2 , and concur- rent stacks and queues have a long history [ Tre86 ] , though not normally the most impressive performance or scalability. They are nevertheless a common feature of  concurrent libraries  [ MDJ13b ]. Researchers have recently proposed relaxing the ordering constraints of stacks and queues  [ Sha11 ], with some work indicating that relaxed-ordered queues actually have better ordering properties than do strict FIFO queues [ HKLP12,  KLP12,  HHK + 13 ]. It seems likely that continued work with concurrent data structures will produce novel algorithms with surprising properties. 
1 In the guise of swissTM  [ DFGG11 ] , which is a variant of software transactional memory in which the developer flags non-shared accesses. 253 9.6 Micro-Optimization The data structures shown in this section were coded straightforwardly, with no adap- tation to the underlying system’s cache hierarchy. In addition, many of the imple- mentations used pointers to functions for key-to-hash conversions and other frequent operations. Although this approach provides simplicity and portability, in many cases it does give up some performance. The following sections touch on specialization, memory conservation, and hardware considerations. Please do not mistakes these short sections for a definitive treatise on this subject. Whole books have been written on optimizing to a specific CPU, let alone to the set of CPU families in common use today. 9.6.1 Specialization The resizable hash table presented in Section  9.4  used an opaque type for the key. This allows great flexibility, permitting any sort of key to be used, but it also incurs significant overhead due to the calls via of pointers to functions. Now, modern hardware uses sophisticated branch-prediction techniques to minimize this overhead, but on the other hand, real-world software is often larger than can be accommodated even by today’s large hardware branch-prediction tables. This is especially the case for calls via pointers, in which case the branch prediction hardware must record a pointer in addition to branch-taken/branch-not-taken information. This overhead can be eliminated by specializing a hash-table implementation to a given key type and hash function. Doing so eliminates the  ->ht_cmp() ,  ->ht_  gethash() , and  ->ht_getkey()  function pointers in the  ht  structure shown in Figure  9.24  on page  243 . It also eliminates the corresponding calls through these point- ers, which could allow the compiler to inline the resulting fixed functions, eliminating not only the overhead of the call instruction, but the argument marshalling as well. In addition, the resizable hash table is designed to fit an API that segregates bucket selection from concurrency control. Although this allows a single torture test to exercise all the hash-table implementations in this chapter, it also means that many operations must compute the hash and interact with possible resize operations twice rather than just once. In a performance-conscious environment, the hashtab_lock_mod() function would also return a reference to the bucket selected, eliminating the subsequent call to ht_get_bucket() . Quick Quiz 9.14:  Couldn’t the  hashtorture.h  code be modified to accommo- date a version of   hashtab_lock_mod()  that subsumes the  ht_get_bucket() functionality? Quick Quiz 9.15:  How much do these specializations really save? Are they really worth it? All that aside, one of the great benefits of modern hardware compared to that available when I first started learning to program back in the early 1970s is that much less specialization is required. This allows much greater productivity than was possible back in the days of four-kilobyte address spaces. 9.6.2 Bits and Bytes The hash tables discussed in this chapter made almost no attempt to conserve memory. For example, the  ->ht_idx  field in the  ht  structure in Figure  9.24  on page  243 always has a value of either zero or one, yet takes up a full 32 bits of memory. It 254 could be eliminated, for example, by stealing a bit from the ->ht_resize_key  field. 
This works because the  ->ht_resize_key  field is large enough to address every byte of memory and the  ht_bucket  structure is more than one byte long, so that the ->ht_resize_key  field must have several bits to spare. This sort of bit-packing trick is frequently used in data structures that are highly replicated, as is the  page  structure in the Linux kernel. However, the resizable hash table’s  ht  structure is not all that highly replicated. It is instead the  ht_bucket structures we should focus on. There are two major opportunities for shrinking the ht_bucket  structure: (1) Placing the  ->htb_lock  field in a low-order bit of one of  the  ->htb_head  pointers and (2) Reducing the number of pointers required. The first opportunity might make use of bit-spinlocks in the Linux kernel, which are provided by the  include/linux/bit_spinlock.h  header file. These are used in space-critical data structures in the Linux kernel, but are not without their disadvantages: 1. They are significantly slower than the traditional spinlock primitives. 2.  They cannot participate in the lockdep deadlock detection tooling in the Linux kernel [ Cor06a ]. 3. They do not record lock ownership, further complicating debugging. 4.  They do not participate in priority boosting in -rt kernels, which means that preemption must be disabled when holding bit spinlocks, which can degrade real-time latency. Despite these disadvantages, bit-spinlocks are extremely useful when memory is at a premium. One aspect of the second opportunity was covered in Section  9.4.4 , which presented resizable hash tables that require only one set of bucket-list pointers in place of the pair of sets required by the resizable hash table presented in Section  9.4.  Another approach would be to use singly linked bucket lists in place of the doubly linked lists used in this chapter. One downside of this approach is that deletion would then require additional overhead, either by marking the outgoing pointer for later removal or by searching the bucket list for the element being deleted. In short, there is a tradeoff between minimal memory overhead on the one hand, and performance and simplicity on the other. Fortunately, the relatively large memories available on modern systems have allowed us to prioritize performance and simplicity over memory overhead. However, even with today’s large-memory systems 2 it is sometime necessary to take extreme measures to reduce memory overhead. 9.6.3 Hardware Considerations Modern computers typically move data between CPUs and main memory in fixed-sized blocks that range in size from 32 bytes to 256 bytes. These blocks are called  cache lines , and are extremely important to high performance and scalability, as was discussed in Section  2.2 . One timeworn way to kill both performance and scalability is to place incompatible variables into the same cacheline. For example, suppose that a resizable hash table data element had the  ht_elem  structure in the same cacheline as a counter that was incremented quite frequently. The frequent incrementing would cause the 2 Smartphones with gigabytes of memory, anyone? 255 struct hash_elem { struct ht_elem e; long __attribute__ ((aligned(64))) counter; }; Figure 9.32: Alignment for 64-Byte Cache Lines cacheline to be present at the CPU doing the incrementing, but nowhere else. If other CPUs attempted to traverse the hash bucket list containing that element, they would incur expensive cache misses, degrading both performance and scalability. 
One way to solve this problem on systems with 64-byte cache line is shown in Figure  9.32.  Here a gcc  aligned  attribute is used to force the  ->counter  and the ht_elem  structure into separate cache lines. This would allow CPUs to traverse the hash bucket list at full speed despite the frequent incrementing. Of course, this raises the question “How did we know that cache lines are 64 bytes in size?” On a Linux system, this information may be obtained from the /sys/devices/system/cpu/cpu * /cache/  directories, and it is even possi- ble to make the installation process rebuild the application to accommodate the system’s hardware structure. However, this would be more difficult if you wanted your applica- tion to also run on non-Linux systems. Furthermore, even if you were content to run only on Linux, such a self-modifying installation poses validation challenges. Fortunately, there are some rules of thumb that work reasonably well in practice, which were gathered into a 1995 paper  [ GKPS95 ] . 3 The first group of rules involve rearranging structures to accommodate cache geometry: 1.  Separate read-mostly data from data that is frequently updated. For example, place read-mostly data at the beginning of the structure and frequently updated data at the end. Where possible, place data that is rarely accessed in between. 2.  If the structure has groups of fields such that each group is updated by an indepen- dent code path, separate these groups from each other. Again, it can make sense to place data that is rarely accessed between the groups. In some cases, it might also make sense to place each such group into a separate structure referenced by the original structure. 3.  Where possible, associate update-mostly data with a CPU, thread, or task. We saw several very effective examples of this rule of thumb in the counter imple- mentations in Chapter  4. 4.  In fact, where possible, you should partition your data on a per-CPU, per-thread, or per-task basis, as was discussed in Chapter  7. There has recently been some work towards automated trace-based rearrangement of structure fields [ GDZE10 ]. This work might well ease one of the more painstaking tasks required to get excellent performance and scalability from multithreaded software. An additional set of rules of thumb deal with locks: 1.  Given a heavily contended lock protecting data that is frequently modified, take one of the following approaches: 3 A number of these rules are paraphrased and expanded on here with permission from Orran Krieger. 256 (a) Place the lock in a different cacheline than the data that it protects. (b) Use a lock that is adapted for high contention, such as a queued lock. (c)  Redesign to reduce lock contention. (This approach is best, but can require quite a bit of work.) 2.  Place uncontended locks into the same cache line as the data that they protect. This approach means that the cache miss that brought the lock to the current CPU also brought its data. 3.  Protect read-mostly data with RCU, or, if RCU cannot be used and the critical sections are of very long duration, reader-writer locks. Of course, these are rules of thumb rather than absolute rules. Some experimentation is required to work out which are most applicable to your particular situation. 9.7 Summary This chapter has focused primarily on hash tables, including resizable hash tables, which are not fully partitionable. Section  9.5  gave a quick overview of a few non-hash-table data structures. 
Nevertheless, this exposition of hash tables is an excellent introduction to the many issues surrounding high-performance scalable data access, including:

1. Fully partitioned data structures work well on small systems, for example, single-socket systems.

2. Larger systems require locality of reference as well as full partitioning.

3. Read-mostly techniques, such as hazard pointers and RCU, provide good locality of reference for read-mostly workloads, and thus provide excellent performance and scalability even on larger systems.

4. Read-mostly techniques also work well on some types of non-partitionable data structures, such as resizable hash tables.

5. Additional performance and scalability can be obtained by specializing the data structure to a specific workload, for example, by replacing a general key with a 32-bit integer.

6. Although requirements for portability and for extreme performance often conflict, there are some data-structure-layout techniques that can strike a good balance between these two sets of requirements.

That said, performance and scalability are of little use without reliability, so the next chapter covers validation.

Chapter 10 Validation

I have had a few parallel programs work the first time, but that is only because I have written a large number of parallel programs over the past two decades. And I have had far more parallel programs that fooled me into thinking that they were working correctly the first time than actually were working the first time. I have therefore had great need of validation for my parallel programs.

The basic trick behind parallel validation, as with other software validation, is to realize that the computer knows what is wrong. It is therefore your job to force it to tell you. This chapter can therefore be thought of as a short course in machine interrogation. 1 A longer course may be found in many recent books on validation, as well as at least one rather old but quite worthwhile one [Mye79].

Validation is an extremely important topic that cuts across all forms of software, and is therefore worth intensive study in its own right. However, this book is primarily about concurrency, so this chapter will necessarily do little more than scratch the surface of this critically important topic.

Section 10.1 introduces the philosophy of debugging. Section 10.2 discusses tracing, Section 10.3 discusses assertions, and Section 10.4 discusses static analysis. Section 10.5 describes some unconventional approaches to code review that can be helpful when the fabled 10,000 eyes happen not to be looking at your code. Section 10.6 gives an overview of the use of probability for validating parallel software. Because performance and scalability are first-class requirements for parallel programming, Section 10.7 covers these topics. Finally, Section 10.8 gives a fanciful summary and a short list of statistical traps to avoid.

10.1 Introduction

Section 10.1.1 discusses the sources of bugs, and Section 10.1.2 overviews the mindset required when validating software. Section 10.1.3 discusses when you should start validation, and Section 10.1.4 describes the surprisingly effective open-source regimen of code review and community testing.

1 But you can leave the thumbscrews and waterboards at home. This chapter covers much more sophisticated and effective methods, especially given that most computer systems neither feel pain nor fear drowning.

10.1.1 Where Do Bugs Come From?

Bugs come from developers.
The basic problem is that the human brain did not evolve with computer software in mind. Instead, the human brain evolved in concert with other human brains and with animal brains. Because of this history, the following three characteristics of computers often come as a shock to human intuition: 1.  Computers typically lack common sense, despite decades of research sacrificed at the altar of artificial intelligence. 2.  Computers generally fail to understand user intent, or more formally, computers generally lack a theory of mind. 3.  Computers usually cannot do anything useful with a fragmentary plan, instead requiring that each and every detail of each and every possible scenario be spelled out in full. The first two points should be uncontroversial, as they are illustrated by any number of failed products, perhaps most famously Clippy and Microsoft Bob. By attempting to relate to users as people, these two products raised common-sense and theory-of- mind expectations that they proved incapable of meeting. Perhaps the set of software assistants that have recently started appearing on smartphones will fare better. That said, the developers working on them by all accounts still develop the old way: The assistants might well benefit end users, but not so much their own developers. This human love of fragmentary plans deserves more explanation, especially given that it is a classic two-edged sword. This love of fragmentary plans is apparently due to the assumption that the person carrying out the plan will have (1) common sense and (2) a good understanding of the intent behind the plan. This latter assumption is especially likely to hold in the common case where the person doing the planning and the person carrying out the plan are one and the same: In this case, the plan will be revised almost subconsciously as obstacles arise. Therefore, the love of fragmentary plans has served human beings well, in part because it is better to take random actions that have a high probability of locating food than to starve to death while attempting to plan the unplannable. However, the past usefulness of fragmentary plans in everyday life is no guarantee of their future usefulness in stored-program computers. Furthermore, the need to follow fragmentary plans has had important effects on the human psyche, due to the fact that throughout much of human history, life was often difficult and dangerous. It should come as no surprise that executing a fragmentary plan that has a high probability of a violent encounter with sharp teeth and claws requires almost insane levels of optimism—a level of optimism that actually is present in most human beings. These insane levels of optimism extend to self-assessments of programming ability, as evidenced by the effectiveness of (and the controversy over) interviewing techniques involving coding trivial programs  [ Bra07 ] . In fact, the clinical term for a human being with less-than-insane levels of optimism is “clinically depressed.” Such people usually have extreme difficulty functioning in their daily lives, underscoring the perhaps counter-intuitive importance of insane levels of optimism to a normal, healthy life. If you are not insanely optimistic, you are less likely to start a 260 difficult but worthwhile project . 2 Quick Quiz 10.1:  When in computing is the willingness to follow a fragmentary plan critically important? An important special case is the project that, while valuable, is not valuable enough to justify the time required to implement it. 
This special case is quite common, and one early symptom is the unwillingness of the decision-makers to invest enough to actually implement the project. A natural reaction is for the developers to produce an unrealistically optimistic estimate in order to be permitted to start the project. If the organization (be it open source or proprietary) is strong enough, it might survive the resulting schedule slips and budget overruns, so that the project might see the light of  day. However, if the organization is not strong enough and if the decision-makers fail to cancel the project as soon as it becomes clear that the estimates are garbage, then the project might well kill the organization. This might result in another organization picking up the project and either completing it, cancelling it, or being killed by it. A given project might well succeed only after killing several organizations. One can only hope that the organization that eventually makes a success of a serial-organization-killer project manages maintains a suitable level of humility, lest it be killed by the next project. Important though insane levels of optimism might be, they are a key source of bugs (and perhaps failure of organizations). The question is therefore “How to maintain the optimism required to start a large project while at the same time injecting enough reality to keep the bugs down to a dull roar?” The next section examines this conundrum. 10.1.2 Required Mindset When carrying out any validation effort, you should keep the following defintions in mind: 1. The only bug-free programs are trivial programs. 2. A reliable program has no known bugs. From these definitions, it logically follows that any reliable non-trivial program contains at least one bug that you do not know about. Therefore, any validation effort undertaken on a non-trivial program that fails to find any bugs is itself a failure. A good validation is therefore an exercise in destruction. This means that if you are the type of  person who enjoys breaking things, validation is just the right type of job for you. Quick Quiz 10.2:  Suppose that you are writing a script that processes the output of  the  time  command, which looks as follows: real 0m0.132s user 0m0.040s sys 0m0.008s The script is required to check its input for errors, and to give appropriate diagnostics if fed erroneous  time  output. What test inputs should you provide to this program to test it for use with  time  output generated by single-threaded programs? But perhaps you are a super-programmer whose code is always perfect the first time every time. If so, congratulations! Feel free to skip this chapter, but I do hope that you 2 There are some famous exceptions to this rule of thumb. One set of exceptions is people who take on difficult or risky projects in order to make at least a temporary escape from their depression. Another set is people who have nothing to lose: the project is literally a matter of life or death. 261 Figure 10.1: Validation and the Geneva Convention will forgive my skepticism. You see, I have met far more people who claimed to be able to write perfect code the first time than I have people who were actually capable of carrying out this feat, which is not too surprising given the previous discussion of  optimism and over-confidence. And even if you really are a super-programmer, you just might find yourself debugging lesser mortals’ work. One approach for the rest of us is to alternate between our normal state of insane optimism (Sure, I can program that!) 
and severe pessimism (It seems to work, but I just know that there have to be more bugs hiding in there somewhere!). It helps if you enjoy breaking things. If you don’t, or if your joy in breaking things is limited to breaking other   people’s things, find someone who does love breaking your code and get them to help you test it. Another helpful frame of mind is to hate other people finding bugs in your code. This hatred can help motivate you to torture your code beyond reason in order to increase the probability that you find the bugs rather than someone else. One final frame of mind is to consider the possibility that someone’s life depends on your code being correct. This can also motivate you to torture your code into revealing the whereabouts of its bugs. This wide variety of frames of mind opens the door to the possibility of multiple people with different frames of mind contributing to the project, with varying levels of  optimism. This can work well, if properly organized. Some people might see vigorous validation as a form of torture, as depicted in Figure  10.1. 3 Such people might do well to remind themselves that, Tux cartoons aside, they are really torturing an inanimate object, as shown in Figure  10.2 . In addition, rest assured that those who fail to torture their code are doomed to be tortured by it. However, this leaves open the question of exactly when during the project lifetime validation should start, a topic taken up by the next section. 3 More cynical people might question whether these people are instead merely afraid that validation will find bugs that they will then be expected to fix. 262 Figure 10.2: Rationalizing Validation 10.1.3 When Should Validation Start? Validation should start at the same time that the project starts. To see this, consider that tracking down a bug is much harder in a large program than in a small one. Therefore, to minimize the time and effort required to track down bugs, you should test small units of code. Although you won’t find all the bugs this way, you will find a substantial fraction, and it will be much easier to find and fix the ones you do find. Testing at this level can also alert you to larger flaws in your overall design, minimizing the time you waste writing code that is quite literally broken by design. But why wait until you have code before validating your design? 4 Hopefully reading Chapters  2  and  3  provided you with the information required to avoid some regrettably common design flaws, but discussing your design with a colleague or even simply writing it down can help flush out additional flaws. However, it is all too often the case that waiting to start validation until you have a design is waiting too long. Mightn’t your natural level of optimism caused you to start the design before you fully understood the requirements? The answer to this question will almost always be “yes”. One good way to avoid flawed requirements is to get to know your users. To really serve them well, you will have to live among them. Quick Quiz 10.3:  You are asking me to do all this validation BS before I even start coding??? That sounds like a great way to never get started!!! First-of-a-kind projects require different approaches to validation, for example, rapid prototyping. Here, the main goal of the first few prototypes is to learn how the project should be implemented, not so much to create a correct implementation on the first try. 
However, it is important to keep in mind that you should not omit validation, but rather take a radically different approach to it.

Now that we have established that you should start validation when you start the project, the following sections cover a number of validation techniques and methods that have proven their worth.

10.1.4 The Open Source Way

The open-source programming methodology has proven quite effective, and includes a regimen of intense code review and testing.

I can personally attest to the effectiveness of the open-source community's intense code review. One of the first patches I prepared for the Linux kernel involved a distributed filesystem where a user on one node writes to a given file at a location that a user on another node has mapped into memory. In this case, it is necessary to invalidate the affected pages from the mapping in order to allow the filesystem to maintain coherence during the write operation. I coded up a first attempt at a patch, and, in keeping with the open-source maxim "post early, post often", I posted the patch. I then considered how I was going to test it.

But before I could even decide on an overall test strategy, I got a reply to my posting pointing out a few bugs. I fixed the bugs and reposted the patch, and returned to thinking out my test strategy. However, before I had a chance to write any test code, I received a reply to my reposted patch, pointing out more bugs. This process repeated itself many times, and I am not sure that I ever got a chance to actually test the patch.

This experience brought home the truth of the open-source saying: Given enough eyeballs, all bugs are shallow [Ray99].

However, when you post some code or a given patch, it is worth asking a few questions:

1. How many of those eyeballs are actually going to look at your code?
2. How many will be experienced and clever enough to actually find your bugs?
3. Exactly when are they going to look?

I was lucky: There was someone out there who wanted the functionality provided by my patch, who had long experience with distributed filesystems, and who looked at my patch almost immediately. If no one had looked at my patch, there would have been no review, and therefore no finding of bugs. If the people looking at my patch had lacked experience with distributed filesystems, it is unlikely that they would have found all the bugs. Had they waited months or even years to look, I likely would have forgotten how the patch was supposed to work, making it much more difficult to fix them.

However, we must not forget the second tenet of open-source development, namely intensive testing. For example, a great many people test the Linux kernel. Some test patches as they are submitted, perhaps even yours. Others test the -next tree, which is helpful, but there is likely to be a delay of several weeks or even months between the time that you write the patch and the time that it appears in the -next tree, by which time the patch will not be quite as fresh in your mind. Still others test maintainer trees, which often have a similar time delay.

Quite a few people don't test code until it is committed to mainline, or the master source tree (Linus's tree in the case of the Linux kernel). If your maintainer won't accept your patch until it has been tested, this presents you with a deadlock situation: your patch won't be accepted until it is tested, but it won't be tested until it is accepted.
Nevertheless, people who test mainline code are still relatively aggressive, given that many people and organizations do not test code until it has been pulled into a Linux distro.

And even if someone does test your patch, there is no guarantee that they will be running the hardware and software configuration and workload required to locate your bugs.

Therefore, even when writing code for an open-source project, you need to be prepared to develop and run your own test suite. Test development is an underappreciated and very valuable skill, so be sure to take full advantage of any existing test suites available to you. Important as test development is, we will leave further discussion of it to books dedicated to that topic. The following sections therefore discuss locating bugs in your code given that you already have a good test suite.

10.2 Tracing

When all else fails, add a printk()! Or a printf(), if you are working with user-mode C-language applications.

The rationale is simple: If you cannot figure out how execution reached a given point in the code, sprinkle print statements earlier in the code to work out what happened. You can get a similar effect, and with more convenience and flexibility, by using a debugger such as gdb (for user applications) or kgdb (for debugging Linux kernels). Much more sophisticated tools exist, with some of the more recent offering the ability to rewind backwards in time from the point of failure.

These brute-force testing tools are all valuable, especially now that typical systems have more than 64K of memory and CPUs running faster than 4MHz. Much has been written about these tools, so this chapter will add little more.

However, these tools all have a serious shortcoming when the job at hand is to convince the fastpath of a high-performance parallel algorithm to tell you what is going wrong, namely, they often have excessive overheads. There are special tracing technologies for this purpose, which typically leverage data ownership techniques (see Chapter 7) to minimize the overhead of runtime data collection. One example within the Linux kernel is "trace events" [Ros10b, Ros10c, Ros10d, Ros10a]. Another example that handles userspace (but has not been accepted into the Linux kernel) is LTTng [DD09]. Each of these uses per-CPU buffers to allow data to be collected with extremely low overhead. Even so, enabling tracing can sometimes change timing enough to hide bugs, resulting in heisenbugs, which are discussed in Section 10.6 and especially Section 10.6.4.

Even if you avoid heisenbugs, other pitfalls await you. For example, although the machine really does know all, what it knows is almost always way more than your head can hold. For this reason, high-quality test suites normally come with sophisticated scripts to analyze the voluminous output. But beware: scripts won't necessarily notice surprising things. My rcutorture scripts are a case in point: Early versions of those scripts were quite satisfied with a test run in which RCU grace periods stalled indefinitely. This of course resulted in the scripts being modified to detect RCU grace-period stalls, but this does not change the fact that the scripts will detect only those problems that I think to make them detect. The scripts are useful, but they are no substitute for occasional manual scans of the rcutorture output.

Another problem with tracing and especially with printk() calls is that their overhead is often too much for production use. In some such cases, assertions can be helpful.
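To make the data-ownership idea behind these tracing technologies concrete, the following minimal sketch has each thread record events into a buffer that it alone writes, deferring the expensive printf() formatting until after the test has run. This is not the Linux-kernel trace-events API, nor LTTng; the names trace_event(), trace_dump(), and TRACE_BUF_SIZE are invented for this example.

/*
 * Minimal sketch of low-overhead tracing via data ownership: each
 * thread owns its own buffer, so the fastpath needs no locks, no
 * atomics, and no shared cache lines.
 */
#include <stdio.h>
#include <time.h>

#define TRACE_BUF_SIZE 1024

struct trace_entry {
	unsigned long long timestamp;
	int event_id;
	unsigned long arg;
};

/* Per-thread buffer and index: written only by the owning thread. */
static __thread struct trace_entry trace_buf[TRACE_BUF_SIZE];
static __thread unsigned int trace_idx;

static inline void trace_event(int event_id, unsigned long arg)
{
	struct trace_entry *e = &trace_buf[trace_idx++ % TRACE_BUF_SIZE];
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	e->timestamp = ts.tv_sec * 1000000000ULL + ts.tv_nsec;
	e->event_id = event_id;
	e->arg = arg;
}

/* Deferred, slow-path dump; call after the test, from the owning thread. */
static void trace_dump(void)
{
	unsigned int n = trace_idx < TRACE_BUF_SIZE ? trace_idx : TRACE_BUF_SIZE;

	/* For simplicity, entries are printed in buffer order, not time order. */
	for (unsigned int i = 0; i < n; i++)
		printf("%llu: event %d arg %lu\n", trace_buf[i].timestamp,
		       trace_buf[i].event_id, trace_buf[i].arg);
}

Because each buffer is written by exactly one thread, trace_event() can be left in a fastpath with far less timing perturbation than a printf() call, which is precisely the point of the per-CPU-buffer schemes mentioned above.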
10.3 Assertions

Assertions are usually implemented in the following manner:

if (something_bad_is_happening())
	complain();

This pattern is often encapsulated into C-preprocessor macros or language intrinsics, for example, in the Linux kernel, this might be represented as WARN_ON(something_bad_is_happening()). Of course, if something_bad_is_happening() returns true quite frequently, the resulting output might obscure reports of other problems, in which case WARN_ON_ONCE(something_bad_is_happening()) might be more appropriate.

Quick Quiz 10.4: How can you implement WARN_ON_ONCE()?

In parallel code, one especially bad something that might happen is that a function expecting to be called under a particular lock might be called without that lock being held. Such functions sometimes have header comments stating something like "The caller must hold foo_lock when calling this function", but such a comment does no good unless someone actually reads it. An executable statement like lock_is_held(&foo_lock) carries far more force.

The Linux kernel's lockdep facility [Cor06a, Ros11] takes this a step farther, reporting potential deadlocks as well as allowing functions to verify that the proper locks are held. Of course, this additional functionality incurs significant overhead, so that lockdep is not necessarily appropriate for production use.

So what can be done in cases where checking is necessary, but where the overhead of runtime checking cannot be tolerated? One approach is static analysis, which is discussed in the next section.

10.4 Static Analysis

Static analysis is a validation technique where one program takes a second program as input, reporting errors and vulnerabilities located in this second program. Interestingly enough, almost all programs are subjected to static analysis by their compilers or interpreters. These tools are of course far from perfect, but their ability to locate errors has improved immensely over the past few decades, in part because they now have much more than 64K bytes of memory in which to carry out their analysis.

The original UNIX lint tool [Joh77] was quite useful, though much of its functionality has since been incorporated into C compilers. There are nevertheless lint-like tools under development and in use to this day.

The sparse static analyzer [Cor04] looks for higher-level issues in the Linux kernel, including:

1. Misuse of pointers to user-space structures.
2. Assignments from too-long constants.
3. Empty switch statements.
4. Mismatched lock acquisition and release primitives.
5. Misuse of per-CPU primitives.
6. Use of RCU primitives on non-RCU pointers and vice versa.

Although it is likely that compilers will continue to increase their static-analysis capabilities, the sparse static analyzer demonstrates the benefits of static analysis outside of the compiler, particularly for finding application-specific bugs.

10.5 Code Review

Various code-review activities are special cases of static analysis, but with human beings doing the analysis. This section covers inspection, walkthroughs, and self-inspection.

10.5.1 Inspection

Traditionally, formal code inspections take place in face-to-face meetings with formally defined roles: moderator, developer, and one or two other participants. The developer reads through the code, explaining what it is doing and why it works. The one or two other participants ask questions and raise issues, while the moderator's job is to resolve any conflicts and to take notes.
This process can be extremely effective at locating bugs, particularly if all of the participants are familiar with the code at hand.

However, this face-to-face formal procedure does not necessarily work well in the global Linux kernel community, although it might work well via an IRC session. Instead, individuals review code separately and provide comments via email or IRC. The note-taking is provided by email archives or IRC logs, and moderators volunteer their services as appropriate. Give or take the occasional flamewar, this process also works reasonably well, particularly if all of the participants are familiar with the code at hand. (Footnote: That said, one advantage of the Linux kernel community approach over traditional formal inspections is the greater probability of contributions from people not familiar with the code, who therefore might not be blinded by the invalid assumptions harbored by those familiar with the code.)

It is quite likely that the Linux kernel community's review process is ripe for improvement:

1. There is sometimes a shortage of people with the time and expertise required to carry out an effective review.
2. Even though all review discussions are archived, they are often "lost" in the sense that insights are forgotten and people often fail to look up the discussions. This can result in re-insertion of the same old bugs.
3. It is sometimes difficult to resolve flamewars when they do break out, especially when the combatants have disjoint goals, experience, and vocabulary.

When reviewing, therefore, it is worthwhile to review relevant documentation in commit logs, bug reports, and LWN articles.

10.5.2 Walkthroughs

A traditional code walkthrough is similar to a formal inspection, except that the group "plays computer" with the code, driven by specific test cases. A typical walkthrough team has a moderator, a secretary (who records bugs found), a testing expert (who generates the test cases), and perhaps one to two others. These can be extremely effective, albeit also extremely time-consuming.

It has been some decades since I have participated in a formal walkthrough, and I suspect that a present-day walkthrough would use single-stepping debuggers. One could imagine a particularly sadistic procedure as follows:

1. The tester presents the test case.
2. The moderator starts the code under a debugger, using the specified test case as input.
3. Before each statement is executed, the developer is required to predict the outcome of the statement and explain why this outcome is correct.
4. If the outcome differs from that predicted by the developer, this is taken as evidence of a potential bug.
5. In parallel code, a "concurrency shark" asks what code might execute concurrently with this code, and why such concurrency is harmless.

Sadistic, certainly. Effective? Maybe. If the participants have a good understanding of the requirements, software tools, data structures, and algorithms, then walkthroughs can be extremely effective. If not, walkthroughs are often a waste of time.

10.5.3 Self-Inspection

Although developers are usually not all that effective at inspecting their own code, there are a number of situations where there is no reasonable alternative. For example, the developer might be the only person authorized to look at the code, other qualified developers might all be too busy, or the code in question might be sufficiently bizarre that the developer is unable to convince anyone else to take it seriously until after demonstrating a prototype.
In these cases, the following procedure can be quite helpful, especially for complex parallel code:

1. Write a design document with requirements, diagrams for data structures, and rationale for design choices.
2. Consult with experts, updating the design document as needed.
3. Write the code in pen on paper, correcting errors as you go. Resist the temptation to refer to pre-existing nearly identical code sequences; instead, copy them.
4. If there were errors, copy the code in pen on fresh paper, correcting errors as you go. Repeat until the last two copies are identical.
5. Produce proofs of correctness for any non-obvious code.
6. Where possible, test the code fragments from the bottom up.
7. When all the code is integrated, do full-up functional and stress testing.
8. Once the code passes all tests, write code-level documentation, perhaps as an extension to the design document discussed above.

When I faithfully follow this procedure for new RCU code, there are normally only a few bugs left at the end. With a few prominent (and embarrassing) exceptions [McK11a], I usually manage to locate these bugs before others do. That said, this is getting more difficult over time as the number and variety of Linux-kernel users increases.

Quick Quiz 10.5: Why would anyone bother copying existing code in pen on paper??? Doesn't that just increase the probability of transcription errors?

Quick Quiz 10.6: This procedure is ridiculously over-engineered! How can you expect to get a reasonable amount of software written doing it this way???

The above procedure works well for new code, but what if you need to inspect code that you have already written? You can of course apply the above procedure for old code in the special case where you wrote one to throw away [FPB79], but the following approach can also be helpful in less desperate circumstances:

1. Using your favorite documentation tool (LaTeX, HTML, OpenOffice, or straight ASCII), describe the high-level design of the code in question. Use lots of diagrams to illustrate the data structures and how these structures are updated.
2. Make a copy of the code, stripping away all comments.
3. Document what the code does statement by statement.
4. Fix bugs as you find them.

This works because describing the code in detail is an excellent way to spot bugs [Mye79]. Although this second procedure is also a good way to get your head around someone else's code, in many cases, the first step suffices.

Although review and inspection by others is probably more efficient and effective, the above procedures can be quite helpful in cases where for whatever reason it is not feasible to involve others.

At this point, you might be wondering how to write parallel code without having to do all this boring paperwork. Here are some time-tested ways of accomplishing this:

1. Write a sequential program that scales through use of available parallel library functions.
2. Write sequential plug-ins for a parallel framework, such as map-reduce, BOINC, or a web-application server.
3. Do such a good job of parallel design that the problem is fully partitioned, then just implement sequential program(s) that run in parallel without communication (see the sketch following this list).
4. Stick to one of the application areas (such as linear algebra) where tools can automatically decompose and parallelize the problem.
5. Make extremely disciplined use of parallel-programming primitives, so that the resulting code is easily seen to be correct.
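To illustrate item 3 of the above list, the following sketch partitions a summation so completely that the threads share nothing but read-only input until they are joined, so that there is very little parallel code to get wrong. The array size, thread count, and function names are arbitrary choices for this example; compile with -pthread.

/*
 * Fully partitioned design: each thread sums its own disjoint slice of
 * the array, with no shared mutable state and no communication until
 * pthread_join().
 */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000

static long data[N];

struct slice {
	int start;	/* First index owned by this thread. */
	int end;	/* One past the last index owned by this thread. */
	long sum;	/* Private result, written only by the owner. */
};

static void *sum_slice(void *arg)
{
	struct slice *s = arg;

	for (int i = s->start; i < s->end; i++)
		s->sum += data[i];
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	struct slice s[NTHREADS];
	long total = 0;

	for (int i = 0; i < N; i++)
		data[i] = 1;
	for (int i = 0; i < NTHREADS; i++) {
		s[i].start = i * (N / NTHREADS);
		s[i].end = (i + 1) * (N / NTHREADS);
		s[i].sum = 0;
		pthread_create(&tid[i], NULL, sum_slice, &s[i]);
	}
	for (int i = 0; i < NTHREADS; i++) {
		pthread_join(tid[i], NULL);
		total += s[i].sum;	/* Combine only after all threads exit. */
	}
	printf("total = %ld\n", total);
	return 0;
}

Because each thread touches only data it owns, the only synchronization is the thread join itself, which is easily seen to be correct.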
But beware: It is always tempting to break the rules "just a little bit" to gain better performance or scalability. Breaking the rules often results in general breakage. That is, unless you carefully do the paperwork described in this section.

But the sad fact is that even if you do the paperwork or use one of the above ways to more-or-less safely avoid paperwork, there will be bugs. If nothing else, more users and a greater variety of users will expose more bugs more quickly, especially if those users are doing things that the original developers did not consider. The next section describes how to handle the probabilistic bugs that occur all too commonly when validating parallel software.

[Figure 10.3: Passed on Merits? Or Dumb Luck?]

10.6 Probability and Heisenbugs

So your parallel program fails. Sometimes.

But you used techniques from the earlier sections to locate the problem and now have a fix in place! Congratulations!!!

Now the question is just how much testing is required in order to be certain that you actually fixed the bug, as opposed to just reducing the probability of it occurring on the one hand, having fixed only one of several related bugs on the other hand, or having made some ineffectual unrelated change on yet a third hand. In short, what is the answer to the eternal question posed by Figure 10.3?

Unfortunately, the honest answer is that an infinite amount of testing is required to attain absolute certainty.

Quick Quiz 10.7: Suppose that you had a very large number of systems at your disposal. For example, at current cloud prices, you can purchase a huge amount of CPU time at a reasonably low cost. Why not use this approach to get close enough to certainty for all practical purposes?

But suppose that we are willing to give up absolute certainty in favor of high probability. Then we can bring powerful statistical tools to bear on this problem. However, this section will focus on simple statistical tools. These tools are extremely helpful, but please note that reading this section is not a substitute for taking a good set of statistics classes. (Footnote: Which I most highly recommend. The few statistics courses I have taken have provided value way out of proportion to the time I spent studying for them.)

For our start with simple statistical tools, we need to decide whether we are doing discrete or continuous testing. Discrete testing features well-defined individual test runs. For example, a boot-up test of a Linux kernel patch is an example of a discrete test. You boot the kernel, and it either comes up or it does not. Although you might spend an hour boot-testing your kernel, the number of times you attempted to boot the kernel and the number of times the boot-up succeeded would often be of more interest than the length of time you spent testing. Functional tests tend to be discrete.

On the other hand, if my patch involved RCU, I would probably run rcutorture, which is a kernel module that, strangely enough, tests RCU. Unlike booting the kernel, where the appearance of a login prompt signals the successful end of a discrete test, rcutorture will happily continue torturing RCU until either the kernel crashes or until you tell it to stop. The duration of the rcutorture test is therefore (usually) of more interest than the number of times you started and stopped it. Therefore, rcutorture is an example of a continuous test, a category that includes many stress tests.

The statistics governing discrete and continuous tests differ somewhat.
However, the statistics for discrete tests are simpler and more familiar than those for continuous tests, and furthermore the statistics for discrete tests can often be pressed into service (with some loss of accuracy) for continuous tests. We therefore start with discrete tests.

10.6.1 Statistics for Discrete Testing

Suppose that the bug had a 10% chance of occurring in a given run and that we do five runs. How do we compute the probability of at least one run failing? One way is as follows:

1. Compute the probability of a given run succeeding, which is 90%.
2. Compute the probability of all five runs succeeding, which is 0.9 raised to the fifth power, or about 59%.
3. There are only two possibilities: either all five runs succeed, or at least one fails. Therefore, the probability of at least one failure is 59% taken away from 100%, or 41%.

However, many people find it easier to work with a formula than a series of steps, although if you prefer the above series of steps, have at it! For those who like formulas, call the probability of a single failure f. The probability of a single success is then 1 - f and the probability that all of n tests will succeed is then:

S_n = (1 - f)^n    (10.1)

The probability of failure is 1 - S_n, or:

F_n = 1 - (1 - f)^n    (10.2)

Quick Quiz 10.8: Say what??? When I plug the earlier example of five tests each with a 10% failure rate into the formula, I get 59,050% and that just doesn't make sense!!!

So suppose that a given test has been failing 10% of the time. How many times do you have to run the test to be 99% sure that your supposed fix has actually improved matters? Another way to ask this question is "how many times would we need to run the test to cause the probability of failure to rise above 99%?" After all, if we were to run the test enough times that the probability of seeing at least one failure becomes 99%, if there are no failures, there is only 1% probability of this being due to dumb luck. And if we plug f = 0.1 into Equation 10.2 and vary n, we find that 43 runs gives us a 98.92% chance of at least one test failing given the original 10% per-test failure rate, while 44 runs gives us a 99.03% chance of at least one test failing. So if we run the test on our fix 44 times and see no failures, there is a 99% probability that our fix was actually a real improvement. But repeatedly plugging numbers into Equation 10.2 can get tedious, so let's solve for n:

F_n = 1 - (1 - f)^n    (10.3)

1 - F_n = (1 - f)^n    (10.4)

\log(1 - F_n) = n \log(1 - f)    (10.5)

Finally the number of tests required is given by:

n = \frac{\log(1 - F_n)}{\log(1 - f)}    (10.6)

Plugging f = 0.1 and F_n = 0.99 into Equation 10.6 gives 43.7, meaning that we need 44 consecutive successful test runs to be 99% certain that our fix was a real improvement. This matches the number obtained by the previous method, which is reassuring.

Quick Quiz 10.9: In Equation 10.6, are the logarithms base-10, base-2, or base-e?

[Figure 10.4: Number of Tests Required for 99 Percent Confidence Given Failure Rate]

Figure 10.4 shows a plot of this function. Not surprisingly, the less frequently each test run fails, the more test runs are required to be 99% confident that the bug has been fixed.
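For readers who would rather let the computer do the arithmetic, the following short C program (an illustration only, not part of the book's CodeSamples) evaluates Equation 10.6, reproducing both the 44-run result above and the 458-run figure quoted just below. Compile with -lm.

/*
 * Number of consecutive failure-free test runs needed for a given
 * confidence level, per Equation 10.6.
 */
#include <math.h>
#include <stdio.h>

/* Runs needed for confidence Fn given per-run failure probability f. */
static double runs_needed(double f, double Fn)
{
	return log(1.0 - Fn) / log(1.0 - f);
}

int main(void)
{
	printf("f = 0.10: %.1f runs\n", runs_needed(0.10, 0.99)); /* ~43.7 */
	printf("f = 0.01: %.1f runs\n", runs_needed(0.01, 0.99)); /* ~458.2 */
	return 0;
}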
If the bug caused the test to fail only 1% of the time, then a mind-boggling 458 test runs are required. As the failure probability decreases, the number of test runs required increases, going to infinity as the failure probability goes to zero.

The moral of this story is that when you have found a rarely occurring bug, your testing job will be much easier if you can come up with a carefully targeted test with a much higher failure rate. For example, if your targeted test raised the failure rate from 1% to 30%, then the number of runs required for 99% confidence would drop from 458 test runs to a mere thirteen test runs.

But these thirteen test runs would only give you 99% confidence that your fix had produced "some improvement". Suppose you instead want to have 99% confidence that your fix reduced the failure rate by an order of magnitude. How many failure-free test runs are required?

An order of magnitude improvement from a 30% failure rate would be a 3% failure rate. Plugging these numbers into Equation 10.6 yields:

n = \frac{\log(1 - 0.99)}{\log(1 - 0.03)} = 151.2    (10.7)

So our order of magnitude improvement requires roughly an order of magnitude more testing. Certainty is impossible, and high probabilities are quite expensive. Clearly making tests run more quickly and making failures more probable are essential skills in the development of highly reliable software. These skills will be covered in Section 10.6.4.

10.6.2 Abusing Statistics for Discrete Testing

But suppose that you have a continuous test that fails about three times every ten hours, and that you fix the bug that you believe was causing the failure. How long do you have to run this test without failure to be 99% certain that you reduced the probability of failure?

Without doing excessive violence to statistics, we could simply redefine a one-hour run to be a discrete test that has a 30% probability of failure. Then the results of the previous section tell us that if the test runs for 13 hours without failure, there is a 99% probability that our fix actually improved the program's reliability.

A dogmatic statistician might not approve of this approach, but the sad fact is that the errors introduced by this sort of abuse of statistical methodology are usually quite small compared to the errors inherent in your measurements of your program's failure rates. Nevertheless, the next section describes a slightly less dodgy approach.

10.6.3 Statistics for Continuous Testing

This section contains more aggressive mathematics. If you are not in the mood for mathematical aggression, feel free to use the results of the previous section or to skip ahead to Section 10.6.3.2, possibly noting Equation 10.30 for future reference.

10.6.3.1 Derivation of Poisson Distribution

As the number of tests n increases and the probability of per-test failure f decreases, it makes sense to move the mathematics to the continuous domain. It is convenient to define λ as nf: as we increase n and decrease f, λ will remain fixed. Intuitively, λ is the expected number of failures per unit time.

What then is the probability of all n tests succeeding? This is given by:

(1 - f)^n    (10.8)

But because λ is equal to nf, we can solve for f and obtain f = λ/n. Substituting this into the previous equation yields:

\left(1 - \frac{\lambda}{n}\right)^n    (10.9)

Readers who are both alert and mathematically inclined will recognize this as approaching e^{-\lambda} as n increases without limit.
In other words, if we expect λ failures from a test of a given duration, the probability F_0 of zero failures from the test is given by:

F_0 = e^{-\lambda}    (10.10)

The next step is to compute the probability of all but one of n tests succeeding, which is:

\frac{n!}{1!\,(n-1)!} f (1 - f)^{n-1}    (10.11)

The ratio of factorials accounts for the different permutations of test results. The f is the chance of the single failure, and the (1 - f)^{n-1} is the chance that the rest of the tests succeed. The n! in the numerator allows for all permutations of n tests, while the two factors in the denominator allow for the indistinguishability of the one failure on the one hand and the n - 1 successes on the other. Cancelling the factorials and multiplying top and bottom by 1 - f yields:

\frac{n f}{1 - f} (1 - f)^n    (10.12)

But because f is assumed to be arbitrarily small, 1 - f is arbitrarily close to the value one, allowing us to dispense with the denominator:

n f (1 - f)^n    (10.13)

Substituting f = λ/n as before yields:

\lambda \left(1 - \frac{\lambda}{n}\right)^n    (10.14)

For large n, as before, the latter term is approximated by e^{-\lambda}, so that the probability of a single failure in a test from which λ failures were expected is given by:

F_1 = \lambda e^{-\lambda}    (10.15)

The third step is to compute the probability of all but two of the n tests succeeding, which is:

\frac{n!}{2!\,(n-2)!} f^2 (1 - f)^{n-2}    (10.16)

Cancelling the factorials and multiplying top and bottom by (1 - f)^2 yields:

\frac{n (n-1) f^2}{2 (1 - f)^2} (1 - f)^n    (10.17)

Once again, because f is assumed to be arbitrarily small, (1 - f)^2 is arbitrarily close to the value one, once again allowing us to dispense with this portion of the denominator:

\frac{n (n-1) f^2}{2} (1 - f)^n    (10.18)

Substituting f = λ/n once again yields:

\frac{n (n-1) \lambda^2}{2 n^2} \left(1 - \frac{\lambda}{n}\right)^n    (10.19)

Because n is assumed large, n - 1 is arbitrarily close to n, allowing the n(n-1) in the numerator to be cancelled with the n^2 in the denominator. And again, the final term is approximated by e^{-\lambda}, yielding the probability of two failures from a test from which λ failures were expected:

F_2 = \frac{\lambda^2}{2} e^{-\lambda}    (10.20)

We are now ready to try a more general result. Assume that there are m failures, where m is extremely small compared to n. Then we have:

\frac{n!}{m!\,(n-m)!} f^m (1 - f)^{n-m}    (10.21)

Cancelling the factorials and multiplying top and bottom by (1 - f)^m yields:

\frac{n (n-1) \ldots (n-m+2)(n-m+1) f^m}{m!\,(1 - f)^m} (1 - f)^n    (10.22)

And you guessed it, because f is arbitrarily small, (1 - f)^m is arbitrarily close to the value one and may therefore be dropped:

\frac{n (n-1) \ldots (n-m+2)(n-m+1) f^m}{m!} (1 - f)^n    (10.23)

Substituting f = λ/n one more time:

\frac{n (n-1) \ldots (n-m+2)(n-m+1) \lambda^m}{m!\,n^m} \left(1 - \frac{\lambda}{n}\right)^n    (10.24)

Because m is small compared to n, we can cancel all but the last of the factors in the numerator with the n^m in the denominator, resulting in:

\frac{\lambda^m}{m!} \left(1 - \frac{\lambda}{n}\right)^n    (10.25)

As always, for large n, the last term is approximated by e^{-\lambda}, yielding the probability of m failures from a test from which λ failures were expected:

F_m = \frac{\lambda^m}{m!} e^{-\lambda}    (10.26)

This is the celebrated Poisson distribution.
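As a quick numerical sanity check of Equation 10.26, the following throwaway C program (not part of the book's CodeSamples) computes the Poisson probability of exactly m failures and its cumulative sum; with λ = 24 and m = 2 it reproduces the roughly 1.2 × 10^-8 value used in the next subsection's 24-hour-test example. Compile with -lm.

/*
 * Poisson probability mass (Equation 10.26) and its cumulative sum.
 */
#include <math.h>
#include <stdio.h>

/* Probability of exactly m failures when lambda failures are expected. */
static double poisson_pmf(int m, double lambda)
{
	double p = exp(-lambda);

	for (int i = 1; i <= m; i++)
		p *= lambda / i;	/* Builds lambda^m / m! incrementally. */
	return p;
}

/* Probability of m or fewer failures. */
static double poisson_cdf(int m, double lambda)
{
	double sum = 0.0;

	for (int i = 0; i <= m; i++)
		sum += poisson_pmf(i, lambda);
	return sum;
}

int main(void)
{
	/* Values used in the next subsection: lambda = 24, m = 2. */
	printf("P(<= 2 failures | lambda = 24) = %e\n", poisson_cdf(2, 24.0));
	return 0;
}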
A more rigorous derivation may be found in any advanced probability textbook, for example, Feller's classic "An Introduction to Probability Theory and Its Applications" [Fel50].

10.6.3.2 Use of Poisson Distribution

Let's try reworking the example from Section 10.6.2 using the Poisson distribution. Recall that this example involved a test with a 30% failure rate per hour, and that the question was how long the test would need to run on an alleged fix to be 99% certain that the fix actually reduced the failure rate. Solving this requires setting e^{-\lambda} to 0.01 and solving for λ, resulting in:

\lambda = -\log 0.01 = 4.6    (10.27)

Because we get 0.3 failures per hour, the number of hours required is 4.6/0.3 = 15.3, which is reasonably close to the 13 hours calculated using the method in Section 10.6.2. Given that you normally won't know your failure rate to within 10%, this indicates that the method in Section 10.6.2 is a good and sufficient substitute for the Poisson distribution in a great many situations.

More generally, if we have n failures per unit time, and we want to be P% certain that a fix reduced the failure rate, we can use the following formula:

T = -\frac{1}{n} \log \frac{100 - P}{100}    (10.28)

Quick Quiz 10.10: Suppose that a bug causes a test failure three times per hour on average. How long must the test run error-free to provide 99.9% confidence that the fix significantly reduced the probability of failure?

As before, the less frequently the bug occurs and the greater the required level of confidence, the longer the required error-free test run.

Suppose that a given test fails about once every hour, but after a bug fix, a 24-hour test run fails only twice. What is the probability of this being due to random chance, in other words, what is the probability that the fix had no statistical effect? This probability may be calculated by summing Equation 10.26 as follows:

F_0 + F_1 + \ldots + F_{m-1} + F_m = \sum_{i=0}^{m} \frac{\lambda^i}{i!} e^{-\lambda}    (10.29)

This is the Poisson cumulative distribution function, which can be written more compactly as:

F_{i \le m} = \sum_{i=0}^{m} \frac{\lambda^i}{i!} e^{-\lambda}    (10.30)

Here m is the number of errors in the long test run (in this case, two) and λ is the expected number of errors in the long test run (in this case, 24). Plugging m = 2 and λ = 24 into this expression gives the probability of two or fewer failures as about 1.2 × 10^{-8}, indicating that the odds are extremely good that the fix had a statistically significant effect. (Footnote: Of course, this result in no way excuses you from finding and fixing the bug(s) resulting in the remaining two failures!)

Quick Quiz 10.11: Doing the summation of all the factorials and exponentials is a real pain. Isn't there an easier way?

Quick Quiz 10.12: But wait!!! Given that there has to be some number of failures (including the possibility of zero failures), shouldn't the summation shown in Equation 10.30 approach the value 1 as m goes to infinity?

The Poisson distribution is a powerful tool for analyzing test results, but the fact is that in this last example there were still two remaining test failures in a 24-hour test run. Such a low failure rate results in very long test runs. The next section discusses counter-intuitive ways of improving this situation.

10.6.4 Hunting Heisenbugs

This line of thought also helps explain heisenbugs: adding tracing and assertions can easily reduce the probability of a bug appearing.
And this is why extremely lightweight tracing and assertion mechanisms are so critically important.

The name "heisenbug" stems from the Heisenberg Uncertainty Principle from quantum physics, which states that it is impossible to exactly quantify a given particle's position and velocity at any given point in time [Hei27]. Any attempt to more accurately measure that particle's position will result in increased uncertainty of its velocity. A similar effect occurs for heisenbugs: attempts to track down the heisenbug cause it to radically change its symptoms or even disappear completely.

If the field of physics inspired the name of this problem, it is only logical that we should look to the field of physics for the solution. Fortunately, particle physics is up to the task: Why not create an anti-heisenbug to annihilate the heisenbug? This section describes a number of ways to do just that:

1. Add delay to race-prone regions.
2. Increase workload intensity.
3. Test suspicious subsystems in isolation.
4. Simulate unusual events.

Although producing an anti-heisenbug for a given heisenbug is more an art than a science, the following sections give some tips on generating the corresponding species of anti-heisenbug.

10.6.4.1 Add Delay

Consider the count-lossy code in Section 4.1. Adding printf() statements will likely greatly reduce or even eliminate the lost counts. However, converting the load-add-store sequence to a load-add-delay-store sequence will greatly increase the incidence of lost counts (try it!). Once you spot a bug involving a race condition, it is frequently possible to create an anti-heisenbug by adding delay in this manner.

Of course, this begs the question of how to find the race condition in the first place. This is a bit of a dark art, but there are a number of things you can do to find them. One approach is to recognize that race conditions often end up corrupting some of the data involved in the race. It is therefore good practice to double-check the synchronization of any corrupted data. Even if you cannot immediately recognize the race condition, adding delay before and after accesses to the corrupted data might change the failure rate. By adding and removing the delays in an organized fashion (e.g., binary search), you might learn more about the workings of the race condition.

Quick Quiz 10.13: How is this approach supposed to help if the corruption affected some unrelated pointer, which then caused the corruption???

Another important approach is to vary the software and hardware configuration and look for statistically significant differences in failure rate. You can then look more intensively at the code implicated by the software or hardware configuration changes that make the greatest difference in failure rate. It might be helpful to test that code in isolation, for example.

One important aspect of software configuration is the history of changes, which is why git bisect is so useful. Bisection of the change history can provide very valuable clues as to the nature of the heisenbug.

Quick Quiz 10.14: But I did the bisection, and ended up with a huge commit. What do I do now?

However you locate the suspicious section of code, you can then introduce delays to attempt to increase the probability of failure. As we have seen, increasing the probability of failure makes it much easier to gain high confidence in the corresponding fix.

However, it is sometimes quite difficult to track down the problem using normal debugging techniques.
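Before moving on, here is a self-contained sketch, in the spirit of (but not identical to) the count-lossy example in Section 4.1, showing how inserting a delay between the load and the store converts an occasional lost count into a nearly certain one. The iteration counts and delay are arbitrary; compile with -pthread.

/*
 * Load-add-delay-store anti-heisenbug: widening the race window makes
 * the lost-update bug far more probable.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static unsigned long counter;	/* Deliberately unsynchronized. */

static void *incrementer(void *arg)
{
	for (int i = 0; i < 1000; i++) {
		unsigned long tmp = counter;	/* Load. */

		usleep(1);			/* Delay widens the race window. */
		counter = tmp + 1;		/* Add and store. */
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, incrementer, NULL);
	pthread_create(&t2, NULL, incrementer, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	/* A total far below 2000 demonstrates the lost updates. */
	printf("counter = %lu (expected 2000)\n", counter);
	return 0;
}

Removing the usleep() call shrinks the race window back down, which is exactly why adding instrumentation can make such bugs seem to vanish.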
The following sections present some other alternatives.

10.6.4.2 Increase Workload Intensity

It is often the case that a given test suite places relatively low stress on a given subsystem, so that a small change in timing can cause a heisenbug to disappear. One way to create an anti-heisenbug for this case is to increase the workload intensity, which has a good chance of increasing the probability of the bug appearing. If the probability is increased sufficiently, it may be possible to add lightweight diagnostics such as tracing without causing the bug to vanish.

How can you increase the workload intensity? This depends on the program, but here are some things to try:

1. Add more CPUs.
2. If the program uses networking, add more network adapters and more or faster remote systems.
3. If the program is doing heavy I/O when the problem occurs, either (1) add more storage devices, (2) use faster storage devices, for example, substitute SSDs for disks, or (3) use a RAM-based filesystem to substitute main memory for mass storage.
4. Change the size of the problem, for example, if doing a parallel matrix multiply, change the size of the matrix. Larger problems may introduce more complexity, but smaller problems often increase the level of contention. If you aren't sure whether you should go large or go small, just try both.

However, it is often the case that the bug is in a specific subsystem, and the structure of the program limits the amount of stress that can be applied to that subsystem. The next section addresses this situation.

10.6.4.3 Isolate Suspicious Subsystems

If the program is structured such that it is difficult or impossible to apply much stress to a subsystem that is under suspicion, a useful anti-heisenbug is a stress test that tests that subsystem in isolation. The Linux kernel's rcutorture module takes exactly this approach with RCU: By applying more stress to RCU than is feasible in a production environment, the probability that any RCU bugs will be found during rcutorture testing rather than during production use is increased. (Footnote: Though sadly not increased to probability one.)

In fact, when creating a parallel program, it is wise to stress-test the components separately. Creating such component-level stress tests can seem like a waste of time, but a little bit of component-level testing can save a huge amount of system-level debugging.

10.6.4.4 Simulate Unusual Events

Heisenbugs are sometimes due to unusual events, such as memory-allocation failure, conditional-lock-acquisition failure, CPU-hotplug operations, timeouts, packet losses, and so on. One way to construct an anti-heisenbug for this class of heisenbug is to introduce spurious failures.

For example, instead of invoking malloc() directly, invoke a wrapper function that uses a random number to decide whether to return NULL unconditionally on the one hand, or to actually invoke malloc() and return the resulting pointer on the other. Inducing spurious failures is an excellent way to bake robustness into sequential programs as well as parallel programs.

Quick Quiz 10.15: Why don't existing conditional-locking primitives provide this spurious-failure functionality?

Thus far, we have focused solely on bugs in the parallel program's functionality. However, because performance is a first-class requirement for a parallel program (otherwise, why not write a sequential program?), the next section looks into finding performance bugs.
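A minimal sketch of the fault-injecting malloc() wrapper described above follows. The name maybe_malloc() and the failure rate are made up for this example; production fault-injection frameworks are considerably more elaborate.

/*
 * Sketch of a malloc() wrapper that injects spurious failures so that
 * callers' error paths get exercised during testing.
 */
#include <stdlib.h>

/* Roughly one allocation in a hundred fails spuriously. */
#define FAIL_ONE_IN 100

static void *maybe_malloc(size_t size)
{
	if (random() % FAIL_ONE_IN == 0)
		return NULL;		/* Spurious failure. */
	return malloc(size);
}

Seeding the random-number generator with srandom() and a logged value makes a given sequence of spurious failures reproducible, which is helpful when one of those failures does expose a bug.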
10.7 Performance Estimation

Parallel programs usually have performance and scalability requirements, after all, if performance is not an issue, why not use a sequential program? Ultimate performance and linear scalability might not be necessary, but there is little use for a parallel program that runs slower than its optimal sequential counterpart. And there really are cases where every microsecond matters and every nanosecond is needed. Therefore, for parallel programs, insufficient performance is just as much a bug as is incorrectness.

Quick Quiz 10.16: That is ridiculous!!! After all, isn't getting the correct answer later than one would like better than getting an incorrect answer???

Quick Quiz 10.17: But if you are going to put in all the hard work of parallelizing an application, why not do it right? Why settle for anything less than optimal performance and linear scalability?

Validating a parallel program must therefore include validating its performance. But validating performance means having a workload to run and performance criteria with which to evaluate the program at hand. These needs are often met by performance benchmarks, which are discussed in the next section.

10.7.1 Benchmarking

The old saying goes "There are lies, damn lies, statistics, and benchmarks." However, benchmarks are heavily used, so it is not helpful to be too dismissive of them.

Benchmarks span the range from ad hoc test jigs to international standards, but regardless of their level of formality, benchmarks serve four major purposes:

1. Providing a fair framework for comparing competing implementations.
2. Focusing competitive energy on improving implementations in ways that matter to users.
3. Serving as example uses of the implementations being benchmarked.
4. Serving as a marketing tool to highlight your software's strong points against your competitors' offerings.

Of course, the only completely fair framework is the intended application itself. So why would anyone who cared about fairness in benchmarking bother creating imperfect benchmarks rather than simply using the application itself as the benchmark?

Running the actual application is in fact the best approach where it is practical. Unfortunately, it is often impractical for the following reasons:

1. The application might be proprietary, and you might not have the right to run the intended application.
2. The application might require more hardware than you have access to.
3. The application might use data that you cannot legally access, for example, due to privacy regulations.

In these cases, creating a benchmark that approximates the application can help overcome these obstacles. A carefully constructed benchmark can help promote performance, scalability, energy efficiency, and much else besides.

10.7.2 Profiling

In many cases, a fairly small portion of your software is responsible for the majority of the performance and scalability shortfall. However, developers are notoriously unable to identify the actual bottlenecks by hand. For example, in the case of a kernel buffer allocator, all attention focused on a search of a dense array which turned out to represent only a few percent of the allocator's execution time. An execution profile collected via a logic analyzer focused attention on the cache misses that were actually responsible for the majority of the problem [MS93].
There are a number of tools including gprof and perf that can help you to focus your attention where it will do the most good.

10.7.3 Differential Profiling

Scalability problems will not necessarily be apparent unless you are running on very large systems. However, it is sometimes possible to detect impending scalability problems even when running on much smaller systems. One technique for doing this is called differential profiling.

The idea is to run your workload under two different sets of conditions. For example, you might run it on two CPUs, then run it again on four CPUs. You might instead vary the load placed on the system, the number of network adapters, the number of mass-storage devices, and so on. You then collect profiles of the two runs, and mathematically combine corresponding profile measurements. For example, if your main concern is scalability, you might take the ratio of corresponding measurements, and then sort the ratios into descending numerical order. The prime scalability suspects will then be sorted to the top of the list [McKenney95a, McKenney99b].

Some tools such as perf have built-in differential-profiling support.

10.7.4 Microbenchmarking

Microbenchmarking can be useful when deciding which algorithms or data structures are worth incorporating into a larger body of software for deeper evaluation.

One common approach to microbenchmarking is to measure the time, run some number of iterations of the code under test, then measure the time again. The difference between the two times divided by the number of iterations gives the measured time required to execute the code under test.

Unfortunately, this approach to measurement allows any number of errors to creep in, including:

1. The measurement will include some of the overhead of the time measurement. This source of error can be reduced to an arbitrarily small value by increasing the number of iterations.
2. The first few iterations of the test might incur cache misses or (worse yet) page faults that might inflate the measured value. This source of error can also be reduced by increasing the number of iterations, or it can often be eliminated entirely by running a few warm-up iterations before starting the measurement period.
3. Some types of interference, for example, random memory errors, are so rare that they can be dealt with by running a number of sets of iterations of the test. If the level of interference was statistically significant, any performance outliers could be rejected statistically.
4. Any iteration of the test might be interfered with by other activity on the system. Sources of interference include other applications, system utilities and daemons, device interrupts, firmware interrupts (including system management interrupts, or SMIs), virtualization, memory errors, and much else besides. Assuming that these sources of interference occur randomly, their effect can be minimized by reducing the number of iterations.

The first and fourth sources of interference provide conflicting advice, which is one sign that we are living in the real world. The remainder of this section looks at ways of resolving this conflict.

Quick Quiz 10.18: But what about other sources of error, for example, due to interactions between caches and memory layout?
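To make the basic measure-loop-measure pattern concrete, the following sketch times a stand-in workload, using warm-up iterations to address the second error source above. The function code_under_test() and the iteration counts are arbitrary placeholders for this example.

/*
 * Minimal microbenchmark harness: warm up, time a loop, report the
 * average per-iteration cost.
 */
#include <stdio.h>
#include <time.h>

#define WARMUP_ITERS 1000
#define MEASURE_ITERS 1000000

static volatile long sink;

static void code_under_test(void)
{
	sink++;		/* Stand-in workload. */
}

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	double t1, t2;

	for (int i = 0; i < WARMUP_ITERS; i++)	/* Warm caches and TLBs. */
		code_under_test();
	t1 = now_sec();
	for (int i = 0; i < MEASURE_ITERS; i++)
		code_under_test();
	t2 = now_sec();
	printf("%.2f ns per iteration\n", (t2 - t1) / MEASURE_ITERS * 1e9);
	return 0;
}

Even this simple harness remains subject to every interference source in the list above, which is what the isolation and rejection techniques described next are for.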
The following sections discuss ways of dealing with these measurement errors, with Section 10.7.5 covering isolation techniques that may be used to prevent some forms of interference, and with Section 10.7.6 covering methods for detecting interference so as to reject measurement data that might have been corrupted by that interference.

10.7.5 Isolation

The Linux kernel provides a number of ways to isolate a group of CPUs from outside interference.

First, let's look at interference by other processes, threads, and tasks. The POSIX sched_setaffinity() system call may be used to move most tasks off of a given set of CPUs and to confine your tests to that same group. The Linux-specific user-level taskset command may be used for the same purpose, though both sched_setaffinity() and taskset require elevated permissions. Linux-specific control groups (cgroups) may be used for this same purpose. This approach can be quite effective at reducing interference, and is sufficient in many cases. However, it does have limitations, for example, it cannot do anything about the per-CPU kernel threads that are often used for housekeeping tasks.

One way to avoid interference from per-CPU kernel threads is to run your test at a high real-time priority, for example, by using the POSIX sched_setscheduler() system call. However, note that if you do this, you are implicitly taking on responsibility for avoiding infinite loops, because otherwise your test will prevent part of the kernel from functioning. (Footnote: This is an example of the Spiderman Principle: "With great power comes great responsibility.")

These approaches can greatly reduce, and perhaps even eliminate, interference from processes, threads, and tasks. However, they do nothing to prevent interference from device interrupts, at least in the absence of threaded interrupts. Linux allows some control of threaded interrupts via the /proc/irq directory, which contains numerical directories, one per interrupt vector. Each numerical directory contains smp_affinity and smp_affinity_list. Given sufficient permissions, you can write a value to these files to restrict interrupts to the specified set of CPUs. For example, "sudo echo 3 > /proc/irq/23/smp_affinity" would confine interrupts on vector 23 to CPUs 0 and 1. The same results may be obtained via "sudo echo 0-1 > /proc/irq/23/smp_affinity_list". You can use "cat /proc/interrupts" to obtain a list of the interrupt vectors on your system, how many are handled by each CPU, and what devices use each interrupt vector. Running a similar command for all interrupt vectors on your system would confine interrupts to CPUs 0 and 1, leaving the remaining CPUs free of interference.

Or mostly free of interference, anyway. It turns out that the scheduling-clock interrupt fires on each CPU that is running in user mode. (Footnote: Frederic Weisbecker is working on an adaptive-ticks project that will allow the scheduling-clock interrupt to be shut off on any CPU that has only one runnable task, but as of early 2013, this is unfortunately still work in progress.) In addition, you must take care to ensure that the set of CPUs that you confine the interrupts to is capable of handling the load.

But this only handles processes and interrupts running in the same operating-system instance as the test. Suppose that you are running the test in a guest OS that is itself running on a hypervisor, for example, Linux running KVM? Although you can in theory apply the same techniques at the hypervisor level that you can at the guest-OS level, it is quite common for hypervisor-level operations to be restricted to authorized personnel. In addition, none of these techniques work against firmware-level interference.
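As a concrete illustration of the process-level techniques described at the start of this section, the following sketch pins the current process to a single CPU and raises it to real-time priority. The CPU number and priority value are arbitrary, error handling is minimal, and a real test harness would check capabilities and restore the original settings afterwards.

/*
 * Confine the current process to CPU 2 and run it at real-time
 * priority, reducing interference from other tasks.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;
	struct sched_param sp = { .sched_priority = 1 };

	CPU_ZERO(&set);
	CPU_SET(2, &set);		/* Run only on CPU 2. */
	if (sched_setaffinity(0, sizeof(set), &set) != 0)
		perror("sched_setaffinity");

	/* SCHED_FIFO requires elevated privileges; beware infinite loops. */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
		perror("sched_setscheduler");

	/* Run the code under test here. */
	return 0;
}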
Quick Quiz 10.19: Wouldn't the techniques suggested to isolate the code under test also affect that code's performance, particularly if it is running within a larger application?

If you find yourself in this painful situation, instead of preventing the interference, you might need to detect the interference as described in the next section.

10.7.6 Detecting Interference

If you cannot prevent interference, perhaps you can detect the interference after the fact and reject the test runs that were affected by that interference. Section 10.7.6.1 describes methods of rejection involving additional measurements, while Section 10.7.6.2 describes statistics-based rejection.

10.7.6.1 Detecting Interference Via Measurement

Many systems, including Linux, provide means for determining after the fact whether some forms of interference have occurred. For example, if your test encountered process-based interference, a context switch must have occurred during the test. On Linux-based systems, this context switch will be visible in /proc/<PID>/sched in the nr_switches field. Similarly, interrupt-based interference can be detected via the /proc/interrupts file.

Opening and reading files is not the way to low overhead, and it is possible to get the count of context switches for a given thread by using the getrusage() system call, as shown in Figure 10.5. This same system call can be used to detect minor page faults (ru_minflt) and major page faults (ru_majflt).

#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>

/* Return 0 if test results should be rejected. */
int runtest(void)
{
	struct rusage ru1;
	struct rusage ru2;

	if (getrusage(RUSAGE_SELF, &ru1) != 0) {
		perror("getrusage");
		abort();
	}
	/* run test here. */
	if (getrusage(RUSAGE_SELF, &ru2) != 0) {
		perror("getrusage");
		abort();
	}
	return (ru1.ru_nvcsw == ru2.ru_nvcsw &&
	        ru1.ru_nivcsw == ru2.ru_nivcsw);
}

Figure 10.5: Using getrusage() to Detect Context Switches

Unfortunately, detecting memory errors and firmware interference is quite system-specific, as is the detection of interference due to virtualization. Although avoidance is better than detection, and detection is better than statistics, there are times when one must avail oneself of statistics, a topic addressed in the next section.

10.7.6.2 Detecting Interference Via Statistics

Any statistical analysis will be based on assumptions about the data, and performance microbenchmarks often support the following assumptions:

1. Smaller measurements are more likely to be accurate than larger measurements.
2. The measurement uncertainty of good data is known.
3. A reasonable fraction of the test runs will result in good data.

The fact that smaller measurements are more likely to be accurate than larger measurements suggests that sorting the measurements in increasing order is likely to be productive. (Footnote: To paraphrase the old saying, "Sort first and ask questions later.") The fact that the measurement uncertainty is known allows us to accept measurements within this uncertainty of each other: If the effects of interference are large compared to this uncertainty, this will ease rejection of bad data.
Finally, the fact that some fraction (for example, one third) can be assumed to be good allows us to blindly accept the first portion of the sorted list, and this data can then be used to gain an estimate of the natural variation of the measured data, over and above the assumed measurement error.

The approach is to take the specified number of leading elements from the beginning of the sorted list, and use these to estimate a typical inter-element delta, which in turn may be multiplied by the number of elements in the list to obtain an upper bound on permissible values. The algorithm then repeatedly considers the next element of the list. If it falls below the upper bound, and if the distance between the next element and the previous element is not too much greater than the average inter-element distance for the portion of the list accepted thus far, then the next element is accepted and the process repeats. Otherwise, the remainder of the list is rejected.

Figure 10.6 shows a simple sh/awk script implementing this notion. Input consists of an x-value followed by an arbitrarily long list of y-values, and output consists of one line for each input line, with fields as follows:

1. The x-value.
2. The average of the selected data.
3. The minimum of the selected data.
4. The maximum of the selected data.
5. The number of selected data items.
6. The number of input data items.

This script takes three optional arguments as follows:

• --divisor: Number of segments to divide the list into, for example, a divisor of four means that the first quarter of the data elements will be assumed to be good. This defaults to three.

• --relerr: Relative measurement error. The script assumes that values that differ by less than this error are for all intents and purposes equal. This defaults to 0.01, which is equivalent to 1%.

• --trendbreak: Ratio of inter-element spacing constituting a break in the trend of the data. For example, if the average spacing in the data accepted so far is 1.5, then if the trend-break ratio is 2.0, then if the next data value differs from the last one by more than 3.0, this constitutes a break in the trend. (Unless of course, the relative error is greater than 3.0, in which case the "break" will be ignored.)

Lines 1-3 of Figure 10.6 set the default values for the parameters, and lines 4-21 parse any command-line overriding of these parameters. The awk invocation on lines 23 and 24 sets the values of the divisor, relerr, and trendbreak variables to their sh counterparts. In the usual awk manner, lines 25-52 are executed on each input line. The loop spanning lines 25 and 26 copies the input y-values to the d array, which line 27 sorts into increasing order. Line 28 computes the number of y-values that are to be trusted absolutely by applying divisor and rounding up.

Lines 29-33 compute the maxdelta value used as a lower bound on the upper bound of y-values. To this end, lines 29 and 30 multiply the difference in values over the trusted region of data by the divisor, which projects the difference in values across the trusted region across the entire set of y-values. However, this value might well be much smaller than the relative error, so line 31 computes the absolute error (d[i] * relerr) and adds that to the difference delta across the trusted portion of the data. Lines 32 and 33 then compute the maximum of these two values.
Each pass through the loop spanning lines 34-43 attempts to add another data value to the set of good data. Lines 35-39 compute the trend-break delta, with line 36 disabling this limit if we don't yet have enough values to compute a trend, and with lines 38 and 39 multiplying trendbreak by the average difference between pairs of data values in the good set. If line 40 determines that the candidate data value would exceed the lower bound on the upper bound (maxdelta) and line 41 determines that the difference between the candidate data value and its predecessor exceeds the trend-break difference (maxdiff), then line 42 exits the loop: We have the full good set of data. Lines 44-52 then compute and print the statistics for the data set.

Quick Quiz 10.20: This approach is just plain weird! Why not use means and standard deviations, like we were taught in our statistics classes?

Quick Quiz 10.21: But what if all the y-values in the trusted group of data are exactly zero? Won't that cause the script to reject any non-zero value?

Although statistical interference detection can be quite useful, it should be used only as a last resort. It is far better to avoid interference in the first place (Section 10.7.5), or, failing that, to detect interference via measurement (Section 10.7.6.1).

10.8 Summary

Although validation never will be an exact science, much can be gained by taking an organized approach to it, as an organized approach will help you choose the right validation tools for your job, avoiding situations like the one fancifully depicted in Figure 10.7.

A key choice is that of statistics. Although the methods described in this chapter work very well most of the time, they do have their limitations. These limitations are inherent because we are attempting to do something that is in general impossible, courtesy of the Halting Problem [Tur37, Pul00]. Fortunately for us, there are a huge number of special cases in which we can not only work out whether a given program will halt, but also establish estimates for how long it will run before halting, as discussed in Section 10.7. Furthermore, in cases where a given program might or might not work correctly, we can often establish estimates for what fraction of the time it will work correctly, as discussed in Section 10.6.

Nevertheless, unthinking reliance on these estimates is brave to the point of foolhardiness. After all, we are summarizing a huge mass of complexity in code and data structures down to a single solitary number. Even though we can get away with such bravery a surprisingly large fraction of the time, it is only reasonable to expect that the code and data being abstracted away will occasionally cause severe problems.

One possible problem is variability, where repeated runs might give wildly different results. This is often dealt with by maintaining a standard deviation as well as a mean, but the fact is that attempting to summarize the behavior of a large and complex program with two numbers is almost as brave as summarizing its behavior with only one number. In computer programming, the surprising thing is that use of the mean or the mean and standard deviation are often sufficient, but there are no guarantees.

One cause of variation is confounding factors. For example, the CPU time consumed by a linked-list search will depend on the length of the list.
Averaging together runs with wildly different list lengths will probably not be useful, and adding a standard deviation to the mean will not be much better. The right thing to do would be to control for list length, either by holding the length constant or by measuring CPU time as a function of list length.

Of course, this advice assumes that you are aware of the confounding factors, and Murphy says that you probably will not be. I have been involved in projects that had confounding factors as diverse as air conditioners (which drew considerable power at startup, thus causing the voltage supplied to the computer to momentarily drop too low, sometimes resulting in failure), cache state (resulting in odd variations in performance), I/O errors (including disk errors, packet loss, and duplicate Ethernet MAC addresses), and even porpoises (which could not resist playing with an array of transponders, which, in the absence of porpoises, could be used for high-precision acoustic positioning and navigation).

In short, validation always will require some measure of the behavior of the system. Because this measure must be a severe summarization of the system, it can be misleading. So as the saying goes, "Be careful. It is a real world out there."

But suppose you are working on the Linux kernel, which as of 2013 has about a billion instances throughout the world? In that case, a bug that would be encountered once every million years will be encountered almost three times per day across the installed base. A test with a 50% chance of encountering this bug in a one-hour run would need to increase that bug's probability of occurrence by more than nine orders of magnitude, which poses a severe challenge to today's testing methodologies. One important tool that can sometimes be applied with good effect to such situations is formal validation, the subject of the next chapter.

1 divisor=3
2 relerr=0.01
3 trendbreak=10
4 while test $# -gt 0
5 do
6   case "$1" in
7   --divisor)
8     shift
9     divisor=$1
10     ;;
11   --relerr)
12     shift
13     relerr=$1
14     ;;
15   --trendbreak)
16     shift
17     trendbreak=$1
18     ;;
19   esac
20   shift
21 done
22
23 awk -v divisor=$divisor -v relerr=$relerr \
24     -v trendbreak=$trendbreak '{
25   for (i = 2; i <= NF; i++)
26     d[i - 1] = $i;
27   asort(d);
28   i = int((NF + divisor - 1) / divisor);
29   delta = d[i] - d[1];
30   maxdelta = delta * divisor;
31   maxdelta1 = delta + d[i] * relerr;
32   if (maxdelta1 > maxdelta)
33     maxdelta = maxdelta1;
34   for (j = i + 1; j < NF; j++) {
35     if (j <= 2)
36       maxdiff = d[NF - 1] - d[1];
37     else
38       maxdiff = trendbreak * \
39         (d[j - 1] - d[1]) / (j - 2);
40     if (d[j] - d[1] > maxdelta &&
41         d[j] - d[j - 1] > maxdiff)
42       break;
43   }
44   n = sum = 0;
45   for (k = 1; k < j; k++) {
46     sum += d[k];
47     n++;
48   }
49   min = d[1];
50   max = d[j - 1];
51   avg = sum / n;
52   print $1, avg, min, max, n, NF - 1;
53 }'

Figure 10.6: Statistical Elimination of Interference

Figure 10.7: Choose Validation Methods Wisely

Chapter 11
Formal Verification

Parallel algorithms can be hard to write, and even harder to debug. Testing, though essential, is insufficient, as fatal race conditions can have extremely low probabilities of occurrence. Proofs of correctness can be valuable, but in the end are just as prone to human error as is the original algorithm.
In addition, a proof of correctness cannot be expected to find errors in your assumptions, shortcomings in the requirements, misunderstandings of the underlying software or hardware primitives, or errors that you did not think to construct a proof for. This means that formal methods can never replace testing; however, formal methods are nevertheless a valuable addition to your validation toolbox.

It would be very helpful to have a tool that could somehow locate all race conditions. A number of such tools exist, for example, the language Promela and its compiler Spin, which are described in this chapter. Section 11.1 provides an introduction to Promela and Spin, Section 11.2 demonstrates use of Promela and Spin to find a race in a non-atomic increment example, Section 11.3 uses Promela and Spin to validate a similar atomic-increment example, Section 11.4 gives an overview of using Promela and Spin, Section 11.5 demonstrates a Promela model of a spinlock, Section 11.6 applies Promela and Spin to validate a simple RCU implementation, Section 11.7 applies Promela to validate an interface between preemptible RCU and the dyntick-idle energy-conservation feature in the Linux kernel, Section 11.8 presents a simpler interface that does not require formal verification, Section 11.9 describes the PPCMEM tool that understands ARM and Power memory ordering, and finally Section 11.10 sums up use of formal-verification tools for verifying parallel algorithms.

11.1 What are Promela and Spin?

Promela is a language designed to help verify protocols, but which can also be used to verify small parallel algorithms. You recode your algorithm and correctness constraints in the C-like language Promela, and then use Spin to translate it into a C program that you can compile and run. The resulting program conducts a full state-space search of your algorithm, either verifying or finding counter-examples for assertions that you can include in your Promela program.

This full-state search can be extremely powerful, but can also be a two-edged sword. If your algorithm is too complex or your Promela implementation is careless, there might be more states than fit in memory. Furthermore, even given sufficient memory, the state-space search might well run for longer than the expected lifetime of the universe. Therefore, use this tool for compact but complex parallel algorithms. Attempts to naively apply it to even moderate-scale algorithms (let alone the full Linux kernel) will end badly.

1 #define NUMPROCS 2
2
3 byte counter = 0;
4 byte progress[NUMPROCS];
5
6 proctype incrementer(byte me)
7 {
8   int temp;
9
10   temp = counter;
11   counter = temp + 1;
12   progress[me] = 1;
13 }
14
15 init {
16   int i = 0;
17   int sum = 0;
18
19   atomic {
20     i = 0;
21     do
22     :: i < NUMPROCS ->
23       progress[i] = 0;
24       run incrementer(i);
25       i++
26     :: i >= NUMPROCS -> break
27     od;
28   }
29   atomic {
30     i = 0;
31     sum = 0;
32     do
33     :: i < NUMPROCS ->
34       sum = sum + progress[i];
35       i++
36     :: i >= NUMPROCS -> break
37     od;
38     assert(sum < NUMPROCS || counter == NUMPROCS)
39   }
40 }

Figure 11.1: Promela Code for Non-Atomic Increment

Promela and Spin may be downloaded from http://spinroot.com/spin/whatispin.html.

The above site also gives links to Gerard Holzmann's excellent book [Hol03] on Promela and Spin, as well as searchable online references starting at: http://www.spinroot.com/spin/Man/index.html.
The remainder of this chapter describes how to use Promela to debug parallel algorithms, starting with simple examples and progressing to more complex uses.

11.2 Promela Example: Non-Atomic Increment

Figure 11.1 demonstrates the textbook race condition resulting from non-atomic increment. Line 1 defines the number of processes to run (we will vary this to see the effect on state space), line 3 defines the counter, and line 4 is used to implement the assertion that appears on lines 29-39.

pan: assertion violated ((sum<2)||(counter==2)) (at depth 20)
pan: wrote increment.spin.trail
(Spin Version 4.2.5 -- 2 April 2005)
Warning: Search not completed
    + Partial Order Reduction
Full statespace search for:
    never claim            - (none specified)
    assertion violations   +
    cycle checks           - (disabled by -DSAFETY)
    invalid end states     +
State-vector 40 byte, depth reached 22, errors: 1
    45 states, stored
    13 states, matched
    58 transitions (= stored+matched)
    51 atomic steps
hash conflicts: 0 (resolved)
2.622 memory usage (Mbyte)

Figure 11.2: Non-Atomic Increment spin Output

Lines 6-13 define a process that increments the counter non-atomically. The argument me is the process number, set by the initialization block later in the code. Because simple Promela statements are each assumed atomic, we must break the increment into the two statements on lines 10-11. The assignment on line 12 marks the process's completion. Because the Spin system will fully search the state space, including all possible sequences of states, there is no need for the loop that would be used for conventional testing.

Lines 15-40 are the initialization block, which is executed first. Lines 19-28 actually do the initialization, while lines 29-39 perform the assertion. Both are atomic blocks in order to avoid unnecessarily increasing the state space: because they are not part of the algorithm proper, we lose no verification coverage by making them atomic.

The do-od construct on lines 21-27 implements a Promela loop, which can be thought of as a C for (;;) loop containing a switch statement that allows expressions in case labels. The condition blocks (prefixed by ::) are scanned non-deterministically, though in this case only one of the conditions can possibly hold at a given time. The first block of the do-od from lines 22-25 initializes the i-th incrementer's progress cell, runs the i-th incrementer's process, and then increments the variable i. The second block of the do-od on line 26 exits the loop once these processes have been started.

The atomic block on lines 29-39 also contains a similar do-od loop that sums up the progress counters. The assert() statement on line 38 verifies that if all processes have been completed, then all counts have been correctly recorded.

You can build and run this program as follows:

spin -a increment.spin  # Translate the model to C
cc -DSAFETY -o pan pan.c  # Compile the model
./pan  # Run the model

This will produce output as shown in Figure 11.2. The first line tells us that our assertion was violated (as expected given the non-atomic increment!). The second line reports that a trail file was written describing how the assertion was violated. The "Warning" line reiterates that all was not well with our model. The second paragraph describes the type of state-search being carried out, in this case for assertion violations and invalid end states. The third paragraph gives state-size statistics: this small model had only 45 states. The final line shows memory usage.
The  trail  file may be rendered human-readable as follows: spin -t -p increment.spin Starting :init: with pid 0 1: proc 0 (:init:) line 20 "increment.spin" (state 1) [i = 0] 2: proc 0 (:init:) line 22 "increment.spin" (state 2) [((i<2))] 2: proc 0 (:init:) line 23 "increment.spin" (state 3) [progress[i] = 0] Starting incrementer with pid 1 3: proc 0 (:init:) line 24 "increment.spin" (state 4) [(run incrementer(i))] 3: proc 0 (:init:) line 25 "increment.spin" (state 5) [i = (i+1)] 4: proc 0 (:init:) line 22 "increment.spin" (state 2) [((i<2))] 4: proc 0 (:init:) line 23 "increment.spin" (state 3) [progress[i] = 0] Starting incrementer with pid 2 5: proc 0 (:init:) line 24 "increment.spin" (state 4) [(run incrementer(i))] 5: proc 0 (:init:) line 25 "increment.spin" (state 5) [i = (i+1)] 6: proc 0 (:init:) line 26 "increment.spin" (state 6) [((i>=2))] 7: proc 0 (:init:) line 21 "increment.spin" (state 10) [break] 8: proc 2 (incrementer) line 10 "increment.spin" (state 1) [temp = counter] 9: proc 1 (incrementer) line 10 "increment.spin" (state 1) [temp = counter] 10: proc 2 (incrementer) line 11 "increment.spin" (state 2) [counter = (temp+1)] 11: proc 2 (incrementer) line 12 "increment.spin" (state 3) [progress[me] = 1] 12: proc 2 terminates 13: proc 1 (incrementer) line 11 "increment.spin" (state 2) [counter = (temp+1)] 14: proc 1 (incrementer) line 12 "increment.spin" (state 3) [progress[me] = 1] 15: proc 1 terminates 16: proc 0 (:init:) line 30 "increment.spin" (state 12) [i = 0] 16: proc 0 (:init:) line 31 "increment.spin" (state 13) [sum = 0] 17: proc 0 (:init:) line 33 "increment.spin" (state 14) [((i<2))] 17: proc 0 (:init:) line 34 "increment.spin" (state 15) [sum = (sum+progress[i])] 17: proc 0 (:init:) line 35 "increment.spin" (state 16) [i = (i+1)] 18: proc 0 (:init:) line 33 "increment.spin" (state 14) [((i<2))] 18: proc 0 (:init:) line 34 "increment.spin" (state 15) [sum = (sum+progress[i])] 18: proc 0 (:init:) line 35 "increment.spin" (state 16) [i = (i+1)] 19: proc 0 (:init:) line 36 "increment.spin" (state 17) [((i>=2))] 20: proc 0 (:init:) line 32 "increment.spin" (state 21) [break] spin: line 38 "increment.spin", Error: assertion violated spin: text of failed assertion: assert(((sum<2)||(counter==2))) 21: proc 0 (:init:) line 38 "increment.spin" (state 22) [assert(((sum<2)||(counter==2)))] spin: trail ends after 21 steps #processes: 1 counter = 1 progress[0] = 1 progress[1] = 1 21: proc 0 (:init:) line 40 "increment.spin" (state 24) 3 processes created Figure 11.3: Non-Atomic Increment Error Trail This gives the output shown in Figure  11.3 . As can be seen, the first portion of the init block created both incrementer processes, both of which first fetched the counter, then both incremented and stored it, losing a count. The assertion then triggered, after which the global state is displayed. 11.3 Promela Example: Atomic Increment It is easy to fix this example by placing the body of the incrementer processes in an atomic blocks as shown in Figure  11.4.  
One could also have simply replaced the pair of statements with  counter = counter + 1 , because Promela statements are 292 1 proctype incrementer(byte me) 2 { 3 int temp; 4 5 atomic { 6 temp = counter; 7 counter = temp + 1; 8 } 9 progress[me] = 1; 10 } Figure 11.4: Promela Code for Atomic Increment (Spin Version 4.2.5 -- 2 April 2005) + Partial Order Reduction Full statespace search for: never claim - (none specified) assertion violations + cycle checks - (disabled by -DSAFETY) invalid end states + State-vector 40 byte, depth reached 20, errors: 0 52 states, stored 21 states, matched 73 transitions (= stored+matched) 66 atomic steps hash conflicts: 0 (resolved) 2.622 memory usage (Mbyte) unreached in proctype incrementer (0 of 5 states) unreached in proctype :init: (0 of 24 states) Figure 11.5: Atomic Increment spin Output 293 # incrementers # states megabytes 1 11 2.6 2 52 2.6 3 372 2.6 4 3,496 2.7 5 40,221 5.0 6 545,720 40.5 7 8,521,450 652.7 Table 11.1: Memory Usage of Increment Model atomic. Either way, running this modified model gives us an error-free traversal of the state space, as shown in Figure  11.5. 11.3.1 Combinatorial Explosion Table  11.1  shows the number of states and memory consumed as a function of number of incrementers modeled (by redefining  NUMPROCS ): Running unnecessarily large models is thus subtly discouraged, although 652MB is well within the limits of modern desktop and laptop machines. With this example under our belt, let’s take a closer look at the commands used to analyze Promela models and then look at more elaborate examples. 11.4 How to Use Promela Given a source file  qrcu.spin ,  one can use the following commands: •  spin -a qrcu.spin  Create a file pan.c that fully searches the state machine. •  cc -DSAFETY -o pan pan.c  Compile the generated state-machine search. The  -DSAFETY  generates optimizations that are appropriate if you have only assertions (and perhaps  never  statements). If you have liveness, fairness, or forward-progress checks, you may need to compile without  -DSAFETY . If you leave off  -DSAFETY when you could have used it, the program will let you know. The optimizations produced by -DSAFETY greatly speedthings up, so you should use it when you can. An example situation where you cannot use  -DSAFETY  is when checking for livelocks (AKA “non-progress cycles”) via  -DNP . •  ./pan This actually searches the state space. The number of states can reach into the tens of millions with very small state machines, so you will need a machine with large memory. For example, qrcu.spin with 3 readers and 2 updaters required 2.7GB of memory. If you aren’t sure whether your machine has enough memory, run  top  in one window and  ./pan  in another. Keep the focus on the  ./pan  window so that you can quickly kill execution if need be. As soon as CPU time drops much below 100%, kill  ./pan . If you have removed focus from the window running  ./pan , you may wait a long time for the windowing system to grab enough memory to do anything for you. 294 Don’t forget to capture the output, especially if you are working on a remote machine, If your model includes forward-progress checks, you will likely need to enable “weak fairness” via the  -f  command-line argument to  ./pan . If your forward- progress checks involve  accept  labels, you will also need the  -a  argument. •  spin -t -p qrcu.spin Given trail file output by a run that encountered an error, output the sequence of steps leading to that error. 
The  -g  flag will also include the values of changed global variables, and the  -l  flag will also include the values of changed local variables. 11.4.1 Promela Peculiarities Although all computer languages have underlying similarities, Promela will provide some surprises to people used to coding in C, C++, or Java. 1.  In C, “ ; ” terminates statements. In Promela it separates them. Fortunately, more recent versions of Spin have become much more forgiving of “extra” semicolons. 2.  Promela’s looping construct, the  do  statement, takes conditions. This  do  state- ment closely resembles a looping if-then-else statement. 3.  In C’s  switch  statement, if there is no matching case, the whole statement is skipped. In Promela’s equivalent, confusingly called  if , if there is no matching guard expression, you get an error without a recognizable corresponding error message. So, if the error output indicates an innocent line of code, check to see if  you left out a condition from an  if  or  do  statement. 4.  When creating stress tests in C, one usually races suspect operations against each other repeatedly. In Promela, one instead sets up a single race, because Promela will search out all the possible outcomes from that single race. Sometimes you do need to loop in Promela, for example, if multiple operations overlap, but doing so greatly increases the size of your state space. 5.  In C, the easiest thing to do is to maintain a loop counter to track progress and terminate the loop. In Promela, loop counters must be avoided like the plague because they cause the state space to explode. On the other hand, there is no penalty for infinite loops in Promela as long as the none of the variables monotonically increase or decrease – Promela will figure out how many passes through the loop really matter, and automatically prune execution beyond that point. 6.  In C torture-test code, it is often wise to keep per-task control variables. They are cheap to read, and greatly aid in debugging the test code. In Promela, per-task control variables should be used only when there is no other alternative. To see this, consider a 5-task verification with one bit each to indicate completion. This gives 32 states. In contrast, a simple counter would have only six states, more than a five-fold reduction. That factor of five might not seem like a problem, at least not until you are struggling with a verification program possessing more than 150 million states consuming more than 10GB of memory! 295 1 i = 0; 2 sum = 0; 3 do 4 :: i < N_QRCU_READERS -> 5 sum = sum + (readerstart[i] == 1 && 6 readerprogress[i] == 1); 7 i++ 8 :: i >= N_QRCU_READERS -> 9 assert(sum == 0); 10 break 11 od Figure 11.6: Complex Promela Assertion 7. One of the most challenging things both in C torture-test code and in Promela is formulating good assertions. Promela also allows  never  claims that act sort of  like an assertion replicated between every line of code. 8.  Dividing and conquering is extremely helpful in Promela in keeping the state space under control. Splitting a large model into two roughly equal halves will result in the state space of each half being roughly the square root of the whole. For example, a million-state combined model might reduce to a pair of thousand- state models. Not only will Promela handle the two smaller models much more quickly with much less memory, but the two smaller algorithms are easier for people to understand. 
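To illustrate item 3 in the list above, the following hypothetical fragment (the variables x and y are made up for this example) sketches the usual way to keep a Promela if statement well-behaved when none of its guards match; it is the same :: else -> skip idiom used by the spin_lock() macro later in this chapter:

if
:: x == 1 -> y = 1
:: x == 2 -> y = 2
:: else -> skip  /* without this guard, the if has no executable option when x is, say, 3 */
fi

With the else guard present, the if simply falls through when neither condition holds; without it, the process blocks, which typically shows up as the hard-to-recognize error mentioned above.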
11.4.2 Promela Coding Tricks Promela was designed to analyze protocols, so using it on parallel programs is a bit abusive. The following tricks can help you to abuse Promela safely: 1.  Memory reordering. Suppose you have a pair of statements copying globals x and y to locals r1 and r2, where ordering matters (e.g., unprotected by locks), but where you have no memory barriers. This can be modeled in Promela as follows: 1 if 2 :: 1 -> r1 = x; 3 r2 = y 4 :: 1 -> r2 = y; 5 r1 = x 6 fi The two branches of the  if  statement will be selected nondeterministically, since they both are available. Because the full state space is searched,  both  choices will eventually be made in all cases. Of course, this trick will cause your state space to explode if used too heavily. In addition, it requires you to anticipate possible reorderings. 2.  State reduction. If you have complex assertions, evaluate them under  atomic . After all, they are not part of the algorithm. One example of a complex assertion (to be discussed in more detail later) is as shown in Figure  11.6. There is no reason to evaluate this assertion non-atomically, since it is not actually part of the algorithm. Because each statement contributes to state, we can reduce the number of useless states by enclosing it in an  atomic  block as shown in Figure  11.7 296 1 atomic { 2 i = 0; 3 sum = 0; 4 do 5 :: i < N_QRCU_READERS -> 6 sum = sum + (readerstart[i] == 1 && 7 readerprogress[i] == 1); 8 i++ 9 :: i >= N_QRCU_READERS -> 10 assert(sum == 0); 11 break 12 od 13 } Figure 11.7: Atomic Block for Complex Promela Assertion 1 #define spin_lock(mutex) 2 do 3 :: 1 -> atomic { 4 if 5 :: mutex == 0 -> 6 mutex = 1; 7 break 8 :: else -> skip 9 fi 10 } 11 od 12 13 #define spin_unlock(mutex) 14 mutex = 0 Figure 11.8: Promela Code for Spinlock 3.  Promela does not provide functions. You must instead use C preprocessor macros. However, you must use them carefully in order to avoid combinatorial explosion. Now we are ready for more complex examples. 11.5 Promela Example: Locking Since locks are generally useful,  spin_lock()  and  spin_unlock()  macros are provided in  lock.h , which may be included from multiple Promela models, as shown in Figure  11.8.  The  spin_lock()  macro contains an infinite do-od loop spanning lines 2-11, courtesy of the single guard expression of “1” on line 3. The body of this loop is a single atomic block that contains an if-fi statement. The if-fi construct is similar to the do-od construct, except that it takes a single pass rather than looping. If the lock is not held on line 5, then line 6 acquires it and line 7 breaks out of the enclosing do-od loop (and also exits the atomic block). On the other hand, if the lock is already held on line 8, we do nothing ( skip ), and fall out of the if-fi and the atomic block so as to take another pass through the outer loop, repeating until the lock is available. The  spin_unlock()  macro simply marks the lock as no longer held. Note that memory barriers are not needed because Promela assumes full ordering. In any given Promela state, all processes agree on both the current state and the order of state changes that caused us to arrive at the current state. This is analogous to the “sequentially consistent” memory model used by a few computer systems (such as MIPS and PA-RISC). As noted earlier, and as will be seen in a later example, weak memory ordering must be explicitly coded. 
297 1 #include "lock.h" 2 3 #define N_LOCKERS 3 4 5 bit mutex = 0; 6 bit havelock[N_LOCKERS]; 7 int sum; 8 9 proctype locker(byte me) 10 { 11 do 12 :: 1 -> 13 spin_lock(mutex); 14 havelock[me] = 1; 15 havelock[me] = 0; 16 spin_unlock(mutex) 17 od 18 } 19 20 init { 21 int i = 0; 22 int j; 23 24 end: do 25 :: i < N_LOCKERS -> 26 havelock[i] = 0; 27 run locker(i); 28 i++ 29 :: i >= N_LOCKERS -> 30 sum = 0; 31 j = 0; 32 atomic { 33 do 34 :: j < N_LOCKERS -> 35 sum = sum + havelock[j]; 36 j = j + 1 37 :: j >= N_LOCKERS -> 38 break 39 od 40 } 41 assert(sum <= 1); 42 break 43 od 44 } Figure 11.9: Promela Code to Test Spinlocks 298 These macros are tested by the Promela code shown in Figure  11.9.  This code is similar to that used to test the increments, with the number of locking processes defined by the  N_LOCKERS  macro definition on line 3. The mutex itself is defined on line 5, an array to track the lock owner on line 6, and line 7 is used by assertion code to verify that only one process holds the lock. The locker process is on lines 9-18, and simply loops forever acquiring the lock on line 13, claiming it on line 14, unclaiming it on line 15, and releasing it on line 16. The init block on lines 20-44 initializes the current locker’s havelock array entry on line 26, starts the current locker on line 27, and advances to the next locker on line 28. Once all locker processes are spawned, the do-od loop moves to line 29, which checks the assertion. Lines 30 and 31 initialize the control variables, lines 32-40 atomically sum the havelock array entries, line 41 is the assertion, and line 42 exits the loop. We can run this model by placing the above two code fragments into files named lock.h  and  lock.spin , respectively, and then running the following commands: spin -a lock.spin cc -DSAFETY -o pan pan.c ./pan (Spin Version 4.2.5 -- 2 April 2005) + Partial Order Reduction Full statespace search for: never claim - (none specified) assertion violations + cycle checks - (disabled by -DSAFETY) invalid end states + State-vector 40 byte, depth reached 357, errors: 0 564 states, stored 929 states, matched 1493 transitions (= stored+matched) 368 atomic steps hash conflicts: 0 (resolved) 2.622 memory usage (Mbyte) unreached in proctype locker line 18, state 20, "-end-" (1 of 20 states) unreached in proctype :init: (0 of 22 states) Figure 11.10: Output for Spinlock Test The output will look something like that shown in Figure  11.10.  As expected, this run has no assertion failures (“errors: 0”). Quick Quiz 11.1:  Why is there an unreached statement in locker? After all, isn’t this a  full  state-space search? Quick Quiz 11.2:  What are some Promela code-style issues with this example? 11.6 Promela Example: QRCU This final example demonstrates a real-world use of Promela on Oleg Nesterov’s QRCU [ Nes06a ,  Nes06b ], but modified to speed up the  synchronize_qrcu() fastpath. But first, what is QRCU? 299 QRCU is a variant of SRCU  [ McK06b ]  that trades somewhat higher read overhead (atomic increment and decrement on a global variable) for extremely low grace-period latencies. If there are no readers, the grace period will be detected in less than a microsecond, compared to the multi-millisecond grace-period latencies of most other RCU implementations. 1.  There is a qrcu_struct that defines a QRCU domain. Like SRCU (and unlike other variants of RCU) QRCU’s action is not global, but instead focused on the specified  qrcu_struct . 2.  
There are qrcu_read_lock() and qrcu_read_unlock() primitives that delimit QRCU read-side critical sections. The corresponding qrcu_struct must be passed into these primitives, and the return value from qrcu_read_lock() must be passed to qrcu_read_unlock(). For example:

idx = qrcu_read_lock(&my_qrcu_struct);
/* read-side critical section. */
qrcu_read_unlock(&my_qrcu_struct, idx);

3. There is a synchronize_qrcu() primitive that blocks until all pre-existing QRCU read-side critical sections complete, but, like SRCU's synchronize_srcu(), QRCU's synchronize_qrcu() need wait only for those read-side critical sections that are using the same qrcu_struct.

For example, synchronize_qrcu(&your_qrcu_struct) would not need to wait on the earlier QRCU read-side critical section. In contrast, synchronize_qrcu(&my_qrcu_struct) would need to wait, since it shares the same qrcu_struct.

A Linux-kernel patch for QRCU has been produced [McK07b], but has not yet been included in the Linux kernel as of April 2008.

1 #include "lock.h"
2
3 #define N_QRCU_READERS 2
4 #define N_QRCU_UPDATERS 2
5
6 bit idx = 0;
7 byte ctr[2];
8 byte readerprogress[N_QRCU_READERS];
9 bit mutex = 0;

Figure 11.11: QRCU Global Variables

Returning to the Promela code for QRCU, the global variables are as shown in Figure 11.11. This example uses locking, hence including lock.h. Both the number of readers and writers can be varied using the two #define statements, giving us not one but two ways to create combinatorial explosion. The idx variable controls which of the two elements of the ctr array will be used by readers, and the readerprogress variable allows the assertion to determine when all the readers are finished (since a QRCU update cannot be permitted to complete until all pre-existing readers have completed their QRCU read-side critical sections). The readerprogress array elements have values as follows, indicating the state of the corresponding reader:

1. 0: not yet started.
2. 1: within QRCU read-side critical section.
3. 2: finished with QRCU read-side critical section.

Finally, the mutex variable is used to serialize updaters' slowpaths.

1 proctype qrcu_reader(byte me)
2 {
3   int myidx;
4
5   do
6   :: 1 ->
7     myidx = idx;
8     atomic {
9       if
10       :: ctr[myidx] > 0 ->
11         ctr[myidx]++;
12         break
13       :: else -> skip
14       fi
15     }
16   od;
17   readerprogress[me] = 1;
18   readerprogress[me] = 2;
19   atomic { ctr[myidx]-- }
20 }

Figure 11.12: QRCU Reader Process

QRCU readers are modeled by the qrcu_reader() process shown in Figure 11.12. A do-od loop spans lines 5-16, with a single guard of "1" on line 6 that makes it an infinite loop. Line 7 captures the current value of the global index, and lines 8-15 atomically increment the corresponding counter (and break from the infinite loop) if that counter's value was non-zero (atomic_inc_not_zero()). Line 17 marks entry into the RCU read-side critical section, and line 18 marks exit from this critical section, both lines for the benefit of the assert() statement that we shall encounter later. Line 19 atomically decrements the same counter that we incremented, thereby exiting the RCU read-side critical section.

1 #define sum_unordered \
2   atomic { \
3     do \
4     :: 1 -> \
5       sum = ctr[0]; \
6       i = 1; \
7       break \
8     :: 1 -> \
9       sum = ctr[1]; \
10       i = 0; \
11       break \
12     od; \
13   } \
14   sum = sum + ctr[i]

Figure 11.13: QRCU Unordered Summation

The C-preprocessor macro shown in Figure 11.13 sums the pair of counters so as to emulate weak memory ordering.
Lines 2-13 fetch one of the counters, and line 14 fetches the other of the pair and sums them. The atomic block consists of a single do-od statement. This do-od statement (spanning lines 3-12) is unusual in that it contains two unconditional branches with guards on lines 4 and 8, which causes Promela to non-deterministically choose one of the two (but again, the full state-space search causes Promela to eventually make all possible choices in each applicable situation). The first branch fetches the zero-th counter and sets i to 1 (so that line 14 will fetch the first counter), while the second branch does the opposite, fetching the first counter and setting i to 0 (so that line 14 will fetch the second counter).

Quick Quiz 11.3: Is there a more straightforward way to code the do-od statement?

With the sum_unordered macro in place, we can now proceed to the update-side process shown in Figure 11.14. The update-side process repeats indefinitely, with the corresponding do-od loop ranging over lines 7-57. Each pass through the loop first snapshots the global readerprogress array into the local readerstart array on lines 12-21. This snapshot will be used for the assertion on line 53. Line 23 invokes sum_unordered, and then lines 24-27 re-invoke sum_unordered if the fastpath is potentially usable. Lines 28-40 execute the slowpath code if need be, with lines 30 and 38 acquiring and releasing the update-side lock, lines 31-33 flipping the index, and lines 34-37 waiting for all pre-existing readers to complete. Lines 44-56 then compare the current values in the readerprogress array to those collected in the readerstart array, forcing an assertion failure should any readers that started before this update still be in progress.

Quick Quiz 11.4: Why are there atomic blocks at lines 12-21 and lines 44-56, when the operations within those atomic blocks have no atomic implementation on any current production microprocessor?

Quick Quiz 11.5: Is the re-summing of the counters on lines 24-27 really necessary?

All that remains is the initialization block shown in Figure 11.15. This block simply initializes the counter pair on lines 5-6, spawns the reader processes on lines 7-14, and spawns the updater processes on lines 15-21. This is all done within an atomic block to reduce state space.

11.6.1 Running the QRCU Example

To run the QRCU example, combine the code fragments in the previous section into a single file named qrcu.spin, and place the definitions for spin_lock() and spin_unlock() into a file named lock.h. Then use the following commands to build and run the QRCU model:

spin -a qrcu.spin
cc -DSAFETY -o pan pan.c
./pan

The resulting output shows that this model passes all of the cases shown in Table 11.2. Now, it would be nice to run the case with three readers and three updaters; however, simple extrapolation indicates that this will require on the order of a terabyte of memory best case. So, what to do? Here are some possible approaches:

1. See whether a smaller number of readers and updaters suffice to prove the general case.

2. Manually construct a proof of correctness.

1 proctype qrcu_updater(byte me) 2 { 3 int i; 4 byte readerstart[N_QRCU_READERS]; 5 int sum; 6 7 do 8 :: 1 -> 9 10 / *  Snapshot reader state. 
* / 11 12 atomic { 13 i = 0; 14 do 15 :: i < N_QRCU_READERS -> 16 readerstart[i] = readerprogress[i]; 17 i++ 18 :: i >= N_QRCU_READERS -> 19 break 20 od 21 } 22 23 sum_unordered; 24 if 25 :: sum <= 1 -> sum_unordered 26 :: else -> skip 27 fi; 28 if 29 :: sum > 1 -> 30 spin_lock(mutex); 31 atomic { ctr[!idx]++ } 32 idx = !idx; 33 atomic { ctr[!idx]-- } 34 do 35 :: ctr[!idx] > 0 -> skip 36 :: ctr[!idx] == 0 -> break 37 od; 38 spin_unlock(mutex); 39 :: else -> skip 40 fi; 41 42 / *  Verify reader progress.  * / 43 44 atomic { 45 i = 0; 46 sum = 0; 47 do 48 :: i < N_QRCU_READERS -> 49 sum = sum + (readerstart[i] == 1 && 50 readerprogress[i] == 1); 51 i++ 52 :: i >= N_QRCU_READERS -> 53 assert(sum == 0); 54 break 55 od 56 } 57 od 58 } Figure 11.14: QRCU Updater Process 303 1 init { 2 int i; 3 4 atomic { 5 ctr[idx] = 1; 6 ctr[!idx] = 0; 7 i = 0; 8 do 9 :: i < N_QRCU_READERS -> 10 readerprogress[i] = 0; 11 run qrcu_reader(i); 12 i++ 13 :: i >= N_QRCU_READERS -> break 14 od; 15 i = 0; 16 do 17 :: i < N_QRCU_UPDATERS -> 18 run qrcu_updater(i); 19 i++ 20 :: i >= N_QRCU_UPDATERS -> break 21 od 22 } 23 } Figure 11.15: QRCU Initialization Process updaters readers # states MB 1 1 376 2.6 1 2 6,177 2.9 1 3 82,127 7.5 2 1 29,399 4.5 2 2 1,071,180 75.4 2 3 33,866,700 2,715.2 3 1 258,605 22.3 3 2 169,533,000 14,979.9 Table 11.2: Memory Usage of QRCU Model 304 3. Use a more capable tool. 4. Divide and conquer. The following sections discuss each of these approaches. 11.6.2 How Many Readers and Updaters Are Really Needed? One approach is to look carefully at the Promela code for  qrcu_updater()  and notice that the only global state change is happening under the lock. Therefore, only one updater at a time can possibly be modifying state visible to either readers or other updaters. This means that any sequences of state changes can be carried out serially by a single updater due to the fact that Promela does a full state-space search. Therefore, at most two updaters are required: one to change state and a second to become confused. The situation with the readers is less clear-cut, as each reader does only a single read-side critical section then terminates. It is possible to argue that the useful number of readers is limited, due to the fact that the fastpath must see at most a zero and a one in the counters. This is a fruitful avenue of investigation, in fact, it leads to the full proof  of correctness described in the next section. 11.6.3 Alternative Approach: Proof of Correctness An informal proof [ McK07b ] follows: 1.  For  synchronize_qrcu()  to exit too early, then by definition there must have been at least one reader present during  synchronize_qrcu() ’s full execution. 2.  The counter corresponding to this reader will have been at least 1 during this time interval. 3.  The  synchronize_qrcu()  code forces at least one of the counters to be at least 1 at all times. 4.  Therefore, at any given point in time, either one of the counters will be at least 2, or both of the counters will be at least one. 5.  However, the  synchronize_qrcu()  fastpath code can read only one of the counters at a given time. It is therefore possible for the fastpath code to fetch the first counter while zero, but to race with a counter flip so that the second counter is seen as one. 6.  There can be at most one reader persisting through such a race condition, as otherwise the sum would be two or greater, which would cause the updater to take the slowpath. 7.  
But if the race occurs on the fastpath’s first read of the counters, and then again on its second read, there have to have been two counter flips. 8.  Because a given updater flips the counter only once, and because the update-side lock prevents a pair of updaters from concurrently flipping the counters, the only way that the fastpath code can race with a flip twice is if the first updater completes. 305 9.  But the first updater will not complete until after all pre-existing readers have completed. 10.  Therefore, if the fastpath races with a counter flip twice in succession, all pre- existing readers must have completed, so that it is safe to take the fastpath. Of course, not all parallel algorithms have such simple proofs. In such cases, it may be necessary to enlist more capable tools. 11.6.4 Alternative Approach: More Capable Tools Although Promela and Spin are quite useful, much more capable tools are available, particularly for verifying hardware. This means that if it is possible to translate your algorithm to the hardware-design VHDL language, as it often will be for low-level parallel algorithms, then it is possible to apply these tools to your code (for example, this was done for the first realtime RCU algorithm). However, such tools can be quite expensive. Although the advent of commodity multiprocessing might eventually result in pow- erful free-software model-checkers featuring fancy state-space-reduction capabilities, this does not help much in the here and now. As an aside, there are Spin features that support approximate searches that require fixed amounts of memory, however, I have never been able to bring myself to trust approximations when verifying parallel algorithms. Another approach might be to divide and conquer. 11.6.5 Alternative Approach: Divide and Conquer It is often possible to break down a larger parallel algorithm into smaller pieces, which can then be proven separately. For example, a 10-billion-state model might be broken into a pair of 100,000-state models. Taking this approach not only makes it easier for tools such as Promela to verify your algorithms, it can also make your algorithms easier to understand. 11.7 Promela Parable: dynticks and Preemptible RCU In early 2008, a preemptible variant of RCU was accepted into mainline Linux in support of real-time workloads, a variant similar to the RCU implementations in the -rt patchset [ Mol05 ] since August 2005. Preemptible RCU is needed for real-time workloads because older RCU implementations disable preemption across RCU read- side critical sections, resulting in excessive real-time latencies. However, one disadvantage of the older -rt implementation (described in Ap- pendix  D.4 ) was that each grace period requires work to be done on each CPU, even if  that CPU is in a low-power “dynticks-idle” state, and thus incapable of executing RCU read-side critical sections. The idea behind the dynticks-idle state is that idle CPUs should be physically powered down in order to conserve energy. In short, preemptible RCU can disable a valuable energy-conservation feature of recent Linux kernels. Al- though Josh Triplett and Paul McKenney had discussed some approaches for allowing CPUs to remain in low-power state throughout an RCU grace period (thus preserving the Linux kernel’s ability to conserve energy), matters did not come to a head until 306 Steve Rostedt integrated a new dyntick implementation with preemptible RCU in the -rt patchset. 
This combination caused one of Steve's systems to hang on boot, so in October, Paul coded up a dynticks-friendly modification to preemptible RCU's grace-period processing. Steve coded up rcu_irq_enter() and rcu_irq_exit() interfaces called from the irq_enter() and irq_exit() interrupt entry/exit functions. These rcu_irq_enter() and rcu_irq_exit() functions are needed to allow RCU to reliably handle situations where a dynticks-idle CPU is momentarily powered up for an interrupt handler containing RCU read-side critical sections. With these changes in place, Steve's system booted reliably, but Paul continued inspecting the code periodically on the assumption that we could not possibly have gotten the code right on the first try. Paul reviewed the code repeatedly from October 2007 to February 2008, and almost always found at least one bug. In one case, Paul even coded and tested a fix before realizing that the bug was illusory, and in fact in all cases, the "bug" turned out to be illusory.

Near the end of February, Paul grew tired of this game. He therefore decided to enlist the aid of Promela and Spin [Hol03], as described in Section 11. The following presents a series of seven increasingly realistic Promela models, the last of which passes, consuming about 40GB of main memory for the state space.

More important, Promela and Spin did find a very subtle bug for me!

Quick Quiz 11.6: Yeah, that's just great! Now, just what am I supposed to do if I don't happen to have a machine with 40GB of main memory???

Still better would be to come up with a simpler and faster algorithm that has a smaller state space. Even better would be an algorithm so simple that its correctness was obvious to the casual observer!

Section 11.7.1 gives an overview of preemptible RCU's dynticks interface, Section 11.7.2 presents the Promela models used to validate that interface, and Section 11.7.3 lists lessons (re)learned during this effort.

11.7.1 Introduction to Preemptible RCU and dynticks

The per-CPU dynticks_progress_counter variable is central to the interface between dynticks and preemptible RCU. This variable has an even value whenever the corresponding CPU is in dynticks-idle mode, and an odd value otherwise. A CPU exits dynticks-idle mode for the following three reasons:

1. to start running a task,
2. when entering the outermost of a possibly nested set of interrupt handlers, and
3. when entering an NMI handler.

Preemptible RCU's grace-period machinery samples the value of the dynticks_progress_counter variable in order to determine when a dynticks-idle CPU may safely be ignored.

The following three sections give an overview of the task interface, the interrupt/NMI interface, and the use of the dynticks_progress_counter variable by the grace-period machinery.

11.7.1.1 Task Interface

When a given CPU enters dynticks-idle mode because it has no more tasks to run, it invokes rcu_enter_nohz():

1 static inline void rcu_enter_nohz(void)
2 {
3   mb();
4   __get_cpu_var(dynticks_progress_counter)++;
5   WARN_ON(__get_cpu_var(dynticks_progress_counter) &
6           0x1);
7 }

This function simply increments dynticks_progress_counter and checks that the result is even, but first executes a memory barrier to ensure that any other CPU that sees the new value of dynticks_progress_counter will also see the completion of any prior RCU read-side critical sections.
Similarly, when a CPU that is in dynticks-idle mode prepares to start executing a newly runnable task, it invokes rcu_exit_nohz():

1 static inline void rcu_exit_nohz(void)
2 {
3   __get_cpu_var(dynticks_progress_counter)++;
4   mb();
5   WARN_ON(!(__get_cpu_var(dynticks_progress_counter) &
6             0x1));
7 }

This function again increments dynticks_progress_counter, but follows it with a memory barrier to ensure that if any other CPU sees the result of any subsequent RCU read-side critical section, then that other CPU will also see the incremented value of dynticks_progress_counter. Finally, rcu_exit_nohz() checks that the result of the increment is an odd value.

The rcu_enter_nohz() and rcu_exit_nohz() functions handle the case where a CPU enters and exits dynticks-idle mode due to task execution, but do not handle interrupts, which are covered in the following section.

11.7.1.2 Interrupt Interface

The rcu_irq_enter() and rcu_irq_exit() functions handle interrupt/NMI entry and exit, respectively. Of course, nested interrupts must also be properly accounted for. The possibility of nested interrupts is handled by a second per-CPU variable, rcu_update_flag, which is incremented upon entry to an interrupt or NMI handler (in rcu_irq_enter()) and is decremented upon exit (in rcu_irq_exit()). In addition, the pre-existing in_interrupt() primitive is used to distinguish between an outermost and a nested interrupt/NMI.

Interrupt entry is handled by rcu_irq_enter(), shown below:

1 void rcu_irq_enter(void)
2 {
3   int cpu = smp_processor_id();
4
5   if (per_cpu(rcu_update_flag, cpu))
6     per_cpu(rcu_update_flag, cpu)++;
7   if (!in_interrupt() &&
8       (per_cpu(dynticks_progress_counter,
9                cpu) & 0x1) == 0) {
10     per_cpu(dynticks_progress_counter, cpu)++;
11     smp_mb();
12     per_cpu(rcu_update_flag, cpu)++;
13   }
14 }

Line 3 fetches the current CPU's number, while lines 5 and 6 increment the rcu_update_flag nesting counter if it is already non-zero. Lines 7-9 check to see whether we are the outermost level of interrupt, and, if so, whether dynticks_progress_counter needs to be incremented. If so, line 10 increments dynticks_progress_counter, line 11 executes a memory barrier, and line 12 increments rcu_update_flag. As with rcu_exit_nohz(), the memory barrier ensures that any other CPU that sees the effects of an RCU read-side critical section in the interrupt handler (following the rcu_irq_enter() invocation) will also see the increment of dynticks_progress_counter.

Quick Quiz 11.7: Why not simply increment rcu_update_flag, and then only increment dynticks_progress_counter if the old value of rcu_update_flag was zero???

Quick Quiz 11.8: But if line 7 finds that we are the outermost interrupt, wouldn't we always need to increment dynticks_progress_counter?

Interrupt exit is handled similarly by rcu_irq_exit():

1 void rcu_irq_exit(void)
2 {
3   int cpu = smp_processor_id();
4
5   if (per_cpu(rcu_update_flag, cpu)) {
6     if (--per_cpu(rcu_update_flag, cpu))
7       return;
8     WARN_ON(in_interrupt());
9     smp_mb();
10     per_cpu(dynticks_progress_counter, cpu)++;
11     WARN_ON(per_cpu(dynticks_progress_counter,
12             cpu) & 0x1);
13   }
14 }

Line 3 fetches the current CPU's number, as before. Line 5 checks to see if the rcu_update_flag is non-zero, returning immediately (via falling off the end of the function) if not. Otherwise, lines 6 through 12 come into play. Line 6 decrements rcu_update_flag, returning if the result is not zero.
Line 8 verifies that we are indeed leaving the outermost level of nested interrupts, line 9 executes a memory barrier, line 10 increments dynticks_progress_counter, and lines 11 and 12 verify that this variable is now even. As with rcu_enter_nohz(), the memory barrier ensures that any other CPU that sees the increment of dynticks_progress_counter will also see the effects of an RCU read-side critical section in the interrupt handler (preceding the rcu_irq_exit() invocation).

These two sections have described how the dynticks_progress_counter variable is maintained during entry to and exit from dynticks-idle mode, both by tasks and by interrupts and NMIs. The following section describes how this variable is used by preemptible RCU's grace-period machinery.

11.7.1.3 Grace-Period Interface

Of the four preemptible RCU grace-period states shown in Figure D.63 on page 558 in Appendix D.4, only the rcu_try_flip_waitack_state() and rcu_try_flip_waitmb_state() states need to wait for other CPUs to respond.

Of course, if a given CPU is in dynticks-idle state, we shouldn't wait for it. Therefore, just before entering one of these two states, the preceding state takes a snapshot of each CPU's dynticks_progress_counter variable, placing the snapshot in another per-CPU variable, rcu_dyntick_snapshot. This is accomplished by invoking dyntick_save_progress_counter(), shown below:

1 static void dyntick_save_progress_counter(int cpu)
2 {
3   per_cpu(rcu_dyntick_snapshot, cpu) =
4     per_cpu(dynticks_progress_counter, cpu);
5 }

The rcu_try_flip_waitack_state() state invokes rcu_try_flip_waitack_needed(), shown below:

1 static inline int
2 rcu_try_flip_waitack_needed(int cpu)
3 {
4   long curr;
5   long snap;
6
7   curr = per_cpu(dynticks_progress_counter, cpu);
8   snap = per_cpu(rcu_dyntick_snapshot, cpu);
9   smp_mb();
10   if ((curr == snap) && ((curr & 0x1) == 0))
11     return 0;
12   if ((curr - snap) > 2 || (snap & 0x1) == 0)
13     return 0;
14   return 1;
15 }

Lines 7 and 8 pick up current and snapshot versions of dynticks_progress_counter, respectively. The memory barrier on line 9 ensures that the counter checks in the later rcu_try_flip_waitzero_state follow the fetches of these counters. Lines 10 and 11 return zero (meaning no communication with the specified CPU is required) if that CPU has remained in dynticks-idle state since the time that the snapshot was taken. Similarly, lines 12 and 13 return zero if that CPU was initially in dynticks-idle state or if it has completely passed through a dynticks-idle state. In both these cases, there is no way that that CPU could have retained the old value of the grace-period counter. If neither of these conditions hold, line 14 returns one, meaning that the CPU needs to explicitly respond.

For its part, the rcu_try_flip_waitmb_state state invokes rcu_try_flip_waitmb_needed(), shown below:

1 static inline int
2 rcu_try_flip_waitmb_needed(int cpu)
3 {
4   long curr;
5   long snap;
6
7   curr = per_cpu(dynticks_progress_counter, cpu);
8   snap = per_cpu(rcu_dyntick_snapshot, cpu);
9   smp_mb();
10   if ((curr == snap) && ((curr & 0x1) == 0))
11     return 0;
12   if (curr != snap)
13     return 0;
14   return 1;
15 }

This is quite similar to rcu_try_flip_waitack_needed(), the difference being in lines 12 and 13, because any transition either to or from dynticks-idle state executes the memory barrier needed by the rcu_try_flip_waitmb_state() state. We now have seen all the code involved in the interface between RCU and the dynticks-idle state.
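To make these checks concrete, pick some hypothetical counter values for rcu_try_flip_waitack_needed(). If a given CPU's dynticks_progress_counter was snapshotted at 4 (even, hence dynticks-idle), lines 10 and 11 report that no acknowledgement is needed as long as the counter still reads 4, and lines 12 and 13 do likewise no matter how far the counter has since advanced, because the even snapshot shows that the CPU was idle when the snapshot was taken. If instead the snapshot was 5 (odd, hence not idle), then current values of 5, 6, or 7 cause line 14 to demand an explicit response, and only a current value of 8 or greater satisfies line 12's (curr - snap) > 2 test, proving that the CPU has passed completely through dynticks-idle mode since the snapshot.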
The next section builds up the Promela model used to verify this code. Quick Quiz 11.9:  Can you spot any bugs in any of the code in this section? 310 11.7.2 Validating Preemptible RCU and dynticks This section develops a Promela model for the interface between dynticks and RCU step by step, with each of the following sections illustrating one step, starting with the process-level code, adding assertions, interrupts, and finally NMIs. 11.7.2.1 Basic Model This section translates the process-level dynticks entry/exit code and the grace-period processing into Promela [ Hol03 ]. We start with  rcu_exit_nohz()  and  rcu_  enter_nohz()  from the 2.6.25-rc4 kernel, placing these in a single Promela process that models exiting and entering dynticks-idle mode in a loop as follows: 1 proctype dyntick_nohz() 2 { 3 byte tmp; 4 byte i = 0; 5 6 do 7 :: i >= MAX_DYNTICK_LOOP_NOHZ -> break; 8 :: i < MAX_DYNTICK_LOOP_NOHZ -> 9 tmp = dynticks_progress_counter; 10 atomic { 11 dynticks_progress_counter = tmp + 1; 12 assert((dynticks_progress_counter & 1) == 1); 13 } 14 tmp = dynticks_progress_counter; 15 atomic { 16 dynticks_progress_counter = tmp + 1; 17 assert((dynticks_progress_counter & 1) == 0); 18 } 19 i++; 20 od; 21 } Lines 6 and 20 define a loop. Line 7 exits the loop once the loop counter  i  has exceeded the limit  MAX_DYNTICK_LOOP_NOHZ . Line 8 tells the loop construct to execute lines 9-19 for each pass through the loop. Because the conditionals on lines 7 and 8 are exclusive of each other, the normal Promela random selection of  true conditions is disabled. Lines 9 and 11 model  rcu_exit_nohz() ’s non-atomic increment of   dynticks_progress_counter , while line 12 models the  WARN_  ON() . The  atomic  construct simply reduces the Promela state space, given that the  WARN_ON()  is not strictly speaking part of the algorithm. Lines 14-18 similarly models the increment and  WARN_ON()  for  rcu_enter_nohz() . Finally, line 19 increments the loop counter. Each pass through the loop therefore models a CPU exiting dynticks-idle mode (for example, starting to execute a task), then re-entering dynticks-idle mode (for example, that same task blocking). Quick Quiz 11.10:  Why isn’t the memory barrier in  rcu_exit_nohz()  and rcu_enter_nohz()  modeled in Promela? Quick Quiz 11.11:  Isn’t it a bit strange to model  rcu_exit_nohz()  followed by  rcu_enter_nohz() ? Wouldn’t it be more natural to instead model entry before exit? The next step is to model the interface to RCU’s grace-period processing. For this, we need to model  dyntick_save_progress_counter() ,  rcu_try_flip_  waitack_needed() , rcu_try_flip_waitmb_needed() , as well as portions of   rcu_try_flip_waitack()  and  rcu_try_flip_waitmb() , all from the 2.6.25-rc4 kernel. The following  grace_period()  Promela process models these 311 functions as they would be invoked during a single pass through preemptible RCU’s grace-period processing. 
 1 proctype grace_period()
 2 {
 3   byte curr;
 4   byte snap;
 5
 6   atomic {
 7     printf("MDLN = %d\n", MAX_DYNTICK_LOOP_NOHZ);
 8     snap = dynticks_progress_counter;
 9   }
10   do
11   :: 1 ->
12     atomic {
13       curr = dynticks_progress_counter;
14       if
15       :: (curr == snap) && ((curr & 1) == 0) ->
16         break;
17       :: (curr - snap) > 2 || (snap & 1) == 0 ->
18         break;
19       :: 1 -> skip;
20       fi;
21     }
22   od;
23   snap = dynticks_progress_counter;
24   do
25   :: 1 ->
26     atomic {
27       curr = dynticks_progress_counter;
28       if
29       :: (curr == snap) && ((curr & 1) == 0) ->
30         break;
31       :: (curr != snap) ->
32         break;
33       :: 1 -> skip;
34       fi;
35     }
36   od;
37 }

Lines 6-9 print out the loop limit (but only into the .trail file in case of error) and model a line of code from rcu_try_flip_idle() and its call to dyntick_save_progress_counter(), which takes a snapshot of the current CPU's dynticks_progress_counter variable. These two lines are executed atomically to reduce state space.

Lines 10-22 model the relevant code in rcu_try_flip_waitack() and its call to rcu_try_flip_waitack_needed(). This loop is modeling the grace-period state machine waiting for a counter-flip acknowledgement from each CPU, but only that part that interacts with dynticks-idle CPUs.

Line 23 models a line from rcu_try_flip_waitzero() and its call to dyntick_save_progress_counter(), again taking a snapshot of the CPU's dynticks_progress_counter variable.

Finally, lines 24-36 model the relevant code in rcu_try_flip_waitmb() and its call to rcu_try_flip_waitmb_needed(). This loop is modeling the grace-period state machine waiting for each CPU to execute a memory barrier, but again only that part that interacts with dynticks-idle CPUs.

Quick Quiz 11.12:  Wait a minute! In the Linux kernel, both dynticks_progress_counter and rcu_dyntick_snapshot are per-CPU variables. So why are they instead being modeled as single global variables?

The resulting model (dyntickRCU-base.spin), when run with the runspin.sh script, generates 691 states and passes without errors, which is not at all surprising given that it completely lacks the assertions that could find failures. The next section therefore adds safety assertions.

11.7.2.2 Validating Safety

A safe RCU implementation must never permit a grace period to complete before the completion of any RCU readers that started before the start of the grace period.
This is modeled by a  gp_state  variable that can take on three states as follows: 1 #define GP_IDLE 0 2 #define GP_WAITING 1 3 #define GP_DONE 2 4 byte gp_state = GP_DONE; The  grace_period()  process sets this variable as it progresses through the grace-period phases, as shown below: 1 proctype grace_period() 2 { 3 byte curr; 4 byte snap; 5 6 gp_state = GP_IDLE; 7 atomic { 8 printf("MDLN = %dn", MAX_DYNTICK_LOOP_NOHZ); 9 snap = dynticks_progress_counter; 10 gp_state = GP_WAITING; 11 } 12 do 13 :: 1 -> 14 atomic { 15 curr = dynticks_progress_counter; 16 if 17 :: (curr == snap) && ((curr & 1) == 0) -> 18 break; 19 :: (curr - snap) > 2 || (snap & 1) == 0 -> 20 break; 21 :: 1 -> skip; 22 fi; 23 } 24 od; 25 gp_state = GP_DONE; 26 gp_state = GP_IDLE; 27 atomic { 28 snap = dynticks_progress_counter; 29 gp_state = GP_WAITING; 30 } 31 do 32 :: 1 -> 33 atomic { 34 curr = dynticks_progress_counter; 35 if 36 :: (curr == snap) && ((curr & 1) == 0) -> 37 break; 38 :: (curr != snap) -> 39 break; 40 :: 1 -> skip; 41 fi; 42 } 43 od; 44 gp_state = GP_DONE; 45 } Lines 6, 10, 25, 26, 29, and 44 update this variable (combining atomically with algorithmic operations where feasible) to allow the  dyntick_nohz()  process to verify the basic RCU safety property. The form of this verification is to assert that the 313 value of the  gp_state  variable cannot jump from  GP_IDLE  to  GP_DONE  during a time period over which RCU readers could plausibly persist. Quick Quiz 11.13:  Given there are a pair of back-to-back changes to  gp_state on lines 25 and 26, how can we be sure that line 25’s changes won’t be lost? The  dyntick_nohz()  Promela process implements this verification as shown below: 1 proctype dyntick_nohz() 2 { 3 byte tmp; 4 byte i = 0; 5 bit old_gp_idle; 6 7 do 8 :: i >= MAX_DYNTICK_LOOP_NOHZ -> break; 9 :: i < MAX_DYNTICK_LOOP_NOHZ -> 10 tmp = dynticks_progress_counter; 11 atomic { 12 dynticks_progress_counter = tmp + 1; 13 old_gp_idle = (gp_state == GP_IDLE); 14 assert((dynticks_progress_counter & 1) == 1); 15 } 16 atomic { 17 tmp = dynticks_progress_counter; 18 assert(!old_gp_idle || 19 gp_state != GP_DONE); 20 } 21 atomic { 22 dynticks_progress_counter = tmp + 1; 23 assert((dynticks_progress_counter & 1) == 0); 24 } 25 i++; 26 od; 27 } Line 13 sets a new  old_gp_idle  flag if the value of the  gp_state  variable is GP_IDLE  at the beginning of task execution, and the assertion at lines 18 and 19 fire if the  gp_state  variable has advanced to  GP_DONE  during task execution, which would be illegal given that a single RCU read-side critical section could span the entire intervening time period. Theresultingmodel( dyntickRCU-base-s.spin ), whenrunwiththe runspi n. sh  script, generates 964 states and passes without errors, which is reassuring. That said, although safety is critically important, it is also quite important to avoid indefinitely stalling grace periods. The next section therefore covers verifying liveness. 11.7.2.3 Validating Liveness Although liveness can be difficult to prove, there is a simple trick that applies here. 
The first step is to make  dyntick_nohz()  indicate that it is done via a  dyntick_  nohz_done  variable, as shown on line 27 of the following: 1 proctype dyntick_nohz() 2 { 3 byte tmp; 4 byte i = 0; 5 bit old_gp_idle; 6 7 do 8 :: i >= MAX_DYNTICK_LOOP_NOHZ -> break; 9 :: i < MAX_DYNTICK_LOOP_NOHZ -> 10 tmp = dynticks_progress_counter; 11 atomic { 12 dynticks_progress_counter = tmp + 1; 13 old_gp_idle = (gp_state == GP_IDLE); 314 14 assert((dynticks_progress_counter & 1) == 1); 15 } 16 atomic { 17 tmp = dynticks_progress_counter; 18 assert(!old_gp_idle || 19 gp_state != GP_DONE); 20 } 21 atomic { 22 dynticks_progress_counter = tmp + 1; 23 assert((dynticks_progress_counter & 1) == 0); 24 } 25 i++; 26 od; 27 dyntick_nohz_done = 1; 28 } With this variable in place, we can add assertions to  grace_period()  to check for unnecessary blockage as follows: 1 proctype grace_period() 2 { 3 byte curr; 4 byte snap; 5 bit shouldexit; 6 7 gp_state = GP_IDLE; 8 atomic { 9 printf("MDLN = %dn", MAX_DYNTICK_LOOP_NOHZ); 10 shouldexit = 0; 11 snap = dynticks_progress_counter; 12 gp_state = GP_WAITING; 13 } 14 do 15 :: 1 -> 16 atomic { 17 assert(!shouldexit); 18 shouldexit = dyntick_nohz_done; 19 curr = dynticks_progress_counter; 20 if 21 :: (curr == snap) && ((curr & 1) == 0) -> 22 break; 23 :: (curr - snap) > 2 || (snap & 1) == 0 -> 24 break; 25 :: else -> skip; 26 fi; 27 } 28 od; 29 gp_state = GP_DONE; 30 gp_state = GP_IDLE; 31 atomic { 32 shouldexit = 0; 33 snap = dynticks_progress_counter; 34 gp_state = GP_WAITING; 35 } 36 do 37 :: 1 -> 38 atomic { 39 assert(!shouldexit); 40 shouldexit = dyntick_nohz_done; 41 curr = dynticks_progress_counter; 42 if 43 :: (curr == snap) && ((curr & 1) == 0) -> 44 break; 45 :: (curr != snap) -> 46 break; 47 :: else -> skip; 48 fi; 49 } 50 od; 51 gp_state = GP_DONE; 52 } We have added the  shouldexit  variable on line 5, which we initialize to zero on 315 line 10. Line 17 asserts that  shouldexit  is not set, while line 18 sets  shouldexit to the  dyntick_nohz_done  variable maintained by  dyntick_nohz() . This assertion will therefore trigger if we attempt to take more than one pass through the wait-for-counter-flip-acknowledgement loop after  dyntick_nohz()  has completed execution. After all, if   dyntick_nohz()  is done, then there cannot be any more state changes to force us out of the loop, so going through twice in this state means an infinite loop, which in turn means no end to the grace period. Lines 32, 39, and 40 operate in a similar manner for the second (memory-barrier) loop. However, running this model ( dyntickRCU-base-sl-busted.spin )  re- sults in failure, as line 23 is checking that the wrong variable is even. Upon failure, spin  writes out a “trail” file  ( dyntickRCU-base-sl-busted.spin.trail ) file, which records the sequence of states that lead to the failure. Use the spin -t -p -g -l dyntickRCU-base-sl-busted.spin  command to cause  spin  to re- trace this sequence of state, printing the statements executed and the values of variables ( dyntickRCU-base-sl-busted.spin.trail.txt ) . Note that the line num- bers do not match the listing above due to the fact that spin takes both functions in a sin- glefile. However, thelinenumbers do matchthefullmodel( dyntickRCU-base-sl- busted. spin ). We see that the dyntick_nohz() process completed at step 34 (search for “34:”), but that the  grace_period()  process nonetheless failed to exit the loop. The value of   curr  is  6  (see step 35) and that the value of   snap  is  5  (see step 17). 
Therefore the first condition on line 21 above does not hold because  curr != snap , and the second condition on line 23 does not hold either because  snap  is odd and because curr  is only one greater than  snap . So one of these two conditions has to be incorrect. Referring to the comment block in  rcu_try_flip_waitack_needed()  for the first condition: If the CPU remained in dynticks mode for the entire time and didn’t take any interrupts, NMIs, SMIs, or whatever, then it cannot be in the middle of an rcu_read_lock() , so the next rcu_read_lock() it executes must use the new value of the counter. So we can safely pretend that this CPU already acknowledged the counter. The first condition does match this, because if   curr == snap  and if   curr  is even, then the corresponding CPU has been in dynticks-idle mode the entire time, as required. So let’s look at the comment block for the second condition: If the CPU passed through or entered a dynticks idle phase with no active irq handlers, then, as above, we can safely pretend that this CPU already acknowledged the counter. The first part of the condition is correct, because if   curr  and  snap  differ by two, there will be at least one even number in between, corresponding to having passed completely through a dynticks-idle phase. However, the second part of the condition corresponds to having  started   in dynticks-idle mode, not having  finished   in this mode. We therefore need to be testing  curr  rather than  snap  for being an even number. The corrected C code is as follows: 1 static inline int 2 rcu_try_flip_waitack_needed(int cpu) 3 { 316 4 long curr; 5 long snap; 6 7 curr = per_cpu(dynticks_progress_counter, cpu); 8 snap = per_cpu(rcu_dyntick_snapshot, cpu); 9 smp_mb(); 10 if ((curr == snap) && ((curr & 0x1) == 0)) 11 return 0; 12 if ((curr - snap) > 2 || (curr & 0x1) == 0) 13 return 0; 14 return 1; 15 } Lines 10-13 can now be combined and simplified, resulting in the following. A similar simplification can be applied to  rcu_try_flip_waitmb_needed . 1 static inline int 2 rcu_try_flip_waitack_needed(int cpu) 3 { 4 long curr; 5 long snap; 6 7 curr = per_cpu(dynticks_progress_counter, cpu); 8 snap = per_cpu(rcu_dyntick_snapshot, cpu); 9 smp_mb(); 10 if ((curr - snap) >= 2 || (curr & 0x1) == 0) 11 return 0; 12 return 1; 13 } Making the corresponding correction in the model ( dyntickRCU-base-sl. spin ) results in a correct verification with 661 states that passes without errors. How- ever, it is worth noting that the first version of the liveness verification failed to catch this bug, due to a bug in the liveness verification itself. This liveness-verification bug was located by inserting an infinite loop in the grace_period() process, and noting that the liveness-verification code failed to detect this problem! We have now successfully verified both safety and liveness conditions, but only for processes running and blocking. We also need to handle interrupts, a task taken up in the next section. 11.7.2.4 Interrupts There are a couple of ways to model interrupts in Promela: 1.  using C-preprocessor tricks to insert the interrupt handler between each and every statement of the  dynticks_nohz()  process, or 2. modeling the interrupt handler with a separate process. A bit of thought indicated that the second approach would have a smaller state space, though it requires that the interrupt handler somehow run atomically with respect to the  dynticks_nohz()  process, but not with respect to the  grace_period() process. 
Fortunately, it turns out that Promela permits you to branch out of atomic statements. This trick allows us to have the interrupt handler set a flag, and recode  dynticks_  nohz()  to atomically check this flag and execute only when the flag is not set. This can be accomplished with a C-preprocessor macro that takes a label and a Promela statement as follows: 317 1 #define EXECUTE_MAINLINE(label, stmt) 2 label: skip; 3 atomic { 4 if 5 :: in_dyntick_irq -> goto label; 6 :: else -> stmt; 7 fi; 8 } One might use this macro as follows: EXECUTE_MAINLINE(stmt1, tmp = dynticks_progress_counter) Line 2 of the macro creates the specified statement label. Lines 3-8 are an atomic block that tests the  in_dyntick_irq  variable, and if this variable is set (indicating that the interrupt handler is active), branches out of the atomic block back to the label. Otherwise, line 6 executes the specified statement. The overall effect is that mainline execution stalls any time an interrupt is active, as required. 11.7.2.5 Validating Interrupt Handlers The first step is to convert  dyntick_nohz()  to  EXECUTE_MAINLINE()  form, as follows: 1 proctype dyntick_nohz() 2 { 3 byte tmp; 4 byte i = 0; 5 bit old_gp_idle; 6 7 do 8 :: i >= MAX_DYNTICK_LOOP_NOHZ -> break; 9 :: i < MAX_DYNTICK_LOOP_NOHZ -> 10 EXECUTE_MAINLINE(stmt1, 11 tmp = dynticks_progress_counter) 12 EXECUTE_MAINLINE(stmt2, 13 dynticks_progress_counter = tmp + 1; 14 old_gp_idle = (gp_state == GP_IDLE); 15 assert((dynticks_progress_counter & 1) == 1)) 16 EXECUTE_MAINLINE(stmt3, 17 tmp = dynticks_progress_counter; 18 assert(!old_gp_idle || 19 gp_state != GP_DONE)) 20 EXECUTE_MAINLINE(stmt4, 21 dynticks_progress_counter = tmp + 1; 22 assert((dynticks_progress_counter & 1) == 0)) 23 i++; 24 od; 25 dyntick_nohz_done = 1; 26 } It is important to note that when a group of statements is passed to  EXECUTE_  MAINLINE() , as in lines 11-14, all statements in that group execute atomically. Quick Quiz 11.14:  But what would you do if you needed the statements in a single EXECUTE_MAINLINE()  group to execute non-atomically? Quick Quiz 11.15:  But what if the  dynticks_nohz()  process had “if” or “do” statements with conditions, where the statement bodies of these constructs needed to execute non-atomically? 
The next step is to write a dyntick_irq() process to model an interrupt handler: 1 proctype dyntick_irq() 2 { 3 byte tmp; 318 4 byte i = 0; 5 bit old_gp_idle; 6 7 do 8 :: i >= MAX_DYNTICK_LOOP_IRQ -> break; 9 :: i < MAX_DYNTICK_LOOP_IRQ -> 10 in_dyntick_irq = 1; 11 if 12 :: rcu_update_flag > 0 -> 13 tmp = rcu_update_flag; 14 rcu_update_flag = tmp + 1; 15 :: else -> skip; 16 fi; 17 if 18 :: !in_interrupt && 19 (dynticks_progress_counter & 1) == 0 -> 20 tmp = dynticks_progress_counter; 21 dynticks_progress_counter = tmp + 1; 22 tmp = rcu_update_flag; 23 rcu_update_flag = tmp + 1; 24 :: else -> skip; 25 fi; 26 tmp = in_interrupt; 27 in_interrupt = tmp + 1; 28 old_gp_idle = (gp_state == GP_IDLE); 29 assert(!old_gp_idle || gp_state != GP_DONE); 30 tmp = in_interrupt; 31 in_interrupt = tmp - 1; 32 if 33 :: rcu_update_flag != 0 -> 34 tmp = rcu_update_flag; 35 rcu_update_flag = tmp - 1; 36 if 37 :: rcu_update_flag == 0 -> 38 tmp = dynticks_progress_counter; 39 dynticks_progress_counter = tmp + 1; 40 :: else -> skip; 41 fi; 42 :: else -> skip; 43 fi; 44 atomic { 45 in_dyntick_irq = 0; 46 i++; 47 } 48 od; 49 dyntick_irq_done = 1; 50 } The loop from line 7-48 models up to MAX_DYNTICK_LOOP_IRQ interrupts, with lines 8 and 9 forming the loop condition and line 45 incrementing the control variable. Line 10 tells  dyntick_nohz()  that an interrupt handler is running, and line 45 tells  dyntick_nohz()  that this handler has completed. Line 49 is used for liveness verification, much as is the corresponding line of   dyntick_nohz() . Quick Quiz 11.16:  Why are lines 45 and 46 (the  in_dyntick_irq = 0;  and the  i++; ) executed atomically? Lines 11-25 model  rcu_irq_enter() , and lines 26 and 27 model the relevant snippet of   __irq_enter() . Lines 28 and 29 verifies safety in much the same manner as do the corresponding lines of   dynticks_nohz() . Lines 30 and 31 model the relevant snippet of   __irq_exit() , and finally lines 32-43 model  rcu_irq_  exit() . Quick Quiz 11.17:  What property of interrupts is this dynticks_irq() process unable to model? The  grace_period  process then becomes as follows: 1 proctype grace_period() 2 { 319 3 byte curr; 4 byte snap; 5 bit shouldexit; 6 7 gp_state = GP_IDLE; 8 atomic { 9 printf("MDLN = %dn", MAX_DYNTICK_LOOP_NOHZ); 10 printf("MDLI = %dn", MAX_DYNTICK_LOOP_IRQ); 11 shouldexit = 0; 12 snap = dynticks_progress_counter; 13 gp_state = GP_WAITING; 14 } 15 do 16 :: 1 -> 17 atomic { 18 assert(!shouldexit); 19 shouldexit = dyntick_nohz_done && dyntick_irq_done; 20 curr = dynticks_progress_counter; 21 if 22 :: (curr - snap) >= 2 || (curr & 1) == 0 -> 23 break; 24 :: else -> skip; 25 fi; 26 } 27 od; 28 gp_state = GP_DONE; 29 gp_state = GP_IDLE; 30 atomic { 31 shouldexit = 0; 32 snap = dynticks_progress_counter; 33 gp_state = GP_WAITING; 34 } 35 do 36 :: 1 -> 37 atomic { 38 assert(!shouldexit); 39 shouldexit = dyntick_nohz_done && dyntick_irq_done; 40 curr = dynticks_progress_counter; 41 if 42 :: (curr != snap) || ((curr & 1) == 0) -> 43 break; 44 :: else -> skip; 45 fi; 46 } 47 od; 48 gp_state = GP_DONE; 49 } The implementation of   grace_period()  is very similar to the earlier one. The only changes are the addition of line 10 to add the new interrupt-count parameter, changes to lines 19 and 39 to add the new  dyntick_irq_done  variable to the liveness checks, and of course the optimizations on lines 22 and 42. This model  ( dyntickRCU-irqnn-ssl.spin )  results in a correct verification with roughly half a million states, passing without errors. 
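For readers who prefer to see the C shape that the dyntick_irq() process encodes, the following sketch is reconstructed directly from the Promela model above (lines 11-27 and lines 30-43, respectively). It is not a verbatim copy of the 2.6.25-rc4 kernel source: the function names are invented for this illustration, per-CPU accessors and memory barriers are omitted, and the safety check on lines 28 and 29 of the model (which stands in for an RCU read-side critical section in the handler) has no counterpart here.

static void model_irq_enter(void)   /* Promela lines 11-27 */
{
  if (rcu_update_flag)              /* already counted, so just nest */
    rcu_update_flag++;
  if (!in_interrupt &&
      (dynticks_progress_counter & 0x1) == 0) {
    dynticks_progress_counter++;    /* counter becomes odd */
    rcu_update_flag++;
  }
  in_interrupt++;                   /* __irq_enter() snippet */
}

static void model_irq_exit(void)    /* Promela lines 30-43 */
{
  in_interrupt--;                   /* __irq_exit() snippet */
  if (rcu_update_flag) {
    rcu_update_flag--;
    if (rcu_update_flag == 0)
      dynticks_progress_counter++;  /* counter becomes even again */
  }
}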
However, this version of the model does not handle nested interrupts. This topic is taken up in the nest section. 11.7.2.6 Validating Nested Interrupt Handlers Nestedinterrupthandlersmaybemodeledbysplittingthebodyoftheloopin dyntick_  irq()  as follows: 1 proctype dyntick_irq() 2 { 3 byte tmp; 4 byte i = 0; 5 byte j = 0; 6 bit old_gp_idle; 7 bit outermost; 320 8 9 do 10 :: i >= MAX_DYNTICK_LOOP_IRQ && 11 j >= MAX_DYNTICK_LOOP_IRQ -> break; 12 :: i < MAX_DYNTICK_LOOP_IRQ -> 13 atomic { 14 outermost = (in_dyntick_irq == 0); 15 in_dyntick_irq = 1; 16 } 17 if 18 :: rcu_update_flag > 0 -> 19 tmp = rcu_update_flag; 20 rcu_update_flag = tmp + 1; 21 :: else -> skip; 22 fi; 23 if 24 :: !in_interrupt && 25 (dynticks_progress_counter & 1) == 0 -> 26 tmp = dynticks_progress_counter; 27 dynticks_progress_counter = tmp + 1; 28 tmp = rcu_update_flag; 29 rcu_update_flag = tmp + 1; 30 :: else -> skip; 31 fi; 32 tmp = in_interrupt; 33 in_interrupt = tmp + 1; 34 atomic { 35 if 36 :: outermost -> 37 old_gp_idle = (gp_state == GP_IDLE); 38 :: else -> skip; 39 fi; 40 } 41 i++; 42 :: j < i -> 43 atomic { 44 if 45 :: j + 1 == i -> 46 assert(!old_gp_idle || 47 gp_state != GP_DONE); 48 :: else -> skip; 49 fi; 50 } 51 tmp = in_interrupt; 52 in_interrupt = tmp - 1; 53 if 54 :: rcu_update_flag != 0 -> 55 tmp = rcu_update_flag; 56 rcu_update_flag = tmp - 1; 57 if 58 :: rcu_update_flag == 0 -> 59 tmp = dynticks_progress_counter; 60 dynticks_progress_counter = tmp + 1; 61 :: else -> skip; 62 fi; 63 :: else -> skip; 64 fi; 65 atomic { 66 j++; 67 in_dyntick_irq = (i != j); 68 } 69 od; 70 dyntick_irq_done = 1; 71 } This is similar to the earlier  dynticks_irq()  process. It adds a second counter variable j on line 5, so that i counts entries to interrupt handlers and j counts exits. The outermost variable on line 7 helps determine when the gp_state variable needs to be sampled for the safety checks. The loop-exit check on lines 10 and 11 is updated to require that the specified number of interrupt handlers are exited as well as entered, and the increment of   i  is moved to line 41, which is the end of the interrupt-entry model. 321 Lines 13-16 set the  outermost  variable to indicate whether this is the outermost of a set of nested interrupts and to set the  in_dyntick_irq  variable that is used by the dyntick_nohz()  process. Lines 34-40 capture the state of the  gp_state  variable, but only when in the outermost interrupt handler. Line 42 has the do-loop conditional for interrupt-exit modeling: as long as we have exited fewer interrupts than we have entered, it is legal to exit another interrupt. Lines 43-50 check the safety criterion, but only if we are exiting from the outermost interrupt level. Finally, lines 65-68 increment the interrupt-exit count  j  and, if this is the outermost interrupt level, clears  in_dyntick_irq . Thismodel( dyntickRCU-irq-ssl.spin ) resultsinacorrectverificationwith a bit more than half a million states, passing without errors. However, this version of  the model does not handle NMIs, which are taken up in the nest section. 11.7.2.7 Validating NMI Handlers We take the same general approach for NMIs as we do for interrupts, keeping in mind that NMIs do not nest. 
This results in a  dyntick_nmi()  process as follows: 1 proctype dyntick_nmi() 2 { 3 byte tmp; 4 byte i = 0; 5 bit old_gp_idle; 6 7 do 8 :: i >= MAX_DYNTICK_LOOP_NMI -> break; 9 :: i < MAX_DYNTICK_LOOP_NMI -> 10 in_dyntick_nmi = 1; 11 if 12 :: rcu_update_flag > 0 -> 13 tmp = rcu_update_flag; 14 rcu_update_flag = tmp + 1; 15 :: else -> skip; 16 fi; 17 if 18 :: !in_interrupt && 19 (dynticks_progress_counter & 1) == 0 -> 20 tmp = dynticks_progress_counter; 21 dynticks_progress_counter = tmp + 1; 22 tmp = rcu_update_flag; 23 rcu_update_flag = tmp + 1; 24 :: else -> skip; 25 fi; 26 tmp = in_interrupt; 27 in_interrupt = tmp + 1; 28 old_gp_idle = (gp_state == GP_IDLE); 29 assert(!old_gp_idle || gp_state != GP_DONE); 30 tmp = in_interrupt; 31 in_interrupt = tmp - 1; 32 if 33 :: rcu_update_flag != 0 -> 34 tmp = rcu_update_flag; 35 rcu_update_flag = tmp - 1; 36 if 37 :: rcu_update_flag == 0 -> 38 tmp = dynticks_progress_counter; 39 dynticks_progress_counter = tmp + 1; 40 :: else -> skip; 41 fi; 42 :: else -> skip; 43 fi; 44 atomic { 45 i++; 46 in_dyntick_nmi = 0; 47 } 48 od; 322 49 dyntick_nmi_done = 1; 50 } Of course, the fact that we have NMIs requires adjustments in the other components. For example, the  EXECUTE_MAINLINE()  macro now needs to pay attention to the NMI handler ( in_dyntick_nmi ) as well as the interrupt handler ( in_dyntick_  irq ) by checking the  dyntick_nmi_done  variable as follows: 1 #define EXECUTE_MAINLINE(label, stmt) 2 label: skip; 3 atomic { 4 if 5 :: in_dyntick_irq || 6 in_dyntick_nmi -> goto label; 7 :: else -> stmt; 8 fi; 9 } We will also need to introduce an  EXECUTE_IRQ()  macro that checks  in_  dyntick_nmi  in order to allow  dyntick_irq()  to exclude  dyntick_nmi() : 1 #define EXECUTE_IRQ(label, stmt) 2 label: skip; 3 atomic { 4 if 5 :: in_dyntick_nmi -> goto label; 6 :: else -> stmt; 7 fi; 8 } It is further necessary to convert  dyntick_irq()  to  EXECUTE_IRQ()  as fol- lows: 1 proctype dyntick_irq() 2 { 3 byte tmp; 4 byte i = 0; 5 byte j = 0; 6 bit old_gp_idle; 7 bit outermost; 8 9 do 10 :: i >= MAX_DYNTICK_LOOP_IRQ && 11 j >= MAX_DYNTICK_LOOP_IRQ -> break; 12 :: i < MAX_DYNTICK_LOOP_IRQ -> 13 atomic { 14 outermost = (in_dyntick_irq == 0); 15 in_dyntick_irq = 1; 16 } 17 stmt1: skip; 18 atomic { 19 if 20 :: in_dyntick_nmi -> goto stmt1; 21 :: !in_dyntick_nmi && rcu_update_flag -> 22 goto stmt1_then; 23 :: else -> goto stmt1_else; 24 fi; 25 } 26 stmt1_then: skip; 27 EXECUTE_IRQ(stmt1_1, tmp = rcu_update_flag) 28 EXECUTE_IRQ(stmt1_2, rcu_update_flag = tmp + 1) 29 stmt1_else: skip; 30 stmt2: skip; atomic { 31 if 32 :: in_dyntick_nmi -> goto stmt2; 33 :: !in_dyntick_nmi && 34 !in_interrupt && 35 (dynticks_progress_counter & 1) == 0 -> 36 goto stmt2_then; 323 37 :: else -> goto stmt2_else; 38 fi; 39 } 40 stmt2_then: skip; 41 EXECUTE_IRQ(stmt2_1, tmp = dynticks_progress_counter) 42 EXECUTE_IRQ(stmt2_2, 43 dynticks_progress_counter = tmp + 1) 44 EXECUTE_IRQ(stmt2_3, tmp = rcu_update_flag) 45 EXECUTE_IRQ(stmt2_4, rcu_update_flag = tmp + 1) 46 stmt2_else: skip; 47 EXECUTE_IRQ(stmt3, tmp = in_interrupt) 48 EXECUTE_IRQ(stmt4, in_interrupt = tmp + 1) 49 stmt5: skip; 50 atomic { 51 if 52 :: in_dyntick_nmi -> goto stmt4; 53 :: !in_dyntick_nmi && outermost -> 54 old_gp_idle = (gp_state == GP_IDLE); 55 :: else -> skip; 56 fi; 57 } 58 i++; 59 :: j < i -> 60 stmt6: skip; 61 atomic { 62 if 63 :: in_dyntick_nmi -> goto stmt6; 64 :: !in_dyntick_nmi && j + 1 == i -> 65 assert(!old_gp_idle || 66 gp_state != GP_DONE); 67 :: else -> skip; 68 fi; 69 } 70 EXECUTE_IRQ(stmt7, tmp = 
in_interrupt); 71 EXECUTE_IRQ(stmt8, in_interrupt = tmp - 1); 72 73 stmt9: skip; 74 atomic { 75 if 76 :: in_dyntick_nmi -> goto stmt9; 77 :: !in_dyntick_nmi && rcu_update_flag != 0 -> 78 goto stmt9_then; 79 :: else -> goto stmt9_else; 80 fi; 81 } 82 stmt9_then: skip; 83 EXECUTE_IRQ(stmt9_1, tmp = rcu_update_flag) 84 EXECUTE_IRQ(stmt9_2, rcu_update_flag = tmp - 1) 85 stmt9_3: skip; 86 atomic { 87 if 88 :: in_dyntick_nmi -> goto stmt9_3; 89 :: !in_dyntick_nmi && rcu_update_flag == 0 -> 90 goto stmt9_3_then; 91 :: else -> goto stmt9_3_else; 92 fi; 93 } 94 stmt9_3_then: skip; 95 EXECUTE_IRQ(stmt9_3_1, 96 tmp = dynticks_progress_counter) 97 EXECUTE_IRQ(stmt9_3_2, 98 dynticks_progress_counter = tmp + 1) 99 stmt9_3_else: 100 stmt9_else: skip; 101 atomic { 102 j++; 103 in_dyntick_irq = (i != j); 104 } 105 od; 106 dyntick_irq_done = 1; 107 } Note that we have open-coded the “if” statements (for example, lines 17-29). In 324 addition, statements that process strictly local state (such as line 58) need not exclude dyntick_nmi() . Finally,  grace_period()  requires only a few changes: 1 proctype grace_period() 2 { 3 byte curr; 4 byte snap; 5 bit shouldexit; 6 7 gp_state = GP_IDLE; 8 atomic { 9 printf("MDLN = %dn", MAX_DYNTICK_LOOP_NOHZ); 10 printf("MDLI = %dn", MAX_DYNTICK_LOOP_IRQ); 11 printf("MDLN = %dn", MAX_DYNTICK_LOOP_NMI); 12 shouldexit = 0; 13 snap = dynticks_progress_counter; 14 gp_state = GP_WAITING; 15 } 16 do 17 :: 1 -> 18 atomic { 19 assert(!shouldexit); 20 shouldexit = dyntick_nohz_done && 21 dyntick_irq_done && 22 dyntick_nmi_done; 23 curr = dynticks_progress_counter; 24 if 25 :: (curr - snap) >= 2 || (curr & 1) == 0 -> 26 break; 27 :: else -> skip; 28 fi; 29 } 30 od; 31 gp_state = GP_DONE; 32 gp_state = GP_IDLE; 33 atomic { 34 shouldexit = 0; 35 snap = dynticks_progress_counter; 36 gp_state = GP_WAITING; 37 } 38 do 39 :: 1 -> 40 atomic { 41 assert(!shouldexit); 42 shouldexit = dyntick_nohz_done && 43 dyntick_irq_done && 44 dyntick_nmi_done; 45 curr = dynticks_progress_counter; 46 if 47 :: (curr != snap) || ((curr & 1) == 0) -> 48 break; 49 :: else -> skip; 50 fi; 51 } 52 od; 53 gp_state = GP_DONE; 54 } Wehaveaddedthe printf() forthenew MAX_DYNTICK_LOOP_NMI parameter on line 11 and added  dyntick_nmi_done  to the  shouldexit  assignments on lines 22 and 44. The model  ( dyntickRCU-irq-nmi-ssl.spin ) results in a correct verifica- tion with several hundred million states, passing without errors. Quick Quiz 11.18:  Does Paul  always  write his code in this painfully incremental manner? 325 static inline void rcu_enter_nohz(void) { + mb();  __get_cpu_var(dynticks_progress_counter)++; - mb(); } static inline void rcu_exit_nohz(void) { - mb();  __get_cpu_var(dynticks_progress_counter)++; + mb(); } Figure 11.16: Memory-Barrier Fix Patch - if ((curr - snap) > 2 || (snap & 0x1) == 0) + if ((curr - snap) > 2 || (curr & 0x1) == 0) Figure 11.17: Variable-Name-Typo Fix Patch 11.7.3 Lessons (Re)Learned This effort provided some lessons (re)learned: 1.  Promela and spin can verify interrupt/NMI-handler interactions . 2.  Documenting code can help locate bugs . In this case, the documentation effort located a misplaced memory barrier in rcu_enter_nohz() and rcu_exit_  nohz() , as shown by the patch in Figure  11.16 . 3.  Validate your code early, often, and up to the point of destruction.  This effort located one subtle bug in  rcu_try_flip_waitack_needed()  that would have been quite difficult to test or debug, as shown by the patch in Figure  11.17 . 4.  Always verify your verification code.  
The usual way to do this is to insert a deliberate bug and verify that the verification code catches it. Of course, if the verification code fails to catch this bug, you may also need to verify the bug itself, and so on, recursing infinitely. However, if you find yourself in this position, getting a good night’s sleep can be an extremely effective debugging technique. 5.  Use of atomic instructions can simplify verification.  Unfortunately, use of the cmpxchg  atomic instruction would also slow down the critical irq fastpath, so they are not appropriate in this case. 6.  The need for complex formal verification often indicates a need to re-think your design.  In fact the design verified in this section turns out to have a much simpler solution, which is presented in the next section. 11.8 Simplicity Avoids Formal Verification The complexity of the dynticks interface for preemptible RCU is primarily due to the fact that both irqs and NMIs use the same code path and the same state variables. This leads to the notion of providing separate code paths and variables for irqs and NMIs, 326 1 struct rcu_dynticks { 2 int dynticks_nesting; 3 int dynticks; 4 int dynticks_nmi; 5 }; 6 7 struct rcu_data { 8 ... 9 int dynticks_snap; 10 int dynticks_nmi_snap; 11 ... 12 }; Figure 11.18: Variables for Simple Dynticks Interface as has been done for hierarchical RCU [ McK08a ]  as indirectly suggested by Manfred Spraul  [Spr08b ]. 11.8.1 State Variables for Simplified Dynticks Interface Figure  11.18  shows the new per-CPU state variables. These variables are grouped into structs to allow multiple independent RCU implementations (e.g.,  rcu  and  rcu_bh ) to conveniently and efficiently share dynticks state. In what follows, they can be thought of as independent per-CPU variables. The dynticks_nesting , dynticks , and dynticks_snap variables are for the irq code paths, and the  dynticks_nmi  and  dynticks_nmi_snap  variables are for the NMI code paths, although the NMI code path will also reference (but not modify) the  dynticks_nesting  variable. These variables are used as follows: •  dynticks_nesting : This counts the number of reasons that the correspond- ing CPU should be monitored for RCU read-side critical sections. If the CPU is in dynticks-idle mode, then this counts the irq nesting level, otherwise it is one greater than the irq nesting level. •  dynticks : This counter’s value is even if the corresponding CPU is in dynticks- idle mode and there are no irq handlers currently running on that CPU, otherwise the counter’s value is odd. In other words, if this counter’s value is odd, then the corresponding CPU might be in an RCU read-side critical section. •  dynticks_nmi : This counter’s value is odd if the corresponding CPU is in an NMI handler, but only if the NMI arrived while this CPU was in dyntick-idle mode with no irq handlers running. Otherwise, the counter’s value will be even. •  dynticks_snap : This will be a snapshot of the  dynticks  counter, but only if the current RCU grace period has extended for too long a duration. •  dynticks_nmi_snap : Thiswillbeasnapshotofthe dynticks_nmi counter, but again only if the current RCU grace period has extended for too long a dura- tion. If both  dynticks  and  dynticks_nmi  have taken on an even value during a given time interval, then the corresponding CPU has passed through a quiescent state during that interval. 
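The grouping into structs described above can be made concrete with the following simplified sketch, which shows two RCU flavors sharing a single per-CPU rcu_dynticks structure. The ->dynticks pointer matches the rdp->dynticks usage in Figures 11.22 and 11.23, but the initialization helper and the assumption that rcu_data contains such a pointer field are illustrative; this is not the actual kernel initialization code.

DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks);
DEFINE_PER_CPU(struct rcu_data, rcu_data);
DEFINE_PER_CPU(struct rcu_data, rcu_bh_data);

static void rcu_dynticks_init_cpu(int cpu)  /* hypothetical helper */
{
  struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);

  /* Both flavors observe the same dynticks counters... */
  per_cpu(rcu_data, cpu).dynticks = rdtp;
  per_cpu(rcu_bh_data, cpu).dynticks = rdtp;

  /* ...but each flavor keeps its own dynticks_snap and
     dynticks_nmi_snap fields in its own rcu_data structure. */
}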
Quick Quiz 11.19:  But what happens if an NMI handler starts running before an irq handler completes, and if that NMI handler continues running until a second irq handler starts?

11.8.2 Entering and Leaving Dynticks-Idle Mode

 1 void rcu_enter_nohz(void)
 2 {
 3   unsigned long flags;
 4   struct rcu_dynticks *rdtp;
 5
 6   smp_mb();
 7   local_irq_save(flags);
 8   rdtp = &__get_cpu_var(rcu_dynticks);
 9   rdtp->dynticks++;
10   rdtp->dynticks_nesting--;
11   WARN_ON(rdtp->dynticks & 0x1);
12   local_irq_restore(flags);
13 }
14
15 void rcu_exit_nohz(void)
16 {
17   unsigned long flags;
18   struct rcu_dynticks *rdtp;
19
20   local_irq_save(flags);
21   rdtp = &__get_cpu_var(rcu_dynticks);
22   rdtp->dynticks++;
23   rdtp->dynticks_nesting++;
24   WARN_ON(!(rdtp->dynticks & 0x1));
25   local_irq_restore(flags);
26   smp_mb();
27 }

Figure 11.19: Entering and Exiting Dynticks-Idle Mode

Figure 11.19 shows the rcu_enter_nohz() and rcu_exit_nohz() functions, which enter and exit dynticks-idle mode, also known as "nohz" mode. These two functions are invoked from process context.

Line 6 ensures that any prior memory accesses (which might include accesses from RCU read-side critical sections) are seen by other CPUs before those marking entry to dynticks-idle mode. Lines 7 and 12 disable and re-enable irqs. Line 8 acquires a pointer to the current CPU's rcu_dynticks structure, and line 9 increments the current CPU's dynticks counter, which should now be even, given that we are entering dynticks-idle mode in process context. Finally, line 10 decrements dynticks_nesting, which should now be zero.

The rcu_exit_nohz() function is quite similar, but increments dynticks_nesting rather than decrementing it and checks for the opposite dynticks polarity.

11.8.3 NMIs From Dynticks-Idle Mode

Figure 11.20 shows the rcu_nmi_enter() and rcu_nmi_exit() functions, which inform RCU of NMI entry and exit, respectively, from dynticks-idle mode. However, if the NMI arrives during an irq handler, then RCU will already be on the lookout for RCU read-side critical sections from this CPU, so lines 6 and 7 of rcu_nmi_enter() and lines 18 and 19 of rcu_nmi_exit() silently return if dynticks is odd. Otherwise, the two functions increment dynticks_nmi, with rcu_nmi_enter() leaving it with an odd value and rcu_nmi_exit() leaving it with an even value. Both functions execute memory barriers between this increment and possible RCU read-side critical sections, on lines 10 and 20, respectively.

 1 void rcu_nmi_enter(void)
 2 {
 3   struct rcu_dynticks *rdtp;
 4
 5   rdtp = &__get_cpu_var(rcu_dynticks);
 6   if (rdtp->dynticks & 0x1)
 7     return;
 8   rdtp->dynticks_nmi++;
 9   WARN_ON(!(rdtp->dynticks_nmi & 0x1));
10   smp_mb();
11 }
12
13 void rcu_nmi_exit(void)
14 {
15   struct rcu_dynticks *rdtp;
16
17   rdtp = &__get_cpu_var(rcu_dynticks);
18   if (rdtp->dynticks & 0x1)
19     return;
20   smp_mb();
21   rdtp->dynticks_nmi++;
22   WARN_ON(rdtp->dynticks_nmi & 0x1);
23 }

Figure 11.20: NMIs From Dynticks-Idle Mode

11.8.4 Interrupts From Dynticks-Idle Mode

Figure 11.21 shows rcu_irq_enter() and rcu_irq_exit(), which inform RCU of entry to and exit from, respectively, irq context. Line 6 of rcu_irq_enter() increments dynticks_nesting, and if this variable was already non-zero, line 7 silently returns. Otherwise, line 8 increments dynticks, which will then have an odd value, consistent with the fact that this CPU can now execute RCU read-side critical sections.
Line 10 therefore executes a memory barrier to ensure that the increment of dynticks is seen before any RCU read-side critical sections that the subsequent irq handler might execute.

Line 18 of rcu_irq_exit() decrements dynticks_nesting, and if the result is non-zero, line 19 silently returns. Otherwise, line 20 executes a memory barrier to ensure that the increment of dynticks on line 21 is seen after any RCU read-side critical sections that the prior irq handler might have executed. Line 22 verifies that dynticks is now even, consistent with the fact that no RCU read-side critical sections may appear in dynticks-idle mode. Lines 23-25 check to see if the prior irq handlers enqueued any RCU callbacks, forcing this CPU out of dynticks-idle mode via a reschedule IPI if so.

 1 void rcu_irq_enter(void)
 2 {
 3   struct rcu_dynticks *rdtp;
 4
 5   rdtp = &__get_cpu_var(rcu_dynticks);
 6   if (rdtp->dynticks_nesting++)
 7     return;
 8   rdtp->dynticks++;
 9   WARN_ON(!(rdtp->dynticks & 0x1));
10   smp_mb();
11 }
12
13 void rcu_irq_exit(void)
14 {
15   struct rcu_dynticks *rdtp;
16
17   rdtp = &__get_cpu_var(rcu_dynticks);
18   if (--rdtp->dynticks_nesting)
19     return;
20   smp_mb();
21   rdtp->dynticks++;
22   WARN_ON(rdtp->dynticks & 0x1);
23   if (__get_cpu_var(rcu_data).nxtlist ||
24       __get_cpu_var(rcu_bh_data).nxtlist)
25     set_need_resched();
26 }

Figure 11.21: Interrupts From Dynticks-Idle Mode

11.8.5 Checking For Dynticks Quiescent States

 1 static int
 2 dyntick_save_progress_counter(struct rcu_data *rdp)
 3 {
 4   int ret;
 5   int snap;
 6   int snap_nmi;
 7
 8   snap = rdp->dynticks->dynticks;
 9   snap_nmi = rdp->dynticks->dynticks_nmi;
10   smp_mb();
11   rdp->dynticks_snap = snap;
12   rdp->dynticks_nmi_snap = snap_nmi;
13   ret = ((snap & 0x1) == 0) &&
14         ((snap_nmi & 0x1) == 0);
15   if (ret)
16     rdp->dynticks_fqs++;
17   return ret;
18 }

Figure 11.22: Saving Dyntick Progress Counters

Figure 11.22 shows dyntick_save_progress_counter(), which takes a snapshot of the specified CPU's dynticks and dynticks_nmi counters. Lines 8 and 9 snapshot these two variables to locals, line 10 executes a memory barrier to pair with the memory barriers in the functions in Figures 11.19, 11.20, and 11.21. Lines 11 and 12 record the snapshots for later calls to rcu_implicit_dynticks_qs(), and lines 13 and 14 check to see if the CPU is in dynticks-idle mode with neither irqs nor NMIs in progress (in other words, both snapshots have even values), hence in an extended quiescent state. If so, lines 15 and 16 count this event, and line 17 returns true if the CPU was in a quiescent state.

 1 static int
 2 rcu_implicit_dynticks_qs(struct rcu_data *rdp)
 3 {
 4   long curr;
 5   long curr_nmi;
 6   long snap;
 7   long snap_nmi;
 8
 9   curr = rdp->dynticks->dynticks;
10   snap = rdp->dynticks_snap;
11   curr_nmi = rdp->dynticks->dynticks_nmi;
12   snap_nmi = rdp->dynticks_nmi_snap;
13   smp_mb();
14   if ((curr != snap || (curr & 0x1) == 0) &&
15       (curr_nmi != snap_nmi ||
16        (curr_nmi & 0x1) == 0)) {
17     rdp->dynticks_fqs++;
18     return 1;
19   }
20   return rcu_implicit_offline_qs(rdp);
21 }

Figure 11.23: Checking Dyntick Progress Counters

Figure 11.23 shows rcu_implicit_dynticks_qs(), which is called to check whether a CPU has entered dyntick-idle mode subsequent to a call to dyntick_save_progress_counter(). Lines 9 and 11 take new snapshots of the corresponding CPU's dynticks and dynticks_nmi variables, while lines 10 and 12 retrieve the snapshots saved earlier by dyntick_save_progress_counter().
Line 13 then executes a memory barrier to pair with the memory barriers in the func- tions in Figures  11.19,  11.20,  and  11.21.  Lines 14-16 then check to see if the CPU is either currently in a quiescent state ( curr  and  curr_nmi  having even values) or has passed through a quiescent state since the last call to dynticks_save_progress_  counter()  (the values of   dynticks  and  dynticks_nmi  having changed). If  these checks confirm that the CPU has passed through a dyntick-idle quiescent state, then line 17 counts that fact and line 18 returns an indication of this fact. Either way, line 20 checks for race conditions that can result in RCU waiting for a CPU that is offline. Quick Quiz 11.20:  This is still pretty complicated. Why not just have a cpumask_  t  that has a bit set for each CPU that is in dyntick-idle mode, clearing the bit when entering an irq or NMI handler, and setting it upon exit? 11.8.6 Discussion Aslight shift in viewpoint resulted in a substantial simplification of the dynticks interface for RCU. The key change leading to this simplification was minimizing of sharing between irq and NMI contexts. The only sharing in this simplified interface is references from NMI context to irq variables (the  dynticks  variable). This type of sharing is benign, because the NMI functions never update this variable, so that its value remains constant through the lifetime of the NMI handler. This limitation of sharing allows the individual functions to be understood one at a time, in happy contrast to the situation described in Section  11.7,  where an NMI might change shared state at any point during 331 1 PPC SB+lwsync-RMW-lwsync+isync-simple 2 "" 3 { 4 0:r2=x; 0:r3=2; 0:r4=y; 0:r10=0; 0:r11=0; 0:r12=z; 5 1:r2=y; 1:r4=x; 6 } 7 P0 | P1 ; 8 li r1,1 | li r1,1 ; 9 stw r1,0(r2) | stw r1,0(r2) ; 10 lwsync | sync ; 11 | lwz r3,0(r4) ; 12 lwarx r11,r10,r12 | ; 13 stwcx. r11,r10,r12 | ; 14 bne Fail1 | ; 15 isync | ; 16 lwz r3,0(r4) | ; 17 Fail1: | ; 18 19 exists 20 (0:r3=0 / 1:r3=0) Figure 11.24: CPPMEM Litmus Test execution of the irq functions. Verification can be a good thing, but simplicity is even better. 11.9 Formal Verification and Memory Ordering Section  11.6  showed how to convince Promela to account for weak memory ordering. Although this approach can work well, it requires that the developer fully understand the system’s memory model. Unfortunately, few (if and) developers fully understand the complex memory models of modern CPUs. Therefore, another approach is to use a tool that already understands this memory ordering, such as the PPCMEM tool produced by Peter Sewell and Susmit Sarkar at the University of Cambridge, Luc Maranget, Francesco Zappa Nardelli, and Pankaj Pawan at INRIA, and Jade Alglave at Oxford University, in cooperation with Derek Williams of IBM [ AMP + 11 ]. This group formalized the memory models of Power, ARM, x86, as well as that of the C/C++11 standard  [ Bec11 ] , and produced the CPPMEM tool based on the Power and ARM formalizations. Quick Quiz 11.21:  But x86 has strong memory ordering! Why would you need to formalize its memory model? The PPCMEM tool takes  litmus tests  as input. A sample litmus test is presented in Section  11.9.1.  Section  11.9.2  relates this litmus test to the equivalent C-language program, and Section  11.9.3  describes how to apply CPPMEM to this litmus test. 11.9.1 Anatomy of a Litmus Test An example PowerPC litmus test for CPPMEM is shown in Figure  11.24.  
The ARM interface works exactly the same way, but with ARM instructions substituted for the Power instructions and with the initial “PPC” replaced by “ARM”. You can select the ARM interface by clicking on “Change to ARM Model” at the web page called out above. In the example, line 1 identifies the type of system (“ARM” or “PPC”) and contains the title for the model. Line 2 provides a place for an alternative name for the test, which 332 you will usually want to leave blank as shown in the above example. Comments can be inserted between lines 2 and 3 using the OCaml (or Pascal) syntax of   ( * * ) . Lines 3-6 give initial values for all registers; each is of the form  P:R=V , where  P is the process identifier,  R  is the register identifier, and  V  is the value. For example, process 0’s register r3 initially contains the value 2. If the value is a variable ( x ,  y , or  z in the example) then the register is initialized to the address of the variable. It is also possible to initialize the contents of variables, for example,  x=1  initializes the value of   x  to 1. Uninitialized variables default to the value zero, so that in the example,  x ,  y , and  z  are all initially zero. Line 7 provides identifiers for the two processes, so that the 0:r3=2 on line 4 could instead have been written  P0:r3=2 . Line 7 is required, and the identifiers must be of  the form Pn , where n is the column number, starting from zero for the left-most column. This may seem unnecessarily strict, but it does prevent considerable confusion in actual use. Quick Quiz 11.22:  Why does line 8 of Figure  11.24  initialize the registers? Why not instead initialize them on lines 4 and 5? Lines 8-17 are the lines of code for each process. A given process can have empty lines, as is the case for P0’s line 11 and P1’s lines 12-17. Labels and branches are permitted, as demonstrated by the branch on line 14 to the label on line 17. That said, too-free use of branches will expand the state space. Use of loops is a particularly good way to explode your state space. Lines 19-20 show the assertion, which in this case indicates that we are interested in whether P0’s and P1’s r3 registers can both contain zero after both threads complete execution. This assertion is important because there are a number of use cases that would fail miserably if both P0 and P1 saw zero in their respective r3 registers. This should give you enough information to construct simple litmus tests. Some additional documentation is available, though much of this additional documentation is intended for a different research tool that runs tests on actual hardware. Perhaps more importantly, a large number of pre-existing litmus tests are available with the online tool (available via the “Select ARM Test” and “Select POWER Test” buttons). It is quite likely that one of these pre-existing litmus tests will answer your Power or ARM memory-ordering question. 11.9.2 What Does This Litmus Test Mean? P0’s lines 8 and 9 are equivalent to the C statement  x=1  because line 4 defines P0’s register  r2  to be the address of   x . P0’s lines 12 and 13 are the mnemonics for load- linked (“load register exclusive” in ARM parlance and “load reserve” in Power parlance) and store-conditional (“store register exclusive” in ARM parlance), respectively. When these are used together, they form an atomic instruction sequence, roughly similar to the compare-and-swap sequences exemplified by the x86 colock;cmpxchg instruction. 
Moving to a higher level of abstraction, the sequence from lines 10-15 is equivalent to the Linux kernel’s  atomic_add_return(&z, 0) . Finally, line 16 is roughly equivalent to the C statement  r3=y . P1’s lines 8 and 9 are equivalent to the C statement y=1 , line 10 is a memory barrier, equivalent to the Linux kernel statement  smp_mb() , and line 11 is equivalent to the C statement  r3=x . Quick Quiz 11.23:  But whatever happened to line 17 of Figure  11.24,  the one that is the  Fail:  label? 333 1 void P0(void) 2 { 3 int r3; 4 5 x = 1; / *  Lines 8 and 9  * / 6 atomic_add_return(&z, 0); / *  Lines 10-15  * / 7 r3 = y; / *  Line 16  * / 8 } 9 10 void P1(void) 11 { 12 int r3; 13 14 y = 1; / *  Lines 8-9  * / 15 smp_mb(); / *  Line 10  * / 16 r3 = x; / *  Line 11  * / 17 } Figure 11.25: Meaning of CPPMEM Litmus Test ./ppcmem -model lwsync_read_block -model coherence_points filename.litmus ... States 6 0:r3=0; 1:r3=0; 0:r3=0; 1:r3=1; 0:r3=1; 1:r3=0; 0:r3=1; 1:r3=1; 0:r3=2; 1:r3=0; 0:r3=2; 1:r3=1; Ok Condition exists (0:r3=0 / 1:r3=0) Hash=e2240ce2072a2610c034ccd4fc964e77 Observation SB+lwsync-RMW-lwsync+isync Sometimes 1 Figure 11.26: CPPMEM Detects an Error Putting all this together, the C-language equivalent to the entire litmus test is as shown in Figure  11.25.  The key point is that if   atomic_add_return()  acts as a full memory barrier (as the Linux kernel requires it to), then it should be impossible for P0() ’s and  P1() ’s  r3  variables to both be zero after execution completes. The next section describes how to run this litmus test. 11.9.3 Running a Litmus Test Although litmus tests may be run interactively via  http://www.cl.cam.ac.uk/ ~pes20/ppcmem/ ,  which can help build an understanding of the memory model. However, this approach requires that the user manually carry out the full state-space search. Because it is very difficult to be sure that you have checked every possible sequence of events, a separate tool is provided for this purpose  [McK11c ]. Because the litmus test shown in Figure  11.24  contains read-modify-write instruc- tions, we must add  -model  arguments to the command line. If the litmus test is stored in  filename.litmus , this will result in the output shown in Figure  11.26,  where the  ...  stands for voluminous making-progress output. The list of states includes 0:r3=0; 1:r3=0; , indicating once again that the old PowerPC implementation of  atomic_add_return()  does not act as a full barrier. The “Sometimes” on the last line confirms this: the assertion triggers for some executions, but not all of the time. The fix to this Linux-kernel bug is to replace P0’s  isync  with  sync , which results 334 ./ppcmem -model lwsync_read_block -model coherence_points filename.litmus ... States 5 0:r3=0; 1:r3=1; 0:r3=1; 1:r3=0; 0:r3=1; 1:r3=1; 0:r3=2; 1:r3=0; 0:r3=2; 1:r3=1; No (allowed not found) Condition exists (0:r3=0 / 1:r3=0) Hash=77dd723cda9981248ea4459fcdf6097d Observation SB+lwsync-RMW-lwsync+sync Never 0 5 Figure 11.27: CPPMEM on Repaired Litmus Test in the output shown in Figure  11.27.  As you can see,  0:r3=0; 1:r3=0;  does not appear in the list of states, and the last line calls out “Never”. Therefore, the model predicts that the offending execution sequence cannot happen. Quick Quiz 11.24:  Does the ARM Linux kernel have a similar bug? 11.9.4 CPPMEM Discussion These tools promise to be of great help to people working on low-level parallel primitives that run on ARM and on Power. These tools do have some intrinsic limitations: 1. 
These tools are research prototypes, and as such are unsupported. 2.  These tools do not constitute official statements by IBM or ARM on their re- spective CPU architectures. For example, both corporations reserve the right to report a bug at any time against any version of any of these tools. These tools are therefore not a substitute for careful stress testing on real hardware. Moreover, both the tools and the model that they are based on are under active development and might change at any time. On the other hand, this model was developed in consultation with the relevant hardware experts, so there is good reason to be confident that it is a robust representation of the architectures. 3.  These tools currently handle a subset of the instruction set. This subset has been sufficient for my purposes, but your mileage may vary. In particular, the tool handles only word-sized accesses (32 bits), and the words accessed must be properly aligned. In addition, the tool does not handle some of the weaker variants of the ARM memory-barrier instructions. 4.  The tools are restricted to small loop-free code fragments running on small numbers of threads. Larger examples result in state-space explosion, just as with similar tools such as Promela and spin. 5.  The full state-space search does not give any indication of how each offending state was reached. That said, once you realize that the state is in fact reachable, it is usually not too hard to find that state using the interactive tool. 6.  The tools will detect only those problems for which you code an assertion. This weakness is common to all formal methods, and is yet another reason why testing remains important. In the immortal words of Donald Knuth, “Beware of bugs in the above code; I have only proved it correct, not tried it.” 335 That said, one strength of these tools is that they are designed to model the full range of behaviors allowed by the architectures, including behaviors that are legal, but which current hardware implementations do not yet inflict on unwary software developers. Therefore, an algorithm that is vetted by these tools likely has some additional safety margin when running on real hardware. Furthermore, testing on real hardware can only find bugs; such testing is inherently incapable of proving a given usage correct. To appreciate this, consider that the researchers routinely ran in excess of 100 billion test runs on real hardware to validate their model. In one case, behavior that is allowed by the architecture did not occur, despite 176 billion runs [ AMP + 11 ] . In contrast, the full-state-space search allows the tool to prove code fragments correct. It is worth repeating that formal methods and tools are no substitute for testing. The fact is that producing large reliable concurrent software artifacts, the Linux kernel for example, is quite difficult. Developers must therefore be prepared to apply every tool at their disposal towards this goal. The tools presented in this paper are able to locate bugs that are quite difficult to produce (let alone track down) via testing. On the other hand, testing can be applied to far larger bodies of software than the tools presented in this paper are ever likely to handle. As always, use the right tools for the job! 
Of course, it is always best to avoid the need to work at this level by designing your parallel code to be easily partitioned and then using higher-level primitives (such as locks, sequence counters, atomic operations, and RCU) to get your job done more straightforwardly. And even if you absolutely must use low-level memory barriers and read-modify-write instructions to get your job done, the more conservative your use of these sharp instruments, the easier your life is likely to be.

11.10 Summary

Promela and CPPMEM are very powerful tools for validating small parallel algorithms, but they should not be the only tools in your toolbox. The QRCU experience is a case in point: given the Promela validation, the proof of correctness, and several rcutorture runs, I now feel reasonably confident in the QRCU algorithm and its implementation. But I would certainly not feel so confident given only one of the three!

Nevertheless, if your code is so complex that you find yourself relying too heavily on validation tools, you should carefully rethink your design. For example, a complex implementation of the dynticks interface for preemptible RCU that was presented in Section 11.7 turned out to have a much simpler alternative implementation, as discussed in Section 11.8. All else being equal, a simpler implementation is much better than a mechanical proof for a complex implementation!

Chapter 12 Putting It All Together

This chapter gives a few hints on handling some concurrent-programming puzzles, starting with counter conundrums in Section 12.1, continuing with some RCU rescues in Section 12.2, and finishing off with some hashing hassles in Section 12.3.

12.1 Counter Conundrums

This section outlines possible solutions to some counter conundrums.

12.1.1 Counting Updates

Suppose that Schrödinger (see Section 9.1) wants to count the number of updates for each animal, and that these updates are synchronized using a per-data-element lock. How can this counting best be done?

Of course, any number of counting algorithms from Chapter 4 might be considered, but the optimal approach is much simpler in this case. Just place a counter in each data element, and increment it under the protection of that element's lock!

12.1.2 Counting Lookups

Suppose that Schrödinger also wants to count the number of lookups for each animal, where lookups are protected by RCU. How can this counting best be done?

One approach would be to protect a lookup counter with the per-element lock, as discussed in Section 12.1.1. Unfortunately, this would require all lookups to acquire this lock, which would be a severe bottleneck on large systems.

Another approach is to "just say no" to counting, following the example of the noatime mount option. If this approach is feasible, it is clearly the best: After all, nothing is faster than doing nothing. If the lookup count cannot be dispensed with, read on!

Any of the counters from Chapter 4 could be pressed into service, with the statistical counters described in Section 4.2 being perhaps the most common choice. However, this results in a large memory footprint: The number of counters required is the number of data elements multiplied by the number of threads.

If this memory overhead is excessive, then one approach is to keep per-socket counters rather than per-CPU counters, with an eye to the hash-table performance results depicted in Figure 9.8.
This will require that the counter increments be atomic operations, especially for user-mode execution where a given thread could migrate to another CPU at any time. If some elements are looked up very frequently, there are a number of approaches that batch updates by maintaining a per-thread log, where multiple log entries for a given element can be merged. After a given log entry has a sufficiently large increment or after sufficient time has passed, the log entries may be applied to the corresponding data elements. Silas Boyd-Wickizer has done some work formalizing this notion [ BW14] . 12.2 RCU Rescues This section shows how to apply RCU to some examples discussed earlier in this book. In some cases, RCU provides simpler code, in other cases better performance and scalability, and in still other cases, both. 12.2.1 RCU and Per-Thread-Variable-Based Statistical Counters Section  4.2.4  described an implementation of statistical counters that provided excellent performance, roughly that of simple increment (as in the C  ++  operator), and linear scalability — but only for incrementing via  inc_count() . Unfortunately, threads needing to read out the value via  read_count()  were required to acquire a global lock, and thus incurred high overhead and suffered poor scalability. The code for the lock-based implementation is shown in Figure  4.9  on Page  51. Quick Quiz 12.1:  Why on earth did we need that global lock in the first place? 12.2.1.1 Design The hope is to use RCU rather than  final_mutex  to protect the thread traversal in read_count()  in order to obtain excellent performance and scalability from  read_  count() , rather than just from  inc_count() . However, we do not want to give up any accuracy in the computed sum. In particular, when a given thread exits, we absolutely cannot lose the exiting thread’s count, nor can we double-count it. Such an error could result in inaccuracies equal to the full precision of the result, in other words, such an error would make the result completely useless. And in fact, one of the purposes of   final_mutex  is to ensure that threads do not come and go in the middle of   read_count()  execution. Quick Quiz 12.2:  Just what is the accuracy of   read_count() , anyway? Therefore, if we are to dispense with final_mutex , we will need to come up with some other method for ensuring consistency. One approach is to place the total count for all previously exited threads and the array of pointers to the per-thread counters into a single structure. Such a structure, once made available to  read_count() , is held constant, ensuring that  read_count()  sees consistent data. 12.2.1.2 Implementation Lines 1-4 of Figure  12.1  show the countarray structure, which contains a ->total field for the count from previously exited threads, and a counterp[] array of pointers to the per-thread  counter  for each currently running thread. 
This structure allows a given execution of read_count() to see a total that is consistent with the indicated set of running threads.

 1 struct countarray {
 2   unsigned long total;
 3   unsigned long *counterp[NR_THREADS];
 4 };
 5
 6 long __thread counter = 0;
 7 struct countarray *countarrayp = NULL;
 8 DEFINE_SPINLOCK(final_mutex);
 9
10 void inc_count(void)
11 {
12   counter++;
13 }
14
15 long read_count(void)
16 {
17   struct countarray *cap;
18   unsigned long sum;
19   int t;
20
21   rcu_read_lock();
22   cap = rcu_dereference(countarrayp);
23   sum = cap->total;
24   for_each_thread(t)
25     if (cap->counterp[t] != NULL)
26       sum += *cap->counterp[t];
27   rcu_read_unlock();
28   return sum;
29 }
30
31 void count_init(void)
32 {
33   countarrayp = malloc(sizeof(*countarrayp));
34   if (countarrayp == NULL) {
35     fprintf(stderr, "Out of memory");
36     exit(-1);
37   }
38   memset(countarrayp, '\0', sizeof(*countarrayp));
39 }
40
41 void count_register_thread(void)
42 {
43   int idx = smp_thread_id();
44
45   spin_lock(&final_mutex);
46   countarrayp->counterp[idx] = &counter;
47   spin_unlock(&final_mutex);
48 }
49
50 void count_unregister_thread(int nthreadsexpected)
51 {
52   struct countarray *cap;
53   struct countarray *capold;
54   int idx = smp_thread_id();
55
56   cap = malloc(sizeof(*countarrayp));
57   if (cap == NULL) {
58     fprintf(stderr, "Out of memory");
59     exit(-1);
60   }
61   spin_lock(&final_mutex);
62   *cap = *countarrayp;
63   cap->total += counter;
64   cap->counterp[idx] = NULL;
65   capold = countarrayp;
66   rcu_assign_pointer(countarrayp, cap);
67   spin_unlock(&final_mutex);
68   synchronize_rcu();
69   free(capold);
70 }

Figure 12.1: RCU and Per-Thread Statistical Counters

Lines 6-8 contain the definition of the per-thread counter variable, the global pointer countarrayp referencing the current countarray structure, and the final_mutex spinlock.

Lines 10-13 show inc_count(), which is unchanged from Figure 4.9.

Lines 15-29 show read_count(), which has changed significantly. Lines 21 and 27 substitute rcu_read_lock() and rcu_read_unlock() for acquisition and release of final_mutex. Line 22 uses rcu_dereference() to snapshot the current countarray structure into local variable cap. Proper use of RCU will guarantee that this countarray structure will remain with us through at least the end of the current RCU read-side critical section at line 27. Line 23 initializes sum to cap->total, which is the sum of the counts of threads that have previously exited. Lines 24-26 add up the per-thread counters corresponding to currently running threads, and, finally, line 28 returns the sum.

The initial value for countarrayp is provided by count_init() on lines 31-39. This function runs before the first thread is created, and its job is to allocate and zero the initial structure, and then assign it to countarrayp.

Lines 41-48 show the count_register_thread() function, which is invoked by each newly created thread. Line 43 picks up the current thread's index, line 45 acquires final_mutex, line 46 installs a pointer to this thread's counter, and line 47 releases final_mutex.

Quick Quiz 12.3: Hey!!! Line 46 of Figure 12.1 modifies a value in a pre-existing countarray structure! Didn't you say that this structure, once made available to read_count(), remained constant???

Lines 50-70 show count_unregister_thread(), which is invoked by each thread just before it exits. Lines 56-60 allocate a new countarray structure, line 61 acquires final_mutex and line 67 releases it.
Line 62 copies the contents of  the current  countarray  into the newly allocated version, line 63 adds the exiting thread’s  counter  to new structure’s total, and line 64  NULL s the exiting thread’s counterp[]  array element. Line 65 then retains a pointer to the current (soon to be old)  countarray  structure, and line 66 uses  rcu_assign_pointer()  to install the new version of the  countarray  structure. Line 68 waits for a grace period to elapse, so that any threads that might be concurrently executing in  read_count , and thus might have references to the old  countarray  structure, will be allowed to exit their RCU read-side critical sections, thus dropping any such references. Line 69 can then safely free the old  countarray  structure. 12.2.1.3 Discussion Quick Quiz 12.4:  Wow! Figure  12.1  contains 69 lines of code, compared to only 42 in Figure  4.9.  Is this extra complexity really worth it? Use of RCU enables exiting threads to wait until other threads are guaranteed to be done using the exiting threads’  __thread  variables. This allows the  read_  count()  function to dispense with locking, thereby providing excellent performance and scalability for both the  inc_count()  and  read_count()  functions. However, this performance and scalability come at the cost of some increase in code complexity. It is hoped that compiler and library writers employ user-level RCU [ Des09 ] to provide safe cross-thread access to  __thread  variables, greatly reducing the complexity seen by users of   __thread  variables. 340 1 struct foo { 2 int length; 3 char  * a; 4 }; Figure 12.2: RCU-Protected Variable-Length Array 12.2.2 RCU and Counters for Removable I/O Devices Section  4.5  showed a fanciful pair of code fragments for dealing with counting I/O accesses to removable devices. These code fragments suffered from high overhead on the fastpath (starting an I/O) due to the need to acquire a reader-writer lock. This section shows how RCU may be used to avoid this overhead. The code for performing an I/O is quite similar to the original, with an RCU read- side critical section be substituted for the reader-writer lock read-side critical section in the original: 1 rcu_read_lock(); 2 if (removing) { 3 rcu_read_unlock(); 4 cancel_io(); 5 } else { 6 add_count(1); 7 rcu_read_unlock(); 8 do_io(); 9 sub_count(1); 10 } The RCU read-side primitives have minimal overhead, thus speeding up the fastpath, as desired. The updated code fragment removing a device is as follows: 1 spin_lock(&mylock); 2 removing = 1; 3 sub_count(mybias); 4 spin_unlock(&mylock); 5 synchronize_rcu(); 6 while (read_count() != 0) { 7 poll(NULL, 0, 1); 8 } 9 remove_device(); Here we replace the reader-writer lock with an exclusive spinlock and add a synchronize_rcu()  to wait for all of the RCU read-side critical sections to com- plete. Because of the  synchronize_rcu() , once we reach line 6, we know that all remaining I/Os have been accounted for. Of course, the overhead of   synchronize_rcu()  can be large, but given that device removal is quite rare, this is usually a good tradeoff. 12.2.3 Array and Length Suppose we have an RCU-protected variable-length array, as shown in Figure  12.2 . The length of the array  ->a[]  can change dynamically, and at any given time, its length is 341 1 struct foo_a { 2 int length; 3 char a[0]; 4 }; 5 6 struct foo { 7 struct foo_a  * fa; 8 }; Figure 12.3: Improved RCU-Protected Variable-Length Array given by the field  ->length . Of course, this introduces the following race condition: 1. 
The array is initially 16 characters long, and thus ->length is equal to 16.
2. CPU 0 loads the value of ->length, obtaining the value 16.
3. CPU 1 shrinks the array to be of length 8, and assigns a pointer to a new 8-character block of memory into ->a[].
4. CPU 0 picks up the new pointer from ->a[], and stores a new value into element 12. Because the array has only 8 characters, this results in a SEGV or (worse yet) memory corruption.

How can we prevent this? One approach is to make careful use of memory barriers, which are covered in Section 13.2. This works, but incurs read-side overhead and, perhaps worse, requires use of explicit memory barriers.

A better approach is to put the value and the array into the same structure, as shown in Figure 12.3. Allocating a new array (foo_a structure) then automatically provides a new place for the array length. This means that if any CPU picks up a reference to ->fa, it is guaranteed that the ->length will match the ->a[] length [ACMS03].

1. The array is initially 16 characters long, and thus ->length is equal to 16.
2. CPU 0 loads the value of ->fa, obtaining a pointer to the structure containing the value 16 and the 16-byte array.
3. CPU 0 loads the value of ->fa->length, obtaining the value 16.
4. CPU 1 shrinks the array to be of length 8, and assigns a pointer to a new foo_a structure containing an 8-character block of memory into ->fa.
5. CPU 0 stores a new value into element 12, using the ->fa pointer that it loaded in step 2. Because CPU 0 is still referencing the old foo_a structure that contains the 16-byte array, all is well.

Of course, in both cases, CPU 1 must wait for a grace period before freeing the old array. A more general version of this approach is presented in the next section.

struct animal {
  char name[40];
  double age;
  double meas_1;
  double meas_2;
  double meas_3;
  char photo[0]; /* large bitmap. */
};

Figure 12.4: Uncorrelated Measurement Fields

struct measurement {
  double meas_1;
  double meas_2;
  double meas_3;
};

struct animal {
  char name[40];
  double age;
  struct measurement *mp;
  char photo[0]; /* large bitmap. */
};

Figure 12.5: Correlated Measurement Fields

12.2.4 Correlated Fields

Suppose that each of Schrödinger's animals is represented by the data element shown in Figure 12.4. The meas_1, meas_2, and meas_3 fields are a set of correlated measurements that are updated periodically. It is critically important that readers see these three values from a single measurement update: If a reader sees an old value of meas_1 but new values of meas_2 and meas_3, that reader will become fatally confused. How can we guarantee that readers will see coordinated sets of these three values?

One approach would be to allocate a new animal structure, copy the old structure into the new structure, update the new structure's meas_1, meas_2, and meas_3 fields, and then replace the old structure with a new one by updating the pointer. This does guarantee that all readers see coordinated sets of measurement values, but it requires copying a large structure due to the ->photo[] field. This copying might incur unacceptably large overhead.

Another approach is to insert a level of indirection, as shown in Figure 12.5.
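In code, such an update and a matching read-side snapshot might look something like the following sketch, which assumes the structures of Figure 12.5, the userspace RCU API used elsewhere in this book, and update-side synchronization (for example, a per-animal lock) provided by the caller; the function names are illustrative.

/* Caller provides update-side mutual exclusion. */
void animal_set_measurements(struct animal *ap, double m1, double m2, double m3)
{
  struct measurement *new_mp;
  struct measurement *old_mp;

  new_mp = malloc(sizeof(*new_mp));
  if (new_mp == NULL)
    abort();                            /* sketch: no graceful error handling */
  new_mp->meas_1 = m1;
  new_mp->meas_2 = m2;
  new_mp->meas_3 = m3;
  old_mp = ap->mp;
  rcu_assign_pointer(ap->mp, new_mp);   /* publish all three values at once */
  synchronize_rcu();                    /* wait for pre-existing readers */
  free(old_mp);
}

void animal_get_measurements(struct animal *ap, struct measurement *snap)
{
  rcu_read_lock();
  *snap = *rcu_dereference(ap->mp);     /* consistent snapshot of all three fields */
  rcu_read_unlock();
}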
When a new measurement is taken, a new measurement structure is allocated, filled in with the measurements, and the animal structure's ->mp field is updated to point to this new measurement structure using rcu_assign_pointer(). After a grace period elapses, the old measurement structure can be freed.

Quick Quiz 12.5: But can't the approach shown in Figure 12.5 result in extra cache misses, in turn resulting in additional read-side overhead?

This approach enables readers to see correlated values for selected fields with minimal read-side overhead.

12.3 Hashing Hassles

This section looks at some issues that can arise when dealing with hash tables. Please note that these issues also apply to many other search structures.

12.3.1 Correlated Data Elements

This situation is analogous to that in Section 12.2.4: We have a hash table where we need correlated views of two or more of the elements. These elements are updated together, and we do not want to see an old version of the first element along with new versions of the other elements. For example, Schrödinger decided to add his extended family to his in-memory database along with all his animals. Although Schrödinger understands that marriages and divorces do not happen instantaneously, he is also a traditionalist. As such, he absolutely does not want his database ever to show that the bride is now married, but the groom is not, and vice versa. In other words, Schrödinger wants to be able to carry out a wedlock-consistent traversal of his database.

One approach is to use sequence locks (see Section 8.2), so that wedlock-related updates are carried out under the protection of write_seqlock(), while reads requiring wedlock consistency are carried out within a read_seqbegin() / read_seqretry() loop. Note that sequence locks are not a replacement for RCU protection: Sequence locks protect against concurrent modifications, but RCU is still needed to protect against concurrent deletions.

This approach works quite well when the number of correlated elements is small, the time to read these elements is short, and the update rate is low. Otherwise, updates might happen so quickly that readers might never complete. Although Schrödinger does not expect that even his least-sane relatives will marry and divorce quickly enough for this to be a problem, he does realize that this problem could well arise in other situations. One way to avoid this reader-starvation problem is to have the readers use the update-side primitives if there have been too many retries, but this can degrade both performance and scalability. In addition, if the update-side primitives are used too frequently, poor performance and scalability will result due to lock contention.

One way to avoid this is to maintain a per-element sequence lock, and to hold both spouses' locks when updating their marital status. Readers can do their retry looping on either of the spouses' locks to gain a stable view of any change in marital status involving both members of the pair. This avoids contention due to high marriage and divorce rates, but complicates gaining a stable view of all marital statuses during a single scan of the database.

If the element groupings are well-defined and persistent, which marital status is hoped to be, then one approach is to add pointers to the data elements to link together the members of a given group. Readers can then traverse these pointers to access all the data elements in the same group as the first one located, as sketched below.
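One possible shape for such group pointers, combined with the per-element sequence locks described above, is the following sketch; the structure layout, field names, and function name are illustrative assumptions rather than a definitive implementation.

struct family_member {
  /* hash-table linkage and key omitted */
  struct family_member *spouse;   /* NULL if not currently married */
  seqlock_t marital_seq;          /* guards this element's marital status */
  int married;
};

/* Reader: obtain a wedlock-consistent view of one couple. */
void couple_status(struct family_member *p, int *p_married, int *s_married)
{
  struct family_member *s;
  unsigned long seq;

  rcu_read_lock();                /* element lifetimes protected by RCU */
  do {
    seq = read_seqbegin(&p->marital_seq);
    *p_married = p->married;
    s = rcu_dereference(p->spouse);
    *s_married = (s != NULL) ? s->married : 0;
  } while (read_seqretry(&p->marital_seq, seq));
  rcu_read_unlock();
}

An updater would acquire write_seqlock() on both spouses' elements (always in the same order, so as to avoid deadlock) before changing either ->married field or either ->spouse pointer.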
Other approaches using version numbering are left as exercises for the interested reader.

12.3.2 Update-Friendly Hash-Table Traversal

Suppose that a statistical scan of all elements in a hash table is required. For example, Schrödinger might wish to compute the average length-to-weight ratio over all of his animals.1 Suppose further that Schrödinger is willing to ignore slight errors due to animals being added to and removed from the hash table while this statistical scan is being carried out. What should Schrödinger do to control concurrency?

One approach is to enclose the statistical scan in an RCU read-side critical section. This permits updates to proceed concurrently without unduly impeding the scan. In particular, the scan does not block the updates and vice versa, which allows scans of hash tables containing very large numbers of elements to be supported gracefully, even in the face of very high update rates.

Quick Quiz 12.6: But how does this scan work while a resizable hash table is being resized? In that case, neither the old nor the new hash table is guaranteed to contain all the elements in the hash table!

1 Why would such a quantity be useful? Beats me! But group statistics in general are often useful.

Chapter 13 Advanced Synchronization

13.1 Avoiding Locks

Although locking is the workhorse of parallelism in production, in many situations performance, scalability, and real-time response can all be greatly improved through use of lockless techniques. A particularly impressive example of such a lockless technique is the statistical counters described in Section 4.2, which avoid not only locks, but also atomic operations, memory barriers, and even cache misses for counter increments. Other examples we have covered include:

1. The fastpaths through a number of other counting algorithms in Chapter 4.
2. The fastpath through resource allocator caches in Section 5.4.3.
3. The maze solver in Section 5.5.
4. The data-ownership techniques described in Chapter 7.
5. The reference-counting and RCU techniques described in Chapter 8.
6. The lookup code paths described in Chapter 9.
7. Many of the techniques described in Chapter 12.

In short, lockless techniques are quite useful and are heavily used. However, it is best if lockless techniques are hidden behind a well-defined API, such as inc_count(), memblock_alloc(), rcu_read_lock(), and so on. The reason for this is that undisciplined use of lockless techniques is a good way to create difficult bugs. A key component of many lockless techniques is the memory barrier, which is described in the following section.

13.2 Memory Barriers

Author: David Howells and Paul McKenney.

Causality and sequencing are deeply intuitive, and hackers often tend to have a much stronger grasp of these concepts than does the general population. These intuitions can be extremely powerful tools when writing, analyzing, and debugging both sequential code and parallel code that makes use of standard mutual-exclusion mechanisms, such as locking and RCU. Unfortunately, these intuitions break down completely in the face of code that makes direct use of explicit memory barriers for data structures in shared memory (driver writers making use of MMIO registers can place greater trust in their intuition, but more on this later). The following sections show exactly where this intuition breaks down, and then put forward a mental model of memory barriers that can help you avoid these pitfalls.
Section 13.2.1 gives a brief overview of memory ordering and memory barriers. Once this background is in place, the next step is to get you to admit that your intuition has a problem. This painful task is taken up by Section 13.2.2, which shows an intuitively correct code fragment that fails miserably on real hardware, and by Section 13.2.3, which presents some code demonstrating that scalar variables can take on multiple values simultaneously. Once your intuition has made it through the grieving process, Section 13.2.4 provides the basic rules that memory barriers follow, rules that we will build upon. These rules are further refined in Sections 13.2.5 through 13.2.14.

13.2.1 Memory Ordering and Memory Barriers

But why are memory barriers needed in the first place? Can't CPUs keep track of ordering on their own? Isn't that why we have computers in the first place, to keep track of things?

Many people do indeed expect their computers to keep track of things, but many also insist that they keep track of things quickly. One difficulty that modern computer-system vendors face is that the main memory cannot keep up with the CPU: modern CPUs can execute hundreds of instructions in the time required to fetch a single variable from memory. CPUs therefore sport increasingly large caches, as shown in Figure 13.1. Variables that are heavily used by a given CPU will tend to remain in that CPU's cache, allowing high-speed access to the corresponding data.

Figure 13.1: Modern Computer System Cache Structure (each CPU has its own cache, with the caches connected to memory by an interconnect)

Unfortunately, when a CPU accesses data that is not yet in its cache, the result is an expensive "cache miss", requiring the data to be fetched from main memory. Doubly unfortunately, running typical code results in a significant number of cache misses. To limit the resulting performance degradation, CPUs have been designed to execute other instructions and memory references while waiting for a cache miss to fetch data from memory. This clearly causes instructions and memory references to execute out of order, which could cause serious confusion, as illustrated in Figure 13.2. Compilers and synchronization primitives (such as locking and RCU) are responsible for maintaining the illusion of ordering through use of "memory barriers" (for example, smp_mb() in the Linux kernel). These memory barriers can be explicit instructions, as they are on ARM, POWER, Itanium, and Alpha, or they can be implied by other instructions, as they are on x86.

Figure 13.2: CPUs Can Do Things Out of Order

Since the standard synchronization primitives preserve the illusion of ordering, your path of least resistance is to stop reading this section and simply use these primitives. However, if you need to implement the synchronization primitives themselves, or if you are simply interested in understanding how memory ordering and memory barriers work, read on!

The next sections present counter-intuitive scenarios that you might encounter when using explicit memory barriers.

13.2.2 If B Follows A, and C Follows B, Why Doesn't C Follow A?

Memory ordering and memory barriers can be extremely counter-intuitive. For example, consider the functions shown in Figure 13.3 executing in parallel where variables A, B, and C are initially zero. Intuitively, thread0() assigns to B after it assigns to A, thread1() waits until thread0() has assigned to B before assigning to C, and thread2() waits until thread1() has assigned to C before referencing A.
Therefore, again intuitively, the assertion on line 21 cannot possibly fire. 349 1 thread0(void) 2 { 3 A = 1; 4 smp_wmb(); 5 B = 1; 6 } 7 8 thread1(void) 9 { 10 while (B != 1) 11 continue; 12 barrier(); 13 C = 1; 14 } 15 16 thread2(void) 17 { 18 while (C != 1) 19 continue; 20 smp_mb(); 21 assert(A != 0); 22 } Figure 13.3: Parallel Hardware is Non-Causal This line of reasoning, intuitively obvious though it may be, is completely and utterly incorrect. Please note that this is  not   a theoretical assertion: actually running this code on real-world weakly-ordered hardware (a 1.5GHz 16-CPU POWER 5 system) resulted in the assertion firing 16 times out of 10 million runs. Clearly, anyone who produces code with explicit memory barriers should do some extreme testing – although a proof of correctness might be helpful, the strongly counter-intuitive nature of the behavior of memory barriers should in turn strongly limit one’s trust in such proofs. The requirement for extreme testing should not be taken lightly, given that a number of dirty hardware-dependent tricks were used to greatly  increase  the probability of failure in this run. Quick Quiz 13.1:  How on earth could the assertion on line 21 of the code in Figure  13.3  on page  350  possibly  fail? Quick Quiz 13.2:  Great... So how do I fix it? So what should you do? Your best strategy, if possible, is to use existing primitives that incorporate any needed memory barriers, so that you can simply ignore the rest of  this chapter. Of course, if you are implementing synchronization primitives, you don’t have this luxury. The following discussion of memory ordering and memory barriers is for you. 13.2.3 Variables Can Have More Than One Value It is natural to think of a variable as taking on a well-defined sequence of values in a well-defined, global order. Unfortunately, it is time to say “goodbye” to this sort of  comforting fiction. To see this, consider the program fragment shown in Figure  13.4 . This code fragment is executed in parallel by several CPUs. Line 1 sets a shared variable to the current CPU’s ID, line 2 initializes several variables from a gettb() function that delivers the value of fine-grained hardware “timebase” counter that is synchronized among all CPUs (not available from all CPU architectures, unfortunately!), and the loop from lines 3-8 records the length of time that the variable retains the value that this CPU assigned to it. 350 Of course, one of the CPUs will “win”, and would thus never exit the loop if not for the check on lines 7-8. Quick Quiz 13.3:  What assumption is the code fragment in Figure  13.4  making that might not be valid on real hardware? 1 state.variable = mycpu; 2 lasttb = oldtb = firsttb = gettb(); 3 while (state.variable == mycpu) { 4 lasttb = oldtb; 5 oldtb = gettb(); 6 if (lasttb - firsttb > 1000) 7 break; 8 } Figure 13.4: Software Logic Analyzer Upon exit from the loop,  firsttb  will hold a timestamp taken shortly after the assignment and  lasttb  will hold a timestamp taken before the last sampling of the shared variable that still retained the assigned value, or a value equal to  firsttb  if  the shared variable had changed before entry into the loop. This allows us to plot each CPU’s view of the value of   state.variable  over a 532-nanosecond time period, as shown in Figure  13.5.  This data was collected on 1.5GHz POWER5 system with 8 cores, each containing a pair of hardware threads. CPUs 1, 2, 3, and 4 recorded the values, while CPU 0 controlled the test. 
The timebase counter period was about 5.32ns, sufficiently fine-grained to allow observations of intermediate cache states. 1 2 4 2 2 2 100ns200ns300ns400ns500ns 3 CPU 2 CPU 3 CPU 4 CPU 1 Figure 13.5: A Variable With Multiple Simultaneous Values Each horizontal bar represents the observations of a given CPU over time, with the black regions to the left indicating the time before the corresponding CPU’s first measurement. During the first 5ns, only CPU 3 has an opinion about the value of the variable. During the next 10ns, CPUs 2 and 3 disagree on the value of the variable, but thereafter agree that the value is “2”, which is in fact the final agreed-upon value. However, CPU 1 believes that the value is “1” for almost 300ns, and CPU 4 believes that the value is “4” for almost 500ns. Quick Quiz 13.4:  How could CPUs possibly have different views of the value of a single variable  at the same time? Quick Quiz 13.5:  Why do CPUs 2 and 3 come to agreement so quickly, when it takes so long for CPUs 1 and 4 to come to the party? We have entered a regime where we must bid a fond farewell to comfortable intuitions about values of variables and the passage of time. This is the regime where memory barriers are needed. 351 13.2.4 What Can You Trust? You most definitely cannot trust your intuition. What  can  you trust? It turns out that there are a few reasonably simple rules that allow you to make good use of memory barriers. This section derives those rules, for those who wish to get to the bottom of the memory-barrier story, at least from the viewpoint of portable code. If you just want to be told what the rules are rather than suffering through the actual derivation, please feel free to skip to Section  13.2.6. The exact semantics of memory barriers vary wildly from one CPU to another, so portable code must rely only on the least-common-denominator semantics of memory barriers. Fortunately, all CPUs impose the following rules: 1.  All accesses by a given CPU will appear to that CPU to have occurred in program order. 2.  All CPUs’ accesses to a single variable will be consistent with some global ordering of stores to that variable. 3. Memory barriers will operate in a pair-wise fashion. 4.  Operations will be provided from which exclusive locking primitives may be constructed. Therefore, if you need to use memory barriers in portable code, you can rely on all of these properties . 1 Each of these properties is described in the following sections. 13.2.4.1 Self-References Are Ordered A given CPU will see its own accesses as occurring in “program order”, as if the CPU was executing only one instruction at a time with no reordering or speculation. For older CPUs, this restriction is necessary for binary compatibility, and only secondarily for the sanity of us software types. There have been a few CPUs that violate this rule to a limited extent, but in those cases, the compiler has been responsible for ensuring that ordering is explicitly enforced as needed. Either way, from the programmer’s viewpoint, the CPU sees its own accesses in program order. 13.2.4.2 Single-Variable Memory Consistency Because current commercially available computer systems provide  cache coherence , if a group of CPUs all do concurrent non-atomic stores to a single variable, the series of values seen by all CPUs will be consistent with at least one global ordering. 
For example, in the series of accesses shown in Figure  13.5,  CPU 1 sees the sequence {1,2} , CPU 2 sees the sequence  {2} , CPU 3 sees the sequence  {3,2} , and CPU 4 sees the sequence  {4,2} . This is consistent with the global sequence  {3,1,4,2} , but also with all five of the other sequences of these four numbers that end in “2”. Thus, there will be agreement on the sequence of values taken on by a single variable, but there might be ambiguity. 1 Or, better yet, you can avoid explicit use of memory barriers entirely. But that would be the subject of  other sections. 352 In contrast, had the CPUs used atomic operations (such as the Linux kernel’s atomic_inc_return()  primitive) rather than simple stores of unique values, their observations would be guaranteed to determine a single globally consistent sequence of values. One of the  atomic_inc_return()  invocations would happen first, and would change the value from 0 to 1, the second from 1 to 2, and so on. The CPUs could compare notes afterwards and come to agreement on the exact ordering of the sequence of   atomic_inc_return()  invocations. This does not work for the non-atomic stores described earlier because the non-atomic stores do not return any indication of  the earlier value, hence the possibility of ambiguity. Please note well that this section applies  only  when all CPUs’ accesses are to one single variable. In this single-variable case, cache coherence guarantees the global ordering, at least assuming that some of the more aggressive compiler optimizations are disabled via the Linux kernel’s  ACCESS_ONCE()  directive or C++11’s relaxed atomics [ Bec11 ] . In contrast, if there are multiple variables, memory barriers are required for the CPUs to consistently agree on the order for current commercially available computer systems. 13.2.4.3 Pair-Wise Memory Barriers Pair-wise memory barriers provide conditional ordering semantics. For example, in the following set of operations, CPU 1’s access to A does not unconditionally precede its access to B from the viewpoint of an external logic analyzer (see Appendix  C  for examples). However, if CPU 2’s access to B sees the result of CPU 1’s access to B, then CPU 2’s access to A is guaranteed to see the result of CPU 1’s access to A. Although some CPUs’ memory barriers do in fact provide stronger, unconditional ordering guarantees, portable code may rely only on this weaker if-then conditional ordering guarantee. CPU 1 CPU 2 access(A); access(B); smp_mb(); smp_mb(); access(B); access(A); Quick Quiz 13.6:  But if the memory barriers do not unconditionally force ordering, how the heck can a device driver reliably execute sequences of loads and stores to MMIO registers? Of course, accesses must be either loads or stores, and these do have different properties. Table  13.1  shows all possible combinations of loads and stores from a pair of CPUs. Of course, to enforce conditional ordering, there must be a memory barrier between each CPU’s pair of operations. 13.2.4.4 Pair-Wise Memory Barriers: Portable Combinations The following pairings from Table  13.1 , enumerate all the combinations of memory- barrier pairings that portable software may depend on. Pairing 1.  
In this pairing, one CPU executes a pair of loads separated by a memory barrier, while a second CPU executes a pair of stores also separated by a memory barrier, as follows (both A and B are initially equal to zero):

CPU 1          CPU 2
A = 1;         Y = B;
smp_mb();      smp_mb();
B = 1;         X = A;

After both CPUs have completed executing these code sequences, if Y==1, then we must also have X==1. In this case, the fact that Y==1 means that CPU 2's load prior to its memory barrier has seen the store following CPU 1's memory barrier. Due to the pairwise nature of memory barriers, CPU 2's load following its memory barrier must therefore see the store that precedes CPU 1's memory barrier, so that X==1. On the other hand, if Y==0, the memory-barrier condition does not hold, and so in this case, X could be either 0 or 1.

     CPU 1               CPU 2               Description
0    load(A)  load(B)    load(B)  load(A)    Ears to ears.
1    load(A)  load(B)    load(B)  store(A)   Only one store.
2    load(A)  load(B)    store(B) load(A)    Only one store.
3    load(A)  load(B)    store(B) store(A)   Pairing 1.
4    load(A)  store(B)   load(B)  load(A)    Only one store.
5    load(A)  store(B)   load(B)  store(A)   Pairing 2.
6    load(A)  store(B)   store(B) load(A)    Mouth to mouth, ear to ear.
7    load(A)  store(B)   store(B) store(A)   Pairing 3.
8    store(A) load(B)    load(B)  load(A)    Only one store.
9    store(A) load(B)    load(B)  store(A)   Mouth to mouth, ear to ear.
A    store(A) load(B)    store(B) load(A)    Ears to mouths.
B    store(A) load(B)    store(B) store(A)   Stores "pass in the night".
C    store(A) store(B)   load(B)  load(A)    Pairing 1.
D    store(A) store(B)   load(B)  store(A)   Pairing 3.
E    store(A) store(B)   store(B) load(A)    Stores "pass in the night".
F    store(A) store(B)   store(B) store(A)   Stores "pass in the night".

Table 13.1: Memory-Barrier Combinations

Pairing 2. In this pairing, each CPU executes a load followed by a memory barrier followed by a store, as follows (both A and B are initially equal to zero):

CPU 1          CPU 2
X = A;         Y = B;
smp_mb();      smp_mb();
B = 1;         A = 1;

After both CPUs have completed executing these code sequences, if X==1, then we must also have Y==0. In this case, the fact that X==1 means that CPU 1's load prior to its memory barrier has seen the store following CPU 2's memory barrier. Due to the pairwise nature of memory barriers, CPU 1's store following its memory barrier must therefore see the results of CPU 2's load preceding its memory barrier, so that Y==0. On the other hand, if X==0, the memory-barrier condition does not hold, and so in this case, Y could be either 0 or 1.

The two CPUs' code sequences are symmetric, so if Y==1 after both CPUs have finished executing these code sequences, then we must have X==0.

Pairing 3. In this pairing, one CPU executes a load followed by a memory barrier followed by a store, while the other CPU executes a pair of stores separated by a memory barrier, as follows (both A and B are initially equal to zero):

CPU 1          CPU 2
X = A;         B = 2;
smp_mb();      smp_mb();
B = 1;         A = 1;

After both CPUs have completed executing these code sequences, if X==1, then we must also have B==1. In this case, the fact that X==1 means that CPU 1's load prior to its memory barrier has seen the store following CPU 2's memory barrier. Due to the pairwise nature of memory barriers, CPU 1's store following its memory barrier must therefore see the results of CPU 2's store preceding its memory barrier. This means that CPU 1's store to B will overwrite CPU 2's store to B, resulting in B==1. On the other hand, if X==0, the memory-barrier condition does not hold, and so in this case, B could be either 1 or 2.
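These pairings are not merely of academic interest. Pairing 1, for example, is the pattern underlying simple message passing: one thread fills in a message and then sets a ready flag, while another thread spins on the flag and then reads the message. The following fragment is a sketch of this usage, written in the style of Figure 13.3 and using this book's smp_mb() and ACCESS_ONCE() notation; the variable names and the value 42 are illustrative.

int msg;        /* message payload, initially zero */
int msg_ready;  /* flag, initially zero */

void producer(void)                    /* plays the role of CPU 1 */
{
  msg = 42;                            /* "A = 1" */
  smp_mb();
  ACCESS_ONCE(msg_ready) = 1;          /* "B = 1" */
}

void consumer(void)                    /* plays the role of CPU 2 */
{
  while (ACCESS_ONCE(msg_ready) != 1)  /* "Y = B", retried until Y == 1 */
    continue;
  smp_mb();
  assert(msg == 42);                   /* "X = A", guaranteed by Pairing 1 */
}

If the consumer's load of msg_ready returns 1, the pairwise guarantee ensures that its subsequent load of msg returns 42.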
13.2.4.5 Pair-Wise Memory Barriers: Semi-Portable Combinations

The following pairings from Table 13.1 can be used on modern hardware, but might fail on some systems that were produced in the 1900s. However, these can safely be used on all mainstream hardware introduced since the year 2000. So if you think that memory barriers are difficult to deal with, please keep in mind that they used to be a lot harder on some systems!

Ears to Mouths. Since the stores cannot see the results of the loads (again, ignoring MMIO registers for the moment), it is not always possible to determine whether the memory-barrier condition has been met. However, 21st-century hardware would guarantee that at least one of the loads saw the value stored by the corresponding store (or some later value for that same variable).

Quick Quiz 13.7: How do we know that modern hardware guarantees that at least one of the loads will see the value stored by the other thread in the ears-to-mouths scenario?

Stores "Pass in the Night". In the following example, after both CPUs have finished executing their code sequences, it is quite tempting to conclude that the result {A==1,B==2} cannot happen.

CPU 1          CPU 2
A = 1;         B = 2;
smp_mb();      smp_mb();
B = 1;         A = 2;

Unfortunately, although this conclusion is correct on 21st-century systems, it does not necessarily hold on all antique 20th-century systems. Suppose that the cache line containing A is initially owned by CPU 2, and that containing B is initially owned by CPU 1. Then, in systems that have invalidation queues and store buffers, it is possible for the first assignments to "pass in the night", so that the second assignments actually happen first. This strange effect is explained in Appendix C.

This same effect can happen in any memory-barrier pairing where each CPU's memory barrier is preceded by a store, including the "ears to mouths" pairing. However, 21st-century hardware does accommodate these ordering intuitions, and does permit this combination to be used safely.
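For readers who would like to check such guarantees on their own hardware, a user-level test harness along the following lines can be used. This is a rough sketch rather than a rigorous litmus test: GCC's __atomic_thread_fence() stands in for smp_mb(), shared variables are marked volatile to discourage compiler mischief, and thread start-up synchronization is kept deliberately simple.

#include <pthread.h>
#include <stdio.h>

static volatile int A, B;
static volatile int go;

static void *thread1(void *arg)
{
  while (!go)
    continue;
  A = 1;
  __atomic_thread_fence(__ATOMIC_SEQ_CST);  /* stands in for smp_mb() */
  B = 1;
  return NULL;
}

static void *thread2(void *arg)
{
  while (!go)
    continue;
  B = 2;
  __atomic_thread_fence(__ATOMIC_SEQ_CST);  /* stands in for smp_mb() */
  A = 2;
  return NULL;
}

int main(void)
{
  int i, n = 0;

  for (i = 0; i < 100000; i++) {
    pthread_t t1, t2;

    A = B = 0;
    go = 0;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    go = 1;
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    if (A == 1 && B == 2)
      n++;  /* expected to remain zero on 21st-century hardware */
  }
  printf("stores passed in the night %d times out of %d\n", n, i);
  return 0;
}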
Because there is only one store, only one of the variables permits one CPU to see the results of the other CPU’s access. Therefore, there is no way to detect the conditional ordering provided by the memory barriers. At least not straightforwardly. But suppose that in combination 1 from Table  13.1, CPU 1’s load from A returns the value that CPU 2 stored to A. Then we know that CPU 1’s load from B returned either the same value as CPU 2’s load from A or some later value. Quick Quiz 13.8:  How can the other “Only one store” entries in Table  13.1  be used? 13.2.4.7 Semantics Sufficient to Implement Locking Suppose we have an exclusive lock ( spinlock_t  in the Linux kernel,  pthread_  mutex_t  in pthreads code) that guards a number of variables (in other words, these variables are not accessed except from the lock’s critical sections). The following properties must then hold true: 1.  A given CPU or thread must see all of its own loads and stores as if they had occurred in program order. 2.  The lock acquisitions and releases must appear to have executed in a single global order . 2 2 Of course, this order might be different from one run to the next. On any given run, however, all CPUs and threads must have a consistent view of the order of critical sections for a given exclusive lock. 356 3.  Suppose a given variable has not yet been stored to in a critical section that is currently executing. Then any load from a given variable performed in that critical section must see the last store to that variable from the last previous critical section that stored to it. The difference between the last two properties is a bit subtle: the second requires that the lock acquisitions and releases occur in a well-defined order, while the third requires that the critical sections not “bleed out” far enough to cause difficulties for other critical section. Why are these properties necessary? Suppose the first property did not hold. Then the assertion in the following code might well fail! a = 1; b = 1 + a; assert(b == 2); Quick Quiz 13.9:  How could the assertion  b==2  on page  357  possibly fail? Suppose that the second property did not hold. Then the following code might leak memory! spin_lock(&mylock); if (p == NULL) p = kmalloc(sizeof( * p), GFP_KERNEL); spin_unlock(&mylock); Quick Quiz 13.10:  How could the code on page  357  possibly leak memory? Suppose that the third property did not hold. Then the counter shown in the following code might well count backwards. This third property is crucial, as it cannot be strictly with pairwise memory barriers. spin_lock(&mylock); ctr = ctr + 1; spin_unlock(&mylock); Quick Quiz 13.11:  How could the code on page  357  possibly count backwards? If you are convinced that these rules are necessary, let’s look at how they interact with a typical locking implementation. 13.2.5 Review of Locking Implementations Naive pseudocode for simple lock and unlock operations are shown below. Note that the atomic_xchg() primitive implies a memory barrier both before and after the atomic exchange operation, which eliminates the need for an explicit memory barrier in spin_  lock() . Note also that, despite the names,  atomic_read()  and  atomic_set() do  not   execute any atomic instructions, instead, it merely executes a simple load and store, respectively. This pseudocode follows a number of Linux implementations for the unlock operation, which is a simple non-atomic store following a memory barrier. 
These minimal implementations must possess all the locking properties laid out in Section 13.2.4.

void spin_lock(spinlock_t *lck)
{
  while (atomic_xchg(&lck->a, 1) != 0)
    while (atomic_read(&lck->a) != 0)
      continue;
}

void spin_unlock(spinlock_t *lck)
{
  smp_mb();
  atomic_set(&lck->a, 0);
}

The spin_lock() primitive cannot proceed until the preceding spin_unlock() primitive completes. If CPU 1 is releasing a lock that CPU 2 is attempting to acquire, the sequence of operations might be as follows:

CPU 1                      CPU 2
(critical section)         atomic_xchg(&lck->a, 1)->1
smp_mb();                  lck->a->1
lck->a=0;                  lck->a->1
                           lck->a->0
                           (implicit smp_mb() #1)
                           atomic_xchg(&lck->a, 1)->0
                           (implicit smp_mb() #2)
                           (critical section)

In this particular case, pairwise memory barriers suffice to keep the two critical sections in place. CPU 2's atomic_xchg(&lck->a, 1) has seen CPU 1's lck->a=0, so therefore everything in CPU 2's following critical section must see everything that CPU 1's preceding critical section did. Conversely, CPU 1's critical section cannot see anything that CPU 2's critical section will do.

13.2.6 A Few Simple Rules

Probably the easiest way to understand memory barriers is to understand a few simple rules:

1. Each CPU sees its own accesses in order.
2. If a single shared variable is loaded and stored by multiple CPUs, then the series of values seen by a given CPU will be consistent with the series seen by the other CPUs, and there will be at least one sequence consisting of all values stored to that variable with which each CPU's series will be consistent.3
3. If one CPU does ordered stores to variables A and B,4 and if a second CPU does ordered loads from B and A,5 then if the second CPU's load from B gives the value stored by the first CPU, then the second CPU's load from A must give the value stored by the first CPU.
4. If one CPU does a load from A ordered before a store to B, and if a second CPU does a load from B ordered before a store to A, and if the second CPU's load from B gives the value stored by the first CPU, then the first CPU's load from A must not give the value stored by the second CPU.
5. If one CPU does a load from A ordered before a store to B, and if a second CPU does a store to B ordered before a store to A, and if the first CPU's load from A gives the value stored by the second CPU, then the first CPU's store to B must happen after the second CPU's store to B, hence the value stored by the first CPU persists.6

3 A given CPU's series may of course be incomplete, for example, if a given CPU never loaded or stored the shared variable, then it can have no opinion about that variable's value.
4 For example, by executing the store to A, a memory barrier, and then the store to B.
5 For example, by executing the load from B, a memory barrier, and then the load from A.

The next section takes a more operational view of these rules.

13.2.7 Abstract Memory Access Model

Consider the abstract model of the system shown in Figure 13.6.

Figure 13.6: Abstract Memory Access Model (two CPUs and a device, each able to access memory)

Each CPU executes a program that generates memory access operations. In the abstract CPU, memory operation ordering is very relaxed, and a CPU may actually perform the memory operations in any order it likes, provided program causality appears to be maintained. Similarly, the compiler may also arrange the instructions it emits in any order it likes, provided it doesn't affect the apparent operation of the program.
So in the above diagram, the effects of the memory operations performed by a CPU are perceived by the rest of the system as the operations cross the interface between the CPU and rest of the system (the dotted lines). For example, consider the following sequence of events given the initial values  {A = 1, B = 2} : CPU 1 CPU 2 A = 3; x = A; B = 4; y = B; The set of accesses as seen by the memory system in the middle can be arranged in 24 different combinations, with loads denoted by “ld” and stores denoted by “st”: st A=3, st B=4, x=ld A → 3, y=ld B → 4 st A=3, st B=4, y=ld B → 4, x=ld A → 3 st A=3, x=ld A → 3, st B=4, y=ld B → 4 st A=3, x=ld A → 3, y=ld B → 2, st B=4 st A=3, y=ld B → 2, st B=4, x=ld A → 3 st A=3, y=ld B → 2, x=ld A → 3, st B=4 st B=4, st A=3, x=ld A → 3, y=ld B → 4 st B=4, ... ... 6 Or, for the more competitively oriented, the first CPU’s store to B “wins”. 359 and can thus result in four different combinations of values: x == 1, y == 2 x == 1, y == 4 x == 3, y == 2 x == 3, y == 4 Furthermore, the stores committed by a CPU to the memory system may not be perceived by the loads made by another CPU in the same order as the stores were committed. As a further example, consider this sequence of events given the initial values {A = 1, B = 2, C = 3, P = &A, Q = &C} : CPU 1 CPU 2 B = 4; Q = P; P = &B D =  * Q; There is an obvious data dependency here, as the value loaded into  D  depends on the address retrieved from P  by CPU 2. At the end of the sequence, any of the following results are possible: (Q == &A) and (D == 1) (Q == &B) and (D == 2) (Q == &B) and (D == 4) Note that CPU 2 will never try and load C into D because the CPU will load P into Q before issuing the load of *Q. 13.2.8 Device Operations Some devices present their control interfaces as collections of memory locations, but the order in which the control registers are accessed is very important. For instance, imagine an Ethernet card with a set of internal registers that are accessed through an address port register (A) and a data port register (D). To read internal register 5, the following code might then be used: * A = 5; x =  * D; but this might show up as either of the following two sequences: STORE  * A = 5, x = LOAD  * D x = LOAD  * D, STORE  * A = 5 the second of which will almost certainly result in a malfunction, since it set the address  after   attempting to read the register. 13.2.9 Guarantees There are some minimal guarantees that may be expected of a CPU: 1.  On any given CPU, dependent memory accesses will be issued in order, with respect to itself. This means that for: Q = P; D =  * Q; the CPU will issue the following memory operations: 360 Q = LOAD P, D = LOAD  * Q and always in that order. 2.  Overlapping loads and stores within a particular CPU will appear to be ordered within that CPU. This means that for: a =  * X;  * X = b; the CPU will only issue the following sequence of memory operations: a = LOAD  * X, STORE  * X = b And for: * X = c; d =  * X; the CPU will only issue: STORE  * X = c, d = LOAD  * X (Loads and stores overlap if they are targetted at overlapping pieces of memory). 3.  A series of stores to a single variable will appear to all CPUs to have occurred in a single order, though this order might not be predictable from the code, and in fact the order might vary from one run to another. And there are a number of things that  must   or  must not   be assumed: 1.  It  must not   be assumed that independent loads and stores will be issued in the order given. 
This means that for:

X = *A; Y = *B; *D = Z;

we may get any of the following sequences:

X = LOAD *A, Y = LOAD *B, STORE *D = Z
X = LOAD *A, STORE *D = Z, Y = LOAD *B
Y = LOAD *B, X = LOAD *A, STORE *D = Z
Y = LOAD *B, STORE *D = Z, X = LOAD *A
STORE *D = Z, X = LOAD *A, Y = LOAD *B
STORE *D = Z, Y = LOAD *B, X = LOAD *A

2. It must be assumed that overlapping memory accesses may be merged or discarded. This means that for:

X = *A; Y = *(A + 4);

we may get any one of the following sequences:

X = LOAD *A; Y = LOAD *(A + 4);
Y = LOAD *(A + 4); X = LOAD *A;
{X, Y} = LOAD {*A, *(A + 4) };

And for:

*A = X; *(A + 4) = Y;

we may get any of:

STORE *A = X; STORE *(A + 4) = Y;
STORE *(A + 4) = Y; STORE *A = X;
STORE {*A, *(A + 4) } = {X, Y};

Finally, for:

*A = X; *A = Y;

we may get either of:

STORE *A = X; STORE *A = Y;
STORE *A = Y;

13.2.10 What Are Memory Barriers?

As can be seen above, independent memory operations are effectively performed in random order, but this can be a problem for CPU-CPU interaction and for I/O. What is required is some way of intervening to instruct the compiler and the CPU to restrict the order.

Memory barriers are such interventions. They impose a perceived partial ordering over the memory operations on either side of the barrier. Such enforcement is important because the CPUs and other devices in a system can use a variety of tricks to improve performance - including reordering, deferral and combination of memory operations; speculative loads; speculative branch prediction and various types of caching. Memory barriers are used to override or suppress these tricks, allowing the code to sanely control the interaction of multiple CPUs and/or devices.

13.2.10.1 Explicit Memory Barriers

Memory barriers come in four basic varieties:

1. Write (or store) memory barriers,
2. Data dependency barriers,
3. Read (or load) memory barriers, and
4. General memory barriers.

Each variety is described below.

Write Memory Barriers. A write memory barrier gives a guarantee that all the STORE operations specified before the barrier will appear to happen before all the STORE operations specified after the barrier with respect to the other components of the system.

A write barrier is a partial ordering on stores only; it is not required to have any effect on loads.

A CPU can be viewed as committing a sequence of store operations to the memory system as time progresses. All stores before a write barrier will occur in the sequence before all the stores after the write barrier.

† Note that write barriers should normally be paired with read or data dependency barriers; see the "SMP barrier pairing" subsection.

Data Dependency Barriers. A data dependency barrier is a weaker form of read barrier. In the case where two loads are performed such that the second depends on the result of the first (e.g., the first load retrieves the address to which the second load will be directed), a data dependency barrier would be required to make sure that the target of the second load is updated before the address obtained by the first load is accessed.

A data dependency barrier is a partial ordering on interdependent loads only; it is not required to have any effect on stores, independent loads or overlapping loads.

As mentioned for write memory barriers, the other CPUs in the system can be viewed as committing sequences of stores to the memory system that the CPU being considered can then perceive.
A data dependency barrier issued by the CPU under consideration guarantees that for any load preceding it, if that load touches one of a sequence of stores from another CPU, then by the time the barrier completes, the effects of all the stores prior to that touched by the load will be perceptible to any loads issued after the data dependency barrier. See the “Examples of memory barrier sequences” subsection for diagrams showing the ordering constraints. †  Note that the first load really has to have a  data  dependency and not a control dependency. If the address for the second load is dependent on the first load, but the dependency is through a conditional rather than actually loading the address itself, then it’s a  control  dependency and a full read barrier or better is required. See the “Control dependencies” subsection for more information. †  Note that data dependency barriers should normally be paired with write barriers; see the “SMP barrier pairing” subsection. Read Memory Barriers  A read barrier is a data dependency barrier plus a guarantee that all the LOAD operations specified before the barrier will appear to happen before all the LOAD operations specified after the barrier with respect to the other components of the system. A read barrier is a partial ordering on loads only; it is not required to have any effect on stores. Read memory barriers imply data dependency barriers, and so can substitute for them. †  Note that read barriers should normally be paired with write barriers; see the “SMP barrier pairing” subsection. General Memory Barriers  A general memory barrier gives a guarantee that all the LOAD and STORE operations specified before the barrier will appear to happen before 363 all the LOAD and STORE operations specified after the barrier with respect to the other components of the system. A general memory barrier is a partial ordering over both loads and stores. General memory barriers imply both read and write memory barriers, and so can substitute for either. 13.2.10.2 Implicit Memory Barriers There are a couple of types of implicit memory barriers, so called because they are embedded into locking primitives: 1. LOCK operations and 2. UNLOCK operations. LOCK Operations  A lock operation acts as a one-way permeable barrier. It guaran- tees that all memory operations after the LOCK operation will appear to happen after the LOCK operation with respect to the other components of the system. Memory operations that occur before a LOCK operation may appear to happen after it completes. A LOCK operation should almost always be paired with an UNLOCK operation. UNLOCK Operations  Unlock operations also act as a one-way permeable barrier. It guarantees that all memory operations before the UNLOCK operation will appear to happen before the UNLOCK operation with respect to the other components of the system. Memory operations that occur after an UNLOCK operation may appear to happen before it completes. LOCK and UNLOCK operations are guaranteed to appear with respect to each other strictly in the order specified. The use of LOCK and UNLOCK operations generally precludes the need for other sorts of memory barrier (but note the exceptions mentioned in the subsection “MMIO write barrier”). Quick Quiz 13.12:  What effect does the following sequence have on the order of  stores to variables “a” and “b”? a = 1; b = 1; 13.2.10.3 What May Not Be Assumed About Memory Barriers? 
There are certain things that memory barriers cannot guarantee outside of the confines of a given architecture: 1.  There is no guarantee that any of the memory accesses specified before a memory barrier will be  complete  by the completion of a memory barrier instruction; the barrier can be considered to draw a line in that CPU’s access queue that accesses of the appropriate type may not cross. 364 2.  There is no guarantee that issuing a memory barrier on one CPU will have any direct effect on another CPU or any other hardware in the system. The indirect effect will be the order in which the second CPU sees the effects of the first CPU’s accesses occur, but see the next point. 3.  There is no guarantee that a CPU will see the correct order of effects from a second CPU’s accesses, even  if   the second CPU uses a memory barrier, unless the first CPU  also  uses a matching memory barrier (see the subsection on “SMP Barrier Pairing”). 4.  There is no guarantee that some intervening piece of off-the-CPU hardware 7 will not reorder the memory accesses. CPU cache coherency mechanisms should propagate the indirect effects of a memory barrier between CPUs, but might not do so in order. 13.2.10.4 Data Dependency Barriers The usage requirements of data dependency barriers are a little subtle, and it’s not always obvious that they’re needed. To illustrate, consider the following sequence of  events, with initial values  {A = 1, B = 2, C = 3, P = &A, Q = &C} : CPU 1 CPU 2 B = 4; P = &B; Q = P; D =  * Q; There’s a clear data dependency here, and it would seem intuitively obvious that by the end of the sequence,  Q  must be either  &A  or  &B , and that: (Q == &A) implies (D == 1) (Q == &B) implies (D == 4) Counter-intuitive though it might be, it is quite possible that CPU 2’s perception of  P  might be updated  before  its perception of   B , thus leading to the following situation: (Q == &B) and (D == 2) ???? Whilst this may seem like a failure of coherency or causality maintenance, it isn’t, and this behaviour can be observed on certain real CPUs (such as the DEC Alpha). To deal with this, a data dependency barrier must be inserted between the address load and the data load (again with initial values of   {A = 1, B = 2, C = 3, P = &A, Q = &C} ): CPU 1 CPU 2 B = 4; P = &B; Q = P; D =  * Q; This enforces the occurrence of one of the two implications, and prevents the third possibility from arising. 7 This is of concern primarily in operating-system kernels. For more information on hardware opera- tions and memory ordering, see the files  pci.txt ,  DMA-API-HOWTO.txt ,  and  DMA-API.txt  in the Documentation  directory in the Linux source tree [ Tor03c] . 365 Note that this extremely counterintuitive situation arises most easily on machines with split caches, so that, for example, one cache bank processes even-numbered cache lines and the other bank processes odd-numbered cache lines. The pointer  P  might be stored in an odd-numbered cache line, and the variable  B  might be stored in an even-numbered cache line. Then, if the even-numbered bank of the reading CPU’s cache is extremely busy while the odd-numbered bank is idle, one can see the new value of  the pointer  P  (which is  &B ), but the old value of the variable  B  (which is 1). 
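The same scenario can also be written with the needed barriers made explicit. The following is a sketch using the Linux kernel's smp_wmb() and smp_read_barrier_depends() primitives (the latter is a no-op on most architectures, but emits a real barrier on DEC Alpha); the cpu1()/cpu2() function names merely label which CPU runs which code.

int A = 1;
int B = 2;
int *P = &A;

void cpu1(void)                   /* the updater */
{
  B = 4;
  smp_wmb();                      /* order the store to B before the store to P */
  P = &B;
}

void cpu2(void)                   /* the reader */
{
  int *q;
  int d;

  q = P;
  smp_read_barrier_depends();     /* order the load of P before the load of *P */
  d = *q;
  /* Now (q == &B) implies (d == 4). */
}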
Another example of where data dependency barriers might be required is where a number is read from memory and then used to calculate the index for an array access, with initial values {M[0] = 1, M[1] = 2, M[3] = 3, P = 0, Q = 3}:

CPU 1                     CPU 2
M[1] = 4;
<write barrier>
P = 1;                    Q = P;
                          <data dependency barrier>
                          D = M[Q];

The data dependency barrier is very important to the Linux kernel's RCU system, for example, see rcu_dereference() in include/linux/rcupdate.h. This permits the current target of an RCU'd pointer to be replaced with a new modified target, without the replacement target appearing to be incompletely initialised. See also Section 13.2.13.1 for a larger example.

13.2.10.5 Control Dependencies

A control dependency requires a full read memory barrier, not simply a data dependency barrier, to make it work correctly. Consider the following bit of code:

1 q = &a;
2 if (p)
3   q = &b;
4
5 x = *q;

This will not have the desired effect because there is no actual data dependency, but rather a control dependency that the CPU may short-circuit by attempting to predict the outcome in advance. In such a case what's actually required is:

1 q = &a;
2 if (p)
3   q = &b;
4 <read barrier>
5 x = *q;

13.2.10.6 SMP Barrier Pairing

When dealing with CPU-CPU interactions, certain types of memory barrier should always be paired. A lack of appropriate pairing is almost certainly an error.

A write barrier should always be paired with a data dependency barrier or read barrier, though a general barrier would also be viable. Similarly a read barrier or a data dependency barrier should always be paired with at least a write barrier, though, again, a general barrier is viable:

CPU 1                     CPU 2
A = 1;
<write barrier>
B = 2;                    X = B;
                          <read barrier>
                          Y = A;

Or:

CPU 1                     CPU 2
A = 1;
<write barrier>
B = &A;                   X = B;
                          <data dependency barrier>
                          Y = *X;

One way or another, the read barrier must always be present, even though it might be of a weaker type. 8

Note that the stores before the write barrier would normally be expected to match the loads after the read barrier or data dependency barrier, and vice versa:

CPU 1                     CPU 2
a = 1;                    v = c;
b = 2;                    w = d;
<write barrier>           <read barrier>
c = 3;                    x = a;
d = 4;                    y = b;

13.2.10.7 Examples of Memory Barrier Pairings

Firstly, write barriers act as partial orderings on store operations. Consider the following sequence of events:

STORE A = 1
STORE B = 2
STORE C = 3
<write barrier>
STORE D = 4
STORE E = 5

This sequence of events is committed to the memory coherence system in an order that the rest of the system might perceive as the unordered set of {A=1,B=2,C=3} all occurring before the unordered set of {D=4,E=5}, as shown in Figure 13.7.

Secondly, data dependency barriers act as partial orderings on data-dependent loads. Consider the following sequence of events with initial values {B = 7, X = 9, Y = 8, C = &Y}:

CPU 1                     CPU 2
A = 1;
B = 2;
<write barrier>
C = &B;                   LOAD X
D = 4;                    LOAD C (gets &B)
                          LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some effectively random order, despite the write barrier issued by CPU 1, as shown in Figure 13.8. In the above example, CPU 2 perceives that B is 7, despite the load of *C (which would be B) coming after the LOAD of C.

8 By "weaker", we mean "makes fewer ordering guarantees". A weaker barrier is usually also lower-overhead than is a stronger barrier.
[Figure 13.7: Write Barrier Ordering Semantics. CPU 1's stores are committed to the memory system in some sequence; the write barrier requires all stores prior to the barrier to be committed before further stores may take place, and only then do the later stores become perceptible to the rest of the system.]

[Figure 13.8: Data Dependency Barrier Omitted. The load of X holds up the maintenance of coherence of B, leading to an apparently incorrect perception of B on CPU 2.]

If, however, a data dependency barrier were to be placed between the load of C and the load of *C (i.e.: B) on CPU 2, again with initial values of {B = 7, X = 9, Y = 8, C = &Y}:

CPU 1                     CPU 2
A = 1;
B = 2;
<write barrier>
C = &B;                   LOAD X
D = 4;                    LOAD C (gets &B)
                          <data dependency barrier>
                          LOAD *C (reads B)

then ordering will be as intuitively expected, as shown in Figure 13.9.

[Figure 13.9: Data Dependency Barrier Supplied. The barrier makes sure all effects prior to the store of C are perceptible to subsequent loads.]

And thirdly, a read barrier acts as a partial order on loads. Consider the following sequence of events, with initial values {A = 0, B = 9}:

CPU 1                     CPU 2
A = 1;
<write barrier>
B = 2;                    LOAD B
                          LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in some effectively random order, despite the write barrier issued by CPU 1, as shown in Figure 13.10.

[Figure 13.10: Read Barrier Needed]

If, however, a read barrier were to be placed between the load of B and the load of A on CPU 2, again with initial values of {A = 0, B = 9}:

CPU 1                     CPU 2
A = 1;
<write barrier>
B = 2;                    LOAD B
                          <read barrier>
                          LOAD A

then the partial ordering imposed by CPU 1's write barrier will be perceived correctly by CPU 2, as shown in Figure 13.11.

[Figure 13.11: Read Barrier Supplied. At this point the read barrier causes all effects prior to the storage of B to be perceptible to CPU 2.]

To illustrate this more completely, consider what could happen if the code contained a load of A either side of the read barrier, once again with the same initial values of {A = 0, B = 9}:

CPU 1                     CPU 2
A = 1;
<write barrier>
B = 2;                    LOAD B
                          <read barrier>
                          LOAD A (1st)
                          LOAD A (2nd)

Even though the two loads of A both occur after the load of B, they may both come up with different values, as shown in Figure 13.12.

[Figure 13.12: Read Barrier Supplied, Double Load. At this point the read barrier causes all effects prior to the storage of B to be perceptible to CPU 2.]

Of course, it may well be that CPU 1's update to A becomes perceptible to CPU 2 before the read barrier completes, as shown in Figure 13.13. The guarantee is that the second load will always come up with A == 1 if the load of B came up with B == 2.
No such guarantee exists for the first load of A; that may come up with either A == 0 or A == 1.

[Figure 13.13: Read Barrier Supplied, Take Two]

13.2.10.8 Read Memory Barriers vs. Load Speculation

Many CPUs speculate with loads: that is, they see that they will need to load an item from memory, and they find a time where they're not using the bus for any other loads, and then do the load in advance, even though they haven't actually got to that point in the instruction execution flow yet. Later on, this potentially permits the actual load instruction to complete immediately because the CPU already has the value on hand. It may turn out that the CPU didn't actually need the value (perhaps because a branch circumvented the load) in which case it can discard the value or just cache it for later use. For example, consider the following:

CPU 1                     CPU 2
                          LOAD B
                          DIVIDE
                          DIVIDE
                          LOAD A

On some CPUs, divide instructions can take a long time to complete, which means that CPU 2's bus might go idle during that time. CPU 2 might therefore speculatively load A before the divides complete. In the (hopefully) unlikely event of an exception from one of the divisions, this speculative load will have been wasted, but in the (again, hopefully) common case, overlapping the load with the divides will permit the load to complete more quickly, as illustrated by Figure 13.14.

Placing a read barrier or a data dependency barrier just before the second load:

CPU 1                     CPU 2
                          LOAD B
                          DIVIDE
                          DIVIDE
                          <read barrier>
                          LOAD A

will force any value speculatively obtained to be reconsidered to an extent dependent on the type of barrier used. If there was no change made to the speculated memory location, then the speculated value will just be used, as shown in Figure 13.15. On the other hand, if there was an update or invalidation to A from some other CPU, then the speculation will be cancelled and the value of A will be reloaded, as shown in Figure 13.16.

[Figure 13.14: Speculative Load. The CPU, being busy doing a division, speculates on the LOAD of A; once the divisions are complete, the CPU can then perform the LOAD with immediate effect.]

[Figure 13.15: Speculative Load and Barrier]

[Figure 13.16: Speculative Load Cancelled by Barrier. The speculation is discarded and an updated value is retrieved.]

13.2.11 Locking Constraints

As noted earlier, locking primitives contain implicit memory barriers. These implicit memory barriers provide the following guarantees:

1. LOCK operation guarantee:

• Memory operations issued after the LOCK will be completed after the LOCK operation has completed.

• Memory operations issued before the LOCK may be completed after the LOCK operation has completed.

2. UNLOCK operation guarantee:

• Memory operations issued before the UNLOCK will be completed before the UNLOCK operation has completed.
•  Memory operations issued after the UNLOCK may be completed before the UNLOCK operation has completed. 3. LOCK vs LOCK guarantee: •  All LOCK operations issued before another LOCK operation will be com- pleted before that LOCK operation. 4. LOCK vs UNLOCK guarantee: •  All LOCK operations issued before an UNLOCK operation will be com- pleted before the UNLOCK operation. •  All UNLOCK operations issued before a LOCK operation will be completed before the LOCK operation. 5. Failed conditional LOCK guarantee: •  Certain variants of the LOCK operation may fail, either due to being unable to get the lock immediately, or due to receiving an unblocked signal or exception whilst asleep waiting for the lock to become available. Failed locks do not imply any sort of barrier. 13.2.12 Memory-Barrier Examples 13.2.12.1 Locking Examples LOCK Followed by UNLOCK:  A LOCK followed by an UNLOCK may not be assumed to be a full memory barrier because it is possible for an access preceding the LOCK to happen after the LOCK, and an access following the UNLOCK to happen before the UNLOCK, and the two accesses can themselves then cross. For example, the following: 1  * A = a; 2 LOCK 3 UNLOCK 4  * B = b; might well execute in the following order: 2 LOCK 4  * B = b; 1  * A = a; 3 UNLOCK 373 Again, always remember that both LOCK and UNLOCK are permitted to let pre- ceding operations “bleed in” to the critical section. Quick Quiz 13.13:  What sequence of LOCK-UNLOCK operations  would   act as a full memory barrier? Quick Quiz 13.14:  What (if any) CPUs have memory-barrier instructions from which these semi-permeable locking primitives might be constructed? LOCK-Based Critical Sections:  Although a LOCK-UNLOCK pair does not act as a full memory barrier, these operations  do  affect memory ordering. Consider the following code: 1  * A = a; 2  * B = b; 3 LOCK 4  * C = c; 5  * D = d; 6 UNLOCK 7  * E = e; 8  * F = f; This could legitimately execute in the following order, where pairs of operations on the same line indicate that the CPU executed those operations concurrently: 3 LOCK 1  * A = a;  * F = f; 7  * E = e; 4  * C = c;  * D = d; 2  * B = b; 6 UNLOCK # Ordering: legitimate or not? 1  * A;  * B; LOCK;  * C;  * D; UNLOCK;  * E;  * F; 2  * A; { * B; LOCK;}  * C;  * D; UNLOCK;  * E;  * F; 3  { * F;  * A;}  * B; LOCK;  * C;  * D; UNLOCK;  * E; 4  * A;  * B; {LOCK;  * C;}  * D; {UNLOCK;  * E;}  * F; 5  * B; LOCK;  * C;  * D;  * A; UNLOCK;  * E;  * F; 6  * A;  * B;  * C; LOCK;  * D; UNLOCK;  * E;  * F; 7  * A;  * B; LOCK;  * C; UNLOCK;  * D;  * E;  * F; 8  { * B;  * A; LOCK;} { * D;  * C;} {UNLOCK;  * F;  * E;} 9  * B; LOCK;  * C;  * D; UNLOCK; { * F;  * A;}  * E; Table 13.2: Lock-Based Critical Sections Quick Quiz 13.15:  Given that operations grouped in curly braces are executed con- currently, which of the rows of Table  13.2  are legitimate reorderings of the assignments to variables “A” through “F” and the LOCK/UNLOCK operations? (The order in the code is A, B, LOCK, C, D, UNLOCK, E, F.) Why or why not? Ordering with Multiple Locks:  Code containing multiple locks still sees ordering constraints from those locks, but one must be careful to keep track of which lock is which. For example, consider the code shown in Table  13.3,  which uses a pair of locks named “M” and “Q”. In this example, there are no guarantees as to what order the assignments to vari- ables “A” through “H” will appear in, other than the constraints imposed by the locks themselves, as described in the previous section. 
Quick Quiz 13.16:  What are the constraints for Table  13.3 ? 374 CPU 1 CPU 2 A = a; E = e; LOCK M; LOCK Q; B = b; F = f; C = c; G = g; UNLOCK M; UNLOCK Q; D = d; H = h; Table 13.3: Ordering With Multiple Locks Ordering with Multiple CPUs on One Lock:  Suppose, instead of the two different locks as shown in Table  13.3,  both CPUs acquire the same lock, as shown in Table  13.4 ? CPU 1 CPU 2 A = a; E = e; LOCK M; LOCK M; B = b; F = f; C = c; G = g; UNLOCK M; UNLOCK M; D = d; H = h; Table 13.4: Ordering With Multiple CPUs on One Lock In this case, either CPU 1 acquires M before CPU 2 does, or vice versa. In the first case, the assignments to A, B, and C must precede those to F, G, and H. On the other hand, if CPU 2 acquires the lock first, then the assignments to E, F, and G must precede those to B, C, and D. 13.2.13 The Effects of the CPU Cache The perceived ordering of memory operations is affected by the caches that lie between the CPUs and memory, as well as by the cache coherence protocol that maintains memory consistency and ordering. From a software viewpoint, these caches are for all intents and purposes part of memory. Memory barriers can be thought of as acting on the vertical dotted line in Figure  13.17,  ensuring that the CPU presents its values to memory in the proper order, as well as ensuring that it sees changes made by other CPUs in the proper order. Cache CPU Queue Access Memory Core CPU Device Memory Mechanism Coherency Cache Cache CPU Queue Access Memory Core CPU Memory CPU Figure 13.17: Memory Architecture 375 Although the caches can “hide” a given CPU’s memory accesses from the rest of  the system, the cache-coherence protocol ensures that all other CPUs see any effects of  these hidden accesses, migrating and invalidating cachelines as required. Furthermore, the CPU core may execute instructions in any order, restricted only by the requirement that program causality and memory ordering appear to be maintained. Some of these instructions may generate memory accesses that must be queued in the CPU’s memory access queue, but execution may nonetheless continue until the CPU either fills up its internal resources or until it must wait for some queued memory access to complete. 13.2.13.1 Cache Coherency Although cache-coherence protocols guarantee that a given CPU sees its own accesses in order, and that all CPUs agree on the order of modifications to a single variable contained within a single cache line, there is no guarantee that modifications to different variables will be seen in the same order by all CPUs — although some computer systems do make some such guarantees, portable software cannot rely on them. Cache D CPU 2 Cache C Cache B CPU 1 Cache A System Memory Figure 13.18: Split Caches To see why reordering can occur, consider the two-CPU system shown in Fig- ure  13.18,  in which each CPU has a split cache. This system has the following proper- ties: 1.  An odd-numbered cache line may be in cache A, cache C, in memory, or some combination of the above. 2.  An even-numbered cache line may be in cache B, cache D, in memory, or some combination of the above. 3.  While the CPU core is interrogating one of its caches , 9 its other cache is not necessarily quiescent. This other cache may instead be responding to an invalida- tion request, writing back a dirty cache line, processing elements in the CPU’s memory-access queue, and so on. 4.  
Each cache has queues of operations that need to be applied to that cache in order to maintain the required coherence and ordering properties.

9 But note that in "superscalar" systems, the CPU might well be accessing both halves of its cache at once, and might in fact be performing multiple concurrent accesses to each of the halves.

5. These queues are not necessarily flushed by loads from or stores to cache lines affected by entries in those queues.

In short, if cache A is busy, but cache B is idle, then CPU 1's stores to odd-numbered cache lines may be delayed compared to CPU 2's stores to even-numbered cache lines. In not-so-extreme cases, CPU 2 may see CPU 1's operations out of order.

Much more detail on memory ordering in hardware and software may be found in Appendix C.

13.2.14 Where Are Memory Barriers Needed?

Memory barriers are only required where there's a possibility of interaction between two CPUs or between a CPU and a device. If it can be guaranteed that there won't be any such interaction in any particular piece of code, then memory barriers are unnecessary in that piece of code.

Note that these are the minimum guarantees. Different architectures may give more substantial guarantees, as discussed in Appendix C, but they may not be relied upon outside of code specifically designed to run only on the corresponding architecture.

However, primitives that implement atomic operations, such as locking primitives and atomic data-structure manipulation and traversal primitives, will normally include any needed memory barriers in their definitions. However, there are some exceptions, such as atomic_inc() in the Linux kernel, so be sure to review the documentation, and, if possible, the actual implementations, for your software environment.

One final word of advice: use of raw memory-barrier primitives should be a last resort. It is almost always better to use an existing primitive that takes care of memory barriers.

13.3 Non-Blocking Synchronization

The term non-blocking synchronization (NBS) describes six classes of linearizable algorithms with differing forward-progress guarantees. These forward-progress guarantees are orthogonal to those that form the basis of real-time programming:

1. Real-time forward-progress guarantees usually have some definite time associated with them, for example, "scheduling latency must be less than 100 microseconds." In contrast, NBS guarantees only that progress will be made in finite time, with no definite bound.

2. Real-time forward-progress guarantees are sometimes probabilistic, as in the soft-real-time guarantee that "at least 99.9% of the time, scheduling latency must be less than 100 microseconds." In contrast, NBS's forward-progress guarantees have traditionally been unconditional.

3. Real-time forward-progress guarantees are often conditioned on environmental constraints, for example, only being honored for the highest-priority tasks, when each CPU spends at least a certain fraction of its time idle, or when I/O rates are below some specified maximum. In contrast, NBS's forward-progress guarantees are usually unconditional. 10

10 As we will see below, some recent NBS work relaxes this guarantee.

4. Real-time forward-progress guarantees usually apply only in the absence of software bugs. In contrast, most NBS guarantees apply even in the face of fail-stop bugs. 11

5. NBS forward-progress guarantee classes imply linearizability.
In contrast, real-time forward progress guarantees are often independent of ordering constraints such as linearizability.

Despite these differences, a number of NBS algorithms are extremely useful in real-time programs. There are currently six levels in the NBS hierarchy [ACHS13], which are roughly as follows:

1. Wait-free synchronization: Every thread will make progress in finite time [Her93].

2. Lock-free synchronization: At least one thread will make progress in finite time [Her93].

3. Obstruction-free synchronization: Every thread will make progress in finite time in the absence of contention [HLM03].

4. Clash-free synchronization: At least one thread will make progress in finite time in the absence of contention [ACHS13].

5. Starvation-free synchronization: Every thread will make progress in finite time in the absence of failures [ACHS13].

6. Deadlock-free synchronization: At least one thread will make progress in finite time in the absence of failures [ACHS13].

NBS classes 1 and 2 were first formulated in the early 1990s, class 3 was first formulated in the early 2000s, and class 4 was first formulated in 2013. The final two classes have seen informal use for a great many decades, but were reformulated in 2013.

In theory, any parallel algorithm can be cast into wait-free form, but there is a relatively small subset of NBS algorithms that are in common use. A few of these are listed in the following section.

13.3.1 Simple NBS

Perhaps the simplest NBS algorithm is atomic update of an integer counter using fetch-and-add (atomic_add_return()) primitives.

Another simple NBS algorithm implements a set of integers in an array. Here the array index indicates a value that might be a member of the set and the array element indicates whether or not that value actually is a set member. The linearizability criterion for NBS algorithms requires that reads from and updates to the array either use atomic instructions or be accompanied by memory barriers, but in the not-uncommon case where linearizability is not important, simple volatile loads and stores suffice, for example, using ACCESS_ONCE().

An NBS set may also be implemented using a bitmap, where each value that might be a member of the set corresponds to one bit. Reads and updates must normally be carried out via atomic bit-manipulation instructions, although compare-and-swap (cmpxchg() or CAS) instructions can also be used.

11 Again, some recent NBS work relaxes this guarantee.

1 static inline bool
2 ___cds_wfcq_append(struct cds_wfcq_head *head,
3                    struct cds_wfcq_tail *tail,
4                    struct cds_wfcq_node *new_head,
5                    struct cds_wfcq_node *new_tail)
6 {
7   struct cds_wfcq_node *old_tail;
8
9   old_tail = uatomic_xchg(&tail->p, new_tail);
10  CMM_STORE_SHARED(old_tail->next, new_head);
11  return old_tail != &head->node;
12 }
13
14 static inline bool
15 _cds_wfcq_enqueue(struct cds_wfcq_head *head,
16                   struct cds_wfcq_tail *tail,
17                   struct cds_wfcq_node *new_tail)
18 {
19  return ___cds_wfcq_append(head, tail,
20                            new_tail, new_tail);
21 }

Figure 13.19: NBS Enqueue Algorithm

The statistical counters algorithm discussed in Section 4.2 can be considered wait-free, but only by using a cute definitional trick in which the sum is considered approximate rather than exact. 12 Given sufficiently wide error bounds that are a function of the length of time that the read_count() function takes to sum the counters, it is not possible to prove that any non-linearizable behavior occurred.
This definitely (if a bit artificially) classifies the statistical-counters algorithm as wait-free. This algorithm is probably the most heavily used NBS algorithm in the Linux kernel. Another common NBS algorithm is the atomic queue where elements are enqueued using an atomic exchange instruction [ MS98b ], followed by a store into the  ->next pointer of the new element’s predecessor, as shown in Figure  13.19,  which shows the userspace-RCU library implementation [ Des09 ]. Line 9 updates the tail pointer to reference the new element while returning a reference to its predecessor, which is stored in local variable  old_tail . Line 10 then updates the predecessor’s  ->next  pointer to reference the newly added element, and finally line 11 returns an indication as to whether or not the queue was initially empty. Although mutual exclusion is required to dequeue a single element (so that dequeue is blocking), it is possible to carry out a non-blocking removal of the entire contents of the queue. What is not possible is to dequeue any given element in a non-blocking manner: The enqueuer might have failed between lines 9 and 10 of the figure, so that the element in question is only partially enqueued. This results in a half-NBS algorithm where enqueues are NBS but dequeues are blocking. This algorithm is nevertheless used in practice, in part because most production software is not required to tolerate arbitrary fail-stop errors. 13.3.2 NBS Discussion It is possible to create fully non-blocking queues [ MS96 ], however, such queues are much more complex than the half-NBS algorithm outlined above. The lesson here is to 12 Citation needed. I hear of this trick verbally from Mark Moir. 379 carefully consider what your requirements really are. Relaxing irrelevant requirements can often result in great improvements in both simplicity and performance. Recent research points to another important way to relax requirements. It turns out that systems providing fair scheduling can enjoy most of the benefits of wait- free synchronization even when running algorithms that provide only non-blocking synchronization, both in theory [ ACHS13 ] and in practice [ AB13 ] . Because a great many schedulers used in production do in fact provide fairness, the more-complex algorithms providing wait-free synchronization usually provide no practical advantages over their simpler and often faster non-blocking-synchronization counterparts. Interestingly enough, fair scheduling is but one beneficial constraint that is often respected in practice. Other sets of constraints can permit blocking algorithms to achieve deterministic real-time response. For example, given fair locks that are granted to requesters in FIFO order at a given priority level, a method of avoiding priority inversion (such as priority inheritance [ TS95 ,  WTS96 ] or priority ceiling), a bounded number of threads, bounded critical sections, bounded load, and avoidance of fail-stop bugs, lock-based applications can provide deterministic response times [ Bra11 ]. This approach of course blurs the distinction between blocking and wait-free synchronization, which is all to the good. Hopefully theoeretical frameworks continue to grow, further increasing their ability to describe how software is actually constructed in practice. 380 Chapter 14 Ease of Use “Creating a perfect API is like committing the perfect crime. 
There are at least fifty things that can go wrong, and if you are a genius, you might be able to anticipate twenty-five of them.” 14.1 What is Easy? “Easy” is a relative term. For example, many people would consider a 15-hour airplane flight to be a bit of an ordeal—unless they stopped to consider alternative modes of  transportation, especially swimming. This means that creating an easy-to-use API requires that you know quite a bit about your intended users. The following question illustrates this point: “Given a randomly chosen person among everyone alive today, what one change would improve his or her life?” There is no single change that would be guaranteed to help everyone’s life. After all, there is an extremely wide range of people, with a correspondingly wide range of needs, wants, desires, and aspirations. A starving person might need food, but additional food might well hasten the death of a morbidly obese person. The high level of excitement so fervently desired by many young people might well be fatal to someone recovering from a heart attack. Information critical to the success of one person might contribute to the failure of someone suffering from information overload. In short, if you are working on a software project that is intended to help someone you know nothing about, you should not be surprised when that someone is less than impressed with your efforts. If you really want to help a given group of people, there is simply no substitute for working closely with them over an extended period of time. Nevertheless, there are some simple things that you can do to increase the odds of your users being happy with your software, and some of these things are covered in the next section. 14.2 Rusty Scale for API Design This section is adapted from portions of Rusty Russell’s 2003 Ottawa Linux Symposium keynote address [ Rus03 ,  Slides 39–57]. Rusty’s key point is that the goal should not be merely to make an API easy to use, but rather to make the API hard to misuse. To that end, Rusty proposed his “Rusty Scale” in decreasing order of this important hard-to-misuse property. 381 The following list attempts to generalize the Rusty Scale beyond the Linux kernel: 1.  It is impossible to get wrong. Although this is the standard to which all API designers should strive, only the mythical  dwim() 1 command manages to come close. 2. The compiler or linker won’t let you get it wrong. 3. The compiler or linker will warn you if you get it wrong. 4. The simplest use is the correct one. 5. The name tells you how to use it. 6. Do it right or it will always break at runtime. 7.  Follow common convention and you will get it right. The  malloc()  library function is a good example. Although it is easy to get memory allocation wrong, a great many projects do manage to get it right, at least most of the time. Using malloc()  in conjunction with Valgrind  [ The11 ]  moves  malloc()  almost up to the “do it right or it will always break at runtime” point on the scale. 8. Read the documentation and you will get it right. 9. Read the implementation and you will get it right. 10. Read the right mailing-list archive and you will get it right. 11. Read the right mailing-list archive and you will get it wrong. 12.  Read the implementation and you will get it wrong. The original non- CONFIG_  PREEMPT  implementation of   rcu_read_lock()  [ McK07a ] is an infamous example of this point on the scale. 13.  Read the documentation and you will get it wrong. 
For example, the DEC Alpha wmb instruction's documentation [SW95] fooled a number of developers into thinking that this instruction had much stronger memory-order semantics than it actually does. Later documentation clarified this point [Com01], moving the wmb instruction up to the "read the documentation and you will get it right" point on the scale.

14. Follow common convention and you will get it wrong. The printf() statement is an example of this point on the scale because developers almost always fail to check printf()'s error return.

15. Do it right and it will break at runtime.

16. The name tells you how not to use it.

17. The obvious use is wrong. The Linux kernel smp_mb() function is an example of this point on the scale. Many developers assume that this function has much stronger ordering semantics than it possesses. Section 13.2 contains the information needed to avoid this mistake, as does the Linux-kernel source tree's Documentation directory.

1 The dwim() function is an acronym that expands to "do what I mean".

18. The compiler or linker will warn you if you get it right.

19. The compiler or linker won't let you get it right.

20. It is impossible to get right. The gets() function is a famous example of this point on the scale. In fact, gets() can perhaps best be described as an unconditional buffer-overflow security hole.

14.3 Shaving the Mandelbrot Set

The set of useful programs resembles the Mandelbrot set (shown in Figure 14.1) in that it does not have a clear-cut smooth boundary; if it did, the halting problem would be solvable. But we need APIs that real people can use, not ones that require a Ph.D. dissertation be completed for each and every potential use. So, we "shave the Mandelbrot set", 2 restricting the use of the API to an easily described subset of the full set of potential uses.

Figure 14.1: Mandelbrot Set (Courtesy of Wikipedia)

Such shaving may seem counterproductive. After all, if an algorithm works, why shouldn't it be used?

To see why at least some shaving is absolutely necessary, consider a locking design that avoids deadlock, but in perhaps the worst possible way. This design uses a circular doubly linked list, which contains one element for each thread in the system along with a header element. When a new thread is spawned, the parent thread must insert a new element into this list, which requires some sort of synchronization.

One way to protect the list is to use a global lock. However, this might be a bottleneck if threads were being created and deleted frequently. 3 Another approach would be to use a hash table and to lock the individual hash buckets, but this can perform poorly when scanning the list in order.

2 Due to Josh Triplett.

3 Those of you with strong operating-system backgrounds, please suspend disbelief. If you are unable to suspend disbelief, send us a better example.

A third approach is to lock the individual list elements, and to require the locks for both the predecessor and successor to be held during the insertion. Since both locks must be acquired, we need to decide which order to acquire them in. Two conventional approaches would be to acquire the locks in address order, or to acquire them in the order that they appear in the list, so that the header is always acquired first when it is one of the two elements being locked. However, both of these methods require special checks and branches.

The to-be-shaven solution is to unconditionally acquire the locks in list order. But what about deadlock?
Deadlock cannot occur. To see this, number the elements in the list starting with zero for the header up to  N   for the last element in the list (the one preceding the header, given that the list is circular). Similarly, number the threads from zero to  N  − 1 . If each thread attempts to lock some consecutive pair of elements, at least one of the threads is guaranteed to be able to acquire both locks. Why? Because there are not enough threads to reach all the way around the list. Suppose thread 0 acquires element 0’s lock. To be blocked, some other thread must have already acquired element 1’s lock, so let us assume that thread 1 has done so. Similarly, for thread 1 to be blocked, some other thread must have acquired element 2’s lock, and so on, up through thread  N  − 1 , who acquires element  N  − 1 ’s lock. For thread  N  − 1  to be blocked, some other thread must have acquired element  N  ’s lock. But there are no more threads, and so thread  N  − 1 cannot be blocked. Therefore, deadlock cannot occur. So why should we prohibit use of this delightful little algorithm? The fact is that if you  really  want to use it, we cannot stop you. We  can , however, recommend against such code being included in any project that we care about. But, before you use this algorithm, please think through the following Quick Quiz. Quick Quiz 14.1:  Can a similar algorithm be used when deleting elements? The fact is that this algorithm is extremely specialized (it only works on certain sized lists), and also quite fragile. Any bug that accidentally failed to add a node to the list could result in deadlock. In fact, simply adding the node a bit too late could result in deadlock. In addition, the other algorithms described above are “good and sufficient”. For example, simply acquiring the locks in address order is fairly simple and quick, while allowing the use of lists of any size. Just be careful of the special cases presented by empty lists and lists containing only one element! Quick Quiz 14.2:  Yetch! What ever possessed someone to come up with an algorithm that deserves to be shaved as much as this one does??? In summary, we do not use algorithms simply because they happen to work. We instead restrict ourselves to algorithms that are useful enough to make it worthwhile learning about them. The more difficult and complex the algorithm, the more generally useful it must be in order for the pain of learning it and fixing its bugs to be worthwhile. Quick Quiz 14.3:  Give an exception to this rule. Exceptions aside, we must continue to shave the software “Mandelbrot set” so that our programs remain maintainable, as shown in Figure  14.2. 384 Figure 14.2: Shaving the Mandelbrot Set 385 386 Chapter 15 Conflicting Visions of the Future This chapter presents some conflicting visions of the future of parallel programming. It is not clear which of these will come to pass, in fact, it is not clear that any of them will. They are nevertheless important because each vision has its devoted adherents, and if enough people believe in something fervently enough, you will need to deal with at least the shadow of that thing’s existence in the form of its influence on the thoughts, words, and deeds of its adherents. Besides which, it is entirely possible that one or more of these visions will actually come to pass. But most are bogus. Tell which is which and you’ll be rich [ Spi77] ! 
Therefore, the following sections give an overview of transactional memory, hardware transactional memory, and parallel functional programming. But first, a cautionary tale on prognostication taken from the early 2000s.

15.1 The Future of CPU Technology Ain't What it Used to Be

Years past always seem so simple and innocent when viewed through the lens of many years of experience. And the early 2000s were for the most part innocent of the impending failure of Moore's Law to continue delivering the then-traditional increases in CPU clock frequency. Oh, there were the occasional warnings about the limits of technology, but such warnings had been sounded for decades. With that in mind, consider the following scenarios:

1. Uniprocessor Über Alles (Figure 15.1),

2. Multithreaded Mania (Figure 15.2),

3. More of the Same (Figure 15.3), and

4. Crash Dummies Slamming into the Memory Wall (Figure 15.4).

Each of these scenarios is covered in the following sections.

15.1.1 Uniprocessor Über Alles

As was said in 2004 [McK04]:

Figure 15.1: Uniprocessor Über Alles

In this scenario, the combination of Moore's-Law increases in CPU clock rate and continued progress in horizontally scaled computing render SMP systems irrelevant. This scenario is therefore dubbed "Uniprocessor Über Alles", literally, uniprocessors above all else.

These uniprocessor systems would be subject only to instruction overhead, since memory barriers, cache thrashing, and contention do not affect single-CPU systems. In this scenario, RCU is useful only for niche applications, such as interacting with NMIs. It is not clear that an operating system lacking RCU would see the need to adopt it, although operating systems that already implement RCU might continue to do so.

However, recent progress with multithreaded CPUs seems to indicate that this scenario is quite unlikely.

Unlikely indeed! But the larger software community was reluctant to accept the fact that they would need to embrace parallelism, and so it was some time before this community concluded that the "free lunch" of Moore's-Law-induced CPU core-clock frequency increases was well and truly finished. Never forget: belief is an emotion, not necessarily the result of a rational technical thought process!

15.1.2 Multithreaded Mania

Also from 2004 [McK04]:

Figure 15.2: Multithreaded Mania

A less-extreme variant of Uniprocessor Über Alles features uniprocessors with hardware multithreading, and in fact multithreaded CPUs are now standard for many desktop and laptop computer systems. The most aggressively multithreaded CPUs share all levels of cache hierarchy, thereby eliminating CPU-to-CPU memory latency, in turn greatly reducing the performance penalty for traditional synchronization mechanisms. However, a multithreaded CPU would still incur overhead due to contention and to pipeline stalls caused by memory barriers. Furthermore, because all hardware threads share all levels of cache, the cache available to a given hardware thread is a fraction of what it would be on an equivalent single-threaded CPU, which can degrade performance for applications with large cache footprints. There is also some possibility that the restricted amount of cache available will cause RCU-based algorithms to incur performance penalties due to their grace-period-induced additional memory consumption. Investigating this possibility is future work.
However, in order to avoid such performance degradation, a number of multithreaded CPUs and multi-CPU chips partition at least some of the levels of cache on a per-hardware-thread basis. This increases the amount of cache available to each hardware thread, but re-introduces memory latency for cachelines that are passed from one hardware thread to another.

And we all know how this story has played out, with multiple multi-threaded cores on a single die plugged into a single socket. The question then becomes whether or not future shared-memory systems will always fit into a single socket.

15.1.3 More of the Same

Again from 2004 [McK04]:

Figure 15.3: More of the Same

The More-of-the-Same scenario assumes that the memory-latency ratios will remain roughly where they are today.

This scenario actually represents a change, since to have more of the same, interconnect performance must begin keeping up with the Moore's-Law increases in core CPU performance. In this scenario, overhead due to pipeline stalls, memory latency, and contention remains significant, and RCU retains the high level of applicability that it enjoys today.

And the change has been the ever-increasing levels of integration that Moore's Law is still providing. But longer term, which will it be? More CPUs per die? Or more I/O, cache, and memory? Servers seem to be choosing the former, while embedded systems on a chip (SoCs) continue choosing the latter.

15.1.4 Crash Dummies Slamming into the Memory Wall

And one more quote from 2004 [McK04]:

Figure 15.4: Crash Dummies Slamming into the Memory Wall

If the memory-latency trends shown in Figure 15.5 continue, then memory latency will continue to grow relative to instruction-execution overhead. Systems such as Linux that have significant use of RCU will find additional use of RCU to be profitable, as shown in Figure 15.6. As can be seen in this figure, if RCU is heavily used, increasing memory-latency ratios give RCU an increasing advantage over other synchronization mechanisms. In contrast, systems with minor use of RCU will require increasingly high degrees of read intensity for use of RCU to pay off, as shown in Figure 15.7. As can be seen in this figure, if RCU is lightly used, increasing memory-latency ratios put RCU at an increasing disadvantage compared to other synchronization mechanisms. Since Linux has been observed with over 1,600 callbacks per grace period under heavy load [SM04], it seems safe to say that Linux falls into the former category.

On the one hand, this passage failed to anticipate the cache-warmth issues that RCU can suffer from in workloads with significant update intensity, in part because it seemed unlikely that RCU would really be used in such cases. In the event, the SLAB_DESTROY_BY_RCU flag has been pressed into service in a number of instances where these cache-warmth issues would otherwise be problematic, as has sequence locking. On the other hand, this passage also failed to anticipate that RCU would be used to reduce scheduling latency or for security.

In short, beware of prognostications, including those in the remainder of this chapter.
The idea of supporting memory-based transactions, or "transactional memory" (TM), in hardware is more recent [HM93], but unfortunately, support for such transactions in commodity hardware was not immediately forthcoming, despite other somewhat similar proposals being put forward [SSHT93]. Not long after, Shavit and Touitou proposed a software-only implementation of transactional memory (STM) that was capable of running on commodity hardware, give or take memory-ordering issues. This proposal languished for many years, perhaps due to the fact that the research community's attention was absorbed by non-blocking synchronization (see Section 13.3).

[Figure 15.5: Instructions per Local Memory Reference for Sequent Computers. Instructions per memory reference time, plotted against year.]

[Figure 15.6: Breakevens vs. r, λ Large, Four CPUs. Breakeven update fraction plotted against memory-latency ratio, for RCU and spinlock.]

[Figure 15.7: Breakevens vs. r, λ Small, Four CPUs. Breakeven update fraction plotted against memory-latency ratio, for RCU, drw, and spinlock.]

But by the turn of the century, TM started receiving more attention [MT01, RG01], and by the middle of the decade, the level of interest can only be termed "incandescent" [Her05, Gro07], despite a few voices of caution [BLM05, MMW07].

The basic idea behind TM is to execute a section of code atomically, so that other threads see no intermediate state. As such, the semantics of TM could be implemented by simply replacing each transaction with a recursively acquirable global lock acquisition and release, albeit with abysmal performance and scalability. Much of the complexity inherent in TM implementations, whether hardware or software, is efficiently detecting when concurrent transactions can safely run in parallel. Because this detection is done dynamically, conflicting transactions can be aborted or "rolled back", and in some implementations, this failure mode is visible to the programmer.

Because transaction roll-back is increasingly unlikely as transaction size decreases, TM might become quite attractive for small memory-based operations, such as linked-list manipulations used for stacks, queues, hash tables, and search trees. However, it is currently much more difficult to make the case for large transactions, particularly those containing non-memory operations such as I/O and process creation. The following sections look at current challenges to the grand vision of "Transactional Memory Everywhere" [McK09d]. Section 15.2.1 examines the challenges faced interacting with the outside world, Section 15.2.2 looks at interactions with process modification primitives, Section 15.2.3 explores interactions with other synchronization primitives, and finally Section 15.2.4 closes with some discussion.

15.2.1 Outside World

In the words of Donald Knuth:

Many computer users feel that input and output are not actually part of "real programming," they are merely things that (unfortunately) must be done in order to get information in and out of the machine.

Whether we believe that input and output are "real programming" or not, the fact is that for most computer systems, interaction with the outside world is a first-class requirement.
This section therefore critiques transactional memory's ability to so interact, whether via I/O operations, time delays, or persistent storage.

15.2.1.1 I/O Operations

One can execute I/O operations within a lock-based critical section, and, at least in principle, from within an RCU read-side critical section. What happens when you attempt to execute an I/O operation from within a transaction?

The underlying problem is that transactions may be rolled back, for example, due to conflicts. Roughly speaking, this requires that all operations within any given transaction be revocable, so that executing the operation twice has the same effect as executing it once. Unfortunately, I/O is in general the prototypical irrevocable operation, making it difficult to include general I/O operations in transactions. In fact, general I/O is irrevocable: Once you have pushed the button launching the nuclear warheads, there is no turning back.

Here are some options for handling I/O within transactions:

1. Restrict I/O within transactions to buffered I/O with in-memory buffers. These buffers may then be included in the transaction in the same way that any other memory location might be included. This seems to be the mechanism of choice, and it does work well in many common cases of situations such as stream I/O and mass-storage I/O (a rough sketch appears at the end of this section). However, special handling is required in cases where multiple record-oriented output streams are merged onto a single file from multiple processes, as might be done using the "a+" option to fopen() or the O_APPEND flag to open(). In addition, as will be seen in the next section, common networking operations cannot be handled via buffering.

2. Prohibit I/O within transactions, so that any attempt to execute an I/O operation aborts the enclosing transaction (and perhaps multiple nested transactions). This approach seems to be the conventional TM approach for unbuffered I/O, but requires that TM interoperate with other synchronization primitives that do tolerate I/O.

3. Prohibit I/O within transactions, but enlist the compiler's aid in enforcing this prohibition.

4. Permit only one special irrevocable transaction [SMS08] to proceed at any given time, thus allowing irrevocable transactions to contain I/O operations. 1 This works in general, but severely limits the scalability and performance of I/O operations. Given that scalability and performance is a first-class goal of parallelism, this approach's generality seems a bit self-limiting. Worse yet, use of irrevocability to tolerate I/O operations seems to prohibit use of manual transaction-abort operations. 2 Finally, if there is an irrevocable transaction manipulating a given data item, any other transaction manipulating that same data item cannot have non-blocking semantics.

1 In earlier literature, irrevocable transactions are termed inevitable transactions.

2 This difficulty was pointed out by Michael Factor.

5. Create new hardware and protocols such that I/O operations can be pulled into the transactional substrate. In the case of input operations, the hardware would need to correctly predict the result of the operation, and to abort the transaction if the prediction failed.

I/O operations are a well-known weakness of TM, and it is not clear that the problem of supporting I/O in transactions has a reasonable general solution, at least if "reasonable" is to include usable performance and scalability. Nevertheless, continued time and attention to this problem will likely produce additional progress.
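To illustrate the first option above in code, here is a rough sketch that buffers the output in memory inside the transaction and performs the irrevocable write() only after the commit point. It assumes GCC's -fgnu-tm support and its __transaction_atomic construct; the shared counter, the buffer size, and the message-copying loop are purely illustrative.

/*
 * Rough sketch of buffered I/O under TM: accumulate output in a private
 * in-memory buffer inside the transaction, and perform the irrevocable
 * write() only after the transaction commits.  Assumes GCC -fgnu-tm;
 * shared_counter, LOG_BUF_SIZE, and the copying loop are illustrative.
 */
#include <stddef.h>
#include <unistd.h>

#define LOG_BUF_SIZE 64

static long shared_counter;

void count_and_log(const char *msg, size_t len)
{
        char buf[LOG_BUF_SIZE];
        size_t i, n = len < LOG_BUF_SIZE ? len : LOG_BUF_SIZE;

        __transaction_atomic {
                shared_counter++;            /* Transactional shared access. */
                for (i = 0; i < n; i++)      /* Buffer only; no real I/O here. */
                        buf[i] = msg[i];
        }

        /* The actual I/O is deferred until after the commit point. */
        (void)write(STDOUT_FILENO, buf, n);
}

Note that this sketch sidesteps rather than solves the underlying problem: if the output depended on whether the transaction ultimately committed or aborted, simple buffering would no longer suffice.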
15.2.1.2 RPC Operations One can execute RPCs within a lock-based critical section, as well as from within an RCU read-side critical section. What happens when you attempt to execute an RPC from within a transaction? If both the RPC request and its response are to be contained within the transaction, and if some part of the transaction depends on the result returned by the response, then it is not possible to use the memory-buffer tricks that can be used in the case of buffered I/O. Any attempt to take this buffering approach would deadlock the transaction, as the request could not be transmitted until the transaction was guaranteed to succeed, but the transaction’s success might not be knowable until after the response is received, as is the case in the following example: 1 begin_trans(); 2 rpc_request(); 3 i = rpc_response(); 4 a[i]++; 5 end_trans(); The transaction’s memory footprint cannot be determined until after the RPC re- sponse is received, and until the transaction’s memory footprint can be determined, it is impossible to determine whether the transaction can be allowed to commit. The only action consistent with transactional semantics is therefore to unconditionally abort the transaction, which is, to say the least, unhelpful. Here are some options available to TM: 1.  Prohibit RPC within transactions, so that any attempt to execute an RPC opera- tion aborts the enclosing transaction (and perhaps multiple nested transactions). Alternatively, enlist the compiler to enforce RPC-free transactions. This approach does works, but will require TM to interact with other synchronization primitives. 2.  Permit only one special irrevocable transaction [ SMS08 ] to proceed at any given time, thus allowing irrevocable transactions to contain RPC operations. This works in general, but severely limits the scalability and performance of RPC oper- ations. Given that scalability and performance is a first-class goal of parallelism, this approach’s generality seems a bit self-limiting. Furthermore, use of irrevo- cable transactions to permit RPC operations rules out manual transaction-abort operations once the RPC operation has started. Finally, if there is an irrevocable transaction manipulating a given data item, any other transaction manipulating that same data item cannot have non-blocking semantics. 3.  Identify special cases where the success of the transaction may be determined be- fore the RPC response is received, and automatically convert these to irrevocable 395 transactions immediately before sending the RPC request. Of course, if several concurrent transactions attempt RPC calls in this manner, it might be necessary to roll all but one of them back, with consequent degradation of performance and scalability. This approach nevertheless might be valuable given long-running transactions ending with an RPC. This approach still has problems with manual transaction-abort operations. 4.  Identify special cases where the RPC response may be moved out of the trans- action, and then proceed using techniques similar to those used for buffered I/O. 5.  Extend the transactional substrate to include the RPC server as well as its client. This is in theory possible, as has been demonstrated by distributed databases. However, it is unclear whether the requisite performance and scalability require- ments can be met by distributed-database techniques, given that memory-based TM cannot hide such latencies behind those of slow disk drives. 
Of course, given the advent of solid-state disks, it is also unclear how much longer databases will be permitted to hide their latencies behind those of disk drives.

As noted in the prior section, I/O is a known weakness of TM, and RPC is simply an especially problematic case of I/O.

15.2.1.3 Time Delays

An important special case of interaction with extra-transactional accesses involves explicit time delays within a transaction. Of course, the idea of a time delay within a transaction flies in the face of TM's atomicity property, but one can argue that this sort of thing is what weak atomicity is all about. Furthermore, correct interaction with memory-mapped I/O sometimes requires carefully controlled timing, and applications often use time delays for varied purposes. So, what can TM do about time delays within transactions?

1. Ignore time delays within transactions. This has an appearance of elegance, but like too many other "elegant" solutions, fails to survive first contact with legacy code. Such code, which might well have important time delays in critical sections, would fail upon being transactionalized.

2. Abort transactions upon encountering a time-delay operation. This is attractive, but it is unfortunately not always possible to automatically detect a time-delay operation. Is that tight loop computing something important, or is it instead waiting for time to elapse?

3. Enlist the compiler to prohibit time delays within transactions.

4. Let the time delays execute normally. Unfortunately, some TM implementations publish modifications only at commit time, which would in many cases defeat the purpose of the time delay.

It is not clear that there is a single correct answer. TM implementations featuring weak atomicity that publish changes immediately within the transaction (rolling these changes back upon abort) might be reasonably well served by the last alternative. Even in this case, the code at the other end of the transaction may require a substantial redesign to tolerate aborted transactions.

15.2.1.4 Persistence

There are many different types of locking primitives. One interesting distinction is persistence, in other words, whether the lock can exist independently of the address space of the process using the lock.

Non-persistent locks include pthread_mutex_lock(), pthread_rwlock_rdlock(), and most kernel-level locking primitives. If the memory locations instantiating a non-persistent lock's data structures disappear, so does the lock. For typical use of pthread_mutex_lock(), this means that when the process exits, all of its locks vanish. This property can be exploited in order to trivialize lock cleanup at program shutdown time, but makes it more difficult for unrelated applications to share locks, as such sharing requires the applications to share memory.

Persistent locks help avoid the need to share memory among unrelated applications. Persistent locking APIs include the flock family, lockf(), System V semaphores, or the O_CREAT flag to open(). These persistent APIs can be used to protect large-scale operations spanning runs of multiple applications, and, in the case of O_CREAT, even surviving operating-system reboot. If need be, locks can span multiple computer systems via distributed lock managers.

Persistent locks can be used by any application, including applications written using multiple languages and software environments.
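As a minimal illustration of the preceding paragraph, the following sketch acquires a persistent lock via a lock file and flock(); the path name and the error-handling policy are illustrative only.

/*
 * Minimal sketch of a persistent lock based on a lock file and flock().
 * The path passed in (for example, "/var/lock/myapp.lock") and the
 * error handling are illustrative only.
 */
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

int acquire_persistent_lock(const char *path)
{
        int fd = open(path, O_RDWR | O_CREAT, 0644);

        if (fd < 0)
                return -1;
        if (flock(fd, LOCK_EX) != 0) {  /* Blocks until the lock is granted. */
                close(fd);
                return -1;
        }
        return fd;  /* Held until flock(fd, LOCK_UN) or the last close(fd). */
}

Because the lock is attached to the underlying file rather than to the process's memory, unrelated processes that open the same path contend for it, and it follows the file descriptor across fork() and exec().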
In fact, a persistent lock might well be acquired by an application written in C and released by an application written in Python. How could a similar persistent functionality be provided for TM? 1.  Restrict persistent transactions to special-purpose environments designed to sup- port them, for example, SQL. This clearly works, given the decades-long history of database systems, but does not provide the same degree of flexibility provided by persistent locks. 2.  Use snapshot facilities provided by some storage devices and/or filesystems. Unfortunately, this does not handle network communication, nor does it handle I/O to devices that do not provide snapshot capabilities, for example, memory sticks. 3. Build a time machine. Of course, the fact that it is called transactional  memory  should give us pause, as the name itself conflicts with the concept of a persistent transaction. It is nevertheless worthwhile to consider this possibility as an important test case probing the inherent limitations of transactional memory. 15.2.2 Process Modification Processes are not eternal: They are created and destroyed, their memory mappings are modified, they are linked to dynamic libraries, and they are debugged. These sections look at how transactional memory can handle an ever-changing execution environment. 15.2.2.1 Multithreaded Transactions It is perfectly legal to create processes and threads while holding a lock or, for that matter, from within an RCU read-side critical section. Not only is it legal, but it is quite simple, as can be seen from the following code fragment: 397 1 pthread_mutex_lock(...); 2 for (i = 0; i < ncpus; i++) 3 pthread_create(&tid[i], ...); 4 for (i = 0; i < ncpus; i++) 5 pthread_join(tid[i], ...); 6 pthread_mutex_unlock(...); This pseudo-code fragment uses  pthread_create()  to spawn one thread per CPU, then uses  pthread_join()  to wait for each to complete, all under the pro- tection of   pthread_mutex_lock() . The effect is to execute a lock-based critical section in parallel, and one could obtain a similar effect using  fork()  and  wait() . Of course, the critical section would need to be quite large to justify the thread-spawning overhead, but there are many examples of large critical sections in production software. What might TM do about thread spawning within a transaction? 1.  Declare  pthread_create()  to be illegal within transactions, resulting in transaction abort (preferred) or undefined behavior. Alternatively, enlist the compiler to enforce  pthread_create() -free transactions. 2.  Permit  pthread_create()  to be executed within a transaction, but only the parent thread will be considered to be part of the transaction. This approach seems to be reasonably compatible with existing and posited TM implementations, but seems to be a trap for the unwary. This approach raises further questions, such as how to handle conflicting child-thread accesses. 3.  Convert the  pthread_create() s to function calls. This approach is also an attractive nuisance, as it does not handle the not-uncommon cases where the child threads communicate with one another. In addition, it does not permit parallel execution of the body of the transaction. 4.  Extend the transaction to cover the parent and all child threads. This approach raises interesting questions about the nature of conflicting accesses, given that the parent and children are presumably permitted to conflict with each other, but not with other threads. 
It also raises interesting questions as to what should happen if the parent thread does not wait for its children before committing the transaction. Even more interesting, what happens if the parent conditionally executes pthread_join() based on the values of variables participating in the transaction? The answers to these questions are reasonably straightforward in the case of locking. The answers for TM are left as an exercise for the reader.

Given that parallel execution of transactions is commonplace in the database world, it is perhaps surprising that current TM proposals do not provide for it. On the other hand, the example above is a fairly sophisticated use of locking that is not normally found in simple textbook examples, so perhaps its omission is to be expected. That said, there are rumors that some TM researchers are investigating fork/join parallelism within transactions, so perhaps this topic will soon be addressed more thoroughly.

15.2.2.2 The exec() System Call One can execute an exec() system call while holding a lock, and also from within an RCU read-side critical section. The exact semantics depends on the type of primitive.

In the case of non-persistent primitives (including pthread_mutex_lock(), pthread_rwlock_rdlock(), and RCU), if the exec() succeeds, the whole address space vanishes, along with any locks being held. Of course, if the exec() fails, the address space still lives, so any associated locks would also still live. A bit strange perhaps, but reasonably well defined.

On the other hand, persistent primitives (including the flock family, lockf(), System V semaphores, and the O_CREAT flag to open()) would survive regardless of whether the exec() succeeded or failed, so that the exec()ed program might well release them, as illustrated by the sketch following the list below.

Quick Quiz 15.1: What about non-persistent primitives represented by data structures in mmap() regions of memory? What happens when there is an exec() within a critical section of such a primitive?

What happens when you attempt to execute an exec() system call from within a transaction?
1. Disallow exec() within transactions, so that the enclosing transactions abort upon encountering the exec(). This is well defined, but clearly requires non-TM synchronization primitives for use in conjunction with exec().
2. Disallow exec() within transactions, with the compiler enforcing this prohibition. There is a draft specification for TM in C++ that takes this approach, allowing functions to be decorated with the transaction_safe and transaction_unsafe attributes.3 This approach has some advantages over aborting the transaction at runtime, but again requires non-TM synchronization primitives for use in conjunction with exec().
3. Treat the transaction in a manner similar to non-persistent locking primitives, so that the transaction survives if exec() fails, and silently commits if the exec() succeeds. The case where some of the variables affected by the transaction reside in mmap()ed memory (and thus could survive a successful exec() system call) is left as an exercise for the reader.
4. Abort the transaction (and the exec() system call) if the exec() system call would have succeeded, but allow the transaction to continue if the exec() system call would fail. This is in some sense the "correct" approach, but it would require considerable work for a rather unsatisfying result.
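To make the survival of persistent primitives across exec() concrete, here is a minimal sketch in which a program acquires a flock() lock and then exec()s a helper program that releases it. Because the lock is attached to the open file description rather than to the old address space, the inherited descriptor carries the lock into the new program. The file name and the unlock-helper program are hypothetical placeholders.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(void)
    {
            char fdstr[16];
            int fd = open("/var/lock/example.lock", O_RDWR | O_CREAT, 0644);

            if (fd < 0 || flock(fd, LOCK_EX) != 0)
                    return 1;
            /* No O_CLOEXEC, so the descriptor (and the lock) survives exec(). */
            snprintf(fdstr, sizeof(fdstr), "%d", fd);
            execl("./unlock-helper", "unlock-helper", fdstr, (char *)NULL);
            perror("execl");  /* reached only if exec() fails: lock still held */
            return 1;
    }

    /*
     * unlock-helper (separate, hypothetical program):
     *      int fd = atoi(argv[1]);
     *      flock(fd, LOCK_UN);
     */

A non-persistent pthread_mutex_t, by contrast, would simply vanish along with the old address space on a successful exec().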
The exec() system call is perhaps the strangest example of an obstacle to universal TM applicability, as it is not completely clear what approach makes sense, and some might argue that this is merely a reflection of the perils of interacting with execs in real life. That said, the two options prohibiting exec() within transactions are perhaps the most logical of the group. Similar issues surround the exit() and kill() system calls.

15.2.2.3 Dynamic Linking and Loading Both lock-based critical sections and RCU read-side critical sections can legitimately contain code that invokes dynamically linked and loaded functions, including C/C++ shared libraries and Java class libraries. Of course, the code contained in these libraries is by definition unknowable at compile time. So, what happens if a dynamically loaded function is invoked within a transaction?

3 Thanks to Mark Moir for pointing me at this spec, and to Michael Wong for having pointed me at an earlier revision some time back.

This question has two parts: (a) how do you dynamically link and load a function within a transaction and (b) what do you do about the unknowable nature of the code within this function? To be fair, item (b) poses some challenges for locking and RCU as well, at least in theory. For example, the dynamically linked function might introduce a deadlock for locking or might (erroneously) introduce a quiescent state into an RCU read-side critical section. The difference is that while the class of operations permitted in locking and RCU critical sections is well-understood, there appears to still be considerable uncertainty in the case of TM. In fact, different implementations of TM seem to have different restrictions.

So what can TM do about dynamically linked and loaded library functions? Options for part (a), the actual loading of the code, include the following:
1. Treat the dynamic linking and loading in a manner similar to a page fault, so that the function is loaded and linked, possibly aborting the transaction in the process. If the transaction is aborted, the retry will find the function already present, and the transaction can thus be expected to proceed normally.
2. Disallow dynamic linking and loading of functions from within transactions.

Options for part (b), the inability to detect TM-unfriendly operations in a not-yet-loaded function, include the following:
1. Just execute the code: if there are any TM-unfriendly operations in the function, simply abort the transaction. Unfortunately, this approach makes it impossible for the compiler to determine whether a given group of transactions may be safely composed. One way to permit composability regardless is irrevocable transactions; however, current implementations permit only a single irrevocable transaction to proceed at any given time, which can severely limit performance and scalability. Irrevocable transactions also seem to rule out use of manual transaction-abort operations. Finally, if there is an irrevocable transaction manipulating a given data item, any other transaction manipulating that same data item cannot have non-blocking semantics.
2. Decorate the function declarations indicating which functions are TM-friendly. These decorations can then be enforced by the compiler's type system. Of course, for many languages, this requires language extensions to be proposed, standardized, and implemented, with the corresponding time delays. That said, the standardization effort is already in progress [ATS09].
3. As above, disallow dynamic linking and loading of functions from within transactions.

I/O operations are of course a known weakness of TM, and dynamic linking and loading can be thought of as yet another special case of I/O. Nevertheless, the proponents of TM must either solve this problem, or resign themselves to a world where TM is but one tool of several in the parallel programmer's toolbox. (To be fair, a number of TM proponents have long since resigned themselves to a world containing more than just TM.)

15.2.2.4 Memory-Mapping Operations It is perfectly legal to execute memory-mapping operations (including mmap(), shmat(), and munmap() [Gro01]) within a lock-based critical section, and, at least in principle, from within an RCU read-side critical section. What happens when you attempt to execute such an operation from within a transaction? More to the point, what happens if the memory region being remapped contains some variables participating in the current thread's transaction? And what if this memory region contains variables participating in some other thread's transaction? It should not be necessary to consider cases where the TM system's metadata is remapped, given that most locking primitives do not define the outcome of remapping their lock variables.

Here are some memory-mapping options available to TM:
1. Memory remapping is illegal within a transaction, and will result in all enclosing transactions being aborted. This does simplify things somewhat, but also requires that TM interoperate with synchronization primitives that do tolerate remapping from within their critical sections.
2. Memory remapping is illegal within a transaction, and the compiler is enlisted to enforce this prohibition.
3. Memory mapping is legal within a transaction, but aborts all other transactions having variables in the region mapped over.
4. Memory mapping is legal within a transaction, but the mapping operation will fail if the region being mapped overlaps with the current transaction's footprint.
5. All memory-mapping operations, whether within or outside a transaction, check the region being mapped against the memory footprint of all transactions in the system. If there is overlap, then the memory-mapping operation fails.
6. The effect of memory-mapping operations that overlap the memory footprint of any transaction in the system is determined by the TM conflict manager, which might dynamically determine whether to fail the memory-mapping operation or abort any conflicting transactions.

It is interesting to note that munmap() leaves the relevant region of memory unmapped, which could have additional interesting implications.4

4 This difference between mapping and unmapping was noted by Josh Triplett.

15.2.2.5 Debugging The usual debugging operations such as breakpoints work normally within lock-based critical sections and from RCU read-side critical sections. However, in initial transactional-memory hardware implementations [DLMN09] an exception within a transaction will abort that transaction, which in turn means that breakpoints abort all enclosing transactions. So how can transactions be debugged?
1. Use software emulation techniques within transactions containing breakpoints. Of course, it might be necessary to emulate all transactions any time a breakpoint is set within the scope of any transaction.
If the runtime system is unable to determine whether or not a given breakpoint is within the scope of a transaction, then it might be necessary to emulate all transactions just to be on the safe side. However, this approach might impose significant overhead, which might in turn obscure the bug being pursued.
2. Use only hardware TM implementations that are capable of handling breakpoint exceptions. Unfortunately, as of this writing (September 2008), all such implementations are strictly research prototypes.
3. Use only software TM implementations, which are (very roughly speaking) more tolerant of exceptions than are the simpler of the hardware TM implementations. Of course, software TM tends to have higher overhead than hardware TM, so this approach may not be acceptable in all situations.
4. Program more carefully, so as to avoid having bugs in the transactions in the first place. As soon as you figure out how to do this, please do let everyone know the secret!

There is some reason to believe that transactional memory will deliver productivity improvements compared to other synchronization mechanisms, but it does seem quite possible that these improvements could easily be lost if traditional debugging techniques cannot be applied to transactions. This seems especially true if transactional memory is to be used by novices on large transactions. In contrast, macho "top-gun" programmers might be able to dispense with such debugging aids, especially for small transactions. Therefore, if transactional memory is to deliver on its productivity promises to novice programmers, the debugging problem does need to be solved.

15.2.3 Synchronization If transactional memory someday proves that it can be everything to everyone, it will not need to interact with any other synchronization mechanism. Until then, it will need to work with synchronization mechanisms that can do what it cannot, or that work more naturally in a given situation. The following sections outline the current challenges in this area.

15.2.3.1 Locking It is commonplace to acquire locks while holding other locks, which works quite well, at least as long as the usual well-known software-engineering techniques are employed to avoid deadlock. It is not unusual to acquire locks from within RCU read-side critical sections, which eases deadlock concerns because RCU read-side primitives cannot participate in lock-based deadlock cycles. But what happens when you attempt to acquire a lock from within a transaction?

In theory, the answer is trivial: simply manipulate the data structure representing the lock as part of the transaction, and everything works out perfectly. In practice, a number of non-obvious complications [VGS08] can arise, depending on implementation details of the TM system. These complications can be resolved, but at the cost of a 45% increase in overhead for locks acquired outside of transactions and a 300% increase in overhead for locks acquired within transactions. Although these overheads might be acceptable for transactional programs containing small amounts of locking, they are often completely unacceptable for production-quality lock-based programs wishing to use the occasional transaction.
1. Use only locking-friendly TM implementations. Unfortunately, the locking-unfriendly implementations have some attractive properties, including low overhead for successful transactions and the ability to accommodate extremely large transactions.
2. Use TM only "in the small" when introducing TM to lock-based programs, thereby accommodating the limitations of locking-friendly TM implementations.
3. Set aside locking-based legacy systems entirely, re-implementing everything in terms of transactions. This approach has no shortage of advocates, but this requires that all the issues described in this series be resolved. During the time it takes to resolve these issues, competing synchronization mechanisms will of course also have the opportunity to improve.
4. Use TM strictly as an optimization in lock-based systems, as was done by the TxLinux [RHP+07] group. This approach seems sound, but leaves the locking design constraints (such as the need to avoid deadlock) firmly in place.
5. Strive to reduce the overhead imposed on locking primitives.

The fact that there could possibly be a problem interfacing TM and locking came as a surprise to many, which underscores the need to try out new mechanisms and primitives in real-world production software. Fortunately, the advent of open source means that a huge quantity of such software is now freely available to everyone, including researchers.

15.2.3.2 Reader-Writer Locking It is commonplace to read-acquire reader-writer locks while holding other locks, which just works, at least as long as the usual well-known software-engineering techniques are employed to avoid deadlock. Read-acquiring reader-writer locks from within RCU read-side critical sections also works, and doing so eases deadlock concerns because RCU read-side primitives cannot participate in lock-based deadlock cycles. But what happens when you attempt to read-acquire a reader-writer lock from within a transaction?

Unfortunately, the straightforward approach to read-acquiring the traditional counter-based reader-writer lock within a transaction defeats the purpose of the reader-writer lock. To see this, consider a pair of transactions concurrently attempting to read-acquire the same reader-writer lock. Because read-acquisition involves modifying the reader-writer lock's data structures, a conflict will result, which will roll back one of the two transactions. This behavior is completely inconsistent with the reader-writer lock's goal of allowing concurrent readers. Here are some options available to TM:
1. Use per-CPU or per-thread reader-writer locking [HW92], which allows a given CPU (or thread, respectively) to manipulate only local data when read-acquiring the lock. This would avoid the conflict between the two transactions concurrently read-acquiring the lock, permitting both to proceed, as intended. Unfortunately, (1) the write-acquisition overhead of per-CPU/thread locking can be extremely high, (2) the memory overhead of per-CPU/thread locking can be prohibitive, and (3) this transformation is available only when you have access to the source code in question. Other more-recent scalable reader-writer locks [LLO09] might avoid some or all of these problems.
2. Use TM only "in the small" when introducing TM to lock-based programs, thereby avoiding read-acquiring reader-writer locks from within transactions.
3. Set aside locking-based legacy systems entirely, re-implementing everything in terms of transactions. This approach has no shortage of advocates, but this requires that all the issues described in this series be resolved. During the time it takes to resolve these issues, competing synchronization mechanisms will of course also have the opportunity to improve.
4.
Use TM strictly as an optimization in lock-based systems, as was done by the TxLinux  [ RHP + 07 ] group. This approach seems sound, but leaves the locking design constraints (such as the need to avoid deadlock) firmly in place. Further- more, this approach can result in unnecessary transaction rollbacks when multiple transactions attempt to read-acquire the same lock. Of course, there might well be other non-obvious issues surrounding combining TM with reader-writer locking, as there in fact were with exclusive locking. 15.2.3.3 RCU Because read-copy update (RCU) finds its main use in the Linux kernel, one might be forgiven for assuming that there had been no academic work on combining RCU and TM. 5 However, the TxLinux group from the University of Texas at Austin had no choice [ RHP + 07 ]. The fact that they applied TM to the Linux 2.6 kernel, which uses RCU, forced them to integrate TM and RCU, with TM taking the place of locking for RCU updates. Unfortunately, although the paper does state that the RCU implementa- tion’s locks (e.g.,  rcu_ctrlblk.lock ) were converted to transactions, it is silent about what happened to locks used in RCU-based updates (e.g.,  dcache_lock ). It is important to note that RCU permits readers and updaters to run concurrently, further permitting RCU readers to access data that is in the act of being updated. Of  course, this property of RCU, whatever its performance, scalability, and real-time- response benefits might be, flies in the face of the underlying atomicity properties of  TM. So how should TM-based updates interact with concurrent RCU readers? Some possibilities are as follows: 1.  RCU readers abort concurrent conflicting TM updates. This is in fact the approach taken by the TxLinux project. This approach does preserve RCU semantics, and also preserves RCU’s read-side performance, scalability, and real-time-response properties, but it does have the unfortunate side-effect of unnecessarily aborting conflicting updates. In the worst case, a long sequence of RCU readers could potentially starve all updaters, which could in theory result in system hangs. In addition, not all TM implementations offer the strong atomicity required to implement this approach. 5 However, the in-kernel excuse is wearing thin with the advent of user-space RCU  [Des09,  DMS + 12] . 404 2.  RCU readers that run concurrently with conflicting TM updates get old (pre- transaction) values from any conflicting RCU loads. This preserves RCU se- mantics and performance, and also prevents RCU-update starvation. However, not all TM implementations can provide timely access to old values of vari- ables that have been tentatively updated by an in-flight transaction. In particular, log-based TM implementations that maintain old values in the log (thus mak- ing for excellent TM commit performance) are not likely to be happy with this approach. Perhaps the  rcu_dereference()  primitive can be leveraged to permit RCU to access the old values within a greater range of TM implementa- tions, though performance might still be an issue. Nevertheless, there are popular TM implementations that can be easily and efficiently integrated with RCU in this manner [ PW07,  HW11 ,  HW13] . 3.  If an RCU reader executes an access that conflicts with an in-flight transaction, then that RCU access is delayed until the conflicting transaction either commits or aborts. This approach preserves RCU semantics, but not RCU’s performance or real-time response, particularly in presence of long-running transactions. 
In addition, not all TM implementations are capable of delaying conflicting accesses. That said, this approach seems eminently reasonable for hardware TM implementations that support only small transactions.
4. RCU readers are converted to transactions. This approach pretty much guarantees that RCU is compatible with any TM implementation, but it also imposes TM's rollbacks on RCU read-side critical sections, destroying RCU's real-time response guarantees, and also degrading RCU's read-side performance. Furthermore, this approach is infeasible in cases where any of the RCU read-side critical sections contains operations that the TM implementation in question is incapable of handling.
5. Many update-side uses of RCU modify a single pointer to publish a new data structure. In some of these cases, RCU can safely be permitted to see a transactional pointer update that is subsequently rolled back, as long as the transaction respects memory ordering and as long as the roll-back process uses call_rcu() to free up the corresponding structure. Unfortunately, not all TM implementations respect memory barriers within a transaction. Apparently, the thought is that because transactions are supposed to be atomic, the ordering of the accesses within the transaction is not supposed to matter.
6. Prohibit use of TM in RCU updates. This is guaranteed to work, but seems a bit restrictive.

It seems likely that additional approaches will be uncovered, especially given the advent of user-level RCU implementations.6

6 Kudos to the TxLinux group, Maged Michael, and Josh Triplett for coming up with a number of the above alternatives.

15.2.3.4 Extra-Transactional Accesses Within a lock-based critical section, it is perfectly legal to manipulate variables that are concurrently accessed or even modified outside that lock's critical section, with one common example being statistical counters (see the sketch following the list below). The same thing is possible within RCU read-side critical sections, and is in fact the common case.

Given mechanisms such as the so-called "dirty reads" that are prevalent in production database systems, it is not surprising that extra-transactional accesses have received serious attention from the proponents of TM, with the concepts of weak and strong atomicity [BLM06] being but one case in point. Here are some extra-transactional options available to TM:
1. Conflicts due to extra-transactional accesses always abort transactions. This is strong atomicity.
2. Conflicts due to extra-transactional accesses are ignored, so only conflicts among transactions can abort transactions. This is weak atomicity.
3. Transactions are permitted to carry out non-transactional operations in special cases, such as when allocating memory or interacting with lock-based critical sections.
4. Produce hardware extensions that permit some operations (for example, addition) to be carried out concurrently on a single variable by multiple transactions.
5. Introduce weak semantics to transactional memory. One approach is the combination with RCU described in Section 15.2.3.3, while Gramoli and Guerraoui survey a number of other weak-transaction approaches [GG14], for example, restricted partitioning of large "elastic" transactions into smaller transactions, thus reducing conflict probabilities (albeit with tepid performance and scalability). Perhaps further experience will show that some uses of extra-transactional accesses can be replaced by weak transactions.
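As a concrete illustration of the statistical-counter pattern mentioned above, the following minimal sketch (function and variable names are hypothetical) increments a counter inside a lock-based critical section while a statistics routine reads it without taking the lock. If the update side were instead a transaction, the unsynchronized read would be exactly the sort of extra-transactional access at issue: strong atomicity would abort the conflicting transaction, while weak atomicity would ignore the conflict.

    #include <pthread.h>
    #include <stdatomic.h>

    static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
    static atomic_long nlookups;     /* statistical counter */

    void cache_lookup(void)          /* update path, under the lock */
    {
            pthread_mutex_lock(&cache_lock);
            /* ... manipulate the cache ... */
            atomic_fetch_add_explicit(&nlookups, 1, memory_order_relaxed);
            pthread_mutex_unlock(&cache_lock);
    }

    long read_stats(void)            /* extra-critical-section access: no lock */
    {
            return atomic_load_explicit(&nlookups, memory_order_relaxed);
    }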
It appears that transactions were conceived as standing alone, with no interaction required with any other synchronization mechanism. If so, it is no surprise that much confusion and complexity arises when combining transactions with non-transactional accesses. But unless transactions are to be confined to small updates to isolated data structures, or alternatively to be confined to new programs that do not interact with the huge body of existing parallel code, then transactions absolutely must be so combined if  they are to have large-scale practical impact in the near term. 15.2.4 Discussion The obstacles to universal TM adoption lead to the following conclusions: 1.  One interesting property of TM is the fact that transactions are subject to rollback and retry. This property underlies TM’s difficulties with irreversible operations, including unbuffered I/O, RPCs, memory-mapping operations, time delays, and the  exec()  system call. This property also has the unfortunate consequence of introducing all the complexities inherent in the possibility of failure into synchronization primitives, often in a developer-visible manner. 2.  Another interesting property of TM, noted by Shpeisman et al.  [ SATG + 09 ], is that TM intertwines the synchronization with the data it protects. This property underlies TM’s issues with I/O, memory-mapping operations, extra-transactional accesses, and debugging breakpoints. In contrast, conventional synchronization 406 Figure 15.8: The STM Vision primitives, including locking and RCU, maintain a clear separation between the synchronization primitives and the data that they protect. 3.  One of the stated goals of many workers in the TM area is to ease parallelization of large sequential programs. As such, individual transactions are commonly expected to execute serially, which might do much to explain TM’s issues with multithreaded transactions. What should TM researchers and developers do about all of this? One approach is to focus on TM in the small, focusing on situations where hardware assist potentially provides substantial advantages over other synchronization primitives. This is in fact the approach Sun took with its Rock research CPU [ DLMN09 ]. Some TM researchers seem to agree with this approach, while others have much higher hopes for TM. Of course, it is quite possible that TM will be able to take on larger problems, and this section lists a few of the issues that must be resolved if TM is to achieve this lofty goal. Of course, everyone involved should treat this as a learning experience. It would seem that TM researchers have great deal to learn from practitioners who have success- fully built large software systems using traditional synchronization primitives. And vice versa. But for the moment, the current state of STM can best be summarized with a series of cartoons. First, Figure  15.8  shows the STM vision. As always, the reality is a bit more nuanced, as fancifully depicted by Figures  15.9,  15.10 , and  15.11. Recent advances in commercially available hardware have opened the door for variants of HTM, which are addressed in the following section. 407 Figure 15.9: The STM Reality: Conflicts 15.3 Hardware Transactional Memory As of early 2012, hardware transactional memory (HTM) is starting to emerge into commercially available commodity computer systems. This section makes a first attempt to find its place in the parallel programmer’s toolbox. 
From a conceptual viewpoint, HTM uses processor caches and speculative execution to make a designated group of statements (a “transaction”) take effect atomically from the viewpoint of any other transactions running on other processors. This transaction is initiated by a begin-transaction machine instruction and completed by a commit- transaction machine instruction. There is typically also an abort-transaction machine instruction, which squashes the speculation (as if the begin-transaction instruction and all following instructions had not executed) and commences execution at a failure handler. The location of the failure handler is typically specified by the begin-transaction instruction, either as an explicit failure-handler address or via a condition code set by the instruction itself. Each transaction executes atomically with respect to all other transactions. HTM has a number of important benefits, including automatic dynamic partitioning of data structures, reducing synchronization-primitive cache misses, and supporting a fair number of practical applications. However, it always pays to read the fine print, and HTM is no exception. A major point of this section is determining under what conditions HTM’s benefits outweigh the complications hidden in its fine print. To this end, Section  15.3.1  describes HTM’s benefits and Section  15.3.2  describes its weaknesses. This is the same approach used in 408 Figure 15.10: The STM Reality: Irrevocable Operations earlier papers [ MMW07 ,  MMTW10 ] , but focused on HTM rather than TM as a whole . 7 Section  15.3.3  then describes HTM’s weaknesses with respect to the combination of  synchronization primitives used in the Linux kernel (and in some user-space applica- tions). Section  15.3.4  looks at where HTM might best fit into the parallel programmer’s toolbox, and Section  15.3.5  lists some events that might greatly increase HTM’s scope and appeal. Finally, Section  15.3.6  presents concluding remarks. 15.3.1 HTM Benefits WRT to Locking The primary benefits of HTM are (1) its avoidance of the cache misses that are often incurred by other synchronization primitives, (2) its ability to dynamically partition data structures, and (3) the fact that it has a fair number of practical applications. I break from TM tradition by not listing ease of use separately for two reasons. First, ease of use should stem from HTM’s primary benefits, which this paper focuses on. Second, there has been considerable controversy surrounding attempts to test for raw programming talent [ Bow06 ,  DBA09 ]  and even around the use of small programming exercises in job interviews [ Bra07 ]. This indicates that we really do not have a grasp on what makes programming easy or hard. Therefore, this paper focuses on the three benefits listed above, each in one of the following sections. 15.3.1.1 Avoiding Synchronization Cache Misses Most synchronization mechanisms are based on data structures that are operated on by atomic instructions. Because these atomic instructions normally operate by first causing 7 And I gratefully acknowledge many stimulating discussions with the other authors, Maged Michael, Josh Triplett, and Jonathan Walpole, as well as with Andi Kleen. 409 Figure 15.11: The STM Reality: Realtime Response the relevant cache line to be owned by the CPU that they are running on, a subsequent execution of the same instance of that synchronization primitive on some other CPU will result in a cache miss. 
These communications cache misses severely degrade both the performance and scalability of conventional synchronization mechanisms [ABD+97, Section 4.2.3]. In contrast, HTM synchronizes by using the CPU's cache, avoiding the need for a synchronization data structure and resultant cache misses. HTM's advantage is greatest in cases where a lock data structure is placed in a separate cache line, in which case, converting a given critical section to an HTM transaction can reduce that critical section's overhead by a full cache miss. This savings can be quite significant for the common case of short critical sections, at least for those situations where the elided lock does not share a cache line with an oft-written variable protected by that lock.

Quick Quiz 15.2: Why would it matter that oft-written variables shared the cache line with the lock variable?

15.3.1.2 Dynamic Partitioning of Data Structures A major obstacle to the use of some conventional synchronization mechanisms is the need to statically partition data structures. There are a number of data structures that are trivially partitionable, with the most prominent example being hash tables, where each hash chain constitutes a partition. Allocating a lock for each hash chain then trivially parallelizes the hash table for operations confined to a given chain.8 Partitioning is similarly trivial for arrays, radix trees, and a few other data structures. However, partitioning for many types of trees and graphs is quite difficult, and the results are often quite complex [Ell80]. Although it is possible to use two-phased locking and hashed arrays of locks to partition general data structures, other techniques have proven preferable [Mil06], as will be discussed in Section 15.3.3. Given its avoidance of synchronization cache misses, HTM is therefore a very real possibility for large non-partitionable data structures, at least assuming relatively small updates.

8 And it is also easy to extend this scheme to operations accessing multiple hash chains by having such operations acquire the locks for all relevant chains in hash order.

Quick Quiz 15.3: Why are relatively small updates important to HTM performance and scalability?

15.3.1.3 Practical Value Some evidence of HTM's practical value has been demonstrated in a number of hardware platforms, including Sun Rock [DLMN09] and Azul Vega [Cli09]. It is reasonable to assume that practical benefits will flow from the more recent IBM Blue Gene/Q, Intel Haswell TSX, and AMD AFS systems. Expected practical benefits include:
1. Lock elision for in-memory data access and update [MT01, RG02].
2. Concurrent access and small random updates to large non-partitionable data structures.

However, HTM also has some very real shortcomings, which will be discussed in the next section.

15.3.2 HTM Weaknesses WRT Locking The concept of HTM is quite simple: A group of accesses and updates to memory occur atomically. However, as is the case with many simple ideas, complications arise when you apply it to real systems in the real world. These complications are as follows:
1. Transaction-size limitations.
2. Conflict handling.
3. Aborts and rollbacks.
4. Lack of forward-progress guarantees.
5. Irrevocable operations.
6. Semantic differences.

Each of these complications is covered in the following sections, followed by a summary.
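Before examining these complications one by one, a minimal sketch of HTM-based lock elision may help ground the discussion. It assumes the x86 RTM intrinsics from <immintrin.h> (compiled with -mrtm); the critsec_enter()/critsec_exit() wrappers, the three-try retry budget, and the fallback-flag scheme are illustrative choices rather than a recommended implementation, and the complications covered below are precisely the fine print that such code must confront.

    #include <immintrin.h>
    #include <pthread.h>

    static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
    static int fallback_held;   /* set while some thread holds the fallback lock */

    static void critsec_enter(void)
    {
            for (int tries = 0; tries < 3; tries++) {
                    if (_xbegin() == _XBEGIN_STARTED) {
                            if (!fallback_held)  /* adds the flag to the read set */
                                    return;      /* run transactionally */
                            _xabort(0xff);       /* a lock-mode thread is active */
                    }
                    /* Aborted: _xbegin() returned an abort status, so retry. */
            }
            pthread_mutex_lock(&fallback_lock);  /* give up; take the real lock */
            fallback_held = 1;
    }

    static void critsec_exit(void)
    {
            if (_xtest()) {                      /* still inside a transaction? */
                    _xend();                     /* commit */
            } else {
                    fallback_held = 0;
                    pthread_mutex_unlock(&fallback_lock);
            }
    }

Note that the fallback path reintroduces locking's deadlock concerns, and that the transition from lock mode back to transactional mode is nontrivial, as discussed in Section 15.3.2.3.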
15.3.2.1 Transaction-Size Limitations The transaction-size limitations of current HTM implementations stem from the use of the processor caches to hold the data affected by the transaction. Although this allows a given CPU to make the transaction appear atomic to other CPUs by executing the transaction within the confines of its cache, it also means that any transaction that does not fit must be aborted. Furthermore, events that change execution context, such as interrupts, system calls, exceptions, traps, and context switches either must abort any ongoing transaction on the CPU in question or must further restrict transaction size due to the cache footprint of the other execution context.

Of course, modern CPUs tend to have large caches, and the data required for many transactions would fit easily in a one-megabyte cache. Unfortunately, with caches, sheer size is not all that matters. The problem is that most caches can be thought of as hash tables implemented in hardware. However, hardware caches do not chain their buckets (which are normally called sets), but rather provide a fixed number of cachelines per set. The number of elements provided for each set in a given cache is termed that cache's associativity. Although cache associativity varies, the eight-way associativity of the level-0 cache on the laptop I am typing this on is not unusual. What this means is that if a given transaction needed to touch nine cache lines, and if all nine cache lines mapped to the same set, then that transaction cannot possibly complete, never mind how many megabytes of additional space might be available in that cache. Yes, given randomly selected data elements in a given data structure, the probability of that transaction being able to commit is quite high, but there can be no guarantee.

There has been some research work to alleviate this limitation. Fully associative victim caches would alleviate the associativity constraints, but there are currently stringent performance and energy-efficiency constraints on the sizes of victim caches. That said, HTM victim caches for unmodified cache lines can be quite small, as they need to retain only the address: The data itself can be written to memory or shadowed by other caches, while the address itself is sufficient to detect a conflicting write [RD12].

Unbounded transactional memory (UTM) schemes [AAKL06, MBM+06] use DRAM as an extremely large victim cache, but integrating such schemes into a production-quality cache-coherence mechanism is still an unsolved problem. In addition, use of DRAM as a victim cache may have unfortunate performance and energy-efficiency consequences, particularly if the victim cache is to be fully associative. Finally, the "unbounded" aspect of UTM assumes that all of DRAM could be used as a victim cache, while in reality the large but still fixed amount of DRAM assigned to a given CPU would limit the size of that CPU's transactions. Other schemes use a combination of hardware and software transactional memory [KCH+06], and one could imagine using STM as a fallback mechanism for HTM.

However, to the best of my knowledge, currently available systems do not implement any of these research ideas, and perhaps for good reason.

15.3.2.2 Conflict Handling The first complication is the possibility of conflicts. For example, suppose that transactions A and B are defined as follows:

    Transaction A      Transaction B
    x = 1;             y = 2;
    y = 3;             x = 4;

Suppose that each transaction executes concurrently on its own processor.
If trans- action A stores to  x  at the same time that transaction B stores to  y , neither transaction can progress. To see this, suppose that transaction A executes its store to  y . Then trans- action A will be interleaved within transaction B, in violation of the requirement that transactions execute atomically with respect to each other. Allowing transaction B to execute its store to x  similarly violates the atomic-execution requirement. This situation 412 is termed a  conflict  , which happens whenever two concurrent transactions access the same variable where at least one of the accesses is a store. The system is therefore obligated to abort one or both of the transactions in order to allow execution to progress. The choice of exactly which transaction to abort is an interesting topic that will very likely retain the ability to generate Ph.D. dissertations for some time to come, see for example  [ ATC + 11 ] . 9 For the purposes of this section, we can assume that the system makes a random choice. Another complication is conflict detection, which is comparatively straightforward, at least in the simplest case. When a processor is executing a transaction, it marks every cache line touched by that transaction. If the processor’s cache receives a request involving a cache line that has been marked as touched by the current transaction, a potential conflict has occurred. More sophisticated systems might try to order the current processors’ transaction to precede that of the processor sending the request, and optimization of this process will likely also retain the ability to generate Ph.D. dissertations for quite some time. However this section assumes a very simple conflict- detection strategy. However, for HTM to work effectively, the probability of conflict must be suitably low, which in turn requires that the data structures be organized so as to maintain a sufficiently low probability of conflict. For example, a red-black tree with simple insertion, deletion, and search operations fits this description, but a red-black tree that maintains an accurate count of the number of elements in the tree does not. 10 For another example, a red-black tree that enumerates all elements in the tree in a single transaction will have high conflict probabilities, degrading performance and scalability. As a result, many serial programs will require some restructuring before HTM can work effectively. In some cases, practitioners will prefer to take the extra steps (in the red-black-tree case, perhaps switching to a partitionable data structure such as a radix tree or a hash table), and just use locking, particularly during the time before HTM is readily available on all relevant architectures [ Cli09 ]. Quick Quiz 15.4:  How could a red-black tree possibly efficiently enumerate all elements of the tree regardless of choice of synchronization mechanism??? Furthermore, the fact that conflicts can occur brings failure handling into the picture, as discussed in the next section. 15.3.2.3 Aborts and Rollbacks Because any transaction might be aborted at any time, it is important that transactions contain no statements that cannot be rolled back. This means that transactions cannot do I/O, system calls, or debugging breakpoints (no single stepping in the debugger for HTM transactions!!!). Instead, transactions must confine themselves to accessing normal cached memory. Furthermore, on some systems, interrupts, exceptions, traps, TLB misses, and other events will also abort transactions. 
Given the number of bugs that have resulted from improper handling of error conditions, it is fair to ask what impact aborts and rollbacks have on ease of use. Quick Quiz 15.5:  But why can’t a debugger emulate single stepping by setting breakpoints at successive lines of the transaction, relying on the retry to retrace the steps of the earlier instances of the transaction? 9 Liu’s and Spear’s paper entitled “Toxic Transactions” [ LS11]  is particularly instructive in this regard. 10 The need to update the count would result in additions to and deletions from the tree conflicting with each other, resulting in strong non-commutativity [ AGH + 11a, AGH + 11b,  McK11b ]. 413 Of course, aborts and rollbacks raise the question of whether HTM can be useful for hard realtime systems. Do the performance benefits of HTM outweigh the costs of the aborts and rollbacks, and if so under what conditions? Can transactions use priority boosting? Or should transactions for high-priority threads instead preferentially abort those of low-priority threads? If so, how is the hardware efficiently informed of priorities? The literature on realtime use of HTM is quite sparse, perhaps because researchers are finding more than enough problems in getting transactions to work well in non-realtime environments. Because current HTM implementations might deterministically abort a given trans- action, software must provide fallback code. This fallback code must use some other form of synchronization, for example, locking. If the fallback is used frequently, then all the limitations of locking, including the possibility of deadlock, reappear. One can of course hope that the fallback isn’t used often, which might allow simpler and less deadlock-prone locking designs to be used. But this raises the question of how the system transitions from using the lock-based fallbacks back to transactions. 11 One approach is to use a test-and-test-and-set discipline  [ MT02 ], so that everyone holds off  until the lock is released, allowing the system to start from a clean slate in transactional mode at that point. However, this could result in quite a bit of spinning, which might not be wise if the lock holder has blocked or been preempted. Another approach is to allow transactions to proceed in parallel with a thread holding a lock [ MT02 ] , but this raises difficulties in maintaining atomicity, especially if the reason that the thread is holding the lock is because the corresponding transaction would not fit into cache. Finally, dealing with the possibility of aborts and rollbacks seems to put an additional burden on the developer, who must correctly handle all combinations of possible error conditions. It is clear that users of HTM must put considerable validation effort into testing both the fallback code paths and transition from fallback code back to transactional code. 15.3.2.4 Lack of Forward-Progress Guarantees Even though transaction size, conflicts, and aborts/rollbacks can all cause transactions to abort, one might hope that sufficiently small and short-duration transactions could be guaranteed to eventually succeed. This would permit a transaction to be unconditionally retried, in the same way that compare-and-swap (CAS) and load-linked/store-conditional (LL/SC) operations are unconditionally retried in code that uses these instructions to implement atomic operation. 
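For comparison, here is the sort of unconditionally retried update that CAS-based code relies on, written as a minimal C11 sketch. A failed compare-and-swap here means that some other thread's update succeeded, so the system as a whole always makes progress; as the following paragraph notes, this is precisely the guarantee that most current HTM implementations do not provide.

    #include <stdatomic.h>

    static atomic_long counter;

    void inc_counter(void)
    {
            long old = atomic_load_explicit(&counter, memory_order_relaxed);

            /* On failure, "old" is reloaded with the current value; just retry. */
            while (!atomic_compare_exchange_strong_explicit(&counter, &old, old + 1,
                                                            memory_order_relaxed,
                                                            memory_order_relaxed))
                    continue;
    }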
Unfortunately, most currently available HTM implementations refuse to make any sort of forward-progress guarantee, which means that HTM cannot be used to avoid deadlock on those systems.12 Hopefully future implementations of HTM will provide some sort of forward-progress guarantees. Until that time, HTM must be used with extreme caution in real-time applications.13

11 The possibility of an application getting stuck in fallback mode has been termed the "lemming effect", a term that Dave Dice has been credited with coining.
12 HTM might well be used to reduce the probability of deadlock, but as long as there is some possibility of the fallback code being executed, there is some possibility of deadlock.
13 As of mid-2012, there has been surprisingly little work on transactional memory's real-time characteristics.

The one exception to this gloomy picture as of 2013 is upcoming versions of the IBM mainframe, which provides a separate instruction that may be used to start a special constrained transaction [JSG12]. As you might guess from the name, such transactions must live within the following constraints:
1. Each transaction's data footprint must be contained within four 32-byte blocks of memory.
2. Each transaction is permitted to execute at most 32 assembler instructions.
3. Transactions are not permitted to have backwards branches (e.g., no loops).
4. Each transaction's code is limited to 256 bytes of memory.
5. If a portion of a given transaction's data footprint resides within a given 4K page, then that 4K page is prohibited from containing any of that transaction's instructions.

These constraints are severe, but they nevertheless permit a wide variety of data-structure updates to be implemented, including stacks, queues, hash tables, and so on. These operations are guaranteed to eventually complete, and are free of deadlock and livelock conditions. It will be interesting to see how hardware support of forward-progress guarantees evolves over time.

15.3.2.5 Non-Idempotent Operations Another consequence of aborts and rollbacks is that HTM transactions cannot accommodate irrevocable operations. Current HTM implementations typically enforce this limitation by requiring that all of the accesses in the transaction be to cacheable memory (thus prohibiting MMIO accesses) and aborting transactions on interrupts, traps, and exceptions (thus prohibiting system calls).

Note that buffered I/O can be accommodated by HTM transactions as long as the buffer fill/flush operations occur extra-transactionally. The reason that this works is that adding data to and removing data from the buffer is revocable: Only the actual buffer fill/flush operations are irrevocable. Of course, this buffered-I/O approach has the effect of including the I/O in the transaction's footprint, increasing the size of the transaction and thus increasing the probability of failure.

15.3.2.6 Semantic Differences Although HTM can in many cases be used as a drop-in replacement for locking (hence the name transactional lock elision [DHL+08]), there are subtle differences in semantics. A particularly nasty example involving coordinated lock-based critical sections that results in deadlock or livelock when executed transactionally was given by Blundell [BLM06], but a much simpler example is the empty critical section. In a lock-based program, an empty critical section will guarantee that all processes that had previously been holding that lock have now released it.
This idiom was used by the 2.4 Linux kernel's networking stack to coordinate changes in configuration. But if this empty critical section is translated to a transaction, the result is a no-op. The guarantee that all prior critical sections have terminated is lost. In other words, transactional lock elision preserves the data-protection semantics of locking, but loses locking's time-based messaging semantics.

Quick Quiz 15.6: But why would anyone need an empty lock-based critical section???

Quick Quiz 15.7: Can't transactional lock elision trivially handle locking's time-based messaging semantics by simply choosing not to elide empty lock-based critical sections?

Quick Quiz 15.8: Given modern hardware [MOZ09], how can anyone possibly expect parallel software relying on timing to work?

One important semantic difference between locking and transactions is the priority boosting that is used to avoid priority inversion in lock-based real-time programs. One way in which priority inversion can occur is when a low-priority thread holding a lock is preempted by a medium-priority CPU-bound thread. If there is at least one such medium-priority thread per CPU, the low-priority thread will never get a chance to run. If a high-priority thread now attempts to acquire the lock, it will block. It cannot acquire the lock until the low-priority thread releases it, the low-priority thread cannot release the lock until it gets a chance to run, and it cannot get a chance to run until one of the medium-priority threads gives up its CPU. Therefore, the medium-priority threads are in effect blocking the high-priority process, which is the rationale for the name "priority inversion."

One way to avoid priority inversion is priority inheritance, in which a high-priority thread blocked on a lock temporarily donates its priority to the lock's holder, which is also called priority boosting. However, priority boosting can be used for things other than avoiding priority inversion, as shown in Figure 15.12. Lines 1-12 of this figure show a low-priority process that must nevertheless run every millisecond or so, while lines 14-24 of this same figure show a high-priority process that uses priority boosting to ensure that boostee() runs periodically as needed. The boostee() function arranges this by always holding one of the two boost_lock[] locks, so that lines 20-21 of booster() can boost priority as needed.

     1 void boostee(void)
     2 {
     3   int i = 0;
     4
     5   acquire_lock(&boost_lock[i]);
     6   for (;;) {
     7     acquire_lock(&boost_lock[!i]);
     8     release_lock(&boost_lock[i]);
     9     i = i ^ 1;
    10     do_something();
    11   }
    12 }
    13
    14 void booster(void)
    15 {
    16   int i = 0;
    17
    18   for (;;) {
    19     usleep(1000); /* sleep 1 ms. */
    20     acquire_lock(&boost_lock[i]);
    21     release_lock(&boost_lock[i]);
    22     i = i ^ 1;
    23   }
    24 }

Figure 15.12: Exploiting Priority Boosting

Quick Quiz 15.9: But the boostee() function in Figure 15.12 alternately acquires its locks in reverse order! Won't this result in deadlock?

This arrangement requires that boostee() acquire its first lock on line 5 before the system becomes busy, but this is easily arranged, even on modern hardware. Unfortunately, this arrangement can break down in the presence of transactional lock elision. The boostee() function's overlapping critical sections become one infinite transaction, which will sooner or later abort, for example, on the first time that the thread running the boostee() function is preempted.
At this point, boostee() will fall back to locking, but given its low priority and given that the quiet initialization period is now complete (which, after all, is why boostee() was preempted), this thread might never again get a chance to run. And if the boostee() thread is not holding the lock, then the booster() thread's empty critical section on lines 20 and 21 of Figure 15.12 will become an empty transaction that has no effect, so that boostee() never runs. This example illustrates some of the subtle consequences of transactional memory's rollback-and-retry semantics.

Given that experience will likely uncover additional subtle semantic differences, application of HTM-based lock elision to large programs should be undertaken with caution.

15.3.2.7 Summary

Although it seems likely that HTM will have compelling use cases, current implementations have serious transaction-size limitations, conflict-handling complications, abort-and-rollback issues, and semantic differences that will require careful handling. HTM's current situation relative to locking is summarized in Table 15.1. As can be seen, although the current state of HTM alleviates some serious shortcomings of locking,14 it does so by introducing a significant number of shortcomings of its own. These shortcomings are acknowledged by leaders in the TM community [MS12].15

In addition, this is not the whole story. Locking is not normally used by itself, but is instead typically augmented by other synchronization mechanisms, including reference counting, atomic operations, non-blocking data structures, hazard pointers [Mic04, HLM02], and read-copy update (RCU) [MS98a, MAK+01, HMBW07, McK12b]. The next section looks at how such augmentation changes the equation.

14 In fairness, it is important to emphasize that locking's shortcomings do have well-known and heavily used engineering solutions, including deadlock detectors [Cor06a], a wealth of data structures that have been adapted to locking, and a long history of augmentation, as discussed in Section 15.3.3. In addition, if locking really were as horrible as a quick skim of many academic papers might reasonably lead one to believe, where did all the large lock-based parallel programs (both FOSS and proprietary) come from, anyway?

15 In addition, in early 2011, I was invited to deliver a critique of some of the assumptions underlying transactional memory [McK11d]. The audience was surprisingly non-hostile, though perhaps they were taking it easy on me because I was heavily jet-lagged while giving the presentation.

Basic Idea:
    Locking: Allow only one thread at a time to access a given set of objects.
    HTM: Cause a given operation over a set of objects to execute atomically.

Scope:
    Locking: + Handles all operations.
    HTM: + Handles revocable operations.
         − Irrevocable operations force fallback (typically to locking).

Composability:
    Locking: ⇓ Limited by deadlock.
    HTM: ⇓ Limited by irrevocable operations, transaction size, and deadlock (assuming lock-based fallback code).

Scalability & Performance:
    Locking: − Data must be partitionable to avoid lock contention.
             ⇓ Partitioning must typically be fixed at design time.
             ⇓ Locking primitives typically result in expensive cache misses and memory-barrier instructions.
             + Contention effects are focused on acquisition and release, so that the critical section runs at full speed.
             + Privatization operations are simple, intuitive, performant, and scalable.
    HTM: − Data must be partitionable to avoid conflicts.
         + Dynamic adjustment of partitioning carried out automatically down to cacheline boundaries.
         − Partitioning required for fallbacks (less important for rare fallbacks).
         − Transaction begin/end instructions typically do not result in cache misses, but do have memory-ordering consequences.
         − Contention aborts conflicting transactions, even if they have been running for a long time.
         − Privatized data contributes to transaction size.

Hardware Support:
    Locking: + Commodity hardware suffices.
             + Performance is insensitive to cache-geometry details.
    HTM: − New hardware required (and is starting to become available).
         − Performance depends critically on cache geometry.

Software Support:
    Locking: + APIs exist, large body of code and experience, debuggers operate naturally.
    HTM: − APIs emerging, little experience outside of DBMS, breakpoints mid-transaction can be problematic.

Interaction With Other Mechanisms:
    Locking: + Long experience of successful interaction.
    HTM: ⇓ Just beginning investigation of interaction.

Practical Apps:
    Locking: + Yes.
    HTM: + Yes.

Wide Applicability:
    Locking: + Yes.
    HTM: − Jury still out, but likely to win significant use.

Table 15.1: Comparison of Locking and HTM ("+" is Advantage, "-" is Disadvantage, "⇓" is Strong Disadvantage)

15.3.3 HTM Weaknesses WRT Locking When Augmented

Practitioners have long used reference counting, atomic operations, non-blocking data structures, hazard pointers, and RCU to avoid some of the shortcomings of locking. For example, deadlock can be avoided in many cases by using reference counts, hazard pointers, or RCU to protect data structures, particularly for read-only critical sections [Mic04, HLM02, DMS+12, GMTW08, HMBW07]. These approaches also reduce the need to partition data structures [McK12a]. RCU further provides contention-free, wait-free read-side primitives [DMS+12].

Adding these considerations to Table 15.1 results in the updated comparison between augmented locking and HTM shown in Table 15.2. A summary of the differences between the two tables is as follows:

1. Use of non-blocking read-side mechanisms alleviates deadlock issues.

2. Read-side mechanisms such as hazard pointers and RCU can operate efficiently on non-partitionable data.

3. Hazard pointers and RCU do not contend with each other or with updaters, allowing excellent performance and scalability for read-mostly workloads.

4. Hazard pointers and RCU provide forward-progress guarantees (lock freedom and wait-freedom, respectively).

5. Privatization operations for hazard pointers and RCU are straightforward.

Basic Idea:
    Augmented locking: Allow only one thread at a time to access a given set of objects.
    HTM: Cause a given operation over a set of objects to execute atomically.

Scope:
    Augmented locking: + Handles all operations.
    HTM: + Handles revocable operations.
         − Irrevocable operations force fallback (typically to locking).

Composability:
    Augmented locking: + Readers limited only by grace-period-wait operations.
                       − Updaters limited by deadlock. Readers reduce deadlock.
    HTM: ⇓ Limited by irrevocable operations, transaction size, and deadlock (assuming lock-based fallback code).

Scalability & Performance:
    Augmented locking: − Data must be partitionable to avoid lock contention among updaters.
                       + Partitioning not needed for readers.
                       ⇓ Partitioning for updaters must typically be fixed at design time.
                       ⇓ Updater locking primitives typically result in expensive cache misses and memory-barrier instructions.
                       + Update-side contention effects are focused on acquisition and release, so that the critical section runs at full speed.
                       + Readers do not contend with updaters or with each other.
                       + Read-side primitives are typically wait-free with low overhead (lock-free for hazard pointers).
                       + Privatization operations are simple, intuitive, performant, and scalable when data is visible only to updaters.
                       − Privatization operations are expensive (though still intuitive and scalable) for reader-visible data.
    HTM: − Data must be partitionable to avoid conflicts.
         + Dynamic adjustment of partitioning carried out automatically down to cacheline boundaries.
         − Partitioning required for fallbacks (less important for rare fallbacks).
         − Transaction begin/end instructions typically do not result in cache misses, but do have memory-ordering consequences.
         − Contention aborts conflicting transactions, even if they have been running for a long time.
         − Read-only transactions subject to conflicts and rollbacks. No forward-progress guarantees other than those supplied by fallback code.
         − Privatized data contributes to transaction size.

Hardware Support:
    Augmented locking: + Commodity hardware suffices.
                       + Performance is insensitive to cache-geometry details.
    HTM: − New hardware required (and is starting to become available).
         − Performance depends critically on cache geometry.

Software Support:
    Augmented locking: + APIs exist, large body of code and experience, debuggers operate naturally.
    HTM: − APIs emerging, little experience outside of DBMS, breakpoints mid-transaction can be problematic.

Interaction With Other Mechanisms:
    Augmented locking: + Long experience of successful interaction.
    HTM: ⇓ Just beginning investigation of interaction.

Practical Apps:
    Augmented locking: + Yes.
    HTM: + Yes.

Wide Applicability:
    Augmented locking: + Yes.
    HTM: − Jury still out, but likely to win significant use.

Table 15.2: Comparison of Locking (Augmented by RCU or Hazard Pointers) and HTM ("+" is Advantage, "-" is Disadvantage, "⇓" is Strong Disadvantage)

Of course, it is also possible to augment HTM, as discussed in the next section.

15.3.4 Where Does HTM Best Fit In?

Although it will likely be some time before HTM's area of applicability can be as crisply delineated as that shown for RCU in Figure 8.34 on page 195, that is no reason not to start moving in that direction.

HTM seems best suited to update-heavy workloads involving relatively small changes to disparate portions of relatively large in-memory data structures running on large multiprocessors, as this meets the size restrictions of current HTM implementations while minimizing the probability of conflicts and attendant aborts and rollbacks. This scenario is also one that is relatively difficult to handle given current synchronization primitives.

Use of locking in conjunction with HTM seems likely to overcome HTM's difficulties with irrevocable operations, while use of RCU or hazard pointers might alleviate HTM's transaction-size limitations for read-only operations that traverse large fractions of the data structure. Current HTM implementations unconditionally abort an update transaction that conflicts with an RCU or hazard-pointer reader, but perhaps future HTM implementations will interoperate more smoothly with these synchronization mechanisms. In the meantime, the probability of an update conflicting with a large RCU or hazard-pointer read-side critical section should be much smaller than the probability of conflicting with the equivalent read-only transaction.16
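To make this concrete, here is a minimal sketch of HTM-based elision of a simple test-and-set lock, using Intel's RTM intrinsics (_xbegin(), _xend(), and _xabort()) from <immintrin.h> and compiled with -mrtm. The struct, the helper functions, and the retry count are illustrative assumptions rather than any standard API, and production-quality elision (for example, glibc's elided pthread mutexes) must handle many additional details. The key point is that reading the lock word inside the transaction adds it to the transaction's read set, so that a thread that really acquires the lock aborts all concurrently elided critical sections, preserving mutual exclusion.

#include <immintrin.h>	/* Intel RTM intrinsics; compile with -mrtm. */

#define MAX_TX_RETRIES 3

struct elided_lock {
	volatile int locked;		/* Simple test-and-set lock word. */
};

static void spin_acquire(struct elided_lock *l)
{
	while (__sync_lock_test_and_set(&l->locked, 1))
		while (l->locked)
			continue;	/* Wait until the lock looks free. */
}

static void spin_release(struct elided_lock *l)
{
	__sync_lock_release(&l->locked);
}

/* Returns nonzero if the lock was really acquired, zero if it was elided. */
static int elided_acquire(struct elided_lock *l)
{
	int i;

	for (i = 0; i < MAX_TX_RETRIES; i++) {
		unsigned int status = _xbegin();

		if (status == _XBEGIN_STARTED) {
			if (!l->locked)
				return 0;  /* Elided: run transactionally. */
			_xabort(0xff);	   /* Lock really held: serialize. */
		}
		if (!(status & _XABORT_RETRY))
			break;		   /* Abort looks permanent. */
	}
	spin_acquire(l);		   /* Irrevocable fallback path. */
	return 1;
}

static void elided_release(struct elided_lock *l, int acquired)
{
	if (acquired)
		spin_release(l);
	else
		_xend();		   /* Commit the elided critical section. */
}

The caller uses the return value of elided_acquire() to decide how to release, and any irrevocable operation (such as I/O) inside an elided critical section will abort the transaction and force a trip through the fallback path. Even with such elision in place, the interplay between HTM updates and RCU or hazard-pointer readers is governed by the conflict considerations discussed above.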
Nevertheless, it is quite possible that a steady stream of RCU or hazard-pointer readers might starve updaters due to a corresponding steady stream of conflicts. This vulnerability could be eliminated (perhaps at significant hardware cost and complexity) by giving extra-transactional reads the pre-transaction copy of the memory location being loaded.

The fact that HTM transactions must have fallbacks might in some cases force static partitionability of data structures back onto HTM. This limitation might be alleviated if future HTM implementations provide forward-progress guarantees, which might eliminate the need for fallback code in some cases, which in turn might allow HTM to be used efficiently in situations with higher conflict probabilities.

16 It is quite ironic that strictly transactional mechanisms are appearing in shared-memory systems at just about the time that NoSQL databases are relaxing the traditional database-application reliance on strict transactions.

In short, although HTM is likely to have important uses and applications, it is another tool in the parallel programmer's toolbox, not a replacement for the toolbox in its entirety.

15.3.5 Potential Game Changers

Game changers that could greatly increase the need for HTM include the following:

1. Forward-progress guarantees.

2. Transaction-size increases.

3. Improved debugging support.

4. Weak atomicity.

These are expanded upon in the following sections.

15.3.5.1 Forward-Progress Guarantees

As was discussed in Section 15.3.2.4, current HTM implementations lack forward-progress guarantees, which requires that fallback software be available to handle HTM failures. Of course, it is easy to demand guarantees, but not always so easy to provide them. In the case of HTM, obstacles to guarantees can include cache size and associativity, TLB size and associativity, transaction duration and interrupt frequency, and scheduler implementation.

Cache size and associativity were discussed in Section 15.3.2.1, along with some research intended to work around current limitations. However, HTM forward-progress guarantees would come with size limits, large though these limits might one day be. So why don't current HTM implementations provide forward-progress guarantees for small transactions, for example, transactions limited to the associativity of the cache? One potential reason might be the need to deal with hardware failure. For example, a failing cache SRAM cell might be handled by deactivating the failing cell, thus reducing the associativity of the cache and therefore also the maximum size of transactions that can be guaranteed forward progress. Given that this would simply decrease the guaranteed transaction size, it seems likely that other reasons are at work. Perhaps providing forward-progress guarantees on production-quality hardware is more difficult than one might think, an entirely plausible explanation given the difficulty of making forward-progress guarantees in software. Moving a problem from software to hardware does not necessarily make it easier to solve.

Given a physically tagged and indexed cache, it is not enough for the transaction to fit in the cache. Its address translations must also fit in the TLB. Any forward-progress guarantees must therefore also take TLB size and associativity into account.

Given that interrupts, traps, and exceptions abort transactions in current HTM implementations, it is necessary that the execution duration of a given transaction be shorter than the expected interval between interrupts.
No matter how little data a given transaction touches, if it runs too long, it will be aborted. Therefore, any forward-progress guarantees must be conditioned not only on transaction size, but also on transaction duration.

Forward-progress guarantees depend critically on the ability to determine which of several conflicting transactions should be aborted. It is all too easy to imagine an endless series of transactions, each aborting an earlier transaction only to itself be aborted by a later transaction, so that none of the transactions actually commit. The complexity of conflict handling is evidenced by the large number of HTM conflict-resolution policies that have been proposed [ATC+11, LS11]. Additional complications are introduced by extra-transactional accesses, as noted by Blundell [BLM06]. It is easy to blame the extra-transactional accesses for all of these problems, but the folly of this line of thinking is easily demonstrated by placing each of the extra-transactional accesses into its own single-access transaction. It is the pattern of accesses that is the issue, not whether or not they happen to be enclosed in a transaction.

Finally, any forward-progress guarantees for transactions also depend on the scheduler, which must let the thread executing the transaction run long enough to successfully commit.

So there are significant obstacles to HTM vendors offering forward-progress guarantees. However, the impact of any of them doing so would be enormous. It would mean that HTM transactions would no longer need software fallbacks, which would mean that HTM could finally deliver on the TM promise of deadlock elimination.

And in late 2012, the IBM mainframe announced an HTM implementation that includes constrained transactions in addition to the usual best-effort HTM implementation [JSG12]. A constrained transaction starts with the tbeginc instruction instead of the tbegin instruction that is used for best-effort transactions. Constrained transactions are guaranteed to always complete (eventually), so if a transaction aborts, rather than branching to a fallback path (as is done for best-effort transactions), the hardware instead restarts the transaction at the tbeginc instruction.

The mainframe architects needed to take extreme measures to deliver on this forward-progress guarantee. If a given constrained transaction repeatedly fails, the CPU might disable branch prediction, force in-order execution, and even disable pipelining. If the repeated failures are due to high contention, the CPU might disable speculative fetches, introduce random delays, and even serialize execution of the conflicting CPUs. "Interesting" forward-progress scenarios involve as few as two CPUs or as many as one hundred CPUs. Perhaps these extreme measures provide some insight as to why other CPUs have thus far refrained from offering constrained transactions.

As the name implies, constrained transactions are in fact severely constrained:

1. The maximum data footprint is four blocks of memory, where each block can be no larger than 32 bytes.

2. The maximum code footprint is 256 bytes.

3. If a given 4K page contains a constrained transaction's code, then that page may not contain that transaction's data.

4. The maximum number of assembly instructions that may be executed is 32.

5. Backwards branches are forbidden.

Nevertheless, these constraints support a number of important data structures, including linked lists, stacks, queues, and arrays.
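For concreteness, here is a minimal sketch of a constrained-transaction push onto a linked stack. It assumes GCC's z Systems HTM built-ins __builtin_tbeginc() and __builtin_tend(), which I believe are available when compiling with -mhtm for recent mainframes; the node layout is illustrative only, and real code must respect every constraint listed above. Note the absence of a fallback path: the hardware itself retries a failed constrained transaction.

struct node {
	struct node *next;
	int value;
};

struct node *top;			/* Shared stack head. */

/*
 * Push p onto the stack: one load of top plus two stores, comfortably
 * within the four-block data-footprint and 32-instruction limits.
 * A pop would use a similar constrained transaction.
 */
void constrained_push(struct node *p)
{
	__builtin_tbeginc();		/* Begin constrained transaction. */
	p->next = top;
	top = p;
	__builtin_tend();		/* Commit; eventual success guaranteed. */
}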
Constrained HTM therefore seems likely to become an important tool in the parallel programmer's toolbox.

15.3.5.2 Transaction-Size Increases

Forward-progress guarantees are important, but as we saw, they will be conditional guarantees based on transaction size and duration. It is important to note that even small-sized guarantees will be quite useful. For example, a guarantee of two cache lines is sufficient for a stack, queue, or deque. However, larger data structures require larger guarantees; for example, traversing a tree in order requires a guarantee equal to the number of nodes in the tree. Therefore, increasing the size of the guarantee also increases the usefulness of HTM, thereby increasing the need for CPUs to either provide it or provide good-and-sufficient workarounds.

15.3.5.3 Improved Debugging Support

Another inhibitor to transaction size is the need to debug the transactions. The problem with current mechanisms is that a single-step exception aborts the enclosing transaction. There are a number of workarounds for this issue, including emulating the processor (slow!), substituting STM for HTM (slow and slightly different semantics!), playback techniques using repeated retries to emulate forward progress (strange failure modes!), and full support of debugging HTM transactions (complex!).

Should one of the HTM vendors produce an HTM system that allows straightforward use of classical debugging techniques within transactions, including breakpoints, single stepping, and print statements, this will make HTM much more compelling. Some transactional-memory researchers are starting to recognize this problem as of 2013, with at least one proposal involving hardware-assisted debugging facilities [GKP13]. Of course, this proposal depends on readily available hardware gaining such facilities.

15.3.5.4 Weak Atomicity

Given that HTM is likely to face some sort of size limitations for the foreseeable future, it will be necessary for HTM to interoperate smoothly with other mechanisms. HTM's interoperability with read-mostly mechanisms such as hazard pointers and RCU would be improved if extra-transactional reads did not unconditionally abort transactions with conflicting writes—instead, the read could simply be provided with the pre-transaction value. In this way, hazard pointers and RCU could be used to allow HTM to handle larger data structures and to reduce conflict probabilities.

This is not necessarily simple, however. The most straightforward way of implementing this requires an additional state in each cache line and on the bus, which is a non-trivial added expense. The benefit that goes along with this expense is permitting large-footprint readers without the risk of starving updaters due to continual conflicts.

15.3.6 Conclusions

Although current HTM implementations appear to be poised to deliver real benefits, they also have significant shortcomings. The most significant shortcomings appear to be limited transaction sizes, the need for conflict handling, the need for aborts and rollbacks, the lack of forward-progress guarantees, the inability to handle irrevocable operations, and subtle semantic differences from locking.

Some of these shortcomings might be alleviated in future implementations, but it appears that there will continue to be a strong need to make HTM work well with the many other types of synchronization mechanisms, as noted earlier [MMW07, MMTW10].
In short, current HTM implementations appear to be welcome and useful additions to the parallel programmer's toolbox, and much interesting and challenging work is required to make use of them. However, they cannot be considered to be a magic wand with which to wave away all parallel-programming problems.

15.4 Functional Programming for Parallelism

When I took my first-ever functional-programming class in the early 1980s, the professor asserted that the side-effect-free functional-programming style was well-suited to trivial parallelization and analysis. Thirty years later, this assertion remains true, but mainstream production use of parallel functional languages is minimal, a state of affairs that might well stem from this professor's additional assertion that programs should neither maintain state nor do I/O. There is niche use of functional languages such as Erlang, and multithreaded support has been added to several other functional languages, but mainstream production usage remains the province of procedural languages such as C, C++, Java, and FORTRAN (usually augmented with OpenMP or MPI).

This situation naturally leads to the question "If analysis is the goal, why not transform the procedural language into a functional language before doing the analysis?" There are of course a number of objections to this approach, of which I list but three:

1. Procedural languages often make heavy use of global variables, which can be updated independently by different functions, or, worse yet, by multiple threads. Note that Haskell's monads were invented to deal with single-threaded global state, and that multi-threaded access to global state requires additional violence to the functional model.

2. Multithreaded procedural languages often use synchronization primitives such as locks, atomic operations, and transactions, which inflict added violence upon the functional model.

3. Procedural languages can alias function arguments, for example, by passing a pointer to the same structure via two different arguments to the same invocation of a given function. This can result in the function unknowingly updating that structure via two different (and possibly overlapping) code sequences, which greatly complicates analysis.

Of course, given the importance of global state, synchronization primitives, and aliasing, clever functional-programming experts have proposed any number of attempts to reconcile the functional programming model to them, monads being but one case in point.

Another approach is to compile the parallel procedural program into a functional program, then use functional-programming tools to analyze the result. But it is possible to do much better than this, given that any real computation is a large finite-state machine with finite input that runs for a finite time interval. This means that any real program can be transformed into an expression, albeit possibly an impractically large one [DHK12].

However, a number of the low-level kernels of parallel algorithms transform into expressions that are small enough to fit easily into the memories of modern computers. If such an expression is coupled with an assertion, checking to see if the assertion would ever fire becomes a satisfiability problem. Even though satisfiability problems are NP-complete, they can often be solved in much less time than would be required to generate the full state space.
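As a minimal illustration of this idea (my own example, not one drawn from [DHK12]), consider the following fragment; the multi-threaded encoding additionally records which write each read might observe, as described below.

#include <assert.h>

void absval_check(int x)
{
	int y;

	if (x < 0)
		y = -x;
	else
		y = x;
	assert(y >= 0);		/* Can this assertion ever fire? */
}

Giving each assignment its own variable version and conjoining the negated assertion yields the formula

	(y1 = -x0) ∧ (y2 = x0) ∧ (y3 = (x0 < 0 ? y1 : y2)) ∧ ¬(y3 ≥ 0),

which is handed to a satisfiability solver. Over unbounded integers this formula is unsatisfiable, so the assertion can never fire, but over 32-bit two's-complement arithmetic it is satisfied by x0 = INT_MIN, neatly exposing an overflow bug that a quick read of the C code might miss.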
In addition, the solution time appears to be independent of the underlying memory model, so that algorithms running on weakly ordered systems can be checked just as quickly as they could on sequentially consistent systems [AKT13].

The general approach is to transform the program into static single assignment (SSA) form, so that each assignment to a variable creates a separate version of that variable. This applies to assignments from all the active threads, so that the resulting expression embodies all possible executions of the code in question. The addition of an assertion entails asking whether any combination of inputs and initial values can result in the assertion firing, which, as noted above, is exactly the satisfiability problem.

One possible objection is that this approach does not gracefully handle arbitrary looping constructs. However, in many cases, this can be handled by unrolling the loop a finite number of times. In addition, perhaps some loops will also prove amenable to collapse via inductive methods.

Another possible objection is that spinlocks involve arbitrarily long loops, and any finite unrolling would fail to capture the full behavior of the spinlock. It turns out that this objection is easily overcome. Instead of modeling a full spinlock, model a trylock that attempts to obtain the lock, and aborts if it fails to immediately do so. The assertion must then be crafted so as to avoid firing in cases where a spinlock aborted due to the lock not being immediately available. Because the logic expression is independent of time, all possible concurrency behaviors will be captured via this approach.

A final objection is that this technique is unlikely to be able to handle a full-sized software artifact such as the millions of lines of code making up the Linux kernel. This is likely the case, but the fact remains that exhaustive validation of each of the much smaller parallel primitives within the Linux kernel would be quite valuable. And in fact the researchers spearheading this approach have applied it to non-trivial real-world code, including the RCU implementation in the Linux kernel (albeit to verify one of the less-profound properties of RCU).

It remains to be seen how widely applicable this technique is, but it is one of the more interesting innovations in the field of formal verification. And it might be better received than the traditional advice of writing all programs in functional form.

Appendix A  Important Questions

The following sections discuss some important questions relating to SMP programming. Each section also shows how to avoid having to worry about the corresponding question, which can be extremely important if your goal is to simply get your SMP code working as quickly and painlessly as possible — which is an excellent goal, by the way! Although the answers to these questions are often quite a bit less intuitive than they would be in a single-threaded setting, with a bit of work, they are not that difficult to understand. If you managed to master recursion, there is nothing in here that should pose an overwhelming challenge.

A.1 What Does "After" Mean?

"After" is an intuitive, but surprisingly difficult concept. An important non-intuitive issue is that code can be delayed at any point for any amount of time. Consider a producing and a consuming thread that communicate using a global struct with a timestamp "t" and integer fields "a", "b", and "c".
The producer loops, recording the current time (in seconds since 1970 in decimal) and then updating the values of "a", "b", and "c", as shown in Figure A.1. The consumer code also loops, recording the current time and copying the producer's timestamp along with the fields "a", "b", and "c", as shown in Figure A.2. At the end of the run, the consumer outputs a list of anomalous recordings, e.g., where time has appeared to go backwards.

Quick Quiz A.1: What SMP coding errors can you see in these examples? See time.c for full code.

One might intuitively expect that the difference between the producer and consumer timestamps would be quite small, as it should not take much time for the producer to record the timestamps or the values. An excerpt of some sample output on a dual-core 1GHz x86 is shown in Table A.1. Here, the "seq" column is the number of times through the loop, the "time" column is the time of the anomaly in seconds, the "delta" column is the number of seconds the consumer's timestamp follows that of the producer (where a negative value indicates that the consumer has collected its timestamp before the producer did), and the columns labelled "a", "b", and "c" show the amount that these variables increased since the prior snapshot collected by the consumer.

Why is time going backwards? The number in parentheses is the difference in microseconds, w