Remove the need for most locking in memory.c.
Using thread-local storage for tracking memory allocations means that threads
no longer have to take a lock at all when allocating or freeing memory. This
particularly helps the gemm driver, since it does an allocation per invocation.
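The approach, as a minimal C++ sketch (illustrative only: memory.c is C and
uses different names; blas_alloc/blas_free, kBufferSize, and kMaxBuffers here
are made-up stand-ins for its real symbols):
```
#include <cstddef>
#include <cstdlib>

constexpr std::size_t kBufferSize = 1 << 20;  // stand-in for a fixed BUFFER_SIZE
constexpr int kMaxBuffers = 64;

struct BufferSlot {
  void* addr = nullptr;
  bool in_use = false;
};

// thread_local: every thread gets a private copy of the table, so the
// alloc/free fast path never takes a lock -- no other thread can touch
// these slots.
thread_local BufferSlot tls_table[kMaxBuffers];

void* blas_alloc() {
  for (auto& slot : tls_table) {
    if (!slot.in_use) {
      // Buffers are a single fixed size, so a previously freed
      // allocation can simply be handed back out.
      if (slot.addr == nullptr) slot.addr = std::malloc(kBufferSize);
      slot.in_use = true;
      return slot.addr;
    }
  }
  return nullptr;  // table exhausted
}

void blas_free(void* addr) {
  for (auto& slot : tls_table) {
    if (slot.addr == addr) {
      slot.in_use = false;  // keep the buffer cached for reuse
      return;
    }
  }
}
```
This relies on a buffer being freed by the same thread that allocated it,
which is the pattern the gemm driver follows.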
Even with no threading at all this helps, since acquiring a lock still has a
cost even when it is uncontended:
Before this change, no threading:
```
----------------------------------------------------
Benchmark Time CPU Iterations
----------------------------------------------------
BM_SGEMM/4 102 ns 102 ns 13504412
BM_SGEMM/6 175 ns 175 ns 7997580
BM_SGEMM/8 205 ns 205 ns 6842073
BM_SGEMM/10 266 ns 266 ns 5294919
BM_SGEMM/16 478 ns 478 ns 2963441
BM_SGEMM/20 690 ns 690 ns 2144755
BM_SGEMM/32 1906 ns 1906 ns 716981
BM_SGEMM/40 2983 ns 2983 ns 473218
BM_SGEMM/64 9421 ns 9422 ns 148450
BM_SGEMM/72 12630 ns 12631 ns 112105
BM_SGEMM/80 15845 ns 15846 ns 89118
BM_SGEMM/90 25675 ns 25676 ns 54332
BM_SGEMM/100 29864 ns 29865 ns 47120
BM_SGEMM/112 37841 ns 37842 ns 36717
BM_SGEMM/128 56531 ns 56532 ns 25361
BM_SGEMM/140 75886 ns 75888 ns 18143
BM_SGEMM/150 98493 ns 98496 ns 14299
BM_SGEMM/160 102620 ns 102622 ns 13381
BM_SGEMM/170 135169 ns 135173 ns 10231
BM_SGEMM/180 146170 ns 146172 ns 9535
BM_SGEMM/189 190226 ns 190231 ns 7397
BM_SGEMM/200 194513 ns 194519 ns 7210
BM_SGEMM/256 396561 ns 396573 ns 3531
```
With this change:
```
----------------------------------------------------
Benchmark Time CPU Iterations
----------------------------------------------------
BM_SGEMM/4 95 ns 95 ns 14500387
BM_SGEMM/6 166 ns 166 ns 8381763
BM_SGEMM/8 196 ns 196 ns 7277044
BM_SGEMM/10 256 ns 256 ns 5515721
BM_SGEMM/16 463 ns 463 ns 3025197
BM_SGEMM/20 636 ns 636 ns 2070213
BM_SGEMM/32 1885 ns 1885 ns 739444
BM_SGEMM/40 2969 ns 2969 ns 472152
BM_SGEMM/64 9371 ns 9372 ns 148932
BM_SGEMM/72 12431 ns 12431 ns 112919
BM_SGEMM/80 15615 ns 15616 ns 89978
BM_SGEMM/90 25397 ns 25398 ns 55041
BM_SGEMM/100 29445 ns 29446 ns 47540
BM_SGEMM/112 37530 ns 37531 ns 37286
BM_SGEMM/128 55373 ns 55375 ns 25277
BM_SGEMM/140 76241 ns 76241 ns 18259
BM_SGEMM/150 102196 ns 102200 ns 13736
BM_SGEMM/160 101521 ns 101525 ns 13556
BM_SGEMM/170 136182 ns 136184 ns 10567
BM_SGEMM/180 146861 ns 146864 ns 9035
BM_SGEMM/189 192632 ns 192632 ns 7231
BM_SGEMM/200 198547 ns 198555 ns 6995
BM_SGEMM/256 392316 ns 392330 ns 3539
```
Before this change, when built with USE_THREAD=1 and
GEMM_MULTITHREAD_THRESHOLD=4, the cost of small matrix operations was
dominated by thread locking (look at the sizes smaller than 32) even when no
threads were explicitly spawned:
```
----------------------------------------------------
Benchmark Time CPU Iterations
----------------------------------------------------
BM_SGEMM/4 328 ns 328 ns 4170562
BM_SGEMM/6 396 ns 396 ns 3536400
BM_SGEMM/8 418 ns 418 ns 3330102
BM_SGEMM/10 491 ns 491 ns 2863047
BM_SGEMM/16 710 ns 710 ns 2028314
BM_SGEMM/20 871 ns 871 ns 1581546
BM_SGEMM/32 2132 ns 2132 ns 657089
BM_SGEMM/40 3197 ns 3196 ns 437969
BM_SGEMM/64 9645 ns 9645 ns 144987
BM_SGEMM/72 35064 ns 32881 ns 50264
BM_SGEMM/80 37661 ns 35787 ns 42080
BM_SGEMM/90 36507 ns 36077 ns 40091
BM_SGEMM/100 32513 ns 31850 ns 48607
BM_SGEMM/112 41742 ns 41207 ns 37273
BM_SGEMM/128 67211 ns 65095 ns 21933
BM_SGEMM/140 68263 ns 67943 ns 19245
BM_SGEMM/150 121854 ns 115439 ns 10660
BM_SGEMM/160 116826 ns 115539 ns 10000
BM_SGEMM/170 126566 ns 122798 ns 11960
BM_SGEMM/180 130088 ns 127292 ns 11503
BM_SGEMM/189 120309 ns 116634 ns 13162
BM_SGEMM/200 114559 ns 110993 ns 10000
BM_SGEMM/256 217063 ns 207806 ns 6417
```
and after this change it's gone (note these numbers also include my other
change, which reduces calls to num_cpu_avail):
```
----------------------------------------------------
Benchmark Time CPU Iterations
----------------------------------------------------
BM_SGEMM/4 95 ns 95 ns 12347650
BM_SGEMM/6 166 ns 166 ns 8259683
BM_SGEMM/8 193 ns 193 ns 7162210
BM_SGEMM/10 258 ns 258 ns 5415657
BM_SGEMM/16 471 ns 471 ns 2981009
BM_SGEMM/20 666 ns 666 ns 2148002
BM_SGEMM/32 1903 ns 1903 ns 738245
BM_SGEMM/40 2969 ns 2969 ns 473239
BM_SGEMM/64 9440 ns 9440 ns 148442
BM_SGEMM/72 37239 ns 33330 ns 46813
BM_SGEMM/80 57350 ns 55949 ns 32251
BM_SGEMM/90 36275 ns 36249 ns 42259
BM_SGEMM/100 31111 ns 31008 ns 45270
BM_SGEMM/112 43782 ns 40912 ns 34749
BM_SGEMM/128 67375 ns 64406 ns 22443
BM_SGEMM/140 76389 ns 67003 ns 21430
BM_SGEMM/150 72952 ns 71830 ns 19793
BM_SGEMM/160 97039 ns 96858 ns 11498
BM_SGEMM/170 123272 ns 122007 ns 11855
BM_SGEMM/180 126828 ns 126505 ns 11567
BM_SGEMM/189 115179 ns 114665 ns 11044
BM_SGEMM/200 89289 ns 87259 ns 16147
BM_SGEMM/256 226252 ns 222677 ns 7375
```
I've also tested this with ThreadSanitizer and found no data races during
execution. I'm not sure why size 200 is consistently faster than its
neighbors; we must be hitting some optimal cache size or similar.
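For reference, the numbers above come from a Google-Benchmark-style harness
along the following lines (a sketch only; the actual harness may differ, and
the column-major no-transpose call with alpha=1, beta=0 is an assumption):
```
#include <benchmark/benchmark.h>
#include <cblas.h>
#include <vector>

static void BM_SGEMM(benchmark::State& state) {
  const int n = static_cast<int>(state.range(0));
  std::vector<float> a(n * n, 1.0f), b(n * n, 1.0f), c(n * n, 0.0f);
  for (auto _ : state) {
    // C = alpha * A * B + beta * C on n x n single-precision matrices,
    // so each iteration makes one pass through the gemm driver.
    cblas_sgemm(CblasColMajor, CblasNoTrans, CblasNoTrans, n, n, n,
                1.0f, a.data(), n, b.data(), n, 0.0f, c.data(), n);
  }
}
BENCHMARK(BM_SGEMM)->Arg(4)->Arg(6)->Arg(8)->Arg(10)->Arg(16)->Arg(20)
    ->Arg(32)->Arg(40)->Arg(64)->Arg(72)->Arg(80)->Arg(90)->Arg(100)
    ->Arg(112)->Arg(128)->Arg(140)->Arg(150)->Arg(160)->Arg(170)
    ->Arg(180)->Arg(189)->Arg(200)->Arg(256);
BENCHMARK_MAIN();
```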