1 // Licensed to the .NET Foundation under one or more agreements.
2 // The .NET Foundation licenses this file to you under the MIT license.
3 // See the LICENSE file in the project root for more information.
15 Implementation of shared memory infrastructure for IPC
19 Interprocess synchronization
22 There doesn't seem to be ANY synchronization mechanism that will work
23 inter-process AND be pthread-safe. FreeBSD's pthread implementation has no
24 support for inter-process synchronization (PTHREAD_PROCESS_SHARED);
"traditional" inter-process synchronization functions, on the other hand, are
26 not pthread-aware, and thus will block entire processes instead of only the
29 From suggestions and information obtained on the freebsd-hackers mailing list,
30 I have come up with 2 possible strategies to ensure serialized access to our
33 Note that the estimates of relative efficiency are wild guesses; my assumptions
34 are that blocking entire processes is least efficient, busy wait somewhat
35 better, and anything that does neither is preferable. However, the overhead of
complex solutions is likely to have a significant impact on performance
38 Option 1 : very simple; possibly less efficient. in 2 words : "busy wait"
41 while(InterlockedCompareExchange(spinlock_in_shared_memory, 1, 0)
44 In other words, if a value is 0, set it to 1; otherwise, try again until we
succeed. use sched_yield to give the system a chance to schedule other threads
46 while we wait. (once a thread succeeds at this, it does its work, then sets
One inconvenience : threads will not unblock in the order they are blocked;
49 once a thread releases the mutex, whichever waiting thread is scheduled next
50 will be unblocked. This is what is called the "thundering herd" problem, and in
51 extreme cases, can lead to starvation
52 Update : we'll set the spinlock to our PID instead of 1, that way we can find
53 out if the lock is held by a dead process.
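In rough pseudocode (illustrative only; my_pid is a placeholder, and
spinlock_in_shared_memory stands for the shared spinlock variable, not the
actual names used further down in this file) :

    pid_t my_pid = getpid();
    while(InterlockedCompareExchange(spinlock_in_shared_memory, my_pid, 0) != 0)
        sched_yield();                         // lost the race; let others run
    ... do the work on shared memory ...
    InterlockedExchange(spinlock_in_shared_memory, 0);    // release the lock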
55 Option 2 : possibly more efficient, much more complex, borders on
56 "over-engineered". I'll explain it in stages, in the same way I deduced it.
58 Option 2.1 : probably less efficient, reasonably simple. stop at step 2)
60 1) The minimal, original idea was to use SysV semaphores for synchronization.
61 This didn't work, because semaphores block the entire process, which can easily
62 lead to deadlocks (thread 1 takes sem, thread 2 tries to take sem, blocks
63 process, thread 1 is blocked and never releases sem)
65 2) (this is option 2.1) Protect the use of the semaphores in critical sections.
66 Enter the critical section before taking the semaphore, leave the section after
67 releasing the semaphore. This ensures that 2 threads of the same process will
68 never try to acquire the semaphore at the same time, which avoids deadlocks.
69 However, the entire process still blocks if another process has the semaphore.
70 Here, unblocking order should match blocking order (assuming the semaphores work
71 properly); therefore, no risk of starvation.
73 3) This is where it gets complicated. To avoid blocking whole processes, we
74 can't use semaphores. One suggestion I got was to use multi-ended FIFOs, here's
77 -as in option 1, use InterlockedCompareExchange on a value in shared memory.
-if this was not successful (someone else has locked the shared memory), then :
79 -open a special FIFO for reading; try to read 1 byte. This will block until
80 someone writes to it, and *should* only block the current thread. (note :
81 more than one thread/process can open the same FIFO and block on read(),
82 in this case, only one gets woken up when someone writes to it.
83 *which* one is, again, not predictable; this may lead to starvation)
84 -once we are unblocked, we have the lock.
85 -once we have the lock (either from Interlocked...() or from read()),
87 -once the work is done, we open the FIFO for writing. this will fail if no one
89 -if no one is listening, release the lock by setting the shared memory value
91 -if someone is listening, write 1 byte to the FIFO to wake someone, then close
92 the FIFO. the value in shared memory will remain nonzero until a thread tries
93 to wake the next one and sees no one is listening.
95 problem with this option : it is possible for a thread to call Interlocked...()
96 BETWEEN the failed "open for write" attempt and the subsequent restoration of
97 the SHM value back to zero. In this case, that thread will go to sleep and will
98 not wake up until *another* thread asks for the lock, takes it and releases it.
100 so to fix that, we come to step
102 4) Instead of using InterlockedCompareExchange, use a SysV semaphore :
103 -when taking the lock :
105 -try to take the lock (check if value is zero, change it to 1 if it is)
106 -if we fail : open FIFO for reading, release the semaphore, read() and block
107 -if we succeed : release the semaphore
108 -when releasing the lock :
111 -if we succeed, release semaphore, then write value
112 -if we fail, reset SHM value to 0, then release semaphore.
114 Yes, using a SysV semaphore will block the whole process, but for a very short
115 time (unlike option 2.1)
116 problem with this : again, we get deadlocks if 2 threads from a single process
try to take the semaphore. So like in option 2.1, we have to wrap the semaphore
118 usage in a critical section. (complex enough yet?)
120 so the locking sequence becomes EnterCriticalSection - take semaphore - try to
121 lock - open FIFO - release semaphore - LeaveCriticalSection - read
122 and the unlocking sequence becomes EnterCS - take sem - open FIFO - release
123 sem - LeaveCS - write
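As a rough sketch (illustrative pseudocode only; take_sem/release_sem stand in
for the SysV semaphore operations, and FIFO_NAME, shm_lock_value, fifo_fd,
got_it are placeholder names) :

    lock :
        EnterCriticalSection(&cs);
        take_sem();
        if(shm_lock_value == 0) { shm_lock_value = 1; got_it = TRUE; }
        else fifo_fd = open(FIFO_NAME, O_RDONLY);
        release_sem();
        LeaveCriticalSection(&cs);
        if(!got_it) read(fifo_fd, &byte, 1);    // blocks this thread only

    unlock :
        EnterCriticalSection(&cs);
        take_sem();
        fifo_fd = open(FIFO_NAME, O_WRONLY|O_NONBLOCK); // fails if no reader waits
        if(fifo_fd == -1) shm_lock_value = 0;           // no waiter : really release
        release_sem();
        LeaveCriticalSection(&cs);
        if(fifo_fd != -1) { write(fifo_fd, &byte, 1); close(fifo_fd); }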
125 Once again, the unblocking order probably won't match the blocking order.
This could be fixed by using multiple FIFOs : waiting threads open their own
127 personal FIFO, write the ID of their FIFO to another FIFO. The thread that wants
128 to release the lock reads ID from that FIFO, determines which FIFO to open for
129 writing and writes a byte to it. This way, whoever wrote its ID to the FIFO
130 first will be first to awake. How's that for complexity?
132 So to summarize, the options are
134 2.1 - semaphores + critical sections (whole process blocks)
135 2 - semaphores + critical sections + FIFOs (minimal process blocking)
136 2.2 - option 2 with multiple FIFOs (minimal process blocking, order preserved)
138 Considering the overhead involved in options 2 & 2.2, it is our guess that
139 option 1 may in fact be more efficient, and this is how we'll implement it for
140 the moment. Note that other platforms may not present the same difficulties
141 (i.e. other pthread implementations may support inter-process mutexes), and may
142 be able to use a simpler, more efficient approach.
145 It is important for the shared memory implementation to be as foolproof as
146 possible. Since more than one process will be able to modify the shared data,
147 it becomes possible for one unstable process to destabilize the others. The
148 simplest example is a process that dies while modifying shared memory : if
149 it doesn't release its lock, we're in trouble. (this case will be taken care
of by using PIDs in the spinlock; this way we can check if the locking process
158 #include "pal/palinternal.h"
159 #include "pal/dbgmsg.h"
160 #include "pal/shmemory.h"
161 #include "pal/critsect.h"
162 #include "pal/shmemory.h"
163 #include "pal/init.h"
164 #include "pal/process.h"
165 #include "pal/misc.h"
167 #include <sys/types.h>
168 #include <sys/stat.h>
169 #include <sys/mman.h>
177 #if HAVE_YIELD_SYSCALL
178 #include <sys/syscall.h>
179 #endif /* HAVE_YIELD_SYSCALL */
181 SET_DEFAULT_DEBUG_CHANNEL(SHMEM);
183 /* Macro-definitions **********************************************************/
185 /* rounds 'val' up to be divisible by 'r'. 'r' must be a power of two. */
187 #define roundup(val, r) ( ((val)+(r)-1) & ~( (r)-1 ) )
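/* For instance, roundup(10, 8) yields 16 and roundup(16, 8) stays 16 : the
   value is bumped up to the next multiple of the (power-of-two) alignment. */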
190 #define SEGMENT_NAME_SUFFIX_LENGTH 10
192 #if defined(_DEBUG) && defined(_HPUX_)
193 #define TRACK_SHMLOCK_OWNERSHIP
194 #endif // _DEBUG && _HPUX_
198 High byte is SHM segment number
199 Low bytes are offset in the segment
201 #define SHMPTR_SEGMENT(shmptr) \
202 (((shmptr)>>24)&0xFF)
204 #define SHMPTR_OFFSET(shmptr) \
205 ((shmptr)&0x00FFFFFF)
207 #define MAKE_SHMPTR(segment,offset) \
208 ((SHMPTR)((((segment)&0xFF)<<24)|((offset)&0x00FFFFFF)))
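/* Example (illustrative values only) : MAKE_SHMPTR(2, 0x1a40) yields 0x02001a40;
   SHMPTR_SEGMENT() then recovers segment 2 and SHMPTR_OFFSET() offset 0x1a40. */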
210 /*#define MAX_SEGMENTS 256*//*definition is now in shmemory.h*/
212 /* Use MAP_NOSYNC to improve performance if it's available */
213 #if defined(MAP_NOSYNC)
214 #define MAPFLAGS MAP_NOSYNC|MAP_SHARED
216 #define MAPFLAGS MAP_SHARED
220 /* Type definitions ***********************************************************/
224 SPS_16 = 0, /* 16 bytes */
225 SPS_32, /* 32 bytes */
226 SPS_64, /* 64 bytes */
227 SPS_MAXPATHx2, /* 520 bytes, for long Unicode paths */
231 /* Block size associated to each SPS identifier */
232 static const int block_sizes[SPS_LAST] = {16,32,64,roundup((MAX_LONGPATH+1)*2, sizeof(INT64))};
236 Description of a shared memory pool for a specific block size.
238 Note on pool structure :
239 first_free identifies the first available SHMPTR in the block. Free blocks are
240 arranged in a linked list, each free block indicating the location of the next
241 one. To walk the list, do something like this :
SHMPTR *shmptr_ptr = (SHMPTR *)SHMPTR_TO_PTR(pool->first_free);
while(shmptr_ptr)
{
    SHMPTR next = *shmptr_ptr;
    shmptr_ptr = (SHMPTR *)SHMPTR_TO_PTR(next);
}
251 int item_size; /* size of 1 block, in bytes */
252 int num_items; /* total number of blocks in the pool */
253 int free_items; /* number of unused items in the pool */
254 SHMPTR first_free; /* location of first available block in the pool */
259 Description of a single shared memory segment
261 Notes on segment names :
next_segment contains the string generated by mkstemp() when a new segment is
263 generated. This allows processes to map segment files created by other
264 processes. To get the file name of a segment file, concatenate
265 "segment_name_prefix" and "next_segment".
267 Notes on pool segments :
268 Each segment is divided into one pool for each defined block size (SPS_*).
These pools are linked with pools in other segments to form one large pool for
270 each block size, so that SHMAlloc() doesn't have to search each segment to find
272 the first_ and last_pool_blocks indicate the first and last block in a single
273 segment for each block size. This allows SHMFree() to determine the size of a
274 block by comparing its value with these boundaries. (note that within each
275 segment, each pool is composed of a single contiguous block of memory)
279 Volatile<SHMPTR> first_pool_blocks[SPS_LAST];
280 Volatile<SHMPTR> last_pool_blocks[SPS_LAST];
281 } SHM_SEGMENT_HEADER;
285 Global information about the shared memory system
In addition to the standard SHM_SEGMENT_HEADER, the first segment contains some
287 information required to properly use the shared memory system.
289 The spinlock is used to ensure that only one process accesses shared memory at
the same time. A process can only take the spinlock if it contains 0, and
291 it takes the spinlock by placing its PID in it. (this allows a process to catch
292 the special case where it tries to take a spinlock it already owns.
294 The first_* members will contain the location of the first element in the
295 various linked lists of shared information
298 #ifdef TRACK_SHMLOCK_OWNERSHIP
300 #define SHMLOCK_OWNERSHIP_HISTORY_ARRAY_SIZE 5
302 #define CHECK_CANARIES(header) \
303 _ASSERTE(HeadSignature == header->dwHeadCanaries[0]); \
304 _ASSERTE(HeadSignature == header->dwHeadCanaries[1]); \
305 _ASSERTE(TailSignature == header->dwTailCanaries[0]); \
306 _ASSERTE(TailSignature == header->dwTailCanaries[1])
308 typedef struct _pid_and_tid
311 Volatile<pthread_t> tid;
314 const DWORD HeadSignature = 0x48454144;
315 const DWORD TailSignature = 0x5441494C;
317 #endif // TRACK_SHMLOCK_OWNERSHIP
321 SHM_SEGMENT_HEADER header;
322 #ifdef TRACK_SHMLOCK_OWNERSHIP
323 Volatile<DWORD> dwHeadCanaries[2];
324 #endif // TRACK_SHMLOCK_OWNERSHIP
325 Volatile<pid_t> spinlock;
326 #ifdef TRACK_SHMLOCK_OWNERSHIP
327 Volatile<DWORD> dwTailCanaries[2];
328 pid_and_tid pidtidCurrentOwner;
329 pid_and_tid pidtidOwners[SHMLOCK_OWNERSHIP_HISTORY_ARRAY_SIZE];
330 Volatile<ULONG> ulOwnersIdx;
331 #endif // TRACK_SHMLOCK_OWNERSHIP
332 SHM_POOL_INFO pools[SPS_LAST]; /* information about each memory pool */
333 Volatile<SHMPTR> shm_info[SIID_LAST]; /* basic blocks of shared information.*/
337 /* Static variables ***********************************************************/
339 /* Critical section to ensure that only one thread at a time accesses shared
341 -Using a spinlock means that processes must busy-wait for the lock to be
available. The critical section ensures that only one thread will busy-wait,
343 while the rest are put to sleep.
-Since the spinlock only contains a PID, it isn't possible to distinguish
345 between threads of the same process. This could be resolved by using 2
346 spinlocks, but this would introduce more busy-wait.
348 static CRITICAL_SECTION shm_critsec;
350 /* number of segments the current process knows about */
353 /* array containing the base address of each segment */
354 Volatile<LPVOID> shm_segment_bases[MAX_SEGMENTS];
356 /* number of locks the process currently holds (SHMLock calls without matching
357 SHMRelease). Because we take the critical section while inside a
358 SHMLock/SHMRelease pair, this is actually the number of locks held by a single
360 static Volatile<LONG> lock_count;
362 /* thread ID of thread holding the SHM lock. used for debugging purposes :
363 SHMGet/SetInfo will verify that the calling thread holds the lock */
364 static Volatile<HANDLE> locking_thread;
366 /* Constants ******************************************************************/
368 /* size of a single segment : 256KB */
369 static const int segment_size = 0x40000;
371 /* Static function prototypes *************************************************/
373 static SHMPTR SHMInitPool(SHMPTR first, int block_size, int pool_size,
374 SHM_POOL_INFO *pool);
375 static SHMPTR SHMLinkPool(SHMPTR first, int block_size, int num_blocks);
376 static BOOL SHMMapUnknownSegments(void);
377 static BOOL SHMAddSegment(void);
381 #define log_waste(x,y)
384 /* Public function implementations ********************************************/
389 Hook this process into the PAL shared memory system; initialize the shared
390 memory if no other process has done it.
393 BOOL SHMInitialize(void)
395 InternalInitializeCriticalSection(&shm_critsec);
400 SHM_FIRST_HEADER *header;
403 enum SHM_POOL_SIZES sps;
405 TRACE("Now initializing global shared memory system\n");
407 // Not really shared in CoreCLR; we don't try to talk to other CoreCLRs.
408 shm_segment_bases[0] = mmap(NULL, segment_size,PROT_READ|PROT_WRITE,
409 MAP_ANON|MAP_PRIVATE, -1, 0);
410 if(shm_segment_bases[0] == MAP_FAILED)
412 ERROR("mmap() failed; error is %d (%s)\n", errno, strerror(errno));
415 TRACE("Mapped first SHM segment at %p\n",shm_segment_bases[0].Load());
417 /* Initialize first segment's header */
418 header = (SHM_FIRST_HEADER *)shm_segment_bases[0].Load();
420 InterlockedExchange((LONG *)&header->spinlock, 0);
422 #ifdef TRACK_SHMLOCK_OWNERSHIP
423 header->dwHeadCanaries[0] = HeadSignature;
424 header->dwHeadCanaries[1] = HeadSignature;
425 header->dwTailCanaries[0] = TailSignature;
426 header->dwTailCanaries[1] = TailSignature;
428 // Check spinlock size
429 _ASSERTE(sizeof(DWORD) == sizeof(header->spinlock));
430 // Check spinlock alignment
431 _ASSERTE(0 == ((DWORD_PTR)&header->spinlock % (DWORD_PTR)sizeof(void *)));
432 #endif // TRACK_SHMLOCK_OWNERSHIP
434 #ifdef TRACK_SHMLOCK_OWNERSHIP
435 header->pidtidCurrentOwner.pid = 0;
436 header->pidtidCurrentOwner.tid = 0;
437 memset((void *)header->pidtidOwners, 0, sizeof(header->pidtidOwners));
438 header->ulOwnersIdx = 0;
439 #endif // TRACK_SHMLOCK_OWNERSHIP
441 /* SHM information array starts with NULLs */
442 memset((void *)header->shm_info, 0, SIID_LAST*sizeof(SHMPTR));
444 /* Initialize memory pools */
446 /* first pool starts right after header */
447 pool_start = roundup(sizeof(SHM_FIRST_HEADER), sizeof(INT64));
449 /* Same size for each pool, ensuring alignment is correct */
450 size = ((segment_size-pool_start)/SPS_LAST) & ~(sizeof(INT64)-1);
452 for (sps = static_cast<SHM_POOL_SIZES>(0); sps < SPS_LAST;
453 sps = static_cast<SHM_POOL_SIZES>(sps + 1))
455 pool_end = SHMInitPool(pool_start, block_sizes[sps], size,
456 (SHM_POOL_INFO *)&header->pools[sps]);
460 ERROR("SHMInitPool failed.\n");
461 munmap(shm_segment_bases[0],segment_size);
464 /* save first and last element of each pool for this segment */
465 header->header.first_pool_blocks[sps] = pool_start;
466 header->header.last_pool_blocks[sps] = pool_end;
468 /* next pool starts immediately after this one */
472 TRACE("Global shared memory initialization complete.\n");
478 /* hook into all SHM segments */
479 if(!SHMMapUnknownSegments())
481 ERROR("Error while mapping segments!\n");
491 Release all shared memory resources held; remove ourselves from the list of
492 registered processes, and remove all shared memory files if no process remains
494 Note that this function does not use thread suspension wrapper for unlink and free
495 because all thread objects are deleted before this function is called
499 void SHMCleanup(void)
501 SHM_FIRST_HEADER *header;
504 TRACE("Starting shared memory cleanup\n");
509 /* We should not be holding the spinlock at this point. If we are, release
the spinlock by setting it to 0. */
512 header = (SHM_FIRST_HEADER *)shm_segment_bases[0].Load();
514 _ASSERT_MSG(header->spinlock != my_pid,
515 "SHMCleanup called while the current process still owns the lock "
516 "[owner thread=%u, current thread: %u]\n",
517 locking_thread.Load(), THREADSilentGetCurrentThreadId());
519 /* Now for the interprocess stuff. */
520 DeleteCriticalSection(&shm_critsec);
523 /* Unmap memory segments */
524 while(shm_numsegments)
527 if ( -1 == munmap( shm_segment_bases[ shm_numsegments ],
530 ASSERT( "munmap() failed; errno is %d (%s).\n",
531 errno, strerror( errno ) );
536 TRACE("SHMCleanup complete!\n");
542 Allocate a block of memory of the specified size
545 size_t size : size of block required
548 A SHMPTR identifying the new block, or 0 on failure. Use SHMPTR_TO_PTR to
convert a SHMPTR into a usable pointer (but remember to lock the shared
553 SHMalloc will fail if the requested size is larger than a certain maximum.
554 At the moment, the maximum is 520 bytes (MAX_LONGPATH*2).
556 SHMPTR SHMalloc(size_t size)
558 enum SHM_POOL_SIZES sps;
561 SHM_FIRST_HEADER *header;
564 TRACE("SHMalloc() called; requested size is %u\n", size);
568 WARN("Got a request for a 0-byte block! returning 0\n");
572 /* Find the first block size >= requested size */
573 for (sps = static_cast<SHM_POOL_SIZES>(0); sps < SPS_LAST;
574 sps = static_cast<SHM_POOL_SIZES>(sps + 1))
576 if (size <= static_cast<size_t>(block_sizes[sps]))
582 /* If no block size is found, requested size was too large. */
583 if( SPS_LAST == sps )
585 ASSERT("Got request for shared memory block of %u bytes; maximum block "
586 "size is %d.\n", size, block_sizes[SPS_LAST-1]);
590 TRACE("Best block size is %d (%d bytes wasted)\n",
591 block_sizes[sps], block_sizes[sps]-size );
593 log_waste(sps, block_sizes[sps]-size);
596 header = (SHM_FIRST_HEADER *)shm_segment_bases[0].Load();
598 /* If there are no free items of the specified size left, it's time to
599 allocate a new shared memory segment.*/
600 if(header->pools[sps].free_items == 0)
602 TRACE("No blocks of %d bytes left; allocating new segment.\n",
606 ERROR("Unable to allocate new shared memory segment!\n");
612 /* Remove the first free block from the pool */
613 first_free = header->pools[sps].first_free;
614 shmptr_ptr = static_cast<SHMPTR*>(SHMPTR_TO_PTR(first_free));
616 if( 0 == first_free )
618 ASSERT("First free block in %d-byte pool (%08x) was invalid!\n",
619 block_sizes[sps], first_free);
624 /* the block "first_free" is the head of a linked list of free blocks;
625 take the next link in the list and set it as new head of list. */
626 next_free = *shmptr_ptr;
627 header->pools[sps].first_free = next_free;
628 header->pools[sps].free_items--;
630 /* make sure we're still in a sane state */
631 if(( 0 == header->pools[sps].free_items && 0 != next_free) ||
632 ( 0 != header->pools[sps].free_items && 0 == next_free))
634 ASSERT("free block count is %d, but next free block is %#x\n",
635 header->pools[sps].free_items, next_free);
636 /* assume all remaining blocks in the pool are corrupt */
637 header->pools[sps].first_free = 0;
638 header->pools[sps].free_items = 0;
640 else if (0 != next_free && 0 == SHMPTR_TO_PTR(next_free) )
642 ASSERT("Next free block (%#x) in %d-byte pool is invalid!\n",
643 next_free, block_sizes[sps]);
644 /* assume all remaining blocks in the pool are corrupt */
645 header->pools[sps].first_free = 0;
646 header->pools[sps].free_items = 0;
651 TRACE("Allocation successful; %d blocks of %d bytes left. Returning %08x\n",
652 header->pools[sps].free_items, block_sizes[sps], first_free);
659 Release a block of shared memory and put it back in the shared memory pool
662 SHMPTR shmptr : identifier of block to release
666 void SHMfree(SHMPTR shmptr)
670 SHM_SEGMENT_HEADER *header;
671 SHM_FIRST_HEADER *first_header;
672 enum SHM_POOL_SIZES sps;
677 WARN("can't SHMfree() a NULL SHMPTR!\n");
682 TRACE("Releasing SHMPTR 0x%08x\n", shmptr);
684 shmptr_ptr = static_cast<SHMPTR*>(SHMPTR_TO_PTR(shmptr));
688 ASSERT("Tried to free an invalid shared memory pointer 0x%08x\n", shmptr);
693 /* note : SHMPTR_TO_PTR has already validated the segment/offset pair */
694 segment = SHMPTR_SEGMENT(shmptr);
695 header = (SHM_SEGMENT_HEADER *)shm_segment_bases[segment].Load();
697 /* Find out the size of this block. Each segment tells where are its first
698 and last blocks for each block size, so we simply need to check in which
699 interval the block fits */
700 for (sps = static_cast<SHM_POOL_SIZES>(0); sps < SPS_LAST;
701 sps = static_cast<SHM_POOL_SIZES>(sps + 1))
703 if(header->first_pool_blocks[sps]<=shmptr &&
704 header->last_pool_blocks[sps]>=shmptr)
710 /* If we didn't find an interval, then the block doesn't really belong in
711 this segment (shouldn't happen, the offset check in SHMPTR_TO_PTR should
712 have caught this.) */
715 ASSERT("Shared memory pointer 0x%08x is out of bounds!\n", shmptr);
720 TRACE("SHMPTR 0x%08x is a %d-byte block located in segment %d\n",
721 shmptr, block_sizes[sps], segment);
723 /* Determine the offset of this block (in bytes) relative to the first
724 block of the same size in this segment */
725 offset = shmptr - header->first_pool_blocks[sps];
727 /* Make sure that the offset is a multiple of the block size; otherwise,
728 this isn't a real SHMPTR */
729 if( 0 != ( offset % block_sizes[sps] ) )
731 ASSERT("Shared memory pointer 0x%08x is misaligned!\n", shmptr);
736 /* Put the SHMPTR back in its pool. */
737 first_header = (SHM_FIRST_HEADER *)shm_segment_bases[0].Load();
739 /* first_free is the head of a linked list of free SHMPTRs. All we need to
740 do is make shmptr point to first_free, and set shmptr as the new head
742 *shmptr_ptr = first_header->pools[sps].first_free;
743 first_header->pools[sps].first_free = shmptr;
744 first_header->pools[sps].free_items++;
746 TRACE("SHMPTR 0x%08x released; there are now %d blocks of %d bytes "
747 "available\n", shmptr, first_header->pools[sps].free_items,
755 Restrict shared memory access to the current thread of the current process
763 see comments at the declaration of shm_critsec for rationale of critical
768 /* Hold the critical section until the lock is released */
769 PALCEnterCriticalSection(&shm_critsec);
771 _ASSERTE((0 == lock_count && 0 == locking_thread) ||
772 (0 < lock_count && (HANDLE)pthread_self() == locking_thread));
776 SHM_FIRST_HEADER *header;
777 pid_t my_pid, tmp_pid;
779 #ifdef TRACK_SHMLOCK_OWNERSHIP
781 #endif // TRACK_SHMLOCK_OWNERSHIP
783 TRACE("First-level SHM lock : taking spinlock\n");
785 header = (SHM_FIRST_HEADER *)shm_segment_bases[0].Load();
787 // Store the id of the current thread as the (only) one that is
788 // trying to grab the spinlock from the current process
789 locking_thread = (HANDLE)pthread_self();
795 #ifdef TRACK_SHMLOCK_OWNERSHIP
796 _ASSERTE(0 != my_pid);
797 _ASSERTE(getpid() == my_pid);
798 _ASSERTE(my_pid != header->spinlock);
799 CHECK_CANARIES(header);
800 #endif // TRACK_SHMLOCK_OWNERSHIP
803 // Try to grab the spinlock
805 tmp_pid = InterlockedCompareExchange((LONG *) &header->spinlock, my_pid,0);
807 #ifdef TRACK_SHMLOCK_OWNERSHIP
808 CHECK_CANARIES(header);
809 #endif // TRACK_SHMLOCK_OWNERSHIP
813 // TODO: workaround for VSW # 381564
815 if (0 == tmp_pid && my_pid != header->spinlock)
ERROR("InterlockedCompareExchange returned the Comparand but "
818 "failed to store the Exchange value to the Destination: "
819 "looping again [my_pid=%u header->spinlock=%u tmp_pid=%u "
820 "spincount=%d locking_thread=%u]\n", (DWORD)my_pid,
821 (DWORD)header->spinlock, (DWORD)tmp_pid, (int)spincount,
822 (HANDLE)locking_thread);
831 // Spinlock acquired: break out of the loop
835 /* Check if lock holder is alive. If it isn't, we can reset the
836 spinlock and try to take it again. If it is, we have to wait.
837 We use "spincount" to do this check only every 8th time through
838 the loop, for performance reasons.*/
839 if( (0 == (spincount&0x7)) &&
840 (-1 == kill(tmp_pid,0)) &&
843 TRACE("SHM spinlock owner (%08x) is dead; releasing its lock\n",
846 InterlockedCompareExchange((LONG *) &header->spinlock, 0, tmp_pid);
850 /* another process is holding the lock... we want to yield and
851 give the holder a chance to release the lock
852 The function sched_yield() only yields to a thread in the
current process; this doesn't help us much, and doesn't help
854 at all if there's only 1 thread. There doesn't seem to be
855 any clean way to force a yield to another process, but the
856 FreeBSD syscall "yield" does the job. We alternate between
857 both methods to give other threads of this process a chance
858 to run while we wait.
860 #if HAVE_YIELD_SYSCALL
863 #endif /* HAVE_YIELD_SYSCALL */
865 #if HAVE_YIELD_SYSCALL
/* use the syscall first, since we know we'll need to yield
870 to another process eventually - the lock can't be held
871 by the current process, thanks to the critical section */
872 syscall(SYS_yield, 0);
874 #endif /* HAVE_YIELD_SYSCALL */
877 // Increment spincount
881 _ASSERT_MSG(my_pid == header->spinlock,
882 "\n(my_pid = %u) != (header->spinlock = %u)\n"
885 "locking_thread = %u\n",
886 (DWORD)my_pid, (DWORD)header->spinlock,
889 (HANDLE)locking_thread);
891 #ifdef TRACK_SHMLOCK_OWNERSHIP
892 _ASSERTE(0 == header->pidtidCurrentOwner.pid);
893 _ASSERTE(0 == header->pidtidCurrentOwner.tid);
895 header->pidtidCurrentOwner.pid = my_pid;
896 header->pidtidCurrentOwner.tid = locking_thread;
898 ulIdx = header->ulOwnersIdx % (sizeof(header->pidtidOwners) / sizeof(header->pidtidOwners[0]));
900 header->pidtidOwners[ulIdx].pid = my_pid;
901 header->pidtidOwners[ulIdx].tid = locking_thread;
903 header->ulOwnersIdx += 1;
904 #endif // TRACK_SHMLOCK_OWNERSHIP
909 TRACE("SHM lock level is now %d\n", lock_count.Load());
916 Release a lock on shared memory taken with SHMLock.
926 /* prevent a thread from releasing another thread's lock */
927 PALCEnterCriticalSection(&shm_critsec);
931 ASSERT("SHMRelease called without matching SHMLock!\n");
932 PALCLeaveCriticalSection(&shm_critsec);
938 _ASSERTE(lock_count >= 0);
940 /* If lock count is 0, this call matches the first Lock call; it's time to
941 set the spinlock back to 0. */
944 SHM_FIRST_HEADER *header;
945 pid_t my_pid, tmp_pid;
947 TRACE("Releasing first-level SHM lock : resetting spinlock\n");
951 header = (SHM_FIRST_HEADER *)shm_segment_bases[0].Load();
953 #ifdef TRACK_SHMLOCK_OWNERSHIP
954 CHECK_CANARIES(header);
955 _ASSERTE(0 != my_pid);
956 _ASSERTE(getpid() == my_pid);
957 _ASSERTE(my_pid == header->spinlock);
958 _ASSERTE(header->pidtidCurrentOwner.pid == my_pid);
959 _ASSERTE(pthread_self() == header->pidtidCurrentOwner.tid);
960 _ASSERTE((pthread_t)locking_thread == header->pidtidCurrentOwner.tid);
962 header->pidtidCurrentOwner.pid = 0;
963 header->pidtidCurrentOwner.tid = 0;
964 #endif // TRACK_SHMLOCK_OWNERSHIP
969 // TODO: workaround for VSW # 381564
974 /* Make sure we don't touch the spinlock if we don't own it. We're
975 supposed to own it if we get here, but just in case... */
976 tmp_pid = InterlockedCompareExchange((LONG *) &header->spinlock, 0, my_pid);
978 if (tmp_pid != my_pid)
980 ASSERT("Process 0x%08x tried to release spinlock owned by process "
981 "0x%08x! \n", my_pid, tmp_pid);
982 PALCLeaveCriticalSection(&shm_critsec);
988 // TODO: workaround for VSW # 381564
990 while (my_pid == header->spinlock);
993 /* indicate no thread (in this process) holds the SHM lock */
996 #ifdef TRACK_SHMLOCK_OWNERSHIP
997 CHECK_CANARIES(header);
998 #endif // TRACK_SHMLOCK_OWNERSHIP
1001 TRACE("SHM lock level is now %d\n", lock_count.Load());
1003 /* This matches the EnterCriticalSection from SHMRelease */
1004 PALCLeaveCriticalSection(&shm_critsec);
1006 /* This matches the EnterCriticalSection from SHMLock */
1007 PALCLeaveCriticalSection(&shm_critsec);
1015 Convert a SHMPTR value to a valid pointer within the address space of the
1019 SHMPTR shmptr : SHMPTR value to convert into a pointer
1022 Address corresponding to the given SHMPTR, valid for the current process
1025 (see notes for SHMPTR_SEGMENT macro for details on SHMPTR structure)
1027 It is possible for the segment index to be greater than the known total number
1028 of segments (shm_numsegments); this means that the SHMPTR points to a memory
1029 block in a shared memory segment this process doesn't know about. In this case,
1030 we must obtain an address for that new segment and add it to our array
1031 (see SHMMapUnknownSegments for details)
1033 In the simplest case (no need to map new segments), there is no need to hold
1034 the lock, since we don't access any information that can change
1036 LPVOID SHMPtrToPtr(SHMPTR shmptr)
1042 TRACE("Converting SHMPTR 0x%08x to a valid pointer...\n", shmptr);
1045 WARN("Got SHMPTR \"0\"; returning NULL pointer\n");
1049 segment = SHMPTR_SEGMENT(shmptr);
1051 /* If segment isn't known, it may have been added by another process. We
1052 need to map all new segments into our address space. */
1053 if(segment>= shm_numsegments)
1055 TRACE("SHMPTR is in segment %d, we know only %d. We must now map all "
1056 "unknowns.\n", segment, shm_numsegments);
1057 SHMMapUnknownSegments();
1059 /* if segment is still unknown, then it doesn't exist */
1060 if(segment>=shm_numsegments)
1062 ASSERT("Segment %d still unknown; returning NULL\n", segment);
1065 TRACE("Segment %d found; continuing\n", segment);
1068 /* Make sure the offset doesn't point outside the segment */
1069 offset = SHMPTR_OFFSET(shmptr);
1070 if(offset>=segment_size)
1072 ASSERT("Offset %d is larger than segment size (%d)! returning NULL\n",
1073 offset, segment_size);
1078 /* Make sure the offset doesn't point in the segment's header */
1081 if (static_cast<size_t>(offset) < roundup(sizeof(SHM_FIRST_HEADER), sizeof(INT64)))
1083 ASSERT("Offset %d is in segment header! returning NULL\n", offset);
1089 if (static_cast<size_t>(offset) < sizeof(SHM_SEGMENT_HEADER))
1091 ASSERT("Offset %d is in segment header! returning NULL\n", offset);
1096 retval = shm_segment_bases[segment];
1097 retval = static_cast<BYTE*>(retval) + offset;
1099 TRACE("SHMPTR %#x is at offset %d in segment %d; maps to address %p\n",
1100 shmptr, offset, segment, retval);
1109 Retrieve some information from shared memory
1112 SHM_INFO_ID element : identifier of element to retrieve
1115 Value of specified element
1118 The SHM lock should be held while manipulating shared memory
1120 SHMPTR SHMGetInfo(SHM_INFO_ID element)
1122 SHM_FIRST_HEADER *header = NULL;
1125 if(element < 0 || element >= SIID_LAST)
1127 ASSERT("Invalid SHM info element %d\n", element);
1131 /* verify that this thread holds the SHM lock. No race condition: if the
1132 current thread is here, it can't be in SHMLock or SHMUnlock */
1133 if( (HANDLE)pthread_self() != locking_thread )
1135 ASSERT("SHMGetInfo called while thread does not hold the SHM lock!\n");
1138 header = (SHM_FIRST_HEADER *)shm_segment_bases[0].Load();
1140 retval = header->shm_info[element];
1142 TRACE("SHM info element %d is %08x\n", element, retval );
1151 Place some information into shared memory
1154 SHM_INFO_ID element : identifier of element to save
1155 SHMPTR value : new value of element
TRUE if successful, FALSE otherwise.
1161 The SHM lock should be held while manipulating shared memory
1163 BOOL SHMSetInfo(SHM_INFO_ID element, SHMPTR value)
1165 SHM_FIRST_HEADER *header;
1167 if(element < 0 || element >= SIID_LAST)
1169 ASSERT("Invalid SHM info element %d\n", element);
1173 /* verify that this thread holds the SHM lock. No race condition: if the
1174 current thread is here, it can't be in SHMLock or SHMUnlock */
1175 if( (HANDLE)pthread_self() != locking_thread )
ASSERT("SHMSetInfo called while thread does not hold the SHM lock!\n");
1180 header = (SHM_FIRST_HEADER*)shm_segment_bases[0].Load();
1182 TRACE("Setting SHM info element %d to %08x; used to be %08x\n",
1183 element, value, header->shm_info[element].Load() );
1185 header->shm_info[element] = value;
1191 /* Static function implementations ********************************************/
1196 Perform one-time initialization for a shared memory pool.
1199 SHMPTR first : SHMPTR of first memory block in the pool
1200 int block_size : size (in bytes) of a memory block in this pool
1201 int pool_size : total size (in bytes) of this pool
1202 SHM_POOL_INFO *pool : pointer to initialize with information about the pool
1205 SHMPTR of last memory block in the pool
1208 This function is used to initialize the memory pools of the first SHM segment.
1209 In addition to creating a linked list of SHMPTRs, it initializes the given
1210 SHM_POOL_INFO based on the given information.
1212 static SHMPTR SHMInitPool(SHMPTR first, int block_size, int pool_size,
1213 SHM_POOL_INFO *pool)
1218 TRACE("Initializing SHM pool for %d-byte blocks\n", block_size);
1220 /* Number of memory blocks of size "block_size" that can fit in "pool_size"
1221 bytes (rounded down) */
1222 num_blocks = pool_size/block_size;
1224 /* Create the initial linked list of free blocks */
1225 last = SHMLinkPool(first, block_size, num_blocks);
1228 ERROR("Failed to create linked list of free blocks!\n");
1232 /* Initialize SHM_POOL_INFO */
1233 pool->first_free = first;
1234 pool->free_items = num_blocks;
1235 pool->item_size = block_size;
1236 pool->num_items = num_blocks;
1238 TRACE("New SHM pool extends from SHMPTR 0x%08x to 0x%08x\n", first, last);
Joins contiguous blocks of memory into a linked list.
1248 SHMPTR first : First SHMPTR in the memory pool; first link in the list
1249 int block_size : size (in bytes) of the memory blocks
1250 int num_blocks : number of contiguous blocks to link
1253 SHMPTR of last memory block in the pool
1256 The linked list is created by saving the value of the next SHMPTR in the list
1257 in the memory location corresponding to the previous SHMPTR :
1258 *(SHMPTR *)SHMPTR_TO_PTR(previous) = previous + block_size
1260 static SHMPTR SHMLinkPool(SHMPTR first, int block_size, int num_blocks)
1267 TRACE("Linking %d blocks of %d bytes, starting at 0x%08x\n",
1268 num_blocks, block_size, first);
1270 item_ptr = static_cast<LPBYTE>(
1271 static_cast<LPBYTE>(shm_segment_bases[SHMPTR_SEGMENT(first)].Load()) +
1272 (SHMPTR_OFFSET(first)));
1273 next_shmptr = first/*+block_size*/;
1275 /* Link blocks together */
1276 for(i=0; i<num_blocks; i++)
1278 next_shmptr += block_size;
1280 /* item_ptr is char * (so we can increment with +=blocksize), we cast
1281 it to a SHMPTR * and set its content to the next SHMPTR in the list*/
1282 shmptr_ptr = (SHMPTR *)item_ptr;
1283 *shmptr_ptr = next_shmptr;
1285 item_ptr+=block_size;
1287 /* Last SHMPTR in the list must point to NULL */
1288 item_ptr-=block_size;
1289 shmptr_ptr = (SHMPTR *)item_ptr;
1292 /* Return SHMPTR of last element in the list */
1293 next_shmptr -= block_size;
1295 TRACE("New linked pool goes from 0x%08x to 0x%08x\n", first, next_shmptr);
1300 SHMMapUnknownSegments
1302 Map into this process all SHM segments not yet mapped
1307 TRUE on success, FALSE in case of error
1309 static BOOL SHMMapUnknownSegments(void)
1317 Create a new SHM segment, map it into this process, initialize it, then link it
1318 to the other SHM segments
1323 TRUE on success, FALSE in case of error
1326 This function assumes the SHM lock is held.
1328 static BOOL SHMAddSegment(void)
1330 LPVOID segment_base;
1331 SHM_SEGMENT_HEADER *header;
1332 SHM_FIRST_HEADER *first_header;
1333 SHMPTR first_shmptr;
1338 int current_pool_size;
1343 /* Map all segments this process doesn't yet know about, so we link the new
1344 segment at the right place */
1345 if(!SHMMapUnknownSegments())
1347 ERROR("SHMMapUnknownSegments failed!\n");
1351 /* Avoid overflowing */
1352 if(shm_numsegments == MAX_SEGMENTS)
1354 ERROR("Can't map more segments : maximum number (%d) reached!\n",
1359 TRACE("Creating SHM segment #%d\n", shm_numsegments);
1361 segment_base = mmap(NULL, segment_size, PROT_READ|PROT_WRITE,
1362 MAP_ANON|MAP_PRIVATE,-1, 0);
1364 if(segment_base == MAP_FAILED)
1366 ERROR("mmap() failed! error is %d (%s)\n", errno, strerror(errno));
1370 shm_segment_bases[shm_numsegments] = segment_base;
1372 /* Save name (well, suffix) of new segment in the header of the old last
1373 segment, so that other processes know where it is. */
1374 header = (SHM_SEGMENT_HEADER *)shm_segment_bases[shm_numsegments-1].Load();
1376 /* Indicate that the new segment is the last one */
1377 header = (SHM_SEGMENT_HEADER *)segment_base;
1379 /* We're now ready to update our memory pools */
1381 first_header = (SHM_FIRST_HEADER *)shm_segment_bases[0].Load();
1383 /* Calculate total amount of used memory (in bytes) */
1385 for(sps = 0; sps<SPS_LAST;sps++)
1387 /* Add total size of this pool */
1388 used_size += first_header->pools[sps].num_items*block_sizes[sps];
1390 /* Remove unused size of this pool */
1391 used_size -= first_header->pools[sps].free_items*block_sizes[sps];
1394 /* Determine how to divide the new segment between the pools for the
different block sizes, then update the pool information accordingly
1396 Allocation strategy :
1397 1) Calculate the proportion of used memory used by each pool
1398 2) Allocate this proportion of the new segment to each pool
1401 /* Add the new segment to the total amount of SHM memory */
1402 new_size = segment_size-roundup(sizeof(SHM_SEGMENT_HEADER), sizeof(INT64));
1404 /* Calculate value of first SHMPTR in the new segment : segment is
1405 shm_numsegments (not yet incremented); offset is the first byte after
1406 the segment header */
1407 first_shmptr = MAKE_SHMPTR(shm_numsegments,roundup(sizeof(SHM_SEGMENT_HEADER), sizeof(INT64)));
1409 TRACE("Updating SHM pool information; Total memory used is %d bytes; "
1410 "we are adding %d bytes\n", used_size, new_size);
1412 /* We want to allocate at least 1 block of each size (to avoid adding
1413 special cases everywhere). We remove the required space for these blocks
1414 from the size used in the calculations, then add 1 to each block count */
1415 for(sps=0;sps<SPS_LAST;sps++)
1416 new_size -= block_sizes[sps];
1418 /* Loop through all block sizes */
1419 for(sps=0; sps<SPS_LAST; sps++)
1421 TRACE("Now processing block size \"%d\"...\n", block_sizes[sps]);
1422 /* amount of memory currently reserved for this block size */
1423 current_pool_size = first_header->pools[sps].num_items*block_sizes[sps];
1425 /* how much of that is actually used? */
1426 used_pool_size = current_pool_size -
1427 first_header->pools[sps].free_items*block_sizes[sps];
1429 DBGOUT("%d bytes of %d bytes used (%d%%)\n", used_pool_size,
1430 current_pool_size, (used_pool_size*100)/current_pool_size);
1432 /* amount of memory we want to add to the pool for this block size :
1433 amount used by this pool/total amount used * new segment's size */
1434 new_pool_size = (((LONGLONG)used_pool_size)*new_size)/used_size;
1436 DBGOUT("Allocating %d bytes of %d to %d-byte pool\n",
1437 new_pool_size, new_size, block_sizes[sps]);
1439 /* determine the number of blocks that can fit in the chosen amount */
1440 num_new_items = new_pool_size/block_sizes[sps];
1442 /* make sure we allocate at least 1 block of each size */
1445 DBGOUT("Adding %d new blocks\n", num_new_items);
1447 /* Save the first and last block of the current block size in the new
1448 segment; join all blocks in between in a linked list */
1449 header->first_pool_blocks[sps] = first_shmptr;
1450 header->last_pool_blocks[sps] = SHMLinkPool(first_shmptr,
1454 /* Link the last block in the new linked list to the first block of the
1455 old global linked list. We don't use SHMPTR_TO_PTR because the pool
1456 data isn't updated yet */
1457 shmptr_ptr = reinterpret_cast<SHMPTR*>(
1458 static_cast<LPBYTE>(shm_segment_bases[SHMPTR_SEGMENT(header->last_pool_blocks[sps])].Load()) +
1459 SHMPTR_OFFSET(header->last_pool_blocks[sps]));
1461 *shmptr_ptr = first_header->pools[sps].first_free;
1463 /* Save the first block of the new linked list as the new beginning of
1464 the global linked list; the global list now contains all new blocks
1465 AND all blocks that were already free */
1466 first_header->pools[sps].first_free = header->first_pool_blocks[sps];
1468 /* Update block counts to include new blocks */
1469 first_header->pools[sps].free_items+=num_new_items;
1470 first_header->pools[sps].num_items+=num_new_items;
1472 DBGOUT("There are now %d %d-byte blocks, %d are free\n",
1473 first_header->pools[sps].num_items, block_sizes[sps],
1474 first_header->pools[sps].free_items);
1476 /* Update first_shmptr to first byte after the new pool */
1477 first_shmptr+=num_new_items*block_sizes[sps];
1487 Duplicates the string in shared memory.
1489 Returns the new address as SHMPTR on success.
1490 Returns (SHMPTR)NULL on failure.
1492 SHMPTR SHMStrDup( LPCSTR string )
1499 length = strlen( string );
1501 retVal = SHMalloc( ++length );
1505 LPVOID ptr = SHMPTR_TO_PTR( retVal );
1506 _ASSERT_MSG(ptr != NULL, "SHMPTR_TO_PTR returned NULL.\n");
1509 memcpy( ptr, string, length );
1513 // This code should never be reached. If a valid pointer
1514 // is passed to SHMPTR_TO_PTR and NULL is returned, then
1515 // there's a problem in either the macro, or the underlying
1516 // call to SHMPtrToPtr. In case the impossible happens,
1517 // though, free the memory and return NULL rather than
1518 // returning uninitialized memory.
1530 Duplicates the wide string in shared memory.
1532 Returns the new address as SHMPTR on success.
1533 Returns (SHMPTR)NULL on failure.
1535 SHMPTR SHMWStrDup( LPCWSTR string )
1542 length = ( PAL_wcslen( string ) + 1 ) * sizeof( WCHAR );
1544 retVal = SHMalloc( length );
1548 LPVOID ptr = SHMPTR_TO_PTR(retVal);
1549 _ASSERT_MSG(ptr != NULL, "SHMPTR_TO_PTR returned NULL.\n");
1552 memcpy( ptr, string, length );
1556 // This code should never be reached. If a valid pointer
1557 // is passed to SHMPTR_TO_PTR and NULL is returned, then
1558 // there's a problem in either the macro, or the underlying
1559 // call to SHMPtrToPtr. In case the impossible happens,
1560 // though, free the memory and return NULL rather than
1561 // returning uninitialized memory.
1573 SHMFindNamedObjectByName
1575 Searches for an object whose name matches the name and ID passed in.
1577 Returns a SHMPTR to its location in shared memory. If no object
1578 matches the name, the function returns NULL and sets pbNameExists to FALSE.
1579 If an object matches the name but is of a different type, the function
1580 returns NULL and sets pbNameExists to TRUE.
1583 SHMPTR SHMFindNamedObjectByName( LPCWSTR lpName, SHM_NAMED_OBJECTS_ID oid,
1584 BOOL *pbNameExists )
1586 PSHM_NAMED_OBJECTS pNamedObject = NULL;
1587 SHMPTR shmNamedObject = 0;
1588 LPWSTR object_name = NULL;
1590 if(oid==SHM_NAMED_LAST)
1592 ASSERT("Invalid named object type.\n");
1596 if (pbNameExists == NULL)
1598 ASSERT("pbNameExists must be non-NULL.\n");
1603 *pbNameExists = FALSE;
1604 shmNamedObject = SHMGetInfo( SIID_NAMED_OBJECTS );
1606 TRACE( "Entering SHMFindNamedObjectByName looking for %S .\n",
1607 lpName?lpName:W16_NULLSTRING );
1609 while ( shmNamedObject )
1611 pNamedObject = (PSHM_NAMED_OBJECTS)SHMPTR_TO_PTR( shmNamedObject );
1612 if(NULL == pNamedObject)
1614 ASSERT("Got invalid SHMPTR value; list of named objects is "
1619 if ( pNamedObject->ShmObjectName )
1621 object_name = (LPWSTR)SHMPTR_TO_PTR( pNamedObject->ShmObjectName );
1625 PAL_wcscmp( lpName, object_name ) == 0 )
1627 if(oid == pNamedObject->ObjectType)
1629 TRACE( "Returning the kernel object %p.\n", pNamedObject );
1634 *pbNameExists = TRUE;
1638 shmNamedObject = pNamedObject->ShmNext;
1642 TRACE( "No matching kernel object was found.\n" );
1646 return shmNamedObject;
1651 SHMRemoveNamedObject
1653 Removes the specified named object from the list
note : the caller is responsible for releasing all associated memory
1659 void SHMRemoveNamedObject( SHMPTR shmNamedObject )
1661 PSHM_NAMED_OBJECTS pshmLast = 0;
1662 PSHM_NAMED_OBJECTS pshmCurrent = 0;
TRACE( "Entered SHMRemoveNamedObject shmNamedObject = %d\n", shmNamedObject );
1668 (PSHM_NAMED_OBJECTS)SHMPTR_TO_PTR( SHMGetInfo( SIID_NAMED_OBJECTS ) );
1669 pshmLast = pshmCurrent;
1671 while ( pshmCurrent )
1673 if ( pshmCurrent->ShmSelf == shmNamedObject )
1675 TRACE( "Patching the list.\n" );
1677 /* Patch the list, and delete the object. */
1678 if ( pshmLast->ShmSelf == pshmCurrent->ShmSelf )
1680 /* Either the first element or no elements left. */
1681 SHMSetInfo( SIID_NAMED_OBJECTS, pshmCurrent->ShmNext );
1683 else if ( (PSHM_NAMED_OBJECTS)SHMPTR_TO_PTR( pshmCurrent->ShmNext ) )
1685 pshmLast->ShmNext = pshmCurrent->ShmNext;
1689 /* Only one left. */
1690 pshmLast->ShmNext = 0;
1697 pshmLast = pshmCurrent;
1698 pshmCurrent = (PSHM_NAMED_OBJECTS)SHMPTR_TO_PTR( pshmCurrent->ShmNext );
1706 /*++ SHMAddNamedObject
1708 Adds the specified named object to the list.
1712 void SHMAddNamedObject( SHMPTR shmNewNamedObject )
1714 PSHM_NAMED_OBJECTS pshmNew = 0;
1716 pshmNew = (PSHM_NAMED_OBJECTS)SHMPTR_TO_PTR( shmNewNamedObject );
1718 if ( pshmNew == NULL )
1720 ASSERT( "pshmNew should not be NULL\n" );
1725 pshmNew->ShmNext = SHMGetInfo( SIID_NAMED_OBJECTS );
1727 if ( !SHMSetInfo( SIID_NAMED_OBJECTS, shmNewNamedObject ) )
1729 ASSERT( "Unable to add the mapping object to shared memory.\n" );