When we delete entries in the hash set, we mark them "deleted" by
setting their key to the deleted_key, which points to a dummy
deleted_key_value. When searching for an entry, we normally skip over
those, but the duplicate-entry search in set_add() forgot to skip over
deleted entries. This led to a segfault inside
the NIR vectorization pass, since its key comparison function
interpreted the memory where deleted_key_value resides as a pointer and
tried to dereference it.
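
For reference, a rough sketch of the mechanism described above (the
names mirror the ones used in set.c, but the exact definitions there
may differ):

#include <stdbool.h>
#include <stdint.h>

/* Dummy object whose address is used as the sentinel key for deleted
 * slots. Its contents are arbitrary, so a key comparison callback that
 * reads through the key (as the NIR one does) can crash on it.
 */
static const uint32_t deleted_key_value;
#define deleted_key ((const void *) &deleted_key_value)

struct set_entry {
   uint32_t hash;
   const void *key;
};

/* A slot counts as deleted when its key points at the sentinel;
 * probing must walk past such slots instead of matching on them.
 */
static inline bool
entry_is_deleted(const struct set_entry *entry)
{
   return entry->key == deleted_key;
}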
v2:
- add better commit message (Timothy)
- use entry_is_deleted (Timothy)
Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>
Signed-off-by: Connor Abbott <cwabbott0@gmail.com>
* If freeing of old keys is required to avoid memory leaks,
* perform a search before inserting.
*/
- if (entry->hash == hash &&
+ if (!entry_is_deleted(entry) &&
+ entry->hash == hash &&
ht->key_equals_function(key, entry->key)) {
entry->key = key;
return entry;