e779a0cd
Bug#22320066 EVENTS_STATEMENTS_SUMMARY_BY_DIGEST: NO UNIQUE ROWS BY DIGEST/SCHEMA_NAME
Marc Alff authored
    
    Fix for 5.7 (backport from 8.0)
    
    Before this fix, table events_statements_summary_by_digest
    exposed many rows for the same statement digest and schema,
    breaking the expected uniqueness of digests.
    
    The root cause is in function find_or_create_digest(),
    which does not handle duplicate inserts in the LF_HASH properly.
    
    When two different sessions execute
    the same statement for the first time,
    each session creates an entry for the statement digest.
    
In this case, the unique index is maintained properly with only one entry,
but the table data itself still contains duplicated, orphaned rows.
    
    The fix is to:
    - use a pfs_lock for a statement digest record
    - free the duplicate record when duplication is detected in the unique index
    - loop in the entire buffer to find an empty record,
      so that duplicate entries do not create holes and do not cause leaks
    - honor the pfs_lock when exposing the data.
    
    With this fix, the allocation pattern is similar to other instrumentations,
    like the mutex instances for example.
    
    Tested manually with debug code added to expose the race conditions.
    Not testable by MTR scripts.