    0899bb82
    Squashed commit of the following:
    Marc Alff authored
        Bug#22965826 CONTENTION IN PFS SCALABLE BUFFER
    
        Patch for 5.7 only.
    
        This is a performance improvement.
    
        Assume a server with:
        - a lot of objects instrumented overall (set A),
          that are never or rarely destroyed.
        - a load causing frequent instrumentation create + destroy
          for a few objects (set B).
    
        Before this fix,
    
        The mutex instrumentation uses a scalable buffer for instrumented objects.
        The buffer consists of pages of 1024 instances each.
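
        As a rough sketch of that layout (the type and member names
        below are illustrative assumptions, not the actual source):

          /* Rough sketch only; names are assumptions. */
          #include <cstddef>

          static const size_t PFS_PAGE_SIZE = 1024;

          struct PFS_mutex_slot {
            bool m_in_use;  /* true while the instrumented instance lives */
            /* ... instrumentation payload ... */
          };

          struct PFS_buffer_page {
            PFS_mutex_slot m_slots[PFS_PAGE_SIZE];
          };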
    
        Pages are mostly filled with instances from set A,
        with a few holes that are later used for set B.
    
        Under load, the following contentions are present:
    
        1) create_mutex
    
        Executing the load creates a few mutex instances for set B.
        These are allocated from the "current" page in PFS_mutex_container.
    
        The issue arises when this page is mostly, but not yet, full,
        with a lot of entries occupied by objects from set A that are never
        destroyed:
        finding an empty slot, in
          PFS_buffer_scalable_container::allocate()
        can loop for a very long time in
          while (monotonic < monotonic_max)
    
        With a load where mutexes are created and destroyed often,
        the container always stays on the same current page,
        and each allocation needs to scan a lot of entries
        to find (rare) empty slots, which causes performance degradation.
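
        A condensed sketch of that search, reusing the layout sketched
        above (the helper name is an assumption; the real allocate()
        also handles atomics and page selection):

          /* Sketch of the slot search on the current page. */
          PFS_mutex_slot *allocate_from(PFS_buffer_page *page,
                                        size_t start_hint) {
            size_t monotonic = start_hint;
            size_t monotonic_max = start_hint + PFS_PAGE_SIZE;
            while (monotonic < monotonic_max) {
              size_t index = monotonic % PFS_PAGE_SIZE;
              PFS_mutex_slot *slot = &page->m_slots[index];
              if (!slot->m_in_use) {
                slot->m_in_use = true;  /* claim the free slot */
                return slot;
              }
              monotonic++;  /* an almost-full page forces a long scan */
            }
            return NULL;  /* page exhausted */
          }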
    
        2) destroy_mutex
    
        When a mutex is destroyed, the code loops over all pages,
        in:
          PFS_buffer_scalable_container::deallocate()
    
        This does not scale when the total number of pages increases.
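
        Schematically, again as an assumed simplification of the real
        code, using the types sketched above:

          /* Sketch: without a back pointer, finding the owning page
             means scanning every page in the container. */
          void deallocate(PFS_mutex_slot *record,
                          PFS_buffer_page **pages, size_t page_count) {
            for (size_t i = 0; i < page_count; i++) {
              PFS_buffer_page *page = pages[i];
              if (page != NULL &&
                  record >= page->m_slots &&
                  record < page->m_slots + PFS_PAGE_SIZE) {
                record->m_in_use = false;  /* release the slot */
                return;
              }
            }
          }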
    
        After this fix,
    
        1) create_mutex
    
        Mutex instruments now have a volatility flag, used to mark
        objects that are volatile (frequently created and destroyed).
    
        The mutex container is partitioned by volatility,
        so that:
        - instances from set A
        - instances from set B
        are kept in separate sub containers.
    
        Allocating and destroying objects from set B scales better,
        because the current page for the buffer used in partition B
        is never "almost full", as _all_ entries are now created and destroyed
        during the load, which decreases the number of iterations in the
        monotonic while loop.
    
        Mutex instances that are per session (THD) are marked volatile.
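
        A minimal sketch of the partitioning, with assumed names and
        partition count, reusing allocate_from() from the earlier
        sketch (each partition is reduced to a single page here):

          /* Sketch: one sub-container per volatility class keeps
             set A and set B on separate pages. */
          static const int PFS_VOLATILITY_CLASSES = 2;

          struct PFS_mutex_container {
            PFS_buffer_page *m_partition[PFS_VOLATILITY_CLASSES];

            PFS_mutex_slot *allocate(int volatility) {
              /* route to the sub-container for this volatility class */
              return allocate_from(m_partition[volatility], 0);
            }
          };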
    
        2) destroy_mutex
    
        Each instrumented object now contains a pointer to its parent page,
        which eliminates the loop in PFS_buffer_scalable_container::deallocate().
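
        A minimal sketch of this O(1) deallocation, with assumed names
        and assumed free-slot bookkeeping:

          /* Sketch: a back pointer from record to parent page removes
             the page scan from deallocation. */
          struct PFS_page;  /* forward declaration */

          struct PFS_record {
            bool m_in_use;
            PFS_page *m_page;  /* set once, at allocation time */
          };

          struct PFS_page {
            PFS_record m_records[PFS_PAGE_SIZE];
            size_t m_free_count;  /* assumed free-slot bookkeeping */
          };

          void deallocate(PFS_record *record) {
            record->m_in_use = false;
            record->m_page->m_free_count++;  /* no page scan needed */
          }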