b4287f93
Bug#23550835 ITERATING ON A FULL PERFORMANCE SCHEMA BUFFER CAN CRASH
Marc Alff authored
Before this fix, a SELECT on performance schema tables could crash
the server when an internal buffer was full.
    
    This could happen for example with:
    - more than 2^20 tables
    - more than 2^20 indexes
    - more than 2^20 files
    
The immediate root cause is that using PFS_buffer_scalable_iterator
on a full buffer causes an overflow in
PFS_buffer_scalable_container::scan_next(),
by accessing a page outside of m_pages[].
    
    This has been fixed by changing the do {} while loop
    into a while {} loop.
    
    For robustness, other do while loops have been
    changed to use the same while {} pattern,
    which is more tolerant to edge cases,
    and therefore less risky for maintenance.
    
    While investigating this issue, another case of overflow was found
    in the code: every page in the scalable buffer is of size PFS_PAGE_SIZE,
    ** except ** the last page, which can be smaller, due to m_last_page_size.
    
The problem is that all code that iterates over pages,
for example PFS_buffer_scalable_container::apply(),
expects each page to have a size of PFS_PAGE_SIZE,
and causes corruption when using a partial last page.
    
    The fix is to:
    - make each page aware of its own size, with
      PFS_buffer_default_array::m_max,
    - iterate from PFS_buffer_default_array::get_first()
      to PFS_buffer_default_array::get_last(),
      instead of using [0, PFS_PAGE_SIZE[
    
Also, the iterator logic needs to be aware of partial pages,
with tests such as "if (index_2 >= page->m_max)",
in PFS_buffer_scalable_container::get().
    
Lastly, the hard-coded theoretical size limit on some buffers
has been raised, since it was reached in practice for some workloads.
    
    New limits are:
    - 16 million instrumented tables (2^24), increased from 1M (2^20)
    - 64 million instrumented indexes (2^26), increased from 1M (2^20)
    - 16 million instrumented files (2^24), increased from 1M (2^20)