    Bug#30183865: HUGE MALLOC WHEN OPEN FILE LIMIT IS HIGH · 09a0462a
    Dyre Tjeldvoll authored
    Problem: Setting max_open_files to a high value, or setting it when
    the OS' rlimit had a large value (not equal to RLIM_INF), could cause
    the server to run out of memory.
    
    Root cause was that the server would allocate an array with capacity
    equal to the max_open_files setting at startup, and in particular that
    the OS rlimit setting would override the setting supplied by the user,
    even if the rlimit value was much larger.
    
    Solution: A simple solution would be to not let the OS override the
    setting provided by the user. But this is a change in behavior which
    could potentially break existing applications. Moreover, it would not
    protect against the user setting the limit to an excessively large
    value. Furthermore, allocating space for the largest possible number
    of open files is a pessimisation, since it is unlikely that all those
    array slots will be needed.
    
    In this patch the issue is addressed by changing the array into a
    vector and letting it grow as needed when files are opened.
    
    Additionally, on Windows the same file_info array was used to provide
    a mapping between Windows HANDLEs and pseudo file descriptors, so
    that the mysys file functions could use integers to refer to files on
    Windows as well. In order to hide this functionality in
    Windows-specific code and reduce the number of #ifdefs, this mapping
    was moved into its own vector and data structure. In the process, the
    mysys functions emulating POSIX functions with Windows API functions
    were refactored and their error handling tightened. The latter was
    necessary because negative testing using invalid pseudo file
    descriptors had to be intercepted earlier to avoid stepping outside
    the bounds of the vector mapping pseudo fds to HANDLEs.
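A pseudo-fd-to-HANDLE mapping of the kind described above could look like the sketch below. All names here are illustrative, not mysys's; `HANDLE` is aliased to `void *` so the sketch compiles outside Windows, whereas the real code uses the Win32 type. The bounds check in `lookup` is where an invalid pseudo fd gets rejected before it can index outside the vector.

```cpp
#include <cstddef>
#include <vector>

// Stand-in for the Win32 HANDLE type so this sketch is portable.
using HANDLE = void *;
constexpr HANDLE kInvalidHandle = nullptr;

class HandleMap {
  std::vector<HANDLE> m_handles;  // index == pseudo file descriptor

 public:
  // Assign the lowest free pseudo fd to a HANDLE, growing as needed.
  int insert(HANDLE h) {
    for (size_t i = 0; i < m_handles.size(); ++i) {
      if (m_handles[i] == kInvalidHandle) {
        m_handles[i] = h;
        return static_cast<int>(i);
      }
    }
    m_handles.push_back(h);
    return static_cast<int>(m_handles.size() - 1);
  }

  // Bounds-checked lookup: invalid pseudo fds are caught here, so they
  // never step outside the vector.
  HANDLE lookup(int fd) const {
    if (fd < 0 || static_cast<size_t>(fd) >= m_handles.size())
      return kInvalidHandle;
    return m_handles[fd];
  }

  // Free a slot so its pseudo fd can be reused by a later insert.
  void erase(int fd) {
    if (fd >= 0 && static_cast<size_t>(fd) < m_handles.size())
      m_handles[fd] = kInvalidHandle;
  }
};
```

Keeping the mapping in one class like this confines the HANDLE translation to Windows-specific code, so the portable mysys functions only ever see integer descriptors.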
    
    Change-Id: Ib255a509e8c538d79f316d642165263609961e78