commit 017d1c26
Bug#29485977 BACKUP BIG ROWS HITS NDBREQUIRE
Author: Mauritz Sundell
    
    
    Problem
    =======
    
For certain tables with big rows and a big primary key, taking a backup
(snapshotend) while inserting into the table may hit an ndbrequire in the
data nodes.
    
For a snapshotstart backup, concurrent deletes may hit the same ndbrequire.
    
The following ndbrequires have been seen failing in tests:
    Error object: BACKUP (Line: 10218) 0x00000002 Check sz <= buf.getMaxWrite()
    failed
    Error object: BACKUP (Line: 10448) 0x00000002 Check entryLength <=
    trigPtr.p->operation->dataBuffer.getMaxWrite() failed
    
In Backup::execDEFINE_BACKUP_REQ() the data size allowed in a log entry is
hard-coded to 4096 words:
    
      const Uint32 maxInsert[] = {
        MAX_WORDS_META_FILE,
        4096,    // 16k
        BACKUP_MIN_BUFF_WORDS
      };
    
For the backup redo log, the log event for an insert contains both the
primary key and the complete row (including the primary key again) with
attribute words.
In the worst case that adds up to 5067 words: MAX_ATTRIBUTES_IN_INDEX (32) +
MAX_KEY_SIZE_IN_WORDS (1023) + MAX_ATTRIBUTES_IN_TABLE (512) +
MAX_TUPLE_SIZE_IN_WORDS (3500).
    
On top of that, the log entry itself also uses 4 words for its header,
possibly a GCI word, and for the undo log also a trailing length word,
adding up to 5073 words.
    
    Solution
    ========
    
Define a new constant MAX_BACKUP_FILE_LOG_ENTRY_SIZE and use it in
maxInsert[1] to set the maximum number of words in an insert log entry.
    
Reviewed-by: Frazer Clement <frazer.clement@oracle.com>