150d90fb
Bug#18860225 HASH_SCAN SEEMS BROKEN: CAN'T FIND RECORD IN 'T1' ERROR_CODE: 1032
Venkatesh Duggirala authored
Problem: In RBR, setting slave_search_algorithm to HASH_SCAN on the slave causes a
"Can't find record" error even though the record exists in the storage layer.
    
Analysis: When the slave's slave_search_algorithm is set to HASH_SCAN, it prepares
a unique key list in add_key_to_distinct_keyset() for all the rows in a
particular Rows_event.
For example, let a table 't1(c1, c2, key(c2))' contain the 4 tuples
(1,10), (2,20), (3,10), (4,40), and let a DELETE be executed on the master
that removes all four tuples. The Rows_log_event received on the slave
contains all 4 tuples, and the slave should prepare a unique key list (i.e., 10,20,40).
Later, the same unique key list is used to retrieve from the storage engine
all tuples associated with each key in the list. In the old code, there was a problem
while preparing this unique key list in add_key_to_distinct_keyset(): before
inserting an element into the unique list, instead of searching the entire list
for that element, the code compared it only with the last inserted element.
This causes a problem when the keys being processed are not sorted.
For example, while processing the list 10,20,10,40, add_key_to_distinct_keyset()
treats the second occurrence of 10 (the third element) as unique, because it is
compared only with the last inserted key, which is 20. Hence the prepared
"unique" key list is 10,20,10,40 instead of 10,20,40.
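
To make the flaw concrete, here is a minimal C++ sketch of the old duplicate
check. This is a hypothetical simplification, not the real server code: keys
are plain ints instead of packed key images, and add_key_buggy() stands in for
add_key_to_distinct_keyset().

    #include <cstdio>
    #include <initializer_list>
    #include <vector>

    static std::vector<int> keyset;

    static void add_key_buggy(int key)
    {
      // Old behaviour: only the most recently inserted key is checked,
      // so an earlier duplicate slips through when the input is unsorted.
      if (keyset.empty() || keyset.back() != key)
        keyset.push_back(key);
    }

    int main()
    {
      for (int k : {10, 20, 10, 40})
        add_key_buggy(k);
      for (int k : keyset)
        printf("%d ", k);   // prints: 10 20 10 40 (10 appears twice)
      return 0;
    }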
    
Later, in Rows_log_event::do_apply_event, all tuples matching the first key
in the list (10) are retrieved and the event (a delete in this example) is
applied to them. When the logic reaches the second '10' in the wrongly formed
unique list, it asks the storage engine for the matching tuples again, but they
have already been deleted, so the storage engine returns the
'ER_RECORD_NOT_FOUND' error.
    
Fix: Correct the add_key_to_distinct_keyset() logic so that it truly maintains
a unique key list: instead of comparing a newly inserted value only with the
last inserted key, the unique list is now maintained as a std::set. To compare
two keys, a comparator class is supplied to the std::set; it compares the given
keys using the key_cmp2() function, which is added in key.cc.
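
A minimal sketch of the shape of that fix follows. The names and types here are
hypothetical simplifications: a key is modelled as a raw byte buffer and the
comparator uses memcmp, whereas the real comparator delegates to key_cmp2() so
that key parts are compared according to their column types.

    #include <cstring>
    #include <set>
    #include <vector>

    using Key = std::vector<unsigned char>;

    struct Key_compare
    {
      // Strict weak ordering over key images (prefix bytes, then length).
      bool operator()(const Key &a, const Key &b) const
      {
        size_t n = a.size() < b.size() ? a.size() : b.size();
        int c = memcmp(a.data(), b.data(), n);
        return c < 0 || (c == 0 && a.size() < b.size());
      }
    };

    static std::set<Key, Key_compare> distinct_keys;

    // Inserting a duplicate key is now a no-op regardless of input order,
    // so 10,20,10,40 collapses to 10,20,40.
    static void add_key_to_distinct_keyset(const Key &key)
    {
      distinct_keys.insert(key);
    }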