Lakshmi Narayanan Sreethar authored
... different encoding

When a value read from a fixed-width CHAR column with a utf8 charset is written to a column of the same type and length but with a latin1 charset, the write fails. The underlying issue is that reading from a fixed-width CHAR column with a utf8 charset returns the stored data together with the extra spaces padded in during insertion. The fix is to trim the value before returning it.

This patch also fixes the 'Data length too long' error raised when inserting valid utf8 characters of 2 bytes or more. That error is caused by padding blank characters onto the data before encoding it. The fix is to do the padding after encoding, rather than beforehand.

NdbRecordImpl.java
  The result string is now trimmed before it is returned.
Utility.java
  Overloaded padString to handle ByteBuffer.
  Padding blank bytes onto the data now happens after encoding.
schema.sql
  Added a new charsetswedishutf8 table.
Added test cases to verify the bug.
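A minimal sketch of the idea behind the fix, not the actual ClusterJ code: the class, method names, and column length below are hypothetical, and only illustrate padding the encoded bytes (instead of the raw string) on write and trimming the blank padding on read.

import java.nio.ByteBuffer;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharPaddingSketch {

    // Encode first, then pad the encoded bytes with blanks up to the fixed
    // column length. Padding the string before encoding can push a multi-byte
    // utf8 value past the column's byte limit ('Data length too long').
    static ByteBuffer encodeFixedChar(String value, Charset charset, int columnLength) {
        byte[] encoded = value.getBytes(charset);
        if (encoded.length > columnLength) {
            throw new IllegalArgumentException("Data length too long");
        }
        ByteBuffer buffer = ByteBuffer.allocate(columnLength);
        buffer.put(encoded);
        while (buffer.hasRemaining()) {
            buffer.put((byte) ' ');          // pad after encoding
        }
        buffer.flip();
        return buffer;
    }

    // Decode a fixed-width CHAR value and trim the trailing blank padding,
    // so the value can be safely rewritten to a column with another charset.
    static String decodeFixedChar(ByteBuffer buffer, Charset charset) {
        String raw = charset.decode(buffer).toString();
        int end = raw.length();
        while (end > 0 && raw.charAt(end - 1) == ' ') {
            end--;
        }
        return raw.substring(0, end);
    }

    public static void main(String[] args) {
        // a multi-byte utf8 value stored in a hypothetical CHAR(10) column
        ByteBuffer stored = encodeFixedChar("åäö", StandardCharsets.UTF_8, 10);
        String readBack = decodeFixedChar(stored, StandardCharsets.UTF_8);
        System.out.println("[" + readBack + "]");   // prints [åäö], no trailing spaces
    }
}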