Apparently, a NULL pointer with length 0 is sometimes passed to the
MyCTX::update() function, and the underlying cipher call still needs
a valid buffer.
So weaken the assertion, and use a valid dummy pointer for src if it was NULL.
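A minimal sketch of that idea (not the actual my_crypt.cc code; names are illustrative): when src is NULL and the length is 0, point src at a valid dummy byte so the cipher call always receives a non-NULL buffer.

    #include <cassert>
    #include <cstddef>

    typedef unsigned char uchar;

    // Hypothetical stand-in for MyCTX::update(); the real function forwards
    // to the SSL library's cipher-update call.
    int update_example(const uchar *src, unsigned int slen)
    {
      // Weakened assertion: a NULL src is acceptable only for zero-length input.
      assert(src || !slen);

      static const uchar dummy_src= 0;
      if (!src)
        src= &dummy_src;   // give the cipher a valid pointer for the 0-length call

      // ... EVP_CipherUpdate(ctx, dst, &dlen, src, slen) would go here ...
      return 0;
    }

    int main()
    {
      return update_example(NULL, 0);   // must not trip the assertion
    }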
Changed the function append_range_all_keyparts to use sel_arg_range_seq_init / sel_arg_range_seq_next to produce ranges.
Also adjusted the print format for the ranges; the ranges are now printed as:
(keypart1_min, keypart2_min,..) OP (keypart1_name,keypart2_name, ..) OP (keypart1_max,keypart2_max, ..)
Also added more tests for range and index merge access in the optimizer trace.
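For illustration (not output copied from the test suite): with an index on columns (a, b) and the condition a = 1 AND b BETWEEN 5 AND 10, a range would be printed along the lines of

    (1,5) <= (a,b) <= (1,10)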
C++11 defines the singly-linked std::forward_list. Prefer it to
the doubly-linked std::list in cases where we do not really need the double links.
Also, clean up some code.
dict_index_remove_from_v_col_list(): Remove.
Obsoleted by dict_index_t::detach_columns().
There is no std::forward_list::push_back(). Use push_front() instead.
The ordering does not really matter.
dict_v_col_t::n_v_indexes: Added. There is no std::forward_list::size(),
and trx_undo_log_v_idx() needs to know the size.
rtr_info_track_t::rtr_active: Encapsulate. There really was no justification
for the pointer indirection.
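Since std::forward_list deliberately has no size() member, the usual pattern is to keep an explicit counter next to the list, which is what dict_v_col_t::n_v_indexes does. A minimal self-contained sketch of the pattern (the member names are illustrative, not the real dict_v_col_t layout):

    #include <cstddef>
    #include <forward_list>
    #include <iostream>

    // Illustrative stand-in for a virtual column that tracks how many
    // indexes reference it.
    struct v_col_example
    {
      std::forward_list<int> v_indexes;   // singly-linked, no size() member
      std::size_t n_v_indexes= 0;         // explicit element count

      void add_index(int idx)
      {
        // forward_list has no push_back(); push_front() is O(1) and the
        // ordering of the elements does not matter here.
        v_indexes.push_front(idx);
        n_v_indexes++;
      }
    };

    int main()
    {
      v_col_example col;
      col.add_index(1);
      col.add_index(2);
      std::cout << col.n_v_indexes << " indexes tracked\n";   // prints 2
    }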
Log_event_writer::encrypt_and_write() can pass a NULL pointer as the source
buffer for encryption. WolfSSL's EVP_CipherUpdate() rightfully rejects this
as an invalid parameter.
Fix Log_event_writer::encrypt_and_write() and check, with an assertion,
that the src parameter is sane in MyCTX::update().
In wolfCrypt, the output length after CTR encryption is not the same
as the input length. This differs from OpenSSL and makes the unit test
aes-t fail.
So disable CTR for now.
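One way to express "disable CTR for now" is a build-time guard; a hypothetical sketch (the feature macro name is made up, not the real configuration symbol):

    #include <cstdio>

    // Only advertise AES-CTR when not building against WolfSSL, whose CTR
    // output length currently disagrees with the input length.
    #if defined(HAVE_WOLFSSL)
    #  undef MY_HAVE_AES_CTR
    #else
    #  define MY_HAVE_AES_CTR 1
    #endif

    int main()
    {
    #ifdef MY_HAVE_AES_CTR
      std::puts("AES-CTR enabled");
    #else
      std::puts("AES-CTR disabled for this SSL library");
    #endif
    }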
MDEV-19581 Valgrind error with WolfSSL and encrypted binlog
WolfSSL can read memory out of bounds in EVP_CipherUpdate()
in decrypt/NOPAD mode when the input length is not a multiple of the AES
block size.
The workaround ensures that the input has some padding at the end,
either by allocating a slightly larger buffer or by padding the affected
structures with 16 more bytes.
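A minimal sketch of the over-allocation workaround, assuming a block size of 16 (an illustration, not the actual binlog buffer code):

    #include <cstdlib>
    #include <cstring>

    static const std::size_t MY_AES_BLOCK_SIZE= 16;

    // Allocate len bytes plus one extra AES block, so that a cipher
    // implementation reading slightly past the logical end of the input
    // stays inside the allocation.
    static unsigned char *alloc_padded(std::size_t len)
    {
      unsigned char *buf=
        static_cast<unsigned char*>(std::malloc(len + MY_AES_BLOCK_SIZE));
      if (buf)
        std::memset(buf + len, 0, MY_AES_BLOCK_SIZE);   // keep the tail defined
      return buf;
    }

    int main()
    {
      unsigned char *buf= alloc_padded(100);
      int ok= buf != nullptr;
      std::free(buf);
      return ok ? 0 : 1;
    }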
There is only one InnoDB crash recovery subsystem.
Allocating recv_sys statically removes one level of pointer indirection,
makes the code more readable, and eliminates the awkward initialization of
recv_sys->dblwr.
recv_sys_t::create(): Replaces recv_sys_init().
recv_sys_t::debug_free(): Replaces recv_sys_debug_free().
recv_sys_t::close(): Replaces recv_sys_close().
recv_sys_t::add(): Replaces recv_add_to_hash_table().
recv_sys_t::empty(): Replaces recv_sys_empty_hash().
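A minimal sketch of the pattern (a single static object with explicit create()/close() members instead of a heap-allocated singleton); the struct below is a stand-in, not the real recv_sys_t:

    #include <cassert>

    // Illustrative statically allocated subsystem object.
    struct recv_sys_example
    {
      bool initialised= false;

      void create() { assert(!initialised); initialised= true; }
      void close()  { initialised= false; /* release resources here */ }
    };

    // There is only one crash recovery subsystem, so it can be a single
    // static instance: no pointer indirection, no separate
    // recv_sys_init()/recv_sys_close() free functions.
    static recv_sys_example recv_sys_demo;

    int main()
    {
      recv_sys_demo.create();
      recv_sys_demo.close();
    }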
- Add new submodule for WolfSSL
- Build and use wolfssl and wolfcrypt instead of yassl/taocrypt
- Use HAVE_WOLFSSL instead of HAVE_YASSL
- Increase MY_AES_CTX_SIZE to avoid compile-time asserts in my_crypt.cc
(sizeof(EVP_CIPHER_CTX) is larger on WolfSSL)
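The compile-time assert being avoided is of this general shape (a hedged sketch; the stand-in type and size value are illustrative, not the actual my_crypt.cc constants):

    #include <cstddef>

    // Stand-in for the SSL library's cipher context; with WolfSSL the real
    // EVP_CIPHER_CTX is larger than with YaSSL.
    struct cipher_ctx_example { unsigned char opaque[400]; };

    // Illustrative reserved size; the real constant is MY_AES_CTX_SIZE.
    static const std::size_t MY_AES_CTX_SIZE_EXAMPLE= 672;

    // Forces the reserved size to grow whenever the library's context grows.
    static_assert(MY_AES_CTX_SIZE_EXAMPLE >= sizeof(cipher_ctx_example),
                  "MY_AES_CTX_SIZE too small for this SSL library");

    int main() {}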
Bootstrapping a new cluster from a backup created from a MariaDB
version prior to 10.3.5 may result in error "SST position can't be
set in past" when attempting to join additional nodes.
The problem stems from the fact that when reading the wsrep position
from InnoDB, the position is looked up in two places:
the TRX_SYS page, where versions prior to 10.3.5 used to store
WSREP's position, and the rollback segments, which is where newer
versions store the position.
When starting a new cluster, the starting seqno is 0 and a new cluster
UUID is generated. This is persisted in rollback segments, but the old
UUID and seqno are not cleared from TRX_SYS page.
Subsequently, when reading back the position,
trx_rseg_read_wsrep_checkpoint() returns the maximum seqno
found in both the TRX_SYS page and the rollback segments. So in the case
of a newly bootstrapped cluster, it always returns the old
cluster's information.
The fix consists of changing trx_rseg_read_wsrep_checkpoint() so that
only the rollback segments are looked up. On startup, the position is read
from the TRX_SYS page and, if present, copied to the rollback
segments (unless a newer position is already present there).
Finally, the position stored in the TRX_SYS page is cleared.
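In sketch form, the startup handling described above (seqno only, with in-memory stand-ins for the two persistent locations; not the real InnoDB functions, which also compare cluster UUIDs):

    #include <cstdint>

    struct wsrep_pos_example { std::int64_t seqno; };

    // Stand-ins for the TRX_SYS page and the rollback segments; the real
    // code reads and writes persistent InnoDB pages.
    static wsrep_pos_example trx_sys_pos= { 5 };   // legacy pre-10.3.5 location
    static wsrep_pos_example rseg_pos= { 0 };      // current location

    static void startup_migrate_wsrep_position()
    {
      // Copy the legacy TRX_SYS position to the rollback segments only if
      // they do not already hold a newer one.
      if (trx_sys_pos.seqno > rseg_pos.seqno)
        rseg_pos= trx_sys_pos;

      // Clear the legacy location; from now on only the rollback segments
      // are consulted by trx_rseg_read_wsrep_checkpoint().
      trx_sys_pos.seqno= -1;
    }

    int main()
    {
      startup_migrate_wsrep_position();
      return rseg_pos.seqno == 5 ? 0 : 1;
    }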
row_insert_for_mysql(): InnoDB sets values for row_start and row_end,
and this function used to return those values to the server in
ha_innobase::write_row(). This buggy behavior was removed. Also,
a piece of code in this function was reformatted.
upd_node_t::make_versioned_helper(): Assert that the preallocated size
of the update vector is not exceeded.
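The added check is essentially a bound assertion before appending another field to the preallocated update vector; a minimal sketch with made-up names (not the real upd_t layout):

    #include <cassert>
    #include <cstddef>

    // Illustrative fixed-capacity update vector.
    struct upd_vector_example
    {
      static const std::size_t capacity= 4;
      int fields[capacity];
      std::size_t n_fields= 0;

      void append(int field)
      {
        // Assert that the preallocated size is not exceeded before writing.
        assert(n_fields < capacity);
        fields[n_fields++]= field;
      }
    };

    int main()
    {
      upd_vector_example u;
      u.append(1);
      u.append(2);
    }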
The issue in this case is that we take into account the estimates from quick keys instead of rec_per_key.
The estimates from quick keys are better than rec_per_key only if we have ref(const), so we need to check
that all keyparts in the ref key are of the type ref(const).
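As a sketch of that check (illustrative types, not the optimizer's actual data structures): trust the quick-key estimate only when every keypart of the ref is bound to a constant.

    #include <vector>

    // Whether the value a keypart is compared against is a constant
    // (ref(const)) or comes from another table.
    struct keypart_ref_example { bool is_const; };

    // Use the quick-key estimate only if every keypart is ref(const);
    // otherwise fall back to the rec_per_key statistics.
    static bool can_use_quick_estimate(const std::vector<keypart_ref_example> &parts)
    {
      for (const keypart_ref_example &p : parts)
        if (!p.is_const)
          return false;
      return true;
    }

    int main()
    {
      std::vector<keypart_ref_example> ref{ {true}, {false} };
      // Mixed ref: do not trust the quick estimate, fall back to rec_per_key.
      return can_use_quick_estimate(ref) ? 1 : 0;
    }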
* add error for truncation of versioned tables: `ER_TRUNCATE_ILLEGAL_VERS`
* make a full table open with `tdc_acquire_share` instead of just a `ha_table_exists` check (sketched below)
test suites run: main, parts, versioning
Closes #785
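A generic sketch of the intended check (stand-in types, not server code): open the full table definition rather than merely testing for existence, so that properties such as system versioning can be inspected, and refuse TRUNCATE for versioned tables.

    #include <stdexcept>
    #include <string>

    struct table_def_example { bool versioned; };

    // Hypothetical stand-in for acquiring the full table definition
    // (the role tdc_acquire_share plays in the server).
    static table_def_example acquire_share_example(const std::string &)
    {
      return table_def_example{ true };
    }

    static void truncate_example(const std::string &name)
    {
      table_def_example def= acquire_share_example(name);   // full open, not an existence check
      if (def.versioned)
        throw std::runtime_error("ER_TRUNCATE_ILLEGAL_VERS");   // illustrative error mapping
      // ... proceed with truncation ...
    }

    int main()
    {
      try { truncate_example("t1"); } catch (const std::exception &) {}
    }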
dict_sys.lock(), dict_sys_lock(): Acquire both mutex and latch.
dict_sys.unlock(), dict_sys_unlock(): Release both mutex and latch.
dict_sys.assert_locked(): Assert that both mutex and latch are held.
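A minimal sketch of that pattern (std::mutex and std::shared_mutex standing in for the InnoDB mutex and rw-latch; not the real dict_sys members):

    #include <cassert>
    #include <mutex>
    #include <shared_mutex>

    struct dict_sys_example
    {
      std::mutex mutex;           // stands in for the dict_sys mutex
      std::shared_mutex latch;    // stands in for the dict_sys rw-latch
      bool locked= false;         // debug bookkeeping for assert_locked()

      void lock()   { mutex.lock(); latch.lock(); locked= true; }
      void unlock() { locked= false; latch.unlock(); mutex.unlock(); }
      void assert_locked() const { assert(locked); }
    };

    int main()
    {
      dict_sys_example dict_sys;
      dict_sys.lock();
      dict_sys.assert_locked();
      dict_sys.unlock();
    }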