Problem:
========
- After commit cc8eefb0dca1372378905fbae11044f20364c42d (MDEV-33087),
InnoDB uses the bulk insert optimization for ALTER TABLE..ALGORITHM=COPY
and CREATE TABLE..SELECT as well. InnoDB fails to clear the bulk
buffer when it encounters an error during CREATE..SELECT. The problem
is that during transaction cleanup, InnoDB fails to identify a bulk
insert done on behalf of a DDL operation.
Fix:
====
- Represent bulk_insert in trx by 2 bits. This lets InnoDB
distinguish between TRX_DML_BULK and TRX_DDL_BULK. During DDL,
set the bulk insert value of the transaction to TRX_DDL_BULK.
- Introduce a parameter HA_EXTRA_ABORT_ALTER_COPY which rolls back
only TRX_DDL_BULK transactions.
- bulk_insert_apply() for a TRX_DDL_BULK transaction happens
only during the HA_EXTRA_END_ALTER_COPY extra() call.
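A minimal sketch of the idea (hypothetical names and layout, not the
actual InnoDB code): keep the bulk state in a 2-bit field so that
cleanup can tell DML and DDL bulk inserts apart, and roll the buffer
back on HA_EXTRA_ABORT_ALTER_COPY only in the DDL case.

  enum trx_bulk_state : unsigned {
    TRX_NO_BULK=  0,  /* no bulk insert in progress */
    TRX_DML_BULK= 1,  /* bulk insert started by INSERT..SELECT etc. */
    TRX_DDL_BULK= 2   /* bulk insert started by ALTER TABLE..ALGORITHM=COPY
                         or CREATE TABLE..SELECT */
  };

  struct trx_sketch {
    unsigned bulk_insert:2;  /* one of trx_bulk_state */

    /* extra(HA_EXTRA_ABORT_ALTER_COPY): discard the bulk buffer only
       when it was created on behalf of a DDL statement; DML bulk
       inserts are left to the normal rollback path. */
    void abort_alter_copy()
    {
      if (bulk_insert == TRX_DDL_BULK)
        bulk_insert= TRX_NO_BULK;  /* and free the buffered rows here */
    }
  };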
in row_update_for_mysql
Commit 932ec586 (MDEV-23644) added a call to ha_delete_row() in
TABLE::delete_row() for the case of HA_ERR_FOREIGN_DUPLICATE_KEY.
The problem is that ha_update_row(), called beforehand, may change
m_last_part, which ha_delete_row() needs to delete from the correct
partition.
The fix restores the previous value of m_last_part in case
ha_partition::update_row() fails.
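An illustrative sketch only (toy types, hypothetical names; the real
fix lives in ha_partition::update_row()): remember m_last_part before
the per-partition update and put it back on failure, so a following
delete targets the right partition.

  #include <cstdio>

  /* Toy model of the handler state involved, not the actual class. */
  struct partition_handler_sketch
  {
    unsigned m_last_part= 0;

    int update_row(unsigned target_part, bool fail)
    {
      unsigned saved_last_part= m_last_part;
      m_last_part= target_part;       /* the update path moves m_last_part */
      if (fail)
      {
        m_last_part= saved_last_part; /* the fix: revert on failure so a
                                         later delete_row() addresses the
                                         old partition */
        return 1;                     /* e.g. HA_ERR_FOREIGN_DUPLICATE_KEY */
      }
      return 0;
    }
  };

  int main()
  {
    partition_handler_sketch h;
    h.update_row(3, true);
    std::printf("m_last_part after failed update: %u\n", h.m_last_part);
    return 0;
  }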
While the ALTER thread tries to notify the SELECT thread about a lock
conflict, it accesses that thread's TABLE object
(THD::notify_shared_lock()) and lock data
(mysql_lock_abort_for_thread()). As part of accessing the lock data it
calls ha_partition::store_lock(), which iterates over all partitions
and calls their store_lock().
The problem is that the SELECT opened only 2 partitions for reading,
but ha_partition::store_lock() tries to access all partitions as
indicated by m_tot_parts, which is 4. So the handlers of the last 2
partitions, m_file[2] and m_file[3], are uninitialized and
store_lock() accesses uninitialized data.
The code in ha_partition::store_lock() deliberately uses all
partitions specifically for the case of mysql_lock_abort_for_thread();
this is accompanied by the comment:
  /*
    This can be called from get_lock_data() in mysql_lock_abort_for_thread(),
    even when thd != table->in_use. In that case don't use partition pruning,
    but use all partitions instead to avoid using another threads structures.
  */
  if (thd != table->in_use)
  {
    for (i= 0; i < m_tot_parts; i++)
      to= m_file[i]->store_lock(thd, to, lock_type);
  }
The explanation "to avoid using another threads structures" does not
really explain why this change was needed.
The change was originally introduced by:
commit 9b7cccaf319
Author: Mattias Jonsson <mattias.jonsson@oracle.com>
Date: Wed May 30 00:14:39 2012 +0200
WL#4443:
final code change for dlenevs review.
- Don't use pruning in lock_count().
- Don't use pruning in store_lock() if not owning thd.
- Renamed is_fields_used_in_trigger to
is_fields_updated_in_trigger() and check if they
may be updated.
- moved out mark_fields_used(TRG_EVENT_UPDATE)
from mark_columns_needed_for_update().
And reverted the changed call order. And call
mark_fields_used(TRG_EVENT_UPDATE) instead.
which also fails to explain the rationale for the change. The original
idea of WL#4443 was to reduce locking, and this change does not serve
that goal.
So reverting this change restores the original behaviour of using only
the partitions marked for use and fixes the invalid access to
uninitialized data.
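For reference, a simplified model of the restored behaviour (the real
code walks the partition bitmaps with the my_bitmap helpers;
std::bitset is used here only to keep the sketch self-contained): only
partitions the statement actually opened are visited, so the m_file[]
entries of pruned-out partitions are never touched.

  #include <bitset>
  #include <cstdio>

  struct store_lock_sketch
  {
    static const unsigned m_tot_parts= 4;
    std::bitset<m_tot_parts> lock_partitions; /* partitions in use */

    void store_lock()
    {
      /* Visit only used partitions; handlers of pruned-out partitions
         (m_file[2] and m_file[3] in the bug scenario) stay untouched. */
      for (unsigned i= 0; i < m_tot_parts; i++)
        if (lock_partitions.test(i))
          std::printf("store_lock on partition %u\n", i);
    }
  };

  int main()
  {
    store_lock_sketch h;
    h.lock_partitions.set(0);
    h.lock_partitions.set(1);  /* the SELECT opened only 2 of 4 partitions */
    h.store_lock();
    return 0;
  }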
Allows index condition pushdown for reverse ordered scans, a previously
disabled feature due to poor performance. This patch adds a new
API to the handler class called set_end_range which allows callers to
tell the handler what the end of the index range will be when scanning.
Combined with a pushed index condition, the handler can scan the index
efficiently and not read beyond the end of the given range. When
checking if the pushed index condition matches, the handler will also
check if scanning has reached the end of the provided range and stop if
so.
If we instead only enabled ICP for reverse ordered scans without
also calling this new API, then the handler would perform unnecessary
index condition checks. In fact this would continue until the end of
the index is reached.
These changes are agnostic of the storage engine. That is, any storage
engine that supports index condition pushdown will inherit this new
behavior, as it is implemented in the SQL and storage engine
API layers.
The partitioned tables storage meta-engine (ha_partition) adds an
override of set_end_range which recursively calls set_end_range on its
child storage engine (handler) implementations.
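A sketch of that forwarding pattern (simplified, hypothetical
signatures; the real handler API differs): the partition handler fans
the end-of-range information out to its child handlers.

  #include <vector>

  struct key_range_sketch { /* end key of the scanned range (simplified) */ };

  struct handler_sketch
  {
    virtual ~handler_sketch() {}
    const key_range_sketch *end_range= nullptr;

    /* Tell the handler where the scanned index range ends, so a
       reverse scan with a pushed index condition can stop at the
       range boundary instead of checking rows up to the start of
       the index. */
    virtual void set_end_range(const key_range_sketch *end)
    { end_range= end; }
  };

  struct ha_partition_sketch : handler_sketch
  {
    std::vector<handler_sketch*> m_file; /* one child handler per partition */

    void set_end_range(const key_range_sketch *end) override
    {
      handler_sketch::set_end_range(end);
      for (handler_sketch *child : m_file) /* forward to every partition */
        child->set_end_range(end);
    }
  };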
This commit updates the test made in an earlier commit to show that
ICP matches happen for the reverse ordered case.
This patch is based on changes written by Olav Sandstaa in
MySQL commit da1d92fd46071cd86de61058b6ea39fd9affcd87
* rpl.rpl_system_versioning_partitions updated for MDEV-32188
* innodb.row_size_error_log_warnings_3 changed error for MDEV-33658
(checks are done in a different order)
It is not clear that the three Compare_keys enum values form a
hierarchy the way alter algorithms do. So, to be on the safe side (in
case MDEV-22168 gets implemented without looking at this code), we
should return NotEqual if the partition engines do not return the same
enum value. There is a static merge function that merges these enums,
but it is used to merge the Compare_keys of different key parts
(horizontally), rather than of different partitions (vertically).
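A minimal sketch of the "vertical" merge described above (the value
names are assumptions made for the example): any disagreement between
partitions collapses to NotEqual.

  /* Assumed value set; the real enum lives in the server sources. */
  enum class Compare_keys
  { Equal, EqualButKeyPartLength, EqualButComment, NotEqual };

  /* Merge the results of two partitions: unless both partitions
     agree, be conservative and report NotEqual. */
  static Compare_keys merge_partition_results(Compare_keys a, Compare_keys b)
  {
    return a == b ? a : Compare_keys::NotEqual;
  }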
The partitioning error handling code was looking at
thd->lex->alter_info.partition_flags in non-ALTER TABLE cases, where
the value is stale and contains whatever was set by an earlier ALTER
TABLE. This could cause the wrong error code to be generated, which in
some cases can then break replication with a "different errorcode"
error.
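A sketch of the kind of guard this implies (names simplified and
hypothetical; the real check sits in the partitioning error handling
path): consult the ALTER-specific flags only when the current
statement actually is an ALTER TABLE.

  /* Simplified stand-ins for the server structures involved. */
  enum sql_command_sketch { SQLCOM_ALTER_TABLE_SKETCH, SQLCOM_OTHER_SKETCH };

  struct lex_sketch
  {
    sql_command_sketch sql_command;
    unsigned alter_partition_flags; /* stale unless this is ALTER TABLE */
  };

  static bool alter_partition_flag_set(const lex_sketch &lex, unsigned flag)
  {
    /* Only trust alter_info.partition_flags for ALTER TABLE statements;
       for anything else the field still holds the previous statement's
       value and must be ignored. */
    return lex.sql_command == SQLCOM_ALTER_TABLE_SKETCH &&
           (lex.alter_partition_flags & flag) != 0;
  }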
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
Partial commit of the greater MDEV-34348 scope.
MDEV-34348: MariaDB is violating clang-16 -Wcast-function-type-strict
Change the type of my_hash_get_key to:
1) Return const
2) Change the context parameter to be const void*
Also fix casting in adjacent hash code.
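Roughly, the change to the callback type looks like this (a sketch;
the exact typedef in the tree may differ in detail):

  #include <stddef.h>

  typedef unsigned char uchar;  /* stand-ins so the sketch is */
  typedef char my_bool;         /* self-contained             */

  /* before: non-const record pointer in and out */
  typedef uchar *(*my_hash_get_key_old)(const uchar *record,
                                        size_t *length, my_bool first);

  /* after: const-correct, and the record is passed as const void*,
     so callbacks no longer need function-type casts */
  typedef const uchar *(*my_hash_get_key)(const void *record,
                                          size_t *length, my_bool first);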
Reviewed By:
============
Marko Mäkelä <marko.makela@mariadb.com>
Partial commit of the greater MDEV-34348 scope.
MDEV-34348: MariaDB is violating clang-16 -Wcast-function-type-strict
The functions queue_compare, qsort2_cmp, and qsort_cmp2
all had similar interfaces, and were used interchangeably
and unsafely cast to one another.
This patch consolidates the functions all into the
qsort_cmp2 interface.
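For illustration, the shape of the problem and of the consolidation
(the parameter lists below are assumptions, not the exact typedefs):
three near-identical three-argument comparator types invited
function-pointer casts; keeping a single type removes the casts.

  /* Assumed shapes of the old, near-identical comparator types. */
  typedef int (*queue_compare_old)(void *arg, unsigned char *a,
                                   unsigned char *b);
  typedef int (*qsort2_cmp_old)(void *arg, const void *a, const void *b);

  /* The single surviving interface: every caller and callback uses
     this type, so no function-pointer casts (which clang-16 flags
     under -Wcast-function-type-strict) are needed. */
  typedef int (*qsort_cmp2)(void *arg, const void *a, const void *b);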
Reviewed By:
============
Marko Mäkelä <marko.makela@mariadb.com>
Create templates
thd->alloc<X>(n) to use instead of (X*)thd->alloc(sizeof(X)*n),
and the same for thd->calloc(). By default the type is char,
so the old usage thd->alloc(size) works too.
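A sketch of what such a wrapper can look like (assuming the usual
MEM_ROOT-backed THD allocator; the actual member functions in the tree
may differ in detail):

  #include <stddef.h>
  #include <cstdlib>
  #include <cstring>

  struct thd_alloc_sketch
  {
    /* Stand-in for the untyped allocator; the real one hands out
       memory from the statement MEM_ROOT, not from the heap. */
    void *alloc_bytes(size_t size) { return std::malloc(size); }

    template <typename T= char>
    T *alloc(size_t nr= 1)
    { return static_cast<T*>(alloc_bytes(sizeof(T) * nr)); }

    template <typename T= char>
    T *calloc(size_t nr= 1)
    {
      void *p= alloc_bytes(sizeof(T) * nr);
      if (p)
        std::memset(p, 0, sizeof(T) * nr);
      return static_cast<T*>(p);
    }
  };

  /* usage:       int *a= thd.alloc<int>(10);
     instead of:  int *a= (int*)thd.alloc(10 * sizeof(int));  */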
When a DDL statement results in a local partition table with
partitions not covering all values in the table, a failure is emitted.
However, when the table in question is a spider table, the issue does
not surface until some future statements (DELETE in the test examples
in this commit) are executed. This is consistent with the design of
spider which aims to minimise connections with the data node. The
resulting error is legitimate and should not result in an assertion
failure. Similarly, a partitioned spider table could have misplaced
rows, so we remove the other assertion as well.