Debian has support for three different ARM architectures, and the one
that is failing is probably a fairly old variant that does not support
the said instruction.
So we can make it specific to _aarch64_ (as per the official ARM
reference manual, ISB should be defined by all ARMv8 processors). [As
per the ARMv7 specs even 32-bit processors should support it, but it is
not clear which exact variant Debian ships under armel.]
In either case, the said performance issue mainly affects high-end
processors with extreme parallelism, so it is safe to limit the change
to _aarch64_.
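Below is a minimal sketch (the macro name is illustrative, not the
actual MariaDB source) of the kind of conditional compilation this
change implies: the ISB-based spin-loop hint is only emitted when
building for AArch64, while other targets keep a no-op.

  #if defined(__aarch64__)
  /* ISB is defined for all ARMv8 (AArch64) processors. */
  # define MY_SPIN_PAUSE() __asm__ __volatile__("isb" ::: "memory")
  #else
  /* Other targets: no special pause instruction in this sketch. */
  # define MY_SPIN_PAUSE() do {} while (0)
  #endif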
Corrects: MDEV-24630
Though these will all get cast to unsigned long
long when they are populated into the perfschema's BIGINT
type.
Use uintptr_t for NetBSD per Nia Alarie's original #1836.
This is a fix for operating systems that have pthread_t defined
as a pointer and use the default pthread_self() mechanism for
identifying threads. More specifically, this is a build fix
for NetBSD.
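A minimal sketch (the function name is illustrative, not the actual
code) of the conversion described above: where pthread_t is a pointer,
it is first cast to uintptr_t and only then widened to the unsigned
long long that ends up in the perfschema BIGINT column.

  #include <pthread.h>
  #include <stdint.h>

  static unsigned long long thread_id_for_perfschema(void)
  {
  #if defined(__NetBSD__)
    /* pthread_t is a pointer here; go through uintptr_t first. */
    return (unsigned long long) (uintptr_t) pthread_self();
  #else
    return (unsigned long long) pthread_self();
  #endif
  }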
Any changes I submit are freely available under the new BSD
license.
Signed-off-by: Nia Alarie <nia@NetBSD.org>
This commit reduces the likelihood of getting a busy port on
quick restarts with rsync SST (problem MDEV-25818) and fixes
a number of other flaws in SST scripts, adds new functionality,
and also synchronizes the xtrabackup-v2 script with the
mariabackup script (the latter applies only to the 10.2 branch):
1) SST via rsync: rsync and stunnel did not always get enough time to
finish by handling SIGTERM correctly. These utilities are now given
more time to complete normally (via normal SIGTERM processing) before
we move on to using "kill -9";
2) SST via rsync: attempts to terminate an rsync or stunnel process
(via the "kill" utility) are only made if it has not terminated on
its own;
3) SST via rsync: if a combination of stunnel and rsync is used,
then we need to wait for both utilities to finish or stop, not
just one of them;
4) The config file and pid file for stunnel are now deleted after
successful completion of SST on the donor node;
5) The configs and pid files from rsync and stunnel should not be
deleted unless these utilities succeed (or are successfully
terminated) on the joiner node;
6) The configs and pid files are now excluded from transfer via rsync;
7) Spaces in paths are now valid for config files as well (when
used with SST via rsync or mariabackup / xtrabackup[-v2]);
8) SST via mariabackup: added preliminary verification of keys and
certificates that are used when establishing a connection using
SSL (to avoid long timeouts and improve diagnostics), by analogy
with how it is done for xtrabackup-v2 (plus a check for the CA file);
this check is skipped if the user does not have openssl
(or the diff utility) installed;
9) Added the backup-threads=<n> configuration option, which adds
"--parallel=<n>" for mariabackup / xtrabackup at backup and
move-back stages;
10) Added encrypt-threads and encrypt-chunk-size configuration
options for xbcrypt management (when xbcrypt is used);
11) Small optimization: checking the socat version and adding
a file with parameters for 2048-bit Diffie-Hellman (if necessary)
is done only if the user has not specified "dhparam=" in the
"sockopt" option value;
12) SST via rsync now supports the "backup-threads" configuration option
(in server-related sections or in the "[sst]" section);
13) Determining the number of available processors is now supported
for FreeBSD + mariabackup/xtrabackup: before that we might have
problems with "--compact" (rebuild indexes) or qpress on FreeBSD;
14) The check_pid() function should not raise an error state in
the rare cases when the pid file was created but is empty,
or if it is deleted during the check, or when zero is read
from the pid file;
15) Improved the templates that are used to check whether a requested socket
is "listening" when using the ss utility;
16) Shortened some other templates for socket state utilities;
17) Temporary files created by mariabackup / xtrabackup are moved
to a separate subdirectory inside tmpdir (so they don't get
mixed with other temporary files, which can make debugging
more difficult);
18) 10.2 only: the script for SST via xtrabackup-v2 has been brought
into full compliance with all the bugfixes made for mariabackup (it
previously contained many flaws compared to the updated script
for mariabackup).
page_apply_insert_redundant(): Correct a condition that would
occasionally fail when recovering changes for the change buffer tree
(where extra_size and data_size can vary wildly).
This was broken in commit 138cbec5f2300bd5b401e83802642c1806264992
(MDEV-21724).
Both EXPLAIN and EXPLAIN EXTENDED produce different result sets for
UPDATE/DELETE statements with a subquery depending on whether they are
run in the normal way or in PS (prepared statement) mode.
The use case below reproduces the issue:
MariaDB [test]> CREATE TABLE t1 (c1 INT KEY) ENGINE=MyISAM;
Query OK, 0 rows affected (0,128 sec)
MariaDB [test]> CREATE TABLE t2 (c2 INT) ENGINE=MyISAM;
Query OK, 0 rows affected (0,023 sec)
MariaDB [test]> CREATE TABLE t3 (c3 INT) ENGINE=MyISAM;
Query OK, 0 rows affected (0,021 sec)
MariaDB [test]> EXPLAIN EXTENDED UPDATE t3 SET c3 =
-> ( SELECT COUNT(d1.c1) FROM ( SELECT a11.c1 FROM t1 AS a11
-> STRAIGHT_JOIN t2 AS a21 ON a21.c2 = a11.c1 JOIN t1 AS a12
-> ON a12.c1 = a11.c1 ) d1 );
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| 1 | PRIMARY | t3 | ALL | NULL | NULL | NULL | NULL | 0 | 100.00 | |
| 2 | SUBQUERY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Impossible WHERE noticed after reading const tables
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
2 rows in set (0,002 sec)
MariaDB [test]> PREPARE stmt FROM
-> EXPLAIN EXTENDED UPDATE t3 SET c3 =
-> ( SELECT COUNT(d1.c1) FROM ( SELECT a11.c1 FROM t1 AS a11
-> STRAIGHT_JOIN t2 AS a21 ON a21.c2 = a11.c1 JOIN t1 AS a12
-> ON a12.c1 = a11.c1 ) d1 );
Query OK, 0 rows affected (0,000 sec)
Statement prepared
MariaDB [test]> EXECUTE stmt;
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
| 1 | PRIMARY | t3 | ALL | NULL | NULL | NULL | NULL | 0 | 100.00 | |
| 2 | SUBQUERY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | no matching row in const table |
+------+-------------+-------+------+---------------+------+---------+------+------+----------+--------------------------------+
2 rows in set (0,000 sec)
The reason why different result sets are produced is that on execution
of the statement 'EXECUTE stmt' the flag SELECT_DESCRIBE is not set
in the data member SELECT_LEX::options for the instances of SELECT_LEX
that correspond to subqueries used in the UPDATE/DELETE statements.
Initially, these flags were set on parsing the statement
PREPARE stmt FROM "EXPLAIN EXTENDED UPDATE t3 SET ..."
but later they were reset before starting real execution of
the parsed query while handling the statement 'EXECUTE stmt'.
So, to fix the issue the functions mysql_update()/mysql_delete()
have been modified to forcibly set the flag SELECT_DESCRIBE
in the data member SELECT_LEX::options for the primary SELECT_LEX
of the UPDATE/DELETE statement.
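A minimal, self-contained sketch (the types are simplified stand-ins,
not the actual server code) of the shape of the fix: at the start of
UPDATE/DELETE processing the EXPLAIN flag is forced into the primary
SELECT_LEX's options, so that prepared-statement re-execution behaves
the same way as a normal run.

  #include <cstdint>

  static const uint64_t SELECT_DESCRIBE= 1ULL << 2;   // placeholder bit value

  struct SELECT_LEX { uint64_t options= 0; };

  // Stand-in for the relevant part of mysql_update()/mysql_delete().
  void start_update_or_delete(SELECT_LEX *select_lex, bool running_explain)
  {
    if (running_explain)
      select_lex->options|= SELECT_DESCRIBE;   // forcibly set, as the fix does
    // ... the rest of UPDATE/DELETE processing would follow ...
  }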
- As a solution, `PLUGIN_CONNECT=NO` uses an early check to disable the plugin
(solution suggested by wlad@mariadb.com).
- `JNI_FOUND` is an internal result variable and should be set once the cached
library and header variables (like `JAVA_INCLUDE_PATH`) are defined.
* Note: the wrapper cmake/FindJNI.cmake runs the first time, and the cmake
native Find<module> returns only cached variables, like `JAVA_INCLUDE_PATH`;
result variables are not cached.
Reviewed by: serg@mariadb.com
read_link_file(): Avoid GCC -Wtype-limits when char is unsigned,
by always using unsigned comparison. Also, avoid buffer overflow
when the .isl file consists entirely of spaces or control codes.
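A minimal sketch (the helper name and buffer handling are illustrative,
not the actual InnoDB code) of both points: the byte is compared as
unsigned so the check behaves the same whether plain char is signed or
unsigned, and the loop is bounded so it cannot walk past the start of
the buffer when the file contains only spaces or control codes.

  #include <cstddef>

  // Trim trailing spaces and control codes from the link path that was
  // read from the .isl file; 'len' is updated to the trimmed length.
  static void trim_link_buffer(char *link, size_t &len)
  {
    while (len > 0 && static_cast<unsigned char>(link[len - 1]) <= 0x20)
      len--;                        // stops at len == 0, so no underflow
    link[len]= '\0';
  }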
Back in 2006 or 2007, when MySQL AB and Innobase Oy existed as
separately controlled entities (Innobase had been acquired by
Oracle Corporation), MySQL 5.1 introduced a storage engine plugin
interface and Oracle made use of it by distributing a separate
InnoDB Plugin, which would contain some more bug fixes and
improvements, compared to the version of InnoDB that was statically
linked with the mysqld server that was distributed by MySQL AB.
The built-in InnoDB would export global symbols, which would clash
with the symbols of the dynamic InnoDB Plugin (which was supposed
to override the built-in one when present).
The solution to this problem was to declare all global symbols with
UNIV_INTERN, so that they would get the GCC function attribute that
specifies hidden visibility.
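A minimal sketch (the exact historical definition may have differed) of
what such a declaration amounts to on GCC-compatible compilers:

  #if defined(__GNUC__)
  # define UNIV_INTERN __attribute__((visibility("hidden")))
  #else
  # define UNIV_INTERN
  #endif

  // A symbol declared this way is kept out of the dynamic symbol table,
  // so it cannot clash with a same-named symbol exported by a plugin.
  UNIV_INTERN void example_innodb_function();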
Later, in MariaDB Server, something based on Percona XtraDB (a fork of
MySQL InnoDB) became the statically linked implementation, and something
closer to MySQL InnoDB was available as a dynamic plugin. Starting with
version 10.2, MariaDB Server includes only one InnoDB implementation,
and hence any reason to have the UNIV_INTERN definition was lost.
btr_get_size_and_reserved(): Move to the same compilation unit as
the only caller.
innodb_set_buf_pool_size(): Remove. Modify innobase_buffer_pool_size
directly.
fil_crypt_calculate_checksum(): Merge to the only caller.
ha_innobase::innobase_reset_autoinc(): Merge to the only caller.
thd_query_start_micro(): Remove. Call thd_start_utime() directly.
InnoDB should calculate the MBR for the first field of the
spatial index and compare it with the clustered
index field MBR. Due to the MDEV-25459 refactoring, InnoDB
calculates the length of the first field instead and fails with
a "too long column" error.
The only call of the virtual member function
handler::update_table_comment() was removed in
commit 82d28fada7dc928564aefac802400c6684c11917 (MySQL 5.5.53)
but the implementation was not removed.
The only non-trivial implementation was for InnoDB. The information
is now returned via handler::get_foreign_key_create_info() and
ha_statistics::delete_length.
- Better, easier-to-read code (no use of 'random' constants).
- All defines are now unique, so it is easier to find bugs if
something goes wrong.
Other things:
- Created a sub-function for the common code in Aggregator_distinct::setup() and
Item_func_group_concat::setup() that sets item->marker.
- More documentation.
- Folded a few long lines.
- Almost all changes in item.cc, sql_lex.cc and sql_window.cc are done
with 'replace'.
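A minimal, illustrative sketch (the enumerator names are invented, not
the actual ones) of the direction of this change: every value that can
be stored in item->marker gets its own unique, named constant instead
of an inline 'random' number.

  enum item_marker
  {
    MARKER_UNUSED= 0,
    MARKER_PROCESSED_BY_CREATE_SORT_INDEX,
    MARKER_DISTINCT_GROUP
  };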
The problem was that when LOCK TABLES was unwound as part of
a killed connection, unlink_all_closed_tables() did not like that
there were uncommitted transactions.
Fixed by doing a rollback of any open transaction in this particular case.
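A minimal, self-contained sketch (all types and functions are
simplified stand-ins, not the actual server code) of the shape of the
fix: any open transaction is rolled back before the closed tables are
unlinked when LOCK TABLES is unwound for a killed connection.

  struct THD { bool transaction_open; };

  static void rollback_transaction(THD *thd) { thd->transaction_open= false; }

  static void unlink_all_closed_tables(THD *thd)
  {
    // In the real server this code did not expect an open transaction.
    (void) thd;
  }

  void unwind_lock_tables_for_killed_connection(THD *thd)
  {
    if (thd->transaction_open)   // the case that previously caused the problem
      rollback_transaction(thd);
    unlink_all_closed_tables(thd);
  }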
In the code that existed just before this patch, binding of a table reference to
the specification of the corresponding CTE happened in the function
open_and_process_table(). If the table reference is not the first in the
query, the specification is cloned in the same way as the specification of
a view is cloned for any reference to the view. This works fine for
standalone queries, but does not work for stored procedures / functions,
for the following reason.
When the first call of a stored procedure / function SP is processed, the
body of SP is parsed. When a query of SP is parsed, the info on each
encountered table reference is put into a TABLE_LIST object linked into
a global chain associated with the query. When parsing of the query is
finished, the basic info on the table references from this chain, except
table references to derived tables and information schema tables, is put
into one hash table associated with SP. When parsing of the body of SP is
finished, this hash table is used to construct TABLE_LIST objects for all
table references mentioned in SP and to link them into the list of such
objects passed to a pre-locking process that calls open_and_process_table()
for each table from the list.
When a TABLE_LIST for a view is encountered, the view is opened and its
specification is parsed. For any table reference that occurs in
the specification a new TABLE_LIST object is created to be included into
the list for pre-locking. After all objects in the pre-locking list have been
looked through, the tables mentioned in the list are locked. Note that the
objects that reference CTEs are just skipped here, as it is impossible to
resolve these references without any info on the context where they occur.
Now the statements from the body of SP are executed one by one.
At the very beginning of the execution of a query the tables used in the
query are opened, and open_and_process_table() is now called for each table
reference mentioned in the list of TABLE_LIST objects associated with the
query that was built when the query was parsed.
For each table reference, the reference is first checked against the CTE
definitions in whose scope it occurred. If such a definition is found, the
reference is considered resolved, and if this is not the first reference
to the found CTE, the specification of the CTE is re-parsed and the
result of the parsing is added to the parse tree of the query as a
sub-tree. If this sub-tree contains references to other tables, they
are added to the list of TABLE_LIST objects associated with the query so
that the referenced tables can be opened. When the procedure that opens
the tables comes to the TABLE_LIST object created for a non-first
reference to a CTE, it discovers that the referenced table instance is not
locked and reports an error.
Thus processing non-first table references to a CTE in a way similar to how
references to views are processed does not work for queries used in stored
procedures / functions. And the main problem is that the current
pre-locking mechanism employed for stored procedures / functions does not
allow saving the context in which a CTE reference occurs. It is not trivial
to save the info about the context where a CTE reference occurs, while the
resolution of the table reference cannot be done without this context, and
consequently the specification for the table reference cannot be
determined.
This patch solves the above problem by moving resolution of all CTE
references to the parsing stage. More exactly, references to CTEs that occur
in a query are resolved right after parsing of the query has finished. After
resolution, any CTE reference is marked as a reference to a derived
table. So it is excluded from the hash table created for pre-locking of the
used base tables and views when the first call of a stored procedure / function
is processed.
This solution required recursive calls of the parser. The function
THD::sql_parser() has been added specifically for recursive invocations of
the parser.
# Conflicts:
# sql/sql_cte.cc
# sql/sql_cte.h
# sql/sql_lex.cc
# sql/sql_lex.h
# sql/sql_view.cc
# sql/sql_yacc.yy
# sql/sql_yacc_ora.yy
The underlying problem with MDEV-25551 turned out to be that
transactions having changes for tables with no primary key
were not safe to apply in parallel. This is due to excessive locking
on the InnoDB side, where even unrelated row modifications could end up
in lock conflicts during applying.
The fix for MDEV-25551 has disabled parallel applying for tables with no PK.
This fix depends on a change in wsrep-lib, where a separate PR allows
the application to modify transaction flags in wsrep-lib.
This commit also has a separate mtr test for verifying that transactions
modifying a table with no primary key will not be applied in parallel.
This test is a modified version of the initial test created by Gabor Orosz,
the reporter of MDEV-25551.
Another mtr test was added in the galera_sr suite, for testing whether
modifying tables with no primary key would cause issues for streaming
replication use cases.
Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
In NetBSD 9.x and prior, udata is an intptr_t, but in 10.x (current
development branch) it was changed to be a void * for compatibility
with other BSDs a year or so ago.
Unfortunately, this does not simplify the code, as NetBSD 8.x and 9.x
are still supported and will be for a few more years.
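A minimal sketch (the macro name is illustrative and the version check
is an assumption, not the actual change) of how one call site can serve
both declarations of the kevent udata field:

  #if defined(__NetBSD__)
  # include <sys/param.h>   /* for __NetBSD_Version__ */
  #endif
  #include <stdint.h>

  #if defined(__NetBSD__) && __NetBSD_Version__ < 1000000000
  /* NetBSD 9.x and earlier: struct kevent's udata is an intptr_t. */
  # define MY_KEVENT_UDATA(ptr) ((intptr_t)(ptr))
  #else
  /* NetBSD 10.x and the other BSDs: udata is a void *. */
  # define MY_KEVENT_UDATA(ptr) ((void *)(ptr))
  #endif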
Signed-off-by: Nia Alarie <nia@NetBSD.org>