MDEV-26642/MDEV-26643/MDEV-32898 Implement innodb_snapshot_isolation

https://jepsen.io/analyses/mysql-8.0.34 highlights that the
transaction isolation levels in the InnoDB storage engine do not
correspond to any widely accepted definitions, such as
"Generalized Isolation Level Definitions"
https://pmg.csail.mit.edu/papers/icde00.pdf
(PL-1 = READ UNCOMMITTED, PL-2 = READ COMMITTED, PL-2.99 = REPEATABLE READ,
PL-3 = SERIALIZABLE).
Only READ UNCOMMITTED in InnoDB seems to match the above definition.

The issue is that InnoDB does not detect write/write conflicts
as defined in Section 4.4.3 (Definition 6) of the above paper.

It appears that as soon as we implement write/write conflict detection
(SET SESSION innodb_snapshot_isolation=ON), the default isolation level
(SET TRANSACTION ISOLATION LEVEL REPEATABLE READ) will become
Snapshot Isolation (similar to Postgres), as defined in Section 4.2 of
"A Critique of ANSI SQL Isolation Levels", MSR-TR-95-51, June 1995
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-51.pdf

Locking reads inside InnoDB used to read the latest committed version,
ignoring what should actually be visible to the transaction.
The added test innodb.lock_isolation illustrates this. The statement
	UPDATE t SET a=3 WHERE b=2;
is executed in a transaction that was started before a read view or
a snapshot of the current transaction was created, and committed before
the current transaction attempts to execute
	UPDATE t SET b=3;
If SET innodb_snapshot_isolation=ON is in effect when the second
transaction is started, that transaction will be aborted with
the error ER_CHECKREAD. By default (innodb_snapshot_isolation=OFF),
the second transaction would proceed and produce an inconsistent result:
SELECT COUNT(*) FROM t within its read view would return an incorrect count.
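
In condensed form (a sketch abbreviated from the added test, with
connection switches shown as comments), the scenario is:
	CREATE TABLE t(a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
	INSERT INTO t VALUES (1,1),(2,2);
	# connection A
	BEGIN;
	SELECT * FROM t LOCK IN SHARE MODE;
	# connection B
	SET innodb_snapshot_isolation=ON;
	BEGIN;
	SELECT * FROM t;            # the read view sees (1,1),(2,2)
	# connection A
	UPDATE t SET a=3 WHERE b=2;
	COMMIT;
	# connection B
	UPDATE t SET b=3;           # aborted with ER_CHECKREAD; with the default
	                            # innodb_snapshot_isolation=OFF it would "succeed"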

If innodb_snapshot_isolation=ON and an attempt is made to acquire a lock
on a record that is not visible in the current read view, the error
DB_RECORD_CHANGED (HA_ERR_RECORD_CHANGED, ER_CHECKREAD) will be raised.
This error is treated in the same way as a deadlock: the transaction
will be rolled back.
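
A client is expected to handle ER_CHECKREAD (error 1020) in the same way
as ER_LOCK_DEADLOCK, that is, by re-running the whole transaction.
A minimal sketch (hypothetical application logic, not part of this commit):
	BEGIN;
	UPDATE t SET b=3;           # may fail with ER_CHECKREAD
	COMMIT;
	# on ER_CHECKREAD the transaction has already been rolled back;
	# simply execute the same transaction again:
	BEGIN;
	UPDATE t SET b=3;
	COMMIT;
For the same reason, error 1020 is added to the default value of
slave_transaction_retry_errors, so that replication applier threads will
retry such transactions.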

lock_clust_rec_read_check_and_lock(): If the current transaction has
a read view where the record is not visible and
innodb_snapshot_isolation=ON, fail before trying to acquire the lock.

row_sel_build_committed_vers_for_mysql(): If innodb_snapshot_isolation=ON,
disable the "semi-consistent read" logic that I had implemented at the
direction of Heikki Tuuri to address
https://bugs.mysql.com/bug.php?id=3300, which was motivated by a customer
who wanted UPDATE to skip locked rows that do not match the WHERE condition.
It looks like my changes were included in the MySQL 5.1.5
commit ad126d90e019f223470e73e1b2b528f9007c4532; at that time, employees
of Innobase Oy (which Oracle had recently acquired) had lost write access
to the repository.
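
For context, the "semi-consistent read" lets an UPDATE running at
READ UNCOMMITTED or READ COMMITTED evaluate the WHERE condition against
the latest committed version of a row that is locked by another
transaction, and skip the row instead of waiting for the lock when that
version does not match. A minimal sketch (hypothetical data; the
MDEV-26643 test added by this commit exercises the same logic):
	CREATE TABLE t(a INT, b INT) ENGINE=InnoDB;
	INSERT INTO t VALUES (1,1);
	# connection A
	BEGIN;
	UPDATE t SET a=10;          # locks the row; not yet committed
	# connection B
	SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
	UPDATE t SET b=20 WHERE a=10;
	                            # semi-consistent read: the latest committed
	                            # version has a=1, so the row is skipped
	                            # without waiting for the lock
With SET innodb_snapshot_isolation=ON the semi-consistent read is
disabled: the second UPDATE waits for connection A's lock and
re-evaluates the row after that transaction commits.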

The only reason why we set innodb_snapshot_isolation=OFF by default is
backward compatibility with applications, such as the one that motivated
the implementation of "semi-consistent read" back in 2005. In a later
major release, we can default to innodb_snapshot_isolation=ON.

Thanks to Peter Alvaro, Kyle Kingsbury and Alexey Gotsman for their work
on https://github.com/jepsen-io/ and to Kyle and Alexey for explanations
and some testing of this fix.

Thanks to Vladislav Lesin for the initial test for MDEV-26643,
as well as reviewing these changes.
Marko Mäkelä 2024-03-20 09:48:03 +02:00
parent ca07f62992
commit b8a6719889
26 changed files with 315 additions and 73 deletions

View File

@@ -103,7 +103,6 @@ connection con2;
# The following query should hang because con1 is locking the record
update t2 set a=2 where b = 0;
select * from t2;
--send
update t1 set x=2 where id = 0;
--sleep 2

View File

@@ -22,7 +22,6 @@ select * from t1;
connection con1;
begin work;
insert into t1 values (5);
select * from t1;
# Lock wait timeout set to 2 seconds in <THIS TEST>-master.opt; this
# statement will time out; in 5.0.13+, it will not roll back transaction.
--error ER_LOCK_WAIT_TIMEOUT

View File

@@ -89,11 +89,6 @@ id x
300 300
connection con2;
update t2 set a=2 where b = 0;
select * from t2;
b a
0 2
1 20
2 30
update t1 set x=2 where id = 0;
connection con1;
update t1 set x=1 where id = 0;

View File

@@ -1837,7 +1837,7 @@ slave-run-triggers-for-rbr NO
slave-skip-errors OFF
slave-sql-verify-checksum TRUE
slave-transaction-retries 10
slave-transaction-retry-errors 1158,1159,1160,1161,1205,1213,1429,2013,12701
slave-transaction-retry-errors 1158,1159,1160,1161,1205,1213,1020,1429,2013,12701
slave-transaction-retry-interval 0
slave-type-conversions
slow-launch-time 2

View File

@@ -96,11 +96,8 @@ a b c
DROP TABLE t1;
CREATE TABLE t1 (a INT, b INT, c INT GENERATED ALWAYS AS(a+b));
INSERT INTO t1(a, b) VALUES (1, 1), (2, 2), (3, 3), (4, 4);
connection con1;
# disable purge
BEGIN;
SELECT * FROM t0;
a
connect stop_purge,localhost,root,,;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
connection default;
DELETE FROM t1 WHERE a = 1;
UPDATE t1 SET a = 2, b = 2 WHERE a = 5;
@@ -109,10 +106,11 @@ SET DEBUG_SYNC= 'inplace_after_index_build SIGNAL uncommitted WAIT_FOR purged';
ALTER TABLE t1 ADD INDEX idx (c), ALGORITHM=INPLACE, LOCK=NONE;
connection con1;
SET DEBUG_SYNC= 'now WAIT_FOR uncommitted';
BEGIN;
DELETE FROM t1 WHERE a = 3;
UPDATE t1 SET a = 7, b = 7 WHERE a = 4;
INSERT INTO t1(a, b) VALUES (8, 8);
# enable purge
disconnect stop_purge;
COMMIT;
# wait for purge to process the deleted/updated records.
InnoDB 2 transactions not purged

View File

@@ -131,9 +131,8 @@ CREATE TABLE t1 (a INT, b INT, c INT GENERATED ALWAYS AS(a+b));
INSERT INTO t1(a, b) VALUES (1, 1), (2, 2), (3, 3), (4, 4);
connection con1;
--echo # disable purge
BEGIN; SELECT * FROM t0;
connect (stop_purge,localhost,root,,);
START TRANSACTION WITH CONSISTENT SNAPSHOT;
connection default;
DELETE FROM t1 WHERE a = 1;
@@ -148,13 +147,14 @@ send ALTER TABLE t1 ADD INDEX idx (c), ALGORITHM=INPLACE, LOCK=NONE;
connection con1;
SET DEBUG_SYNC= 'now WAIT_FOR uncommitted';
BEGIN;
DELETE FROM t1 WHERE a = 3;
UPDATE t1 SET a = 7, b = 7 WHERE a = 4;
INSERT INTO t1(a, b) VALUES (8, 8);
--echo # enable purge
disconnect stop_purge;
COMMIT;
--echo # wait for purge to process the deleted/updated records.

View File

@@ -431,10 +431,6 @@ a
connection con1;
begin work;
insert into t1 values (5);
select * from t1;
a
1
5
insert into t1 values (2);
ERROR HY000: Lock wait timeout exceeded; try restarting transaction
select * from t1;
@@ -509,10 +505,6 @@ a
connection con1;
begin work;
insert into t1 values (5);
select * from t1;
a
1
5
insert into t1 values (2);
ERROR HY000: Lock wait timeout exceeded; try restarting transaction
select * from t1;
@@ -1217,10 +1209,6 @@ a
connection con1;
begin work;
insert into t1 values (5);
select * from t1;
a
1
5
insert into t1 values (2);
ERROR HY000: Lock wait timeout exceeded; try restarting transaction
select * from t1;

View File

@@ -17,10 +17,6 @@ a
connection con1;
begin work;
insert into t1 values (5);
select * from t1;
a
1
5
insert into t1 values (2);
ERROR HY000: Lock wait timeout exceeded; try restarting transaction
select * from t1;

View File

@@ -0,0 +1,108 @@
#
# MDEV-26642 Weird SELECT view when a record is
# modified to the same value by two transactions
# MDEV-32898 Phantom rows caused by updates of PRIMARY KEY
#
CREATE TABLE t(a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
INSERT INTO t VALUES (1,1),(2,2);
BEGIN;
SELECT * FROM t LOCK IN SHARE MODE;
a b
1 1
2 2
connect con_weird,localhost,root;
BEGIN;
SELECT * FROM t;
a b
1 1
2 2
connect consistent,localhost,root;
SET innodb_snapshot_isolation=ON;
BEGIN;
SELECT * FROM t;
a b
1 1
2 2
connection default;
UPDATE t SET a=3 WHERE b=2;
COMMIT;
connection consistent;
UPDATE t SET b=3;
ERROR HY000: Record has changed since last read in table 't'
SELECT * FROM t;
a b
1 1
3 2
COMMIT;
connection con_weird;
UPDATE t SET b=3;
SELECT * FROM t;
a b
1 3
2 2
3 3
COMMIT;
connection default;
SELECT * FROM t;
a b
1 3
3 3
DROP TABLE t;
#
# MDEV-26643 Inconsistent behaviors of UPDATE under
# READ UNCOMMITTED and READ COMMITTED isolation level
#
CREATE TABLE t(a INT, b INT) ENGINE=InnoDB;
INSERT INTO t VALUES(NULL, 1), (2, 2);
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN;
UPDATE t SET a = 10;
connection consistent;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
UPDATE t SET b = 20 WHERE a;
connection default;
COMMIT;
connection consistent;
SELECT * FROM t;
a b
10 20
10 20
connection default;
TRUNCATE TABLE t;
INSERT INTO t VALUES(NULL, 1), (2, 2);
BEGIN;
UPDATE t SET a = 10;
connection consistent;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
UPDATE t SET b = 20 WHERE a;
connection default;
COMMIT;
connection consistent;
SELECT * FROM t;
a b
10 20
10 20
disconnect consistent;
connection default;
TRUNCATE TABLE t;
INSERT INTO t VALUES(NULL, 1), (2, 2);
BEGIN;
UPDATE t SET a = 10;
connection con_weird;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
UPDATE t SET b = 20 WHERE a;
connection default;
SELECT * FROM t;
a b
10 1
10 2
COMMIT;
connection con_weird;
COMMIT;
disconnect con_weird;
connection default;
SELECT * FROM t;
a b
10 1
10 20
DROP TABLE t;

View File

@@ -0,0 +1,110 @@
--source include/have_innodb.inc
--echo #
--echo # MDEV-26642 Weird SELECT view when a record is
--echo # modified to the same value by two transactions
--echo # MDEV-32898 Phantom rows caused by updates of PRIMARY KEY
--echo #
CREATE TABLE t(a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
INSERT INTO t VALUES (1,1),(2,2);
BEGIN; SELECT * FROM t LOCK IN SHARE MODE;
--connect con_weird,localhost,root
BEGIN;
SELECT * FROM t;
--connect consistent,localhost,root
SET innodb_snapshot_isolation=ON;
BEGIN;
SELECT * FROM t;
--connection default
UPDATE t SET a=3 WHERE b=2;
COMMIT;
--connection consistent
--error ER_CHECKREAD
UPDATE t SET b=3;
SELECT * FROM t;
COMMIT;
--connection con_weird
UPDATE t SET b=3;
SELECT * FROM t;
COMMIT;
--connection default
SELECT * FROM t;
DROP TABLE t;
--echo #
--echo # MDEV-26643 Inconsistent behaviors of UPDATE under
--echo # READ UNCOMMITTED and READ COMMITTED isolation level
--echo #
CREATE TABLE t(a INT, b INT) ENGINE=InnoDB;
INSERT INTO t VALUES(NULL, 1), (2, 2);
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN; UPDATE t SET a = 10;
--connection consistent
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
--send UPDATE t SET b = 20 WHERE a
--connection default
let $wait_condition=
select count(*) = 1 from information_schema.processlist
where state = 'Updating'
and info = 'UPDATE t SET b = 20 WHERE a';
--source include/wait_condition.inc
COMMIT;
--connection consistent
--reap
SELECT * FROM t;
--connection default
TRUNCATE TABLE t;
INSERT INTO t VALUES(NULL, 1), (2, 2);
BEGIN; UPDATE t SET a = 10;
--connection consistent
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
--send UPDATE t SET b = 20 WHERE a
--connection default
let $wait_condition=
select count(*) = 1 from information_schema.processlist
where info = 'UPDATE t SET b = 20 WHERE a';
--source include/wait_condition.inc
COMMIT;
--connection consistent
--reap
SELECT * FROM t;
--disconnect consistent
--connection default
TRUNCATE TABLE t;
INSERT INTO t VALUES(NULL, 1), (2, 2);
BEGIN; UPDATE t SET a = 10;
--connection con_weird
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
send UPDATE t SET b = 20 WHERE a;
--connection default
let $wait_condition=
select count(*) = 1 from information_schema.processlist
where state = 'Updating'
and info = 'UPDATE t SET b = 20 WHERE a';
--source include/wait_condition.inc
SELECT * FROM t;
COMMIT;
--connection con_weird
--reap
COMMIT;
--disconnect con_weird
--connection default
SELECT * FROM t;
DROP TABLE t;

View File

@@ -1,20 +1,20 @@
select @@global.slave_transaction_retry_errors;
@@global.slave_transaction_retry_errors
1158,1159,1160,1161,1205,1213,1429,2013,12701,10,20,5000,400
1158,1159,1160,1161,1205,1213,1020,1429,2013,12701,10,20,5000,400
select @@session.slave_transaction_retry_errors;
ERROR HY000: Variable 'slave_transaction_retry_errors' is a GLOBAL variable
show global variables like 'slave_transaction_retry_errors';
Variable_name Value
slave_transaction_retry_errors 1158,1159,1160,1161,1205,1213,1429,2013,12701,10,20,5000,400
slave_transaction_retry_errors 1158,1159,1160,1161,1205,1213,1020,1429,2013,12701,10,20,5000,400
show session variables like 'slave_transaction_retry_errors';
Variable_name Value
slave_transaction_retry_errors 1158,1159,1160,1161,1205,1213,1429,2013,12701,10,20,5000,400
slave_transaction_retry_errors 1158,1159,1160,1161,1205,1213,1020,1429,2013,12701,10,20,5000,400
select * from information_schema.global_variables where variable_name='slave_transaction_retry_errors';
VARIABLE_NAME VARIABLE_VALUE
SLAVE_TRANSACTION_RETRY_ERRORS 1158,1159,1160,1161,1205,1213,1429,2013,12701,10,20,5000,400
SLAVE_TRANSACTION_RETRY_ERRORS 1158,1159,1160,1161,1205,1213,1020,1429,2013,12701,10,20,5000,400
select * from information_schema.session_variables where variable_name='slave_transaction_retry_errors';
VARIABLE_NAME VARIABLE_VALUE
SLAVE_TRANSACTION_RETRY_ERRORS 1158,1159,1160,1161,1205,1213,1429,2013,12701,10,20,5000,400
SLAVE_TRANSACTION_RETRY_ERRORS 1158,1159,1160,1161,1205,1213,1020,1429,2013,12701,10,20,5000,400
set global slave_transaction_retry_errors=1;
ERROR HY000: Variable 'slave_transaction_retry_errors' is a read only variable
set session slave_transaction_retry_errors=1;

View File

@@ -1411,6 +1411,18 @@ NUMERIC_BLOCK_SIZE 0
ENUM_VALUE_LIST NULL
READ_ONLY NO
COMMAND_LINE_ARGUMENT OPTIONAL
VARIABLE_NAME INNODB_SNAPSHOT_ISOLATION
SESSION_VALUE OFF
DEFAULT_VALUE OFF
VARIABLE_SCOPE SESSION
VARIABLE_TYPE BOOLEAN
VARIABLE_COMMENT Use snapshot isolation (write-write conflict detection).
NUMERIC_MIN_VALUE NULL
NUMERIC_MAX_VALUE NULL
NUMERIC_BLOCK_SIZE NULL
ENUM_VALUE_LIST OFF,ON
READ_ONLY NO
COMMAND_LINE_ARGUMENT OPTIONAL
VARIABLE_NAME INNODB_SORT_BUFFER_SIZE
SESSION_VALUE NULL
DEFAULT_VALUE 1048576

View File

@@ -858,7 +858,7 @@ static void make_slave_transaction_retry_errors_printable(void)
}
#define DEFAULT_SLAVE_RETRY_ERRORS 9
static constexpr uint DEFAULT_SLAVE_RETRY_ERRORS= 10;
bool init_slave_transaction_retry_errors(const char* arg)
{
@@ -900,9 +900,10 @@ bool init_slave_transaction_retry_errors(const char* arg)
slave_transaction_retry_errors[3]= ER_NET_WRITE_INTERRUPTED;
slave_transaction_retry_errors[4]= ER_LOCK_WAIT_TIMEOUT;
slave_transaction_retry_errors[5]= ER_LOCK_DEADLOCK;
slave_transaction_retry_errors[6]= ER_CONNECT_TO_FOREIGN_DATA_SOURCE;
slave_transaction_retry_errors[7]= 2013; /* CR_SERVER_LOST */
slave_transaction_retry_errors[8]= 12701; /* ER_SPIDER_REMOTE_SERVER_GONE_AWAY_NUM */
slave_transaction_retry_errors[6]= ER_CHECKREAD;
slave_transaction_retry_errors[7]= ER_CONNECT_TO_FOREIGN_DATA_SOURCE;
slave_transaction_retry_errors[8]= 2013; /* CR_SERVER_LOST */
slave_transaction_retry_errors[9]= 12701; /* ER_SPIDER_REMOTE_SERVER_GONE_AWAY_NUM */
/* Add user codes after this */
for (p= arg, i= DEFAULT_SLAVE_RETRY_ERRORS; *p; )

View File

@@ -881,6 +881,10 @@ static MYSQL_THDVAR_BOOL(table_locks, PLUGIN_VAR_OPCMDARG,
/* check_func */ NULL, /* update_func */ NULL,
/* default */ TRUE);
static MYSQL_THDVAR_BOOL(snapshot_isolation, PLUGIN_VAR_OPCMDARG,
"Use snapshot isolation (write-write conflict detection).",
NULL, NULL, FALSE);
static MYSQL_THDVAR_BOOL(strict_mode, PLUGIN_VAR_OPCMDARG,
"Use strict mode when evaluating create options.",
NULL, NULL, TRUE);
@@ -2238,6 +2242,9 @@ convert_error_code_to_mysql(
return(HA_ERR_LOCK_DEADLOCK);
case DB_RECORD_CHANGED:
return HA_ERR_RECORD_CHANGED;
case DB_LOCK_WAIT_TIMEOUT:
/* Starting from 5.0.13, we let MySQL just roll back the
latest SQL statement in a lock wait timeout. Previously, we
@@ -2881,6 +2888,8 @@ innobase_trx_init(
trx->check_unique_secondary = !thd_test_options(
thd, OPTION_RELAXED_UNIQUE_CHECKS);
trx->snapshot_isolation = THDVAR(thd, snapshot_isolation) & 1;
#ifdef WITH_WSREP
trx->wsrep = wsrep_on(thd);
#endif
@@ -4431,7 +4440,7 @@ innobase_start_trx_and_assign_read_view(
Do this only if transaction is using REPEATABLE READ isolation
level. */
trx->isolation_level = innobase_map_isolation_level(
thd_get_trx_isolation(thd));
thd_get_trx_isolation(thd)) & 3;
if (trx->isolation_level == TRX_ISO_REPEATABLE_READ) {
trx->read_view.open(trx);
@@ -15376,7 +15385,7 @@ ha_innobase::check(
}
/* Restore the original isolation level */
m_prebuilt->trx->isolation_level = old_isolation_level;
m_prebuilt->trx->isolation_level = old_isolation_level & 3;
#ifdef BTR_CUR_HASH_ADAPT
# if defined UNIV_AHI_DEBUG || defined UNIV_DEBUG
/* We validate the whole adaptive hash index for all tables
@@ -16431,7 +16440,7 @@ ha_innobase::store_lock(
if (lock_type != TL_IGNORE
&& trx->n_mysql_tables_in_use == 0) {
trx->isolation_level = innobase_map_isolation_level(
(enum_tx_isolation) thd_tx_isolation(thd));
(enum_tx_isolation) thd_tx_isolation(thd)) & 3;
if (trx->isolation_level <= TRX_ISO_READ_COMMITTED) {
@@ -19802,6 +19811,7 @@ static struct st_mysql_sys_var* innobase_system_variables[]= {
MYSQL_SYSVAR(ft_server_stopword_table),
MYSQL_SYSVAR(ft_user_stopword_table),
MYSQL_SYSVAR(disable_sort_file_cache),
MYSQL_SYSVAR(snapshot_isolation),
MYSQL_SYSVAR(stats_on_metadata),
MYSQL_SYSVAR(stats_transient_sample_pages),
MYSQL_SYSVAR(stats_persistent),

View File

@@ -864,6 +864,9 @@ my_error_innodb(
case DB_DEADLOCK:
my_error(ER_LOCK_DEADLOCK, MYF(0));
break;
case DB_RECORD_CHANGED:
my_error(ER_CHECKREAD, MYF(0), table);
break;
case DB_LOCK_WAIT_TIMEOUT:
my_error(ER_LOCK_WAIT_TIMEOUT, MYF(0));
break;

View File

@@ -32,23 +32,25 @@ Created 5/24/1996 Heikki Tuuri
enum dberr_t {
DB_SUCCESS,
DB_SUCCESS_LOCKED_REC = 9, /*!< like DB_SUCCESS, but a new
DB_SUCCESS_LOCKED_REC= 9, /*!< like DB_SUCCESS, but a new
explicit record lock was created */
/* The following are error codes */
DB_ERROR = 11,
DB_RECORD_CHANGED,
DB_ERROR,
DB_INTERRUPTED,
DB_OUT_OF_MEMORY,
DB_OUT_OF_FILE_SPACE,
DB_LOCK_WAIT,
DB_DEADLOCK,
DB_ROLLBACK,
DB_DUPLICATE_KEY,
DB_MISSING_HISTORY, /*!< required history data has been
deleted due to lack of space in
rollback segment */
DB_CLUSTER_NOT_FOUND = 30,
DB_TABLE_NOT_FOUND,
#ifdef WITH_WSREP
DB_ROLLBACK,
#endif
DB_TABLE_NOT_FOUND= 31,
DB_TOO_BIG_RECORD, /*!< a record in an index would not fit
on a compressed page, or it would
become bigger than 1/2 free space in

View File

@@ -370,6 +370,12 @@ row_search_index_entry(
mtr_t* mtr) /*!< in: mtr */
MY_ATTRIBUTE((nonnull, warn_unused_result));
/** Get the byte offset of the DB_TRX_ID column
@param[in] rec clustered index record
@param[in] index clustered index
@return the byte offset of DB_TRX_ID, from the start of rec */
ulint row_trx_id_offset(const rec_t* rec, const dict_index_t* index);
#define ROW_COPY_DATA 1
#define ROW_COPY_POINTERS 2

View File

@@ -739,13 +739,19 @@ public:
const char* op_info; /*!< English text describing the
current operation, or an empty
string */
uint isolation_level;/*!< TRX_ISO_REPEATABLE_READ, ... */
bool check_foreigns; /*!< normally TRUE, but if the user
wants to suppress foreign key checks,
(in table imports, for example) we
set this FALSE */
/** TRX_ISO_REPEATABLE_READ, ... */
unsigned isolation_level:2;
/** when set, REPEATABLE READ will actually be Snapshot Isolation, due to
detecting write/write conflicts and disabling "semi-consistent read" */
unsigned snapshot_isolation:1;
/** normally set; "SET foreign_key_checks=0" can be issued to suppress
foreign key checks, in table imports, for example */
unsigned check_foreigns:1;
/** normally set; "SET unique_checks=0, foreign_key_checks=0"
enables bulk insert into an empty table */
unsigned check_unique_secondary:1;
/** whether an insert into an empty table is active */
bool bulk_insert;
unsigned bulk_insert:1;
/*------------------------------*/
/* MySQL has a transaction coordinator to coordinate two phase
commit between multiple storage engines and the binary log. When
@@ -759,13 +765,6 @@ public:
/** whether this is holding the prepare mutex */
bool active_commit_ordered;
/*------------------------------*/
bool check_unique_secondary;
/*!< normally TRUE, but if the user
wants to speed up inserts by
suppressing unique key checks
for secondary indexes when we decide
if we can use the insert buffer for
them, we set this FALSE */
bool flush_log_later;/* In 2PC, we hold the
prepare_commit mutex across
both phases. In that case, we

View File

@@ -5949,6 +5949,14 @@ lock_clust_rec_read_check_and_lock(
return DB_SUCCESS;
}
if (heap_no > PAGE_HEAP_NO_SUPREMUM && gap_mode != LOCK_GAP
&& trx->snapshot_isolation
&& trx->read_view.is_open()
&& !trx->read_view.changes_visible(
trx_read_trx_id(rec + row_trx_id_offset(rec, index)))) {
return DB_RECORD_CHANGED;
}
dberr_t err = lock_rec_lock(false, gap_mode | mode,
block, heap_no, index, thr);

View File

@@ -695,6 +695,7 @@ handle_new_error:
DBUG_RETURN(true);
case DB_DEADLOCK:
case DB_RECORD_CHANGED:
case DB_LOCK_TABLE_FULL:
rollback:
/* Roll back the whole transaction; this resolution was added

View File

@@ -865,6 +865,11 @@ row_sel_build_committed_vers_for_mysql(
column version if any */
mtr_t* mtr) /*!< in: mtr */
{
if (prebuilt->trx->snapshot_isolation) {
*old_vers = rec;
return;
}
if (prebuilt->old_vers_heap) {
mem_heap_empty(prebuilt->old_vers_heap);
} else {

View File

@@ -190,7 +190,7 @@ row_undo_mod_clust_low(
@param[in] rec clustered index record
@param[in] index clustered index
@return the byte offset of DB_TRX_ID, from the start of rec */
static ulint row_trx_id_offset(const rec_t* rec, const dict_index_t* index)
ulint row_trx_id_offset(const rec_t* rec, const dict_index_t* index)
{
ut_ad(index->n_uniq <= MAX_REF_PARTS);
ulint trx_id_offset = index->trx_id_offset;

View File

@@ -412,12 +412,12 @@ void trx_t::free()
#endif
read_view.mem_noaccess();
MEM_NOACCESS(&lock, sizeof lock);
MEM_NOACCESS(&op_info, sizeof op_info);
MEM_NOACCESS(&isolation_level, sizeof isolation_level);
MEM_NOACCESS(&check_foreigns, sizeof check_foreigns);
MEM_NOACCESS(&op_info, sizeof op_info +
sizeof(unsigned) /* isolation_level, snapshot_isolation,
check_foreigns, check_unique_secondary,
bulk_insert */);
MEM_NOACCESS(&is_registered, sizeof is_registered);
MEM_NOACCESS(&active_commit_ordered, sizeof active_commit_ordered);
MEM_NOACCESS(&check_unique_secondary, sizeof check_unique_secondary);
MEM_NOACCESS(&flush_log_later, sizeof flush_log_later);
MEM_NOACCESS(&duplicates, sizeof duplicates);
MEM_NOACCESS(&dict_operation, sizeof dict_operation);

View File

@@ -312,14 +312,16 @@ ut_strerr(
return("Lock wait");
case DB_DEADLOCK:
return("Deadlock");
case DB_RECORD_CHANGED:
return("Record changed");
#ifdef WITH_WSREP
case DB_ROLLBACK:
return("Rollback");
#endif
case DB_DUPLICATE_KEY:
return("Duplicate key");
case DB_MISSING_HISTORY:
return("Required history data has been deleted");
case DB_CLUSTER_NOT_FOUND:
return("Cluster not found");
case DB_TABLE_NOT_FOUND:
return("Table not found");
case DB_TOO_BIG_RECORD:

View File

@@ -9,7 +9,7 @@ for slave1_1
connection slave1_1;
SHOW VARIABLES LIKE 'slave_transaction_retry_errors';
Variable_name Value
slave_transaction_retry_errors 1158,1159,1160,1161,1205,1213,1429,2013,12701,10000,20000,30000
slave_transaction_retry_errors 1158,1159,1160,1161,1205,1213,1020,1429,2013,12701,10000,20000,30000
connection slave1_1;
for slave1_1
for master_1

View File

@@ -9,7 +9,7 @@ for slave1_1
connection slave1_1;
SHOW VARIABLES LIKE 'slave_transaction_retry_errors';
Variable_name Value
slave_transaction_retry_errors 1158,1159,1160,1161,1205,1213,1429,2013,12701
slave_transaction_retry_errors 1158,1159,1160,1161,1205,1213,1020,1429,2013,12701
connection slave1_1;
for slave1_1
for master_1