This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.2 release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.2 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.32 (see Section C.1.19, “Changes in MySQL 5.1.32 (14 February 2009)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
Important Change: Formerly, when the management server failed to create a transporter for a data node connection, net_write_timeout seconds elapsed before the data node was actually allowed to disconnect. Now in such cases the disconnection occurs immediately. (Bug#41965)
See also Bug#41713.
Important Change: Replication: RESET MASTER and RESET SLAVE now reset the values shown for Last_IO_Error, Last_IO_Errno, Last_SQL_Error, and Last_SQL_Errno in the output of SHOW SLAVE STATUS. (Bug#34654)
See also Bug#44270.
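For illustration, after this change the following sequence clears all four columns; the output sketch shows only the affected fields:
RESET SLAVE;
SHOW SLAVE STATUS\G
        ...
        Last_IO_Errno: 0
        Last_IO_Error:
        Last_SQL_Errno: 0
        Last_SQL_Error:
        ...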
Cluster Replication: Important Note: This release of MySQL Cluster derives in part from MySQL 5.1.29, where the default value for the --binlog-format option changed to STATEMENT. That change does not affect this or future MySQL Cluster NDB 6.x releases, where the default value for this option remains MIXED, since MySQL Cluster Replication does not work with the statement-based format. (Bug#40586)
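On a MySQL Cluster SQL node, the effective setting can be verified from the mysql client and should report MIXED:
SHOW VARIABLES LIKE 'binlog_format';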
Disk Data: It is now possible to specify default locations for Disk Data data files and undo log files, either together or separately, using the data node configuration parameters FileSystemPathDD, FileSystemPathDataFiles, and FileSystemPathUndoFiles. For information about these configuration parameters, see Disk Data filesystem parameters.
It is also now possible to specify a log file group, tablespace, or both, that is created when the cluster is started, using the InitialLogFileGroup and InitialTablespace data node configuration parameters. For information about these configuration parameters, see Disk Data object creation parameters.
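A minimal config.ini sketch using these parameters; the paths, object names, and sizes here are illustrative assumptions, not defaults:
[ndbd default]
# Default locations for Disk Data data files and undo log files (illustrative paths)
FileSystemPathDataFiles=/data/ndb/datafiles
FileSystemPathUndoFiles=/data/ndb/undofiles
# Log file group and tablespace to create at initial cluster start (illustrative names and sizes)
InitialLogFileGroup=name=LG1; undo_buffer_size=64M; undo1.log:250M
InitialTablespace=name=TS1; extent_size=8M; data1.dat:2G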
Bugs fixed:
Performance: Updates of the SYSTAB_0 system table to obtain a unique identifier did not use transaction hints for tables having no primary key. In such cases the NDB kernel used a cache size of 1. This meant that each insert into a table not having a primary key required an update of the corresponding SYSTAB_0 entry, creating a potential performance bottleneck.
With this fix, inserts on NDB tables without primary keys can under some conditions be performed up to 100% faster than previously. (Bug#39268)
Packaging: Packages for MySQL Cluster were missing the libndbclient.so and libndbclient.a files. (Bug#42278)
Partitioning: Executing ALTER TABLE ... REORGANIZE PARTITION on an NDBCLUSTER table having only one partition caused mysqld to crash. (Bug#41945)
See also Bug#40389.
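A sketch of the affected scenario, using an illustrative single-partition table; the exact REORGANIZE PARTITION variant involved may differ:
CREATE TABLE t1 (c1 INT PRIMARY KEY)
    ENGINE=NDBCLUSTER
    PARTITION BY KEY (c1) PARTITIONS 1;
ALTER TABLE t1 REORGANIZE PARTITION;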
Cluster API: Failed operations on BLOB and TEXT columns were not always reported correctly to the originating SQL node. Such errors were sometimes reported as being due to timeouts, when the actual problem was a transporter overload due to insufficient buffer space. (Bug#39867, Bug#39879)
Backup IDs greater than 2^31 were not handled correctly, causing negative values to be used in backup directory names and printouts. (Bug#43042)
When using ndbmtd, NDB kernel threads could hang while trying to start the data nodes with LockPagesInMainMemory set to 1. (Bug#43021)
When using multiple management servers and starting several API nodes (possibly including one or more SQL nodes) whose connectstrings listed the management servers in different order, it was possible for two API nodes to be assigned the same node ID. When this happened, it was possible for an API node not to get fully connected, consequently producing a number of errors whose cause was not easily recognizable. (Bug#42973)
ndb_error_reporter worked correctly only with GNU tar. (With other versions of tar, it produced empty archives.) (Bug#42753)
Triggers on NDBCLUSTER tables caused such tables to become locked. (Bug#42751)
When performing more than 32 index or tuple scans on a single fragment, the scans could be left hanging. This caused unnecessary timeouts, and could also cause a local checkpoint (LCP) to hang. (Bug#42559)
A data node failure that occurred between calls to NdbIndexScanOperation::readTuples(SF_OrderBy) and NdbTransaction::Execute() was not correctly handled; a subsequent call to nextResult() caused a null pointer to be dereferenced, leading to a segfault in mysqld. (Bug#42545)
Issuing SHOW GLOBAL STATUS LIKE 'NDB%' before mysqld had connected to the cluster caused a segmentation fault. (Bug#42458)
Data node failures that occurred before all data nodes had connected to the cluster were not handled correctly, leading to additional data node failures. (Bug#42422)
When a cluster backup failed with Error 1304 (Node node_id1: Backup request from node_id2 failed to start), no clear reason for the failure was provided.
As part of this fix, MySQL Cluster now retries backups in the event of sequence errors. (Bug#42354)
See also Bug#22698.
Issuing SHOW ENGINE NDBCLUSTER STATUS on an SQL node before the management server had connected to the cluster caused mysqld to crash. (Bug#42264)
A maximum of 11 TUP scans were allowed in parallel. (Bug#42084)
Trying to execute an ALTER ONLINE TABLE ... ADD COLUMN statement while inserting rows into the table caused mysqld to crash. (Bug#41905)
If the master node failed during a global checkpoint, it was possible in some circumstances for the new master to use an incorrect value for the global checkpoint index. This could occur only when the cluster used more than one node group. (Bug#41469)
API nodes disconnected too aggressively from the cluster when data nodes were being restarted. This could sometimes lead to the API node being unable to access the cluster at all during a rolling restart. (Bug#41462)
An abort path in the DBLQH kernel block failed to release a commit acknowledgement marker. This meant that, during node failure handling, the local query handler could be added multiple times to the marker record, which could lead to additional node failures due to an array overflow. (Bug#41296)
During node failure handling (of a data node other than the master), there was a chance that the master was waiting for a GCP_NODEFINISHED signal from the failed node after having received it from all other data nodes. If this occurred while the failed node had a transaction that was still being committed in the current epoch, the master node could crash in the DBTC kernel block when discovering that a transaction actually belonged to an epoch which was already completed. (Bug#41295)
If a transaction was aborted during the handling of a data node failure, this could lead to the later handling of an API node failure not being completed. (Bug#41214)
Given a MySQL Cluster containing no data (that is, whose data nodes had all been started using --initial, and into which no data had yet been imported) and having an empty backup directory, executing START BACKUP with a user-specified backup ID caused the data nodes to crash. (Bug#41031)
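For reference, a user-specified backup ID is given as an argument to START BACKUP in the management client; the ID here is an illustrative value:
ndb_mgm> START BACKUP 100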
Issuing EXIT in the management client sometimes caused the client to hang. (Bug#40922)
Redo log creation was very slow on some platforms, causing MySQL Cluster to start more slowly than necessary with some combinations of hardware and operating system. This was due to all write operations being synchronized to disk while creating a redo log file. Now this synchronization occurs only after the redo log has been created. (Bug#40734)
Transaction failures took longer to handle than was necessary.
When a data node acting as transaction coordinator (TC) failed, the surviving data nodes did not inform the API node initiating the transaction of this until the failure had been processed by all protocols. However, the API node needed to know only about failure handling by the transaction protocol, that is, about the TC takeover process. Now, API nodes (including MySQL servers acting as cluster SQL nodes) are informed as soon as the TC takeover is complete, so that they can carry on operating more quickly. (Bug#40697)
It was theoretically possible for stale data to be read from NDBCLUSTER tables when the transaction isolation level was set to ReadCommitted. (Bug#40543)
In some cases, NDB did not check correctly whether tables had changed before trying to use the query cache. This could result in a crash of the debug MySQL server. (Bug#40464)
Restoring a MySQL Cluster from a dump made using mysqldump failed due to a spurious error: Can't execute the given command because you have active locked tables or an active transaction. (Bug#40346)
O_DIRECT was incorrectly disabled when making MySQL Cluster backups. (Bug#40205)
Events logged after setting ALL CLUSTERLOG STATISTICS=15 in the management client did not always include the node ID of the reporting node. (Bug#39839)
Start phase reporting was inconsistent between the management client and the cluster log. (Bug#39667)
The MySQL Query Cache did not function correctly with NDBCLUSTER tables containing TEXT columns. (Bug#39295)
A segfault in Logger::Log caused ndbd to hang indefinitely. This fix improves on an earlier one for this issue, first made in MySQL Cluster NDB 6.2.16 and MySQL Cluster NDB 6.3.17. (Bug#39180)
See also Bug#38609.
Memory leaks could occur in handling of strings used for storing cluster metadata and providing output to users. (Bug#38662)
In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug#34526)
A duplicate key or other error raised when inserting into an NDBCLUSTER table caused the current transaction to abort, after which any SQL statement other than a ROLLBACK failed. With this fix, the NDBCLUSTER storage engine now performs an implicit rollback when a transaction is aborted in this way; it is no longer necessary to issue an explicit ROLLBACK statement, and the next statement that is issued automatically begins a new transaction.
It remains necessary in such cases to retry the complete transaction, regardless of which statement caused it to be aborted.
See also Bug#47654.
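A sketch of the behavior after this fix, assuming an illustrative NDBCLUSTER table t1 whose primary key column c1 already contains the value 1:
BEGIN;
INSERT INTO t1 (c1) VALUES (1);  -- duplicate key error; the transaction is rolled back implicitly
INSERT INTO t1 (c1) VALUES (2);  -- formerly failed until an explicit ROLLBACK; now begins a new transaction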
Error messages for NDBCLUSTER error codes 1224 and 1227 were missing. (Bug#28496)
Partitioning: A query on a user-partitioned table caused MySQL to crash, where the query had the following characteristics:
The query's WHERE clause referenced an indexed column that was also in the partitioning key.
The query's WHERE clause included a value found in the partition.
The query's WHERE clause used the < or <> operator to compare the indexed column's value with a constant.
The query used an ORDER BY clause, and the same indexed column was used in the ORDER BY clause.
The ORDER BY clause used an explicit or implicit ASC sort priority.
Two examples of such a query are given here, where a represents an indexed column used in the table's partitioning key:
SELECT * FROM table WHERE a < constant ORDER BY a;
SELECT * FROM table WHERE a <> constant ORDER BY a;
This regression was introduced by Bug#30573, Bug#33257, Bug#33555.
Partitioning: Dropping or creating an index on a partitioned table managed by the InnoDB Plugin locked the table. (Bug#37453)
Disk Data: It was not possible to add an in-memory column online to a table that used a table-level or column-level STORAGE DISK option. The same issue prevented ALTER ONLINE TABLE ... REORGANIZE PARTITION from working on Disk Data tables. (Bug#42549)
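A sketch of the operation this fix enables, assuming an illustrative Disk Data table t1 created with STORAGE DISK; the column name is hypothetical:
ALTER ONLINE TABLE t1 ADD COLUMN c2 INT STORAGE MEMORY;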
Disk Data: Issuing concurrent CREATE TABLESPACE, ALTER TABLESPACE, CREATE LOGFILE GROUP, or ALTER LOGFILE GROUP statements on separate SQL nodes caused a resource leak that led to data node crashes when these statements were used again later. (Bug#40921)
Disk Data: Disk-based variable-length columns were not always handled like their memory-based equivalents, which could potentially lead to a crash of cluster data nodes. (Bug#39645)
Disk Data: Creating a Disk Data tablespace with a very large extent size caused the data nodes to fail. The issue was observed when using extent sizes of 100 MB and larger. (Bug#39096)
Disk Data: Creation of a tablespace data file whose size was greater than 4 GB failed silently on 32-bit platforms. (Bug#37116)
See also Bug#29186.
Disk Data: O_SYNC was incorrectly disabled on platforms that do not support O_DIRECT. This issue was noted on Solaris but could have affected other platforms not having O_DIRECT capability. (Bug#34638)
Disk Data: Trying to execute a CREATE LOGFILE GROUP statement using a value greater than 150M for UNDO_BUFFER_SIZE caused data nodes to crash.
As a result of this fix, the upper limit for UNDO_BUFFER_SIZE is now 600M; attempting to set a higher value now fails gracefully with an error. (Bug#34102)
See also Bug#36702.
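For illustration, a statement of this form now succeeds with any UNDO_BUFFER_SIZE up to the new 600M limit; the group name, file name, and sizes are illustrative:
CREATE LOGFILE GROUP lg1
    ADD UNDOFILE 'undo1.log'
    INITIAL_SIZE 1G
    UNDO_BUFFER_SIZE 600M
    ENGINE NDBCLUSTER;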
Disk Data: When attempting to create a tablespace that already existed, the misleading error message Table or index with given name already exists was returned. (Bug#32662)
Disk Data: Using a path or file name longer than 128 characters for Disk Data undo log files and tablespace data files caused a number of issues, including failures of CREATE LOGFILE GROUP, ALTER LOGFILE GROUP, CREATE TABLESPACE, and ALTER TABLESPACE statements, as well as crashes of management nodes and data nodes.
With this fix, the maximum length for path and file names used for Disk Data undo log files and tablespace data files is now the same as the maximum for the operating system. (Bug#31769, Bug#31770, Bug#31772)
Disk Data: Starting a cluster under load such that Disk Data tables used most of the undo buffer could cause data node failures.
The fix for this bug also corrected an issue in the LGMAN kernel block where the amount of free space left in the undo buffer was miscalculated, causing buffer overruns. This could cause records in the buffer to be overwritten, leading to problems when restarting data nodes. (Bug#28077)
Disk Data: Attempting to perform a system restart of the cluster where there existed a logfile group without any undo log files caused the data nodes to crash.
While issuing a CREATE LOGFILE GROUP statement without an ADD UNDOFILE option fails with an error in the MySQL server, this situation could arise if an SQL node failed during the execution of a valid CREATE LOGFILE GROUP statement; it is also possible to create a logfile group without any undo log files using the NDB API.
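For reference, the MySQL server rejects the first of the following statements, so only the second form, which includes an ADD UNDOFILE clause, can succeed; the names and size are illustrative:
CREATE LOGFILE GROUP lg1 ENGINE NDBCLUSTER;  -- rejected: no undo file specified
CREATE LOGFILE GROUP lg1
    ADD UNDOFILE 'undo1.log'
    INITIAL_SIZE 250M
    ENGINE NDBCLUSTER;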
Cluster Replication: Sometimes, when using the --ndb_log_orig option, the orig_epoch and orig_server_id columns of the ndb_binlog_index table on the slave contained the ID and epoch of the local server instead. (Bug#41601)
Cluster API: Some error messages from ndb_mgmd contained newline (\n) characters. This could break the MGM API protocol, which uses the newline as a line separator. (Bug#43104)
Cluster API: When using an ordered index scan without putting all key columns in the read mask, this invalid use of the NDB API went undetected, which resulted in the use of uninitialized memory. (Bug#42591)
Cluster API: The MGM API reset error codes on management server handles before checking them. This meant that calling an MGM API function with a null handle caused applications to crash. (Bug#40455)
Cluster API: It was not always possible to access parent objects directly from NdbBlob, NdbOperation, and NdbScanOperation objects. To alleviate this problem, a new getNdbOperation() method has been added to NdbBlob, and new getNdbTransaction() methods have been added to NdbOperation and NdbScanOperation. In addition, a const variant of NdbOperation::getErrorLine() is now also available. (Bug#40242)
Cluster API: NdbScanOperation::getBlobHandle() failed when used with incorrect column names or numbers. (Bug#40241)
Cluster API: The NDB API example programs included in MySQL Cluster source distributions failed to compile. (Bug#37491)
See also Bug#40238.
Cluster API: mgmapi.h contained constructs that worked in C++ but not in C. (Bug#27004)