MySQL Cluster NDB 6.3.27 was pulled shortly after release due to Bug#47844. Users seeking to upgrade from a previous MySQL Cluster NDB 6.3 release should instead use MySQL Cluster NDB 6.3.27a, which contains a fix for this bug, in addition to all bugfixes and improvements made in MySQL Cluster NDB 6.3.27.
This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.3 release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.3 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.37 (see Section C.1.13, “Changes in MySQL 5.1.37 (13 July 2009)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
Disk Data: Two new columns have been added to the output of ndb_desc to make it possible to determine how much of the disk space allocated to a given table or fragment remains free. (This information is not available from the INFORMATION_SCHEMA.FILES table, since the FILES table applies only to Disk Data files.) For more information, see Section 17.4.9, “ndb_desc — Describe NDB Tables”. (Bug#47131)
Bugs fixed:
Cluster Replication: Important Change: In a MySQL Cluster acting as a replication slave and having multiple SQL nodes, only the SQL node receiving events directly from the master recorded DDL statements in its binary log, and then only if that SQL node had binary logging enabled; the other SQL nodes in the slave cluster failed to log DDL statements, regardless of their individual --log-bin settings.
The fix for this issue aligns binary logging of DDL statements with that of DML statements. In particular, you should take note of the following:
DDL and DML statements on the master cluster are logged with the server ID of the server that actually writes the log.
DDL and DML statements on the master cluster are logged by any attached mysqld that has binary logging enabled.
Replicated DDL and DML statements on the slave are logged by any attached mysqld that has both --log-bin and --log-slave-updates enabled.
Replicated DDL and DML statements are logged with the server ID of the original (master) MySQL server by any attached mysqld that has both --log-bin and --log-slave-updates enabled.
Effect on upgrades. When upgrading from a previous MySQL Cluster release, you should do one of the following:
Upgrade servers that are performing binary logging before those that are not; do not perform any DDL on “old” SQL nodes until all SQL nodes have been upgraded.
Make sure that --log-slave-updates is enabled on all SQL nodes performing binary logging prior to the upgrade, so that all DDL is captured; a minimal way to verify these settings is sketched below.
Logging of DML statements was not affected by this issue.
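As a quick way of verifying these settings before the upgrade, the following statements can be run on each SQL node in the slave cluster. This is only a minimal sketch; it assumes nothing beyond the standard log_bin and log_slave_updates system variables:

-- Run on each SQL node that is to perform binary logging;
-- both variables should report ON before the upgrade is begun.
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'log_slave_updates';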
mysqld allocated an excessively large buffer for handling BLOB values due to overestimating their size. (For each row, enough space was allocated to accommodate every BLOB or TEXT column value in the result set.) This could adversely affect performance when using tables containing BLOB or TEXT columns; in a few extreme cases, this issue could also cause the host system to run out of memory unexpectedly. (Bug#47574)
NDBCLUSTER uses a dynamically allocated buffer to store BLOB or TEXT column data that is read from rows in MySQL Cluster tables. When an instance of the NDBCLUSTER table handler was recycled (this can happen due to table definition cache pressure or to operations such as FLUSH TABLES or ALTER TABLE), if the last row read contained blobs of zero length, the buffer was not freed, even though the reference to it was lost. This resulted in a memory leak.
For example, consider the table defined and populated as shown here:
CREATE TABLE t (a INT PRIMARY KEY, b LONGTEXT) ENGINE=NDB;
INSERT INTO t VALUES (1, REPEAT('F', 20000));
INSERT INTO t VALUES (2, '');
Now repeatedly execute a SELECT on this table such that the row containing the zero-length LONGTEXT value is read last, followed by a FLUSH TABLES statement (which forces the handler object to be re-used), as shown here:
SELECT a, LENGTH(b) FROM t ORDER BY a;
FLUSH TABLES;
Prior to the fix, this resulted in a memory leak proportional to the size of the stored LONGTEXT value each time these two statements were executed. (Bug#47573)
Large transactions involving joins between tables containing BLOB columns used excessive memory. (Bug#47572)
A variable was left uninitialized while a data node copied data from its peers as part of its startup routine; if the starting node died during this phase, this could lead to a crash of the cluster when the node was later restarted. (Bug#47505)
When a data node restarts, it first runs the redo log until reaching the latest restorable global checkpoint; after this, it scans the remainder of the redo log file, searching for entries that should be invalidated so that they are not used in any subsequent restarts. (It is possible, for example, if restoring GCI number 25, that there might be entries belonging to GCI 26 in the redo log.) However, under certain rare conditions, the redo log files themselves were not always closed while scanning ahead during this invalidation process, which could lead to MaxNoOfOpenFiles being exceeded, causing the data node to crash. (Bug#47171)
For very large values of MaxNoOfTables + MaxNoOfAttributes, the calculation for StringMemory could overflow when creating large numbers of tables, leading to NDB error 773 (Out of string memory, please modify StringMemory config parameter), even when StringMemory was set to 100 (100 percent). (Bug#47170)
The default value for the StringMemory configuration parameter, unlike other MySQL Cluster configuration parameters, was not set in ndb/src/mgmsrv/ConfigInfo.cpp. (Bug#47166)
Signals from a failed API node could be received after an API_FAILREQ signal (see Operations and Signals) had been received from that node, which could result in invalid states for processing subsequent signals. Now, all pending signals from a failing API node are processed before any API_FAILREQ signal is received. (Bug#47039)
See also Bug#44607.
Using triggers on NDB tables caused ndb_autoincrement_prefetch_sz to be treated as having the NDB kernel's internal default value (32), and the value for this variable as set on the cluster's SQL nodes to be ignored. (Bug#46712)
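For illustration only (the value 256 is an arbitrary example, not taken from the bug report), the variable can be set and checked on an SQL node as shown here; prior to the fix, inserts performed through a trigger on an NDB table ignored this setting and behaved as though it were 32:

-- Set and verify the prefetch size on the SQL node.
SET GLOBAL ndb_autoincrement_prefetch_sz = 256;
SHOW VARIABLES LIKE 'ndb_autoincrement_prefetch_sz';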
Running an ALTER TABLE statement while an NDB backup was in progress caused mysqld to crash. (Bug#44695)
When performing auto-discovery of tables on individual SQL nodes, NDBCLUSTER attempted to overwrite existing MyISAM .frm files and corrupted them.
Workaround. In the mysql client, create a new table (t2) with the same definition as the corrupted table (t1). Use your system shell or file manager to rename the old .MYD file to the new file name (for example, mv t1.MYD t2.MYD). In the mysql client, repair the new table, drop the old one, and rename the new table using the old file name (for example, RENAME TABLE t2 TO t1).
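The workaround might be sketched as follows, assuming the corrupted table is named t1 as above; the column definition shown is only a placeholder for the real table definition, and the shell step appears as a comment because it must be run outside the mysql client:

-- Create a new table with the same definition as the corrupted table t1
-- (the column list here is hypothetical).
CREATE TABLE t2 (a INT PRIMARY KEY, b VARCHAR(50)) ENGINE=MyISAM;
-- From the system shell (not the mysql client), rename the old data file:
--   shell> mv t1.MYD t2.MYD
REPAIR TABLE t2;
DROP TABLE t1;
RENAME TABLE t2 TO t1;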
Running ndb_restore with the --print or --print_log option could cause it to crash. (Bug#40428, Bug#33040)
An insert on an NDB table was not always flushed properly before performing a scan. One way in which this issue could manifest was that LAST_INSERT_ID() sometimes failed to return correct values when using a trigger on an NDB table. (Bug#38034)
When a data node received a TAKE_OVERTCCONF signal from the master before that node had received a NODE_FAILREP signal, a race condition could in theory result. (Bug#37688)
Some joins on large NDB tables having TEXT or BLOB columns could cause mysqld processes to leak memory. The joins did not need to reference the TEXT or BLOB columns directly for this issue to occur. (Bug#36701)
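For illustration only (table and column names are hypothetical), a join of the following form could exhibit the leak even though the TEXT column c is never referenced:

CREATE TABLE big_t (id INT PRIMARY KEY, c TEXT) ENGINE=NDB;
CREATE TABLE ref_t (id INT PRIMARY KEY, big_id INT) ENGINE=NDB;
-- The TEXT column c is not referenced, but prior to the fix this join
-- could still cause mysqld to leak memory on large tables.
SELECT ref_t.id FROM ref_t JOIN big_t ON big_t.id = ref_t.big_id;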
On Mac OS X 10.5, commands entered in the management client failed and sometimes caused the client to hang, although management client commands invoked using the --execute (or -e) option from the system shell worked normally.
For example, the following command failed with an error and hung until killed manually, as shown here:
ndb_mgm> SHOW
Warning, event thread startup failed, degraded printouts as result, errno=36
^C
However, the same management client command, invoked from the system shell as shown here, worked correctly:
shell> ndb_mgm -e "SHOW"
See also Bug#34438.
Replication: In some cases, a STOP SLAVE statement could cause the replication slave to crash. This issue was specific to MySQL on Windows or Macintosh platforms.
(Bug#45238, Bug#45242, Bug#45243, Bug#46013, Bug#46014, Bug#46030)
See also Bug#40796.
Disk Data: Calculation of free space for Disk Data table fragments was sometimes done incorrectly. This could lead to unnecessary allocation of new extents even when sufficient space was available in existing ones for inserted data. In some cases, this might also lead to crashes when restarting data nodes.
This miscalculation was not reflected in the contents of the INFORMATION_SCHEMA.FILES table, as it applied to extents allocated to a fragment, and not to a file.
Cluster API: In some circumstances, if an API node encountered a data node failure between the creation of a transaction and the start of a scan using that transaction, then any subsequent calls to startTransaction() and closeTransaction() could cause the same transaction to be started and closed repeatedly. (Bug#47329)
Cluster API: Performing multiple operations using the same primary key within the same NdbTransaction::execute() call could lead to a data node crash.
This fix does not change the fact that performing multiple operations using the same primary key within the same execute() call is not supported; because there is no way to determine the order of such operations, the result of such combined operations remains undefined.
See also Bug#44015.
API: The fix for Bug#24507 could lead in some cases to client application failures due to a race condition. Now the server waits for the “dummy” thread to return before exiting, thus making sure that only one thread can initialize the POSIX threads library. (Bug#42850)