MySQL NDB Cluster 7.5 Release Notes


Changes in MySQL NDB Cluster 7.5.14 (5.7.26-ndb-7.5.14) (2019-04-26, General Availability)

Bugs Fixed

  • NDB Disk Data: NDB did not validate MaxNoOfOpenFiles in relation to InitialNoOfOpenFiles correctly, leading data nodes to fail with an error message that did not make the nature of the problem clear to users. (Bug #28943749)

  • NDB Disk Data: Repeated execution of ALTER TABLESPACE ... ADD DATAFILE against the same tablespace caused data nodes to hang and left them, after being killed manually, unable to restart. (Bug #22605467)
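    A statement of the sort involved resembles the following (the tablespace and data file names here are hypothetical, for illustration only); issuing such statements repeatedly against the same tablespace triggered the hang:

    ALTER TABLESPACE ts_1
        ADD DATAFILE 'data_2.dat'
        INITIAL_SIZE 32M
        ENGINE NDBCLUSTER;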

  • NDB Cluster APIs: NDB now identifies short-lived transactions that do not need the reduction in lock contention provided by NdbBlob::close(), and no longer invokes this method in cases (such as when autocommit is enabled) in which unlocking merely causes extra work and round trips to be performed prior to committing or aborting the transaction. (Bug #29305592)

    References: See also: Bug #49190, Bug #11757181.

  • NDB Cluster APIs: When the most recently failed operation was released, the pointer to it held by NdbTransaction became invalid; accessing this pointer led to failure of the NDB API application. (Bug #29275244)

  • When a pushed join executing in the DBSPJ block had to store correlation IDs during query execution, memory for these was allocated for the lifetime of the entire query execution, even though these specific correlation IDs are required only when producing the most recent batch in the result set. Subsequent batches require additional correlation IDs to be stored and allocated; thus, if the query took sufficiently long to complete, this led to exhaustion of query memory (error 20008). Now in such cases, memory is allocated only for the lifetime of the current result batch, and is freed and made available for reuse following completion of the batch. (Bug #29336777)

    References: See also: Bug #26995027.

  • Added DUMP 406 (NdbfsDumpRequests) to provide NDB file system information to global checkpoint and local checkpoint stall reports in the node logs. (Bug #28922609)
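    As with other DUMP codes, the new code can be issued from the management client against one or all data nodes; a sketch, assuming a management server reachable at the default host and port:

    ndb_mgm -e "ALL DUMP 406"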

  • A race condition between the DBACC and DBLQH kernel blocks occurred when different operations in a transaction on the same row were concurrently being prepared and aborted. This could result in DBTUP attempting to prepare an operation when a preceding operation had been aborted, which was unexpected and could thus lead to undefined behavior, including potential data node failures. To solve this issue, DBACC and DBLQH now check that all dependencies are still valid before attempting to prepare an operation.

    Note

    This fix also supersedes a previous one made for a related issue which was originally reported as Bug #28500861.

    (Bug #28893633)

  • The ndbinfo.cpustat table reported inaccurate information regarding send threads. (Bug #28884157)

  • In some cases, one and sometimes more data nodes underwent an unplanned shutdown while running ndb_restore. This occurred most often, but was not restricted to, cases in which the backup was being restored to a cluster having a different number of data nodes from the cluster on which the original backup had been taken.

    The root cause of this issue was exhaustion of the pool of SafeCounter objects, used by the DBDICT kernel block as part of executing schema transactions, and taken from a per-block-instance pool shared with protocols used for NDB event setup and subscription processing. The concurrency of event setup and subscription processing is such that the SafeCounter pool can be exhausted; event and subscription processing can handle pool exhaustion, but schema transaction processing could not, which could result in the node shutdown experienced during restoration.

    This problem is solved by giving DBDICT schema transactions an isolated pool of reserved SafeCounters which cannot be exhausted by concurrent NDB event activity. (Bug #28595915)

  • After a commit failed due to an error, mysqld shut down unexpectedly while trying to get the name of the table involved. This was due to an issue in the internal function ndbcluster_print_error(). (Bug #28435082)

  • ndb_restore did not restore autoincrement values correctly when one or more staging tables were in use. As part of this fix, applying of the SYSTAB_0 backup log is now also blocked in such cases; previously, its contents continued to be applied directly based on the table ID, which could overwrite the autoincrement values stored in SYSTAB_0 for unrelated tables. (Bug #27917769, Bug #27831990)

    References: See also: Bug #27832033.

  • ndb_restore employed a mechanism for restoring autoincrement values which was not atomic, and thus could yield incorrect autoincrement values being restored when multiple instances of ndb_restore were used in parallel. (Bug #27832033)
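    The parallel usage affected is of the following general form, with one ndb_restore instance run per data node of the original cluster (the node IDs, backup ID, and backup path shown are hypothetical):

    ndb_restore --nodeid=1 --backupid=5 --restore-data --backup-path=/backups/BACKUP-5 &
    ndb_restore --nodeid=2 --backupid=5 --restore-data --backup-path=/backups/BACKUP-5 &
    wait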

    References: See also: Bug #27917769, Bug #27831990.

  • When query memory was exhausted in the DBSPJ kernel block while storing correlation IDs for deferred operations, the query was aborted with error status 20000 (Query aborted due to out of query memory). (Bug #26995027)

    References: See also: Bug #86537.

  • MaxBufferedEpochs is used on data nodes to avoid excessive buffering of row changes due to lagging NDB event API subscribers; when epoch acknowledgements from one or more subscribers lag by this number of epochs, an asynchronous disconnection is triggered, allowing the data node to release the buffer space used for subscriptions. Since this disconnection is asynchronous, it may not complete before additional new epochs are completed on the data node, resulting in new epochs being unable to seize GCP completion records and generating warnings such as those shown here:

    [ndbd] ERROR    -- c_gcp_list.seize() failed...
    ...
    [ndbd] WARNING  -- ACK wo/ gcp record...

    This in turn led to the following warning:

    Disconnecting node %u because it has exceeded MaxBufferedEpochs (100 > 100), epoch ....

    This fix performs the following modifications:

    • Modifies the size of the GCP completion record pool to ensure that there is always some extra headroom to account for the asynchronous nature of the disconnect processing previously described, thus avoiding c_gcp_list seize failures.

    • Modifies the wording of the MaxBufferedEpochs warning to avoid the contradictory phrase 100 > 100.

    (Bug #20344149)
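    The parameter itself is set in the cluster global configuration file; a minimal fragment showing it at its default value (100):

    [ndbd default]
    # Disconnect an event API subscriber whose epoch
    # acknowledgements lag by more than this many epochs
    MaxBufferedEpochs=100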

  • When executing the redo log in debug mode it was possible for a data node to fail when deallocating a row. (Bug #93273, Bug #28955797)

  • An NDB table having both a foreign key on another NDB table using ON DELETE CASCADE and one or more TEXT or BLOB columns leaked memory.

    As part of this fix, ON DELETE CASCADE is no longer supported for foreign keys on NDB tables when the child table contains a column that uses any of the BLOB or TEXT types. (Bug #89511, Bug #27484882)
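    A minimal sketch of a definition rejected following this fix (table and column names are hypothetical); the TEXT column in the child table makes the cascading foreign key unsupported:

    CREATE TABLE parent (
        id INT PRIMARY KEY
    ) ENGINE=NDBCLUSTER;

    CREATE TABLE child (
        id INT PRIMARY KEY,
        parent_id INT,
        notes TEXT,
        FOREIGN KEY (parent_id) REFERENCES parent (id)
            ON DELETE CASCADE
    ) ENGINE=NDBCLUSTER;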