This release contains a number of key bug fixes and improvements. Please read the full list, including behavior changes, in the Tungsten v7.0.2 Release Notes.
Below are a few of the highlights:
Improvements, New Features, and Functionality:
- rsync is now an option in tprovision, in addition to xtrabackup and mysqldump. (CT-338)
- The trepctl status command will now show the last known applied seqno and latency. (CT-1823)
- For the trepctl command, a new -c option is now available that can be used in conjunction with the -r option to indicate the number of times to refresh before automatically terminating; see the examples after this list. (CT-679)
- The tungsten_merge_logs command now supports the --before TIMESTAMP and --after TIMESTAMP filters; see the examples after this list. (CT-1869)
- A new log file has been created for data drift messages (tungsten-replicator/log/data-drift.log). (CT-1873)
- The tpm ask and tpm ask summary commands (CT-1874), and tpm ask stages and tpm ask allstages (CT-1943), have been improved.
- The tungsten_generate_haproxy_for_api and tpm generate-ha-proxy-for-api commands now support using connector hosts in the backend definitions via -c, and extra backend flags via -f; they also no longer call Data::Dumper. (CT-1909, CT-1915)
- The tungsten_reset_manager command can now print out the path or paths to be cleared. (CT-1917)
- The tmonitor command now accepts command-line arguments to specify the ports, and will auto-configure the ports if they have been changed via the Tungsten configuration. (CT-1919)
- The tpm command's calls to glob are more strict and compliant. (CT-1940)
- A new standalone status script called tungsten_get_status shows the datasources and replicators for all nodes in all services, along with seqno and latency. (CT-1962)
- A new -dsctl option has been added to the thl command, and a new -event option has been added to thl list (a hedged example follows this list). You may learn more about these here. (CT-2012)
- A new feature has been added to pause a replicator stage for some amount of time. You may read a blog about the new ‘pause’ feature here. (CT-1912)
- Per-service tuning of the replicator thl directory is now possible for multi-service replicator-only installs as well as for clustering. (CT-1927)
- A new replicator role (thl-applier) has been added to allow a replicator service to apply its locally available THL without pulling from a remote host. (CT-1936)
- Added a way to configure the maximum number of rows that can be grouped together when applying row-based events for multiple insert or delete statements. (CT-1980)
- The connector graceful-stop command now properly supports the systemd service manager. (CT-1921)
- Two new commands have been introduced in cctrl to provide better insight into, and control of, the connectors. For more details, see the cctrl datasource command. (CT-1949)
- Also for the Connector, added a logging configuration example to print load balancer activity. (CT-1966)
- Added a new tpm option, manager-replicator-offline-timeout=<timeout_in_sec>, that configures how long the manager waits for the replicator to go offline; see the sketch after this list. (CT-1892)
- New logs have been created for the REST API. (CT-1983)
- A failsafe-shunned cluster (caused by a network split) will now be automatically recovered after the network connection is re-established. (CT-241)
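To illustrate the new trepctl refresh control (CT-679), here is a minimal sketch; it assumes, as in earlier releases, that -r takes the refresh interval in seconds, while the new -c value caps the number of refreshes before the command exits on its own:
    shell> trepctl status -r 2 -c 10   # refresh every 2 seconds, stop after 10 refreshes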
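The new tungsten_merge_logs time filters (CT-1869) could be combined along these lines; the timestamp format and the log directory argument shown here are illustrative assumptions, not confirmed syntax:
    # keep only log entries between the two timestamps (path and timestamp format are assumptions)
    shell> tungsten_merge_logs --after "2022-08-01 00:00:00" --before "2022-08-01 06:00:00" /path/to/collected/logs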
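The new thl -event option (CT-2012) is only named here, so the line below is a heavily hedged sketch that assumes -event accepts a single sequence number to narrow the listing:
    shell> thl list -event 4500   # assumption: list only the event with seqno 4500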
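The new manager timeout (CT-1892) is a tpm setting, so in an INI-based installation it would be added to the configuration file and applied with tpm update; the file path, section name, and 120-second value below are assumptions for illustration:
    # /etc/tungsten/tungsten.ini (illustrative path and section)
    [defaults]
    manager-replicator-offline-timeout=120
    shell> tpm update   # apply the configuration change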
Bug Fixes:
- The tungsten_skip_seqno command no longer fails when -i is specified, and now properly filters using --filter when there is a long error message. (CT-1877)
- The tpm command now allows any case for section entries (e.g. [alpha_FROM_beta]) in the INI files. (CT-1879)
- The tpm diag command now gathers the mysql.log file when SSL is enabled in the server. (CT-1920)
- Fixed an issue that prevented dsctl from connecting to MySQL if SSL was enabled. (CT-1928)
- The tpm mysql command will now gracefully handle being run on a non-database node. (CT-1946)
- REST API v2 bug fixes. (CT-796, CT-1971, CT-1945)
- The cluster_backup script will no longer back up a replica if the replicator is in an ERROR state. (CT-1036)
We’re very proud to announce this release to you! There are numerous bug fixes and improvements in this v7.0.2 release, so please get the full scoop in the release notes linked above, and reach out to Continuent Support via Zendesk or by email if you have any questions!