Hi,<br><br> I changed a parameter and ran CTS again.<br> This time CTS appears to have run to completion.<br> However, the "Audit LogAudit FAILED" error still occurs.<br> I do not know what the log of a successful CTS run should look like.<br>
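As far as I understand it, the LogAudit check logs a test message on each node with logger and then polls the watched log file for it until a deadline, which is what produces the "Single search timed out: timeout=30, start=..., limit=..., now=..." debug lines quoted below. A minimal sketch of that loop (the function name and polling interval are my own illustration, not the actual CTS code):<br>

```python
# Illustrative sketch of the deadline-based log search behind the
# "Single search timed out" debug lines; NOT the actual CTS code.
import time

def single_search(path, token, timeout=30, poll=0.5):
    """Poll the log file at `path` for a line containing `token`.

    Returns True once the token appears, or False when `timeout`
    seconds pass first -- the case CTS reports as a timed-out search
    before restarting syslog on the nodes.
    """
    limit = time.time() + timeout
    while time.time() < limit:
        with open(path) as f:
            if any(token in line for line in f):
                return True
        time.sleep(poll)
    return False
```

So if the forwarded "Test message from ..." lines do not reach /share/ha/logs/ha-log-local7 within the timeout, the audit presumably fails even when the messages show up later, as they do in the log below.<br>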
Does the log below indicate that the test passed?<br><br>-----<br>$ python /usr/share/pacemaker/tests/cts/CTSlab.py --nodes "cts0101 cts0102" --at-boot 1 --logfile /share/ha/logs/ha-log-local7 --syslog-facility local7<br>Jan 21 14:16:40 Random seed is: 1295587000<br>
Jan 21 14:16:40 >>>>>>>>>>>>>>>> BEGINNING 0 TESTS<br>Jan 21 14:16:40 Stack: openais (whitetank)<br>Jan 21 14:16:40 Schema: pacemaker-1.0<br>Jan 21 14:16:40 Scenario: Random Test Execution<br>
Jan 21 14:16:40 Random Seed: 1295587000<br>Jan 21 14:16:40 System log files: /share/ha/logs/ha-log-local7<br>Jan 21 14:16:40 Cluster nodes:<br>Jan 21 14:16:40 * cts0101<br>Jan 21 14:16:40 * cts0102<br>Jan 21 14:16:43 Testing for syslog logs<br>
Jan 21 14:16:43 Testing for remote logs<br>Jan 21 14:17:48 Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 14:19:21 Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 14:21:54 Restarting logging on: ['cts0101', 'cts0102']<br>
Jan 21 14:25:26 Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 14:29:26 ERROR: Cluster logging unrecoverable.<br>Jan 21 14:29:26 Audit LogAudit FAILED.<br>Jan 21 14:29:29 Stopping Cluster Manager on all nodes<br>
Jan 21 14:29:29 Stopping crm-whitetank on node cts0101<br>Jan 21 14:29:29 Could not stop crm-whitetank on node cts0101<br>Jan 21 14:29:29 Stopping crm-whitetank on node cts0102<br>Jan 21 14:29:29 Could not stop crm-whitetank on node cts0102<br>
Jan 21 14:29:29 Starting Cluster Manager on all nodes.<br>Jan 21 14:30:04 Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 14:31:36 Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 14:34:09 Restarting logging on: ['cts0101', 'cts0102']<br>
Jan 21 14:37:42 Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 14:41:42 ERROR: Cluster logging unrecoverable.<br>Jan 21 14:41:42 Audit LogAudit FAILED.<br>Jan 21 14:41:43 Stopping Cluster Manager on all nodes<br>
Jan 21 14:41:43 Stopping crm-whitetank on node cts0101<br>Jan 21 14:41:43 Could not stop crm-whitetank on node cts0101<br>Jan 21 14:41:43 Stopping crm-whitetank on node cts0102<br>Jan 21 14:41:43 Could not stop crm-whitetank on node cts0102<br>
Jan 21 14:42:18 Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 14:43:50 Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 14:46:23 Restarting logging on: ['cts0101', 'cts0102']<br>
Jan 21 14:49:56 Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 14:53:56 ERROR: Cluster logging unrecoverable.<br>Jan 21 14:53:56 Audit LogAudit FAILED.<br>Jan 21 14:53:57 ****************<br>Jan 21 14:53:57 Overall Results:{'failure': 0, 'skipped': 0, 'success': 0, 'BadNews': 0}<br>
Jan 21 14:53:57 ****************<br>Jan 21 14:53:57 Test Summary<br>Jan 21 14:53:57 Test Flip: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>Jan 21 14:53:57 Test Restart: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>
Jan 21 14:53:57 Test Stonithd: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>Jan 21 14:53:57 Test StartOnebyOne: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>
Jan 21 14:53:57 Test SimulStart: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>Jan 21 14:53:57 Test SimulStop: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>
Jan 21 14:53:57 Test StopOnebyOne: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>Jan 21 14:53:57 Test RestartOnebyOne: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>
Jan 21 14:53:57 Test PartialStart: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>Jan 21 14:53:57 Test Standby: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>
Jan 21 14:53:57 Test ResourceRecover: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>Jan 21 14:53:57 Test ComponentFail: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>
Jan 21 14:53:57 Test Reattach: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>Jan 21 14:53:57 Test SpecialTest1: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>
Jan 21 14:53:57 Test NearQuorumPoint: {'auditfail': 0, 'failure': 0, 'skipped': 0, 'calls': 0}<br>Jan 21 14:53:57 <<<<<<<<<<<<<<<< TESTS COMPLETED<br>
-----<br> <br><br><div class="gmail_quote">2011/1/21 nozawat <span dir="ltr"><<a href="mailto:nozawat@gmail.com">nozawat@gmail.com</a>></span><br><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
Hi,<br><br> I ran CTS in the following environment:<br> * OS: RHEL5.5-x86_64<br> * pacemaker-1.0.9.1-1.15.el5<br> * TDN (test driver node): bbs01<br> * TNNs (test nodes): cts0101 cts0102<br><br> It looks like the same phenomenon as in the following thread:<br> <a href="http://www.gossamer-threads.com/lists/linuxha/pacemaker/69322" target="_blank">http://www.gossamer-threads.com/lists/linuxha/pacemaker/69322</a><br>
<br> Passwordless SSH login -> OK.<br> Syslog message transfer via syslog-ng -> OK.<br><br>-----<br>$ python /usr/share/pacemaker/tests/cts/CTSlab.py --nodes "cts0101 cts0102" --at-boot 1 --stack heartbeat --stonith no --logfile /share/ha/logs/ha-log-local7 --syslog-facility local7 1<br>
Jan 21 13:23:08 Random seed is: 1295583788<br>Jan 21 13:23:08 >>>>>>>>>>>>>>>> BEGINNING 1 TESTS<br>Jan 21 13:23:08 Stack: heartbeat<br>Jan 21 13:23:08 Schema: pacemaker-1.0<br>
Jan 21 13:23:08 Scenario: Random Test Execution<br>Jan 21 13:23:08 Random Seed: 1295583788<br>Jan 21 13:23:08 System log files: /share/ha/logs/ha-log-local7<br>Jan 21 13:23:08 Cluster nodes:<br>Jan 21 13:23:08 * cts0101<br>
Jan 21 13:23:08 * cts0102<br>Jan 21 13:23:12 Testing for syslog logs<br>Jan 21 13:23:12 Testing for remote logs<br>Jan 21 13:24:16 Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 13:25:49 Restarting logging on: ['cts0101', 'cts0102']<br>
Jan 21 13:28:21 Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 13:31:54 Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 13:35:54 ERROR: Cluster logging unrecoverable.<br>Jan 21 13:35:54 Audit LogAudit FAILED.<br>
-----<br><br> I ran it with heartbeat here, but the same error also occurs with corosync.<br> The log shows "Single search timed out" errors, and CTS appears to retry:<br>-----<br>Jan 21 13:23:11 bbs01 CTS: debug: Audit DiskspaceAudit passed.<br>
Jan 21 13:23:12 bbs01 CTS: Testing for syslog logs<br>Jan 21 13:23:12 bbs01 CTS: Testing for remote logs<br>Jan 21 13:23:12 bbs01 CTS: debug: lw: cts0101:/share/ha/logs/ha-log-local7: Installing /tmp/cts_log_watcher.py on cts0101<br>
Jan 21 13:23:12 bbs01 CTS: debug: lw: cts0102:/share/ha/logs/ha-log-local7: Installing /tmp/cts_log_watcher.py on cts0102<br>Jan 21 13:23:13 cts0102 logger: Test message from cts0102<br>Jan 21 13:23:13 cts0101 logger: Test message from cts0101<br>
Jan 21 13:23:44 bbs01 CTS: debug: lw: LogAudit: Single search timed out: timeout=30, start=1295583793, limit=1295583824, now=1295583824<br>Jan 21 13:24:16 bbs01 CTS: debug: lw: LogAudit: Single search timed out: timeout=30, start=1295583824, limit=1295583855, now=1295583856<br>
Jan 21 13:24:16 bbs01 CTS: Restarting logging on: ['cts0101', 'cts0102']<br>Jan 21 13:24:16 bbs01 CTS: debug: cmd: async: target=cts0101, rc=22203: /etc/init.d/syslog-ng restart 2>&1 > /dev/null<br>
Jan 21 13:24:16 bbs01 CTS: debug: cmd: async: target=cts0102, rc=22204: /etc/init.d/syslog-ng restart 2>&1 > /dev/null<br>Jan 21 13:25:17 cts0102 logger: Test message from cts0102<br>Jan 21 13:25:17 cts0101 logger: Test message from cts0101<br>
-----<br><br> The test cases seem to run even after this error.<br> However, the script eventually exits with an error, because "Audit LogAudit FAILED" keeps occurring.<br> Is this the expected outcome of a CTS run?<br>
<br>Regards,<br>Tomo<br><br>
</blockquote></div><br>