<table cellspacing="0" cellpadding="0" border="0" ><tr><td valign="top" style="font: inherit;"><BR><BR>--- On Mon, 18 Oct 2010, <B>pacemaker-request@oss.clusterlabs.org <I><pacemaker-request@oss.clusterlabs.org></I></B> wrote:<BR>
<BLOCKQUOTE style="PADDING-LEFT: 5px; MARGIN-LEFT: 5px; BORDER-LEFT: rgb(16,16,255) 2px solid">Hi, I also use Pacemaker to manage a Lustre system, and now I have a question. If I mount and umount the same OST or MDT several times, it takes more than 3 minutes to mount that OST or MDT again. <BR>
<DIV class=plainMail> </DIV>
<DIV class=plainMail><BR>No, not entirely. Pacemaker-managed Lustre systems are quite common, and although 126 nodes is a rather high number, it is still feasible for large sites. It also makes sense to manage Lustre in a single global configuration, even though in Lustre a pair of nodes usually forms an OSS or MDS fail-over subset. The reason is that Lustre requires an ordered shutdown sequence (MDT first). While I already wrote scripts to do that with the traditional heartbeat pair setup, it is far more complex than doing it with Pacemaker.<BR>So our scripts generate a set of constraints so that only the designated pairs can run the MDS/OSS resources, while everything still lives in one global Pacemaker setup.<BR>We also have syslog-ng rules and a patched logd (patches were sent to this list; I need to update them again) to filter out all the Pacemaker debug logs, so that we can easily see messages from the Lustre RA in the syslogs.<BR><BR><BR>Cheers,<BR>Bernd<BR><BR>-- <BR>Bernd Schubert<BR>DataDirect Networks<BR><BR><BR><BR></DIV></BLOCKQUOTE></td></tr></table><br>
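For readers unfamiliar with the approach Bernd describes, the pair constraints and the ordered shutdown could be sketched in crm shell syntax roughly as follows. All names here (mdt0, ost0, mds1/mds2, oss1/oss2) are hypothetical placeholders, not taken from the actual configuration:

```
# Pin each Lustre resource to its fail-over pair:
# mdt0 may only run on the MDS pair, ost0 only on the OSS pair.
location mdt0-on-mds-pair mdt0 \
    rule -inf: #uname ne mds1 and #uname ne mds2
location ost0-on-oss-pair ost0 \
    rule -inf: #uname ne oss1 and #uname ne oss2

# Ordered shutdown, MDT first: stop mdt0 before ost0.
# symmetrical=false so this only constrains the stop sequence,
# not the start sequence.
order stop-mdt-first inf: mdt0:stop ost0:stop symmetrical=false
```

A script can emit one such location constraint per resource for all pairs, which is what makes the global configuration manageable at this node count.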
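The syslog-ng filtering Bernd mentions might look like the sketch below: a filter matching Pacemaker's daemon names, consumed by a final log path so only the remaining messages (including those from the Lustre RA) reach the normal destination. The source/destination names and the exact program list are assumptions and will vary by version and setup:

```
# Match Pacemaker's own daemons (program names may differ per version).
filter f_pacemaker {
    program("crmd") or program("cib") or program("pengine")
        or program("lrmd") or program("stonithd");
};

# Consume Pacemaker messages here: no destination, flags(final)
# stops further log paths from seeing them.
log {
    source(s_local);
    filter(f_pacemaker);
    flags(final);
};

# Everything else, including Lustre RA output, goes to the usual file.
log {
    source(s_local);
    destination(d_messages);
};
```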