<div dir="ltr">Hi!<div><br></div><div>Well, since I needed only one thing - that only one node ever starts the database on the shared storage - I made an ugly, dirty hack :-) that seems to work for me. I wrote a custom RA that relies on frequent 'monitor' actions and simply writes a timestamp plus the hostname to a raw physical partition. If it detects that someone else is writing to the same device, it concludes that it has to stop. Putting this RA first in the group prevents the database from starting if the other node has already started the group - even when there is no network connectivity. </div>
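<div><br></div><div>For illustration, the "RA first in the group" placement described above might look like this in crmsh - the resource names, the 'custom' OCF provider and all parameters here are made up for the sketch:</div><div><pre>
# oralock is listed first in the group, so it is started first and
# monitored alongside the database; if it fails, the group stops.
primitive oralock ocf:custom:oralock \
        op monitor interval=10s timeout=30s
primitive oradb ocf:heartbeat:oracle \
        params sid=ORCL
group oragrp oralock oradb
</pre></div>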
<div><br></div><div>Probably needs more testing :-)<br><br></div><div><pre>
# $LFILE (local state file) and $DEVICE (shared raw partition) are
# defined elsewhere in the RA; all reads and writes use the first
# 16 bytes of each.

oralock_start() {
    oralock_monitor ; rc=$?
    if [ $rc = $OCF_SUCCESS ]; then
        ocf_log info "oralock already running."
        exit $OCF_SUCCESS
    fi
    NEW=`date +%s``hostname`
    echo $NEW > $LFILE
    oralock_monitor ; rc=$?
    if [ $rc = $OCF_SUCCESS ]; then
        ocf_log info "oralock started."
        exit $OCF_SUCCESS
    fi
    exit $rc
}

oralock_stop() {
    rm -f $LFILE
    exit $OCF_NOT_RUNNING
}

oralock_monitor() {
    [[ ! -s $LFILE ]] && return $OCF_NOT_RUNNING
    # Compare the same 16 bytes on both sides, so the check cannot fail
    # merely because the file holds more than one 16-byte block.
    PREV=`dd if=$LFILE bs=16 count=1 2>/dev/null`
    CURR=`dd if=$DEVICE bs=16 count=1 2>/dev/null`
    ocf_log info "File: $PREV, device: $CURR"
    if [[ "$PREV" != "$CURR" ]]; then
        # The device does not carry our stamp: watch it for ~15 seconds
        # to see whether another node is actively writing to it.
        for i in 1 2 3; do
            sleep 5
            NCURR=`dd if=$DEVICE bs=16 count=1 2>/dev/null`
            if [[ "$CURR" != "$NCURR" ]]; then
                ocf_log err "Device changed: was $CURR, now: $NCURR! Someone is writing to the device!"
                rm -f $LFILE
                return $OCF_NOT_RUNNING
            else
                ocf_log info "Device not changed..."
            fi
        done
    fi
    # Refresh our claim: write a new timestamp+hostname stamp locally
    # and push it to the shared device.
    NEW=`date +%s``hostname`
    echo $NEW > $LFILE
    dd if=$LFILE of=$DEVICE bs=16 count=1 2>/dev/null
    return $OCF_SUCCESS
}
</pre></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-03-20 19:34 GMT+04:00 Саша Александров <span dir="ltr"><<a href="mailto:shurrman@gmail.com" target="_blank">shurrman@gmail.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi!<div><br></div><div>I removed all clustr-related staff and installed from <a href="http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/x86_64/" target="_blank">http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/x86_64/</a></div>
<div>However, stonith-ng uses the fence_* agents here... so I cannot configure the following in crmsh: </div><div><br></div><div>primitive stonith_sbd stonith:external/sbd<br></div><div><br></div><div>:-(</div><div class="gmail_extra"><br><br>
<div class="gmail_quote">2014-03-19 20:14 GMT+04:00 Lars Marowsky-Bree <span dir="ltr"><<a href="mailto:lmb@suse.com" target="_blank">lmb@suse.com</a>></span>:<div><div class="h5"><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div>On 2014-03-19T19:20:35, Саша Александров <<a href="mailto:shurrman@gmail.com" target="_blank">shurrman@gmail.com</a>> wrote:<br>
<br>
> Now, we got shared storage over multipath FC there, so we need to move from<br>
> drbd to shared storage. And I got totally confused now - I can not find a<br>
> guide on how to set things up. I see two options:<br>
> - use gfs2<br>
> - use ext4 with sbd<br>
<br>
</div>If you don't need concurrent access from both nodes to the same file<br>
system, using ext4/XFS in a fail-over configuration is to be preferred<br>
over the complexity of a cluster file system like GFS2/OCFS2.<br>
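<br>
For example, a minimal fail-over Filesystem resource in crmsh might look like this - the device path, mount point and timings are hypothetical:<br>
<pre>
# One node at a time mounts the shared LUN; Pacemaker moves the
# mount on failover.
primitive fs_ora ocf:heartbeat:Filesystem \
        params device="/dev/mapper/ora_lun" directory="/u01" fstype="ext4" \
        op monitor interval=20s timeout=40s
</pre>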
<br>
RHT has chosen to not ship sbd, unfortunately, so you can't use this<br>
very reliable fencing mechanism on CentOS/RHEL. Or you'd have to build<br>
it yourself. Assuming you have hardware fencing right now, you can<br>
continue to use that too.<br>
<br>
<br>
Regards,<br>
Lars<br>
<span><font color="#888888"><br>
--<br>
Architect Storage/HA<br>
SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg)<br>
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde<br>
</font></span><div><div><br>
<br>
_______________________________________________<br>
Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org" target="_blank">Pacemaker@oss.clusterlabs.org</a><br>
<a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
</div></div></blockquote></div></div></div><div><br></div></div></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>Best regards, AAA.
</div>