[Pacemaker] Questions about 2-node cluster with mysql, apache, tomcat and shared partition

Glauber Cabral glauber.sp at gmail.com
Thu Feb 18 14:52:41 UTC 2010


Hi Andrew.

I've followed your tutorial, but since I'm using OCFS2 on a shared disk, I
wasn't sure whether the mysql resource should be cloned or not.
So I've set up the mysql service without the clone, and I think it's OK now.
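In case it helps anyone else, this is roughly what I ended up with: a group
instead of the clone, so that the group's implicit colocation and start order
keep the service IP and mysqld together (replacing my old mysql-clone
constraints; resource names as in the config quoted below):

group mysql-group ip_mysql mysqld \
        meta target-role="Started"

The colocation and order rules that referenced mysql-clone now reference
mysql-group instead.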

I'll try to configure STONITH now.
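Something like the sketch below is what I have in mind, assuming the nodes
have IPMI-capable management boards (the plugin choice, address and
credentials are placeholders I still have to confirm for our hardware):

primitive stonith-erytheia stonith:external/ipmi \
        params hostname="erytheia" ipaddr="10.0.0.1" userid="admin" passwd="secret" \
        op monitor interval="60s"
location stonith-erytheia-placement stonith-erytheia -inf: erytheia
property stonith-enabled="true"

(plus a mirror-image primitive to fence panopea, kept off panopea itself).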
Later I'll run another test with apache and tomcat, and generate an
hb_report if the problem persists.
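For the record, I expect to capture the failure window with something like
this (the timestamps are only examples):

hb_report -f "2010-02-18 14:00" -t "2010-02-18 15:00" /tmp/tomcat-restart-report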

Thanks a lot for your help.

Cheers,
Glauber

On Tue, Feb 16, 2010 at 6:53 PM, Andrew Beekhof <andrew at beekhof.net> wrote:
> On Thu, Feb 11, 2010 at 4:07 PM, Glauber Cabral <glauber.sp at gmail.com> wrote:
>> Hi people.
>> I've been reading about cluster for some time and trying to configure
>> a 2-node cluster with shared storage.
>> I've already tried the IRC channel, but these questions were too big to
>> ask there, and I thought the list would be a better place for
>> them.
>> I would appreciate a lot if you can help me.
>>
>> I've created the configuration shown at the end of this message.
>> I still haven't included STONITH, and I'm not sure whether that is causing problems.
>> I think the cluster behavior should be like this:
>> OCFS2: partition with the database files and some files that tomcat will
>> access (the application guarantees that a file will not be accessed
>> by both tomcat instances, at least for our needs at the moment).
>> Mysql: running on only one node, with its datadir on the shared storage.
>> Tomcat: both nodes run tomcat and reach mysql through a virtual IP.
>> apache: runs on only one node and load-balances connections to both
>> tomcats via the AJP protocol.
>> tomcat needs to start after mysql and the filesystem (see the sketch just below)
>> apache will start after tomcat (but I'm not sure if it's necessary)
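>>
>> As an order constraint, the tomcat/mysql dependency would be something
>> like the line below; I notice the config at the end of this message only
>> orders tomcat after fs and ip, so this one seems to be missing:
>>
>> order start_tomcat_after_mysql inf: mysql-clone tomcat-clone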
>>
>> The first question is about mysql. It's running on only one node at a
>> time,
>
> You cloned it though, so it thinks it should be able to run more than once.
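>
> Untested sketch: drop the clone wrapper and point the existing
> constraints at the primitive instead, something like:
>
> colocation mysql_with_fs inf: mysqld fs-clone
> colocation mysql_with_ip_mysql inf: mysqld ip_mysql
> order start_mysql_after_fs inf: fs-clone mysqld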
>
>> but the logfile has a warning saying that the other mysqld
>> process cannot run anywhere. I'm not sure whether this warning is
>> expected or whether I misconfigured the mysql service. Should it be
>> configured as master/slave?
>
> If they're supposed to run on both hosts with one of them r/o and
> syncing from the other, yes.
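>
> The crm shape for that is something like the snippet below, though you
> would need a resource agent that actually supports master/slave
> semantics; I haven't checked whether ocf:heartbeat:mysql does:
>
> ms ms-mysql mysqld \
>         meta master-max="1" master-node-max="1" clone-max="2" notify="true"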
>
>> I didn't fully understand this part because
>> all the tutorials I've found use DRBD rather than a shared partition.
>>
>> Another question is about migrating apache from one node to the other.
>> When I migrate apache, the tomcat service is restarted, and I think it
>> shouldn't be.
>> I think it's being restarted because of the colocation rule. Is this
>> correct? Should I remove that rule?
>
> Probably not.
> You should create an hb_report for that.
>
>>
>> The third question: I tried to shut down a node by powering it off, and
>> the services didn't start on the other node. Did this happen
>> because I don't have STONITH configured?
>
> No, it's because they're already there.  You cloned them all, so they're
> always running there.
> Although OCFS2 with no STONITH is insanely dangerous.
>
>> I know there is too little
>> information here to answer this question definitively, but I just want
>> to get an idea of what kind of error I should look for.
>>
>> Thank you in advance for any suggestions and help.
>> Cheers,
>> Glauber
>>
>> The crm configuration:
>>
>> node erytheia
>> node panopea
>> primitive apache ocf:heartbeat:apache \
>>        params configfile="/etc/httpd/conf/httpd.conf" \
>>        op monitor interval="1min" \
>>        meta target-role="Started" is-managed="true"
>> primitive dlm ocf:pacemaker:controld \
>>        op monitor interval="120s"
>> primitive fs ocf:heartbeat:Filesystem \
>>        params device="/dev/xvdb1" directory="/tidia" fstype="ocfs2" \
>>        op monitor interval="120s"
>> primitive ip ocf:heartbeat:IPaddr2 \
>>        params ip="143.106.157.26" cidr_netmask="25" \
>>        op monitor interval="30s"
>> primitive ip_mysql ocf:heartbeat:IPaddr2 \
>>        params ip="192.168.128.10" cidr_netmask="25" \
>>        op monitor interval="30s" timeout="30s" start-delay="0" depth="0"
>> primitive mysqld ocf:heartbeat:mysql \
>>        params binary="/usr/bin/mysqld_safe" \
>>        datadir="/tidia/mysql/datadir" pid="/home/sakai/mysql/mysqld.pid" \
>>        socket="/home/sakai/mysql/mysqld.sock" \
>>        log="/home/sakai/mysql/mysqld.log" user="sakai" group="tidia" \
>>        meta target-role="Started" \
>>        op monitor interval="120s"
>> primitive o2cb ocf:ocfs2:o2cb \
>>        op monitor interval="120s"
>> primitive tomcat ocf:heartbeat:tomcat \
>>        params statusurl="http://127.0.0.1:8080" \
>>        java_home="/usr/java/jdk1.5.0_22" tomcat_name="tomcat" \
>>        tomcat_user="sakai" tomcat_stop_timeout="120" \
>>        tomcat_start_opts="-server -Xms512m -Xmx1024m -XX:+UseParallelGC -XX:PermSize=256m -XX:MaxPermSize=512m -XX:NewSize=256m -XX:MaxNewSize=486m -Djava.awt.headless=true -Duser.language=pt -Duser.region=BR -Dsakai.demo=false" \
>>        catalina_home="/usr/lib/apache-tomcat-5.5.23" \
>>        catalina_pid="/usr/lib/apache-tomcat-5.5.23/logs/catalina.pid" \
>>        catalina_opts="-Xmx512M -XX:MaxPermSize=256M -Duser.timezone=America/Sao_Paulo -Duser.language=pt -Duser.region=BR" \
>>        op start interval="0" timeout="240s" \
>>        op stop interval="0" timeout="240s" \
>>        op status interval="0" timeout="60s" \
>>        op monitor interval="10s" timeout="30s" start-delay="0" depth="0" \
>>        meta target-role="Started"
>> clone dlm-clone dlm \
>>        meta globally-unique="false" interleave="true" target-role="Started"
>> clone fs-clone fs \
>>        meta interleave="true" ordered="true" target-role="Started"
>> clone mysql-clone mysqld \
>>        meta interleave="true" target-role="Started"
>> clone o2cb-clone o2cb \
>>        meta globally-unique="false" interleave="true" target-role="Started"
>> clone tomcat-clone tomcat \
>>        meta ordered="false" interleave="true" globally-unique="false" \
>>        is-managed="true"
>> location cli-prefer-apache apache \
>>        rule $id="cli-prefer-rule-apache" inf: #uname eq erytheia
>> location cli-prefer-ip ip \
>>        rule $id="cli-prefer-rule-ip" inf: #uname eq erytheia
>> location cli-prefer-mysql-clone mysql-clone \
>>        rule $id="cli-prefer-rule-mysql-clone" inf: #uname eq erytheia
>> colocation apache-with-ip inf: apache ip
>> colocation fs-with-o2cb inf: fs-clone o2cb-clone
>> colocation mysql_with_fs inf: mysql-clone fs-clone
>> colocation mysql_with_ip_mysql inf: mysql-clone ip_mysql
>> colocation o2cb-with-dlm inf: o2cb-clone dlm-clone
>> colocation tomcat_with_fs inf: tomcat-clone fs-clone
>> order start-apache-after-ip inf: ip apache
>> order start-fs-after-o2cb inf: o2cb-clone fs-clone
>> order start-o2cb-after-dlm inf: dlm-clone o2cb-clone
>> order start_mysql_after_fs inf: fs-clone mysql-clone
>> order start_mysql_after_ip_mysql inf: ip_mysql mysql-clone
>> order start_tomcat_after_fs inf: fs-clone tomcat-clone
>> order start_tomcat_after_ip inf: ip tomcat-clone
>> order start_tomcat_before_apache inf: tomcat-clone apache
>> property $id="cib-bootstrap-options" \
>>        last-lrm-refresh="1265892279" \
>>        expected-quorum-votes="2" \
>>        dc-version="1.0.5-ee19d8e83c2a5d45988f1cee36d334a631d84fc7" \
>>        cluster-infrastructure="openais" \
>>        stonith-enabled="false" \
>>        no-quorum-policy="ignore"
>> rsc_defaults $id="rsc-options" \
>>        resource-stickiness="100"
>>