[Pacemaker] syslog-ng as resource / how to make sure it gets restarted

Koch, Sebastian Sebastian.Koch at netzwerk.de
Mon Jun 28 08:52:56 UTC 2010


Hi,

 

I have a logging application running on an active/passive LAMP
cluster. The server acts as a logging server for my
infrastructure. I configured syslog-ng to listen on all open IP
addresses. It works fine, but when I migrate the ClusterIP, the
syslog-ng daemon doesn't recognize the new IP address and stops
receiving messages.

 

What would be the correct way to cluster syslog-ng? My first idea is to
clone syslog-ng, or to include it in grp_MySQL to make sure it
gets restarted when the ClusterIP migrates. Maybe some of you have a
better idea, or hints on how to make sure that syslog-ng gets restarted
on IP migration.
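A minimal crm shell sketch of the grouping idea: putting syslog-ng in the same group as the ClusterIP makes Pacemaker stop and start it together with the address on migration. The resource and group names (other than grp_MySQL mentioned above), the IP address, and the LSB script name are assumptions for illustration; the init script name in particular varies per distribution.

```shell
# Hypothetical sketch: group syslog-ng with the cluster IP so a
# migration of the address also restarts the daemon.
# Group members start in listed order and stop in reverse order.
crm configure primitive p_clusterip ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.100 cidr_netmask=24 \
    op monitor interval=30s
crm configure primitive p_syslog_ng lsb:syslog-ng \
    op monitor interval=30s
# Ordering inside the group guarantees syslog-ng starts only after
# the IP is up, so it can bind to the migrated address.
crm configure group grp_logging p_clusterip p_syslog_ng
```

An alternative is keeping syslog-ng cloned on all nodes and adding order/colocation constraints against the ClusterIP, but a group is the simplest way to get the restart-on-migration behaviour.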

 

Thanks in advance.

Regards

 

Sebastian Koch



 

From: Robert Lindgren [mailto:robert.lindgren at gmail.com] 
Sent: Monday, 28 June 2010 10:34
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] small gfs question

 

 

On Mon, Jun 28, 2010 at 10:19 AM, Andrew Beekhof <andrew at beekhof.net>
wrote:

On Sun, Jun 27, 2010 at 8:00 PM, Robert Lindgren

<robert.lindgren at gmail.com> wrote:
>>>> You do have stonith configured right?
>>>
>>> No :)
>>>
>>> right now (during test) I don't have hardware with stonith devices,
>>> like drac5 or something. Is it possible to configure stonith with
>>> for example external/ssh and make it work?
>>
>> Ah, that explains it then
>>
>> Well external/ssh isn't going to work if there's no network access to
>> the "bad" node...
>
> Any recommendations on how to configure stonith on a test environment
> where there is not any physical stonith devices? I want to test this
> before the production stage, where there is a proper stonith environment.

Virtual machines perhaps?
There are a couple of VM fencing options out there.
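For KVM guests, one such option is the external/libvirt stonith plugin shipped with cluster-glue, which fences a virtual test node through the libvirt API instead of a physical power device. A hedged sketch; the node names and hypervisor URI are placeholders for a hypothetical two-node test setup:

```shell
# Hypothetical sketch: fence virtual test nodes via libvirt
# (external/libvirt ships with cluster-glue; hostnames and the
# hypervisor URI below are placeholders).
crm configure primitive st_libvirt stonith:external/libvirt \
    params hostlist="node1,node2" \
           hypervisor_uri="qemu+ssh://kvmhost/system" \
    op monitor interval=60s
crm configure clone cl_st_libvirt st_libvirt
crm configure property stonith-enabled=true
```

Unlike external/ssh, this works even when the "bad" node itself is unreachable, because the fencing request goes to the hypervisor host rather than to the node being fenced.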

 

 

Heh, well my setup is the following:

 

I have two physical machines, running pacemaker with gfs2/drbd
active/active, with some VirtualDomain kvm resources, where the kvm
images are served from the gfs2/drbd partition. 
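A stack like that can be sketched in crm shell roughly as follows; all resource names, device paths, and the domain XML path are hypothetical placeholders, not the actual configuration described above:

```shell
# Hypothetical sketch of the described stack: dual-primary DRBD under a
# cloned gfs2 filesystem, with a VirtualDomain guest whose image lives
# on that filesystem.
crm configure primitive p_drbd ocf:linbit:drbd \
    params drbd_resource="r0" op monitor interval=30s
crm configure ms ms_drbd p_drbd \
    meta master-max=2 notify=true interleave=true
crm configure primitive p_gfs2 ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/srv/vmstore" fstype="gfs2"
crm configure clone cl_gfs2 p_gfs2 meta interleave=true
crm configure primitive p_vm ocf:heartbeat:VirtualDomain \
    params config="/srv/vmstore/guest1.xml" hypervisor="qemu:///system"
crm configure colocation co_vm_on_fs inf: p_vm cl_gfs2
crm configure order o_fs_before_vm inf: cl_gfs2 p_vm
```

Note that master-max=2 is what makes the DRBD resource dual-primary, which gfs2 requires for active/active access; this kind of setup is also exactly why working fencing matters.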

 

I guess this setup is pretty hard to reproduce on another layer of
virtualization, where the physical machines are turned into virtual ones.
It might be possible, though, and I should at least be able to test the
gfs2/drbd problem I'm having.

 


