[ClusterLabs] Antw: Re: Antw: Re: Antw: Re: EL6, cman, rrp, unicast and iptables
Ulrich.Windl at rz.uni-regensburg.de
Tue Sep 15 02:09:19 EDT 2015
>>> Noel Kuntze <noel at familie-kuntze.de> wrote on 14.09.2015 at 17:46 in
message <55F6EBF0.2000504 at familie-kuntze.de>:
> Hello Ulrich,
>> What totem does: it detects network problems when there are none:
>> # grep ringid.*FAULTY /var/log/messages |wc -l
> Yup. What information from your specific setup can you contribute to this
> particular discussion
> about Digimer's problem?
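The count above can be reproduced against a captured log. A minimal self-contained sketch — the sample lines are invented for illustration, and real corosync [TOTEM] messages vary by version:

```shell
# Count "ring marked FAULTY" events, as in the grep above.
# The log lines below are made up to show the shape of the data;
# they are not verbatim corosync output.
cat > /tmp/sample_messages <<'EOF'
Sep 14 10:01:02 node1 corosync[1234]: [TOTEM ] Marking ringid 0 interface 10.0.0.1 FAULTY
Sep 14 10:01:05 node1 corosync[1234]: [TOTEM ] Automatically recovered ring 0
Sep 14 10:03:11 node1 corosync[1234]: [TOTEM ] Marking ringid 0 interface 10.0.0.1 FAULTY
EOF
grep -c 'ringid.*FAULTY' /tmp/sample_messages
```

A count well above zero on a network whose interface counters show no errors is exactly the discrepancy being argued about here.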
>>> This is something that no other protocol you encounter on the network
>>> is supposed to do.
>> Definitely not: 0 interface errors on any interface, no communication errors.
> What does protocol error detection have to do with interface errors?
> Protocol errors aren't
> contained in interface errors.
If you run a protocol from A to B where neither A's interface nor B's
interface has any errors, and B reports a protocol error, the obvious
conclusion is that the protocol is broken. Especially if the protocol claims
to implement reliable in-order transfer.
Of course from "no interface errors" you cannot deduce "no protocol errors",
but if both parties use the same software, the bad protocol behaviour can only
come from that software.
>> Even NFS over UDP is much smarter than TOTEM is.
> NFS over UDP is for bulk transfer of data. You're comparing apples to oranges.
> What improvement for TOTEM do you want to take from NFS over UDP?
> NFS over UDP has congestion control. Is that what you mean?
NFS over UDP has the feature that it works under load, even if some packets
are dropped. I cannot confirm that property for TOTEM.
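NFS's tolerance of dropped packets comes from RPC-level timeout and retransmission, which is tunable per mount. As an illustration, an fstab entry with the standard `timeo`/`retrans` options (server name and export path are placeholders):

```
# /etc/fstab — NFS over UDP with explicit retransmit tuning.
# timeo is in tenths of a second; each RPC is retried retrans times
# before a "server not responding" major timeout is reported.
server:/export  /mnt/nfs  nfs  proto=udp,timeo=11,retrans=3  0  0
```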
>> If you have a central authority that can decide on each and every packet,
>> you are right. I was talking from practical experience...
> In a point-to-point topology like Digimer is using (host-switch-host), the
> switch in between basically guarantees that every valid frame that enters one
> port also exits another port of the switch. This basically makes the switch
> vanish here. It's no longer relevant for this, assuming that it does its job
> and the switching matrix can handle the frames, which it probably does.
> If it cannot do that, then the switch should probably be replaced.
> The remaining two points are the interfaces of the hosts.
> Digimer uses bonds. So we have two interfaces on either side. The interfaces
> themselves do not do any prioritization.
> They just pass valid frames through to the OS over interrupts. The central
> points now are the bonding devices,
> which have traffic congestion control algorithms attached to them, too. Not
> for ingress, because Linux can't do it,
> but for egress. The egress, as I deduced in other emails, is fine.
> The pfifo_fast qdisc is the central point. There's nothing else in this model
> that influences the transmission between the two points. Of course, this
> model is idealized.
> Maybe there are outside factors that cause the problems, maybe there are
> not. This is something I cannot know about this setup.
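The pfifo_fast behaviour referred to here can be made concrete: it is a three-band FIFO that picks a band from the Linux packet priority via a fixed priomap, and band 0 is always drained before bands 1 and 2. A small sketch of that lookup, using the kernel's compiled-in default priomap:

```shell
# pfifo_fast's default priomap: Linux packet priority (0-15) -> band (0-2).
# Band 0 is dequeued first, so traffic mapped to band 0 is serviced
# ahead of everything else under sustained load.
priomap="1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1"

band_for_priority() {
    # awk fields are 1-based while priorities are 0-based, hence p+1
    echo "$priomap" | awk -v p="$1" '{ print $(p+1) }'
}

band_for_priority 0   # best-effort priority   -> band 1
band_for_priority 6   # "minimize delay" TOS   -> band 0
```

This is why, absent any other qdisc configuration, low-delay-marked traffic on the egress side jumps ahead of bulk traffic on the same bond.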
> --
> Mit freundlichen Grüßen/Kind Regards,
> Noel Kuntze
> GPG Key ID: 0x63EC6658
> Fingerprint: 23CA BB60 2146 05E7 7278 6592 3839 298F 63EC 6658
> Users mailing list: Users at clusterlabs.org
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org