<div dir="ltr">I had a feeling it was something to do with that. It was confusing because I could use the move command to move between my three original hosts, just not the fourth. Then there were the "device not found" errors, which added to the confusion. <div><br></div><div>I have resource stickiness set because I have critical things running that I don't want moving around (such as a Windows KVM) and I'd rather not see any downtime from the reboots. (It tries to actually live-migrate KVMs, but that doesn't work; LXC containers are stopped and started.)</div><div><br></div><div>I moved them to specific servers because I thought I could do a better job of balancing the cluster on my own (based on knowing what my VMs were capable of). I'm not sure whether my approach is a good idea; it just sounded right to me at the time.<br><br><div class="gmail_quote"><div dir="ltr">On Tue, Nov 8, 2016 at 2:00 PM Ken Gaillot <<a href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 11/08/2016 12:54 PM, Ryan Anstey wrote:<br class="gmail_msg">
> I've been running a ceph cluster with pacemaker for a few months now.<br class="gmail_msg">
> Everything has been working normally, but when I added a fourth node it<br class="gmail_msg">
> won't work like the others, even though their OS is the same and the<br class="gmail_msg">
> configs are all synced via salt. I also don't understand pacemaker that<br class="gmail_msg">
> well since I followed a guide for it. If anyone could steer me in the<br class="gmail_msg">
> right direction I would greatly appreciate it. Thank you!<br class="gmail_msg">
><br class="gmail_msg">
> - My resources only start if the new node is the only active node.<br class="gmail_msg">
> - Once started on the new node, if they are moved back to one of the<br class="gmail_msg">
> original nodes, they won't go back to the new one.<br class="gmail_msg">
> - My resources work 100% if I start them manually (without pacemaker).<br class="gmail_msg">
> - (In the logs/configs below, my resources are named "unifi",<br class="gmail_msg">
> "rbd_unifi" being the main one that's not working.)<br class="gmail_msg">
<br class="gmail_msg">
The key is all the location constraints starting with "cli-" in your<br class="gmail_msg">
configuration. Such constraints were added automatically by command-line<br class="gmail_msg">
tools, rather than added by you explicitly.<br class="gmail_msg">
<br class="gmail_msg">
For example, Pacemaker has no concept of "moving" a resource. It places<br class="gmail_msg">
all resources where they can best run, as specified by the<br class="gmail_msg">
configuration. So, to move a resource, command-line tools add a location<br class="gmail_msg">
constraint making the resource prefer a different node.<br class="gmail_msg">
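Concretely, here is a sketch of what such a "move" does, assuming the pcs command-line tool, a resource named "unifi" (taken from the thread above), and a hypothetical target node "node4":<br class="gmail_msg">

```shell
# "pcs resource move" returns immediately, but behind the scenes it
# injects a location constraint with an auto-generated "cli-" id
# into the cluster configuration (the CIB):
pcs resource move unifi node4

# Dumping the constraints section of the CIB afterwards shows
# something along the lines of:
#   <rsc_location id="cli-prefer-unifi" rsc="unifi"
#                 node="node4" score="INFINITY"/>
cibadmin --query --scope constraints
```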
<br class="gmail_msg">
The downside is that the preference doesn't automatically go away. The<br class="gmail_msg">
resource will continue to prefer the other node until you explicitly<br class="gmail_msg">
remove the constraint.<br class="gmail_msg">
<br class="gmail_msg">
Command-line tools that add such constraints generally provide some way<br class="gmail_msg">
to clear them. If you clear all those constraints, resources will again<br class="gmail_msg">
be placed on any node equally.<br class="gmail_msg">
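Clearing them might look like this (a sketch assuming the pcs tool and the resource names from this thread; crmsh users have equivalent "crm resource" subcommands):<br class="gmail_msg">

```shell
# List every constraint with its id; the auto-generated ones
# have ids starting with "cli-" (e.g. cli-prefer-..., cli-ban-...):
pcs constraint --full

# Clear the move/ban constraints for a single resource:
pcs resource clear rbd_unifi

# Or delete one specific constraint by its id:
pcs constraint remove cli-prefer-rbd_unifi
```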
<br class="gmail_msg">
Separately, you also have a default resource stickiness of 100. That<br class="gmail_msg">
means that even after you remove the constraints, resources that are<br class="gmail_msg">
already running will tend to stay where they are. But if you stop and<br class="gmail_msg">
start a resource, or add a new resource, it could start on a different node.<br class="gmail_msg">
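That default can be inspected or changed like so (again a sketch assuming the pcs tool; the low-level equivalent is crm_attribute with --type rsc_defaults):<br class="gmail_msg">

```shell
# Show the current resource defaults (in this cluster it should
# include resource-stickiness=100):
pcs resource defaults

# Lower the stickiness to let the cluster rebalance resources
# freely, or raise it to pin running resources harder:
pcs resource defaults resource-stickiness=0
```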
<br class="gmail_msg"><snip>
</blockquote></div></div></div>