P13 Controllers CS nodes redundancy issue
I am facing a strange redundancy issue with the P13 Controllers connectivity servers.
System version: 800xA 6.0.3
Windows installed: Windows 8.1 Pro on the servers
1oo2 combination network
2 AS nodes, combined with the domain controllers
2 CS nodes for the P13 controllers
Some more connectivity server nodes for other controllers, plus operator nodes
While checking redundancy:
1) Removed both cables from CS1 (the connection to the controllers was stopped by removing them).
We are NOT able to see live values on the CS1 and AS2 servers and on a few operator workplaces.
2) Removed both cables from CS2 (the connection to the controllers was stopped by removing them).
We are NOT able to see live values on the CS2 and AS1 servers and on a few operator workplaces (different workplaces than in step 1).
(I am unable to understand why the live value updates stop on particular nodes, because a node in an 800xA system should not rely on one particular node to fetch its data.)
I cross-checked with a few people who know a little about P13 controllers and found that the secondary P13 CS node is not added as a redundant server to the primary P13 connectivity server.
Both P13 connectivity servers are said to work in parallel.
Is that correct?
Let me know if anybody has suggestions to resolve the above issue.
I don't know what P13 is.
OPC DA redundancy in 800xA requires two (2) OPC DA servers running with identical address space and content.
In such situations, a redundant OPC DA Connector service group should be configured with two providers running on two separate server nodes.
In the image below there are three redundant groups and two non-redundant.
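The actual configuration is done in the 800xA service structure, but to make the behaviour concrete, here is a minimal Python sketch of how a redundant service group looks from a client's point of view. All class, node, and tag names here are illustrative assumptions, not 800xA APIs: the point is that clients ask the group, and any healthy provider can answer.

```python
class Provider:
    def __init__(self, node_name):
        self.node_name = node_name
        self.connected_to_controller = True   # both cables plugged in

    def read(self, tag):
        if not self.connected_to_controller:
            raise ConnectionError(f"{self.node_name}: no path to the controller")
        return f"live value of {tag} via {self.node_name}"


class RedundantServiceGroup:
    """One OPC DA Connector service group with two providers (illustrative)."""

    def __init__(self, providers):
        self.providers = providers

    def read(self, tag):
        # Clients subscribe to the group, not to a single CS node, so any
        # healthy provider can serve the request.
        for provider in self.providers:
            try:
                return provider.read(tag)
            except ConnectionError:
                continue
        raise ConnectionError("no provider in this group can reach the controller")


cs1, cs2 = Provider("CS1"), Provider("CS2")
group = RedundantServiceGroup([cs1, cs2])

cs1.connected_to_controller = False       # "removed both cables in CS1"
print(group.read("LIC101.PV"))            # still served, now via CS2
```

If the two P13 connectivity servers really run in parallel without being configured as two providers in the same service group, the clients have no group to fail over within, and that alone could explain why some nodes go blind when one CS loses its cables.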
Apart from running a redundant service group and providers, the affinity must be configured correctly.
Without affinity, the connection is random (which in most cases is not desired; e.g. randomization may cause an entire EOW console with three clients to use the same set of servers).
With affinity, you must make sure all servers are listed in every row of the affinity definition; servers that are not listed will be unreachable. We also recommend terminating every affinity row with the "wildcard", the All Nodes group. This prevents being "locked out" by a bad affinity.

Adding and removing nodes from a system often messes up affinity; this can partly be prevented by using node groups. Just make sure the client is put in the correct group (e.g. OddClients or EvenClients) and trust the affinity configuration on the group to do the job. Groups can also be used for servers; then the affinity definition becomes very short and neat.
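Here is a similar illustrative sketch (again, not 800xA code; the group and node names are assumptions) of how an affinity row steers a client: the client walks its ordered server list and uses the first reachable one, so a server missing from the row is never used, and a missing wildcard can lock the client out.

```python
ALL_NODES = ["CS1", "CS2"]   # the "wildcard" group

# Ordered preference list per client group (illustrative names only).
affinity = {
    "OddClients":  ["CS1", "CS2"],        # full row, CS1 preferred
    "EvenClients": ["CS2", "CS1"],        # full row, CS2 preferred
    "BadClients":  ["CS1"],               # CS2 forgotten, no wildcard
    "SafeClients": ["CS1"] + ALL_NODES,   # short row, wildcard as safety net
}

reachable = {"CS1": False, "CS2": True}   # CS1 cables pulled, as in step 1


def pick_server(client_group):
    """Return the first reachable server in the group's affinity row."""
    for server in affinity[client_group]:
        if reachable.get(server, False):
            return server
    return None                            # locked out by a bad affinity row


for client_group in affinity:
    print(client_group, "->", pick_server(client_group))
# OddClients -> CS2, EvenClients -> CS2, BadClients -> None, SafeClients -> CS2
```

In the sketch, BadClients goes dark when CS1 is unplugged, much like some of your operator workplaces did, which is why I would start by checking the affinity rows and the redundant service group configuration for the P13 connectivity servers.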