Network throughput very low when Ethernet Network Adapters are bridged
We have an 800xA 6.0.3 system installed, consisting of clients running Windows 8.1 Pro and redundant servers running Windows Server 2012 R2 Standard. The connectivity package installed is PLC Connect, for communication with AC500 PLCs.
In order to provide redundant Ethernet connectivity between all nodes in the system, we have dual-port NICs on all the PCs. Since the PLCs and PLC Connect do not support RNRP, we are using network bridging to combine the two ports on each NIC and eliminate a single point of failure in the Ethernet connectivity. This concept works fairly well with 800xA 5.1 and Windows Server 2008. However, when the same configuration is applied to servers running Windows Server 2012, we have observed that the network throughput falls significantly. This causes the workplaces on the clients to be extremely sluggish.
We are also trying to use the NIC teaming option in Windows Server 2012 to provide Ethernet redundancy. But with this configuration, we are unable to ping any node after assigning an IP to the teamed adapter (firewall disabled and ICMP enabled).
This issue with bridging is only observed on the server PCs. There is no network throughput drop between the client PCs running Windows 8.1.
Currently, we are enabling only one Ethernet port on the server NICs so that we can continue with commissioning until a proper solution is available for the networking issue described above. Hence, it would be great if a solution could be provided.
Voted best answer
Rob is absolutely correct.
RNRP network redundancy requires two (2) physically separated (isolated) networks, primary and secondary.
Mixing network teaming with RNRP is neither a tested nor an approved solution.
How does the PLC handle teaming? Or are you defining two separate IP addresses, with redundancy handled in the PLC application and a double connection from PLC Connect to the PLC?
Since you must use RNRP (really no choice here), I recommend disabling the network teaming between 800xA client and server computers, and possibly setting up separate primary and secondary networks as per the user's guide.
Separating the client/server network from the control network adds further isolation. With so few PLCs, you could use a small industrial switch, such as an ABB NE801/802, close to the PLC and run fiber back to the computer room, where a mirrored NE801/802 acts as the media converter.
If the PLC runs neither RNRP nor teaming, you could in theory use an ABB NE871 industrial router close to the PLC. The NE871 has three network ports and can act as a bridge between a redundant RNRP network and a singly connected computer/PLC.
Do NOT Bridge network adapters when you are running RNRP.
Bridging and NIC teaming (and anything else that tries to re-route traffic, like Rapid Spanning Tree) all interfere in unpredictable ways with RNRP, and you should not do this. Even at version 5.1, where things "appear" to work correctly, you can have problems.
You should first understand that RNRP is not really a part of 800xA - it is a base-level Ethernet routing system. It is completely independent of 800xA and can run on any Windows-based PC. The statement that "PLC Connect does not support RNRP" is meaningless: PLC Connect is just a software package running on a PC that has RNRP installed, so PLC Connect DOES "support" RNRP. RNRP routers (including your connectivity servers, if they are configured to route) will re-route ALL Ethernet traffic based on the areas you have configured. RNRP does not care whether a message is an "800xA" message or not.
The only thing that doesn't use RNRP is the AC500 PLC itself, which will not route Ethernet messages. This doesn't matter - the PLC does not need to route.
Do your AC500 PLCs have redundant Ethernet ports? If the answer is "no", then there is nothing you can do to make the network fully redundant. The only redundant component will be the connectivity servers. You will have a single point of failure at the network card on each server, but you still have redundancy because you have two connectivity servers.
If the answer is "yes" then configure the network according to the rules for your PLC. This may mean using two separate network areas, or perhaps you can use RNRP on the connectivity server to configure a "local" network area, which will prevent the CS from using that area to route other traffic, but still route traffic between the two AC500 PLC networks.
Get a copy of the 800xA Network Configuration guide and RTFM.
Hello Mr. Stefan,
I was able to resolve the issues. Please find the steps I followed below, as an answer. Thank you.
As suggested, I went back to using RNRP's implicit addressing method for the redundant NICs on the connectivity servers. However, after IP assignment, it was observed that the secondary network connection (Path 1) would never take over if a cable of a node on Path 0 was disconnected.
From one of your answers in an old thread, I managed to identify the issue as an improper multicast setting on the switch connecting the Path 1 connections of all nodes: multicast was disabled. The settings for enabling multicast were not straightforward on these switches; only after comparing the two switches was I able to replicate the settings of the Path 0 network switch on the Path 1 network switch. Following this, all the nodes and switches were restarted. Switchover is now seamless for client and server Ethernet communication.
The second issue was how the CM597 would communicate with the connectivity servers, whose NICs are on two different networks, since the CM597 can have only one IP address.
The CM597 and the NICs on the connectivity servers were assigned Class C IP addresses in the 192.168.1.xxx/255.255.255.0 range.
In the Advanced TCP/IP settings of the Ethernet adapters on the connectivity servers, I assigned IPs in the 192.168.1.xxx range to each Ethernet adapter, in addition to the Class B addresses 172.16.4.x (Path 0 adapter) and 172.17.4.x (Path 1 adapter). I made sure that the node ID and the IP address host ID were the same when assigning the Class B addresses.
Note: I am unsure whether it is advisable to assign multiple IPs to the same Ethernet adapter, but with the current setup this was the only workable solution. I did not come across any communication issues during the network redundancy checks.
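The addressing scheme above can be sanity-checked with a short script. This is just a sketch of the rule that every address on a node (Path 0, Path 1, and the extra CM597-subnet address) must reuse the RNRP node ID as its host ID (last octet); the address ranges come from the steps above, while the example node ID 21 and the helper names are assumptions for illustration.

```python
import ipaddress

def node_addresses(node_id: int) -> dict:
    """Addressing plan for one connectivity server, per the scheme described
    above: the RNRP node ID is reused as the host ID of every address."""
    return {
        "path0": ipaddress.IPv4Address(f"172.16.4.{node_id}"),   # primary RNRP path
        "path1": ipaddress.IPv4Address(f"172.17.4.{node_id}"),   # secondary RNRP path
        "plc":   ipaddress.IPv4Address(f"192.168.1.{node_id}"),  # extra address on the CM597 subnet
    }

def host_id(addr: ipaddress.IPv4Address) -> int:
    """Extract the last octet (host ID) of an IPv4 address."""
    return int(addr) & 0xFF

# Sanity check: every address on the node carries the same host ID as the node ID.
plan = node_addresses(21)  # node ID 21 is an assumed example value
for name, addr in plan.items():
    assert host_id(addr) == 21, f"{name}: host ID does not match node ID"
print(plan["path0"], plan["path1"], plan["plc"])
```

A mismatch between node ID and host ID on any adapter would trip the assertion, which is a quick way to audit the plan before touching the Advanced TCP/IP settings on each server.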