Abnormalities in CN1 network :- PM864
We are using seven sets of redundant controllers (Upper Primary and Lower Secondary) and we are facing the problem explained below.
1. Intermittent abnormal blinking is observed on the CN1 connections of all upper controllers (CM1, CM2, CM3, CM4, CM5, CM6, CM7).
2. A snapshot is attached showing the result when any of the seven controllers is pinged.
3. There is no issue with cables, connectors, or switch ports, yet the problem persists.
4. There are no abnormalities in RNRP either.
1. The suspected network cable of CM5 CN2 was replaced with a newly crimped cable; the problem still persists.
2. The suspected cable's port was moved to another healthy port (CM3 CN2); the problem still persists.
Switch Detail:- Cisco 2960 SI Series.
Kindly share if anyone has knowledge of a solution.
Can you describe what you mean by "intermittently abnormal blinking"?
An AC 800M controller will for natural reasons not respond to ping while it is running any downloaded application code; the downloaded application has higher priority than Ethernet communication.
Incoming telegrams will be buffered until all application code blocks have stopped executing.
A maxed-out AC 800M will stop executing downloaded code blocks once 70% of the total available time has elapsed (or rather, the firmware in the AC 800M will automatically throttle back the application load by increasing its cycle time(s) to meet a maximum of 70% cyclic load; the remaining 30% is reserved for system tasks and Ethernet communication).
Too much communication in combination with too much application code could lead to "Buffer overruns" (packet loss) resulting in resending and loss of communication bandwidth.
Overruns (and other Ethernet statistics) are visible in the Remote System -> Show Controller Analysis -> (Get) Network Information output.
The Control Builder M -> Tools -> Task Analysis function can be used to view code execution.
When online with the application, the requested and actual offsets can be viewed in the Task Overview.
0) Read about Task Analysis in the Online Help and the relevant User's Guides.
1) Insert a task offset (20 ms or larger) between code blocks.
2) Split large tasks into several smaller subtasks with an offset between each.
3) Keep the total CPU load well below 90%.
"intermittently abnormal blinking" means we can observe intermitent amber color blinking instead of
green color on switch port LEDs.
We checked the number of overruns; it shows values such as 12 and 42 in different controllers.
The offset is configured as zero for all tasks of the particular controller.
The CPU load is 60%.
Is it recommended to make any change related to tasks in a running system/plant?
According to information I found on Cisco web, alternating green-amber has the following meaning:
"Link fault. Error frames can affect connectivity, and errors such as excessive collisions, cyclic redundancy check (CRC) errors, and alignment and jabber errors are monitored for a link-fault indication. "
AC 800M controllers (with the exception of the PM891) can only run 10 Mbit/s half duplex.
Half duplex means the switch and the controller share the same wire for sending and receiving; hence collisions are a naturally occurring thing.
I believe some of the amber blinking might be due to collisions. I suggest monitoring the switch error counters for a while to verify that the amount of collisions is acceptable; I would say fewer than 5% of all outgoing packets is nothing to worry about.
The CLI commands "show interface counters" and "show interface counters errors" can be used to dump all counters and errors (the Management web GUI or Cisco Network Assistant tool may show the counters as well, but makes it harder to do the maths):
SW#0>show interface counters
Port InOctets InUcastPkts InMcastPkts InBcastPkts
Gi1/0/1 394830767011 595460952 68820822 255397
Gi1/0/2 153821770 502248 199915 1037690
Gi1/0/3 42375555796 49277762 8026053 3844717
Gi1/0/4 2149629407100 3631500016 573415079 190944236
Gi1/0/5 22743854653 31273616 508619 572296
Port OutOctets OutUcastPkts OutMcastPkts OutBcastPkts
Gi1/0/1 890927021700 843547977 3413495290 844360014
Gi1/0/2 4295368233 764316 20278106 3592887
Gi1/0/3 669187655677 47095171 3852675583 950312080
Gi1/0/4 2660169721149 3931332243 3282956067 760803153
Gi1/0/5 628556390441 30434047 3623180906 881846965
SW#0>show interface counters errors
Port Align-Err FCS-Err Xmit-Err Rcv-Err UnderSize OutDiscards
Gi1/0/1 0 0 0 0 0 31160
Gi1/0/2 0 0 58 0 0 369993850
Gi1/0/3 0 0 0 0 0 40
Gi1/0/4 0 0 0 0 0 287
Gi1/0/5 0 0 0 0 0 0
Port Single-Col Multi-Col Late-Col Excess-Col Carri-Sen Runts Giants
Gi1/0/1 0 0 0 0 0 0 0
Gi1/0/2 0 0 0 0 0 0 0
Gi1/0/3 0 0 0 0 0 0 0
Gi1/0/4 0 0 0 0 0 0 0
Gi1/0/5 5630 1916 0 0 0 0 0
The error percentage can be calculated as follows: "error counter delta" / (OutUcastPkts2 - OutUcastPkts1), where samples 1 and 2 are taken some time interval apart.
Example 1: single collisions (we count outgoing packets)
At 10:00 the port Gi1/0/1 reports OutUcastPkts=10000 and Single-Col=100.
At 11:00 the port Gi1/0/1 reports OutUcastPkts=20000 and Single-Col=200.
In one hour a delta of 10,000 packets was sent and a delta of 100 single collisions occurred. 100 / 10,000 = 1%, which is well below the stipulated limit of 5%.
Example 2: frame checksum errors (we count incoming packets)
At 10:00 the port Gi1/0/1 reports InUcastPkts=10000 and FCS-Err=100.
At 11:00 the port Gi1/0/1 reports InUcastPkts=20000 and FCS-Err=200.
In one hour a delta of 10,000 packets was received and a delta of 100 FCS errors occurred. 100 / 10,000 = 1% of the received packets were malformed. This is probably worth further investigation.
Please note that collisions and FCS errors can happen on all types of packets (unicast, multicast, and broadcast), not just unicast; the formulas above need some additional work to be fully proper.
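For illustration, the delta calculation above can be sketched in a few lines of Python. The sample values are the hypothetical ones from Example 1, not figures from the switch dump:

```python
# Hypothetical counter samples for port Gi1/0/1, taken one hour apart
# (values taken from Example 1 above, not from the real switch output).
sample_1 = {"OutUcastPkts": 10_000, "Single-Col": 100}
sample_2 = {"OutUcastPkts": 20_000, "Single-Col": 200}

def error_percentage(error_counter, traffic_counter, s1, s2):
    """Percentage of packets affected: error delta over traffic delta."""
    delta_errors = s2[error_counter] - s1[error_counter]
    delta_packets = s2[traffic_counter] - s1[traffic_counter]
    return 100.0 * delta_errors / delta_packets

pct = error_percentage("Single-Col", "OutUcastPkts", sample_1, sample_2)
print(f"Single collisions: {pct:.1f}% of outgoing packets")  # 1.0%
```

The same function works for the FCS example by passing "FCS-Err" and "InUcastPkts"; as noted above, a fully proper calculation would also include the multicast and broadcast counters in the traffic delta.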
Please have all error counters for the suspected ports examined and compared to the traffic volume. I can take a look if you attach the raw data as .txt files to the thread (please don't send images of statistics).
Regarding the Task Analysis: having zero task offset on all tasks may cause your application code to occupy the CPU continuously until the scheduling cycle ends, i.e. the Ethernet communication is forced to take place in the 30% reserve the controller keeps.
Roughly, with 60% total load (if you meant total and not cyclic load), the controller is idle 400 ms per second.
With no task tuning at all, all of the free time may end up at the end of the scheduling cycle (the interval times play in as well; use the Task Analysis tool to see a graphical representation of your controllers):
[Task1][Task2][Task3][Task4][400 ms Free time]
It may be wiser to insert offsets (i.e. reallocate some of the free time to interspace the tasks) to allow Ethernet communication to take place more often.
With a 50 ms offset between each task:
[Offset 50ms][Task1][Offset 50ms][Task2][Offset 50ms][Task3][Offset 50ms][Task4][200 ms Free time]
The total system load is still 60%, but the 40% of remaining time has been dispersed along the whole scheduling cycle. This improves Ethernet communication, and "ping" response times should improve.
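The arithmetic behind the two timelines above can be sketched as follows; the task execution times (four tasks of 150 ms in a 1000 ms cycle) are hypothetical values chosen only so the total load comes out at 60%:

```python
# Hypothetical 1000 ms scheduling cycle: four tasks of 150 ms each,
# giving 60% load as in the discussion above.
cycle_ms = 1000
tasks_ms = [150, 150, 150, 150]
offset_ms = 50  # offset inserted before each task

busy_ms = sum(tasks_ms)
load_pct = 100 * busy_ms // cycle_ms
free_at_end_no_offsets = cycle_ms - busy_ms
free_at_end_with_offsets = cycle_ms - busy_ms - offset_ms * len(tasks_ms)

print(f"Load: {load_pct}%")                                        # 60%
print(f"No offsets: {free_at_end_no_offsets} ms free at cycle end")    # 400 ms
print(f"With offsets: {free_at_end_with_offsets} ms free at cycle end")  # 200 ms
```

The load is unchanged in both cases; the offsets only move 200 ms of the idle time in between the tasks, where the controller can service Ethernet traffic.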
Task tuning can be a delicate matter and may not be advisable during full production. I suggest you study your current Task Analysis output, and if in doubt, engage your regional ABB support center to help you select reasonable values for the task offsets.