Has anyone of you ever had an issue with the dint_to_dword function?
I receive a 16-bit real value (yes, it's possible) in the form of a DINT register. I made a function that converts it to a DWORD, shifts it 16 bits to the left, converts the DWORD to a REAL, and then feeds the result into a piecewise function block to scale the value if necessary.
So far so good, you would say. Until I saw some spurious spikes on the output of the piecewise function block. I managed to isolate the source.
If for example my real value is 8.9375, it should be 0x410F0000 (IEEE754). Strip the 16 LSBs and you get 0x410F, i.e. 16655.
16655 is the value I receive on the input, and the task here is to put it back into 32-bit single-precision floating point. As simple as that.
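To sanity-check those numbers, here is what the sending side effectively does with the REAL, sketched in C (the function name is made up for illustration; the actual DCU code is of course not this):

```c
#include <stdint.h>
#include <string.h>

/* Take the IEEE754 single-precision bit pattern of the REAL and keep
   only the 16 most significant bits. Illustration only, not DCU code. */
uint16_t real_to_upper16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* 8.9375f has the pattern 0x410F0000 */
    return (uint16_t)(bits >> 16);    /* upper half: 0x410F == 16655 */
}
```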
My issue is at the very beginning of the whole procedure. Once in a while, the dint_to_dword conversion gives incorrect results. For an input of 16655 (0x410F), the function gives me an incorrect result (0xB538), and it lasts for only one scan (normal task @ 250 ms).
I managed to patch the issue with a double conversion: I convert the original DINT input to a DWORD and then back to a DINT. I compare the two, and if they are equal I keep going; otherwise the output is frozen.
BTW, the value converted here is the real value received from a DCU controller through a Profibus gateway. The value sent by the DCU is a REAL according to IEEE754 with the last 16 bits of precision stripped off. So it is basically a truncated single-precision number (note this is not quite IEEE754 half precision, which uses a 5-bit exponent; here the full 8-bit exponent of the single is kept).
5.0 SP2 RevC
If someone has ever encountered this behaviour, I would be more than happy to hear about it.
Thanks to all,
I may not have explained my problem in an understandable manner. I will try to make it easier to follow this time.
I receive floating-point data from a DCU controller through a Profibus gateway. The DCU sends the floating-point value in the IEEE754 format, but only on 16 bits, not on 32. Also, the data coming out of the gateway is of the DintIO datatype, but once again, only the 16 lower bits are used.
So, I made a Function block type in which I convert the DintIO connected to the gateway to a RealIO.
Here are the tasks completed inside the function block type:
1- Initialize the XY table if the Init bit is true.
2- Dint_to_dword the DINT input variable and mask the 16 upper bits.
3- Shift the DWORD variable left and mask the 16 lower bits.
4- Dword_to_real the DWORD variable.
5- Pass the converted value to the XY table.
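Steps 2 to 4 can be sketched at the bit level like this (C for illustration only; the real block uses the dint_to_dword and dword_to_real conversion functions in Structured Text, and the XY-table scaling of step 5 is left out):

```c
#include <stdint.h>
#include <string.h>

/* Rebuild a REAL from the 16-bit pattern delivered in a DINT. */
float gateway_to_real(int32_t dint_in)
{
    uint32_t dw = (uint32_t)dint_in & 0x0000FFFFu;  /* step 2: keep only the low word */
    dw = (dw << 16) & 0xFFFF0000u;                  /* step 3: shift left, clear low word */
    float r;
    memcpy(&r, &dw, sizeof r);                      /* step 4: reinterpret DWORD as REAL */
    return r;
}
```

For example, an input of 16655 (0x410F) comes back as 8.9375, matching the worked example above.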
My first suspicion was the XY table. To determine exactly what was wrong, I made intermediate values between all the steps and exposed them as Out pins of the function block type.
The instances of the function block are in a program running under the normal task at 250 ms. I made a new task at 50 ms and a debug program called MouseTrap running under it. In the MouseTrap program, I created an array of a new special datatype I also created for this purpose, and organized the array to record in a FIFO manner.
At every scan of the MouseTrap program, I record all the inputs, outputs and intermediate values of the function block instance programmed in the main program.
The value received from the gateway is a temperature and should not move by more than 1.5 deg in one 50 ms scan. So I coded a trap: on a variation larger than this, the FIFO keeps filling for half of its capacity and then the filling procedure stops, leaving the event in the middle of the capture.
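The trap logic can be sketched like this (C for illustration; the real MouseTrap program runs in the controller, and the FIFO size here is an assumption):

```c
#include <math.h>
#include <stdbool.h>
#include <stddef.h>

#define FIFO_LEN 64      /* hypothetical capacity, not from the original setup */
#define MAX_STEP 1.5f    /* largest plausible change per 50 ms scan */

typedef struct {
    float  samples[FIFO_LEN];
    size_t head;         /* next write position (ring buffer) */
    bool   primed;       /* true once at least one sample is stored */
    int    countdown;    /* -1 = armed, >0 = post-trigger scans left, 0 = frozen */
} trap_t;

/* Called once per 50 ms scan with the value to monitor. When the value
   jumps by more than MAX_STEP between two scans, recording continues for
   half the FIFO and then freezes, so the event ends up in the middle of
   the capture. Initialize with: trap_t t = { .countdown = -1 }; */
void trap_scan(trap_t *t, float value)
{
    if (t->countdown == 0)
        return;                                   /* fuse blown: FIFO frozen */

    float prev = t->samples[(t->head + FIFO_LEN - 1) % FIFO_LEN];
    t->samples[t->head] = value;                  /* record this scan */
    t->head = (t->head + 1) % FIFO_LEN;

    if (t->countdown > 0) {
        t->countdown--;                           /* post-trigger fill */
        return;
    }
    if (t->primed && fabsf(value - prev) > MAX_STEP)
        t->countdown = FIFO_LEN / 2;              /* trigger: fill half, then stop */
    t->primed = true;
}
```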
I then pull out all of the data from the array.
I analyzed it, and my biggest surprise was to find out that the corrupted numbers appeared straight at the output of the first step: the dint_to_dword conversion.
For an integer received on the input with the value 16655 (base 10), the DWORD equivalent should be 0x410F, which I get most of the time. But once in a while, it gives an erroneous number. The last time I captured it, it gave me 0xB538!
As I said, I made a workaround by double converting the DINT to a DWORD and then back to a DINT. If the double-converted value is not equal to the input, I reject the calculation. It seems to work great so far, but I'll wait until Monday for my customer to confirm.
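The shape of that round-trip guard, sketched in C (the two stand-in conversion functions are plain bit-for-bit casts here, so the PLC glitch itself cannot be reproduced; only the guard logic is shown):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the library conversion blocks, for illustration only. */
static uint32_t dint_to_dword(int32_t v) { return (uint32_t)v; }
static int32_t  dword_to_dint(uint32_t v) { return (int32_t)v; }

/* Round-trip guard: accept the DWORD only if converting it back yields
   the original DINT. On mismatch the caller keeps its previous output,
   i.e. the output is frozen for that scan. */
bool checked_dint_to_dword(int32_t in, uint32_t *out)
{
    uint32_t dw = dint_to_dword(in);
    if (dword_to_dint(dw) != in)
        return false;            /* reject this scan's conversion */
    *out = dw;
    return true;
}
```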