What is the mechanism for jitter compensation (handling jitter) in LinuxCNC?

21 Feb 2024 05:49 #293896 by jang
Hello, LinuxCNC

I have some questions about how jitter is compensated in LinuxCNC.

I guess the mechanism might be DC (distributed clocks), but I don't know how it works.

I think there is some code in the linuxcnc-ethercat master source that tells the slaves how much jitter occurred, or that says to the slaves "hey, I'm late by about 10 us".

When is the jitter-compensation function called in the LinuxCNC master and the EtherCAT master?

How is jitter compensated in LinuxCNC?

Please let me know!


24 Feb 2024 18:40 #294231 by scottlaird
EtherCAT in general handles most of this for us. Run `ethercat slaves -v`, and one of the fields it prints shows the measured latency from slave to slave and overall. It tends to run around ~200 ns per hop. Distributed Clocks should keep all of the slaves' clocks in sync to within a small number of microseconds, but I don't have the spec for that in front of me.
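
For example, a rough sketch assuming the IgH EtherCAT master's command-line tool is installed (exact options and field names can differ between versions):

ethercat slaves              # quick overview of all slaves on the bus
ethercat slaves -v           # verbose output; look for the DC / propagation-delay fields per slave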

I don't think we have anything today that explicitly checks that jitter is low, other than the usual servo-thread timing checks, but I may be mistaken. I haven't dug into that part of the code very deeply.


24 Feb 2024 19:15 #294234 by rodw
I don't think LinuxCNC cares about jitter as long as the servo thread (usually 1 kHz) is not overrun. In some kernel traces we did, the servo thread executed in about 200 us and slept for the remaining 800 us. The thread does not run late until the execution time exceeds 1000 us.
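
For what it's worth, a minimal sketch of how to look at those execution times on a running session (the thread and function names below are the usual defaults and are assumptions here):

halcmd show thread servo-thread            # period and the functions attached to the thread
halcmd show param motion-controller.time   # most recent runtime of that function
halcmd show param motion-controller.tmax   # worst runtime seen since startup
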
EtherCAT has its own synchronisation method, but it's not something we need to care about.
EtherCAT is a very efficient communication protocol, and a cycle takes a lot less time than the normal TCP networking used by, say, a Mesa card, so it is more immune to timing overruns due to latency.


24 Feb 2024 19:37 #294237 by tommylight
Pretty sure I read somewhere that Mesa cards use TCP just for the initial comms, then UDP; can anyone confirm this?
Sorry, on the phone.


24 Feb 2024 20:12 - 24 Feb 2024 20:15 #294246 by rodw
Whether it's TCP or UDP does not matter; EtherCAT is more efficient due to both a shorter packet length and less overhead in the packet. I did look at the hm2_eth code once, and from memory it uses sockets. Once a connection is negotiated, it would make sense to use UDP, but there is still handshaking happening. With EtherCAT, the packet is unloaded by the relevant slave on the fly, as the train goes past the station without stopping.

I have attached a paper on this from a seminar I attended.
It also deals with the distributed timing mechanism used by EtherCAT.

Oops, too big; try this:
drive.google.com/file/d/19VggxoJGlKLrIm4...H2M/view?usp=sharing

 


24 Feb 2024 20:47 - 24 Feb 2024 21:08 #294252 by PCW
With UDP there is no handshaking going on. EtherCAT is faster chiefly because it has its own real-time hardware Ethernet driver and uses a single packet for all communication. The packet overhead (unused data) is negligible in either case.
hm2_eth uses three packets: an outgoing read request, an incoming read-data packet, and then an outgoing write-data packet. These are the main differences. Note that to match LinuxCNC's read-inputs, process-input-data, write-outputs model, EtherCAT would need 2 cycles per servo thread.

Hostmot2 deals with jitter by using a DPLL that locks onto the servo thread and can reduce critical sampling-time jitter to the 100s-of-ns region.
EtherCAT has a couple of similar mechanisms (distributed clocks etc.).
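
A minimal sketch of hooking the hostmot2 sampling to that DPLL, assuming a 7i92 board (the hm2_7i92.0 prefix, timer number and offset are illustrative; check the hostmot2 documentation for the parameters your firmware actually exports):

halcmd setp hm2_7i92.0.dpll.01.timer-us -100      # sample 100 us before the nominal read time
halcmd setp hm2_7i92.0.stepgen.timer-number 1     # stepgen position sampling follows DPLL timer 1
halcmd setp hm2_7i92.0.encoder.timer-number 1     # encoder sampling follows DPLL timer 1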


25 Feb 2024 08:00 #294347 by rodw

PCW wrote: "... to match LinuxCNC's read-inputs, process-input-data, write-outputs model, EtherCAT would need 2 cycles per servo thread."

I don't believe EtherCAT is like this. With careful ordering of the addf statements, you read from EtherCAT, process time-critical I/O, and then write the current state back to EtherCAT in the one servo cycle. Doesn't the encoder component do something similar?

e.g.

# read the EtherCAT process data into HAL
addf lcec.read-all servo-thread
addf cia402.0.read-all servo-thread

# do the motion and PID calculations on the freshly read data
addf motion-command-handler servo-thread
addf motion-controller servo-thread
addf s-pid.do-pid-calcs servo-thread
addf 0-pid.do-pid-calcs servo-thread

# write the results back to EtherCAT in the same servo cycle
addf cia402.0.write-all servo-thread
addf lcec.write-all servo-thread


25 Feb 2024 09:31 #294349 by Hakan
I think you are right there, rodw. And PCW is right as well.
From what I have seen there is only one read-write exchange per cycle (a packet is sent out with data to the client, and some 10 us later a packet comes back to the master with data from the client), and it is triggered by lcec.write-all.
If one tries to follow that:
1st LinuxCNC servo-thread cycle: do the math and send data to the EC client with lcec.write-all.
2nd LinuxCNC servo-thread cycle: read data from the client with lcec.read-all. It reads the data that came back as the response to lcec.write-all. However, because of the small window between write and read, some tens of us, the client may or may not be able to prepare data based on the data it just received. I tend to think that if the function is implemented in the EtherCAT controller chip itself, one can get a response based on the current write. If an MCU does the work and talks to the EtherCAT controller chip, then those tens of us are too short to get the data into the MCU, do the real work, write a response back to the EtherCAT controller chip and have it ready for the next packet. Meaning that the data one reads comes from the cycle before.
3rd LinuxCNC servo-thread cycle: read the response to the 1st servo-thread cycle.

The consequence is that writes to the EC client are immediate, while reads from the EC client are delayed by one cycle.

Regarding jitter from the EC client's point of view: it seems to be up to the client to do whatever synchronization it wants; nothing is dictated as far as I can see. The client can choose to synchronize on the incoming packet (SM2 sync), the outgoing packet (SM3 sync), or on the distributed clock (DC sync). The distributed-clock time is available with nanosecond resolution and the client can use it, but doesn't have to.
The measurements I have done show a low jitter of +-1 us over a 1-second window (1000 1-ms cycles). So SM2 synchronization gives +-1 us or better in the mini test setup I have.



 


25 Feb 2024 10:50 #294355 by rodw
I think you have to take care here about what is LinuxCNC and what is EtherCAT. The vast majority of EtherCAT installations are in large industrial complexes. CNC is a very small subset, but EtherCAT has it covered, as it can service 100 servos in 100 us.

The EtherCAT frame (aka the bullet train) passes by a slave (the station) and data is unloaded and put on the train without the train stopping. So at the EtherCAT level there is no concept of separate read and write operations; it all happens at the one time.
But LinuxCNC can't work that way. My take was more that the train is unloaded with lcec.read-all, LinuxCNC does what it needs to do, and then loads the train with lcec.write-all.

The servo thread executes read-all, does all the HAL work and then sleeps until it is called again (at 1 kHz). How long the execution phase takes depends on how much you are doing in HAL and how fast your processor is. We observed via kernel traces on an i5 that the execution took about 200 us. So my take is that about 200 us after the train arrives, the station master loads it up with a bit more luggage. That luggage may not end up on the same train, but another train (EtherCAT frame) will come along shortly afterwards to take it from the station master. It is this behaviour that makes EtherCAT so lean compared with other fieldbus technologies.


04 Mar 2024 06:23 #295110 by foxington
Hello,

Yes, I think all of you are right, but we can only compare Ethernet with EtherCAT (which has special FPGA hardware) at the level of the type of communication; that's all. They are different, and each is the right choice for different things. I do not know exactly how lcec.c is written, but Hakan has a good point about the cycles. EtherCAT does it the other way around: as the non-stopping train passes through the slaves, the output data from the current slave is first written into the free space on the train, and then the input data is read.

There should be a cycle delay, but ... Sasha I. should know more about the details.

From my experience with EtherCAT at the industrial level, for regular purposes it should run at 62.5 us without any crucial jitter, but only with Beckhoff masters; there is a tool for watching this in the TwinCAT 3 GUI. When the LinuxCNC scan time is 1 ms == 1000 us, there is plenty of room; EtherCAT as a protocol can do things faster than we need, I think.

But every slave with CiA 402 has a cyclic-communication status bit in its own statusword. If something goes wrong with the cyclic communication, that bit goes low immediately.
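
If you want to keep an eye on that from the LinuxCNC side, a hedged sketch (the cia402.0 instance name follows the addf example earlier in the thread; the exact pin names depend on your own config):

halcmd show pin cia402.0                                 # list the pins of the first cia402 instance
watch -n1 'halcmd show pin cia402.0 | grep -i status'    # poll the status-related pins once a second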

regards

