Reduce read-all timing 7i76e + 7i77
17 Jul 2025 09:37 #331934
Reduce read-all timing 7i76e + 7i77 was created by endian
Hello gentlemen,
How can I reduce the read-all timing from 0.8 ms to something below that, please? My ping is below 0.3 ms and the servo-thread latency is below 800 ns.
I am using a 7i76e + 7i77 at P1 with a 3-axis setup.
I have tried everything the AI told me, but without any success ...
thanks
17 Jul 2025 10:15 #331938
Replied by unknown on topic Reduce read-all timing 7i76e + 7i77
So you tried what the AI suggested; what exactly have you tried, and what were the exact results? That will save repeating suggestions.
17 Jul 2025 10:35 #331940
Replied by endian on topic Reduce read-all timing 7i76e + 7i77
I did the following:
packet-read-timeout=500000
sserial_port_0=2000x changed to sserial_port_0=2000h
./mesa-configtool --new-fpga 7i76e --no-sserial --optimize-read
SERVO_PERIOD = 500000
The ping between the Mesa and the PC is less than 0.3 ms, over a peer-to-peer connection.
I did not try the custom firmware.
goal is -
Result is no change at all ...
17 Jul 2025 15:26 #331944
Replied by PCW on topic Reduce read-all timing 7i76e + 7i77
If those are AI suggestions, they are pure nonsense...
To reduce network latency:
1. Make sure basic host latency setup is done (disable power management etc)
2. If you have an Intel MAC on the host PC, disable IRQ coalescing.
3. Use isolcpus to run LinuxCNC on the last CPU
4. (in conjunction with #3) Pin the Ethernet IRQ to the last CPU
(and set irqbalance to one-shot mode so it doesn't mess with the IRQ pinning)
To check maximum network latency:
sudo chrt 99 ping -i .001 -q -c 60000 10.10.10.10
( Assuming 10.10.10.10 is the 7I76E IP address )
This will run for 1 minute and print statistics
The following user(s) said Thank You: tommylight, endian
18 Jul 2025 01:00 #331962
Replied by unknown on topic Reduce read-all timing 7i76e + 7i77
What's the difference between AI and the crazy guy yelling at the sky on a street corner? The advice appears the same.
29 Nov 2025 18:17 - 30 Nov 2025 13:27 #339441
Replied by endian on topic Reduce read-all timing 7i76e + 7i77
hello PCW,
base thread and servo thread latency are way under 6 µs ...
here are the statistics:
user@debian:~$ sudo systemctl restart irqbalance
user@debian:~$ ping 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
64 bytes from 192.168.1.10: icmp_seq=0 ttl=64 time=0.033 ms
64 bytes from 192.168.1.10: icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from 192.168.1.10: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 192.168.1.10: icmp_seq=3 ttl=64 time=0.042 ms
64 bytes from 192.168.1.10: icmp_seq=4 ttl=64 time=0.048 ms
64 bytes from 192.168.1.10: icmp_seq=5 ttl=64 time=0.035 ms
64 bytes from 192.168.1.10: icmp_seq=6 ttl=64 time=0.034 ms
64 bytes from 192.168.1.10: icmp_seq=7 ttl=64 time=0.034 ms
64 bytes from 192.168.1.10: icmp_seq=8 ttl=64 time=0.033 ms
64 bytes from 192.168.1.10: icmp_seq=9 ttl=64 time=0.033 ms
64 bytes from 192.168.1.10: icmp_seq=10 ttl=64 time=0.034 ms
64 bytes from 192.168.1.10: icmp_seq=11 ttl=64 time=0.033 ms
64 bytes from 192.168.1.10: icmp_seq=12 ttl=64 time=0.041 ms
64 bytes from 192.168.1.10: icmp_seq=13 ttl=64 time=0.036 ms
64 bytes from 192.168.1.10: icmp_seq=14 ttl=64 time=0.035 ms
64 bytes from 192.168.1.10: icmp_seq=15 ttl=64 time=0.034 ms
64 bytes from 192.168.1.10: icmp_seq=16 ttl=64 time=0.034 ms
64 bytes from 192.168.1.10: icmp_seq=17 ttl=64 time=0.034 ms
64 bytes from 192.168.1.10: icmp_seq=18 ttl=64 time=0.035 ms
64 bytes from 192.168.1.10: icmp_seq=19 ttl=64 time=0.033 ms
64 bytes from 192.168.1.10: icmp_seq=20 ttl=64 time=0.051 ms
64 bytes from 192.168.1.10: icmp_seq=21 ttl=64 time=0.034 ms
64 bytes from 192.168.1.10: icmp_seq=22 ttl=64 time=0.042 ms
64 bytes from 192.168.1.10: icmp_seq=23 ttl=64 time=0.034 ms
64 bytes from 192.168.1.10: icmp_seq=24 ttl=64 time=0.034 ms
64 bytes from 192.168.1.10: icmp_seq=25 ttl=64 time=0.034 ms
64 bytes from 192.168.1.10: icmp_seq=26 ttl=64 time=0.034 ms
64 bytes from 192.168.1.10: icmp_seq=27 ttl=64 time=0.034 ms
64 bytes from 192.168.1.10: icmp_seq=28 ttl=64 time=0.034 ms
linuxcnc on
^C--- 192.168.1.10 ping statistics ---
29 packets transmitted, 29 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.033/0.037/0.051/0.000 ms
linuxcnc off
user@debian:~$ sudo chrt 99 ping -i 0.001 -q -c 60000 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.013/0.016/0.032/0.000 ms
user@debian:~$ sudo chrt 99 ping -i 0.001 -q -c 60000 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.013/0.016/0.032/0.000 ms
user@debian:~$ sudo chrt 99 ping -i 0.001 -q -c 60000 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.013/0.016/0.031/0.000 ms
user@debian:~$ sudo chrt 99 ping -i 0.001 -q -c 60000 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.013/0.016/0.032/0.000 ms
user@debian:~$ sudo chrt 99 ping -i 0.001 -q -c 60000 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.013/0.014/0.031/0.000 ms
user@debian:~$ sudo chrt 99 ping -i 0.001 -q -c 60000 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.013/0.014/0.037/0.000 ms
user@debian:~$ sudo chrt 99 ping -i 0.001 -q -c 60000 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.013/0.014/0.036/0.000 ms
user@debian:~$ sudo chrt 99 ping -i 0.001 -q -c 60000 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.013/0.014/0.036/0.000 ms
user@debian:~$ sudo chrt 99 ping -i 0.001 -q -c 60000 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.013/0.014/0.037/0.000 ms
user@debian:~$ sudo chrt 99 ping -i 0.001 -q -c 60000 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.013/0.014/0.037/0.000 ms
user@debian:~$ sudo chrt 99 ping -i 0.001 -q -c 60000 192.168.1.10
PING 192.168.1.10 (192.168.1.10): 56 data bytes
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.013/0.014/0.041/0.000 ms
What should I do next, please? Regards and thanks
02 Dec 2025 10:02 #339629
Replied by endian on topic Reduce read-all timing 7i76e + 7i77
To reduce network latency:
1. Make sure basic host latency setup is done (disable power management etc)
Actions:
sudo apt update
sudo apt install net-tools
Edit your GRUB config:
sudo nano /etc/default/grub
Find the line starting with:
GRUB_CMDLINE_LINUX_DEFAULT=
Append these parameters:
quiet splash isolcpus=nohz,domain,managed_irq,1 intel_idle.max_cstate=0 processor.max_cstate=1 idle=poll
Adjust isolcpus numbers later once you know your CPU layout.
Then update and reboot:
sudo update-grub
sudo reboot
Also:
Disable CPU frequency scaling:
sudo apt install cpufrequtils
echo 'GOVERNOR="performance"' | sudo tee /etc/default/cpufrequtils
sudo systemctl enable --now cpufrequtils
Disable power-saving for NIC and USB:
sudo ethtool -s eth0 wol d
sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
2. If you have an Intel MAC on the host PC, disable IRQ coalescing.
IRQ coalescing groups interrupts together to reduce CPU load, which is bad for real-time use.
Run:
sudo ethtool -C eth0 rx-usecs 0 rx-frames 0 tx-usecs 0 tx-frames 0
Make it persistent by adding it to:
sudo nano /etc/network/interfaces
Example:
auto eth0
iface eth0 inet static
address 192.168.1.10
netmask 255.255.255.0
pre-up /sbin/ethtool -C eth0 rx-usecs 0 rx-frames 0 tx-usecs 0 tx-frames 0
3. Use isolcpus to run LinuxCNC on the last CPU
You “isolate” a CPU core so only LinuxCNC and its threads run there.
Example:
If you have 4 cores (0–3), you isolate the last one:
isolcpus=3 nohz_full=3 rcu_nocbs=3
That goes into your GRUB line (as above). Then:
sudo update-grub
sudo reboot
To verify:
cat /sys/devices/system/cpu/isolated
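If you don't know the core count offhand, the "last CPU" index can be computed in the shell before editing GRUB (a sketch; assumes a standard Linux with nproc available):

```shell
# Derive the highest CPU index on this machine and print the matching
# kernel parameters to paste into GRUB_CMDLINE_LINUX_DEFAULT.
last=$(( $(nproc) - 1 ))
echo "isolcpus=$last nohz_full=$last rcu_nocbs=$last"
```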
4. (in conjunction with #3) Pin the Ethernet IRQ to the last CPU
(and set irqbalance to one-shot mode so it doesn't mess with the IRQ pinning)
First, find your Ethernet IRQ number:
grep eth0 /proc/interrupts
You’ll see something like:
123: 12345 0 0 0 IR-PCI-MSI eth0
→ IRQ number is 123
Then pin it:
echo 8 | sudo tee /proc/irq/123/smp_affinity
Here 8 is a bitmask (binary 1000) meaning CPU 3.
(Use 1, 2, 4, 8, etc., depending on which CPU you isolated.)
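Rather than working out the hex mask by hand, it can be computed from the CPU number, using the same shifted-bit rule described above (a sketch; the IRQ number 123 is the hypothetical one from the example):

```shell
# smp_affinity mask = 1 shifted left by the CPU number, printed in hex.
cpu=3
mask=$(printf '%x' $((1 << cpu)))
echo "$mask"   # 8 for CPU 3
# then apply it, e.g.:
#   echo "$mask" | sudo tee /proc/irq/123/smp_affinity
```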
5. Set irqbalance to One-Shot Mode
irqbalance dynamically moves IRQs — you don’t want that after pinning.
Edit:
sudo nano /etc/default/irqbalance
Find this line:
ENABLED="1"
Change to:
ENABLED="1"
ONESHOT="1"
Then restart:
sudo systemctl restart irqbalance
To check maximum network latency:
sudo chrt 99 ping -i .001 -q -c 60000 192.168.1.10
( Assuming 192.168.1.10 is the 7I76E IP address )
This will run for 1 minute and print statistics
Command breakdown:
sudo chrt 99 ping -i .001 -q -c 60000 192.168.1.10
- sudo: run with root privileges (needed for chrt and for such a short ping interval).
- chrt 99: run the command at real-time priority 99 (note: chrt uses SCHED_RR by default; add -f for SCHED_FIFO).
- ping: standard ICMP ping utility.
- -i .001: 1 millisecond interval (1000 Hz) between packets, which is very fast.
- -q: quiet mode (only summary results).
- -c 60000: send 60,000 pings, which takes about 60 seconds at 1 kHz.
- 192.168.1.10: target IP address (replace with your remote machine or device).
What It Tests
Latency/jitter of your Ethernet path between your host and the device at 192.168.1.10
Consistency under real-time scheduling (since chrt gives the ping thread real-time priority).
Impact of CPU isolation and IRQ pinning on network determinism.
How to interpret the results:
When it finishes, you'll get something like:
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 received, 0% packet loss, time 60040ms
rtt min/avg/max/mdev = 0.120/0.135/0.210/0.015 ms
- min: best-case latency.
- avg: typical latency.
- max: worst-case latency (what we care about most).
- mdev: standard deviation (jitter).
Good results for a well-tuned LinuxCNC host:
max under 0.5 ms (ideally < 0.2 ms).
No packet loss.
mdev small (under 0.05 ms).
Bad results (what to look out for):
max spikes to several milliseconds → indicates power management or IRQ contention.
Packet loss → NIC or cable issues.
mdev large → jittery CPU or network driver.
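The pass/fail check on the summary line can be scripted; a sketch that pulls the max RTT out with awk (the field position holds for both the iputils "rtt" and BusyBox "round-trip" summary formats shown in this thread):

```shell
# Extract the worst-case RTT (ms) from ping's final summary line.
line='round-trip min/avg/max/stddev = 0.013/0.016/0.032/0.000 ms'
max=$(echo "$line" | awk -F'[=/]' '{print $7}')
echo "$max"   # 0.032
# in a real run, feed it the last line of the ping output instead
```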
Tips
Make sure your Ethernet link partner (the device at 192.168.1.10) responds quickly — e.g., another PC on the same subnet, or an EtherCAT master/slave device.
Run it while LinuxCNC is idle and again under load (with the machine moving) — compare the two.
For continuous monitoring, use:
watch -n 1 "ping -q -c 1000 192.168.1.10"
to sample shorter bursts every second.
Then set the servo thread period in the .ini file, for example Ts = 333 333 ns.
Finally set
setp hm2_7i76e.0.packet-read-timeout 100000   # 100 µs or less, depending on measured latency
and check that the servo thread execution time stays below Ts.
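The last two edits as a config fragment (a sketch with example values; the timeout parameter name follows the packet-read-timeout naming used earlier in this thread, so verify it against your driver with halcmd show param before relying on it):

```
# --- machine .ini, [EMCMOT] section (example value) ---
SERVO_PERIOD = 333333        # 333 333 ns = 3 kHz

# --- custom .hal file, after the hm2_eth driver loads ---
setp hm2_7i76e.0.packet-read-timeout 100000   # ns; keep above measured worst-case RTT
```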
The following user(s) said Thank You: tommylight
02 Dec 2025 11:16 #339634
Replied by unknown on topic Reduce read-all timing 7i76e + 7i77
I wonder if it would be worth having a sticky thread that is an index to some of these great posts regarding latency and such.
02 Dec 2025 12:15 #339638
Replied by rodw on topic Reduce read-all timing 7i76e + 7i77
Excellent info. I've started working on a similar procedure. There are a few more optimisations, but I'll test before sharing anything.
Now that PREEMPT_RT is in the mainline kernel, there are a lot of articles emerging about RT tuning.
The following user(s) said Thank You: endian
02 Dec 2025 19:14 #339661
Replied by endian on topic Reduce read-all timing 7i76e + 7i77
Rod, please share your tips, or add them to my guide ... I would really like to learn something new about this topic!
Thanks
The following user(s) said Thank You: rodw