Category: Advanced Configuration
To reduce network latency:
1. Make sure the basic host latency setup is done (disable power management, etc.)
Actions:
sudo apt update
sudo apt install ethtool net-tools
(ethtool is used in the steps below; net-tools provides ifconfig and friends)
Edit your GRUB config:
sudo nano /etc/default/grub
Find the line starting with:
GRUB_CMDLINE_LINUX_DEFAULT=
Append these parameters:
quiet splash isolcpus=nohz,domain,managed_irq,1 intel_idle.max_cstate=0 processor.max_cstate=1 idle=poll
Adjust isolcpus numbers later once you know your CPU layout.
Then update and reboot:
sudo update-grub
sudo reboot
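After rebooting, it is worth confirming that the kernel actually picked up the new parameters, and checking the CPU layout before you settle on the final isolcpus numbers. A quick sanity check (not part of the original GRUB edit):
cat /proc/cmdline    # should contain the isolcpus/idle parameters you appended
lscpu -e             # lists CPU numbers and topology so you can pick the core(s) to isolate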
Also:
Disable CPU frequency scaling:
sudo apt install cpufrequtils
echo 'GOVERNOR="performance"' | sudo tee /etc/default/cpufrequtils
sudo systemctl enable --now cpufrequtils
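To confirm the governor took effect (after a reboot or after restarting the service), you can read it back from sysfs; every core should report performance:
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor   # expect "performance" on every line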
Disable power-saving features on the NIC (Wake-on-LAN) and system sleep/suspend:
sudo ethtool -s eth0 wol d
sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
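A quick way to verify both changes (adjust eth0 to your interface name):
sudo ethtool eth0 | grep -i wake-on    # "Wake-on: d" means Wake-on-LAN is disabled
systemctl is-enabled sleep.target      # should print "masked"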
2. If you have an Intel MAC on the host PC, disable IRQ coalescing.
IRQ coalescing groups interrupts together to reduce CPU load — bad for real-time use.
Run:
sudo ethtool -C eth0 rx-usecs 0 rx-frames 0 tx-usecs 0 tx-frames 0
Make it persistent by adding it to:
sudo nano /etc/network/interfaces
Example:
auto eth0
iface eth0 inet static
address 192.168.1.10
netmask 255.255.255.0
pre-up /sbin/ethtool -C eth0 rx-usecs 0 rx-frames 0 tx-usecs 0 tx-frames 0
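To confirm the coalescing settings are active (lowercase -c queries the current values, uppercase -C sets them):
sudo ethtool -c eth0    # rx-usecs / rx-frames / tx-usecs / tx-frames should show 0 (some drivers do not support every field)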
3. Use isolcpus to run LinuxCNC on the last CPU
You “isolate” a CPU core so only LinuxCNC and its threads run there.
Example:
If you have 4 cores (0–3), you isolate the last one:
isolcpus=3 nohz_full=3 rcu_nocbs=3
That goes into your GRUB line (as above). Then:
sudo update-grub
sudo reboot
To verify:
cat /sys/devices/system/cpu/isolated
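If your kernel was built with full tickless support, the nohz_full list can be checked the same way (this file may be absent on kernels without CONFIG_NO_HZ_FULL):
cat /sys/devices/system/cpu/nohz_full    # should print 3 for the example above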
4. (in conjunction with #3) Pin the Ethernet IRQ to the last CPU
(and set irqbalance to one-shot mode so it doesn't interfere with the IRQ pinning; see step 5)
First, find your Ethernet IRQ number:
grep eth0 /proc/interrupts
You’ll see something like:
123: 12345 0 0 0 IR-PCI-MSI eth0
→ IRQ number is 123
Then pin it:
echo 8 | sudo tee /proc/irq/123/smp_affinity
Here 8 is a bitmask (binary 1000) meaning CPU 3.
(Use 1, 2, 4, 8, etc., depending on which CPU you isolated.)
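If you find bitmasks awkward, the kernel also exposes smp_affinity_list, which accepts plain CPU numbers and has the same effect (123 is the example IRQ from above):
echo 3 | sudo tee /proc/irq/123/smp_affinity_list   # pin IRQ 123 to CPU 3
cat /proc/irq/123/smp_affinity_list                 # read back to confirm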
5. Set irqbalance to One-Shot Mode
irqbalance dynamically moves IRQs — you don’t want that after pinning.
Edit:
sudo nano /etc/default/irqbalance
Find this line:
ENABLED="1"
Change to:
ENABLED="1"
ONESHOT="1"
Then restart:
sudo systemctl restart irqbalance
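A quick check afterwards that irqbalance ran its single pass and that the pinning from step 4 survived (123 is the example IRQ number):
systemctl status irqbalance       # with ONESHOT the daemon balances once and then exits
cat /proc/irq/123/smp_affinity    # should still read 8 (CPU 3)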
To check maximum network latency:
sudo chrt 99 ping -i .001 -q -c 60000 192.168.1.10
(Assuming 192.168.1.10 is the 7I76E IP address.)
This will run for about 1 minute and print statistics.
Command Breakdown
sudo chrt 99 ping -i .001 -q -c 60000 192.168.1.10

Part            Meaning
sudo            Run with root privileges (needed for chrt and for such a short ping interval).
chrt 99         Run the command with real-time priority 99 (the highest possible SCHED_FIFO priority).
ping            Standard ICMP ping utility.
-i .001         Interval of 1 millisecond (1000 Hz) between packets.
-q              Quiet mode (only summary results).
-c 60000        Send 60,000 pings, which takes about 60 seconds at 1 kHz.
192.168.1.10    Target IP address (replace with your remote machine or device).
What It Tests
Latency/jitter of your Ethernet path between your host and the device at 192.168.1.10
Consistency under real-time scheduling (since chrt gives the ping thread real-time priority).
Impact of CPU isolation and IRQ pinning on network determinism.
How to Interpret the Results
When it finishes, you’ll get something like:
--- 192.168.1.10 ping statistics ---
60000 packets transmitted, 60000 received, 0% packet loss, time 60040ms
rtt min/avg/max/mdev = 0.120/0.135/0.210/0.015 ms
Value   Meaning
min     Best-case latency.
avg     Typical latency.
max     Worst-case latency (what we care about most).
mdev    Standard deviation (jitter).
Good results for a well-tuned LinuxCNC host:
max under 0.5 ms (ideally < 0.2 ms).
No packet loss.
mdev small (under 0.05 ms).
Bad results (what to look out for):
max spikes to several milliseconds → indicates power management or IRQ contention.
Packet loss → NIC or cable issues.
mdev large → jittery CPU or network driver.
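If you only care about the worst-case number, you can extract the max field from the summary line with awk (a convenience one-liner, assuming the Linux iputils ping output format shown above):
sudo chrt 99 ping -i .001 -q -c 60000 192.168.1.10 | awk -F'/' '/^rtt/ {print "max rtt:", $6, "ms"}'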
Tips
Make sure your Ethernet link partner (the device at 192.168.1.10) responds quickly — e.g., another PC on the same subnet, or an EtherCAT master/slave device.
Run it while LinuxCNC is idle and again under load (with the machine moving) — compare the two.
For continuous monitoring, use:
sudo watch -n 1 "ping -q -c 1000 -i .001 192.168.1.10"
to repeatedly sample short 1 kHz bursts (about one second each).
Then edit the servo thread period in the .ini file, for example Ts = 333 333 ns.
Finally, edit the read timeout in your HAL configuration:
setp hm2_7i76e.0.read.timeout 100000   # 100 µs or less, depending on the measured latency
and check that the servo thread execution time stays below the servo period (i.e. less than Ts).
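To watch the actual servo thread timing while LinuxCNC is running, halcmd can report the thread and per-function execution times. A minimal sketch (run in a terminal while LinuxCNC is up; exact function names depend on your configuration):
halcmd show thread servo-thread    # thread period plus the attached functions and their runtimes
halcmd show param | grep -i tmax   # worst-case (tmax) execution times of HAL functions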