Trajectory Planner using Ruckig Lib
Assuming this can be integrated in LCNC (for non-coders), the next step would be figuring out how to re-name axes or add more axis letters and define the axis type (linear or rotary). Such as X1/X2, Y1/Y2, Z1/Z2, C1/C2, etc.
I've not programmed a Swiss lathe, but the twin-turret lathe I've fiddled with uses n1/n2 for the four duplicated axes (XYZC).
But maybe it will never be realized.
The LinuxCNC sai lib can maybe run multiple instances of the interpreter.
But that is not tested at this point.
The rs274 lib is tangled with Python and linked to many other LinuxCNC libs.
This makes it hard to decouple and test stand-alone.
But you make it look so easy! I'm very impressed by that!
The hardest part to figure out was the "unlimited" nested while loops.
Parsing a raw gcode file into something the interpreter can read is not that hard.
@Spumco,
Such as X1/X2, Y1/Y2, Z1/Z2, C1/C2, etc.
As I understand it, you want to send an X2 from the master to the slave machine directly in the gcode.
This is not realised yet. And I think the command should look a little different, as the slaves have a name.
Prototype: [Command_id] [Name] [Command]
Then you would send a command like "G200 n2 G0 X100 Y100 Z100 C0",
or something like "SEND n2 G0 X100 Y100 Z100 C0",
where G200 is chosen as the command_id,
and n2 is the name of the duplicated axes (XYZC).
@Arciera,
That's exactly how it is!
And it's not easy to code this into LinuxCNC, as everything is based on one instance of LinuxCNC.
However, it's interesting to try to set up multiple HAL environments.
This is an idea of how it might work:
Websocket line
|
| - GUI Application (websocket client) (user land)
| - HAL Environment A (Super-Imposed Interpreter) (websocket server) (kernel space)
| - HAL Environment B (Machine controller 1) (websocket client) (kernel space) -|
| - HAL Environment C (Machine controller 2) (websocket client) (kernel space) -|
| - HAL Environment D (Machine controller 3) (websocket client) (kernel space) -|
|
|
Ethercat bus
** PS, relating to the LinuxCNC trajectory planner: soon I have to make an OpenCascade viewer to check the fillet outputs made in the tpmod.so component, as I cannot see anything now.
To my surprise I was able to load multiple instances of HAL, each using their own memory region.
file : hal_priv.h
Add a few defines:
#define MAX_HAL_INSTANCES 10 // Number of instances
#define SHMEM_INSTANCE_SIZE (sizeof(hal_data_t)) // Size of one hal_data_t instance
#define TOTAL_SHMEM_SIZE (SHMEM_INSTANCE_SIZE * MAX_HAL_INSTANCES) // Total size of all instances in shared memory
file : hal_lib.c
Normally it uses int init_hal_data(); and then a previously declared pointer hal_data->....
We cannot use this for initializing multiple instances. The solution is to change the function a little bit:
int init_hal_data_struct(hal_data_t *data)
{
    /* has the block already been initialized? */
    if (data->version != 0) {
        /* yes, verify version code */
        if (data->version == HAL_VER) {
            return 0;
        } else {
            rtapi_print_msg(RTAPI_MSG_ERR,
                "HAL: ERROR: version code mismatch\n");
            return -1;
        }
    }
    /* no, we need to init it, grab the mutex unconditionally */
    rtapi_mutex_try(&(data->mutex));
    /* set version code so nobody else init's the block */
    data->version = HAL_VER;
    /* initialize everything */
    data->comp_list_ptr = 0;
    data->pin_list_ptr = 0;
    data->sig_list_ptr = 0;
    data->param_list_ptr = 0;
    data->funct_list_ptr = 0;
    data->thread_list_ptr = 0;
    data->base_period = 0;
    data->threads_running = 0;
    data->oldname_free_ptr = 0;
    data->comp_free_ptr = 0;
    data->pin_free_ptr = 0;
    data->sig_free_ptr = 0;
    data->param_free_ptr = 0;
    data->funct_free_ptr = 0;
    data->pending_constructor = 0;
    data->constructor_prefix[0] = 0;
    list_init_entry(&(data->funct_entry_free));
    data->thread_free_ptr = 0;
    data->exact_base_period = 0;
    /* set up for shmalloc_xx(), with instance-specific boundaries */
    data->shmem_bot = (uintptr_t)data;
    data->shmem_top = data->shmem_bot + SHMEM_INSTANCE_SIZE;
    data->lock = HAL_LOCK_NONE;
    /* done, release mutex */
    rtapi_mutex_give(&(data->mutex));
    return 0;
}
new file : hal_instances.c
This file creates the HAL instances. It allocates the memory for each instance, then initializes each hal_data_t struct using the function above: int init_hal_data_struct(hal_data_t *data);
#include "rtapi.h"        /* RTAPI realtime OS API */
#include "hal.h"          /* HAL public API decls */
#include "hal_priv.h"     /* HAL private decls */
#include <stdio.h>        /* standard printf for logging */
#include "rtapi_string.h"
#include "rtapi_atomic.h"

/* Array of hal_data_t pointers holding the instances */
hal_data_t *hal_instances[MAX_HAL_INSTANCES];

/* Pointer to the base of the shared memory region */
void *shmem_base = NULL;

int init_hal_instances(void)
{
    printf("Initializing HAL instances...\n");

    /* allocate shared memory for all instances in one block */
    shmem_base = hal_malloc(TOTAL_SHMEM_SIZE);
    if (shmem_base == NULL) {
        printf("Failed to allocate shared memory, total size: %zu bytes\n", TOTAL_SHMEM_SIZE);
        return -1;
    }
    printf("Successfully allocated shared memory at address: %p\n", shmem_base);

    /* size of a single hal_data_t instance */
    size_t instance_size = sizeof(hal_data_t);
    printf("Each hal_data_t instance requires %zu bytes of memory\n", instance_size);

    size_t total_used_memory = 0;

    /* initialize each instance's memory region */
    for (int i = 0; i < MAX_HAL_INSTANCES; i++) {
        hal_data_t *instance = (hal_data_t *)((uintptr_t)shmem_base + (i * SHMEM_INSTANCE_SIZE));
        hal_instances[i] = instance;   /* store the pointer for this instance */

        printf("Initializing HAL instance %d at address: %p\n", i, instance);
        if (init_hal_data_struct(instance) != 0) {
            printf("Failed to initialize HAL instance %d\n", i);
            return -1;
        }
        printf("Successfully initialized HAL instance %d\n", i);

        /* track memory usage */
        total_used_memory += instance_size;
    }

    printf("Total memory used for HAL instances: %zu bytes\n", total_used_memory);

    /* sanity check: used memory should match the allocated size */
    if (total_used_memory != TOTAL_SHMEM_SIZE) {
        printf("Warning: Total memory used (%zu bytes) does not match the allocated memory size (%zu bytes)\n",
               total_used_memory, TOTAL_SHMEM_SIZE);
    }
    printf("All HAL instances initialized successfully.\n");
    return 0;
}
Then I added an init function to halcmd.
Then you can type ./bin/halcmd init
The file halcmd.c:
{"init", FUNCT(do_init_cmd), A_ZERO},
The file halcmd_commands.c:
int do_init_cmd(void)
{
    printf("hal init instances. \n");
    return init_hal_instances();
}
Then in terminal :
user@pc:~/hal/bin$ ./halcmd init
hal init instances.
Initializing HAL instances...
Successfully allocated shared memory at address: 0x7f0557fe8120
Each hal_data_t instance requires 288 bytes of memory
Initializing HAL instance 0 at address: 0x7f0557fe8120
Successfully initialized HAL instance 0
Initializing HAL instance 1 at address: 0x7f0557fe8240
Successfully initialized HAL instance 1
Initializing HAL instance 2 at address: 0x7f0557fe8360
Successfully initialized HAL instance 2
Initializing HAL instance 3 at address: 0x7f0557fe8480
Successfully initialized HAL instance 3
Initializing HAL instance 4 at address: 0x7f0557fe85a0
Successfully initialized HAL instance 4
Initializing HAL instance 5 at address: 0x7f0557fe86c0
Successfully initialized HAL instance 5
Initializing HAL instance 6 at address: 0x7f0557fe87e0
Successfully initialized HAL instance 6
Initializing HAL instance 7 at address: 0x7f0557fe8900
Successfully initialized HAL instance 7
Initializing HAL instance 8 at address: 0x7f0557fe8a20
Successfully initialized HAL instance 8
Initializing HAL instance 9 at address: 0x7f0557fe8b40
Successfully initialized HAL instance 9
Total memory used for HAL instances: 2880 bytes
All HAL instances initialized successfully.
So far it seems to work, but now we must dive a little deeper.
What should be the next step? Maybe do a halcmd show.
This is, for example, the ./bin/halcmd show function:
static void print_comp_info(char **patterns)
{
    int next;
    hal_comp_t *comp;

    if (scriptmode == 0) {
        halcmd_output("Loaded HAL Components:\n");
        halcmd_output("ID Type %-*s PID State\n", HAL_NAME_LEN, "Name");
    }
    rtapi_mutex_get(&(hal_data->mutex));
    next = hal_data->comp_list_ptr;
    while (next != 0) {
        comp = SHMPTR(next);
        if (match(patterns, comp->name)) {
            if (comp->type == COMPONENT_TYPE_OTHER) {
                hal_comp_t *comp1 = halpr_find_comp_by_id(comp->comp_id & 0xffff);
                halcmd_output(" INST %s %s",
                        comp1 ? comp1->name : "(unknown)",
                        comp->name);
            } else {
                halcmd_output(" %5d %-4s %-*s",
                        comp->comp_id, (comp->type == COMPONENT_TYPE_REALTIME) ? "RT" : "User",
                        HAL_NAME_LEN, comp->name);
                if (comp->type == COMPONENT_TYPE_USER) {
                    halcmd_output(" %5d %s", comp->pid, comp->ready > 0 ?
                            "ready" : "initializing");
                } else {
                    halcmd_output(" %5s %s", "", comp->ready > 0 ?
                            "ready" : "initializing");
                }
            }
            halcmd_output("\n");
        }
        next = comp->next_ptr;
    }
    rtapi_mutex_give(&(hal_data->mutex));
    halcmd_output("\n");
}
In the above function hal_data->.... is used; this is just one HAL instance pointer.
In the file hal_priv.h it is declared: extern hal_data_t *hal_data;
So now we know we just have to implement our instance regions:
hal_data_t *hal_instances[MAX_HAL_INSTANCES];
I will try to test if this works.
github.com/LinuxCNC/linuxcnc/issues/2716
github.com/LinuxCNC/linuxcnc/pull/2722/files
Thanks for the links!
As @rene-dev mentioned to me, it is currently not possible to run tests in parallel, mainly because there can only be one LinuxCNC instance running at a time.
It seems not so easy to create HAL instances in a short time period. Segmentation fault... Huh.
Then I looked at the rtapi code in more detail.
I came to the conclusion that the LinuxCNC HAL is clamped to a single cpu core at runtime.
If the user installs a preempt-rt kernel, we can use posix.
This posix layer is then used by rtapi to run a pthread very fast and quite time-stable.
So what looks like an insmod command in rtapi is not an insmod command at all. Insmod in this case is not a kernel insertion command.
Insmod in rtapi is loading a .so lib.
Rtapi can be seen as a library function loader, and it does a little more.
For run(); read rtapi_app_main();
So if you have a lib called test.so, and this lib has a function run();
then rtapi opens the "test.so" lib and finds the function "run".
The lib stays open and rtapi creates a pointer to the function run();.
This pointer is used later on to execute the run(); function.
Rtapi then finally has a list of loaded .so libs with their names and their run(); functions (or pointers to the run(); functions).
This final function list is then executed in order.
All the .so libs have the same function name to execute.
This is basically what rtapi does.
The HAL environment is coded in C, but could eventually be coded in C++, because rtapi is also written in C++, and there is no real .ko hocus pocus.
Transforming HAL into a multi-instance HAL environment is quite a lot of work, as I have seen so far. It also comes with some difficulties to solve.
At the end of this post:
It will take some time to create an rtapi look-alike example:
creating a test app that loads .so libs with a certain function, then runs the function list,
but with examples of how to choose the cpus,
how to set priority flags on .so libs which are time-critical,
and how to create parallel or detached processing using posix.
So far I am happy with the insights tonight.
Source code:
rtapi_function_loader
The above is a repository testing the posix pthread, running different .so library functions named "test();".
It's more or less how the original rtapi workflow is.
This is tested ok with a base thread of 25.000 nanoseconds.
And you can set the used cpus by passing a cpu_list = {0, 1};
If you load 4 instances of the rtapi_function_loader class and assign each instance to a different CPU (CPU 1, CPU 2, etc.), each instance will run on its own dedicated CPU core, ensuring that all 4 threads run in parallel without sharing CPU time.
We also have a priority for the rtapi_function_loader's pthread, with a value from 0 up to 100.
Ok, the next thing is maybe to look at when, where and why we need to allocate shared memory blocks.
I made an even bigger example of how LinuxCNC primarily works with rtapi and HAL.
Why did we make this example?
Primarily to get more insight into how rtapi loads rt modules, how HAL is used, and how to set up a HAL environment.
How to connect HAL pins to each other, and how they are updated.
Specs:
- Loading multiple LinuxCNC instances (function loader instances).
- Choose the cpus to clamp on.
- Set priority.
- Create multiple shared memory regions.
- Update HAL pins over different shared memory regions.
- Create and update HAL connections like "net", but then also connect structure types.
example
Then I looked into the HAL EtherCAT component code, and indeed, the above code could implement an edited version of the HAL EtherCAT code.
- smc.collins
Aciera wrote: Hm, as I understand it, this does _not_ give us multi-threaded gcode execution, as that would require multiple parallel motion planners in the same linuxcnc instance. A single instance of linuxcnc cannot interpolate more than one tool path at the same time.

I could be wrong, but given the current API that linuxcnc is written on top of, this would likely require very deep work into the core of linuxcnc. AFAICT there are no tools in the API that can synchronize the threads, for instance. Also, the code appears to be heavily written for serial execution. What would probably need to happen, off the top of my head, is that the API would need a way to have each thread run independently, with a centrally controlled memory management cache to synchronize the various threads. So you would have a trajectory planner ("all required code to execute motion") for each new instance of an axis, have it run in its own thread, and then a central trajectory planner would keep the various threads in sync.
At that point, it would probably make sense to port to a multithreaded API that has C and C++ tools for this kind of work. Texas Instruments' RTOS for their microcontrollers has this kind of capability. Any newer x86 can do this easily if it is multicore. I am not super familiar with the RTOS version of the Linux kernel that was just released. But yes, a multithreaded API would be the bare minimum required.