Properly exiting a GladeVCP panel embedded in Axis
22 Jan 2015 14:03 #55260
by cmorley
Replied by cmorley on topic Properly exiting a GladeVCP panel embedded in Axis
So I still don't fully understand.
Are you saying that the embedded GladeVCP panel is still running after LinuxCNC is closed?
By your error snippet, it seems gladevcp is getting the signals to close.
I guess what I'm saying is (because I haven't run your code): how do you know gladevcp is still running?
AXIS is supposed to send the destroy signal through the plug and socket and the XEmbed protocol.
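For reference, one way to act on that destroy signal from the panel side is to hook it in the GladeVCP handler file and stop any worker threads there. A minimal sketch, assuming an Event-driven worker loop (the thread, Event, and signal-handler names are illustrative, not from the config under discussion):

import threading

stop_event = threading.Event()

def worker_loop():
    # placeholder for the real polling work
    while not stop_event.is_set():
        stop_event.wait(0.1)

worker = threading.Thread(target=worker_loop)
worker.daemon = True   # a daemon thread cannot keep the process alive on exit
worker.start()

class HandlerClass:
    def __init__(self, halcomp, builder, useropts):
        self.halcomp = halcomp

    def on_destroy(self, widget, data=None):
        # wire this to the toplevel window's 'destroy' signal in the .glade file
        stop_event.set()
        worker.join(1.0)

def get_handlers(halcomp, builder, useropts):
    return [HandlerClass(halcomp, builder, useropts)]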
22 Jan 2015 14:11 #55261
by cmorley
Replied by cmorley on topic Properly exiting a GladeVCP panel embedded in Axis
For your other problem:
Here is what I did in an experimental config:
toplevel.py:
import remap

def __init__(self):
    remap.init_pins(self)
remap.py:
from stdglue import *
import hal

def init_pins(self):
    # only run this if we are really moving the machine
    # skip this if running task for the screen
    if self.task:
        h = hal.component("tool-info")
        h.newpin("tool-number", hal.HAL_S32, hal.HAL_OUT)
        h.newpin("tool-wear-number", hal.HAL_S32, hal.HAL_OUT)
        h.newpin("tool-wear-offset-x", hal.HAL_FLOAT, hal.HAL_OUT)
        h.newpin("tool-wear-offset-z", hal.HAL_FLOAT, hal.HAL_OUT)
        h.newpin("tool-offset-x", hal.HAL_FLOAT, hal.HAL_OUT)
        h.newpin("tool-offset-z", hal.HAL_FLOAT, hal.HAL_OUT)
        h.ready()
        self.h = h

def index_fanuc_lathe_tool(self, **words):
    # only run this if we are really moving the machine
    # skip this if running task for the screen
    if not self.task:
        return INTERP_OK
    try:
        # check there is a tool number from the G-code
        cblock = self.blocks[self.remap_level]
        if not cblock.t_flag:
            self.set_errormsg("T requires a tool number")
            return INTERP_ERROR
        tool_raw = int(cblock.t_number)
        # if it's less than 100, someone forgot to add the wear number,
        # so we add it automatically;
        # separate out tool number (tool) and wear number (wear),
        # add 10000 to the wear number
        if tool_raw < 100:
            tool_raw = tool_raw * 100
        tool = int(tool_raw / 100)
        wear = 10000 + tool_raw % 100
        # uncomment for debugging
        # print '***tool#', cblock.t_number, 'toolraw:', tool_raw, 'tool split:', tool, 'wear split:', wear
        # set HAL pins
        self.h["tool-number"] = cblock.t_number
        self.h["tool-wear-number"] = wear
        # check for a tool number entry in the tool file
        (status, pocket) = self.find_tool_pocket(tool)
        if status != INTERP_OK:
            self.set_errormsg("T%d: tool entry not found" % (tool))
            return status
        # if there is a wear number, check for it in the tool file.
        # For wear offset 00 we don't require a tool 10000 entry
        # in the tool file, so we skip this check.
        if wear > 10000:
            (status, pocket) = self.find_tool_pocket(wear)
            if status != INTERP_OK:
                self.set_errormsg("T%d: wear entry not found" % (wear))
                return status
        # index tool immediately to tool number
        # immediately add tool offset
        self.selected_tool = self.selected_pocket = tool
        emccanon.SELECT_POCKET(self.selected_pocket, self.selected_tool)
        self.execute("m6")
        self.execute("g43 h%d" % tool)
        # record the x and z offsets of the current tool
        x1 = float(self.tool_offset.x)
        z1 = float(self.tool_offset.z)
        # if the wear offset is specified, immediately add its offset
        if wear > 10000:
            self.execute("g43.2 h%d" % wear)
            # Not sure why, but must call this again to resync, otherwise
            # ;py,print self.tool_table[0].toolno (in MDI) will always be one tool behind
            self.execute("m6")
        # calculate wear offset and set HAL pins
        self.h["tool-offset-x"] = x1
        self.h["tool-offset-z"] = z1
        self.h["tool-wear-offset-x"] = (float(self.tool_offset.x) - x1)
        self.h["tool-wear-offset-z"] = (float(self.tool_offset.z) - z1)
        # all good
        return INTERP_OK
    except Exception, e:
        self.set_errormsg("T%d index_fanuc_lathe_tool: %s" % (int(words['t']), e))
        return INTERP_ERROR
Note the 'if self.task:'
The script is loaded twice: once by the GUI's preview interpreter and once by the machine's task interpreter (self.task is true only in the latter).
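For context, a remap like this gets wired up in the INI file; assuming remap.py and toplevel.py sit in a python/ directory next to the config, the stanzas would look roughly like this (a sketch, not taken verbatim from the config above):

[RS274NGC]
REMAP=T python=index_fanuc_lathe_tool

[PYTHON]
TOPLEVEL=python/toplevel.py
PATH_PREPEND=python

With that in place, T0102 in MDI indexes tool 1 and applies wear offset 2 (stored as tool 10002 in the tool table).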
Chris M
22 Jan 2015 14:14 #55262
by cmorley
Replied by cmorley on topic Properly exiting a GladeVCP panel embedded in Axis
Could probably have added the 'if self.task' check to toplevel.py instead, I bet.
Chris M
23 Jan 2015 04:20 #55274
by tnl.lambert
Replied by tnl.lambert on topic Properly exiting a GladeVCP panel embedded in Axis
Ok, so first things first: I can see the gladevcp task still running in the System Monitor. That's how I know it's still there. Everything exits fine except for the threads - do you have a recommendation for a method to terminate the threads when the main 'handlers.py' is terminated by axis?
In regards to your experimental code, it certainly explains a few things, but it doesn't solve all of my problems. It does, however, fix remap.py.
In addition to remap.py, I also have modbus.py, and another script called tester.py. Both additional scripts try to make hal components. My current workaround is to keep importing everything into handlers.py, which then runs inside the gladevcp process. Not good factoring, but it does run once.
The issue is that my other files use threads. See modbus.py below:
#from stdglue import *
import hal
import gtk
import serial
import threading
import sys
import time

#def init_pins(self):
#    if self.task:
err = hal.component("ErrGlade")
err.newpin("Robot", hal.HAL_BIT, hal.HAL_IN)
err.newpin("ModBus", hal.HAL_BIT, hal.HAL_IN)
err.ready()
#self.err = err

eout = hal.component("ErrGladeOut")
eout.newpin("Robot", hal.HAL_BIT, hal.HAL_IN)
eout.newpin("ModBus", hal.HAL_BIT, hal.HAL_IN)
eout.ready()
#self.eout = eout

comm = hal.component("m77handlerPin")
comm.newpin("l", hal.HAL_S32, hal.HAL_IO)
comm.newpin("h", hal.HAL_S32, hal.HAL_IO)
comm.newpin("o", hal.HAL_S32, hal.HAL_IO)
comm.ready()
#self.comm = comm

h = hal.component("IRB")
h.newpin("go", hal.HAL_BIT, hal.HAL_IN)
h.ready()
#self.h = h

#pause = hal.component("PauseOut")
#pause.newpin("p", hal.HAL_BIT, hal.HAL_IO)
#pause.ready()

numerr = 0
mainclosed = 0
cmdarray = bytearray(2)
cmdarray[0] = 0

class myThread(threading.Thread):
    def __init__(self, threadID, name, timeout):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.timeout = timeout

    def run(self):
        print "Starting " + self.name
        modbusInit(self.name, self.timeout)
        print "Exiting " + self.name

def modbusInit(threadName, timeoutIn):
    ser = serial.Serial('/dev/ttyUSB0', timeout=timeoutIn)
    outputByte = 165
    numerr = 0
    inSize = 1
    #calcXor()
    print "Start Time: %f" % time.time()
    # NOTE: numerr is never incremented, so this loop only ends
    # if mainclosed is set - this is the thread that never exits
    while numerr < 100:
        cmdarray[0] = comm.l
        cmdarray[1] = comm.h
        if mainclosed == 1:
            print "Thread Closing"
            break  # was thread.exit(); the 'thread' module was never imported
        if (cmdarray[0] > 0) and (err.Robot == 0) and (err.ModBus == 0):
            print "working thread 1: %f" % time.time()
            #try:
            resp1, resp2 = sendFc16(165, 1, len(cmdarray), cmdarray, ser)
            print "cmd = %d" % cmdarray[0]
            #cmdarray[0] = 0
            comm.l = 0
            comm.h = 0
            comm.o = 1
            print "Communication Success"
            # error from robot
            if (resp1 == 99) and (resp2 == 99):
                eout.Robot = 1
            # error from Modbus
            if (resp1 == 0) and (resp2 == 99):
                eout.ModBus = 1
    ser.close()

def writeByte(outByte, outPort):
    #outPort.flushInput()
    #outPort.flushOutput()
    return outPort.write(outByte)

def readByte(readSize, readPort):
    inputByte = readPort.read(readSize)
    while len(inputByte) == 0:
        inputByte = readPort.read(readSize)
    return inputByte

def getCRC(outMessage):
    # standard Modbus RTU CRC-16 over all but the last two (CRC) bytes
    CRCFull = 0xFFFF
    CRC = bytearray(2)
    for i in range(0, len(outMessage) - 2):
        CRCFull = CRCFull ^ outMessage[i]
        for j in range(0, 8):
            CRCLSB = CRCFull & 0x0001
            CRCFull = (CRCFull >> 1) & 0x7FFF
            if CRCLSB == 1:
                CRCFull = CRCFull ^ 0xA001
    CRC[1] = (CRCFull >> 8) & 0xFF
    CRC[0] = CRCFull & 0xFF
    return CRC

def buildMessage(address, type, start, registers, message):
    message[0] = address
    message[1] = type
    message[2] = start >> 8
    message[3] = start
    message[4] = registers >> 8
    message[5] = registers
    outputCRC = getCRC(message)
    # debug prints (these assume at least two data registers in the message)
    print "mess7 is: %d" % message[7]
    print "mess8 is: %d" % message[8]
    print "mess9 is: %d" % message[9]
    print "mess10 is: %d" % message[10]
    message[len(message) - 2] = outputCRC[0]
    message[len(message) - 1] = outputCRC[1]

def sendFc16(address, start, registers, values, port):
    if port.isOpen():
        port.flushInput()
        port.flushOutput()
        message = bytearray(9 + 2 * registers)
        response = bytearray(8)
        message[6] = registers * 2
        for i in range(0, registers):
            message[7 + 2 * i] = values[i] >> 8
            message[8 + 2 * i] = values[i]
        try:
            buildMessage(address, 16, start, registers, message)
            writeMessage(port, message)
            response = getResponse(port)
            if checkResponse(response):
                print "CRC Match"
                return response[3], response[5]
        except (RuntimeError, TypeError, NameError):
            return 0, 99
    return 0, 0

def writeMessage(port, message):
    for i in range(0, len(message)):
        writeByte(chr(message[i]), port)

def getResponse(port):
    response = bytearray(8)
    for i in range(0, 8):
        response[i] = readByte(1, port)
    return response

def checkResponse(response):
    CRC = getCRC(response)
    return CRC[0] == response[len(response) - 2] and CRC[1] == response[len(response) - 1]

thread1 = myThread(1, "Modbus-Thread-1", 0.5)
thread1.start()
print "Exiting Main Thread"
I can't seem to figure out a way to make your 'if self.task' strategy work here.
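One way to adapt it: instead of gating on self.task, give the thread its own shutdown flag and mark it as a daemon, so it can never keep the gladevcp process alive. A minimal sketch of the pattern (names are illustrative, and the Modbus work is reduced to a placeholder):

import threading

stop_event = threading.Event()

class StoppableThread(threading.Thread):
    def __init__(self, threadID, name, timeout):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.timeout = timeout
        self.daemon = True          # process exit no longer waits on this thread

    def run(self):
        print "Starting " + self.name
        while not stop_event.is_set():
            # one unit of Modbus work would go here
            stop_event.wait(self.timeout)
        print "Exiting " + self.name

thread1 = StoppableThread(1, "Modbus-Thread-1", 0.5)
thread1.start()
# later, from whatever code sees the panel closing:
# stop_event.set(); thread1.join(1.0)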
23 Jan 2015 06:17 #55279
by tnl.lambert
Replied by tnl.lambert on topic Properly exiting a GladeVCP panel embedded in Axis
Another bit of info - it looks like my toplevel.py is executed 4 times:
Starting LinuxCNC...
HAL: ERROR: duplicate component name 'classicladder'
INFO CLASSICLADDER---I/O modbus master closed!
INFO CLASSICLADDER---I/O modbus master (Ethernet) init ok !
Code Execution
Code Execution
Code Execution
HAL: ERROR: duplicate component name 'm77remapPin'
Second hal pin initialization attempt.
Code Execution
Executing Tester2
Starting Modbus-Thread-1
Exiting Main Thread
Starting Motor-Thread-1
MakeMotor
Start Time: 1421964823.614192
Xlib.protocol.request.QueryExtension
Xlib.protocol.request.QueryExtension
My print statement says 'Code Execution' each time toplevel is called. The duplicate component names are resolved by your earlier fix.
23 Jan 2015 15:30 #55285
by cmorley
Replied by cmorley on topic Properly exiting a GladeVCP panel embedded in Axis
I find it hard to help you debug, as you tell us symptoms more than problems.
Telling me the code executes four times does not tell me the actual problem.
Remapping is a relatively new and unexplored technique. I doubt there are a lot of experts.
What I'm getting at is that you're delving into unknown territory, so it may take us a bit to find the answers.
Is it possible for you to post a complete sample config?
Also I was wondering why you wished to have multiple components inside of one with multiple pins?
Chris M
23 Jan 2015 22:57 #55293
by tnl.lambert
Replied by tnl.lambert on topic Properly exiting a GladeVCP panel embedded in Axis
I appreciate the difficulty - it's not an easy task to help via forum posts. That said, it is much appreciated.
I have attached everything to this post.
python.zip is the linuxcnc python folder.
SingleMotorTest.zip is the config.
gladetests.zip has all three glade panels I've tried.
pythontests.zip has the glade handler files.
As to your question, I am using multiple components mostly because I am performing multiple tasks. If all of the code was properly factored, I would most likely have one or two components per script.
With that said, I would very much like to know how to connect to an existing HAL pin in python. That way, I could create the pins somewhere convenient, and simply connect to them when the script needs them.
24 Jan 2015 03:48 #55302
by tnl.lambert
Replied by tnl.lambert on topic Properly exiting a GladeVCP panel embedded in Axis
Solved the multiple execution problem. I'll post the solution later today. Still looking for the solution to the original problem - exiting the threads when Axis closes.
24 Jan 2015 16:17 #55308
by cmorley
Replied by cmorley on topic Properly exiting a GladeVCP panel embedded in Axis
Try something like this to connect pins to signals:
h = hal.component("IRB")
h.newpin("go", hal.HAL_BIT, hal.HAL_IN)
hal.new_sig("go-sig",hal.HAL_BIT)
hal.connect('IRB.go','go-sig')
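The same call also attaches a pre-existing pin to that signal - only the pin name changes - which is one way to hook up pins created elsewhere, for example by the panel. A sketch, where the gladevcp.go-button pin name is assumed, not taken from the attached config:

import hal

hal.new_sig("go-sig", hal.HAL_BIT)
hal.connect("IRB.go", "go-sig")              # pin created by the Python component
hal.connect("gladevcp.go-button", "go-sig")  # existing pin exported by the panel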
30 Jan 2015 03:32 #55463
by tnl.lambert
Replied by tnl.lambert on topic Properly exiting a GladeVCP panel embedded in Axis
Okay, so I have *most* of a solution. Here's what it looks like:
I have consolidated almost all of my hal components into toplevel.py. So in toplevel, I do stuff like this:
h = hal.component("IRB")
h.newpin("go", hal.HAL_BIT, hal.HAL_IN)
In order to get the initialization run once, I do some other funky stuff in toplevel.py:
import hal
import remap
from modbus2 import myThread

def __init__(self):
    # Since the program starts twice (parsed 4 times, but only initialized twice),
    # the self.task flag is used to identify each attempt to start.
    # self.task = 1 is the first run, and self.task = 0 is the second.
    print "Code Execution - task: " + str(self.task)
    remap.init_pins(self)
    # run when self.task = 0
    if not self.task:
        # HAL pins for the ModBus script
        mod = hal.component("ModBus")
        mod.newpin("RobotErr", hal.HAL_BIT, hal.HAL_IO)
        mod.newpin("ModBusErr", hal.HAL_BIT, hal.HAL_IO)
        mod.newpin("PythonErr", hal.HAL_BIT, hal.HAL_IO)
        mod.newpin("Error", hal.HAL_BIT, hal.HAL_IN)
        mod.newpin("on_state", hal.HAL_BIT, hal.HAL_IN)
        mod.ready()
        mod.on_state = 1
        comm = hal.component("m77handlerPin")
        comm.newpin("l", hal.HAL_S32, hal.HAL_IO)
        comm.newpin("h", hal.HAL_S32, hal.HAL_IO)
        comm.newpin("o", hal.HAL_S32, hal.HAL_IO)
        comm.ready()
        thread1 = myThread(1, "Modbus-Thread-1", 0.5, mod, comm)
        thread1.start()
It turns out that self.task is 0 the last time toplevel.py is loaded and its __init__ method is called.
The remap file is imported fully, because it doesn't have a thread class in it. However, it does still need a 'run-once' wrapper:
# Run-once HAL component initialization called by toplevel.py
def init_pins(self):
    if self.task:
        h = hal.component("m77remapPin")
        h.newpin("l", hal.HAL_S32, hal.HAL_IO)
        h.newpin("h", hal.HAL_S32, hal.HAL_IO)
        h.ready()
        self.h = h
    else:
        return INTERP_OK

def m77(self, **words):
    if not self.task:
        return INTERP_OK
    if words.has_key('l') and words.has_key('h'):
        print "wordval - %d" % int(words['l'])
        print "wordval - %d" % int(words['h'])
        self.h["l"] = int(words['l'])
        self.h["h"] = int(words['h'])
    return INTERP_OK
Now, to solve the thread exiting problem - since everything is tied to axis, I've just made a gladeVCP button which is tied to a pin in each thread that exits all of the while loops:
class myThread2(threading.Thread):
    def __init__(self, threadID, name, comm):
        threading.Thread.__init__(self)
        self.threadID = threadID
        self.name = name
        self.comm = comm

    def run(self):
        print "Starting " + self.name
        motorInit(self.name, self.comm)
        print "Exiting " + self.name

def motorInit(threadName, comm):
    while comm.on_state:
        if comm.reset:
            comm.reset = 0
            ......
So the myThread2 class is imported into toplevel.py and started there. The HAL component is passed in when the myThread2 class is instantiated, and handed down to the motorInit or modbusInit function when the thread is started.
Thus, I get access to my thread, and hal connections like the one to comm.on_state, which I use to exit the loops properly. With the while loops gone, each thread simply completes, and at that point, axis will properly close GladeVCP (at least properly enough for memory management and zombie processes to no longer be an issue).
My second hal connection to comm.reset can be used to pause the while loop, by effectively stopping the execution of any code that the if statement contains.
There is one exception to my hal component creation strategy, which is that handlers.py, the python script for the glade embedded panel, needs its own hal components. Since handlers.py is only loaded once, that isn't a problem, and since there are no more pesky threads running in handlers.py, it will exit successfully with axis. Connect up all the hal pins in glade_postgui.hal and custom_postgui.hal, and you're pretty much done.
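For the postgui files the wiring is ordinary net lines; a sketch with assumed widget pin names (the gladevcp pin names come from the widget names in the .glade file, so yours will differ):

# custom_postgui.hal - illustrative names only
net modbus-on     gladevcp.on-button     => ModBus.on_state
net modbus-error  gladevcp.error-button  => ModBus.Error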