Vision for pick and place machine
I read a post somewhere discussing vision, but I could not tell whether EMC actually processes information from the camera or whether the image can only be displayed for the user.
Does EMC support using vision to actually adjust the path? I am looking at building an SMT pick-and-place machine and am trying to decide if I can use vision to adjust for the orientation of the part using a "look up" camera like the "big boys" do. I would also like to be able to use a "look down" camera to find the fiducials to orient the board.
All things are possible, but some are harder than others.
You can use G10 L2 R... to align to both offset and rotation. (virtual-rotary table)
www.linuxcnc.org/docview/html/gcode_main...:G10:-Set-Coordinate
If it is acceptable to jog the crosshair to the marks, then I think it can all be done fairly easily.
If you need to pick the points out of an image, then it gets more difficult, as EMC2 has no machine-vision capability. However, an external program called by an M1nn call could pass back coordinates to the G-code analogue inputs, and those could be used to compute the parameters required to make the G10 work correctly.
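As a sketch of the arithmetic such an external program would have to do: given the nominal positions of two board fiducials and the positions the camera actually measured, the offset and rotation for the G10 L2 command can be computed as below. This is pure Python with hypothetical names, just to show the trigonometry; EMC2 itself would only ever see the resulting numbers.

```python
import math

def g10_from_fiducials(nominal, measured):
    """Compute the X/Y offset and rotation (degrees) that map two
    nominal fiducial positions onto their measured positions, and
    format a G10 L2 command for coordinate system 1 (G54)."""
    (nx1, ny1), (nx2, ny2) = nominal
    (mx1, my1), (mx2, my2) = measured
    # Rotation: difference between the fiducial-to-fiducial angles.
    ang_nom = math.atan2(ny2 - ny1, nx2 - nx1)
    ang_meas = math.atan2(my2 - my1, mx2 - mx1)
    rot = math.degrees(ang_meas - ang_nom)
    # Offset: put the rotated first fiducial onto its measured position.
    c, s = math.cos(math.radians(rot)), math.sin(math.radians(rot))
    ox = mx1 - (nx1 * c - ny1 * s)
    oy = my1 - (nx1 * s + ny1 * c)
    return "G10 L2 P1 X%.4f Y%.4f R%.4f" % (ox, oy, rot)

# Board shifted 1mm in X and rotated 90 degrees about the origin:
print(g10_from_fiducials([(0, 0), (10, 0)], [(1, 0), (1, 10)]))
# -> G10 L2 P1 X1.0000 Y0.0000 R90.0000
```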
Do you know of any software that can be called from the M1xx?
The only machine vision software I have ever used was very expensive, proprietary, and ran on the camera itself.
Google suggests that OpenCV might work, but I think coding would be needed.
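For what it's worth, the core of the job here, finding a blob's centre and orientation, comes down to image moments, which OpenCV exposes as cv2.moments. The same maths is sketched below in pure Python on a toy binary image so the arithmetic is visible; a real setup would run OpenCV on the actual camera frame.

```python
import math

def blob_centre_and_angle(img):
    """Centroid and principal-axis angle (degrees) of the set pixels
    in a binary image given as a list of rows. Same moment arithmetic
    that OpenCV's cv2.moments performs."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                m00 += 1
                m10 += x
                m01 += y
    cx, cy = m10 / m00, m01 / m00
    # Central second moments give the orientation of the blob.
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v:
                mu20 += (x - cx) ** 2
                mu02 += (y - cy) ** 2
                mu11 += (x - cx) * (y - cy)
    angle = 0.5 * math.degrees(math.atan2(2 * mu11, mu20 - mu02))
    return cx, cy, angle

# A 45-degree diagonal stroke of pixels in an 8x8 image:
img = [[1 if x == y else 0 for x in range(8)] for y in range(8)]
print(blob_centre_and_angle(img))  # centre (3.5, 3.5), angle 45.0
```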
John
The second part shows at least one way to use camera data.
Would clicking on a camera image be good enough? I.e., do the machine vision in wetware? I would imagine it would be easier to process mouse clicks as events on an image than to invoke a machine vision system.
Let's say the machine is placing a 14mm x 14mm TQFP part. The part will not be EXACTLY in the correct position in the tray it comes in. So the machine goes over and picks up the part at what it believes to be the center of the part. Then it zips over to the camera, which snaps a pic of the part. In the next moments the image is processed and the machine corrects for the small misalignment of the part. (Imagine the part was twisted just a bit in the tray; the image processor will tell the machine to twist some amount to bring the part square.) This could all be done by mouse clicks, and may be a good way just to start, but eventually one will get really tired of doing each part that way.
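To put numbers on that: once the up-camera reports how far the part's centre sits from the nozzle axis and how much it is twisted, the correction is just an extra nozzle rotation plus a shift of the place position. A minimal sketch, with all names hypothetical, under the assumption that rotating the nozzle also swings the pickup offset around the nozzle axis:

```python
import math

def place_correction(meas_dx, meas_dy, meas_angle_deg, place_angle_deg):
    """Given the part's measured offset from the nozzle centre and its
    measured rotation, return the extra nozzle rotation (degrees) and
    the X/Y shift of the place position that square the part up."""
    # Rotate out the measured error, plus whatever the placement calls for.
    extra_rot = place_angle_deg - meas_angle_deg
    # That rotation swings the pickup offset around the nozzle axis,
    # so rotate the measured offset by extra_rot before compensating.
    a = math.radians(extra_rot)
    dx = meas_dx * math.cos(a) - meas_dy * math.sin(a)
    dy = meas_dx * math.sin(a) + meas_dy * math.cos(a)
    return extra_rot, -dx, -dy

# Part picked 0.2mm off in X and twisted 3 degrees; placing at 0 degrees:
print(place_correction(0.2, 0.0, 3.0, 0.0))
```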
As for the comment by John: now that I think about it, it does seem easier to process the raw data from the CCD than to process the encoded image files. Maybe I need to read up on accessing the CCD directly.
In cine VFX we can motion-track chroma-keyed elements over time in a video (Nuke/AE etc.), easily up to PAL spec, and we can do the same in real time (Max/Jitter/Processing etc.)...
You could make yourself a chicken:
www.linuxcnc.org/component/option,com_ku...d,7615/lang,english/
So after I get a machine up and running, I will start playing with cameras.