Vision for pick and place machine

16 Aug 2011 13:56 #12526 by btvpimill
Does EMC support using vision to actually adjust the path? I am looking at building an SMT pick-and-place (P-N-P) machine and am trying to decide whether I can use vision to adjust for the orientation of the part using a "look-up" camera like the "big boys" do. I would also like to be able to use a "look-down" camera to find the fiducials to orient the board.

I read a post somewhere discussing vision, but I could not tell whether EMC actually processes information from it or whether it can only be displayed for the user.

16 Aug 2011 15:38 #12527 by andypugh
btvpimill wrote:

Does EMC support using vision to actually adjust the path? I am looking at building an SMT pick-and-place (P-N-P) machine and am trying to decide whether I can use vision to adjust for the orientation of the part using a "look-up" camera like the "big boys" do. I would also like to be able to use a "look-down" camera to find the fiducials to orient the board.


All things are possible, but some are harder than others.

You can use G10 L2 R... to set both an offset and a rotation (a virtual rotary table).
www.linuxcnc.org/docview/html/gcode_main...:G10:-Set-Coordinate

If it is acceptable to jog the crosshair to the marks, then I think it can all be done fairly easily.

If you need to pick the points out of an image, then it gets more difficult, as EMC2 has no machine-vision capability. However, an external program called by an M1nn call could pass coordinates back to the G-code analogue inputs, and those could be used to compute the parameters required to make the G10 work correctly.
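As an illustration of the arithmetic such an external program would need to do (this is not an existing EMC2 feature): given two fiducials' nominal board positions and their measured machine positions, the offset and rotation for a G10 L2 line can be computed like this. A minimal Python sketch with made-up coordinates:

```python
import math

def g10_from_fiducials(nom1, nom2, meas1, meas2):
    """Compute the X/Y offset and rotation R for G10 L2 from two fiducials.

    nom1/nom2   -- fiducial positions in board (design) coordinates
    meas1/meas2 -- the same fiducials as measured in machine coordinates
    Returns (x_offset, y_offset, rotation_degrees).
    """
    # Rotation: difference between the fiducial-to-fiducial vector
    # angles in the two frames.
    ang_nom = math.atan2(nom2[1] - nom1[1], nom2[0] - nom1[0])
    ang_meas = math.atan2(meas2[1] - meas1[1], meas2[0] - meas1[0])
    rot = ang_meas - ang_nom

    # Offset: where the board origin lands in machine coordinates,
    # i.e. measured fiducial 1 minus its nominal position rotated by rot.
    c, s = math.cos(rot), math.sin(rot)
    x_off = meas1[0] - (c * nom1[0] - s * nom1[1])
    y_off = meas1[1] - (s * nom1[0] + c * nom1[1])
    return x_off, y_off, math.degrees(rot)

# Hypothetical measurement: board shifted by (10, 5) and rotated 2 degrees.
x, y, r = g10_from_fiducials((0, 0), (100, 0), (10, 5), (109.939, 8.490))
print(f"G10 L2 P1 X{x:.3f} Y{y:.3f} R{r:.3f}")
# -> G10 L2 P1 X10.000 Y5.000 R2.000
```

The printed line could then be fed back to the interpreter, or the three numbers passed through analogue inputs as Andy describes.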

16 Aug 2011 15:56 #12528 by btvpimill
I was afraid that was going to be the answer. Jogging could work for the fiducials, as this could be a setup step, but the parts would need to be "found" by software.

Do you know of any software that can be called from the M1xx?

16 Aug 2011 16:44 #12529 by andypugh
btvpimill wrote:

Do you know of any software that can be called from the M1xx?

The only machine vision software I have ever used was very expensive, proprietary, and ran on the camera itself.
Google suggests that OpenCV might work, but I think coding would be needed.
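For a feel of what that coding would involve: the orientation of a part blob in a thresholded camera image can be estimated from its image moments (the same quantities OpenCV exposes via cv2.moments). A minimal pure-Python sketch, assuming the binary image is a list of rows of 0/1 pixels:

```python
import math

def orientation(binary):
    """Estimate centroid and rotation of a blob in a binary image.

    binary -- 2D list of 0/1 pixel values (1 = part, 0 = background).
    Returns (cx, cy, angle_degrees) for the blob's major axis.
    """
    # Raw moments give the centroid.
    m00 = m10 = m01 = 0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            m00 += v
            m10 += v * x
            m01 += v * y
    cx, cy = m10 / m00, m01 / m00

    # Second central moments give the principal-axis angle.
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v:
                dx, dy = x - cx, y - cy
                mu20 += dx * dx
                mu02 += dy * dy
                mu11 += dx * dy
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return cx, cy, math.degrees(angle)
```

A real implementation would let OpenCV do the thresholding and blob-finding (cv2.findContours, cv2.minAreaRect), but the correction angle handed back to the G-code ultimately comes from this kind of calculation.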

16 Aug 2011 17:07 #12530 by BigJohnT
On all the vision cameras we use, the software runs in the camera itself, and you access the camera over a LAN cable with a browser.

John

16 Aug 2011 21:29 #12535 by andypugh
Have you seen wiki.linuxcnc.org/cgi-bin/emcinfo.pl?Axis_Embed_Video

The second part shows at least one way to use camera data.

Would clicking on a camera image be good enough? I.e., do the machine vision in wetware? I would imagine it is easier to process mouse clicks as events on an image than to invoke a machine vision system.

16 Aug 2011 22:30 #12536 by btvpimill
Yes, that's the wiki page that made me think this might be possible. Here is the use case:

Let's say the machine is placing a 14 mm x 14 mm TQFP part. The part will not be EXACTLY in the correct position in the tray it comes in. So the machine goes over and picks up the part at what it believes to be the center of the part. Then it zips over to the camera, which snaps a picture of the part. In the next moments the image is processed and the machine corrects for the small misalignment of the part. (Imagine the part was twisted just a bit in the tray; the image processor tells the machine to twist by some amount to bring the part square.) This could all be done by mouse clicks, and that may be a good way to start, but eventually one will get really tired of doing each part that way.
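That correction step can be sketched in a few lines. Assume (hypothetically) the look-up camera reports the part-center offset (ex, ey) from the nozzle axis and the part's twist et in degrees; all names here are illustrative, not from any real P-N-P software:

```python
import math

def corrected_place(px, py, pt, ex, ey, et):
    """Correct a placement for camera-measured pick error.

    (px, py, pt) -- nominal placement position and rotation (degrees)
    (ex, ey)     -- part-center offset from the nozzle axis, as seen
                    by the look-up camera (same units as px/py)
    et           -- part twist on the nozzle, degrees
    Returns (nozzle_x, nozzle_y, nozzle_rotation).
    """
    # Rotating the nozzle by (pt - et) squares the part up; that same
    # rotation swings the offset vector, so rotate it before subtracting.
    rot = math.radians(pt - et)
    c, s = math.cos(rot), math.sin(rot)
    ox = c * ex - s * ey
    oy = s * ex + c * ey
    return px - ox, py - oy, pt - et
```

With zero measured error this returns the nominal placement unchanged; a small twist in the tray just subtracts from the nozzle rotation and shifts the target by the rotated offset.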

As for the comment by John: now that I think about it, it does seem easier to process the raw data from the CCD than to process the encoded image files. Maybe I need to go read up on reading the CCD directly.

17 Aug 2011 01:59 - 23 Aug 2011 20:17 #12538 by 1:1
ooh, hey this is interesting.

In cinema VFX we can motion-track chroma-keyed elements over time in a video (Nuke/AE etc.); easily up to PAL spec we can do the same in real time (Max/Jitter/Processing etc.).

You could make yourself a chicken:



:silly:

17 Aug 2011 11:23 #12541 by Vider
I started a project for this some time ago, but I didn't finish the integration with EMC2:

www.linuxcnc.org/component/option,com_ku...d,7615/lang,english/

17 Aug 2011 11:41 #12542 by btvpimill
Wow, that's good stuff! I guess from the thread you are using it as an input device based on mouse clicks? Any thoughts on doing actual vision? I think the pick-and-place machines use IR: when looking at the output on screen, the leads are almost white, the body is black, and the rest of the background is gray. To me this makes perfect sense; there is no software filtering needed, just line up the white dots.
So after I get a machine up and running, I will start playing with cameras.

Powered by Kunena Forum