Pick and place feeders and camera locations, VCPs

27 May 2012 10:02 - 27 May 2012 10:03 #20442 by edsimmons
Hi Linux CNC forum users,

I'm working towards a HAL module for LinuxCNC that provides the machine vision needed for pick and place of SMT parts. Does anyone have any experience with pick and place operations of any kind using LinuxCNC? I'm keen to assemble comprehensive instructions/software for setting up pick and place with LinuxCNC.

I like the fact that LinuxCNC can be heavily customised with relative ease, is easily set up on all manner of machines, and so on. It would be great to extend this ease of use and retrofitting to the pick and place world.

I have a few points I'd like to ask about/muse on in detail:
  • Component feeders - defining base locations in machine coords
  • Components and tapes - advance, thickness and offsets from base feeder location
  • Camera locations - also need to be defined somehow in the machine config
  • Control without G-code - I see elsewhere on the forum that Andy was working on raster-scanning laser cutters without G-code - perhaps he's willing to share his experience here? ;)

It would certainly make most sense to avoid G-code for this - parsing the file from CAM (in something like a GladeVCP panel) should not be too hard, and queuing up the deduced steps and driving them out as HAL commands to get the desired motion would be my preferred method here. This should also make it easier to keep the control robust against dropped picks and the like, as it'll be possible to try a pick and verify it before continuing - behaviour that is much harder to express in G-code (I think).
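To make the "verify and retry" idea a bit more concrete, here's a very rough userspace sketch of the sort of thing I have in mind - the pin names and the move_to() callback are made up purely for illustration, nothing here exists yet:

#!/usr/bin/env python
# Hypothetical pick-verify-retry sketch. Pin names and the move_to() callback
# are invented for illustration; only the hal module calls are standard.
import hal
import time

h = hal.component("pnp_seq")
h.newpin("vacuum-ok", hal.HAL_BIT, hal.HAL_IN)   # from a vacuum-sensor comparator
h.newpin("valve-on", hal.HAL_BIT, hal.HAL_OUT)   # pickup solenoid
h.ready()

def attempt_pick(move_to, feeder_xy, retries=3):
    """Try a pick up to 'retries' times, verifying via the vacuum pin."""
    for _ in range(retries):
        move_to(feeder_xy)        # however moves end up being issued
        h["valve-on"] = True
        time.sleep(0.2)           # let the vacuum build
        if h["vacuum-ok"]:
            return True           # part is on the nozzle, carry on to placement
        h["valve-on"] = False     # release and try again
    return False                  # give up and flag the feeder for attention

The real thing would need to coordinate with motion properly rather than sleep, but the pick/verify/retry decision is exactly the part that's awkward to express in G-code.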

In an effort to keep the results as easily manageable as possible I'd propose the following simple structure:

  • GladeVCP panel - keeps track of feeder locations, cameras, loaded components and parameters; file parser for the input file and eventual config file(s)
  • HAL Vision module - a userspace module providing common needs such as alignment with fiducials, verifying part presence, orientation and offsets
  • Example HAL files to set it all up

Is anyone experienced with any of these points and willing to donate some time to a worthy cause? Since I'm working on the vision system at the moment, it would be great to see some parallel development of the other aspects. I'd like to work with others on this if there's anyone mad enough/willing to chip in somehow!

I'd be glad to hear anyone's thoughts on any of the points here (or any I have missed).

Agree?/disagree?/it'll all end in tears?/sounds like fun, count me in? :D

Thanks in advance,
Ed

27 May 2012 12:03 #20447 by andypugh
edsimmons wrote:

  • Control without G-code - I see elsewhere on the forum that Andy was working on raster-scanning laser cutters without G-code - perhaps he's willing to share his experience here? ;)
I see a number of ways that this could be done.
What format does pick-and-place data normally come in? Are there any standards?
The simplest approach would be a preprocessor. LinuxCNC is trivial to configure so that all files with a certain extension are run through a filter first. The best example is Image2Gcode, which allows you to open a JPEG image from the Axis "File" menu, and then it is automatically converted to G-code and run.
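From memory, the hook-up is just the [FILTER] section of the INI: a PROGRAM_EXTENSION line plus a line mapping that extension to a script. The script is handed the selected file as its first argument and whatever it prints on stdout is loaded as G-code, so the skeleton is tiny (the parsing is left as a comment here, as the column layout isn't settled):

#!/usr/bin/env python
# Bare-bones filter sketch: LinuxCNC runs this with the selected file as the
# first argument and loads whatever G-code appears on stdout.
import sys

with open(sys.argv[1]) as f:
    for line in f:
        # real parsing and per-placement G-code generation would go here
        print("(source line: %s)" % line.strip())
print("M2")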
A neater solution might be to write a whole new interpreter for LinuxCNC, but one that feeds into the existing motion system. Alternative interpreters are not easy at the moment, but there are the beginnings of a move to put in the structures to make it possible.
My laser-raster project hasn't got very far. I got bogged down in trying to define a finite-jerk trajectory planner, and then other projects bubbled up the queue. However, the basic idea was a userspace HAL module to import graphics files and pass them to a realtime HAL module which outputs position data for stepgens. I think that the P&P application would need a lot more of the standard LinuxCNC motion and interface control, so this way probably doesn't make the most sense.

27 May 2012 16:28 #20449 by cncbasher
The standard is usually just a comma-delimited file with part no, x, y, rotation.
I have an Eagle ULP which gives this info, so it's a start, and it's easy to expand the script if needed.

Then use the vision to check the part and rotate it, matching the part shapes against a database built partly from the input file.

28 May 2012 11:39 #20457 by andypugh
cncbasher wrote:

The standard is usually just a comma-delimited file with part no, x, y, rotation.

In that case, I suspect that the preprocessor might be easiest, converting each line into G-code along the lines of:
move to bin P
digital-io wait for part acquisition
relative move to part
lower head
operate pickup
raise head
relative move to orientation (including camera-measured orientation)
move to XY
place part

repeat.

I imagine that would be a sub, and the G-code would be
O<pickplace> call [3] [100] [200] [90]
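The preprocessor side of that is then almost trivial - read cncbasher's comma-delimited file, look up a feeder for each part, and emit one call per row. A sketch (the feeder lookup table is just a placeholder):

#!/usr/bin/env python
# Sketch only: turns "part,x,y,rotation" rows into calls to a pickplace.ngc
# O-word subroutine. The feeder lookup table is a placeholder.
import sys
import csv

feeder_for = {"R1": 3, "C1": 7}   # part -> feeder bin, however that ends up stored

with open(sys.argv[1]) as f:
    for row in csv.reader(f):
        if len(row) < 4:
            continue
        part, x, y, rot = [c.strip() for c in row[:4]]
        print("O<pickplace> call [%d] [%s] [%s] [%s]"
              % (feeder_for[part], x, y, rot))
print("M2")

The pickplace.ngc sub itself would then contain the sequence above, plus whatever ends up driving the vacuum and reading back the camera measurement.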

29 May 2012 03:38 #20474 by jmelson
A couple of thoughts. How are you going to center parts on the nozzles? I have a Philips CSM-84, kind of high-end for the home shop, but I actually do have it in my basement! It uses 4 chuck jaws that align the part when the nozzle is raised, which deals with both centering and rotation. I have two nozzles set up with aligning jaws, one for small passives (0605 up to 1206 or a little bigger) and one for SO8 up to SO16 size chips. A 3rd nozzle has no jaws, and there is a mechanical alignment station that will center parts up to 30 mm square or so.

This machine has no vision, but it does have a "beam sensor", essentially a one-pixel camera that can locate fiducials and plated through-holes.

It has a location to dump parts when there is a mis-pick (detected by poor vacuum at the nozzle).

I also have a vibratory feeder that holds 8 SO-size sticks of chips. I have made a variety of additional stick feeder lanes that clamp to the side of this unit to accommodate chips that have odd-sized tubes.

I also built a rack that holds a chip tray in the back of the machine for FPGAs and such parts. The CSM-84 firmware handles counting out the X and Y increments of the waffle tray.

I got over 50 feeders with this machine; I can't imagine a setup where I had to manually advance component tapes every so many boards. Peeling the cover tape back to expose many parts at once seems like a REALLY bad idea - the tiniest vibration makes them all fly out of the pockets. The CSM feeders have a "blade" that covers the exposed part until just before the nozzle picks it up, to prevent parts from tumbling out of the tape when it is advanced.

Jon

29 May 2012 03:48 #20475 by jmelson
andypugh wrote:

What format does pick-and-place data normally come in? Are there any standards?

Not really, but the data format is quite simple. My CAD package (Protel) generates a text file with a bunch of columns, but the important ones are:
Designator  centX  centY  rotation  part description
R123        12.34  56.78  90        100 Ohm 0805

I read this into a program which is also fed a second file giving, for each part description, the feeder number, the head to be used, and the rotation between how the part sits in the feeder and how it is oriented in the CAD system. This program generates the file for the CSM-84 in its own format. The machine also needs the location and size of the fiducials and the location of the board's origin in the P&P machine. This is about the same stuff any P&P machine would need.
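If anyone wants to do the same merge for a LinuxCNC-based machine, it is only a few lines. A rough sketch (both file layouts here are invented for illustration, not the actual Protel or CSM-84 formats):

#!/usr/bin/env python
# Sketch of the merge step: join the CAD placement list with a parts table
# giving feeder number, head and tape-orientation offset per part description.
# Both file layouts are invented for illustration.
import csv

# part description -> (feeder number, head, rotation of the part in the feeder)
parts = {}
with open("parts.csv") as f:
    for desc, feeder, head, feed_rot in csv.reader(f):
        parts[desc.strip()] = (int(feeder), int(head), float(feed_rot))

with open("placements.csv") as f:
    for ref, x, y, rot, desc in csv.reader(f):
        feeder, head, feed_rot = parts[desc.strip()]
        # rotation the machine must apply = CAD rotation minus how the part
        # sits in the feeder
        print("%s feeder=%d head=%d x=%s y=%s rot=%.1f"
              % (ref.strip(), feeder, head, x.strip(), y.strip(),
                 float(rot) - feed_rot))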

But errors are VERY common in P&P work: dirt gets on the nozzle, the part bumps against the side of the pocket in the tape and flips sideways, the feeder runs out of parts or jams, and all sorts of other unexpected things go wrong. So the control program needs to have mechanisms to deal with as many of these events as possible.

Jon

29 May 2012 06:37 #20477 by cncbasher
Yes, pick and place machines can be a pain...

I have the feeder idea just about sorted out, very similar to the way you describe, Jon, along with a pickup head which is rotatable. My thought is to use the camera to align the parts rather than using fingers: pattern-match against a database of footprint pictures to locate the part, use the match to find the centre of the part, then lift it by suction (the nozzle is rotatable by a stepper). The camera then pattern-matches the location pads and the part is rotated before mounting. I'm lighting the camera area with infra-red LEDs and have the camera tuned to match, filtering the colours out, so you end up with a white image of the pads. Later on, if this all works, my ultimate goal is to use two cameras and a stereo image to give 3D.

The area I have not worked out yet is how to rotate the part. I have the pattern matching somewhere near, with an angle measurement along with the pick-up centre point in X and Y, but there is a long way to go, i.e. matching the X and Y and rotating.
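For what it's worth, the centre/angle measurement can be sketched in a few OpenCV calls (the image name and threshold are placeholders, and the scale and angle conventions still have to be calibrated against the machine):

#!/usr/bin/env python
# Rough sketch of the centre/angle measurement: threshold the IR image so
# only the pads are white, then fit a minimum-area rectangle to the white
# pixels. Image name and threshold value are placeholders.
import cv2

img = cv2.imread("nozzle_view.png", 0)          # 0 = load as greyscale
_, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

pts = cv2.findNonZero(mask)                     # all the white pixels (the pads)
(cx, cy), (w, h), angle = cv2.minAreaRect(pts)

# offset from the image centre gives the x/y correction; the reported angle
# convention varies between OpenCV versions, so it still has to be mapped
# onto the expected footprint orientation
rows, cols = img.shape
print("offset x=%.2f y=%.2f angle=%.2f" % (cx - cols / 2.0, cy - rows / 2.0, angle))

A rectangle fit only gets you so far with symmetric footprints, of course, which is where matching against the stored footprint picture comes back in.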

30 May 2012 13:25 #20505 by andypugh

Ideal machine to practice with?

30 May 2012 14:04 #20507 by edsimmons
andypugh wrote:

Ideal machine to practice with?


Shh!! Don't tell the world it's there!! I've had my eye on that for a few days. That's likely to be in my workshop soon... :)

Cheers for looking!! lol
