Thursday, 7 May 2015

AUTOMATIC PICK AND PLACE ROBOT

{implemented based on the paper below}
Machine vision is used to identify the object in the camera image and to remove noise from it. Visual servoing then controls the trajectory of the robot toward the object seen in the image. The robot picks up the object and returns to its starting position to place it. An Arduino, an H-bridge and DC motors are used to drive the robot. The block diagram for the automatic pick and place robot using a camera is shown below; it contains eight main blocks.



Fig: block diagram for the automatic pick and place robot





A camera is used as the sensor to measure the distance to the object. In modern robots a camera is often used as the main sensor, since it captures the entire scene and can replace many other sensors for distance measurement, inspection, identification and so on.
Lighting is an essential part of machine vision. Good illumination of the scene is important because it directly affects the complexity of the image processing algorithm: poor lighting makes interpreting the scene more difficult. A proper lighting technique should provide high contrast and minimize reflections and shadows.
MATLAB stands for MATrix LABoratory, so, as the name suggests, it is built around matrices. An image (or any other data, such as sound) can be converted to a matrix, and various operations can then be performed on it to obtain the desired results and values. An image in MATLAB is stored as a 2D matrix (of size m×n) where each element represents the intensity of light/colour of that particular pixel. For a binary image the value of each element is either 0 or 1, and for an 8-bit grayscale image each value lies between 0 and 255. A colour image is stored as an m×n×3 matrix where each element holds the red, green or blue value of that pixel (so it is effectively a 3D matrix); you can think of it as three 2D matrices for the red, green and blue intensities. Here MATLAB is used both to interpret the vision data obtained from the camera and to control the hardware.
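As a quick illustration of this matrix representation, here is a minimal sketch (it assumes the peppers.png sample image that ships with MATLAB; any image file works the same way):

rgb = imread('peppers.png');    % colour image: m-by-n-by-3 uint8 matrix
size(rgb)                       % shows the three dimensions of the matrix
gray = rgb2gray(rgb);           % grayscale image: m-by-n matrix, values 0..255
bw = gray > 128;                % binary image: m-by-n logical matrix, values 0 or 1
figure, imshow(bw);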

MATLAB camera interfacing

The command imaqhwinfo returns a structure that contains information about the image acquisition adaptors available on the system. An adaptor is the interface between MATLAB and the image acquisition devices connected to the system; its main purpose is to pass information between MATLAB and an image acquisition device via its driver. The command c = videoinput('winvideo',1,'YUY2_640x480') takes the video input into MATLAB, and it can be viewed using the preview command. getsnapshot is used to grab a single image from the video.
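Putting these commands together, a minimal interfacing sketch (assuming a webcam on the 'winvideo' adaptor, as used later in the code) looks like this:

imaqhwinfo                                      % list the available image acquisition adaptors
c = videoinput('winvideo',1,'YUY2_640x480');    % create the video input object
preview(c);                                     % open a live preview window
img = getsnapshot(c);                           % grab a single frame from the video
figure, imshow(img);                            % display the captured frame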
Circuit diagram
The motors are connected as shown in the figure below. The two motors are controlled by the input logic at pins 1A & 2A and 3A & 4A of the H-bridge. Input logic 00 or 11 will stop the corresponding motor, while logic 01 and 10 will rotate it in the clockwise and anticlockwise directions, respectively. RA0, RA1, RA2 and RA3 are connected to the Arduino Uno for controlling the motors. A small sketch of this input logic is shown below.
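The sketch below illustrates the 00/01/10/11 logic using the same Arduino IO package as the main program; the mapping of Arduino pins 2 and 4 to the 1A and 2A inputs is only an assumption for illustration.

a = arduino('DEMO');                        % connect to the board (same call as in the main program)
a.pinMode(2,'output');                      % drives 1A (assumed wiring)
a.pinMode(4,'output');                      % drives 2A (assumed wiring)
a.digitalWrite(2,0); a.digitalWrite(4,1);   % logic 01: motor rotates clockwise
pause(1);
a.digitalWrite(2,1); a.digitalWrite(4,0);   % logic 10: motor rotates anticlockwise
pause(1);
a.digitalWrite(2,1); a.digitalWrite(4,1);   % logic 11: motor stops (logic 00 also stops it)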


Fig: motor connection for the robot

Fig: automatic pick and place robot using Arduino Uno






MATLAB CODE:
Program for thresholding:
function bw = greenb(im)
% Threshold the input image and return a binary mask of the target object.
[m,n,~] = size(im);
bw = zeros(m,n);
for i = 1:m
    for j = 1:n
        % pixel belongs to the object if its channel values fall inside the thresholds
        if (im(i,j,1) < 185 && im(i,j,2) > 130 && im(i,j,3) < 116)
            bw(i,j) = 1;
        end
    end
end
end
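For reference, the same thresholding can be written without explicit loops (a one-line sketch using the same thresholds as the function above):

bw = im(:,:,1) < 185 & im(:,:,2) > 130 & im(:,:,3) < 116;   % logical mask of the object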
Program for distance calculation and robotic movements:
clc;
if exist('a','var') && isa(a,'arduino') && isvalid(a)   %% connect to Arduino
    % connection already open, nothing to do
else
    a = arduino('DEMO');
end
a.pinMode(2,'output');    %% declare pin modes
a.pinMode(7,'output');
a.pinMode(4,'output');
a.pinMode(8,'output');
a.digitalWrite(2,1);      %% make the H-bridge input pins high:
a.digitalWrite(7,1);      %% wheel and hand motors not rotating
a.digitalWrite(8,1);
a.digitalWrite(4,1);
pause(2);
c = videoinput('winvideo',1,'YUY2_640x480');
preview(c);               %% preview of the video input

for count = 0:4            %% number of pick-and-place loops
    x = getsnapshot(c);
    figure, imshow(x);
    img = greenb(x);       %% figure, imshow(img);
    se = strel('disk',5);
    bw = imclose(img,se);
    %% figure, imshow(bw);
    b1 = imfill(bw,'holes');
    %% figure, imshow(b1);
    se = strel('disk',6);
    b2 = imopen(b1,se);
    figure, imshow(b2);    %% object recognition complete up to this point
    [i,j,s] = find(b2);
    [u,n] = size(b2);
    S = sparse(i,j,s,u,n);
    k = min(i);
    l = max(i);
    t = l - k;             %% number of rows spanned by the object (its height in pixels)
    disp('length of object');
    disp(t);
    %% a.digitalWrite(2,1);
    if (t >= 5)                %% at least 5 rows, otherwise no object detected
        a.digitalWrite(4,0);
        pause(.48);
        a.digitalWrite(4,1);   %% move forward a known distance, 15 cm
        pause(3.5);

        x1 = getsnapshot(c);   %% get a second image and process it as above
        figure, imshow(x1);
        img1 = greenb(x1);     %% figure, imshow(img1);
        se = strel('disk',5);
        bw = imclose(img1,se);
        %% figure, imshow(bw);
        b1 = imfill(bw,'holes');
        %% figure, imshow(b1);
        se = strel('disk',6);
        b2 = imopen(b1,se);
        figure, imshow(b2);
        [i,j,s] = find(b2);
        [u,n] = size(b2);
        S = sparse(i,j,s,u,n);
        k = min(i);
        l = max(i);
        t1 = l - k;
        disp('length of object');
        disp(t1);
        if (t1 > t)    %% as the robot moves towards the object, its apparent length must increase
            %% the apparent size is inversely proportional to distance, so t/t1 = (d-15)/d,
            %% where d is the initial distance; solving gives d = 15/(1-(t/t1))
            d = 15/(1-(t/t1));    %% initial distance to the object (cm)
            d1 = d - 20;          %% distance still to travel (the 15 cm already covered, plus a margin for the arm, are subtracted)
            disp(d1);
            time = d1/31.4159;    %% time required to cover d1 at the robot's speed (cm/s)
            disp('time');
            disp(time);
            a.digitalWrite(4,0);    %% move forward to the object
            pause(time);
            a.digitalWrite(4,1);
            pause(2);
            a.digitalWrite(7,0);    %% close the arm to grip the object
            pause(.6);
            a.digitalWrite(7,1);
            pause(2);
            a.digitalWrite(2,0);    %% move the robot back
            pause(time);
            a.digitalWrite(2,1);
            pause(2);
            a.digitalWrite(8,0);    %% open the arm to place the object
            pause(.5);
            a.digitalWrite(8,1);
            pause(2);
            a.digitalWrite(2,0);    %% move clear after placing the object
            pause(.5);
            a.digitalWrite(2,1);
        else
            disp('object not recognized well in either shot');
            a.digitalWrite(2,0);    %% return to the original position if the
            pause(.48);             %% object was not recognized properly
            a.digitalWrite(2,1);
        end
    else
        disp('no object');
    end
end

