2017-04-10 1 views
3

Detecting an object's position from video

So I built color-based object detection using OpenCV and ran it on a Raspberry Pi 3. It works: it tracks a tennis ball in real time (I'm using a Kinect v1 with the libfreenect library). Now I want to determine the position of the detected object: whether it is in the middle, more to the left, or more to the right. I was thinking of splitting the camera frame into 3 parts. I would have 3 booleans, one for the middle, one for the left and one for the right, and all 3 variables would then be sent over the USB connection. However, I have been trying for a week now to determine where the object is, and I just can't manage it, so I'm asking for help here.
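One simple way to do the three-way split described above (a minimal sketch; the 600-px width matches the `imutils.resize(frame, width=600)` call in the code below, and splitting at equal thirds is just one possible choice of boundaries):

```python
def classify_zone(x, frame_width=600):
    """Return (left, middle, right) booleans for a ball centered at x."""
    third = frame_width // 3
    left = x < third            # leftmost third of the frame
    right = x >= 2 * third      # rightmost third
    middle = not left and not right
    return left, middle, right

# e.g. classify_zone(300) -> (False, True, False)
```

Exactly one of the three flags is True for any x inside the frame, so the triple can be sent over USB as-is.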

Current working code for object detection using OpenCV (I detect the object by color):

# USAGE 
# python ball_tracking.py --video ball_tracking_example.mp4 
# python ball_tracking.py 

# import the necessary packages 
from collections import deque 
import numpy as np 
import argparse 
import imutils 
import cv2 

# construct the argument parse and parse the arguments 
ap = argparse.ArgumentParser() 
ap.add_argument("-v", "--video", 
    help="path to the (optional) video file") 
ap.add_argument("-b", "--buffer", type=int, default=64, 
    help="max buffer size") 
args = vars(ap.parse_args()) 

# define the lower and upper boundaries of the "green" 
# ball in the HSV color space, then initialize the 
# list of tracked points 
greenLower = (29, 86, 6) 
greenUpper = (64, 255, 255) 
pts = deque(maxlen=args["buffer"]) 

# if a video path was not supplied, grab the reference 
# to the webcam 
if not args.get("video", False): 
    camera = cv2.VideoCapture(0) 

# otherwise, grab a reference to the video file 
else: 
    camera = cv2.VideoCapture(args["video"]) 

# keep looping 
while True: 
    # grab the current frame 
    (grabbed, frame) = camera.read() 

    # if we are viewing a video and we did not grab a frame, 
    # then we have reached the end of the video 
    if args.get("video") and not grabbed: 
     break 

    # resize the frame, blur it, and convert it to the HSV 
    # color space 
    frame = imutils.resize(frame, width=600) 
    # blurred = cv2.GaussianBlur(frame, (11, 11), 0) 
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV) 

    # construct a mask for the color "green", then perform 
    # a series of dilations and erosions to remove any small 
    # blobs left in the mask 
    mask = cv2.inRange(hsv, greenLower, greenUpper) 
    mask = cv2.erode(mask, None, iterations=2) 
    mask = cv2.dilate(mask, None, iterations=2) 

    # find contours in the mask and initialize the current 
    # (x, y) center of the ball 
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, 
     cv2.CHAIN_APPROX_SIMPLE)[-2] 
    center = None 

    # only proceed if at least one contour was found 
    if len(cnts) > 0: 
     # find the largest contour in the mask, then use 
     # it to compute the minimum enclosing circle and 
     # centroid 
     c = max(cnts, key=cv2.contourArea) 
     ((x, y), radius) = cv2.minEnclosingCircle(c) 
     M = cv2.moments(c) 
     center = (int(M["m10"]/M["m00"]), int(M["m01"]/M["m00"])) 

     # only proceed if the radius meets a minimum size 
     if radius > 10: 
      # draw the circle and centroid on the frame, 
      # then update the list of tracked points 
      cv2.circle(frame, (int(x), int(y)), int(radius), 
       (0, 255, 255), 2) 
      cv2.circle(frame, center, 5, (0, 0, 255), -1) 
     #EDIT: 
     if int(x) > int(200) & int(x) < int(400): 
      middle = True 
      left = False 
      notleft = False 

     if int(x) > int(1) & int(x) < int(200): 
      left = True 
      middle = False 
      notleft = False 

     if int(x) > int(400) & int(x) < int(600): 
      notleft = True 
      left = False 
      middle = False 

     print ("middle: ", middle, " left: ", left, " right: ", notleft) 

    # update the points queue 
    pts.appendleft(center) 

    # loop over the set of tracked points 
    for i in xrange(1, len(pts)): 
     # if either of the tracked points are None, ignore 
     # them 
     if pts[i - 1] is None or pts[i] is None: 
      continue 

     # otherwise, compute the thickness of the line and 
     # draw the connecting lines 
     thickness = int(np.sqrt(args["buffer"]/float(i + 1)) * 2.5) 
     cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness) 

    # show the frame to our screen 
    cv2.imshow("Frame", frame) 
    key = cv2.waitKey(1) & 0xFF 

    # if the 'q' key is pressed, stop the loop 
    if key == ord("q"): 
     break 

# cleanup the camera and close any open windows 
camera.release() 
cv2.destroyAllWindows() 

The code is properly commented. Sending information over the USB port is not a problem; what I can't figure out is how to detect where the ball is. I'm running Raspbian on my Raspberry Pi.

EDIT: I forgot to mention, I'm only interested in the object's position along the X axis. I figured that since the current frame is resized to a width of 600, I would write 3 ifs like if x > 200 && x < 400: bool middle = true. It doesn't work though.

EDIT2: I think I got it working in a way, but "middle" will never be true. I get true for left and right, but never for middle.

+0

Isn't 'center' the actual center of the ball? Aren't you happy with that? I would do it like that, but instead of 'int' I'd use a 'double'. That should be what you're looking for. –


2
 if int(x) > 200 and int(x) < 400: 
      middle = True 
      left = False 
      notleft = False 

     if int(x) > 1 and int(x) < 200: 
      left = True 
      middle = False 
      notleft = False 

     if int(x) > 400 and int(x) < 600: 
      notleft = True 
      left = False 
      middle = False 

All I had to write was "and" instead of "&"... so much pain over such a small fix.
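The reason the `&` version misbehaves is Python operator precedence: bitwise `&` binds tighter than comparisons, so `x > 200 & x < 400` parses as the chained comparison `x > (200 & x) < 400`. A small demonstration:

```python
x = 300  # a ball centered in the middle third of a 600-px frame

# What the buggy condition actually computes:
# x > (200 & x) < 400  ->  (300 > 8) and (8 < 400)  ->  True
print(x > 200 & x < 400)        # True, but only by accident

# The "right" zone test is also True for x = 300, because 400 & 300 == 256:
print(x > 400 & x < 600)        # True -> the last if wins, middle stays False

# With `and`, each comparison is evaluated first, as intended:
print(x > 200 and x < 400)      # True
print(x > 400 and x < 600)      # False
```

Since all three ifs run in sequence, the later (wrongly true) "right" branch overwrites `middle = True`, which is exactly the symptom described in EDIT2.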

0

If the object whose position you're detecting is a circle, it would be better to use cv2.HoughCircles() instead of cv2.findContours(), since cv2.HoughCircles() directly returns the center position (x, y) of the circles.

You can find a sample of using HoughCircles() here.

Once you have the center of that circle, determining its position will be easy.

Good luck.

+0

I'll post an edit on how I did it. For now I only ever get left or right; middle will never be true. – user3029612

0

Here is a solution for your question:

# import the necessary packages 
from collections import deque 
import numpy as np 
import argparse 
import imutils 
import cv2 

# construct the argument parse and parse the arguments 
ap = argparse.ArgumentParser() 
ap.add_argument("-v", "--video", 
    help="path to the (optional) video file") 
ap.add_argument("-b", "--buffer", type=int, default=32, 
    help="max buffer size") 
args = vars(ap.parse_args()) 

# define the lower and upper boundaries of the "green" 
# ball in the HSV color space 
greenLower = (29, 86, 6) 
greenUpper = (64, 255, 255) 

# initialize the list of tracked points, the frame counter, 
# and the coordinate deltas 
pts = deque(maxlen=args["buffer"]) 
counter = 0 
(dX, dY) = (0, 0) 
direction = "" 

# if a video path was not supplied, grab the reference 
# to the webcam 
if not args.get("video", False): 
    camera = cv2.VideoCapture(0) 

# otherwise, grab a reference to the video file 
else: 
    camera = cv2.VideoCapture(args["video"]) 

# keep looping 
while True: 
    # grab the current frame 
    (grabbed, frame) = camera.read() 

    # if we are viewing a video and we did not grab a frame, 
    # then we have reached the end of the video 
    if args.get("video") and not grabbed: 
     break 

    # resize the frame, blur it, and convert it to the HSV 
    # color space 
    frame = imutils.resize(frame, width=600) 
    # blurred = cv2.GaussianBlur(frame, (11, 11), 0) 
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV) 

    # construct a mask for the color "green", then perform 
    # a series of dilations and erosions to remove any small 
    # blobs left in the mask 
    mask = cv2.inRange(hsv, greenLower, greenUpper) 
    mask = cv2.erode(mask, None, iterations=2) 
    mask = cv2.dilate(mask, None, iterations=2) 

    # find contours in the mask and initialize the current 
    # (x, y) center of the ball 
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, 
     cv2.CHAIN_APPROX_SIMPLE)[-2] 
    center = None 

    # only proceed if at least one contour was found 
    if len(cnts) > 0: 
     # find the largest contour in the mask, then use 
     # it to compute the minimum enclosing circle and 
     # centroid 
     c = max(cnts, key=cv2.contourArea) 
     ((x, y), radius) = cv2.minEnclosingCircle(c) 
     M = cv2.moments(c) 
     center = (int(M["m10"]/M["m00"]), int(M["m01"]/M["m00"])) 

     # only proceed if the radius meets a minimum size 
     if radius > 10: 
      # draw the circle and centroid on the frame, 
      # then update the list of tracked points 
      cv2.circle(frame, (int(x), int(y)), int(radius), 
       (0, 255, 255), 2) 
      cv2.circle(frame, center, 5, (0, 0, 255), -1) 
      pts.appendleft(center) 

    # loop over the set of tracked points 
    for i in np.arange(1, len(pts)): 
     # if either of the tracked points are None, ignore 
     # them 
     if pts[i - 1] is None or pts[i] is None: 
      continue 

     # check to see if enough points have been accumulated in 
     # the buffer 
     if counter >= 10 and i == 1 and pts[-10] is not None: 
      # compute the difference between the x and y 
      # coordinates and re-initialize the direction 
      # text variables 
      dX = pts[-10][0] - pts[i][0] 
      dY = pts[-10][1] - pts[i][1] 
      (dirX, dirY) = ("", "") 

      # ensure there is significant movement in the 
      # x-direction 
      if np.abs(dX) > 20: 
       dirX = "East" if np.sign(dX) == 1 else "West" 

      # ensure there is significant movement in the 
      # y-direction 
      if np.abs(dY) > 20: 
       dirY = "North" if np.sign(dY) == 1 else "South" 

      # handle when both directions are non-empty 
      if dirX != "" and dirY != "": 
       direction = "{}-{}".format(dirY, dirX) 

      # otherwise, only one direction is non-empty 
      else: 
       direction = dirX if dirX != "" else dirY 

     # otherwise, compute the thickness of the line and 
     # draw the connecting lines 
     thickness = int(np.sqrt(args["buffer"]/float(i + 1)) * 2.5) 
     cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness) 

    # show the movement deltas and the direction of movement on 
    # the frame 
    cv2.putText(frame, direction, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 
     0.65, (0, 0, 255), 3) 
    cv2.putText(frame, "dx: {}, dy: {}".format(dX, dY), 
     (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 
     0.35, (0, 0, 255), 1) 

    # show the frame to our screen and increment the frame counter 
    cv2.imshow("Frame", frame) 
    key = cv2.waitKey(1) & 0xFF 
    counter += 1 

    # if the 'q' key is pressed, stop the loop 
    if key == ord("q"): 
     break 

# cleanup the camera and close any open windows 
camera.release() 
cv2.destroyAllWindows()