[Coral] Playing with PoseNet

PoseNet is an open-source pose estimation model from Google, optimized for the Edge TPU. It identifies people in an image and returns keypoints: the x, y coordinates of each person's eyes, ears, nose, shoulders, knees, and so on.

Connecting the camera

To run PoseNet on Coral, first connect the Coral camera to the Coral board.

Coral and camera
Figure 1. The Coral board and the camera

On the top left corner of the board there is a camera connector. If the connector is closed, flip it open.

open closed
Figure 2. Camera connector open Figure 3. Camera connector closed

Then push the camera's ribbon cable into the connector and close the connector.

connected
Figure 4. Camera connected to Coral board

Next, connect the Coral board to your computer. Refer to this post if you don't know how.

Now check whether the camera is properly connected to the Coral board by running v4l2-ctl --list-devices. If the camera is connected, its name will appear in the output.
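The v4l2-ctl tool comes from the v4l-utils package; if it is not installed, a quick alternative sanity check is to look for the /dev/video* device nodes that V4L2 creates when a camera is detected. A minimal Python sketch (the function name here is my own, not from the repo):

```python
import glob

def list_video_devices():
    """Return the V4L2 device nodes (/dev/video*) present on the system."""
    return sorted(glob.glob('/dev/video*'))

# On a board with the camera detected this should list at least one node,
# e.g. '/dev/video0'; an empty list suggests the cable is not seated properly.
print(list_video_devices())
```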

camera connection
Figure 5. Checking camera connection

Running PoseNet

Then clone the PoseNet GitHub repo with the command git clone https://github.com/google-coral/project-posenet. If you now run ls, you will see a new project-posenet folder containing the contents of the repository.

The code we will run is pose_camera.py. It detects people in the streamed camera video and overlays keypoints and a skeleton on top of them. However, to display video output from Coral you need an HDMI cable and a screen to go with it. Since I do not have such equipment, I modified the code to check that PoseNet is functioning properly in a different way.

for pose in outputs:
  draw_pose(svg_canvas, pose, src_size, inference_box)
  if pose.score < 0.4: continue
  print('\nPose Score: ', pose.score)
  for label, keypoint in pose.keypoints.items():
    print('  %-20s x=%-4d y=%-4d score=%.1f' %
          (label.name, keypoint.point[0], keypoint.point[1], keypoint.score))

I took the print lines from simple_pose.py and added them to pose_camera.py. Now we can see the output coordinates as text.
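The printing logic can be sketched as a standalone function that runs without a Coral board, which makes the score filter and formatting easy to see. This is only an illustration: the tuple layout, function name, threshold constant, and demo values below are mine, not part of the repo (which passes real PoseNet keypoint objects instead).

```python
POSE_SCORE_THRESHOLD = 0.4  # same cutoff used in the modified pose_camera.py

def format_pose(score, keypoints):
    """Return printable lines for one pose, or None if it is below threshold.

    keypoints is assumed to be a list of (label, x, y, keypoint_score) tuples.
    """
    if score < POSE_SCORE_THRESHOLD:
        return None
    lines = ['Pose Score:  %.2f' % score]
    for label, x, y, kp_score in keypoints:
        lines.append('  %-20s x=%-4d y=%-4d score=%.1f' % (label, x, y, kp_score))
    return lines

# Hypothetical demo values standing in for real PoseNet output.
demo = [('nose', 320, 120, 0.9), ('left eye', 300, 100, 0.8)]
for line in format_pose(0.7, demo):
    print(line)
```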

posenet output
Figure 6. PoseNet output