Real-Time Hand Motion Detection with the Handtrack.js Library (Recommended)

Below, I will walk through in detail how to use the Handtrack.js library to detect hand movements in real time.

1. Introduction

Handtrack.js is an open-source JavaScript library built on TensorFlow.js for detecting hand movements in real time. It uses a deep-learning model to locate hands in an image and exposes a simple API for tracking hand positions frame by frame. Handtrack.js runs entirely in the browser, with no other software to install.
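
As a quick orientation, a minimal usage sketch looks roughly like the following. It assumes the library is available as handTrack (via a script tag or the npm import shown later) and that an image or video element with the hypothetical id inputSource exists on the page; the option names follow the handtrack.js README, so treat them as assumptions and verify them against your installed version:

// Minimal sketch: load the detector, run it once on an image or video element,
// and log the bounding boxes it returns.
const modelParams = {
  flipHorizontal: true,   // mirror the input, useful for selfie-style webcams
  maxNumBoxes: 3,         // detect at most 3 hands
  scoreThreshold: 0.6     // discard low-confidence detections
};

handTrack.load(modelParams).then(model => {
  const source = document.getElementById('inputSource');   // hypothetical <img> or <video>
  model.detect(source).then(predictions => {
    // Each prediction contains a bbox of the form [x, y, width, height] plus a score.
    console.log(predictions);
  });
});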

2. Prerequisites

Before using Handtrack.js, you should be familiar with HTML, CSS, and JavaScript and have some general programming experience.

3. Installing Handtrack.js

Before you can use Handtrack.js, add it to your project. You can install it with npm or download the front-end JS file directly. Here we install Handtrack.js with npm (a CDN alternative is sketched after the steps):

  1. In a terminal, change into your project's directory.
  2. Run the following command to install Handtrack.js:

npm install handtrackjs

  3. Once the installation finishes, import the hand-tracking library in your project:

import * as handTrack from 'handtrackjs';
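
If you are not using a bundler, you can instead pull the library in with a plain script tag. The URL below follows the usual jsDelivr pattern for the handtrackjs npm package; treat it as an assumption and check the project README for the exact path and version:

<!-- Loads handtrack.js from a CDN and exposes a global handTrack object (assumed path). -->
<script src="https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js"></script>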

4. Using Handtrack.js

Using Handtrack.js involves the following steps:

4.1 Loading the model

First, make sure the Handtrack.js script itself is available to your page (via the npm import above or a script tag). The library ships with a pretrained hand-detection model; the model files (model.json, metadata.json, and weights.bin) come from the author's handtracking project:

https://github.com/victordibia/handtracking

In current releases of the library you do not need to download or reference these files yourself: calling handTrack.load() fetches and initializes the pretrained model for you. If you do host the model files yourself, serve them as ordinary static assets; do not reference JSON or binary weight files with script tags, since the browser cannot execute them.

Then load the model (the option names below follow the handtrack.js README and are all optional; await must run inside an async function or an ES module):

const modelParams = {
  flipHorizontal: true,   // mirror the input for a selfie-style webcam
  maxNumBoxes: 1,         // maximum number of hands to detect
  iouThreshold: 0.5,      // non-max suppression threshold
  scoreThreshold: 0.7     // minimum confidence to keep a detection
};

const model = await handTrack.load(modelParams);
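
Handtrack.js also ships a small helper for starting the webcam, which can replace the manual getUserMedia calls used later in this article. A brief sketch, assuming the startVideo/stopVideo helpers described in the handtrack.js README are present in your version:

// Attach the webcam stream to an existing <video> element via the library helper.
const video = document.getElementById('video');

handTrack.startVideo(video).then(status => {
  if (status) {
    console.log('Camera started, ready to detect');
  } else {
    console.log('Please enable camera access in the browser');
  }
});

// Later, stop the stream again:
// handTrack.stopVideo(video);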

4.2 Detecting hand positions

Once the model has loaded, you can detect hand positions with the following code:

const video = document.createElement('video');
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
document.body.append(video, canvas);   // add both elements to the page so they are visible

navigator.mediaDevices.getUserMedia({video: true}).then(stream => {
  video.srcObject = stream;
  video.play();
});

video.addEventListener('loadeddata', async () => {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  // Draw the current frame, then outline each detected hand.
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const predictions = await model.detect(video);
  predictions.forEach(prediction => {
    const [x, y, width, height] = prediction.bbox;   // bbox is [x, y, width, height]
    ctx.strokeRect(x, y, width, height);
  });

  // Keep detecting on later frames (detectHands is defined in section 4.3).
  requestAnimationFrame(() => {
    detectHands(video, model);
  });
});

This code creates a video element and a canvas element, appends them to the page, and grabs a video stream from the camera. When the first frame is available, it draws that frame onto the canvas, asks Handtrack.js for hand positions, outlines each detected hand on the canvas, and then hands off to the detectHands loop described in the next section.
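
Rather than drawing the frame and the boxes manually, you can let the library render them. A short sketch, assuming the renderPredictions method documented in the handtrack.js README is available in your installed version:

// Asks handtrack.js to render the current video frame plus labelled boxes onto the canvas.
async function drawWithHelper(model, video, canvas, ctx) {
  const predictions = await model.detect(video);
  model.renderPredictions(predictions, canvas, ctx, video);
}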

4.3 Tracking hand positions in real time

To track hand positions in real time, run the detection code repeatedly and redraw the canvas whenever new hand positions arrive. Here is an example:

let startTime = new Date().getTime();

async function detectHands(video, model) {
  const currentTime = new Date().getTime();
  if (currentTime - startTime >= 100) {   // run detection at most every ~100 ms
    startTime = currentTime;

    const predictions = await model.detect(video);
    const canvas = document.getElementById('canvas');   // assumes a <canvas id="canvas"> on the page
    const ctx = canvas.getContext('2d');

    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    predictions.forEach(prediction => {
      const [x, y, width, height] = prediction.bbox;
      ctx.strokeRect(x, y, width, height);
    });
  }

  requestAnimationFrame(() => {
    detectHands(video, model);
  });
}

This code polls the Handtrack.js API, running a detection roughly every 100 milliseconds. Whenever new hand positions come back, it clears the canvas and redraws them.
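
If you do not need per-frame redrawing, the same throttling can be written more simply with setInterval instead of a timestamp check inside requestAnimationFrame. A minimal sketch using only standard DOM APIs (the 100 ms interval is an arbitrary choice):

// Runs detection on a fixed schedule instead of inside requestAnimationFrame.
function startDetectionLoop(video, model, canvas, intervalMs = 100) {
  const ctx = canvas.getContext('2d');

  return setInterval(async () => {
    const predictions = await model.detect(video);

    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    predictions.forEach(prediction => {
      const [x, y, width, height] = prediction.bbox;
      ctx.strokeRect(x, y, width, height);
    });
  }, intervalMs);
}

// Usage: const timerId = startDetectionLoop(video, model, canvas);
// Call clearInterval(timerId) to stop tracking.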

5. Example code

Below are two examples that use Handtrack.js to detect hand positions:

5.1 Gesture recognition

This example approximates simple "gestures" such as "OK" and "heart". It does not classify the hand shape; instead, it checks the size and position of the detected hand's bounding box and draws the matching emoji when the hand sits in the expected region of the frame:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Handtrack.js Gesture Recognition</title>
  <script src="path/to/handtrack.js"></script>
</head>
<body>
  <video id="video" width="640" height="480" autoplay></video>
  <canvas id="canvas" width="640" height="480"></canvas>
  <script>
    const video = document.getElementById('video');
    const canvas = document.getElementById('canvas');
    const ctx = canvas.getContext('2d');

    // Load the bundled pretrained model, then start the webcam. Top-level await
    // is not available in a classic script, so the two steps are chained.
    let model = null;
    handTrack.load().then(loadedModel => {
      model = loadedModel;
      return navigator.mediaDevices.getUserMedia({ video: true });
    }).then(stream => {
      video.srcObject = stream;
      video.play();
    });

    video.addEventListener('loadeddata', async () => {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;

      requestAnimationFrame(() => {
        detectHands(video, model);
      });
    });

    async function detectHands(video, model) {
      const predictions = await model.detect(video);
      const okEmoji = '\u{1F44C}';
      const heartEmoji = '\u2764\uFE0F';

      // Redraw the current frame before overlaying boxes and emoji.
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

      predictions.forEach(prediction => {
        const [x, y, width, height] = prediction.bbox;   // bbox is [x, y, width, height]
        ctx.strokeRect(x, y, width, height);

        if (width > 100 && height > 100) {
          const centerX = x + width / 2;
          const centerY = y + height / 2;

          if (centerX > video.videoWidth / 2 - 50 && centerX < video.videoWidth / 2 + 50
            && centerY > video.videoHeight / 2 - 50 && centerY < video.videoHeight / 2 + 50) {
            ctx.font = '48px serif';
            ctx.fillText(okEmoji, centerX, centerY);

          } else if (centerX > video.videoWidth / 2 - 100 && centerX < video.videoWidth / 2
            && centerY > video.videoHeight / 2 - 100 && centerY < video.videoHeight / 2) {
            ctx.font = '48px serif';
            ctx.fillText(heartEmoji, centerX, centerY);
          }
        }
      });

      requestAnimationFrame(() => {
        detectHands(video, model);
      });
    }
  </script>
</body>
</html>
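
Note that this example only infers a "gesture" from where the hand is, not from its shape. Newer handtrack.js releases also attach a pose label to each prediction (the project README mentions classes such as open, closed, pinch, and point); if your installed version reports these, mapping poses to emoji becomes a simple lookup. A hedged sketch, assuming each prediction carries a string label field (check whether your version calls it label or class):

// Maps hand-pose labels to emoji. Assumes each prediction has a string label
// such as "open", "closed", "pinch" or "point" (handtrack.js 2.x); adjust the
// field name ("label" vs "class") to match the predictions your version returns.
const emojiForLabel = {
  open: '\u{1F590}\uFE0F',   // raised hand
  closed: '\u270A',          // fist
  pinch: '\u{1F90F}',        // pinching hand
  point: '\u{1F446}'         // pointing up
};

function emojiForPrediction(prediction) {
  return emojiForLabel[prediction.label] || null;
}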

5.2 Click and drag

This example demonstrates how to drag an element with the mouse and how to use Handtrack.js to detect whether the hand is raised or lowered:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Handtrack.js Click and Drag</title>
  <style>
    #box {
      width: 100px;
      height: 100px;
      background-color: red;
      position: absolute;
      top: 50%;
      left: 50%;
      transform: translate(-50%, -50%);
    }
  </style>
  <script src="path/to/handtrack.js"></script>
</head>
<body>
  <video id="video" width="640" height="480" autoplay></video>
  <canvas id="canvas" width="640" height="480"></canvas>

  <div id="box"></div>

  <script>
    const video = document.getElementById('video');
    const canvas = document.getElementById('canvas');
    const ctx = canvas.getContext('2d');

    // Load the bundled pretrained model, then start the webcam. Top-level await
    // is not available in a classic script, so the two steps are chained.
    let model = null;
    handTrack.load().then(loadedModel => {
      model = loadedModel;
      return navigator.mediaDevices.getUserMedia({ video: true });
    }).then(stream => {
      video.srcObject = stream;
      video.play();
    });

    let isHandDown = false;
    let mouseX = 0;
    let mouseY = 0;

    video.addEventListener('loadeddata', async () => {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;

      requestAnimationFrame(() => {
        detectHands(video, model);
      });
    });

    async function detectHands(video, model) {
      const predictions = await model.detect(video);
      const box = document.getElementById('box');

      // Redraw the current frame before overlaying the detection boxes.
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

      predictions.forEach(prediction => {
        const [x, y, width, height] = prediction.bbox;   // bbox is [x, y, width, height]
        ctx.strokeRect(x, y, width, height);

        if (width > 100 && height > 100) {
          const centerX = x + width / 2;
          const centerY = y + height / 2;

          mouseX = centerX;
          mouseY = centerY;

          const isHandUp = centerY < video.videoHeight / 2;

          if (isHandUp && !isHandDown) {
            isHandDown = true;
            box.style.backgroundColor = 'green';
            box.style.cursor = 'grabbing';
            box.style.userSelect = 'none';
            box.onmousedown = function(event) {
              event.preventDefault();
              box.style.position = 'absolute';

              box.style.left = event.pageX - box.offsetWidth / 2 + 'px';
              box.style.top = event.pageY - box.offsetHeight / 2 + 'px';

              const moveHandler = function(event) {
                box.style.left = event.pageX - box.offsetWidth / 2 + 'px';
                box.style.top = event.pageY - box.offsetHeight / 2 + 'px';
              };

              const upHandler = function() {
                isHandDown = false;
                box.style.backgroundColor = 'red';
                box.style.cursor = 'grab';
                box.style.userSelect = 'auto';
                document.removeEventListener('mousemove', moveHandler);
                document.removeEventListener('mouseup', upHandler);
              };

              document.addEventListener('mousemove', moveHandler);
              document.addEventListener('mouseup', upHandler);
            };
          } else if (!isHandUp && isHandDown) {
            // Hand lowered again: release the box and restore its colour.
            isHandDown = false;
            box.style.backgroundColor = 'red';
          }
        }
      });

      if (isHandDown) {
        box.style.left = mouseX - box.offsetWidth / 2 + 'px';
        box.style.top = mouseY - box.offsetHeight / 2 + 'px';
      }

      requestAnimationFrame(() => {
        detectHands(video, model);
      });
    }
  </script>
</body>
</html>

This code creates a red box. When you raise your hand, the box turns green, follows the detected hand position, and can also be dragged with the mouse; when you lower your hand again, the box turns back red and can no longer be dragged.
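
If you want the detected hand to drive other mouse-based UI as well, one option is to translate the hand centre into synthetic mouse events. A rough sketch under simplified assumptions (the video is assumed to be rendered at the top-left of the page, and the coordinate scaling is approximate):

// Dispatches a synthetic mousemove at the detected hand position so that
// ordinary mouse-driven UI code can react to hand movement.
function emitHandMove(centerX, centerY, video) {
  // Scale from video coordinates to page coordinates.
  const pageX = centerX * (video.clientWidth / video.videoWidth);
  const pageY = centerY * (video.clientHeight / video.videoHeight);

  const target = document.elementFromPoint(pageX, pageY);
  if (target) {
    target.dispatchEvent(new MouseEvent('mousemove', {
      clientX: pageX,
      clientY: pageY,
      bubbles: true
    }));
  }
}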
