Professional supplier of AI and robot teaching equipment

Six-Axis Vision Inspection Production Line

$9,800

Available on backorder

The production line, built around a vision system and a six-axis robot, targets practical applications in visual recognition, positioning, and detection within the fields of artificial intelligence, robotics, and smart manufacturing. It sets up a typical robot + vision production line scenario.
The production line uses edge computing terminals as processing units, with a pre-deployed AI software environment. It supports not only basic experiments such as digital image processing, machine vision, and robot motion control, but also more complex experiments including character recognition, defect detection, object recognition, and feature analysis.

  • Industrial-grade design
  • 1-year warranty
  • Free technical support
  • Customization available
  • Educational resources included
  • Compatible with major platforms
  • Supports secondary development

Product Overview:

  • Support for Courses and Key Concepts
    Meets the teaching needs for courses or topics such as Python programming, machine learning, deep learning, digital image processing, computer vision, vision-based robotic applications, and sensor control.
  • Integrated Platform Design
    The production line platform integrates the robot, vision system, and multiple sensors in a unified design. The entire system can be placed directly on the desktop for easy operation during experiments and teaching.
  • Flexible Component Installation & Diverse Scenarios
    All components use modular installation, allowing free positioning to create diverse experimental training scenarios that cater to different teaching needs.
  • Powerful Computational Support
    The edge computing terminal provides the computational power, supporting deployment of mainstream AI frameworks such as PyTorch and TensorFlow and ensuring efficient computational performance for experiments (a minimal environment check is sketched after this list).
  • Integration of Various Technologies
    Combines deep learning, machine vision, robotic control, vision-robot collaboration, and production line motion control to provide a comprehensive learning experience for students.
  • Built-in AI and Vision Algorithm Library
    The built-in AI and vision algorithm library includes object classification, target detection, defect detection, OCR character recognition, and other use cases, fulfilling the needs for both basic application and development teaching.
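
As a quick illustration of the computational support noted above, the following minimal sketch (assuming PyTorch and TensorFlow are already installed on the edge computing terminal, as the overview states) checks framework versions and GPU availability; it uses only standard framework calls, not product-specific APIs.

```python
# Minimal environment check on the edge computing terminal.
# Assumes PyTorch and TensorFlow are already installed, as stated above.
import torch
import tensorflow as tf

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())          # GPU acceleration for training/inference
print("TensorFlow:", tf.__version__)
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))
```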

Product Features and Functions:

1. Open Experiment Environment

The experiment code is written in the Jupyter Notebook environment with the following features:

  • Interactive Programming
    Teachers and students can perform interactive programming experiments directly in the browser.
  • Markdown Editing Support
    Supports Markdown editing alongside code cells, so notebooks can combine headings, explanatory text, and mathematical formulas with code, which makes code easier to explain, especially for teaching. Code can be split across separate cells for debugging and for interactively monitoring variable values and types.
  • Terminal Command Verification
    Experiments can also be verified by running commands in the built-in terminal of the provided environment.
  • Multiple Students Using Different Models
    The environment supports multiple students calling different models to identify samples, meeting the requirements of various experiment projects.
  • Support for Multiple Frameworks
    Supports multiple deep learning frameworks, including but not limited to TensorFlow, PyTorch, etc.
  • Teacher’s Flexible Settings
    Teachers can prepare exercises in which code cells are deliberately left blank for students to complete, as in the sketch after this list.
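
For instance, a teacher might provide a notebook cell like the sketch below, in which the preprocessing is given and the classification step is left blank for the student. The image and model file names ("sample.jpg", "classifier.pt") are placeholders, not files shipped with the product.

```python
# Teacher-provided part of the cell: load and preprocess a sample image (complete).
import cv2
import numpy as np

img = cv2.imread("sample.jpg")                 # placeholder image name
assert img is not None, "place a sample image next to the notebook"
img = cv2.resize(img, (224, 224))
x = img.astype(np.float32) / 255.0             # normalize to [0, 1]
x = np.transpose(x, (2, 0, 1))[None, ...]      # HWC -> NCHW batch of one

# Student part: deliberately left blank.
# TODO: load a trained classifier (e.g. the placeholder "classifier.pt"),
#       run inference on x, and print the predicted class.
```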

2. Open Source Code

  • Open Software Framework & Algorithm Code
    The entire software framework and algorithm-level source code are open, supporting secondary development. Full experimental manuals and technical documentation are provided to help users with development.
  • Product Documentation
    Provides documentation related to the software and hardware architecture and design methodology.

3. Graphical Interactive Software

  • Software Overview
    The 2D machine vision software is Hikvision Vision Master, which provides an intuitive graphical interface. Function icons are clear and easy to understand, drag-and-drop operations allow quick construction of vision solutions, and each module's status indicator is displayed in real time.
  • Key Features and Parameters:
    • Includes nearly 1,000 image processing operators and many interactive development tools, with over 130 modules, supporting various operating systems and image capture hardware.
    • Users can create vision solutions based on needs and customize operation interfaces for personalized use.
    • Compatible with the GigE Vision and USB3 Vision protocols, supports various camera brands, and can process both locally stored images and live camera data.
    • Easy secondary development: simplified interfaces can reduce the amount of code by up to 90%. Controls can be imported into Visual Studio with one click and embedded in Qt, MFC, WPF, and WinForm interface development.
    • Supports user-defined module development. Users can encapsulate custom algorithms as VM modules for drag-and-drop use.
    • Supports common industrial communication protocols such as TCP/IP, Modbus, serial, UDP, and EtherNet/IP, and is compatible with mainstream PLC models (a generic TCP/IP receiving example is sketched below).
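
Because the software communicates over standard protocols such as TCP/IP, an external program can receive its results with ordinary sockets. The sketch below is a generic Python TCP server; the port and the comma-separated message format are assumptions for illustration, not the Vision Master specification.

```python
# Generic TCP server that receives text results (e.g. "OK,x,y,angle") pushed by
# a vision solution configured to send over TCP/IP. Port and message format are
# illustrative assumptions only.
import socket

HOST, PORT = "0.0.0.0", 6000          # assumed port; set to match the vision software

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    with conn:
        print("vision client connected from", addr)
        while True:
            data = conn.recv(1024)
            if not data:
                break
            print("result:", data.decode("utf-8", errors="replace").strip())
```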

4. Machine Vision and Object Recognition

Algorithm development is based on the open-source OpenCV vision library and is applicable to scenarios such as document recognition, license plate recognition, character recognition, object identification, defect detection, and more.
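
As a simple illustration of this OpenCV-based development, the sketch below thresholds a grayscale part image and reports blob-like regions as candidate defects by contour area; the file name and the area threshold are placeholder assumptions.

```python
# Minimal defect-detection sketch with OpenCV: threshold a grayscale image and
# flag contours larger than an assumed area as candidate defects.
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)     # placeholder image name
assert img is not None, "place a test image next to the script"

blur = cv2.GaussianBlur(img, (5, 5), 0)                 # suppress sensor noise
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
defects = [c for c in contours if cv2.contourArea(c) > 50]   # assumed area threshold
print(f"candidate defects found: {len(defects)}")
```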

5. Robot Hand-Eye Calibration

Introducing a vision system and calibrating the hand-eye relationship between the camera and the robot enables autonomous, intelligent motion, allowing the robot to perform tasks such as waste sorting, logistics handling, palletizing, and object sorting.
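
One common way to implement hand-eye calibration is OpenCV's calibrateHandEye, which estimates the camera-to-gripper transform from paired robot flange poses and camera observations of a calibration board. The sketch below only wraps that call; collecting the pose lists during the calibration procedure is assumed to happen elsewhere.

```python
# Hand-eye calibration sketch (eye-in-hand case) using OpenCV.
# The caller supplies lists of 3x3 rotation matrices and 3x1 translation vectors:
#   R_gripper2base, t_gripper2base  - robot flange poses read from the controller
#   R_target2cam,  t_target2cam     - calibration-board poses detected by the camera
import cv2

def solve_hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Return the camera-to-gripper rotation and translation."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,   # Tsai's classic linear solution
    )
    return R_cam2gripper, t_cam2gripper
```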

6. Vision-based Robotic Applications

Combining robotic arms with vision systems to recognize targets of various sizes and appearances, supporting applications like object sorting, smart palletizing, object recognition, and character recognition.
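
One common pattern in such applications is to map a detected pixel position into robot coordinates before commanding a grasp. The sketch below assumes a fixed camera over a flat work surface and a precomputed pixel-to-robot homography; the matrix H and the centroid are placeholder values for illustration.

```python
# Map a detected object's pixel centroid (u, v) to robot base XY via a homography.
# H is a placeholder; in practice it is estimated with cv2.findHomography from
# pixel/robot point pairs taught during setup.
import cv2
import numpy as np

H = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 20.0],
              [0.0, 0.0,  1.0]], dtype=np.float64)        # placeholder calibration

pixel = np.array([[[320.0, 240.0]]], dtype=np.float64)    # detected centroid (u, v)
robot_xy = cv2.perspectiveTransform(pixel, H)[0, 0]
print("move robot to X=%.1f mm, Y=%.1f mm" % (robot_xy[0], robot_xy[1]))
```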

7. Production Line System Integration

The production line integrates stepper motors, photoelectric sensors, and edge computing terminals through an IO controller. The controller starts, stops, and pauses the line, triggers the vision system's image capture, and commands the robot's grasping actions, so that the full process runs as one integrated system.
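
The integration logic can be summarized as a simple control cycle: run the conveyor, wait for the photoelectric sensor, pause under the camera, trigger inspection, then command the robot. The sketch below illustrates that sequence; every hardware call in it is a hypothetical stub, not an API of the actual IO controller, camera, or robot.

```python
# Simplified production-line cycle. Every hardware function below is a
# hypothetical stub standing in for the real IO controller, camera, and robot.
import time

def conveyor(run):
    print("conveyor", "start" if run else "stop")     # stub: IO controller output

def part_at_station():
    time.sleep(0.5)                                   # stub: poll photoelectric sensor
    return True

def capture_and_inspect():
    return {"ok": True, "x": 120.0, "y": 45.0}        # stub: trigger camera, read result

def robot_pick(x, y):
    print(f"robot picks part at ({x}, {y})")          # stub: send grasp command

def run_cycle():
    conveyor(True)                        # start the line
    while not part_at_station():          # wait for the photoelectric sensor
        pass
    conveyor(False)                       # pause the part under the camera
    result = capture_and_inspect()        # trigger the vision system
    if result["ok"]:
        robot_pick(result["x"], result["y"])
    conveyor(True)                        # resume for the next part

run_cycle()
```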