Delivering effective teaching tools and resources for higher education.

AI Experiment Box

AI Experiment Box (NVIDIA Processor)
Integrated AI technology, empowering learning.


The AI Experiment Box is designed for AI-related disciplines and integrates various embedded application modules, including a computer vision system, speech processing system, robotic arm, gesture sensor, temperature and humidity sensor, and barometric pressure sensor. Built around an edge computing terminal, it provides a unified communication protocol and interface for AI application development. The experiment box runs on the Linux operating system, and its course resources are developed in Python, making it suitable for teaching and hands-on practice across more than eight AI-related courses.

  • Professional Educational Tool
  • Customizable Solutions
  • Rich Curriculum

Specifications

Dimensions (in): 18.9 × 15 × 7.9
Weight (kg): 11
Communication Interfaces: USB, Wi-Fi, Bluetooth
Structure: Aluminum alloy, integrated design; includes keyboard, mouse, power adapter, and educational tools; plug-and-play
Display: 17-inch IPS screen, ≥ 1920×1080 resolution
Integrated Components: Robotic arm, 2D vision, depth vision, two-axis gimbal, voice module, embedded sensors, etc.
Computing Unit: 6-core NVIDIA Carmel ARM CPU, 8GB RAM, 128GB storage, NVIDIA Volta GPU with 8GB memory
Vision Systems: 2D camera ≥ 640×480 resolution; depth camera ≥ 640×400 resolution
Robotic Arm: 5-axis, 15cm gripping range, two-finger gripper, kinematic solver, one-button start/reset
Sensors: Ultrasonic, temperature, heart rate, pressure, Bluetooth, gyroscope, OLED display
Open-Source Software: Full software and source code provided for secondary development

Video

1. Voice-Based Robotic Arm Control

2. Voice-Based Intelligent Sensor Control

3. Flame Sensor

4. Human Radar Detection

5. Face Detection and Distance Measurement

6. Object Edge Length and Area Measurement

Description

  • Supports the teaching of courses or knowledge points in Python programming, machine learning, deep learning, digital image processing, computer vision, speech recognition, embedded systems and applications, intelligent robotics, etc.
  • Features an integrated design with a metallic aluminum alloy frame for enhanced durability.
  • Equipped with a 17-inch HD display, full keyboard, mouse, and experimental tools, supporting plug-and-play with no additional configuration needed by the user.
  • Utilizes an edge computing terminal for computational power, supporting the deployment of mainstream AI frameworks such as PyTorch and TensorFlow.
  • Combines Linux operating system, deep learning, machine vision, speech recognition, robot arm control, and embedded sensors, among other components and technologies.
  • Supports a variety of experimental combinations, including 2D vision + robotic arm, depth vision + pan-tilt gimbal, speech + sensors, speech + 2D/depth vision, and speech + robotic arm.

Open Experimental Environment

The experiment code can be executed in the Jupyter Notebook environment with the following features:

  • Both teachers and students can directly conduct interactive programming experiments through their browsers.
  • Markdown editing is supported: cells can mix code with formatted text, headings, and mathematical formulas, which makes it easier to explain code and suits teaching scenarios. Code can be split across cells for step-by-step debugging, with interactive inspection of variable values and types during testing.
  • Experiments can also be verified by running commands directly in the terminal of the provided environment.
  • The environment supports multiple students working with different models for sample recognition, meeting the requirements of various experimental projects.
  • The experimental environment supports multiple deep learning frameworks, including TensorFlow and PyTorch, among others; a brief example cell is sketched below.
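
As a concrete illustration of this workflow, the cell below is a minimal sketch of what a student might run first: it checks that the preinstalled frameworks import correctly and shows the kind of inline variable inspection the notebook provides. It assumes only the standard NumPy, PyTorch, and TensorFlow packages named in this section, not any product-specific API.

```python
# Minimal notebook cell: verify the environment and inspect a variable interactively.
import sys
import numpy as np
import torch          # PyTorch
import tensorflow as tf

print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("TensorFlow:", tf.__version__)

# Step-by-step debugging: keep intermediate results in variables and
# inspect their values and types directly in the cell output.
samples = np.random.rand(4, 3)
print(type(samples), samples.shape)
samples.mean(axis=0)   # the last expression in a cell is rendered inline
```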

Open Source Code

  • All software frameworks are fully open-source.
  • Algorithm-level source code is provided for transparency and customization.
  • The product supports secondary development for further adaptation.
  • Comprehensive experiment guides are included to assist users.
  • Technical documentation is available to facilitate understanding and usage.
  • Architectural and design documentation for both hardware and software is provided.

Experimental Combinations

  • AI + Vision Sorting

The robotic arm works together with the vision system for target sorting and smart stacking. Deep learning models handle complex object recognition, supporting training grounded in real-world industrial scenarios.
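
For illustration, here is a minimal sketch of a vision-guided pick-and-place loop. The toy color detector stands in for the deep learning model mentioned above, and the `arm` object, its `pick`/`place`/`reset` methods, and the `pixel_to_arm` calibration function are hypothetical placeholders rather than the box's documented API.

```python
import cv2

def detect_red_objects(frame):
    """Toy detector: return centroids of red blobs in a BGR frame.
    In the actual experiments a trained deep learning model would play this role."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 500:                      # ignore tiny blobs
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

def sort_frame(frame, arm, pixel_to_arm, drop_pose):
    """Pick every detected object and place it at the drop pose.
    `arm`, `pixel_to_arm`, and `drop_pose` are hypothetical placeholders."""
    for cx, cy in detect_red_objects(frame):
        x, y = pixel_to_arm(cx, cy)             # hand-eye calibration mapping
        arm.pick(x, y)                          # grasp at the mapped position
        arm.place(*drop_pose)                   # drop at the target bin
        arm.reset()                             # return to the home pose
```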

  • AI + Depth Vision

Depth vision enables height, distance, and contour detection, ideal for obstacle detection, live object recognition, and target tracking experiments.
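
As a rough sketch of how such measurements can be derived, the function below assumes the depth camera delivers a per-pixel depth map in millimetres as a NumPy array and that the distance to the work surface is known from a prior reference capture; the real camera SDK and data format may differ.

```python
import cv2
import numpy as np

def analyze_depth(depth_mm, table_mm):
    """Estimate nearest-object distance, its height above the table,
    and its contours from a depth frame in millimetres (assumed format)."""
    valid = depth_mm[depth_mm > 0]                      # drop invalid (zero-depth) pixels
    if valid.size == 0:
        return None, None, []                           # no usable depth data in this frame
    nearest = float(valid.min())                        # distance to the closest surface
    height = max(0.0, table_mm - nearest)               # object height above the table
    above = ((depth_mm > 0) & (depth_mm < table_mm - 10)).astype(np.uint8)
    contours, _ = cv2.findContours(above, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return nearest, height, contours
```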

  • AI + Speech Processing

The microphone supports sound detection and speech recognition. Recognized voice commands are passed to the AI processor, which directs the robotic arm to perform the corresponding tasks.
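
A simple way to picture this interaction is a keyword dispatch from recognized text to arm actions, as sketched below. The `recognized_text` input and the `arm` object with `pick_default`, `place_default`, and `reset` methods are hypothetical placeholders; the box's actual speech module and arm driver expose their own APIs.

```python
# Map recognized phrases to robotic arm actions (placeholder API).
COMMANDS = {
    "pick up":  lambda arm: arm.pick_default(),
    "put down": lambda arm: arm.place_default(),
    "reset":    lambda arm: arm.reset(),
}

def handle_utterance(recognized_text, arm):
    """Run the first arm action whose trigger phrase appears in the utterance."""
    text = recognized_text.lower()
    for phrase, action in COMMANDS.items():
        if phrase in text:
            action(arm)
            return phrase
    return None   # no known command found
```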

  • AI + Embedded Sensors

Offers 12 types of sensors for experiments like facial recognition, voice control, and temperature control systems.
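
Since the sensor courses cover serial communication and Arduino programming, a typical pattern is to stream readings from the sensor board over a serial link, as in the sketch below. The port name and the simple "name:value" line format are assumptions for illustration; the firmware shipped with the box may use a different protocol.

```python
# Read sensor values over a serial link using pyserial.
import serial  # pip install pyserial

def read_sensor_values(port="/dev/ttyUSB0", baud=115200, lines=20):
    """Collect the latest value reported for each sensor name.
    Assumes lines formatted like "temperature:24.6" (hypothetical protocol)."""
    readings = {}
    with serial.Serial(port, baud, timeout=1) as ser:
        for _ in range(lines):
            raw = ser.readline().decode(errors="ignore").strip()
            if ":" in raw:
                name, value = raw.split(":", 1)
                try:
                    readings[name] = float(value)
                except ValueError:
                    pass   # skip malformed lines
    return readings

# Example usage:
# print(read_sensor_values().get("temperature"))
```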

Components

Freely add or remove devices as needed. Contact us for assistance.

Courses

Education-focused courses designed to enhance learning.

Python Programming

(1) Numeric types, conversion, and operations

(2) Python operators, built-in functions, and basic sequence usage

(3) Program selection structure experiments

(4) Program loop structure experiments

(5) List experiments

(6) Set experiments

(7) Function experiments

(8) String manipulation experiments

(9) Regular expression experiments

(10) Data visualization

(11) Python data processing

(12) File handling in Python

(13) Multi-process programming in Python

(14) Multi-threading in Python

(15) Differences between processes and threads in Python

(16) Object-oriented programming in Python

(17) Using Python classes and object instantiation

(18) Using instantiated Python objects

(19) Inheritance in Python classes

(20) Serial communication in Python

(21) Socket TCP communication in Python

(22) Socket UDP communication in Python

(23) Modbus communication in Python

(24) Setting up PyQt5 environment

(25) Using PyQt5

(26) Using Qt Designer and PyUIC

Machine Learning

(1) Boston housing price prediction using linear regression

(2) Movie genre classification with K-Nearest Neighbors

(3) Unsupervised data classification using K-Means

(4) Breast cancer diagnosis using decision trees

(5) Data classification using AdaBoost

(6) Double coin toss model validation with EM inference

(7) Spam email filtering with Naive Bayes

(8) Face recognition system design using Random Forests

(9) Dynamic pedestrian detection with Support Vector Machines

(10) Lane detection system design using deep learning

(11) Traffic sign recognition using CNN and SVM

(12) Traffic sign recognition using HOG and SVM

Deep Learning

(1) Linear regression modeling and application — housing price prediction

(2) Building neural networks — clothing classification

(3) Neural network regularization — clothing classification optimization

(4) Neural network parameter optimization — finding minima for nonlinear functions

(5) Neural network modeling and testing

(6) Designing optimization models using Residual Networks

(7) Neural network optimizers — handwritten digit recognition

(8) Text classification — JD.com shopping categories

(9) Handwritten digit recognition using LeNet

(10) Song auto-arrangement using RNN

(11) Image data labeling using deep learning

(12) Object detection model training using YOLOv5

(13) Defect detection case study with YOLOv5

Digital Image Processing

(1) Algebraic operations on images

(2) Image encoding and decoding

(3) Geometric affine transformations on images

(4) Spatial domain filtering of images

(5) Frequency domain filtering of images

(6) Grain detection using morphological methods

(7) Image segmentation using the Canny algorithm

(8) Image contour segmentation with watershed algorithm

(9) Shape matching using Hu moments

Computer Vision

(1) Understanding visual systems

(2) Pixel size measurement

(3) Object location and angle measurement

(4) Edge length measurement and area detection

(5) Object color and shape recognition

(6) Barcode and QR code recognition

(7) OCR character segmentation and training

(8) OCR character recognition

(9) Product surface defect detection using morphology

(10) Camera calibration using a checkerboard pattern

(11) License plate recognition with OpenCV

(12) Electronic product recognition using template matching

(13) License plate recognition using vision systems

(14) Barcode recognition with vision systems

(15) QR code recognition with vision systems

(16) Object shape and color recognition with vision systems

(17) Fruit recognition using computer vision

(18) NanoDet object detection model using image data

(19) Workpiece defect detection using vision systems

(20) Document recognition using vision systems

Face and Pedestrian Detection

(1) Face detection and distance measurement

(2) Face detection with pan-tilt follow

(3) Face detection and recognition

(4) Mask detection

(5) Dynamic pedestrian detection

Embedded Systems and Applications

(1) Introduction to smart sensor systems

(2) Setting up Arduino programming environment

(3) OLED display experiments

(4) Human radar detection experiments

(5) Light intensity detection experiments

(6) Heart rate monitor experiments

(7) Ultrasonic distance measurement experiments

(8) Smart traffic light control experiments

(9) Fan speed control experiments

(10) Gyroscope-based posture control system

(11) Bluetooth-based smart security system design

Speech Processing

(1) Introduction to speech processing modules

(2) LED light control

(3) LED ring control using SPI

(4) Sound source localization

(5) Voice control of lighting

(6) Voice control for music playback

(7) Voice recognition and response

(8) Voice-controlled robotic arm with visual object grasping

(9) Smart sensor control using voice commands

(10) Voice and vision-based robotic arm object classification

Intelligent Robotics

(1) Introduction to robotic arms and basic operations

(2) Robotic arm teaching and motion control

(3) Robot-vision system calibration

(4) Vision-based robotic arm object classification

(5) Vision-based robotic arm object stacking

(6) Vision-based robotic arm digital sorting

(7) Vision-based robotic arm fruit classification

Contact us if you need a custom course.


FAQ

What is the AI Experiment Box?
The AI Experiment Box is an educational tool that integrates various artificial intelligence technologies, designed to help students understand and practice core AI and robotics concepts. It includes components such as sensors, drivers, and control systems, and supports experiments in machine learning, image recognition, robotics control, and more.

Who is the experiment box intended for?
It is designed for higher education institutions and related educational organizations, offering experiments suitable for both foundational AI teaching and more advanced research and development needs.

What does the experiment box include?
It contains a variety of hardware components, including a 5-axis robotic arm, sensors, actuators, AI modules, and control boards, along with accompanying textbooks and experiment manuals to help users get started easily.

Are teaching resources provided?
Yes. We offer comprehensive teaching resources, including textbooks, experiment plans, and user manuals, to help both teachers and students use the equipment effectively for teaching and experiments.

Can experiments and courses be customized?
Yes. We can customize experiment projects and course content based on the needs of schools or educational institutions, offering various levels of difficulty and different areas of focus to meet specific teaching objectives.

Does the experiment box support cloud connectivity?
Yes. Certain experiment projects support cloud connectivity, allowing data uploading, remote control, and sharing of experiment results. The box can also connect to large models, enabling more intelligent experiments.

How difficult is installation and setup?
The experiment box is designed to be easy to install and set up. It comes with a detailed installation manual and video tutorials, making it accessible even for users without prior experience.

What warranty and maintenance services are offered?
We offer a one-year warranty, during which defective parts are replaced free of charge. For maintenance, contact our customer service for technical support; we also provide equipment upgrades and repair services.

Is the equipment compatible with other educational tools?
Yes. The equipment is designed to be compatible with other educational tools and supports connection with various external sensors, devices, and platforms, making it easy to integrate with existing teaching setups.

What technical support is available?
We provide round-the-clock online technical support and after-sales service. You can reach our customer service team by phone, email, or our official website for any inquiries or assistance.

Download

Documentation

Course - 3D Camera-1

Course - 3D Camera-2

Please submit your download request, and we will send the relevant information to your email within 2 business days.