HUSKYLENS 2: DFRobot’s New AI Machine Vision Sensor Unveiled

By androidpimp | November 5, 2025 | Updated: December 2, 2025
HuskyLens 2 AI Machine Vision sensor
Table of contents
  1. HUSKYLENS 2: An advanced AI-powered machine vision camera perfect for AI and robotics applications.
  2. Part I: Introducing the Product
  3. A full up-close view of the product
  4. Key Features
  5. Model Context Protocol (MCP) support and large language models (LLMs)
    1. 🧠 MCP (Model Context Protocol)
    2. 🤖 Leveraging the LLM
    3. 🚀 Why It Matters
    4. 🧠 What is LLM all about?
    5. 📚 How It Works
  6. Specifications
  7. With over 20 built-in algorithms ready to use
  8. Applications in education and AI project development.
  9. Broad compatibility with popular controllers and SBCs
    1. Wide Controller Support
  10. What’s included in the package?
  11. Additional accessories
  12. Part II: HUSKYLENS 2 Review (Currently being updated)
    1. The Product Package
  13. Unboxing all Items
    1. Package Contents
  14. The Main Module/Unit
  15. The camera module
  16. Interfaces
    1. HUSKYLENS 2 Power Board
      1. The advantages of the Power Board
      2. So, when would you need to use it?
  17. Operation
    1. Standalone mode
    2. Self-Learning Capabilities (A partial list)
  18. YOLO-Style Workflow
    1. How It Works on HuskyLens 2
  19. Connecting with external hardware
    1. Interfacing with the Orange Pi RV2
      1. Orange Pi RV2 to HuskyLens 2 Wiring (I2C Mode)
      2. Orange Pi RV2 to HuskyLens 2 Wiring (UART Mode)
  20. Connecting the Orange Pi RV2 to the HuskyLens 2 in I2C mode.
  21. Software Setup
    1. 1. Enabling the I2C3 interface
    2. 2. Installing I2C Tools
    3. Installing the HuskyLens Python library
    4. 4. Checking for available I2C devices connected to the HUSKYLENS 2
  22. Price and availability

HUSKYLENS 2: An advanced AI-powered machine vision camera perfect for AI and robotics applications.

Part I: Introducing the Product

[Launched at 9:00 PM (Beijing Time) on October 30, 2025] Following the launch of the original HUSKYLENS on July 28, 2019, DFRobot has now unveiled the upgraded HUSKYLENS 2, an easy-to-use board with an integrated AI vision sensor, perfect for robotics, automation, and AI education projects.

Powered by the Kendryte K230 AI RISC-V chip, this device features a built-in 2.4-inch IPS display, enabling users to train, interact with, and monitor the camera without needing an external PC. It offers capabilities like face recognition, object tracking, object recognition, line following, color detection, and QR code scanning, making it easier to develop embedded products without requiring advanced coding skills.


A full up-close view of the product

This product integrates three key components into a single piece of hardware: a low-power processor (the Kendryte K230), a camera sensor, and an integrated display, making it ideal for creative AI, DIY, and robotics projects. This Edge AI camera handles object tracking and precise face recognition without requiring any coding, and its built-in screen provides instant feedback, which is perfect for students creating line-following robots or other smart devices.


Key Features

  • Powered by a RISC-V processor: Built on the Kendryte K230 Processor chip, it offers fast on-device processing, making it perfect for real-time applications with low latency and very low power consumption.
  • AI computing performance: The Neural Processing Unit (NPU) offers a performance of up to 6 TOPS, making it perfect for light AI applications.
  • Storage and memory: Featuring 1GB of LPDDR4 RAM and 8GB of eMMC storage, it also supports TF card expansion for additional storage.
  • Integrated Camera: Features a 2MP camera (GC2093 sensor) with sufficient resolution for dependable identification tasks.
  • Built-in speaker (1W) & microphone: Both are included on the board.
  • Wireless connectivity: Wi-Fi is available through a slot-in module.
  • 20+ recognition algorithms: Choose from over 20 recognition algorithms, including face, object, color, line, and tag detection; everything is accessible and trainable with a simple button press.
  • Integrated Display: 2.4-inch IPS LCD (640×480) lets you view results and tweak settings directly on the device with an intuitive menu system.​
  • Learning button: A standalone “learning button” enables fast training of new objects, faces, colors, or lines without requiring any coding.
  • Low power consumption: Drawing 230–320mA (typical) on a 3.3–5V supply, it's suitable for most hobby and educational projects.
  • MCP server (Model Context Protocol server) support: The MCP protocol provides real-time context synchronization between AI agents and the tools they interact with.
  • Compatibility with standard controllers and single-board computers: With standard UART and I2C interfaces, the board easily connects to a variety of microcontrollers and single-board computers, such as Arduino, micro:bit, Raspberry Pi 5, and ESP32 (a rough UART example follows this list).
  • Real-Time Video Streaming: Live view for monitoring and debugging can be done either through a wired connection using USB-C or wirelessly via a Wi-Fi 6 module.
  • Replaceable Camera Module: Swappable 2 MP GC2093 sensor featuring compatibility with various lenses, including microscope, telephoto, and night vision.
  • Model Hub: Download and use both official and user-created models for various purposes like agriculture, retail, safety, and more.
  • Mounting Options: It can be attached to a helmet, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, tripod and more.
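
As a rough illustration of the UART side of that connectivity, the sketch below shows how a host could poll the board from Python using pyserial. The port name, baud rate, and the request frame are placeholders for illustration only; the real command protocol is defined by DFRobot's HUSKYLENS libraries and wiki, not by this example.

```python
# Minimal sketch: polling an AI vision sensor over UART with pyserial.
# ASSUMPTIONS: the port name, baud rate, and request/response framing below
# are hypothetical placeholders, not DFRobot's documented protocol.
import serial

PORT = "/dev/ttyUSB0"   # hypothetical port; depends on your host and adapter
BAUD = 9600             # hypothetical; check the HUSKYLENS 2 documentation

def poll_once(ser: serial.Serial) -> bytes:
    """Send a placeholder 'request results' frame and read the raw reply."""
    request = bytes([0x55, 0xAA, 0x11, 0x00, 0x00])  # placeholder frame
    ser.write(request)
    return ser.read(64)  # read up to 64 bytes of whatever the device returns

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1.0) as ser:
        reply = poll_once(ser)
        print(f"received {len(reply)} bytes: {reply.hex()}")
```

In a real project you would replace the placeholder frame with the commands exposed by DFRobot's Arduino or Python library rather than hand-rolling the protocol.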

Model Context Protocol (MCP) support and large language models (LLMs)

The Model Context Protocol (MCP) built into the HUSKYLENS 2 AI camera enables it to connect to and utilize the power of a Large Language Model (LLM).

🧠 MCP (Model Context Protocol)

  • MCP is a service integrated into HUSKYLENS 2, serving as a link between the camera’s built-in AI vision and external LLMs.
  • It enables the camera not only to recognize objects, such as food or gestures, but also to grasp their meaning and context by leveraging the extensive knowledge base of the LLM.

🤖 Leveraging the LLM

  • By using MCP, HUSKYLENS 2 can send visual recognition data to an LLM, which then interprets it in a more intelligent, contextual way.
  • For example, if the camera sees your lunch, the LLM can go beyond just identifying “a sandwich” — it might analyze nutritional value, suggest dietary tips, or even recommend recipes.

🚀 Why It Matters

  • This integration turns HUSKYLENS 2 from a basic vision sensor into a context-aware AI assistant, enabling real-world understanding and interaction.
  • It’s a significant step toward enhancing AI vision systems, making them smarter, more versatile, and beneficial for fields like education, robotics, and automation.

🧠 What is LLM all about?

LLM, short for Large Language Model, is a type of computer program designed to understand and generate human language. It can perform tasks like answering questions, writing stories, translating text, or engaging in conversations.

📚 How It Works

  • It’s trained on lots of text from books, websites, and articles.
  • It learns patterns in language — how words and ideas connect.
  • When you ask something, it uses those patterns to give a smart response.
  • The LLM normally runs on powerful cloud servers, not on a local device.
  • When a device like HUSKYLENS 2 sends a request (e.g., “What is this object?”), it does the following (a conceptual sketch follows this list):
    1. Captures and processes the visual data locally
    2. Sends structured context (via MCP) to the LLM server
    3. Receives a natural language response from the LLM
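
To make that flow concrete, here is a minimal Python sketch of the same path: capture and process locally, send structured context to the LLM server, and receive a natural-language reply. The endpoint URL, JSON fields, and the detect_objects() stand-in are hypothetical placeholders, not the actual HUSKYLENS 2 MCP interface.

```python
# Conceptual sketch of the capture -> structured context -> LLM response flow.
# ASSUMPTIONS: the endpoint URL, payload fields, and detect_objects() stand-in
# are hypothetical; the real MCP interface is defined by DFRobot's firmware.
import requests

LLM_ENDPOINT = "https://example.com/mcp/llm"  # placeholder server URL

def detect_objects() -> list[dict]:
    """Stand-in for step 1: on-device vision returning structured results."""
    return [{"label": "sandwich", "confidence": 0.91, "box": [120, 80, 340, 260]}]

def ask_llm(question: str) -> str:
    """Steps 2 and 3: post structured context, return the LLM's answer."""
    context = {
        "question": question,            # e.g. "What is this object?"
        "detections": detect_objects(),  # produced locally on the device
    }
    response = requests.post(LLM_ENDPOINT, json=context, timeout=10)
    response.raise_for_status()
    return response.json().get("answer", "")

if __name__ == "__main__":
    print(ask_llm("What am I looking at, and is it a healthy lunch?"))
```

The point of the sketch is the division of labor: the heavy vision work stays on the device, and only compact, structured context crosses the network to the LLM.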

Specifications

  • Processor: Kendryte K230, dual-core 1.6GHz 64-bit RISC-V CPU
  • Memory: 1GB LPDDR4
  • AI Computing Power: 6 TOPS
  • Camera: 2MP GC2093, 1/2.9″, 60 FPS (replaceable)
  • Display: 2.4-inch IPS LCD, 640×480 pixels
  • Speaker: 1W
  • WiFi Module (optional): 2.4GHz WiFi 6 (card-based)
  • Connectivity Interfaces: Type-C, Gravity (I2C / UART), TF card slot
  • MEMS Microphones: ✔
  • Other Components: 1x button, 1x RGB light, 2x fill lights
  • Built-in Algorithms:
    ✔ Face Detection/Recognition
    ✔ Facial Feature Detection
    ✔ Object Recognition / Tracking
    ✔ Color Recognition
    ✔ Object Classification
    ✔ Self-Learning Classification
    ✔ Instance Segmentation
    ✔ Hand Detection
    ✔ Hand Keypoint Detection
    ✔ Gesture Recognition
    ✔ Human Detection
    ✔ Human Keypoint Detection
    ✔ Pose Recognition
    ✔ License Plate Recognition
    ✔ Text Recognition
    ✔ Line Tracking
    ✔ Expression Recognition
    ✔ Gaze Direction Detection
    ✔ Face Orientation Detection
    ✔ QR Code Recognition
    ✔ Barcode Recognition
    ✔ Tag Recognition
    ✔ Fall Detection
  • Operating Voltage: 3.3–5.0V
  • Power Consumption: 1.5W–3W
  • Dimensions: 70mm x 58mm x 19mm
  • Weight: 90g (without packaging)

Product links and documentation.

  • DFRobot Official Website: Website
  • GitHub: DFRobot/DFRobot_HuskylensV2
  • Product Wiki page: SEN0638
  • HUSKYLENS 2 Official Website: Product Page

With over 20 built-in algorithms ready to use

HUSKYLENS 2 offers over 20 built-in AI models, featuring handy capabilities like object tracking, hand recognition, and instance segmentation, making it versatile for various applications. Users can also train and deploy custom AI models, enabling it to recognize anything they need.

Applications in education and AI project development.

HUSKYLENS 2 might look simple at first glance, but the product is actually quite complex and packed with practical features. It's ideal for educational use and AI project development, especially in STEAM (Science, Technology, Engineering, Arts, and Mathematics) education. With its intuitive interface and preloaded models, it's a fantastic tool for hands-on learning and for building capable projects with minimal power consumption. Here are some example applications you could build with this product.

  • Robotics: Building robots that recognize and respond to gestures or facial expressions.
  • Driving assistance: Creating fixed or mobile systems capable of tracking and categorizing objects in real-time.
  • QA: Creating QA systems for analyzing colors and identifying surface damages on objects.
  • Elderly assistance: Designing systems to monitor the elderly, prevent falls, and predict behavioral scenarios.
  • Clinical dermatologic diagnosis: Diagnosing and identifying different skin diseases.
  • Security systems: Detecting people and recognizing hazardous situations or dangerous poses.
  • Surveillance: Streaming video feeds and observing environments.

Broad compatibility with popular controllers and SBCs

The hardware is highly compatible with popular controllers like Arduino, Raspberry Pi, and micro:bit, making it easy to integrate AI vision into various projects.

Wide Controller Support

Designed to integrate effortlessly with widely used microcontroller platforms (a short Python example follows the table below):

  • Arduino: Plug-and-play via UART or I2C. Libraries and tutorials available for quick setup.
  • Raspberry Pi: Connects via UART or USB. Python libraries support advanced integration.
  • micro:bit: Compatible through I2C. Ideal for educational and beginner-friendly projects.
  • ESP32/ESP8266: Supported via UART. Enables wireless AI vision applications.
  • UNIHIKER M10/K10: Not yet verified.
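
For Python-capable hosts with an exposed I2C bus (a Raspberry Pi, for example), the sketch below shows one way to read raw bytes from the sensor with smbus2. The bus number, device address, and register offset are placeholders; the values to use come from the DFRobot wiki and the HuskyLens Python library, not from this example.

```python
# Minimal sketch: probing an I2C-connected vision sensor from a Raspberry Pi.
# ASSUMPTIONS: the bus number, device address, and register offset below are
# placeholders; take the real values from the DFRobot wiki / Python library.
from smbus2 import SMBus

I2C_BUS = 1          # default user-facing I2C bus on most Raspberry Pi boards
DEVICE_ADDR = 0x32   # hypothetical 7-bit address; check the product wiki

def read_block(register: int, length: int = 16) -> list[int]:
    """Read a block of bytes from the device for inspection."""
    with SMBus(I2C_BUS) as bus:
        return bus.read_i2c_block_data(DEVICE_ADDR, register, length)

if __name__ == "__main__":
    data = read_block(0x00)
    print("raw bytes:", [hex(b) for b in data])
```

Running i2cdetect -y 1 first (as the Software Setup section later does on the Orange Pi RV2) is a quick way to confirm the address the sensor actually answers on.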

What’s included in the package?

  • HUSKYLENS 2 AI Vision Sensor x1
  • M3 Screws x6
  • Mounting Bracket x1
  • Heightening Bracket x1
  • Gravity-4P Sensor Connector Cable (30cm) x1
  • Dual-Plug PH2.0-4P Silicone Cable (20cm) x1
  • Power Adapter Board x1

Additional accessories

Along with the board, the DFRobot team offers a few extra accessories, such as a Microscope Lens Module that you can purchase to build a digital microscope. This module includes a camera sensor and lenses designed for close-up zoom. A WiFi module is also available as an optional accessory and is useful for projects or products that require wireless connectivity.
