
AI Compute Is More Than the Main SoC! A Clear Look at RK1820's Real Role in the RK3588 System

2025-11-03

Why are more and more edge devices talking about NPUs and coprocessors? The RK3588 is already a powerful SoC with a 6 TOPS (INT8) NPU, yet in complex scenarios such as multi-task inference, model parallelism, and video-AI analytics, a single chip still hits its compute ceiling. The RK1820 was created precisely to take over that slice of the load and relieve the main SoC's "compute anxiety". In edge-AI equipment the host processor no longer fights alone: when AI tasks outgrow the capacity of the traditional CPU/NPU, the coprocessor quietly steps in and assumes part of the intelligent workload.

Coprocessor RK1820

RK1820 is a coprocessor purpose-built for AI inference and compute expansion; it pairs flexibly with host SoCs such as RK3588 and RK3576 and communicates with them efficiently through PCIe or USB interfaces.

| Capability Category | Key Parameters & Functions |
| --- | --- |
| Processor Architecture | 3× 64-bit RISC-V cores; 32 KB L1 I-cache + 32 KB L1 D-cache per core; 128 KB shared L2 cache; FPU with RISC-V H/F/D (half/single/double-precision) extensions |
| Memory | 2.5 GB on-chip high-bandwidth DRAM + 512 KB SRAM; external support for eMMC 4.51 (HS200), SD 3.0, SPI Flash |
| Codec | JPEG encode: 16×16 to 65520×65520, YUV400/420/422/444; JPEG decode: 48×48 to 65520×65520, multiple YUV/RGB formats |
| NPU | 20 TOPS INT8; mixed precision INT4/INT8/INT16/FP8/FP16/BF16; frameworks: TensorFlow/MXNet/PyTorch/Caffe; Qwen2.5-3B (INT4) at 67 token/s, YOLOv8n (INT8) at 125 FPS |
| Communication | PCIe 2.1 (2 lanes, 2.5/5 Gbps); USB 3.0 (5 Gbps, shared with PCIe) |
| Main Functions | Edge-AI inference (detection/classification/LLM), RISC-V general compute, 2-D graphics acceleration (scale/rotate), AES/SM4 security |
System architecture and the division of labor: who is doing what?

In the RK3588 + RK1820 system, the AI-task pipeline is decomposed into a four-tier architecture: Application → Middleware → Co-processor Execution → Control & Presentation.

  • RK3588 host: handles task scheduling, data pre-processing, and result output, governing the entire workflow.
  • RK1820 co-processor: dedicated to high-compute AI inference, coupled to the host via PCIe, forming a "light control + heavy compute" collaboration model.

| Stage | Actor | Action |
| --- | --- | --- |
| App Request | RK3588 | AI-task call issued from the app layer (recognition/detection) |
| Dispatch | RK3588 dispatcher | Decides whether to offload to the co-processor |
| Inference | RK1820 | Runs the deep-learning model computation |
| Return | RK1820 → RK3588 | Sends back inference results; host displays them or continues its logic |
1. Application Layer: The “initiator” of AI tasks

The application layer is where every AI task begins; it translates user requirements—image analytics, object detection, edge-side LLM Q&A, etc.—into system-executable task commands and passes them to the middleware layer through standardized APIs. This layer is handled entirely by the RK3588 host, which manages user interaction, business logic, and peripheral data.

Task reception: acquires user commands via cameras, touch panels, Ethernet, UART, etc.

  • Smart security: detect persons in video frames
  • Industrial inspection: identify surface defects on products
  • Edge LLM: convert voice input to text and form a Q&A task

Command standardization: turns unstructured input into structured task parameters

  • Vision task: input resolution, model version, output requirements
  • LLM task: input tokens, model version, max output length
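As a minimal sketch, such a structured task descriptor might look like the following; the field names are illustrative assumptions, not part of any published Rockchip SDK.

```python
# Illustrative task descriptors; all field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class VisionTask:
    task_type: str = "detection"         # detection / classification / segmentation
    input_width: int = 1920              # resolution of the source frame
    input_height: int = 1080
    model_version: str = "yolov8n-int8"  # which model the dispatcher should load
    output: str = "boxes+scores"         # what the presentation layer expects back

@dataclass
class LlmTask:
    task_type: str = "llm"
    prompt_tokens: list[int] = field(default_factory=list)  # tokenized user input
    model_version: str = "qwen2.5-3b-int4"
    max_output_tokens: int = 256
```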
2. Middleware Layer: The "dispatcher" of AI tasks

The middleware layer is the collaborative hub: it classifies each task, allocates resources, preprocesses data, and governs bus traffic, deciding whether a task runs on the host or is offloaded to the co-processor.
This layer runs on RK3588 only; RK1820 takes no part in PCIe configuration or interrupt management and simply executes the inference jobs dispatched by the host.

Task classification and scheduling

  • Local processing: low-compute or latency-critical tasks (image scaling, lightweight AI inference) are handled by RK3588 CPU/NPU/RGA.
  • Offload to RK1820: high-compute tasks (YOLOv8 multi-class detection, LLM inference, semantic segmentation) are sent to RK1820. Once RK1820 takes over, the host CPU/NPU is freed for other work.
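A minimal sketch of such a dispatch rule, assuming a hypothetical per-task compute estimate; the threshold and function names are illustrative, not from the Rockchip SDK.

```python
# Hypothetical dispatcher: keep light or latency-critical work local,
# offload heavy workloads to the RK1820 co-processor.
LOCAL_NPU_TOPS = 6.0  # RK3588 NPU capacity (INT8)

def choose_executor(est_tops: float, latency_critical: bool) -> str:
    """Return 'rk3588' for local execution or 'rk1820' for offload."""
    if latency_critical:
        return "rk3588"              # avoid the PCIe round trip
    if est_tops > LOCAL_NPU_TOPS * 0.5:
        return "rk1820"              # heavy job: free the host CPU/NPU for other work
    return "rk3588"
```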

Data preprocessing

  • Vision: crop, denoise, normalize, channel reorder.
  • Text: tokenize, pad, encode.
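For the vision path, a minimal numpy sketch of the normalize and channel-reorder steps (cropping, denoising, and model-specific mean/std are omitted for brevity):

```python
import numpy as np

def preprocess_frame(frame_bgr: np.ndarray) -> np.ndarray:
    """Normalize an HxWx3 BGR frame and reorder it into CHW layout for the NPU."""
    img = frame_bgr.astype(np.float32) / 255.0            # normalize to [0, 1]
    img = img[..., ::-1]                                  # BGR -> RGB channel reorder
    return np.ascontiguousarray(img.transpose(2, 0, 1))  # HWC -> CHW, contiguous for DMA
```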

Bus communication control

  • Establishes the link via PCIe or USB 3.0.
  • Data transfers use DMA, with no CPU intervention.
  • Issues inference-control commands: start the NPU, set precision, raise a completion interrupt.
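The host-side sequence can be sketched as follows; `CoprocLink` and its methods are hypothetical stand-ins for the real driver interface, which the article does not specify.

```python
# Hypothetical host-side control flow; CoprocLink is illustrative, not a real driver API.
class CoprocLink:
    def dma_write(self, addr: int, buf: bytes): ...   # DMA buffer into RK1820 DRAM
    def send_cmd(self, cmd: dict): ...                # write a control command
    def wait_irq(self, timeout_s: float): ...         # block on the completion interrupt

def offload_inference(link: CoprocLink, input_tensor: bytes, model_id: int):
    link.dma_write(0x0, input_tensor)            # 1. DMA transfer, no CPU copy loop
    link.send_cmd({"op": "start_npu",            # 2. start NPU with the chosen precision
                   "model": model_id,
                   "precision": "int8"})
    link.wait_irq(timeout_s=1.0)                 # 3. RK1820 raises an interrupt when done
```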
3. Co-processor Execution Layer: the “compute engine” of AI tasks

This layer is the inference core, driven exclusively by the RK1820 co-processor and dedicated to high-compute AI inference.
RK1820 active; RK3588 does not interfere with inference and only waits for results. Time-outs or exceptions are handled by RK3588 via PCIe reset commands.

Task reception and preparation

Receives data, model weights, and commands dispatched by RK3588; writes them into local high-bandwidth DRAM, loads the model, and configures the NPU.

NPU inference compute

  • Object detection (YOLOv8n): convolution → batch norm → activation → pooling → NMS post-processing.
  • LLM inference (Qwen2.5-3B): prefill over the input tokens → token-by-token decode generation.
  • Inference optimization: operator fusion, weight compression.
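The prefill/decode split can be sketched as below; `model.prefill` and `model.decode_step` are hypothetical stand-ins for the co-processor's runtime calls.

```python
# Hypothetical prefill/decode loop; model.* calls stand in for the NPU runtime.
def generate(model, prompt_tokens: list[int], max_new_tokens: int = 256, eos_id: int = 2):
    state = model.prefill(prompt_tokens)        # one batched pass over the whole prompt
    token = prompt_tokens[-1]
    out = []
    for _ in range(max_new_tokens):
        token, state = model.decode_step(token, state)  # generate one token per step
        if token == eos_id:
            break                               # stop at end-of-sequence
        out.append(token)
    return out                                  # token array sent back to RK3588
```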

Result return

  • Returns bounding-box coordinates, class IDs, and confidence scores for detection.
  • Returns token array for the LLM.
4. Control & Presentation Layer: the "presenter" of AI tasks

This layer is the terminus of every AI task: it converts the raw inference results from RK1820 into visual or business-ready output and closes the loop.
RK3588 active; RK1820 only supplies the raw inference data.

Result post-processing

  • Map coordinates back to original image size.
  • Decode tokens into natural language.
  • Count defects in industrial inspection.
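For example, mapping detection boxes from the model's input resolution back to the original frame is a simple rescale (letterbox padding, if any, ignored for brevity):

```python
def map_box_to_original(box, in_size=640, orig_w=1920, orig_h=1080):
    """Rescale an (x1, y1, x2, y2) box from model-input space to the original frame."""
    sx, sy = orig_w / in_size, orig_h / in_size
    x1, y1, x2, y2 = box
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)
```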

System control & feedback output

  • Smart security: display video with detection overlays, trigger alarms.
  • Industrial inspection: command the production line to reject defective products.
  • Edge LLM: show text + voice announcement.

Value of synergy: not just faster, but smarter

Put simply: RK3588 runs the show and keeps everything on track, while RK1820 delivers the raw compute bursts; together they make edge-AI devices "smarter, faster, and hassle-free."
Follow us for more RK1820 news, SDK updates, fresh tutorials, and ready-to-run demos.
