Running an RKNN Model

Prerequisite: a converted ONNX model is available.

Installing rknn-toolkit2

git clone https://github.com/airockchip/rknn-toolkit2.git --depth 1

You should see the following directories. Rockchip has merged the previously separate projects into this repository and publishes new updates here; the old repositories are no longer maintained.

├── rknn-toolkit2
├── rknn-toolkit-lite2
└── rknpu2

First, on your development host (not the board), choose a Python version based on the wheel filenames in rknn-toolkit2/packages, then install the matching rknn-toolkit2 wheel.

Note that the version numbers will certainly change as rknn-toolkit2 is updated, so check the filenames carefully.

For example, on an Ubuntu 22.04.4 (x86_64) host, the packages folder contains:

.
├── md5sum.txt
├── requirements_cp310-2.1.0.txt
├── requirements_cp311-2.1.0.txt
├── requirements_cp36-2.1.0.txt
├── requirements_cp37-2.1.0.txt
├── requirements_cp38-2.1.0.txt
├── requirements_cp39-2.1.0.txt
├── rknn_toolkit2-2.1.0+708089d1-cp310-cp310-linux_x86_64.whl
├── rknn_toolkit2-2.1.0+708089d1-cp311-cp311-linux_x86_64.whl
├── rknn_toolkit2-2.1.0+708089d1-cp36-cp36m-linux_x86_64.whl
├── rknn_toolkit2-2.1.0+708089d1-cp37-cp37m-linux_x86_64.whl
├── rknn_toolkit2-2.1.0+708089d1-cp38-cp38-linux_x86_64.whl
└── rknn_toolkit2-2.1.0+708089d1-cp39-cp39-linux_x86_64.whl
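The `cp310`/`cp311` part of each wheel name is the CPython tag of the interpreter it targets. A quick way to print the tag of the Python you are running, so you can match it against the filenames above (standard library only):

```python
import sys

# prints e.g. "cp310" on Python 3.10
print(f"cp{sys.version_info.major}{sys.version_info.minor}")
```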

Create an environment with conda:

conda create -n rknn python=3.10

Activate it:

conda activate rknn

Install the dependencies and the toolkit itself:

pip install -r requirements_cp310-2.1.0.txt
pip install rknn_toolkit2-2.1.0+708089d1-cp310-cp310-linux_x86_64.whl

rknn-toolkit2 is now installed.

Converting the Model

Create a working directory anywhere, for example:

mkdir onnx2rknn
cd onnx2rknn

Save the conversion script as onnx2rknn.py:

from rknn.api import RKNN

if __name__ == '__main__':
    platform = 'rk3588'

    '''step 1: create RKNN object'''
    rknn = RKNN()

    '''step 2: load the .onnx model'''
    rknn.config(target_platform=platform)
    print('--> Loading model')
    ret = rknn.load_onnx('gmstereo-scale1-sceneflow-124a438f_1x3x480x640_sim.onnx')  # path to the ONNX model
    if ret != 0:
        print('load model failed')
        exit(ret)
    print('done')

    '''step 3: build the model'''
    print('--> Building model')
    ret = rknn.build(do_quantization=False)
    if ret != 0:
        print('build model failed')
        exit(ret)
    print('done')

    '''step 4: export and save the .rknn model'''
    RKNN_MODEL_PATH = 'unimatch_stereo_scale1_1x3x480x640_sim.rknn'  # name of the output RKNN model
    ret = rknn.export_rknn(RKNN_MODEL_PATH)
    if ret != 0:
        print('Export rknn model failed.')
        exit(ret)
    print('done')

    '''step 5: release the model'''
    rknn.release()

Change the input ONNX path and the output RKNN path to match your own model.
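If you convert models often, the output filename can be derived from the ONNX path instead of editing both strings by hand. A minimal standard-library sketch (the helper name `rknn_output_path` is ours, not part of the toolkit):

```python
from pathlib import Path

def rknn_output_path(onnx_path: str) -> str:
    """Return the .rknn filename corresponding to the given .onnx model."""
    return str(Path(onnx_path).with_suffix('.rknn'))

# e.g. 'model.onnx' -> 'model.rknn'
print(rknn_output_path('gmstereo-scale1-sceneflow-124a438f_1x3x480x640_sim.onnx'))
```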

Installing rknn-toolkit-lite2 on the Board

Same idea as on the host: go to the packages folder, create a conda environment, and install the matching wheel.

.
├── rknn_toolkit_lite2-2.1.0-cp310-cp310-linux_aarch64.whl
├── rknn_toolkit_lite2-2.1.0-cp311-cp311-linux_aarch64.whl
├── rknn_toolkit_lite2-2.1.0-cp312-cp312-linux_aarch64.whl
├── rknn_toolkit_lite2-2.1.0-cp37-cp37m-linux_aarch64.whl
├── rknn_toolkit_lite2-2.1.0-cp38-cp38-linux_aarch64.whl
└── rknn_toolkit_lite2-2.1.0-cp39-cp39-linux_aarch64.whl

Create and activate the environment:

conda create -n rknn python=3.10
conda activate rknn

Install rknn-toolkit-lite2 and OpenCV:

pip install rknn_toolkit_lite2-2.1.0-cp310-cp310-linux_aarch64.whl
pip install opencv-python

Test run:

cd ../examples/resnet18
python test.py

You may hit an error like this:

W rknn-toolkit-lite2 version: 2.1.0
--> Load RKNN model
done
--> Init runtime environment
I RKNN: [15:54:49.666] RKNN Runtime Information: librknnrt version: 1.4.0 (a10f100eb@2022-09-09T09:07:14)
I RKNN: [15:54:49.666] RKNN Driver Information: version: 0.9.3
E RKNN: [15:54:49.666] 6, 1
E RKNN: [15:54:49.666] Invalid RKNN model version 6
E RKNN: [15:54:49.666] rknn_init, load model failed!
E Catch exception when init runtime!
E Traceback (most recent call last):
File "/home/orangepi/miniconda3/envs/rknn/lib/python3.10/site-packages/rknnlite/api/rknn_lite.py", line 157, in init_runtime
self.rknn_runtime.build_graph(self.rknn_data, self.load_model_in_npu)
File "rknnlite/api/rknn_runtime.py", line 921, in rknnlite.api.rknn_runtime.RKNNRuntime.build_graph
Exception: RKNN init failed. error code: RKNN_ERR_FAIL

Init runtime environment failed

If you see the same error, the RKNPU2 runtime libraries on the board are ancient and need to be updated.

Go to the rknpu2/runtime folder of the repository you cloned earlier and copy the required files:

sudo cp Linux/librknn_api/aarch64/librknnrt.so /usr/lib/librknnrt.so
sudo cp Linux/rknn_server/aarch64/usr/bin/rknn_server /usr/bin/rknn_server

The other library, librknn_api.so, does not need to be updated.

After that, test.py should run normally:

W rknn-toolkit-lite2 version: 2.1.0
--> Load RKNN model
done
--> Init runtime environment
I RKNN: [16:06:56.773] RKNN Runtime Information, librknnrt version: 2.1.0 (967d001cc8@2024-08-07T19:28:19)
I RKNN: [16:06:56.773] RKNN Driver Information, version: 0.9.3
I RKNN: [16:06:56.773] RKNN Model Information, version: 6, toolkit version: 2.1.0+708089d1(compiler version: 2.1.0 (967d001cc8@2024-08-07T11:32:45)), target: RKNPU v2, target platform: rk3588, framework name: PyTorch, framework layout: NCHW, model inference type: static_shape
W RKNN: [16:06:56.787] query RKNN_QUERY_INPUT_DYNAMIC_RANGE error, rknn model is static shape type, please export rknn with dynamic_shapes
W Query dynamic range failed. Ret code: RKNN_ERR_MODEL_INVALID. (If it is a static shape RKNN model, please ignore the above warning message.)
done
--> Running model
resnet18
-----TOP 5-----
[812] score:0.999680 class:"space shuttle"
[404] score:0.000249 class:"airliner"
[657] score:0.000013 class:"missile"
[466] score:0.000009 class:"bullet train, bullet"
[895] score:0.000008 class:"warplane, military plane"

done

Running a Custom RKNN Model on the Board

Example code:

import time
import cv2
import numpy as np
from rknnlite.api import RKNNLite

RK3588_RKNN_MODEL = 'unimatch_stereo_scale1_1x3x480x640_sim.rknn'  # change to the RKNN model converted earlier

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float16)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float16)

left_image = './im0.png'
right_image = './im1.png'

output_path = 'output.png'

if __name__ == '__main__':

    input_height = 480
    input_width = 640

    print(f"input_height={input_height}")
    print(f"input_width={input_width}")

    # read both images, convert BGR -> RGB, resize to the model input size, scale to [0, 1]
    left = cv2.resize(cv2.cvtColor(cv2.imread(left_image), cv2.COLOR_BGR2RGB),
                      (input_width, input_height)).astype(np.float32) / 255.0
    right = cv2.resize(cv2.cvtColor(cv2.imread(right_image), cv2.COLOR_BGR2RGB),
                       (input_width, input_height)).astype(np.float32) / 255.0

    # ImageNet normalization
    left = (left - IMAGENET_MEAN) / IMAGENET_STD
    right = (right - IMAGENET_MEAN) / IMAGENET_STD

    # HWC -> NCHW with a leading batch dimension
    left = np.transpose(left, (2, 0, 1))[np.newaxis, :, :, :]
    right = np.transpose(right, (2, 0, 1))[np.newaxis, :, :, :]

    # RKNN init
    rknn_model = RK3588_RKNN_MODEL
    rknn_lite = RKNNLite()

    # load the RKNN model
    print('--> Load RKNN model')
    ret = rknn_lite.load_rknn(rknn_model)
    if ret != 0:
        print('Load RKNN model failed')
        exit(ret)
    print('done')

    # init runtime environment
    print('--> Init runtime environment')
    ret = rknn_lite.init_runtime(core_mask=RKNNLite.NPU_CORE_0)
    if ret != 0:
        print('Init runtime environment failed')
        exit(ret)
    print('done')

    # inference
    print('--> Running model')
    t = time.time()
    output = rknn_lite.inference(inputs=[left, right])
    dt = time.time() - t
    print(f"\033[34mElapsed: {dt:.3f} sec, {1 / dt:.3f} FPS\033[0m")

    # colorize the disparity map and save it
    disp = output[0][0]
    norm = ((disp - disp.min()) / (disp.max() - disp.min()) * 255).astype(np.uint8)
    colored = cv2.applyColorMap(norm, cv2.COLORMAP_PLASMA)
    cv2.imwrite(output_path, colored)
    print(f"\033[32moutput: {output_path}\033[0m")

    print('done')

    rknn_lite.release()
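The min-max scaling used before applying the colormap maps whatever disparity range the model produces onto 0–255 `uint8`. A standalone check of just that step with a toy array (no RKNN required):

```python
import numpy as np

# toy disparity map standing in for the model output
disp = np.array([[0.5, 2.0], [3.5, 4.5]], dtype=np.float32)

# scale the disparity range linearly onto [0, 255]
norm = ((disp - disp.min()) / (disp.max() - disp.min()) * 255).astype(np.uint8)

print(norm)  # the minimum maps to 0, the maximum to 255
```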

P.S. This particular conversion seems to have quite a few issues: inference does not produce correct results. The workflow itself is as described, though; the ONNX model probably uses unusual operators that are not converted faithfully.