Commit: Copy pallet warehouse system architecture
.gitignore (vendored, Normal file, 74 lines added)
@@ -0,0 +1,74 @@
# Build directories
# Ignore all build directories (root and subdirectories)
build/
# Note: the project is only built under image_capture/build/
# A build/ folder in the repository root should be ignored and can be safely deleted if present
bin/
lib/
!camport3/lib/
image_capture/src/images_template/
image_capture/build_debug

# CMake generated files
CMakeCache.txt
CMakeFiles/
cmake_install.cmake
Makefile
*.cmake

# Visual Studio files
.vs/
*.vcxproj
*.vcxproj.filters
*.vcxproj.user
*.sln
*.suo
*.user
*.sdf
*.opensdf

# Qt autogen files
*_autogen/
.qt/
ui_*.h
moc_*.cpp
qrc_*.cpp

# Compiled files
*.o
*.obj
*.exe
*.a
*.lib
!image_capture/camera_sdk/lib/**/*.lib

# Saved images
*.png
*.jpg
*.jpeg
*.ply

# IDE files
.vscode/
.idea/
*.swp
*.swo
*~

# Temporary files
*.tmp
*.temp
*.log
compile_commands.json.tmp*

# OS files
.DS_Store
Thumbs.db

*.gif

.cache/
!image_capture/camera_sdk/
!image_capture/camera_sdk/lib/


!image_capture/cmake/*.cmake
docs/cmake_configuration_summary.md (Normal file, 221 lines added)
@@ -0,0 +1,221 @@
# CMake Configuration Summary

This document summarizes the CMake build system configuration of the `image_capture` project.

---

## Directory Layout

```
image_capture/
├── CMakeLists.txt            # Main build configuration
└── cmake/                    # CMake module directory
    ├── CompilerOptions.cmake # Compiler options
    ├── Dependencies.cmake    # External dependency management
    └── PercipioSDK.cmake     # Camera SDK configuration
```

---

## Main Configuration File: [CMakeLists.txt](file:///d:/Git/stereo_warehouse_inspection/image_capture/CMakeLists.txt)

### Basics
- **Minimum CMake version**: 3.10
- **Project name**: `image_capture`
- **Language**: C++
- **Generator**: Visual Studio 17 2022 (MSVC)

### Output Directories
```cmake
CMAKE_RUNTIME_OUTPUT_DIRECTORY = ${CMAKE_BINARY_DIR}/bin/Release   # executables
CMAKE_LIBRARY_OUTPUT_DIRECTORY = ${CMAKE_BINARY_DIR}/lib/Release   # shared libraries
CMAKE_ARCHIVE_OUTPUT_DIRECTORY = ${CMAKE_BINARY_DIR}/lib/Release   # static libraries
```

### Modular Design
The CMake configuration is modular and is organized through three module files under `cmake/`:

1. **CompilerOptions.cmake** - compiler and global settings
2. **Dependencies.cmake** - Qt6, OpenCV and Open3D dependencies
3. **PercipioSDK.cmake** - Percipio camera SDK configuration

### Libraries and Executables

#### 1. Algorithm Library (`algorithm_lib`)
**Type**: static library

**Source files**:
- `src/algorithm/core/detection_base.cpp`
- `src/algorithm/core/detection_result.cpp`
- `src/algorithm/utils/image_processor.cpp`
- `src/algorithm/detections/slot_occupancy_detection.cpp`
- `src/algorithm/detections/pallet_offset_detection.cpp`
- `src/algorithm/detections/beam_rack_deflection_detection.cpp`
- `src/algorithm/detections/visual_inventory_detection.cpp`
- `src/algorithm/detections/visual_inventory_end_detection.cpp`

**Include paths**:
- `src`
- `third_party/percipio/common` (fixes the json11.hpp include)

**Dependencies**: OpenCV, Open3D

#### 2. Main Executable (`image_capture`)
**Type**: executable

**Main source files**:
- `src/main.cpp`
- `src/camera/ty_multi_camera_capture.cpp`
- `src/camera/mvs_multi_camera_capture.cpp`
- `src/device/device_manager.cpp`
- `src/redis/redis_communicator.cpp`
- `src/task/task_manager.cpp`
- `src/vision/vision_controller.cpp`
- `src/common/log_manager.cpp`
- `src/common/config_manager.cpp`
- `src/gui/mainwindow.cpp` / `.h` / `.ui`

**Linked libraries**:
- `algorithm_lib` (in-project algorithm library)
- `cpp_api_lib` (camera SDK C++ API wrapper)
- `tycam` (camera SDK shared library)
- `${OpenCV_LIBS}` (OpenCV)
- `Open3D::Open3D` (Open3D)
- `Qt6::Core` and `Qt6::Widgets` (Qt framework)
- `MvCameraControl.lib` (Hikvision MVS SDK)

### Test Configuration
- **Option**: `BUILD_TESTS` (default ON)
- **Test directory**: `tests/` (added via `add_subdirectory`)

---

## CMake Modules in Detail

### 1. [CompilerOptions.cmake](file:///d:/Git/stereo_warehouse_inspection/image_capture/cmake/CompilerOptions.cmake)

#### C++ Standard
- **Standard**: C++17
- **Requirement**: mandatory

#### Qt Automation Tools
```cmake
CMAKE_AUTOMOC ON   # automatic Meta-Object Compiler
CMAKE_AUTORCC ON   # automatic Resource Compiler
CMAKE_AUTOUIC ON   # automatic UI Compiler
```

#### Compiler Optimization Options (MSVC)

**Release mode** (default):
```cmake
/O2    # optimize for speed
/Ob2   # inline any suitable function
/Oi    # enable intrinsic functions
/Ot    # favor fast code
/Oy    # omit frame pointers
/GL    # whole-program optimization
```

**Debug mode**:
```cmake
/Od    # disable optimization
/Zi    # generate full debug information
```

#### Other Settings
- **Definitions**: `OPENCV_DEPENDENCIES`
- **compile_commands.json**: generated automatically (for IDE IntelliSense)

---

### 2. [Dependencies.cmake](file:///d:/Git/stereo_warehouse_inspection/image_capture/cmake/Dependencies.cmake)

#### Qt6
```cmake
find_package(Qt6 REQUIRED COMPONENTS Widgets)
```

#### OpenCV
```cmake
find_package(OpenCV REQUIRED)
```

#### Open3D
```cmake
find_package(Open3D REQUIRED)
```
Used for point cloud processing and algorithm computation.

---

### 3. [PercipioSDK.cmake](file:///d:/Git/stereo_warehouse_inspection/image_capture/cmake/PercipioSDK.cmake)

#### Camera SDK Paths
```cmake
CAMPORT3_ROOT    = ${CMAKE_CURRENT_SOURCE_DIR}/camera_sdk
CAMPORT3_LIB_DIR = ${CAMPORT3_ROOT}/lib/win/x64
```

#### Imported tycam Shared Library
```cmake
add_library(tycam SHARED IMPORTED)
```

#### C++ API Wrapper Library (`cpp_api_lib`)
**Type**: static library

**Source files**:
`camera_sdk/sample_v2/cpp/*`, `camera_sdk/common/*`

**Dependencies**: OpenCV

---

## Build Workflow

### Configure the Project
```bash
cd image_capture/build
cmake ..
```

Optional arguments:
```bash
-DOpenCV_DIR=<path>   # OpenCV location
-DQt6_DIR=<path>      # Qt6 location
-DOpen3D_DIR=<path>   # Open3D location
```

### Build the Project
```bash
cmake --build . --config Release
# or
cmake --build . --config Debug
```

---

## Dependency Summary

| Dependency | Version requirement | Purpose |
|--------------|--------------------|---------|
| CMake | ≥ 3.10 | Build system |
| C++ | C++17 | Language standard |
| Qt6 | Widgets component | GUI framework |
| OpenCV | 4.x | Image processing |
| Open3D | 0.17+ | 3D point cloud processing |
| Percipio SDK | tycam.dll | Camera driver |
| MSVC | VS2022 (v143) | Compiler |

---

## Maintenance Notes

1. **Environment consistency**: make sure all dependencies (Qt, OpenCV, Open3D) are x64 builds compiled with MSVC.
2. **DLL management**: at runtime, all required DLLs must sit next to the executable.
3. **Version checks**: keep the Open3D and OpenCV versions consistent to avoid ABI conflicts.

---

*Document last updated: 2025-12-19*
docs/project_architecture.md (Normal file, 513 lines added)
@@ -0,0 +1,513 @@
# Project Architecture and Call Relationships

## 1. System Overview

This system is a stereo-vision-based image acquisition and processing system for warehouse inspection. It integrates the Percipio industrial depth camera SDK and the Hikvision MVS industrial 2D camera SDK for multi-camera acquisition, uses OpenCV for image processing and Qt6 for the user interface, and communicates with external systems (such as the WMS warehouse management system and the robot control system) via Redis for task scheduling.

Main features:
- **Unified multi-camera management**: simultaneous data acquisition from depth cameras (Percipio) and 2D cameras (MVS)
- **Real-time image preview and status monitoring**: the GUI displays live camera images, including pseudo-color rendering of depth maps
- **Redis-based task triggering and result reporting**: task listening and result writing across databases
- **Multiple detection algorithms**:
  - Slot occupancy detection (Flag 1): object detection on 2D images
  - Pallet offset detection (Flag 2): 3D position computation from depth data
  - Beam/rack deflection detection (Flag 3): structural deformation measurement from depth data
  - Visual inventory detection (Flag 4): Halcon-based QR code recognition with continuous scanning and de-duplication
  - Inventory stop signal (Flag 5): stops the continuous scanning loop of Flag 4
- **Smart camera assignment**: the appropriate camera is selected automatically based on the task type
- **System configuration management and logging**: parameter persistence, live log display and error handling

## 2. Directory Layout

```text
scripts/                                   # Batch scripts (Redis database setup, simulated WMS tasks, etc.)
docs/                                      # Project documentation
├── project_architecture.md                # Project architecture (this document)
├── project_class_interaction.md           # Class interaction documentation
└── cmake_configuration_summary.md         # CMake build configuration documentation

image_capture/
├── CMakeLists.txt                         # Main build configuration
├── cmake/                                 # CMake modules
│   ├── CompilerOptions.cmake              # Compiler options
│   ├── Dependencies.cmake                 # Dependency management (Qt6, OpenCV, Open3D)
│   └── PercipioSDK.cmake                  # Percipio camera SDK configuration
├── config.json                            # System configuration (camera parameters, algorithm thresholds, Redis settings, ...)
└── src/
    ├── algorithm/                         # Core algorithm library
    │   ├── core/                          # Algorithm base class and result definitions
    │   │   ├── detection_base.h/cpp       # Detection algorithm base class
    │   │   └── detection_result.h/cpp     # Detection result data structures
    │   ├── detections/                    # Concrete detection algorithms
    │   │   ├── slot_occupancy/            # Slot occupancy detection
    │   │   ├── pallet_offset/             # Pallet offset detection
    │   │   ├── beam_rack_deflection/      # Beam/rack deflection detection
    │   │   └── visual_inventory/          # Visual inventory detection
    │   └── utils/                         # Image processing utilities
    ├── camera/                            # Camera driver layer
    │   ├── ty_multi_camera_capture.h/cpp  # Percipio depth camera wrapper
    │   └── mvs_multi_camera_capture.h/cpp # Hikvision MVS 2D camera wrapper
    ├── common/                            # Common facilities
    │   ├── config_manager.h/cpp           # Configuration singleton (JSON load/save)
    │   ├── log_manager.h/cpp              # Log management (spdlog wrapper)
    │   └── log_streambuf.h                # Redirects std::cout to the GUI
    ├── device/                            # Hardware device management
    │   └── device_manager.h/cpp           # Camera device singleton (unified device interface)
    ├── gui/                               # User interface (Qt6)
    │   └── mainwindow.h/cpp/ui            # Main window (live preview + settings)
    ├── redis/                             # Communication module
    │   └── redis_communicator.h/cpp       # Redis client wrapper (cross-database support)
    ├── task/                              # Task scheduling
    │   └── task_manager.h/cpp             # Task manager (queue + thread + algorithm dispatch)
    ├── vision/                            # System control
    │   └── vision_controller.h/cpp        # Top-level controller (coordinates Redis + tasks)
    ├── common_types.h                     # Common data types (Point3D, CameraIntrinsics, ...)
    ├── tools/                             # Utility programs
    │   ├── calibration_tool/              # Camera calibration tool
    │   ├── slot_algo_tuner/               # Slot algorithm tuning tool
    │   └── intrinsic_dumper/              # Camera intrinsics export tool
    └── main.cpp                           # Program entry point
```

## 3. Core Architecture

The system uses a layered architecture with clear responsibilities per module:

- **Presentation layer**: `MainWindow` handles the Qt6 UI, live camera preview, manual control, parameter configuration and log display.
- **Control layer**: `VisionController` is the system-level controller; it starts/stops the services, coordinates Redis communication and task management, and decouples modules through callbacks.
- **Business logic layer**: `TaskManager` handles task queue management, algorithm dispatch and result processing; `DeviceManager` is the single access point for hardware resources (singleton).
- **Algorithm layer**: the concrete vision detection algorithms; all of them inherit from `DetectionBase` and expose a uniform `execute()` interface.
- **Infrastructure layer**: `CameraCapture` wraps the low-level camera SDK calls, `RedisCommunicator` handles cross-database communication, `ConfigManager` manages the system configuration.

### Layered Architecture Diagram

```mermaid
graph TB
    subgraph Presentation ["Presentation Layer"]
        direction TB
        GUI[MainWindow]
    end

    subgraph Control ["Control Layer"]
        VC[VisionController]
    end

    subgraph Business ["Business Logic Layer"]
        direction TB
        TM[TaskManager]
        DM[DeviceManager]
    end

    subgraph Algorithm ["Algorithm Layer"]
        direction TB
        DB[DetectionBase]
        Det["Concrete Detections<br/>(Slot, Beam, etc.)"]
    end

    subgraph Infrastructure ["Infrastructure Layer"]
        direction TB
        Cam[CameraCapture]
        Redis[RedisCommunicator]
        Conf[ConfigManager]
    end

    %% Inter-layer calls
    GUI --> VC
    VC --> TM
    VC --> Redis

    TM --> DM
    TM --> DB
    Det -->|inherits| DB

    DM --> Cam
    DM --> MVS[MvsMultiCameraCapture]

    %% Cross-layer auxiliary calls
    GUI -.-> Conf
    TM -.-> Conf

    style Presentation fill:#e1f5fe,stroke:#01579b
    style Control fill:#e8f5e9,stroke:#2e7d32
    style Business fill:#fff3e0,stroke:#ef6c00
    style Algorithm fill:#f3e5f5,stroke:#7b1fa2
    style Infrastructure fill:#eceff1,stroke:#455a64
```

### System Class Diagram

```mermaid
classDiagram
    class MainWindow {
        +VisionController visionController_
        +QTimer imageTimer_
        +updateImage()
        +onSaveSettings()
        +showLogMessage()
    }

    class VisionController {
        +shared_ptr<RedisCommunicator> redis_comm_
        +shared_ptr<RedisCommunicator> redis_result_comm_
        +shared_ptr<TaskManager> task_manager_
        +initialize()
        +start()
        +stop()
        -onTaskReceived()
    }

    class DeviceManager {
        <<Singleton>>
        +shared_ptr<CameraCapture> capture_
        +unique_ptr<MvsMultiCameraCapture> mvs_cameras_
        +initialize()
        +startAll()
        +getLatestImages()
        +computePointCloud()
        +get2DCameraImage()
    }

    class TaskManager {
        +queue<RedisTaskData> task_queue_
        +map<int, DetectionBase*> detectors_
        +thread execution_thread_
        +handleTask()
        +executeDetectionTask()
        -executeVisualInventoryLoop()
        -processResult()
        -addWarningAlarmSignals()
    }

    class CameraCapture {
        +vector<CameraInfo> cameras_
        +getLatestImages()
        +computePointCloud()
        +start()
        -captureThreadFunc()
    }

    class MvsMultiCameraCapture {
        +vector<CameraInfo> cameras_
        +getLatestImage()
        +start()
    }

    class RedisCommunicator {
        +connect()
        +startListening()
        +writeString()
        +setTaskCallback()
    }

    class ConfigManager {
        <<Singleton>>
        +json config_data_
        +loadConfig()
        +saveConfig()
        +getValue()
    }

    class DetectionBase {
        <<Abstract>>
        +execute(depth, color, side, result, point_cloud, beam_length)
    }

    class SlotOccupancyDetection {
        +execute()
    }

    class PalletOffsetDetection {
        +execute()
    }

    class BeamRackDeflectionDetection {
        +execute()
    }

    class VisualInventoryDetection {
        +execute()
    }

    MainWindow --> VisionController : owns and manages
    VisionController --> RedisCommunicator : manages (task listening)
    VisionController --> TaskManager : dispatches tasks
    RedisCommunicator --> VisionController : callback notification (onTaskReceived)

    VisionController ..> DeviceManager : depends on (global singleton)
    TaskManager ..> DeviceManager : fetches image data (dependency)
    DeviceManager --> CameraCapture : owns (depth cameras)
    DeviceManager --> MvsMultiCameraCapture : owns (2D cameras)

    TaskManager --> DetectionBase : invokes algorithms
    DetectionBase <|-- SlotOccupancyDetection : inheritance
    DetectionBase <|-- PalletOffsetDetection : inheritance
    DetectionBase <|-- BeamRackDeflectionDetection : inheritance
    DetectionBase <|-- VisualInventoryDetection : inheritance

    MainWindow ..> ConfigManager : reads/writes configuration (dependency)
    TaskManager ..> ConfigManager : reads parameters (dependency)
    MainWindow ..> DeviceManager : image display (dependency)
```

## 4. Key Modules

### 4.1 GUI and Main Entry (MainWindow)
- **Responsibility**: Qt6 main window; UI rendering, user interaction, parameter configuration, live preview and log display.
- **Call relationships**:
  - On startup it creates the `VisionController` and initializes the system.
  - A `QTimer` (30 FPS) periodically pulls the latest images from `DeviceManager` and refreshes the display (a minimal timer sketch follows this section).
- **Live preview**: pseudo-color depth display and color display with adaptive scaling.
- **Settings UI**: the Settings tab exposes the full algorithm parameter configuration, including:
  - Beam/Rack Deflection: thresholds and ROI configuration for beam/rack deflection detection
  - Pallet Offset: pallet offset detection parameters
  - System settings: Redis connection parameters, camera settings, etc.
- **Log display**: `LogStreamBuf` redirects `std::cout/cerr` into the GUI log window.
- Loads and saves `config.json` through `ConfigManager`, with hot-reload support.

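A minimal, self-contained sketch of the 30 FPS preview timer described above. In the real code this lives in `MainWindow` (`imageTimer_` driving `updateImage()`); the standalone `main()` and the label used here are illustrative only.

```cpp
#include <QApplication>
#include <QLabel>
#include <QTimer>

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    QLabel preview("waiting for frames...");
    preview.show();

    // Equivalent of MainWindow::imageTimer_ -> updateImage(): fire every ~33 ms (30 FPS).
    QTimer imageTimer;
    QObject::connect(&imageTimer, &QTimer::timeout, [&preview]() {
        // The real updateImage() would fetch the latest frames from DeviceManager
        // and convert the cv::Mat images to QImage before displaying them.
        preview.setText("frame refreshed");
    });
    imageTimer.start(33);
    return app.exec();
}
```
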
### 4.2 Vision Controller (VisionController)
- **Responsibility**: the core controller of the system; coordinates Redis communication and task management and can also run headless.
- **Architecture**:
  - Manages the lifetimes of `RedisCommunicator` and `TaskManager` through smart pointers.
  - Supports cross-database Redis operation: a task-listening DB (input) and a result-writing DB (output).
  - Decouples modules through callbacks to avoid circular dependencies.
- **Workflow** (a minimal wiring sketch follows this list):
  1. `initialize()`: creates and initializes the two Redis connectors (task DB and result DB).
  2. Initializes the `TaskManager`, passing in the Redis connectors used for result writing and task-state clearing.
  3. `start()`: starts the Redis task-listening thread and registers the task callback.
  4. `onTaskReceived()`: when a Redis task arrives, forwards it via the callback to `TaskManager::handleTask()`.

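A minimal sketch of the callback wiring in steps 3 and 4 above. Class and method names follow the class diagram in section 3; the `RedisTaskData` layout and the placeholder bodies are assumptions.

```cpp
#include <functional>
#include <memory>
#include <string>

struct RedisTaskData {        // hypothetical minimal layout of the task payload
    int flag = 0;
    std::string side;
    std::string time;
};

class RedisCommunicator {
public:
    void setTaskCallback(std::function<void(const RedisTaskData&)> cb) { callback_ = std::move(cb); }
    void startListening() { /* watch the task keys; invoke callback_(task) on change */ }
private:
    std::function<void(const RedisTaskData&)> callback_;
};

class TaskManager {
public:
    void handleTask(const RedisTaskData&) { /* push into the thread-safe task queue */ }
};

class VisionController {
public:
    void start() {
        // The callback forwards tasks to TaskManager without either module holding
        // a direct reference to the other, which avoids circular dependencies.
        redis_comm_->setTaskCallback(
            [this](const RedisTaskData& task) { task_manager_->handleTask(task); });
        redis_comm_->startListening();
    }
private:
    std::shared_ptr<RedisCommunicator> redis_comm_ = std::make_shared<RedisCommunicator>();
    std::shared_ptr<TaskManager> task_manager_ = std::make_shared<TaskManager>();
};
```
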
### 4.3 Task Management (TaskManager)
- **Responsibility**: the core business logic processor for task queue management, algorithm dispatch, result processing and cross-thread execution.
- **Architecture**:
  - **Asynchronous processing**: a task queue plus a dedicated execution thread, so neither the Redis listener nor the GUI is blocked.
  - **Smart camera assignment**: the camera device and data type are selected automatically from the task flag.
  - **De-duplication**: the Flag 4 visual inventory supports continuous scanning with QR code de-duplication.
  - **State management**: exposes a task execution status query interface for external monitoring.
- **Workflow** (a queue/worker sketch follows this list):
  1. `handleTask()`: receives a Redis task and pushes it into a thread-safe task queue.
  2. `taskExecutionThreadFunc()`: a background thread keeps draining the queue.
  3. **Camera selection** by flag:
     - Flag 1: MVS 2D cameras (SN: DA8743029 left / DA8742900 right)
     - Flag 2/3: Percipio depth cameras (SN: 207000146458 left / 207000146703 right)
     - Flag 4: MVS 2D camera (SN: DA8789631) + continuous scanning loop
  4. **Data acquisition**: images are fetched from `DeviceManager`; for Flags 2/3 a point cloud is generated.
  5. **Algorithm execution**: the matching `DetectionBase::execute()` is invoked.
  6. **Result processing**: `processResult()` formats the JSON, computes warning/alarm signals and writes to the Redis result DB.

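A sketch of the queue-plus-worker pattern from steps 1 and 2 above, so that `handleTask()` returns immediately and never blocks the Redis listener or the GUI. Member names mirror the class diagram; everything else is an assumption.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

struct RedisTaskData { int flag = 0; std::string side, time; };  // hypothetical payload

class TaskManager {
public:
    TaskManager() : execution_thread_([this] { taskExecutionThreadFunc(); }) {}
    ~TaskManager() {
        { std::lock_guard<std::mutex> lk(mutex_); stop_ = true; }
        cv_.notify_all();
        execution_thread_.join();
    }

    // Called from the Redis callback thread; only enqueues.
    void handleTask(const RedisTaskData& task) {
        { std::lock_guard<std::mutex> lk(mutex_); task_queue_.push(task); }
        cv_.notify_one();
    }

private:
    // Background loop: pop a task, then run camera selection, detector and result writing.
    void taskExecutionThreadFunc() {
        for (;;) {
            std::unique_lock<std::mutex> lk(mutex_);
            cv_.wait(lk, [this] { return stop_ || !task_queue_.empty(); });
            if (stop_ && task_queue_.empty()) return;
            RedisTaskData task = task_queue_.front();
            task_queue_.pop();
            lk.unlock();
            // executeDetectionTask(task);  // grab images, run DetectionBase::execute(), processResult()
        }
    }

    std::queue<RedisTaskData> task_queue_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool stop_ = false;
    std::thread execution_thread_;   // declared last so it starts after the other members
};
```
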
### 4.4 Device Management (DeviceManager)
- **Responsibility**: unified management interface for all camera types; the system-wide singleton access point for hardware resources.
- **Architecture**:
  - **Dual SDK support**: manages Percipio depth cameras and MVS 2D cameras at the same time.
  - **Unified interface**: consistent device enumeration, start/stop and data retrieval APIs.
  - **Thread safety**: every interface is thread-safe and supports concurrent access.
  - **Resource management**: smart pointers and RAII make sure camera resources are released correctly.
- **Functions** (a singleton access sketch follows this list):
  - `initialize()`: scans and initializes all camera types.
  - `getLatestImages()`: unified image retrieval (depth + color).
  - `get2DCameraImage()`: dedicated 2D camera image retrieval.
  - `computePointCloud()`: computes a 3D point cloud from the depth image and camera intrinsics.
- **Camera index mapping**: internally maps depth camera and 2D camera indices.

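A sketch of the singleton access point described above (a Meyers singleton behind a coarse lock). Method names follow the class diagram; the bodies are placeholders.

```cpp
#include <mutex>
#include <opencv2/core.hpp>

class DeviceManager {
public:
    static DeviceManager& getInstance() {
        static DeviceManager instance;   // initialized once, thread-safe since C++11
        return instance;
    }
    DeviceManager(const DeviceManager&) = delete;
    DeviceManager& operator=(const DeviceManager&) = delete;

    bool initialize() {
        std::lock_guard<std::mutex> lk(mutex_);
        // enumerate Percipio depth cameras and MVS 2D cameras here
        return true;
    }
    bool getLatestImages(int index, cv::Mat& depth, cv::Mat& color) {
        std::lock_guard<std::mutex> lk(mutex_);
        // copy the most recent frames for camera `index` into depth/color
        (void)index; (void)depth; (void)color;
        return false;   // placeholder: no frame available in this sketch
    }

private:
    DeviceManager() = default;
    std::mutex mutex_;   // every public call locks, giving the thread safety noted above
};

// Usage from any module: DeviceManager::getInstance().initialize();
```
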
### 4.5 Camera Driver Layer
- **Percipio depth cameras** (`ty_multi_camera_capture.cpp`):
  - Based on the Percipio industrial camera SDK; supports TY-series depth cameras.
  - Maintains an independent capture thread and frame buffer per camera.
  - Synchronized depth and color acquisition with internal timestamp alignment.
  - **Point cloud computation**: integrates `TYMapDepthImageToPoint3d` and uses the calibration parameters to produce an accurate 3D point cloud (a back-projection sketch of the underlying math follows this list).
  - Automatic distortion correction and depth filtering.

- **MVS 2D cameras** (`mvs_multi_camera_capture.cpp`):
  - Based on the Hikvision industrial camera SDK; supports MV-series 2D cameras.
  - Continuous acquisition mode with internal buffer management.
  - High-frame-rate color acquisition, suitable for fast detection scenarios.
  - Camera serial-number matching for device identification in multi-camera setups.

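For reference, the math behind `computePointCloud()` is the standard pinhole back-projection shown below. The production code delegates this to the Percipio SDK (`TYMapDepthImageToPoint3d`) with the calibrated intrinsics; the `Point3D`/`CameraIntrinsics` layouts here are assumptions modeled on `common_types.h`.

```cpp
#include <cstdint>
#include <vector>
#include <opencv2/core.hpp>

struct Point3D { float x, y, z; };                 // assumed layout
struct CameraIntrinsics { float fx, fy, cx, cy; }; // assumed layout

// depth_u16_mm: CV_16UC1 depth image in millimetres, 0 = invalid pixel.
std::vector<Point3D> depthToPointCloud(const cv::Mat& depth_u16_mm,
                                       const CameraIntrinsics& K) {
    std::vector<Point3D> cloud;
    cloud.reserve(depth_u16_mm.total());
    for (int v = 0; v < depth_u16_mm.rows; ++v) {
        for (int u = 0; u < depth_u16_mm.cols; ++u) {
            const uint16_t d = depth_u16_mm.at<uint16_t>(v, u);
            if (d == 0) continue;                   // skip invalid pixels
            const float z = static_cast<float>(d);
            cloud.push_back({(u - K.cx) * z / K.fx,   // X = (u - cx) * Z / fx
                             (v - K.cy) * z / K.fy,   // Y = (v - cy) * Z / fy
                             z});
        }
    }
    return cloud;
}
```
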
### 4.6 Configuration Management (ConfigManager)
- **Responsibility**: manages the `config.json` file and centralizes the system configuration.
- **Managed content**:
  - Redis connection information.
  - Algorithm thresholds (Beam/Rack, Pallet Offset, ...).
  - ROI (Region of Interest) coordinate points.
  - General system parameters (minimum/maximum depth, ...).
- **Characteristics**: singleton; supports hot reload (for some parameters) and persistent saving. It is loaded by `MainWindow` at startup so the algorithms use the persisted user settings, and the Settings tab in the GUI operates directly on this module (a minimal access sketch follows).

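A minimal sketch of the singleton configuration access described above. It uses nlohmann::json purely for illustration; the JSON library actually used by `ConfigManager` is not stated in this document, so treat the parsing details as assumptions. The keys in the usage line come from `config.json` in this commit.

```cpp
#include <fstream>
#include <string>
#include <nlohmann/json.hpp>

class ConfigManager {
public:
    static ConfigManager& getInstance() { static ConfigManager inst; return inst; }

    bool loadConfig(const std::string& path) {
        std::ifstream in(path);
        if (!in) return false;
        in >> config_data_;          // parse config.json
        return true;
    }

    template <typename T>
    T getValue(const std::string& section, const std::string& key, T fallback) const {
        if (config_data_.contains(section) && config_data_[section].contains(key))
            return config_data_[section][key].get<T>();
        return fallback;             // keep running with a sane default if the key is missing
    }

private:
    ConfigManager() = default;
    nlohmann::json config_data_;
};

// e.g. ConfigManager::getInstance().getValue<double>("general", "min_depth_mm", 800.0);
```
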
## 5. Execution and Data Flow

### 5.1 Initialization
1. **Program start**: `main.cpp` creates the Qt6 application and sets the Fusion style.
2. **MainWindow construction**:
   - Initializes the Qt6 UI (main window + Settings tab).
   - **Configuration loading**: calls `ConfigManager::loadConfig()` to load the system configuration from `config.json`.
   - **Device initialization**: calls `DeviceManager::initialize()` to scan the Percipio and MVS cameras.
   - **Controller creation**: instantiates the `VisionController` with the Redis configuration parameters.
   - **Redis initialization**: `VisionController::initialize()` creates the task-listening and result-writing Redis connectors.
   - **Timer start**: starts the 30 FPS `QTimer` for the live image preview.
3. **Device start**: `DeviceManager::startAll()` starts all camera capture threads.
4. **Service start**: `VisionController::start()` enables Redis listening, so tasks are only accepted once the devices are ready.

### 5.2 Automatic Task Execution Flow (Redis-triggered)

```mermaid
sequenceDiagram
    participant WMS as WMS / external system
    participant Redis_Task as Redis_Task_DB
    participant RC_Task as RedisCommunicator_Task
    participant VC as VisionController
    participant TM as TaskManager
    participant DM as DeviceManager
    participant Algo as DetectionAlgorithm
    participant RC_Result as RedisCommunicator_Result
    participant Redis_Result as Redis_Result_DB

    WMS->>Redis_Task: SET vision_task_flag=1,side=left,time=xxx
    Redis_Task->>RC_Task: Key change notification
    RC_Task->>VC: onTaskReceived(task_data)
    VC->>TM: handleTask(task_data)

    activate TM
    TM->>TM: Queue task (async)
    TM->>DM: getLatestImages() or get2DCameraImage()
    DM-->>TM: images (depth+color or 2D only)

    alt Flag 2/3 (point cloud required)
        TM->>DM: computePointCloud(depth)
        DM-->>TM: point_cloud (vector<Point3D>)
    end

    TM->>Algo: execute(images, point_cloud, ...)
    activate Algo
    Algo-->>TM: DetectionResult
    deactivate Algo

    TM->>TM: processResult() + addWarningAlarmSignals()
    TM->>RC_Result: writeDetectionResult(json_map)
    RC_Result->>Redis_Result: MSET key1=value1 key2=value2 ...

    TM->>RC_Task: writeString(vision_task_flag, "0")
    TM->>RC_Task: writeString(vision_task_side, "")
    TM->>RC_Task: writeString(vision_task_time, "")
    RC_Task->>Redis_Task: Clear task flags
    deactivate TM
```

1. **External trigger**: the WMS publishes a task via the Redis task DB (setting `vision_task_flag`, `side`, `time`).
2. **Asynchronous reception**: `RedisCommunicator_Task` listens on the task DB and triggers the callback to `VisionController`.
3. **Task queue**: `VisionController` enqueues the task into the `TaskManager`'s thread-safe queue.
4. **Background execution**: the `TaskManager` execution thread processes the task and picks camera and algorithm by flag:
   - **Flag 1**: MVS 2D camera → `SlotOccupancyDetection`
   - **Flag 2**: Percipio depth camera → `PalletOffsetDetection` (with point cloud)
   - **Flag 3**: Percipio depth camera → `BeamRackDeflectionDetection` (with point cloud)
   - **Flag 4**: MVS 2D camera → `VisualInventoryDetection` (continuous loop + QR recognition)
5. **Smart data acquisition**: the appropriate `DeviceManager` interface is called depending on the task type.
6. **Result processing**: warning/alarm signals are computed and the result is formatted as JSON.
7. **Cross-DB writes**: results go to the Redis result DB, task-state clearing goes to the task DB.

### 5.3 Live Monitoring Flow (GUI)

```mermaid
sequenceDiagram
    participant Timer as QTimer (30FPS)
    participant MainWin as MainWindow
    participant DM as DeviceManager

    loop every 33 ms
        Timer->>MainWin: timeout()
        activate MainWin
        MainWin->>DM: getLatestImages(0) + get2DCameraImage(0)
        DM-->>MainWin: depth_img, color_img, mvs_img

        alt depth camera active
            MainWin->>MainWin: applyColorMap(depth_img) → pseudo-color display
        else 2D camera active
            MainWin->>MainWin: display color image
        end

        MainWin->>MainWin: MatToQImage() + scaleToFit()
        MainWin->>MainWin: update QLabel displays
        deactivate MainWin
    end
```

1. **High refresh rate**: the `QTimer` fires `updateImage()` at 30 FPS for a smooth live preview.
2. **Multi-camera preview**: the latest depth and 2D camera images are fetched together and can be displayed side by side.
3. **Image processing**: depth maps get a pseudo-color map for easier inspection; color images are shown directly.
4. **Adaptive rendering**: the OpenCV Mat is converted to a QImage and scaled to fit the window (a conversion sketch follows this list).
5. **State separation**: image display runs asynchronously to task execution and does not affect detection performance.

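A sketch of the `MatToQImage()` step referenced in item 4 above. The BGR-to-RGB handling is the standard OpenCV/Qt conversion; the exact helper used by `MainWindow` may differ.

```cpp
#include <QImage>
#include <opencv2/core.hpp>

QImage MatToQImage(const cv::Mat& mat) {
    if (mat.type() == CV_8UC3) {
        // OpenCV stores BGR, QImage expects RGB, hence rgbSwapped().
        QImage img(mat.data, mat.cols, mat.rows,
                   static_cast<int>(mat.step), QImage::Format_RGB888);
        return img.rgbSwapped().copy();    // copy() detaches from mat's buffer
    }
    if (mat.type() == CV_8UC1) {
        QImage img(mat.data, mat.cols, mat.rows,
                   static_cast<int>(mat.step), QImage::Format_Grayscale8);
        return img.copy();
    }
    return QImage();                        // formats not handled in this sketch
}
```
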
## 6. Error Handling and Logging
- **Logging**: `LogManager` with `spdlog` (if integrated), otherwise standard output.
- **Redirection**: `LogStreamBuf` redirects `std::cout/cerr` into the GUI log window, which helps on-site debugging.
- **Error recovery**: camera-disconnect reconnection (implemented in, or planned for, the driver layer).

## 7. Detection Algorithms

### 7.1 Algorithm Framework (DetectionBase)
All detection algorithms inherit from the abstract base class `DetectionBase` and implement a single interface:
```cpp
virtual bool execute(const cv::Mat& depth_img,
                     const cv::Mat& color_img,
                     const std::string& side,
                     DetectionResult& result,
                     const std::vector<Point3D>* point_cloud = nullptr,
                     int beam_length = 0) = 0;
```

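A skeleton of a concrete detector built on this interface. `getTaskType()`/`getTaskName()` mirror the declarations in `src/algorithm/core/detection_base.h` later in this commit; the class name, the flag value 6 and the empty body are illustrative only.

```cpp
#include <string>
#include <vector>
#include <opencv2/core.hpp>

#include "algorithm/core/detection_base.h"
#include "algorithm/core/detection_result.h"

// Hypothetical new detector: subclass DetectionBase and fill in execute().
class MyNewDetection : public DetectionBase {
public:
    bool execute(const cv::Mat& depth_img, const cv::Mat& color_img,
                 const std::string& side, DetectionResult& result,
                 const std::vector<Point3D>* point_cloud = nullptr,
                 int beam_length = 0) override {
        (void)depth_img; (void)color_img; (void)side; (void)point_cloud; (void)beam_length;
        result.result_status = "fail";
        // ... run the actual image / point-cloud analysis here ...
        result.result_status = "success";
        return true;
    }
    int getTaskType() const override { return 6; }   // next unused flag value (assumption)
    std::string getTaskName() const override { return "MyNewDetection"; }
};
```
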
### 7.2 Concrete Algorithm Implementations

#### Flag 1: Slot Occupancy Detection (SlotOccupancyDetection)
- **Input**: 2D color image (MVS camera)
- **Algorithm**: image-processing-based object detection and position judgement
- **Output**: slot occupancy state (occupied/free)
- **Cameras**: DA8743029 (left), DA8742900 (right)

#### Flag 2: Pallet Offset Detection (PalletOffsetDetection)
- **Input**: depth image + color image + 3D point cloud
- **Algorithm**: point-cloud-based 3D position computation; measures the pallet offset relative to a reference position
- **Output**: lateral offset (mm), longitudinal offset (mm), fork-hole deformation (mm), rotation angle (°)
- **Cameras**: 207000146458 (left), 207000146703 (right)
- **Warning/alarm**: threshold-based grading into normal / warning / alarm (four thresholds A-D)

#### Flag 3: Beam/Rack Deflection Detection (BeamRackDeflectionDetection)
- **Input**: depth image + color image + 3D point cloud
- **Algorithm**: point-cloud-based structural deformation measurement
- **Output**: beam deflection (mm), rack (upright) deflection (mm)
- **Cameras**: 207000146458 (left), 207000146703 (right)
- **Warning/alarm**: threshold-based grading (four thresholds A-D)

#### Flag 4: Visual Inventory Detection (VisualInventoryDetection)
- **Input**: 2D color image (MVS camera)
- **Algorithm**: Halcon-based QR code recognition with continuous scanning and de-duplication
- **Special behaviour**: loops until a Flag 5 stop signal is received, de-duplicating in real time
- **Output**: JSON list of barcodes, e.g. `{"side": ["BOX001", "BOX002", ...]}`
- **Camera**: DA8789631 (dedicated inventory camera)

#### Flag 5: Inventory Stop Signal
- **Function**: stops the continuous scanning loop of Flag 4
- **No algorithm execution**: acts purely as a control signal

### 7.3 Camera Assignment Strategy
The camera is selected automatically from the task flag (a routing sketch follows the table):

| Flag | Camera type | Serial number | Position | Data type |
|------|-------------|---------------|----------|-----------|
| 1 | MVS 2D | DA8743029 / DA8742900 | left / right | color image |
| 2 | Percipio depth | 207000146458 / 207000146703 | left / right | depth + color + point cloud |
| 3 | Percipio depth | 207000146458 / 207000146703 | left / right | depth + color + point cloud |
| 4 | MVS 2D | DA8789631 | dedicated inventory | color image |

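A sketch of the flag-to-camera routing captured by the table above. The serial numbers come straight from the table; the `CameraSelection` struct and the helper function are illustrative only.

```cpp
#include <stdexcept>
#include <string>

struct CameraSelection {        // hypothetical helper type
    bool use_depth_camera;      // true = Percipio depth camera, false = MVS 2D camera
    std::string serial;
    bool need_point_cloud;
};

CameraSelection selectCamera(int flag, const std::string& side) {
    const bool left = (side == "left");
    switch (flag) {
        case 1: return {false, left ? "DA8743029" : "DA8742900", false};
        case 2:
        case 3: return {true, left ? "207000146458" : "207000146703", true};
        case 4: return {false, "DA8789631", false};   // dedicated inventory camera
        default: throw std::invalid_argument("unsupported vision_task_flag");
    }
}
```
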
## 8. Compilation and Build
- **Build system**: CMake 3.10+
- **Language**: C++17
- **Target platform**: Windows 10/11 (MSVC 2022 v143)
- **Main dependencies**:
  - **Qt6**: Widgets component (GUI framework)
  - **OpenCV 4.x**: image processing and computer vision
  - **Open3D 0.17+**: 3D point cloud processing
  - **Percipio SDK**: Percipio industrial camera driver
  - **MVS SDK**: Hikvision industrial camera driver
  - **Redis C++ client**: hiredis + redis-plus-plus (Redis communication)
- **Optional dependency**: Halcon (QR code recognition, used by Flag 4)
- **Build workflow**: standard CMake flow with Release/Debug configurations

---

*Document last updated: 2025-01-06*
docs/project_class_interaction.md (Normal file, 133 lines added)
@@ -0,0 +1,133 @@
# Project Class Interaction Documentation

This document describes the call relationships, data flow and module boundaries between the core functional classes of the `image_capture` project.

## 1. Core Modules Overview

The system consists of the following core modules:

* **GUI module (`MainWindow`)**: program entry and UI display; responsible for system initialization.
* **Vision controller (`VisionController`)**: the central hub; coordinates communication and task management.
* **Task management (`TaskManager`)**: executes the actual business logic, algorithm dispatch and result processing.
* **Device management (`DeviceManager`)**: unified management of cameras and other hardware (singleton).
* **Communication module (`RedisCommunicator`)**: talks to external systems (such as the WMS) via Redis.
* **Algorithm module (`DetectionBase` and its subclasses)**: the concrete image processing algorithms.

## 2. Class Interaction Diagram

```mermaid
classDiagram
    class MainWindow {
        +VisionController vision_controller
        +init()
    }

    class VisionController {
        -shared_ptr<RedisCommunicator> redis_comm
        -shared_ptr<TaskManager> task_manager
        +start()
        +stop()
        -onTaskReceived()
    }

    class RedisCommunicator {
        +startListening()
        +writeDetectionResult()
        +setTaskCallback()
    }

    class TaskManager {
        -queue<RedisTaskData> task_queue
        -map detectors
        +handleTask()
        -executeDetectionTask()
        -getDetector(flag)
    }

    class DeviceManager {
        <<Singleton>>
        +getInstance()
        +getLatestImages()
        +startAll()
    }

    class DetectionBase {
        <<Abstract>>
        +execute(depth, color, ...)
    }

    class ConcreteDetection {
        +execute()
    }

    MainWindow --> VisionController : owns and manages
    VisionController --> RedisCommunicator : manages (listen/send)
    VisionController --> TaskManager : dispatches tasks
    RedisCommunicator --> VisionController : callback notification
    TaskManager ..> DeviceManager : fetches image data (dependency)
    TaskManager --> DetectionBase : invokes algorithms
    DetectionBase <|-- ConcreteDetection : inheritance
```

## 3. Detailed Call Flow

### 3.1 Initialization & Startup
1. **Entry point**: `main.cpp` creates the `QApplication` and instantiates `MainWindow`.
2. **MainWindow**:
    * Initializes the UI in its constructor.
    * Calls `DeviceManager::getInstance().initialize()` to scan and initialize the camera devices.
    * Instantiates the `VisionController` member.
    * Calls `VisionController::initialize()` to configure the Redis connection parameters.
    * Calls `VisionController::start()` to start the background services.
3. **VisionController**:
    * Inside `start()` it calls `RedisCommunicator::startListening()` to spin up the listening thread.

### 3.2 Task Trigger & Execution
When `vision_task_flag` changes in Redis, the flow is:

1. **RedisCommunicator**:
    * The listening thread detects the flag change.
    * It notifies the controller through the `VisionController::onTaskReceived` callback.
2. **VisionController**:
    * `onTaskReceived` passes the received `RedisTaskData` to `TaskManager::handleTask`.
3. **TaskManager**:
    * `handleTask` pushes the task into the internal queue `task_queue_`.
    * The worker thread `taskExecutionThreadFunc` pops the task from the queue.
    * **Image acquisition**: calls `DeviceManager::getInstance().getLatestImages(...)` for the latest depth and color images.
    * **Algorithm selection**: calls `getDetector(flag)` to obtain the matching algorithm instance (e.g. `PalletOffsetDetection`).
    * **Algorithm execution**: calls `detector->execute(depth_img, color_img, ...)`.
    * **Result packaging**: fills the values returned by the algorithm into a `DetectionResult` struct.

### 3.3 Result Handling
After the algorithm finishes:

1. **TaskManager**:
    * Calls `processResult(result)`.
    * This formats the result as a JSON string and computes the alarm/warning state.
    * Calls `redis_result_comm_->writeDetectionResult(json)` to write the result to Redis.
2. **RedisCommunicator**:
    * Executes the Redis SET command and writes the JSON data to the designated key.

## 4. Key Class Descriptions

### VisionController (`src/vision/vision_controller.h`)
* **Responsibility**: acts as the system facade; exposes a unified start/stop interface to the outside and coordinates Redis and the TaskManager internally.
* **Characteristic**: it is the only non-GUI business class that MainWindow interacts with directly.

### DeviceManager (`src/device/device_manager.h`)
* **Responsibility**: hides the differences between the underlying camera SDKs (Percipio / MVS) and provides a unified image retrieval interface.
* **Pattern**: singleton, so there is exactly one hardware control instance in the system.

### TaskManager (`src/task/task_manager.h`)
* **Responsibility**: the actual "brain": task buffering (queue), image acquisition, algorithm dispatch and result reporting.
* **Concurrency**: has its own execution thread, so the Redis listening thread and the GUI thread are never blocked.

### RedisCommunicator (`src/redis/redis_communicator.h`)
* **Responsibility**: wraps the low-level Redis socket operations and provides easy read/write APIs plus an asynchronous listening mechanism.

### DetectionBase (`src/algorithm/core/detection_base.h`)
* **Responsibility**: defines the unified `execute` interface for all detection algorithms.
* **Extension**: a new algorithm only needs to subclass this class and register itself in the `TaskManager` (see the registration sketch below).

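A sketch of that registration step: the `TaskManager` keeps a flag-to-detector map (`detectors_` in the diagram above), so adding an algorithm amounts to one subclass plus one entry in the map. The concrete detection classes are the ones declared in `detection_base.h`; ownership via `unique_ptr` is an assumption (the diagram shows raw pointers).

```cpp
#include <map>
#include <memory>

#include "algorithm/core/detection_base.h"   // declares DetectionBase and the concrete detections

class TaskManagerSketch {                    // stands in for the real TaskManager
public:
    TaskManagerSketch() {
        detectors_[1] = std::make_unique<SlotOccupancyDetection>();
        detectors_[2] = std::make_unique<PalletOffsetDetection>();
        // detectors_[6] = std::make_unique<MyNewDetection>();  // new algorithms plug in here
    }
    DetectionBase* getDetector(int flag) {
        auto it = detectors_.find(flag);
        return it == detectors_.end() ? nullptr : it->second.get();
    }
private:
    std::map<int, std::unique_ptr<DetectionBase>> detectors_;
};
```
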
---
*Document generated: 2025-12-29*
image_capture/CMakeLists.txt (Normal file, 251 lines added)
@@ -0,0 +1,251 @@
|
||||
cmake_minimum_required(VERSION 3.10)
|
||||
|
||||
# 支持 MSVC
|
||||
# 注意:配置 CMake 时请选择合适的生成器(例如 "Visual Studio 17 2022" 或 "Ninja")
|
||||
|
||||
project(image_capture LANGUAGES CXX)
|
||||
|
||||
# 检查是否使用 MSVC 风格的编译器
|
||||
if(NOT (MSVC OR CMAKE_CXX_COMPILER_ID STREQUAL "MSVC"))
|
||||
message(FATAL_ERROR "This project requires MSVC (Visual Studio) compiler. Please use Ninja with MSVC or Visual Studio generator.")
|
||||
endif()
|
||||
|
||||
# ============================================================================
|
||||
# 输出目录
|
||||
# ============================================================================
|
||||
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
|
||||
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
|
||||
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
|
||||
|
||||
# 生成 compile_commands.json 文件,供 IntelliSense 使用
|
||||
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
|
||||
|
||||
# ============================================================================
|
||||
# CMake 模块路径
|
||||
# ============================================================================
|
||||
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
|
||||
include(CompilerOptions)
|
||||
|
||||
# ============================================================================
|
||||
# 依赖项 (Qt6, OpenCV)
|
||||
# ============================================================================
|
||||
include(Dependencies)
|
||||
|
||||
# ============================================================================
|
||||
# 相机 SDK 配置
|
||||
# ============================================================================
|
||||
include(PercipioSDK)
|
||||
|
||||
# ============================================================================
|
||||
# 算法库
|
||||
# ============================================================================
|
||||
add_library(algorithm_lib STATIC
|
||||
src/algorithm/core/detection_base.cpp
|
||||
src/algorithm/core/detection_result.cpp
|
||||
src/algorithm/utils/image_processor.cpp
|
||||
|
||||
src/algorithm/detections/slot_occupancy/slot_occupancy_detection.cpp
|
||||
src/algorithm/detections/pallet_offset/pallet_offset_detection.cpp
|
||||
src/algorithm/detections/beam_rack_deflection/beam_rack_deflection_detection.cpp
|
||||
src/algorithm/detections/visual_inventory/visual_inventory_detection.cpp
|
||||
|
||||
)
|
||||
|
||||
target_link_libraries(algorithm_lib PUBLIC
|
||||
${OpenCV_LIBS}
|
||||
Open3D::Open3D
|
||||
Qt6::Core
|
||||
${HALCON_LIBRARIES}
|
||||
)
|
||||
|
||||
target_include_directories(algorithm_lib PUBLIC
|
||||
${HALCON_INCLUDE_DIRS}
|
||||
${OpenCV_INCLUDE_DIRS}
|
||||
src
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/third_party/percipio/common
|
||||
)
|
||||
|
||||
target_link_directories(algorithm_lib PUBLIC ${OpenCV_LIB_DIRS})
|
||||
|
||||
# ============================================================================
|
||||
# 主可执行文件
|
||||
# ============================================================================
|
||||
set(SOURCES
|
||||
src/main.cpp
|
||||
src/camera/ty_multi_camera_capture.cpp
|
||||
src/camera/mvs_multi_camera_capture.cpp
|
||||
src/device/device_manager.cpp
|
||||
src/redis/redis_communicator.cpp
|
||||
src/task/task_manager.cpp
|
||||
src/vision/vision_controller.cpp
|
||||
src/common/log_manager.cpp
|
||||
src/common/config_manager.cpp
|
||||
src/gui/mainwindow.cpp
|
||||
src/gui/mainwindow.h
|
||||
src/gui/mainwindow.ui
|
||||
src/gui/settings_widget.cpp
|
||||
src/gui/settings_widget.h
|
||||
)
|
||||
|
||||
add_executable(${PROJECT_NAME} WIN32 ${SOURCES})
|
||||
|
||||
target_include_directories(${PROJECT_NAME} PRIVATE
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/src
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/third_party/mvs/Includes
|
||||
${OpenCV_INCLUDE_DIRS}
|
||||
${CMAKE_CURRENT_BINARY_DIR} # Qt AUTOUIC 生成的头文件
|
||||
)
|
||||
|
||||
target_link_libraries(${PROJECT_NAME} PRIVATE
|
||||
algorithm_lib
|
||||
cpp_api_lib
|
||||
tycam
|
||||
${OpenCV_LIBS}
|
||||
Qt6::Core
|
||||
Qt6::Widgets
|
||||
ws2_32
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/third_party/mvs/Libraries/win64/MvCameraControl.lib
|
||||
)
|
||||
|
||||
target_link_directories(${PROJECT_NAME} PRIVATE
|
||||
${OpenCV_LIB_DIRS}
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/third_party/percipio/lib/win/x64
|
||||
)
|
||||
|
||||
if(Open3D_RUNTIME_DLLS)
|
||||
foreach(DLL_FILE ${Open3D_RUNTIME_DLLS})
|
||||
get_filename_component(DLL_NAME "${DLL_FILE}" NAME)
|
||||
add_custom_command(TARGET ${PROJECT_NAME} POST_BUILD
|
||||
COMMAND ${CMAKE_COMMAND} -E copy_if_different
|
||||
"${DLL_FILE}"
|
||||
"$<TARGET_FILE_DIR:${PROJECT_NAME}>"
|
||||
COMMENT "Copying runtime dependency: ${DLL_NAME}"
|
||||
)
|
||||
endforeach()
|
||||
endif()
|
||||
|
||||
# Copy tycam.dll to executable directory
|
||||
add_custom_command(TARGET ${PROJECT_NAME} POST_BUILD
|
||||
COMMAND ${CMAKE_COMMAND} -E copy_if_different
|
||||
"${CAMPORT3_LIB_DIR}/tycam.dll"
|
||||
"$<TARGET_FILE_DIR:${PROJECT_NAME}>"
|
||||
COMMENT "Copying tycam.dll to executable directory"
|
||||
)
|
||||
|
||||
# Copy Halcon DLLs
|
||||
if(HALCON_ROOT)
|
||||
set(HALCON_BIN_DIR "${HALCON_ROOT}/bin/x64-win64")
|
||||
# Verify directory exists
|
||||
if(EXISTS "${HALCON_BIN_DIR}")
|
||||
set(HALCON_DLLS "halcon.dll" "halconcpp.dll")
|
||||
foreach(DLL_NAME ${HALCON_DLLS})
|
||||
add_custom_command(TARGET ${PROJECT_NAME} POST_BUILD
|
||||
COMMAND ${CMAKE_COMMAND} -E copy_if_different
|
||||
"${HALCON_BIN_DIR}/${DLL_NAME}"
|
||||
"$<TARGET_FILE_DIR:${PROJECT_NAME}>"
|
||||
COMMENT "Copying Halcon DLL: ${DLL_NAME}"
|
||||
)
|
||||
endforeach()
|
||||
else()
|
||||
message(WARNING "Halcon bin directory not found at: ${HALCON_BIN_DIR}. DLLs will not be copied.")
|
||||
endif()
|
||||
endif()
|
||||
|
||||
|
||||
|
||||
|
||||
# ============================================================================
|
||||
# 工具链
|
||||
# ============================================================================
|
||||
add_executable(slot_algo_tuner WIN32
|
||||
src/tools/slot_algo_tuner/main.cpp
|
||||
src/tools/slot_algo_tuner/tuner_widget.cpp
|
||||
src/tools/slot_algo_tuner/tuner_widget.h
|
||||
)
|
||||
|
||||
target_include_directories(slot_algo_tuner PRIVATE
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/src
|
||||
${OpenCV_INCLUDE_DIRS}
|
||||
${CMAKE_CURRENT_BINARY_DIR}
|
||||
)
|
||||
|
||||
target_link_libraries(slot_algo_tuner PRIVATE
|
||||
${OpenCV_LIBS}
|
||||
Qt6::Core
|
||||
Qt6::Widgets
|
||||
)
|
||||
|
||||
target_link_directories(slot_algo_tuner PRIVATE ${OpenCV_LIB_DIRS})
|
||||
|
||||
add_executable(calibration_tool WIN32
|
||||
src/tools/calibration_tool/main.cpp
|
||||
src/tools/calibration_tool/calibration_widget.cpp
|
||||
src/tools/calibration_tool/calibration_widget.h
|
||||
)
|
||||
|
||||
target_include_directories(calibration_tool PRIVATE
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/src
|
||||
${OpenCV_INCLUDE_DIRS}
|
||||
${CMAKE_CURRENT_BINARY_DIR}
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/third_party/percipio/include
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/third_party/mvs/Includes
|
||||
)
|
||||
|
||||
target_link_libraries(calibration_tool PRIVATE
|
||||
${OpenCV_LIBS}
|
||||
Qt6::Core
|
||||
Qt6::Widgets
|
||||
Open3D::Open3D
|
||||
tycam
|
||||
)
|
||||
|
||||
target_compile_definitions(calibration_tool PRIVATE NOMINMAX)
|
||||
|
||||
target_link_directories(calibration_tool PRIVATE ${OpenCV_LIB_DIRS})
|
||||
|
||||
# Intrinsic Dumper Tool
|
||||
add_executable(intrinsic_dumper
|
||||
src/tools/intrinsic_dumper/main.cpp
|
||||
)
|
||||
|
||||
target_include_directories(intrinsic_dumper PRIVATE
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/src
|
||||
${OpenCV_INCLUDE_DIRS}
|
||||
${CMAKE_CURRENT_BINARY_DIR}
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/third_party/percipio/include
|
||||
)
|
||||
|
||||
target_link_libraries(intrinsic_dumper PRIVATE
|
||||
Qt6::Core
|
||||
tycam
|
||||
)
|
||||
|
||||
# Reference Generator (Teach Tool)
|
||||
add_executable(generate_reference
|
||||
src/tools/generate_reference/main.cpp
|
||||
src/device/device_manager.cpp
|
||||
src/camera/ty_multi_camera_capture.cpp
|
||||
src/camera/mvs_multi_camera_capture.cpp
|
||||
src/common/log_manager.cpp
|
||||
src/common/config_manager.cpp
|
||||
)
|
||||
|
||||
target_include_directories(generate_reference PRIVATE
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/src
|
||||
${OpenCV_INCLUDE_DIRS}
|
||||
${CMAKE_CURRENT_BINARY_DIR}
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/third_party/percipio/include
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/third_party/mvs/Includes
|
||||
)
|
||||
|
||||
target_link_libraries(generate_reference PRIVATE
|
||||
algorithm_lib
|
||||
cpp_api_lib
|
||||
${OpenCV_LIBS}
|
||||
Qt6::Core
|
||||
Qt6::Widgets
|
||||
tycam
|
||||
${CMAKE_CURRENT_SOURCE_DIR}/third_party/mvs/Libraries/win64/MvCameraControl.lib
|
||||
)
|
||||
|
||||
target_link_directories(generate_reference PRIVATE ${OpenCV_LIB_DIRS})
|
||||
image_capture/cmake/CompilerOptions.cmake (Normal file, 32 lines added)
@@ -0,0 +1,32 @@
|
||||
# C++ Standard
|
||||
set(CMAKE_CXX_STANDARD 17)
|
||||
set(CMAKE_CXX_STANDARD_REQUIRED ON)
|
||||
|
||||
# Output Directories
|
||||
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
|
||||
|
||||
# Generate compile_commands.json
|
||||
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
|
||||
|
||||
# Definitions
|
||||
add_definitions(-DOPENCV_DEPENDENCIES)
|
||||
|
||||
# Qt6 Setup (Global)
|
||||
set(CMAKE_AUTOMOC ON)
|
||||
set(CMAKE_AUTORCC ON)
|
||||
set(CMAKE_AUTOUIC ON)
|
||||
|
||||
# Compiler Specific Options
|
||||
if(MSVC)
|
||||
# MSVC specific options
|
||||
add_compile_options(/utf-8) # Fix C4819 encoding warning
|
||||
add_compile_options(/W3) # Warning level 3
|
||||
add_compile_options(/MP) # Multi-processor compilation
|
||||
add_definitions(-D_CRT_SECURE_NO_WARNINGS) # Suppress C4996 deprecated warnings
|
||||
|
||||
add_compile_options($<$<CONFIG:Release>:/O2>) # Maximize speed
|
||||
add_compile_options($<$<CONFIG:Release>:/Ob2>) # Inline function expansion
|
||||
|
||||
|
||||
|
||||
endif()
|
||||
image_capture/cmake/Dependencies.cmake (Normal file, 129 lines added)
@@ -0,0 +1,129 @@
|
||||
# Qt6
|
||||
if(NOT Qt6_DIR AND NOT ENV{Qt6_DIR} AND NOT CMAKE_PREFIX_PATH)
|
||||
message(WARNING "Qt6 not found in environment. Please set CMAKE_PREFIX_PATH or Qt6_DIR.")
|
||||
endif()
|
||||
|
||||
find_package(Qt6 REQUIRED COMPONENTS Widgets)
|
||||
|
||||
# OpenCV
|
||||
if(DEFINED ENV{OpenCV_DIR})
|
||||
set(OpenCV_DIR $ENV{OpenCV_DIR})
|
||||
message(STATUS "Using OpenCV_DIR from environment: ${OpenCV_DIR}")
|
||||
elseif(NOT OpenCV_DIR)
|
||||
message(STATUS "OpenCV_DIR not set, trying to find OpenCV in standard locations...")
|
||||
set(LEGACY_OPENCV_PATH "D:/enviroments/OPencv4.55/OPencv4.55_MSVC/opencv/build/x64/vc15/lib")
|
||||
if(EXISTS ${LEGACY_OPENCV_PATH})
|
||||
set(OpenCV_DIR ${LEGACY_OPENCV_PATH})
|
||||
message(STATUS "Found legacy OpenCV path: ${OpenCV_DIR}")
|
||||
endif()
|
||||
endif()
|
||||
|
||||
find_package(OpenCV REQUIRED)
|
||||
|
||||
message(STATUS "OpenCV found: ${OpenCV_VERSION}")
|
||||
message(STATUS "OpenCV libraries: ${OpenCV_LIBS}")
|
||||
message(STATUS "OpenCV include dirs: ${OpenCV_INCLUDE_DIRS}")
|
||||
|
||||
# Open3D
|
||||
# Open3D
|
||||
if(DEFINED ENV{Open3D_DIR})
|
||||
set(Open3D_DIR $ENV{Open3D_DIR})
|
||||
message(STATUS "Using Open3D_DIR from environment: ${Open3D_DIR}")
|
||||
elseif(NOT Open3D_DIR)
|
||||
# Default to 0.18 Release
|
||||
set(DEFAULT_OPEN3D_PATH "D:/enviroments/Open3d/open3d-devel-windows-amd64-0.18.0-release/CMake")
|
||||
# Debug path: D:/enviroments/Open3d/open3d-devel-windows-amd64-0.18.0-debug/CMake
|
||||
|
||||
if(EXISTS ${DEFAULT_OPEN3D_PATH})
|
||||
set(Open3D_DIR ${DEFAULT_OPEN3D_PATH})
|
||||
message(STATUS "Using default Open3D path: ${Open3D_DIR}")
|
||||
endif()
|
||||
endif()
|
||||
|
||||
find_package(Open3D REQUIRED)
|
||||
message(STATUS "Open3D found: ${Open3D_VERSION}")
|
||||
message(STATUS "Open3D DIR: ${Open3D_DIR}")
|
||||
|
||||
# Find Open3D DLL and dependencies (TBB)
|
||||
# Adjust ROOT calculation based on where Config is found.
|
||||
get_filename_component(DIR_NAME "${Open3D_DIR}" NAME)
|
||||
if("${DIR_NAME}" STREQUAL "CMake")
|
||||
# Structure: root/CMake/Open3DConfig.cmake -> root is up one level
|
||||
get_filename_component(Open3D_ROOT "${Open3D_DIR}/.." ABSOLUTE)
|
||||
else()
|
||||
# Assume standard install: root/lib/cmake/Open3D/Open3DConfig.cmake -> root is up 3 levels
|
||||
get_filename_component(Open3D_ROOT "${Open3D_DIR}/../../.." ABSOLUTE)
|
||||
endif()
|
||||
|
||||
set(Open3D_BIN_DIR "${Open3D_ROOT}/bin")
|
||||
set(Open3D_RUNTIME_DLLS "")
|
||||
|
||||
find_file(Open3D_DLL NAMES Open3D.dll PATHS ${Open3D_BIN_DIR} NO_DEFAULT_PATH)
|
||||
if(Open3D_DLL)
|
||||
list(APPEND Open3D_RUNTIME_DLLS ${Open3D_DLL})
|
||||
message(STATUS "Found Open3D DLL: ${Open3D_DLL}")
|
||||
else()
|
||||
message(WARNING "Open3D DLL not found in ${Open3D_BIN_DIR}. You might need to add it to your PATH manually.")
|
||||
endif()
|
||||
|
||||
# Find TBB DLLs (tbb.dll or tbb12_debug.dll etc)
|
||||
# We glob for tbb*.dll but filter based on build type to avoid mixing runtimes
|
||||
file(GLOB TBB_ALL_DLLS "${Open3D_BIN_DIR}/tbb*.dll")
|
||||
set(TBB_DLLS ${TBB_ALL_DLLS})
|
||||
|
||||
# Filter out debug DLLs (ending in _debug.dll or d.dll)
|
||||
list(FILTER TBB_DLLS EXCLUDE REGEX ".*(_debug|d)\\.dll$")
|
||||
|
||||
if(NOT TBB_DLLS)
|
||||
# If no release DLLs found, check if we only have debug ones
|
||||
if(TBB_ALL_DLLS)
|
||||
message(WARNING "Only Debug TBB DLLs found in ${Open3D_BIN_DIR}. Release build might crash due to ABI mismatch!")
|
||||
# Fallback: copy everything (dangerous but better than nothing?)
|
||||
set(TBB_DLLS ${TBB_ALL_DLLS})
|
||||
else()
|
||||
message(WARNING "No TBB DLLs found in ${Open3D_BIN_DIR}.")
|
||||
endif()
|
||||
endif()
|
||||
|
||||
|
||||
if(TBB_DLLS)
|
||||
list(APPEND Open3D_RUNTIME_DLLS ${TBB_DLLS})
|
||||
message(STATUS "Found TBB DLLs: ${TBB_DLLS}")
|
||||
endif()
|
||||
|
||||
# Halcon
|
||||
# Force usage of the user known path if possible, or fallback to environment
|
||||
set(USER_PROVIDED_HALCON_ROOT "C:/Users/cve/AppData/Local/Programs/MVTec/HALCON-23.11-Progress")
|
||||
|
||||
if(EXISTS "${USER_PROVIDED_HALCON_ROOT}")
|
||||
set(HALCON_ROOT "${USER_PROVIDED_HALCON_ROOT}")
|
||||
message(STATUS "Using user provided HALCON_ROOT: ${HALCON_ROOT}")
|
||||
elseif(DEFINED ENV{HALCONROOT})
|
||||
set(HALCON_ROOT $ENV{HALCONROOT})
|
||||
file(TO_CMAKE_PATH "${HALCON_ROOT}" HALCON_ROOT)
|
||||
message(STATUS "Using HALCON_ROOT from environment: ${HALCON_ROOT}")
|
||||
else()
|
||||
message(WARNING "HALCONROOT not found.")
|
||||
endif()
|
||||
|
||||
if(HALCON_ROOT)
|
||||
set(HALCON_INCLUDE_DIRS
|
||||
"${HALCON_ROOT}/include"
|
||||
"${HALCON_ROOT}/include/halconcpp"
|
||||
)
|
||||
|
||||
if(WIN32)
|
||||
set(HALCON_LIB_DIR "${HALCON_ROOT}/lib/x64-win64")
|
||||
if(NOT EXISTS "${HALCON_LIB_DIR}")
|
||||
set(HALCON_LIB_DIR "${HALCON_ROOT}/lib")
|
||||
endif()
|
||||
|
||||
set(HALCON_LIBRARIES
|
||||
"${HALCON_LIB_DIR}/halcon.lib"
|
||||
"${HALCON_LIB_DIR}/halconcpp.lib"
|
||||
)
|
||||
endif()
|
||||
|
||||
message(STATUS "Halcon include: ${HALCON_INCLUDE_DIRS}")
|
||||
message(STATUS "Halcon libs: ${HALCON_LIBRARIES}")
|
||||
endif()
|
||||
image_capture/cmake/PercipioSDK.cmake (Normal file, 55 lines added)
@@ -0,0 +1,55 @@
|
||||
# Camera SDK Paths
|
||||
set(CAMPORT3_ROOT ${CMAKE_CURRENT_SOURCE_DIR}/third_party/percipio)
|
||||
set(CAMPORT3_LIB_DIR ${CAMPORT3_ROOT}/lib/win/x64)
|
||||
|
||||
# Import tycam library (MinGW)
|
||||
add_library(tycam SHARED IMPORTED)
|
||||
|
||||
if(EXISTS ${CAMPORT3_LIB_DIR}/libtycam.dll.a)
|
||||
set_target_properties(tycam PROPERTIES
|
||||
IMPORTED_LOCATION ${CAMPORT3_LIB_DIR}/tycam.dll
|
||||
IMPORTED_IMPLIB ${CAMPORT3_LIB_DIR}/libtycam.dll.a
|
||||
)
|
||||
message(STATUS "Using libtycam.dll.a (MinGW compatible)")
|
||||
elseif(EXISTS ${CAMPORT3_LIB_DIR}/tycam.lib)
|
||||
set_target_properties(tycam PROPERTIES
|
||||
IMPORTED_LOCATION ${CAMPORT3_LIB_DIR}/tycam.dll
|
||||
IMPORTED_IMPLIB ${CAMPORT3_LIB_DIR}/tycam.lib
|
||||
)
|
||||
message(STATUS "Using tycam.lib (may require conversion to .dll.a if linking fails)")
|
||||
else()
|
||||
message(FATAL_ERROR "Neither libtycam.dll.a nor tycam.lib found in ${CAMPORT3_LIB_DIR}")
|
||||
endif()
|
||||
|
||||
# Static API Library Sources
|
||||
set(CPP_API_SOURCES
|
||||
${CAMPORT3_ROOT}/sample_v2/cpp/Device.cpp
|
||||
${CAMPORT3_ROOT}/sample_v2/cpp/Frame.cpp
|
||||
${CAMPORT3_ROOT}/common/MatViewer.cpp
|
||||
${CAMPORT3_ROOT}/common/TYThread.cpp
|
||||
${CAMPORT3_ROOT}/common/crc32.cpp
|
||||
${CAMPORT3_ROOT}/common/json11.cpp
|
||||
${CAMPORT3_ROOT}/common/ParametersParse.cpp
|
||||
${CAMPORT3_ROOT}/common/huffman.cpp
|
||||
${CAMPORT3_ROOT}/common/ImageSpeckleFilter.cpp
|
||||
${CAMPORT3_ROOT}/common/DepthInpainter.cpp
|
||||
)
|
||||
|
||||
add_library(cpp_api_lib STATIC ${CPP_API_SOURCES})
|
||||
|
||||
target_include_directories(cpp_api_lib PUBLIC
|
||||
${CAMPORT3_ROOT}/include
|
||||
${CAMPORT3_ROOT}/sample_v2/hpp
|
||||
${CAMPORT3_ROOT}/common
|
||||
${OpenCV_INCLUDE_DIRS}
|
||||
)
|
||||
|
||||
# Fix for MinGW: Ensure standard C++ headers are found
|
||||
if(MINGW)
|
||||
target_include_directories(cpp_api_lib SYSTEM PUBLIC
|
||||
${CMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES}
|
||||
)
|
||||
endif()
|
||||
|
||||
target_link_libraries(cpp_api_lib PUBLIC ${OpenCV_LIBS})
|
||||
target_link_directories(cpp_api_lib PUBLIC ${OpenCV_LIB_DIRS})
|
||||
image_capture/config.json (Normal file, 130 lines added)
@@ -0,0 +1,130 @@
|
||||
{
|
||||
"redis": {
|
||||
"host": "127.0.0.1",
|
||||
"port": 6379,
|
||||
"db": 0
|
||||
},
|
||||
"cameras": {
|
||||
"depth_enabled": true,
|
||||
"color_enabled": true,
|
||||
"mapping": [
|
||||
{
|
||||
"id": "camera_0",
|
||||
"index": 0
|
||||
},
|
||||
{
|
||||
"id": "camera_1",
|
||||
"index": 1
|
||||
},
|
||||
{
|
||||
"id": "camera_2",
|
||||
"index": 2
|
||||
},
|
||||
{
|
||||
"id": "camera_3",
|
||||
"index": 3
|
||||
}
|
||||
]
|
||||
},
|
||||
"vision": {
|
||||
"save_path": "./images",
|
||||
"log_level": 1
|
||||
},
|
||||
"algorithms": {
|
||||
"beam_rack_deflection": {
|
||||
"beam_roi_points": [
|
||||
{
|
||||
"x": 100,
|
||||
"y": 50
|
||||
},
|
||||
{
|
||||
"x": 540,
|
||||
"y": 80
|
||||
},
|
||||
{
|
||||
"x": 540,
|
||||
"y": 280
|
||||
},
|
||||
{
|
||||
"x": 100,
|
||||
"y": 280
|
||||
}
|
||||
],
|
||||
"rack_roi_points": [
|
||||
{
|
||||
"x": 50,
|
||||
"y": 50
|
||||
},
|
||||
{
|
||||
"x": 150,
|
||||
"y": 50
|
||||
},
|
||||
{
|
||||
"x": 150,
|
||||
"y": 430
|
||||
},
|
||||
{
|
||||
"x": 50,
|
||||
"y": 430
|
||||
}
|
||||
],
|
||||
"beam_thresholds": {
|
||||
"A": -10.0,
|
||||
"B": -5.0,
|
||||
"C": 5.0,
|
||||
"D": 10.0
|
||||
},
|
||||
"rack_thresholds": {
|
||||
"A": -6.0,
|
||||
"B": -3.0,
|
||||
"C": 3.0,
|
||||
"D": 6.0
|
||||
}
|
||||
},
|
||||
"pallet_offset": {
|
||||
"offset_lat_mm_thresholds": {
|
||||
"A": -20.0,
|
||||
"B": -10.0,
|
||||
"C": 10.0,
|
||||
"D": 20.0
|
||||
},
|
||||
"offset_lon_mm_thresholds": {
|
||||
"A": -20.0,
|
||||
"B": -10.0,
|
||||
"C": 10.0,
|
||||
"D": 20.0
|
||||
},
|
||||
"rotation_angle_thresholds": {
|
||||
"A": -5.0,
|
||||
"B": -2.5,
|
||||
"C": 2.5,
|
||||
"D": 5.0
|
||||
},
|
||||
"hole_def_mm_left_thresholds": {
|
||||
"A": -8.0,
|
||||
"B": -4.0,
|
||||
"C": 4.0,
|
||||
"D": 8.0
|
||||
},
|
||||
"hole_def_mm_right_thresholds": {
|
||||
"A": -8.0,
|
||||
"B": -4.0,
|
||||
"C": 4.0,
|
||||
"D": 8.0
|
||||
}
|
||||
},
|
||||
"slot_occupancy": {
|
||||
"depth_threshold_mm": 100.0,
|
||||
"confidence_threshold": 0.8
|
||||
},
|
||||
"visual_inventory": {
|
||||
"barcode_confidence_threshold": 0.7,
|
||||
"roi_enabled": true
|
||||
},
|
||||
"general": {
|
||||
"min_depth_mm": 800.0,
|
||||
"max_depth_mm": 3000.0,
|
||||
"sample_points": 50
|
||||
}
|
||||
}
|
||||
}
|
||||
image_capture/note.md (Normal file, 6 lines added)
@@ -0,0 +1,6 @@
# Make sure you are in the image_capture directory
cd d:\Git\stereo_warehouse_inspection\image_capture
Remove-Item -Recurse -Force build
# Reconfigure the project with the Visual Studio generator: pass -G "Visual Studio 17 2022" (adjust to your VS version, usually 16 2019 or 17 2022).
cmake -G "Visual Studio 17 2022" -A x64 -B build
cmake --build build --config Release
image_capture/run_log.txt (Normal file, empty)
image_capture/src/algorithm/core/detection_base.cpp (Normal file, 172 lines added)
@@ -0,0 +1,172 @@
|
||||
#include "detection_base.h"
|
||||
#include "../detections/beam_rack_deflection/beam_rack_deflection_detection.h"
|
||||
#include "../detections/pallet_offset/pallet_offset_detection.h"
|
||||
#include "../detections/slot_occupancy/slot_occupancy_detection.h"
|
||||
#include "../detections/visual_inventory/visual_inventory_detection.h"
|
||||
|
||||
#include "detection_result.h"
|
||||
#include <chrono>
|
||||
#include <ctime>
|
||||
#include <iomanip>
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include <sstream>
|
||||
|
||||
/**
|
||||
* @brief 获取当前时间戳字符串
|
||||
*/
|
||||
static std::string getCurrentTimeString() {
|
||||
auto now = std::chrono::system_clock::now();
|
||||
auto time_t = std::chrono::system_clock::to_time_t(now);
|
||||
std::tm *tm = std::localtime(&time_t);
|
||||
std::stringstream ss;
|
||||
ss << std::put_time(tm, "%Y-%m-%d %H:%M:%S");
|
||||
return ss.str();
|
||||
}
|
||||
|
||||
// ========== SlotOccupancyDetection ==========
|
||||
bool SlotOccupancyDetection::execute(const cv::Mat &depth_img,
|
||||
const cv::Mat &color_img,
|
||||
const std::string &side,
|
||||
DetectionResult &result,
|
||||
const std::vector<Point3D> *point_cloud,
|
||||
int beam_length) {
|
||||
result.result_type = 1;
|
||||
result.result_status = "fail";
|
||||
|
||||
// 调用算法进行检测
|
||||
SlotOccupancyResult algo_result;
|
||||
if (!SlotOccupancyAlgorithm::detect(depth_img, color_img, side,
|
||||
algo_result)) {
|
||||
std::cout
|
||||
<< "[Detection] SlotOccupancy: Detection failed (Algorithm error)."
|
||||
<< std::endl;
|
||||
result.result_status = "fail";
|
||||
result.last_update_time = getCurrentTimeString();
|
||||
return false;
|
||||
}
|
||||
|
||||
// 将算法结果填充到 DetectionResult
|
||||
result.slot_occupied = algo_result.slot_occupied;
|
||||
result.result_status = algo_result.success ? "success" : "fail";
|
||||
result.last_update_time = getCurrentTimeString();
|
||||
|
||||
// 日志输出到界面 (UI Log)
|
||||
std::cout << "[Detection] SlotOccupancy Result: "
|
||||
<< (result.slot_occupied ? "Occupied (有货)" : "Empty (无货)")
|
||||
<< std::endl;
|
||||
|
||||
return algo_result.success;
|
||||
}
|
||||
|
||||
// ========== PalletOffsetDetection ==========
|
||||
bool PalletOffsetDetection::execute(const cv::Mat &depth_img,
|
||||
const cv::Mat &color_img,
|
||||
const std::string &side,
|
||||
DetectionResult &result,
|
||||
const std::vector<Point3D> *point_cloud,
|
||||
int beam_length) {
|
||||
result.result_type = 2;
|
||||
result.result_status = "fail";
|
||||
|
||||
// 调用算法进行检测
|
||||
PalletOffsetResult algo_result;
|
||||
if (!PalletOffsetAlgorithm::detect(depth_img, color_img, side, algo_result,
|
||||
point_cloud)) {
|
||||
result.result_status = "fail";
|
||||
result.last_update_time = getCurrentTimeString();
|
||||
return false;
|
||||
}
|
||||
|
||||
// 将算法结果填充到 DetectionResult
|
||||
result.offset_lat_mm_value = algo_result.offset_lat_mm_value;
|
||||
result.offset_lon_mm_value = algo_result.offset_lon_mm_value;
|
||||
result.rotation_angle_value = algo_result.rotation_angle_value;
|
||||
result.hole_def_mm_left_value = algo_result.hole_def_mm_left_value;
|
||||
result.hole_def_mm_right_value = algo_result.hole_def_mm_right_value;
|
||||
|
||||
result.offset_lat_mm_threshold = algo_result.offset_lat_mm_threshold;
|
||||
result.offset_lon_mm_threshold = algo_result.offset_lon_mm_threshold;
|
||||
result.rotation_angle_threshold = algo_result.rotation_angle_threshold;
|
||||
result.hole_def_mm_left_threshold = algo_result.hole_def_mm_left_threshold;
|
||||
result.hole_def_mm_right_threshold = algo_result.hole_def_mm_right_threshold;
|
||||
|
||||
result.offset_lat_mm_warning_alarm = algo_result.offset_lat_mm_warning_alarm;
|
||||
result.offset_lon_mm_warning_alarm = algo_result.offset_lon_mm_warning_alarm;
|
||||
result.rotation_angle_warning_alarm =
|
||||
algo_result.rotation_angle_warning_alarm;
|
||||
result.hole_def_mm_left_warning_alarm =
|
||||
algo_result.hole_def_mm_left_warning_alarm;
|
||||
result.hole_def_mm_right_warning_alarm =
|
||||
algo_result.hole_def_mm_right_warning_alarm;
|
||||
|
||||
result.result_status = algo_result.success ? "success" : "fail";
|
||||
result.last_update_time = getCurrentTimeString();
|
||||
|
||||
return algo_result.success;
|
||||
}
|
||||
|
||||
// ========== BeamRackDeflectionDetection ==========
|
||||
bool BeamRackDeflectionDetection::execute(
|
||||
const cv::Mat &depth_img, const cv::Mat &color_img, const std::string &side,
|
||||
DetectionResult &result, const std::vector<Point3D> *point_cloud,
|
||||
int beam_length) {
|
||||
result.result_type = 3;
|
||||
result.result_status = "fail";
|
||||
|
||||
// Select ROI based on beam_length
|
||||
std::vector<cv::Point2i> beam_roi;
|
||||
if (beam_length == 2180) {
|
||||
beam_roi = BeamRackDeflectionAlgorithm::BEAM_ROI_2180;
|
||||
} else if (beam_length == 1380) {
|
||||
beam_roi = BeamRackDeflectionAlgorithm::BEAM_ROI_1380;
|
||||
}
|
||||
|
||||
// 调用算法进行检测
|
||||
BeamRackDeflectionResult algo_result;
|
||||
if (!BeamRackDeflectionAlgorithm::detect(
|
||||
depth_img, color_img, side, algo_result, point_cloud, beam_roi)) {
|
||||
result.result_status = "fail";
|
||||
result.last_update_time = getCurrentTimeString();
|
||||
return false;
|
||||
}
|
||||
|
||||
// 将算法结果填充到 DetectionResult
|
||||
result.beam_def_mm_value = algo_result.beam_def_mm_value;
|
||||
result.rack_def_mm_value = algo_result.rack_def_mm_value;
|
||||
result.beam_def_mm_threshold = algo_result.beam_def_mm_threshold;
|
||||
result.rack_def_mm_threshold = algo_result.rack_def_mm_threshold;
|
||||
result.beam_def_mm_warning_alarm = algo_result.beam_def_mm_warning_alarm;
|
||||
result.rack_def_mm_warning_alarm = algo_result.rack_def_mm_warning_alarm;
|
||||
|
||||
result.result_status = algo_result.success ? "success" : "fail";
|
||||
result.last_update_time = getCurrentTimeString();
|
||||
|
||||
return algo_result.success;
|
||||
}
|
||||
|
||||
// ========== VisualInventoryDetection ==========
|
||||
bool VisualInventoryDetection::execute(const cv::Mat &depth_img,
|
||||
const cv::Mat &color_img,
|
||||
const std::string &side,
|
||||
DetectionResult &result,
|
||||
const std::vector<Point3D> *point_cloud,
|
||||
int beam_length) {
|
||||
result.result_type = 4;
|
||||
result.result_status = "fail";
|
||||
|
||||
// 调用算法进行检测
|
||||
VisualInventoryResult algo_result;
|
||||
if (!VisualInventoryAlgorithm::detect(depth_img, color_img, side,
|
||||
algo_result)) {
|
||||
result.result_status = "fail";
|
||||
result.last_update_time = getCurrentTimeString();
|
||||
return false;
|
||||
}
|
||||
|
||||
// 将算法结果填充到 DetectionResult
|
||||
result.result_barcodes = algo_result.result_barcodes;
|
||||
result.result_status = algo_result.success ? "success" : "fail";
|
||||
result.last_update_time = getCurrentTimeString();
|
||||
|
||||
return algo_result.success;
|
||||
}
|
||||
116
image_capture/src/algorithm/core/detection_base.h
Normal file
116
image_capture/src/algorithm/core/detection_base.h
Normal file
@@ -0,0 +1,116 @@
|
||||
#pragma once
|
||||
|
||||
#include <string>
|
||||
|
||||
struct DetectionResult;
|
||||
namespace cv {
|
||||
class Mat;
|
||||
}
|
||||
#include "../../common_types.h"
|
||||
|
||||
/**
|
||||
* @brief 检测任务基类
|
||||
*
|
||||
* 所有检测任务都继承自此类,实现统一的接口
|
||||
*/
|
||||
class DetectionBase {
|
||||
public:
|
||||
DetectionBase() {}
|
||||
virtual ~DetectionBase() {}
|
||||
|
||||
/**
|
||||
* 执行检测任务
|
||||
* @param depth_img 深度图像(可选)
|
||||
* @param color_img 彩色图像(可选)
|
||||
* @param side 货架侧("left"或"right")
|
||||
* @param result [输出] 检测结果
|
||||
* @param point_cloud [可选] 点云数据
|
||||
* @return 是否检测成功
|
||||
*/
|
||||
virtual bool execute(const cv::Mat &depth_img, const cv::Mat &color_img,
|
||||
const std::string &side, DetectionResult &result,
|
||||
const std::vector<Point3D> *point_cloud = nullptr,
|
||||
int beam_length = 0) = 0;
|
||||
|
||||
/**
|
||||
* 获取任务类型(对应flag值)
|
||||
*/
|
||||
virtual int getTaskType() const = 0;
|
||||
|
||||
/**
|
||||
* 获取任务名称
|
||||
*/
|
||||
virtual std::string getTaskName() const = 0;
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief Task 1: 货位有无检测
|
||||
*/
|
||||
class SlotOccupancyDetection : public DetectionBase {
|
||||
public:
|
||||
SlotOccupancyDetection() {}
|
||||
virtual ~SlotOccupancyDetection() {}
|
||||
|
||||
bool execute(const cv::Mat &depth_img, const cv::Mat &color_img,
|
||||
const std::string &side, DetectionResult &result,
|
||||
const std::vector<Point3D> *point_cloud = nullptr,
|
||||
int beam_length = 0) override;
|
||||
|
||||
int getTaskType() const override { return 1; }
|
||||
std::string getTaskName() const override { return "SlotOccupancyDetection"; }
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief Task 2: 托盘位置偏移检测 - 插孔变形检测(取货时)
|
||||
*/
|
||||
class PalletOffsetDetection : public DetectionBase {
|
||||
public:
|
||||
PalletOffsetDetection() {}
|
||||
virtual ~PalletOffsetDetection() {}
|
||||
|
||||
bool execute(const cv::Mat &depth_img, const cv::Mat &color_img,
|
||||
const std::string &side, DetectionResult &result,
|
||||
const std::vector<Point3D> *point_cloud = nullptr,
|
||||
int beam_length = 0) override;
|
||||
|
||||
int getTaskType() const override { return 2; }
|
||||
std::string getTaskName() const override { return "PalletOffsetDetection"; }
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief Task 3: 横梁变形检测 - 货架立柱变形检测(放货时)
|
||||
*/
|
||||
class BeamRackDeflectionDetection : public DetectionBase {
|
||||
public:
|
||||
BeamRackDeflectionDetection() {}
|
||||
virtual ~BeamRackDeflectionDetection() {}
|
||||
|
||||
bool execute(const cv::Mat &depth_img, const cv::Mat &color_img,
|
||||
const std::string &side, DetectionResult &result,
|
||||
const std::vector<Point3D> *point_cloud = nullptr,
|
||||
int beam_length = 0) override;
|
||||
|
||||
int getTaskType() const override { return 3; }
|
||||
std::string getTaskName() const override {
|
||||
return "BeamRackDeflectionDetection";
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief Task 4: 视觉盘点(扫码)
|
||||
*/
|
||||
class VisualInventoryDetection : public DetectionBase {
|
||||
public:
|
||||
VisualInventoryDetection() {}
|
||||
virtual ~VisualInventoryDetection() {}
|
||||
|
||||
bool execute(const cv::Mat &depth_img, const cv::Mat &color_img,
|
||||
const std::string &side, DetectionResult &result,
|
||||
const std::vector<Point3D> *point_cloud = nullptr,
|
||||
int beam_length = 0) override;
|
||||
|
||||
int getTaskType() const override { return 4; }
|
||||
std::string getTaskName() const override {
|
||||
return "VisualInventoryDetection";
|
||||
}
|
||||
};
|
||||
184
image_capture/src/algorithm/core/detection_result.cpp
Normal file
184
image_capture/src/algorithm/core/detection_result.cpp
Normal file
@@ -0,0 +1,184 @@
|
||||
#include "detection_result.h"
|
||||
#include <iostream>
|
||||
#include <sstream>
|
||||
|
||||
std::string DetectionResult::toJson() const {
|
||||
// TODO: 使用JSON库(如nlohmann/json)生成JSON字符串
|
||||
// 当前使用简单的字符串拼接方式
|
||||
std::ostringstream oss;
|
||||
oss << "{";
|
||||
|
||||
// 基础字段
|
||||
oss << "\"result_status\":\"" << result_status << "\",";
|
||||
oss << "\"result_type\":" << result_type << ",";
|
||||
oss << "\"last_update_time\":\"" << last_update_time << "\"";
|
||||
|
||||
// Flag 1
|
||||
if (result_type == 1) {
|
||||
oss << ",\"slot_occupied\":" << (slot_occupied ? "true" : "false");
|
||||
}
|
||||
|
||||
// Flag 2
|
||||
if (result_type == 2) {
|
||||
oss << ",\"offset_lat_mm_value\":" << offset_lat_mm_value;
|
||||
if (!offset_lat_mm_threshold.empty()) {
|
||||
oss << ",\"offset_lat_mm_threshold\":" << offset_lat_mm_threshold;
|
||||
}
|
||||
if (!offset_lat_mm_warning_alarm.empty()) {
|
||||
oss << ",\"offset_lat_mm_warning_alarm\":" << offset_lat_mm_warning_alarm;
|
||||
}
|
||||
|
||||
oss << ",\"offset_lon_mm_value\":" << offset_lon_mm_value;
|
||||
if (!offset_lon_mm_threshold.empty()) {
|
||||
oss << ",\"offset_lon_mm_threshold\":" << offset_lon_mm_threshold;
|
||||
}
|
||||
if (!offset_lon_mm_warning_alarm.empty()) {
|
||||
oss << ",\"offset_lon_mm_warning_alarm\":" << offset_lon_mm_warning_alarm;
|
||||
}
|
||||
|
||||
oss << ",\"hole_def_mm_left_value\":" << hole_def_mm_left_value;
|
||||
if (!hole_def_mm_left_threshold.empty()) {
|
||||
oss << ",\"hole_def_mm_left_threshold\":" << hole_def_mm_left_threshold;
|
||||
}
|
||||
if (!hole_def_mm_left_warning_alarm.empty()) {
|
||||
oss << ",\"hole_def_mm_left_warning_alarm\":"
|
||||
<< hole_def_mm_left_warning_alarm;
|
||||
}
|
||||
|
||||
oss << ",\"hole_def_mm_right_value\":" << hole_def_mm_right_value;
|
||||
if (!hole_def_mm_right_threshold.empty()) {
|
||||
oss << ",\"hole_def_mm_right_threshold\":" << hole_def_mm_right_threshold;
|
||||
}
|
||||
if (!hole_def_mm_right_warning_alarm.empty()) {
|
||||
oss << ",\"hole_def_mm_right_warning_alarm\":"
|
||||
<< hole_def_mm_right_warning_alarm;
|
||||
}
|
||||
|
||||
oss << ",\"rotation_angle_value\":" << rotation_angle_value;
|
||||
if (!rotation_angle_threshold.empty()) {
|
||||
oss << ",\"rotation_angle_threshold\":" << rotation_angle_threshold;
|
||||
}
|
||||
if (!rotation_angle_warning_alarm.empty()) {
|
||||
oss << ",\"rotation_angle_warning_alarm\":"
|
||||
<< rotation_angle_warning_alarm;
|
||||
}
|
||||
}
|
||||
|
||||
// Flag 3
|
||||
if (result_type == 3) {
|
||||
oss << ",\"beam_def_mm_value\":" << beam_def_mm_value;
|
||||
if (!beam_def_mm_threshold.empty()) {
|
||||
oss << ",\"beam_def_mm_threshold\":" << beam_def_mm_threshold;
|
||||
}
|
||||
if (!beam_def_mm_warning_alarm.empty()) {
|
||||
oss << ",\"beam_def_mm_warning_alarm\":" << beam_def_mm_warning_alarm;
|
||||
}
|
||||
|
||||
oss << ",\"rack_def_mm_value\":" << rack_def_mm_value;
|
||||
if (!rack_def_mm_threshold.empty()) {
|
||||
oss << ",\"rack_def_mm_threshold\":" << rack_def_mm_threshold;
|
||||
}
|
||||
if (!rack_def_mm_warning_alarm.empty()) {
|
||||
oss << ",\"rack_def_mm_warning_alarm\":" << rack_def_mm_warning_alarm;
|
||||
}
|
||||
}
|
||||
|
||||
// Flag 4 & 5
|
||||
if (result_type == 4 || result_type == 5) {
|
||||
if (!result_barcodes.empty()) {
|
||||
oss << ",\"result_barcodes\":" << result_barcodes;
|
||||
}
|
||||
}
|
||||
|
||||
oss << "}";
|
||||
return oss.str();
|
||||
}
|
||||
|
||||
bool DetectionResult::fromJson(const std::string &json_str) {
|
||||
// TODO: 使用JSON库解析JSON字符串
|
||||
// 当前实现为占位符
|
||||
std::cerr << "[DetectionResult] TODO: Implement JSON parsing" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
std::map<std::string, std::string> DetectionResult::toMap() const {
|
||||
std::map<std::string, std::string> m;
|
||||
|
||||
// Helper to convert float to string
|
||||
auto floatToStr = [](float val) { return std::to_string(val); };
|
||||
|
||||
// Helper to convert bool to string "true"/"false"
|
||||
auto boolToStr = [](bool val) { return val ? "true" : "false"; };
|
||||
|
||||
// 基础字段 (总是写入)
|
||||
m["result_status"] = result_status;
|
||||
m["result_type"] = std::to_string(result_type);
|
||||
m["last_update_time"] = last_update_time;
|
||||
|
||||
// Flag 1: 货位有无
|
||||
if (result_type == 1) {
|
||||
m["slot_occupied"] = boolToStr(slot_occupied);
|
||||
}
|
||||
|
||||
// Flag 2: 托盘检测
|
||||
if (result_type == 2) {
|
||||
m["offset_lat_mm_value"] = floatToStr(offset_lat_mm_value);
|
||||
m["offset_lat_mm_threshold"] =
|
||||
offset_lat_mm_threshold.empty() ? "{}" : offset_lat_mm_threshold;
|
||||
m["offset_lat_mm_warning_alarm"] = offset_lat_mm_warning_alarm.empty()
|
||||
? "{}"
|
||||
: offset_lat_mm_warning_alarm;
|
||||
|
||||
m["offset_lon_mm_value"] = floatToStr(offset_lon_mm_value);
|
||||
m["offset_lon_mm_threshold"] =
|
||||
offset_lon_mm_threshold.empty() ? "{}" : offset_lon_mm_threshold;
|
||||
m["offset_lon_mm_warning_alarm"] = offset_lon_mm_warning_alarm.empty()
|
||||
? "{}"
|
||||
: offset_lon_mm_warning_alarm;
|
||||
|
||||
m["hole_def_mm_left_value"] = floatToStr(hole_def_mm_left_value);
|
||||
m["hole_def_mm_left_threshold"] =
|
||||
hole_def_mm_left_threshold.empty() ? "{}" : hole_def_mm_left_threshold;
|
||||
m["hole_def_mm_left_warning_alarm"] = hole_def_mm_left_warning_alarm.empty()
|
||||
? "{}"
|
||||
: hole_def_mm_left_warning_alarm;
|
||||
|
||||
m["hole_def_mm_right_value"] = floatToStr(hole_def_mm_right_value);
|
||||
m["hole_def_mm_right_threshold"] = hole_def_mm_right_threshold.empty()
|
||||
? "{}"
|
||||
: hole_def_mm_right_threshold;
|
||||
m["hole_def_mm_right_warning_alarm"] =
|
||||
hole_def_mm_right_warning_alarm.empty()
|
||||
? "{}"
|
||||
: hole_def_mm_right_warning_alarm;
|
||||
|
||||
m["rotation_angle_value"] = floatToStr(rotation_angle_value);
|
||||
m["rotation_angle_threshold"] =
|
||||
rotation_angle_threshold.empty() ? "{}" : rotation_angle_threshold;
|
||||
m["rotation_angle_warning_alarm"] = rotation_angle_warning_alarm.empty()
|
||||
? "{}"
|
||||
: rotation_angle_warning_alarm;
|
||||
}
|
||||
|
||||
// Flag 3: 横梁/立柱检测
|
||||
if (result_type == 3) {
|
||||
m["beam_def_mm_value"] = floatToStr(beam_def_mm_value);
|
||||
m["beam_def_mm_threshold"] =
|
||||
beam_def_mm_threshold.empty() ? "{}" : beam_def_mm_threshold;
|
||||
m["beam_def_mm_warning_alarm"] =
|
||||
beam_def_mm_warning_alarm.empty() ? "{}" : beam_def_mm_warning_alarm;
|
||||
|
||||
m["rack_def_mm_value"] = floatToStr(rack_def_mm_value);
|
||||
m["rack_def_mm_threshold"] =
|
||||
rack_def_mm_threshold.empty() ? "{}" : rack_def_mm_threshold;
|
||||
m["rack_def_mm_warning_alarm"] =
|
||||
rack_def_mm_warning_alarm.empty() ? "{}" : rack_def_mm_warning_alarm;
|
||||
}
|
||||
|
||||
// Flag 4 & 5: 视觉盘点 & 结束
|
||||
if (result_type == 4 || result_type == 5) {
|
||||
m["result_barcodes"] = result_barcodes.empty() ? "{}" : result_barcodes;
|
||||
}
|
||||
|
||||
return m;
|
||||
}
|
||||
96
image_capture/src/algorithm/core/detection_result.h
Normal file
96
image_capture/src/algorithm/core/detection_result.h
Normal file
@@ -0,0 +1,96 @@
|
||||
#pragma once
|
||||
|
||||
#include <string>
|
||||
#include <map>
|
||||
|
||||
// TODO: 添加nlohmann/json库依赖
|
||||
// 临时使用简单的JSON字符串表示,后续替换为nlohmann::json
|
||||
// 为了简化,这里使用std::string存储JSON字符串
|
||||
// 实际实现时应该使用nlohmann::json或类似的JSON库
|
||||
using JsonValue = std::string; // 临时定义,实际应使用nlohmann::json
|
||||
|
||||
/**
|
||||
* @brief 检测结果数据结构
|
||||
*
|
||||
* 包含所有检测任务的结果数据,根据任务类型(flag)填充相应字段
|
||||
*/
|
||||
struct DetectionResult {
|
||||
// 基础字段
|
||||
std::string result_status; // "success" 或 "fail"
|
||||
int result_type; // 对应 vision_task_flag(1~5)
|
||||
std::string last_update_time; // "YYYY-MM-DD HH:MM:SS"
|
||||
|
||||
// Flag 1: 货位有无检测
|
||||
bool slot_occupied; // 货位是否有托盘/货物
|
||||
|
||||
// Flag 2: 托盘位置偏移检测 - 插孔变形检测(取货时)
|
||||
// 左右偏移量
|
||||
float offset_lat_mm_value;
|
||||
JsonValue offset_lat_mm_threshold; // {"A": -5.0, "B": -3.0, "C": 3.0, "D": 5.0}
|
||||
JsonValue offset_lat_mm_warning_alarm; // {"warning": false, "alarm": false}
|
||||
|
||||
// 前后偏移量
|
||||
float offset_lon_mm_value;
|
||||
JsonValue offset_lon_mm_threshold;
|
||||
JsonValue offset_lon_mm_warning_alarm;
|
||||
|
||||
// 左侧插孔变形
|
||||
float hole_def_mm_left_value;
|
||||
JsonValue hole_def_mm_left_threshold;
|
||||
JsonValue hole_def_mm_left_warning_alarm;
|
||||
|
||||
// 右侧插孔变形
|
||||
float hole_def_mm_right_value;
|
||||
JsonValue hole_def_mm_right_threshold;
|
||||
JsonValue hole_def_mm_right_warning_alarm;
|
||||
|
||||
// 托盘整体旋转角度
|
||||
float rotation_angle_value;
|
||||
JsonValue rotation_angle_threshold;
|
||||
JsonValue rotation_angle_warning_alarm;
|
||||
|
||||
// Flag 3: 横梁变形检测 - 货架立柱变形检测(放货时)
|
||||
// 横梁弯曲量
|
||||
float beam_def_mm_value;
|
||||
JsonValue beam_def_mm_threshold;
|
||||
JsonValue beam_def_mm_warning_alarm;
|
||||
|
||||
// 立柱弯曲量
|
||||
float rack_def_mm_value;
|
||||
JsonValue rack_def_mm_threshold;
|
||||
JsonValue rack_def_mm_warning_alarm;
|
||||
|
||||
// Flag 4: 视觉盘点(扫码)
|
||||
JsonValue result_barcodes; // {"A01":["BOX111","BOX112"], "A02":["BOX210"]}
|
||||
|
||||
DetectionResult()
|
||||
: result_status("fail")
|
||||
, result_type(0)
|
||||
, slot_occupied(false)
|
||||
, offset_lat_mm_value(0.0f)
|
||||
, offset_lon_mm_value(0.0f)
|
||||
, hole_def_mm_left_value(0.0f)
|
||||
, hole_def_mm_right_value(0.0f)
|
||||
, rotation_angle_value(0.0f)
|
||||
, beam_def_mm_value(0.0f)
|
||||
, rack_def_mm_value(0.0f)
|
||||
{
|
||||
}
|
||||
|
||||
/**
|
||||
* 转换为JSON字符串
|
||||
*/
|
||||
std::string toJson() const;
|
||||
|
||||
/**
|
||||
* 从JSON字符串解析
|
||||
*/
|
||||
bool fromJson(const std::string& json_str);
|
||||
|
||||
/**
|
||||
* 转换为Key-Value Map
|
||||
* 用于分别写入Redis各个Key
|
||||
*/
|
||||
std::map<std::string, std::string> toMap() const;
|
||||
};
|
||||
|
||||
@@ -0,0 +1,932 @@
|
||||
#include "beam_rack_deflection_detection.h"
|
||||
#include "../../../common/config_manager.h"
|
||||
#define DEBUG_ROI_SELECTION // 启用交互式ROI选择(调试模式)
|
||||
#include <algorithm>
|
||||
#include <cmath>
|
||||
#include <iostream>
|
||||
#include <string>
|
||||
#include <vector>
|
||||
|
||||
#include <Eigen/Dense>
|
||||
|
||||
#include <QCoreApplication>
|
||||
#include <QDebug>
|
||||
#include <QDir>
|
||||
#include <QFile>
|
||||
#include <QJsonArray>
|
||||
#include <QJsonDocument>
|
||||
#include <QJsonObject>
|
||||
|
||||
//====================
|
||||
// 步骤1:默认ROI点定义
|
||||
//====================
|
||||
// 定义默认ROI点(四个点:左上、右上、右下、左下)
|
||||
// 横梁ROI默认点(示例值,可根据实际场景调整)
|
||||
const std::vector<cv::Point2i>
|
||||
BeamRackDeflectionAlgorithm::DEFAULT_BEAM_ROI_POINTS = {
|
||||
cv::Point2i(100, 50), // 左上
|
||||
cv::Point2i(540, 80), // 右上
|
||||
cv::Point2i(540, 280), // 右下
|
||||
cv::Point2i(100, 280) // 左下
|
||||
};
|
||||
|
||||
// 2180mm 横梁 ROI (Placeholder - Same as Default for now)
|
||||
const std::vector<cv::Point2i> BeamRackDeflectionAlgorithm::BEAM_ROI_2180 = {
|
||||
cv::Point2i(100, 50), cv::Point2i(540, 80), cv::Point2i(540, 280),
|
||||
cv::Point2i(100, 280)};
|
||||
|
||||
// 1380mm 横梁 ROI (Placeholder - Same as Default for now)
|
||||
const std::vector<cv::Point2i> BeamRackDeflectionAlgorithm::BEAM_ROI_1380 = {
|
||||
cv::Point2i(100, 50), cv::Point2i(540, 80), cv::Point2i(540, 280),
|
||||
cv::Point2i(100, 280)};
|
||||
|
||||
//====================
|
||||
// 步骤2:立柱ROI默认点定义
|
||||
//====================
|
||||
// 立柱ROI默认点(示例值,可根据实际场景调整)
|
||||
const std::vector<cv::Point2i>
|
||||
BeamRackDeflectionAlgorithm::DEFAULT_RACK_ROI_POINTS = {
|
||||
cv::Point2i(50, 50), // 左上
|
||||
cv::Point2i(150, 50), // 右上
|
||||
cv::Point2i(150, 430), // 右下
|
||||
cv::Point2i(50, 430) // 左下
|
||||
};
|
||||
|
||||
//====================
|
||||
// 步骤3:横梁阈值默认值定义
|
||||
//====================
|
||||
// 定义默认阈值(四个值:A负方向报警, B负方向警告, C正方向警告, D正方向报警)
|
||||
// 横梁阈值默认值(示例值,可根据实际需求调整)
|
||||
const std::vector<float> BeamRackDeflectionAlgorithm::DEFAULT_BEAM_THRESHOLDS =
|
||||
{
|
||||
-50.0f, // A: 负方向报警阈值 (横梁Y+方向忽略)
|
||||
-30.0f, // B: 负方向警告阈值 (横梁Y+方向忽略)
|
||||
30.0f, // C: 正方向警告阈值 (>30mm)
|
||||
50.0f // D: 正方向报警阈值 (>50mm)
|
||||
};
|
||||
|
||||
//====================
|
||||
// 步骤4:立柱阈值默认值定义
|
||||
//====================
|
||||
// 立柱阈值默认值(示例值,可根据实际需求调整)
|
||||
const std::vector<float> BeamRackDeflectionAlgorithm::DEFAULT_RACK_THRESHOLDS =
|
||||
{
|
||||
-50.0f, // A: 负方向报警阈值 (对称参考)
|
||||
-30.0f, // B: 负方向警告阈值 (对称参考)
|
||||
30.0f, // C: 正方向警告阈值 (绝对值 > 30mm)
|
||||
50.0f // D: 正方向报警阈值 (绝对值 > 50mm)
|
||||
};
|
||||
|
||||
//====================
|
||||
// 步骤5:加载标定参数
|
||||
//====================
|
||||
bool BeamRackDeflectionAlgorithm::loadCalibration(Eigen::Matrix4d &transform) {
|
||||
// 在当前目录查找 calibration_result_*.json 文件
|
||||
QDir dir = QDir::current();
|
||||
QStringList filters;
|
||||
filters << "calibration_result_*.json";
|
||||
dir.setNameFilters(filters);
|
||||
QFileInfoList list = dir.entryInfoList(QDir::Files, QDir::Time); // 按时间排序
|
||||
|
||||
if (list.empty()) {
|
||||
std::cerr << "[BeamRackDeflectionAlgorithm] Warning: No calibration file "
|
||||
"found. Using Identity."
|
||||
<< std::endl;
|
||||
transform = Eigen::Matrix4d::Identity();
|
||||
return false;
|
||||
}
|
||||
|
||||
// 使用最新的文件
|
||||
QString filePath = list.first().absoluteFilePath();
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] Loading calibration from: "
|
||||
<< filePath.toStdString() << std::endl;
|
||||
|
||||
QFile file(filePath);
|
||||
if (!file.open(QIODevice::ReadOnly)) {
|
||||
std::cerr << "[BeamRackDeflectionAlgorithm] Error: Could not open file."
|
||||
<< std::endl;
|
||||
transform = Eigen::Matrix4d::Identity();
|
||||
return false;
|
||||
}
|
||||
|
||||
QByteArray data = file.readAll();
|
||||
QJsonDocument doc = QJsonDocument::fromJson(data);
|
||||
if (doc.isNull()) {
|
||||
std::cerr << "[BeamRackDeflectionAlgorithm] Error: Invalid JSON."
|
||||
<< std::endl;
|
||||
transform = Eigen::Matrix4d::Identity();
|
||||
return false;
|
||||
}
|
||||
|
||||
QJsonObject root = doc.object();
|
||||
if (root.contains("transformation_matrix")) {
|
||||
QJsonArray arr = root["transformation_matrix"].toArray();
|
||||
if (arr.size() == 16) {
|
||||
for (int i = 0; i < 4; ++i) {
|
||||
for (int j = 0; j < 4; ++j) {
|
||||
transform(i, j) = arr[i * 4 + j].toDouble();
|
||||
}
|
||||
}
|
||||
return true;
|
||||
}
|
||||
}
|
||||
|
||||
std::cerr << "[BeamRackDeflectionAlgorithm] Error: transformation_matrix "
|
||||
"missing or invalid."
|
||||
<< std::endl;
|
||||
transform = Eigen::Matrix4d::Identity();
|
||||
return false;
|
||||
}
|
||||
|
||||
//====================
|
||||
// 步骤6:横梁和立柱变形检测主函数
|
||||
//====================
|
||||
bool BeamRackDeflectionAlgorithm::detect(
|
||||
const cv::Mat &depth_img, const cv::Mat &color_img, const std::string &side,
|
||||
BeamRackDeflectionResult &result, const std::vector<Point3D> *point_cloud,
|
||||
const std::vector<cv::Point2i> &beam_roi_points,
|
||||
const std::vector<cv::Point2i> &rack_roi_points,
|
||||
const std::vector<float> &beam_thresholds,
|
||||
const std::vector<float> &rack_thresholds) {
|
||||
// 算法启用开关
|
||||
const bool USE_ALGORITHM = true;
|
||||
|
||||
if (USE_ALGORITHM) {
|
||||
// --- 真实算法逻辑 ---
|
||||
// 6.1 初始化结果
|
||||
result.success = false;
|
||||
result.beam_def_mm_value = 0.0f;
|
||||
result.rack_def_mm_value = 0.0f;
|
||||
|
||||
// 6.2 验证深度图
|
||||
if (depth_img.empty()) {
|
||||
std::cerr << "[BeamRackDeflectionAlgorithm] ERROR: Depth image empty!"
|
||||
<< std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 6.3 检查点云
|
||||
if (!point_cloud || point_cloud->empty()) {
|
||||
std::cerr
|
||||
<< "[BeamRackDeflectionAlgorithm] ERROR: Point cloud empty or null!"
|
||||
<< std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 6.4 加载标定参数
|
||||
Eigen::Matrix4d transform;
|
||||
loadCalibration(transform);
|
||||
|
||||
// 6.5 转换点云并按ROI组织
|
||||
// 注意:假设点云与深度图分辨率匹配(行优先)
|
||||
// 如果点云只是有效点的列表而没有结构,我们无法轻松映射2D ROI
|
||||
// 但通常标准会保持 size = width * height
|
||||
if (point_cloud->size() != depth_img.cols * depth_img.rows) {
|
||||
std::cerr << "[BeamRackDeflectionAlgorithm] Warning: Point cloud size "
|
||||
"mismatch. Assuming organized."
|
||||
<< std::endl;
|
||||
}
|
||||
|
||||
int width = depth_img.cols;
|
||||
int height = depth_img.rows;
|
||||
|
||||
std::vector<Eigen::Vector3d> beam_points_3d;
|
||||
std::vector<Eigen::Vector3d> rack_points_3d;
|
||||
|
||||
// 6.6 辅助函数:检查点是否在ROI内
|
||||
auto isInRoi = [](const std::vector<cv::Point2i> &roi, int x, int y) {
|
||||
if (roi.size() < 3)
|
||||
return false;
|
||||
return cv::pointPolygonTest(roi, cv::Point2f((float)x, (float)y),
|
||||
false) >= 0;
|
||||
};
|
||||
|
||||
// 6.7 确定实际使用的ROI(使用默认值或自定义值)
|
||||
std::vector<cv::Point2i> actual_beam_roi =
|
||||
beam_roi_points.empty() ? DEFAULT_BEAM_ROI_POINTS : beam_roi_points;
|
||||
std::vector<cv::Point2i> actual_rack_roi =
|
||||
rack_roi_points.empty() ? DEFAULT_RACK_ROI_POINTS : rack_roi_points;
|
||||
|
||||
// 6.8 交互式ROI选择(调试模式)
|
||||
#ifdef DEBUG_ROI_SELECTION
|
||||
// 辅助lambda函数:用于4点ROI选择
|
||||
auto selectPolygonROI =
|
||||
[&](const std::string &winName,
|
||||
const cv::Mat &bg_img) -> std::vector<cv::Point2i> {
|
||||
std::vector<cv::Point> clicks;
|
||||
std::string fullWinName = winName + " (Click 4 points)";
|
||||
cv::namedWindow(fullWinName, cv::WINDOW_AUTOSIZE);
|
||||
|
||||
cv::setMouseCallback(
|
||||
fullWinName,
|
||||
[](int event, int x, int y, int flags, void *userdata) {
|
||||
auto *points = static_cast<std::vector<cv::Point> *>(userdata);
|
||||
if (event == cv::EVENT_LBUTTONDOWN) {
|
||||
if (points->size() < 4) {
|
||||
points->push_back(cv::Point(x, y));
|
||||
std::cout << "Clicked: (" << x << ", " << y << ")" << std::endl;
|
||||
}
|
||||
}
|
||||
},
|
||||
&clicks);
|
||||
|
||||
while (clicks.size() < 4) {
|
||||
cv::Mat display = bg_img.clone();
|
||||
for (size_t i = 0; i < clicks.size(); ++i) {
|
||||
cv::circle(display, clicks[i], 4, cv::Scalar(0, 0, 255), -1);
|
||||
if (i > 0)
|
||||
cv::line(display, clicks[i - 1], clicks[i], cv::Scalar(0, 255, 0),
|
||||
2);
|
||||
}
|
||||
cv::imshow(fullWinName, display);
|
||||
int key = cv::waitKey(10);
|
||||
if (key == 27)
|
||||
return {}; // ESC键取消
|
||||
}
|
||||
// 闭合多边形可视化
|
||||
cv::Mat final_display = bg_img.clone();
|
||||
for (size_t i = 0; i < clicks.size(); ++i) {
|
||||
cv::circle(final_display, clicks[i], 4, cv::Scalar(0, 0, 255), -1);
|
||||
if (i > 0)
|
||||
cv::line(final_display, clicks[i - 1], clicks[i],
|
||||
cv::Scalar(0, 255, 0), 2);
|
||||
}
|
||||
cv::line(final_display, clicks.back(), clicks.front(),
|
||||
cv::Scalar(0, 255, 0), 2);
|
||||
cv::imshow(fullWinName, final_display);
|
||||
cv::waitKey(500); // Show for a bit
|
||||
|
||||
cv::destroyWindow(fullWinName);
|
||||
|
||||
// Convert to Point2i
|
||||
std::vector<cv::Point2i> result;
|
||||
for (const auto &p : clicks)
|
||||
result.push_back(p);
|
||||
return result;
|
||||
};
|
||||
|
||||
static bool showed_debug_warning = false;
|
||||
if (!showed_debug_warning) {
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] DEBUG INFO: Interactive "
|
||||
"Rectified ROI Selection Enabled."
|
||||
<< std::endl;
|
||||
showed_debug_warning = true;
|
||||
}
|
||||
|
||||
if (!depth_img.empty()) {
|
||||
// --- 矫正逻辑 ---
|
||||
cv::Mat display_img;
|
||||
cv::normalize(depth_img, display_img, 0, 255, cv::NORM_MINMAX, CV_8U);
|
||||
cv::cvtColor(display_img, display_img, cv::COLOR_GRAY2BGR);
|
||||
|
||||
// 尝试加载内参以进行矫正
|
||||
cv::Mat H = cv::Mat::eye(3, 3, CV_64F);
|
||||
bool can_rectify = false;
|
||||
|
||||
QDir dir_curr = QDir::current();
|
||||
QStringList filters;
|
||||
filters << "intrinsics_*.json";
|
||||
dir_curr.setNameFilters(filters);
|
||||
QFileInfoList list = dir_curr.entryInfoList(QDir::Files, QDir::Time);
|
||||
|
||||
if (!list.empty()) {
|
||||
QFile i_file(list.first().absoluteFilePath());
|
||||
if (i_file.open(QIODevice::ReadOnly)) {
|
||||
QJsonDocument i_doc = QJsonDocument::fromJson(i_file.readAll());
|
||||
if (!i_doc.isNull() && i_doc.object().contains("depth")) {
|
||||
QJsonObject d_obj = i_doc.object()["depth"].toObject();
|
||||
if (d_obj.contains("intrinsic")) {
|
||||
QJsonArray i_arr = d_obj["intrinsic"].toArray();
|
||||
if (i_arr.size() >= 9) {
|
||||
double fx = i_arr[0].toDouble();
|
||||
double fy = i_arr[4].toDouble();
|
||||
double cx = i_arr[2].toDouble();
|
||||
double cy = i_arr[5].toDouble();
|
||||
|
||||
Eigen::Matrix3d K;
|
||||
K << fx, 0, cx, 0, fy, cy, 0, 0, 1;
|
||||
|
||||
Eigen::Matrix3d R = transform.block<3, 3>(0, 0);
|
||||
// 单应性矩阵 H = K * R * K_inv
|
||||
// 这将图像变换为仿佛相机已按 R 旋转
|
||||
Eigen::Matrix3d H_eig = K * R * K.inverse();
|
||||
|
||||
for (int r = 0; r < 3; ++r)
|
||||
for (int c = 0; c < 3; ++c)
|
||||
H.at<double>(r, c) = H_eig(r, c);
|
||||
|
||||
can_rectify = true;
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] Intrinsics loaded. "
|
||||
"Rectification enabled."
|
||||
<< std::endl;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
cv::Mat warp_img;
|
||||
cv::Mat H_final = H.clone(); // 复制原始 H 以开始
|
||||
|
||||
if (can_rectify) {
|
||||
// 1. 计算变换后的角点以找到新的边界框
|
||||
std::vector<cv::Point2f> corners = {
|
||||
cv::Point2f(0, 0), cv::Point2f((float)width, 0),
|
||||
cv::Point2f((float)width, (float)height),
|
||||
cv::Point2f(0, (float)height)};
|
||||
std::vector<cv::Point2f> warped_corners;
|
||||
cv::perspectiveTransform(corners, warped_corners, H);
|
||||
|
||||
cv::Rect bbox = cv::boundingRect(warped_corners);
|
||||
|
||||
// 2. 创建平移矩阵以将图像移入视野
|
||||
cv::Mat T = cv::Mat::eye(3, 3, CV_64F);
|
||||
T.at<double>(0, 2) = -bbox.x;
|
||||
T.at<double>(1, 2) = -bbox.y;
|
||||
|
||||
// 3. 更新单应性矩阵
|
||||
H_final = T * H;
|
||||
|
||||
// 4. 使用新尺寸和 H 进行变换
|
||||
cv::warpPerspective(display_img, warp_img, H_final, bbox.size());
|
||||
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] Rectified Image Size: "
|
||||
<< bbox.width << "x" << bbox.height << std::endl;
|
||||
} else {
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] Warning: Intrinsics not "
|
||||
"found. Showing unrectified image."
|
||||
<< std::endl;
|
||||
warp_img = display_img.clone();
|
||||
}
|
||||
|
||||
// --- 选择横梁 ROI ---
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] Please click 4 points for "
|
||||
"BEAM ROI..."
|
||||
<< std::endl;
|
||||
auto beam_poly_visual = selectPolygonROI("Select BEAM", warp_img);
|
||||
|
||||
// 如果已矫正,则映射回原始坐标
|
||||
if (beam_poly_visual.size() == 4) {
|
||||
if (can_rectify) {
|
||||
std::vector<cv::Point2f> src, dst;
|
||||
for (auto p : beam_poly_visual)
|
||||
src.push_back(cv::Point2f(p.x, p.y));
|
||||
cv::perspectiveTransform(src, dst,
|
||||
H_final.inv()); // Use H_final.inv()
|
||||
actual_beam_roi.clear();
|
||||
for (auto p : dst)
|
||||
actual_beam_roi.push_back(
|
||||
cv::Point2i(std::round(p.x), std::round(p.y)));
|
||||
} else {
|
||||
actual_beam_roi = beam_poly_visual;
|
||||
}
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] Beam ROI Updated."
|
||||
<< std::endl;
|
||||
}
|
||||
|
||||
// --- 选择立柱 ROI ---
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] Please click 4 points for "
|
||||
"RACK ROI..."
|
||||
<< std::endl;
|
||||
auto rack_poly_visual = selectPolygonROI("Select RACK", warp_img);
|
||||
|
||||
if (rack_poly_visual.size() == 4) {
|
||||
if (can_rectify) {
|
||||
std::vector<cv::Point2f> src, dst;
|
||||
for (auto p : rack_poly_visual)
|
||||
src.push_back(cv::Point2f(p.x, p.y));
|
||||
cv::perspectiveTransform(src, dst,
|
||||
H_final.inv()); // Use H_final.inv()
|
||||
actual_rack_roi.clear();
|
||||
for (auto p : dst)
|
||||
actual_rack_roi.push_back(
|
||||
cv::Point2i(std::round(p.x), std::round(p.y)));
|
||||
} else {
|
||||
actual_rack_roi = rack_poly_visual;
|
||||
}
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] Rack ROI Updated."
|
||||
<< std::endl;
|
||||
}
|
||||
}
|
||||
#endif
|
||||
// ============================================
|
||||
|
||||
cv::Rect beam_bbox = cv::boundingRect(actual_beam_roi);
|
||||
cv::Rect rack_bbox = cv::boundingRect(actual_rack_roi);
|
||||
|
||||
// 处理横梁 ROI 区域
|
||||
float max_beam_deflection = 0.0f;
|
||||
float max_rack_deflection = 0.0f;
|
||||
|
||||
auto process_roi = [&](const cv::Rect &bbox,
|
||||
const std::vector<cv::Point2i> &poly,
|
||||
std::vector<Eigen::Vector3d> &out_pts) {
|
||||
int start_x = std::max(0, bbox.x);
|
||||
int end_x = std::min(width, bbox.x + bbox.width);
|
||||
int start_y = std::max(0, bbox.y);
|
||||
int end_y = std::min(height, bbox.y + bbox.height);
|
||||
|
||||
for (int y = start_y; y < end_y; ++y) {
|
||||
for (int x = start_x; x < end_x; ++x) {
|
||||
if (!isInRoi(poly, x, y))
|
||||
continue;
|
||||
|
||||
int idx = y * width + x;
|
||||
if (idx >= point_cloud->size())
|
||||
continue;
|
||||
|
||||
const Point3D &pt = (*point_cloud)[idx];
|
||||
if (pt.z <= 0.0f || std::isnan(pt.x))
|
||||
continue;
|
||||
|
||||
// Transform
|
||||
Eigen::Vector4d p(pt.x, pt.y, pt.z, 1.0);
|
||||
Eigen::Vector4d p_trans = transform * p;
|
||||
|
||||
out_pts.emplace_back(p_trans.head<3>());
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
process_roi(beam_bbox, actual_beam_roi, beam_points_3d);
|
||||
process_roi(rack_bbox, actual_rack_roi, rack_points_3d);
|
||||
|
||||
// ===========================================
|
||||
// FIX: 自动旋转矫正 (PCA)
|
||||
// 解决 "基准线不水平" 的问题,确保横梁水平,立柱垂直
|
||||
// 通过将数据旋转到水平/垂直,基准线(连接端点)将变为水平/垂直。
|
||||
// 从而使变形量(点到线的距离)等于 Y 轴(横梁)或 X 轴(立柱)的偏差。
|
||||
// ===========================================
|
||||
auto correctRotation = [](std::vector<Eigen::Vector3d> &points,
|
||||
bool is_beam) {
|
||||
if (points.size() < 10)
|
||||
return;
|
||||
|
||||
// 1. Convert to cv::Mat for PCA (Only use X, Y)
|
||||
int n = points.size();
|
||||
cv::Mat data(n, 2, CV_64F);
|
||||
for (int i = 0; i < n; ++i) {
|
||||
data.at<double>(i, 0) = points[i].x();
|
||||
data.at<double>(i, 1) = points[i].y();
|
||||
}
|
||||
|
||||
// 2. Perform PCA
|
||||
cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW);
|
||||
|
||||
// 3. Get primary eigenvector (direction of max variance)
|
||||
// Eigenvectors are stored in rows. Row 0 is the primary vector.
|
||||
cv::Point2d eigen_vec(pca.eigenvectors.at<double>(0, 0),
|
||||
pca.eigenvectors.at<double>(0, 1));
|
||||
|
||||
// 4. Calculate angle relative to desired axis
|
||||
// Beam (is_beam=true): Should align with X-axis (1, 0)
|
||||
// Rack (is_beam=false): Should align with Y-axis (0, 1)
|
||||
|
||||
double angle = std::atan2(eigen_vec.y, eigen_vec.x); // Angle of the data
|
||||
|
||||
double rotation_angle = 0.0;
|
||||
|
||||
if (is_beam) {
|
||||
// Target: Horizontal (0 degrees)
|
||||
rotation_angle = -angle;
|
||||
} else {
|
||||
// Target: Vertical (90 degrees or PI/2)
|
||||
rotation_angle = (CV_PI / 2.0) - angle;
|
||||
}
|
||||
|
||||
// Normalize to -PI ~ PI
|
||||
while (rotation_angle > CV_PI)
|
||||
rotation_angle -= 2 * CV_PI;
|
||||
while (rotation_angle < -CV_PI)
|
||||
rotation_angle += 2 * CV_PI;
|
||||
|
||||
// Safety check: Don't rotate if angle is suspicious huge (> 45 deg)
|
||||
// unless confident For now, we trust PCA for standard slight tilts (< 30
|
||||
// deg).
|
||||
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] Correcting "
|
||||
<< (is_beam ? "Beam" : "Rack")
|
||||
<< " Rotation: " << rotation_angle * 180.0 / CV_PI << " deg."
|
||||
<< std::endl;
|
||||
|
||||
// 5. Apply Rotation
|
||||
double c = std::cos(rotation_angle);
|
||||
double s = std::sin(rotation_angle);
|
||||
|
||||
// Center of rotation: PCA mean
|
||||
double cx = pca.mean.at<double>(0);
|
||||
double cy = pca.mean.at<double>(1);
|
||||
|
||||
for (int i = 0; i < n; ++i) {
|
||||
double x = points[i].x() - cx;
|
||||
double y = points[i].y() - cy;
|
||||
|
||||
double x_new = x * c - y * s;
|
||||
double y_new = x * s + y * c;
|
||||
|
||||
points[i].x() = x_new + cx;
|
||||
points[i].y() = y_new + cy;
|
||||
// Z unchanged
|
||||
}
|
||||
};
|
||||
|
||||
// Apply corrections
|
||||
correctRotation(beam_points_3d, true);
|
||||
correctRotation(rack_points_3d, false);
|
||||
// ===========================================
|
||||
|
||||
// 6.9 计算变形量
|
||||
|
||||
// 分箱(切片)方法辅助函数
|
||||
auto calculate_deflection_binned = [&](std::vector<Eigen::Vector3d> &points,
|
||||
bool is_beam_y_check,
|
||||
const std::string &label) -> float {
|
||||
if (points.empty())
|
||||
return 0.0f;
|
||||
|
||||
// 1. 沿主轴排序点
|
||||
std::sort(points.begin(), points.end(),
|
||||
[is_beam_y_check](const Eigen::Vector3d &a,
|
||||
const Eigen::Vector3d &b) {
|
||||
return is_beam_y_check ? (a.x() < b.x()) : (a.y() < b.y());
|
||||
});
|
||||
|
||||
// 2. 分箱
|
||||
int num_bins = 50;
|
||||
if (points.size() < 100)
|
||||
num_bins = 10; // Reduce bins for small sets
|
||||
|
||||
double min_u = is_beam_y_check ? points.front().x() : points.front().y();
|
||||
double max_u = is_beam_y_check ? points.back().x() : points.back().y();
|
||||
|
||||
// 可视化辅助
|
||||
#ifdef DEBUG_ROI_SELECTION
|
||||
int viz_w = 800;
|
||||
int viz_h = 400;
|
||||
cv::Mat viz_img = cv::Mat::zeros(viz_h, viz_w, CV_8UC3);
|
||||
double disp_min_u = min_u;
|
||||
double disp_max_u = max_u;
|
||||
double min_v = 1e9, max_v = -1e9;
|
||||
|
||||
auto map_u = [&](double u) -> int {
|
||||
return (int)((u - disp_min_u) / (disp_max_u - disp_min_u) *
|
||||
(viz_w - 40) +
|
||||
20);
|
||||
};
|
||||
// Will define map_v later after range finding
|
||||
#endif
|
||||
|
||||
std::vector<Eigen::Vector3d> raw_centroids;
|
||||
std::vector<int> counts;
|
||||
|
||||
double range_min = min_u;
|
||||
double range_max = max_u;
|
||||
double bin_step = (range_max - range_min) / num_bins;
|
||||
|
||||
if (bin_step < 1.0)
|
||||
return 0.0f;
|
||||
|
||||
auto it = points.begin();
|
||||
double avg_pts_per_bin = 0;
|
||||
int filled_bins = 0;
|
||||
|
||||
for (int i = 0; i < num_bins; ++i) {
|
||||
double bin_start = range_min + i * bin_step;
|
||||
double bin_end = bin_start + bin_step;
|
||||
|
||||
std::vector<Eigen::Vector3d> bin_pts;
|
||||
while (it != points.end()) {
|
||||
double val = is_beam_y_check ? it->x() : it->y();
|
||||
// double val_v = is_beam_y_check ? it->y() : it->x();
|
||||
if (val > bin_end)
|
||||
break;
|
||||
bin_pts.push_back(*it);
|
||||
++it;
|
||||
}
|
||||
|
||||
if (!bin_pts.empty()) {
|
||||
// Robust Centroid (Trimmed Mean)
|
||||
std::sort(bin_pts.begin(), bin_pts.end(),
|
||||
[is_beam_y_check](const Eigen::Vector3d &a,
|
||||
const Eigen::Vector3d &b) {
|
||||
double val_a = is_beam_y_check ? a.y() : a.x(); // V axis
|
||||
double val_b = is_beam_y_check ? b.y() : b.x();
|
||||
return val_a < val_b;
|
||||
});
|
||||
|
||||
size_t n = bin_pts.size();
|
||||
size_t start = (size_t)(n * 0.25);
|
||||
size_t end = (size_t)(n * 0.75);
|
||||
if (end <= start) {
|
||||
start = 0;
|
||||
end = n;
|
||||
}
|
||||
|
||||
Eigen::Vector3d sum(0, 0, 0);
|
||||
int count = 0;
|
||||
for (size_t k = start; k < end; ++k) {
|
||||
sum += bin_pts[k];
|
||||
count++;
|
||||
}
|
||||
|
||||
if (count > 0) {
|
||||
raw_centroids.push_back(sum / count);
|
||||
counts.push_back(bin_pts.size());
|
||||
avg_pts_per_bin += bin_pts.size();
|
||||
filled_bins++;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (filled_bins < 2)
|
||||
return 0.0f;
|
||||
avg_pts_per_bin /= filled_bins;
|
||||
|
||||
// --- 2.1 Bin Filtering (Remove Noise) ---
|
||||
// Filter out bins with significantly low density (e.g. < 20% of average)
|
||||
std::vector<Eigen::Vector3d> bin_centroids;
|
||||
for (size_t i = 0; i < raw_centroids.size(); ++i) {
|
||||
if (counts[i] > avg_pts_per_bin * 0.2) {
|
||||
bin_centroids.push_back(raw_centroids[i]);
|
||||
|
||||
// Track V range for filtered points
|
||||
double v =
|
||||
is_beam_y_check ? raw_centroids[i].y() : raw_centroids[i].x();
|
||||
#ifdef DEBUG_ROI_SELECTION
|
||||
if (v < min_v)
|
||||
min_v = v;
|
||||
if (v > max_v)
|
||||
max_v = v;
|
||||
#endif
|
||||
}
|
||||
}
|
||||
|
||||
if (bin_centroids.size() < 2) {
|
||||
std::cerr << "[BeamRack] Filtered bins too few." << std::endl;
|
||||
return 0.0f;
|
||||
}
|
||||
|
||||
#ifdef DEBUG_ROI_SELECTION
|
||||
// Adjust V range
|
||||
double v_range = max_v - min_v;
|
||||
if (v_range < 1.0)
|
||||
v_range = 10.0;
|
||||
min_v -= v_range * 0.5; // More margin
|
||||
max_v += v_range * 0.5;
|
||||
|
||||
auto map_v = [&](double v) -> int {
|
||||
return (int)((v - min_v) / (max_v - min_v) * (viz_h - 40) + 20);
|
||||
};
|
||||
|
||||
// Draw Points
|
||||
for (size_t i = 0; i < bin_centroids.size(); ++i) {
|
||||
double u =
|
||||
is_beam_y_check ? bin_centroids[i].x() : bin_centroids[i].y();
|
||||
double v =
|
||||
is_beam_y_check ? bin_centroids[i].y() : bin_centroids[i].x();
|
||||
cv::circle(viz_img, cv::Point(map_u(u), map_v(v)), 3,
|
||||
cv::Scalar(255, 255, 0), -1); // Cyan Centroids
|
||||
}
|
||||
#endif
|
||||
|
||||
// --- 3. Robust Baseline Fitting (Support Line) ---
|
||||
// Instead of simple endpoints, fit a line to "valid support regions"
|
||||
// Support Regions: First 15% and Last 15% of VALID centroids.
|
||||
|
||||
std::vector<Eigen::Vector3d> support_points;
|
||||
int support_count = (int)(bin_centroids.size() * 0.15);
|
||||
if (support_count < 2)
|
||||
support_count = 2; // At least 2 points at each end
|
||||
if (support_count * 2 > bin_centroids.size())
|
||||
support_count = bin_centroids.size() / 2;
|
||||
|
||||
for (int i = 0; i < support_count; ++i)
|
||||
support_points.push_back(bin_centroids[i]);
|
||||
for (int i = 0; i < support_count; ++i)
|
||||
support_points.push_back(bin_centroids[bin_centroids.size() - 1 - i]);
|
||||
|
||||
// Fit Line to Support Points (Least Squares)
|
||||
// Model: v = m * u + c (since rotated, m should be close to 0)
|
||||
double sum_u = 0, sum_v = 0, sum_uv = 0, sum_uu = 0;
|
||||
int N = support_points.size();
|
||||
for (const auto &p : support_points) {
|
||||
double u = is_beam_y_check ? p.x() : p.y();
|
||||
double v = is_beam_y_check ? p.y() : p.x();
|
||||
sum_u += u;
|
||||
sum_v += v;
|
||||
sum_uv += u * v;
|
||||
sum_uu += u * u;
|
||||
}
|
||||
|
||||
double slope = 0, intercept = 0;
|
||||
double denom = N * sum_uu - sum_u * sum_u;
|
||||
if (std::abs(denom) > 1e-6) {
|
||||
slope = (N * sum_uv - sum_u * sum_v) / denom;
|
||||
intercept = (sum_v - slope * sum_u) / N;
|
||||
} else {
|
||||
// Vertical line? Should not happen after rotation. Fallback average.
|
||||
slope = 0;
|
||||
intercept = sum_v / N;
|
||||
}
|
||||
|
||||
std::cout << "[BeamRack] Baseline Fit: slope=" << slope
|
||||
<< ", intercept=" << intercept << " (Support Pts: " << N << ")"
|
||||
<< std::endl;
|
||||
|
||||
// --- 4. Calculate Max Deflection ---
|
||||
double max_def = 0.0;
|
||||
Eigen::Vector3d max_pt;
|
||||
double max_theoretical_v = 0;
|
||||
|
||||
for (const auto &p : bin_centroids) {
|
||||
double u = is_beam_y_check ? p.x() : p.y();
|
||||
double v = is_beam_y_check ? p.y() : p.x();
|
||||
|
||||
double theoretical_v = slope * u + intercept;
|
||||
double def = 0;
|
||||
|
||||
if (is_beam_y_check) {
|
||||
// Beam: Y+ is down. Deflection = ActualY - TheoreticalY
|
||||
// We rotated data, so Y+ might still be relevant if rotation was just
|
||||
// alignment. Assuming standard coords: Sag (Down) is Y decreasing? Or
|
||||
// increasing? In Camera Coords: Y is DOWN. So Sag is INCREASING Y.
|
||||
// Deflection = v - theoretical_v. Positive = Down.
|
||||
def = v - theoretical_v;
|
||||
} else {
|
||||
// Rack: Deflection is absolute distance
|
||||
def = std::abs(v - theoretical_v);
|
||||
}
|
||||
|
||||
if (def > max_def) {
|
||||
max_def = def;
|
||||
max_pt = p;
|
||||
max_theoretical_v = theoretical_v;
|
||||
}
|
||||
}
|
||||
|
||||
// Robust Average of Max Region (Top 3)
|
||||
// ... (Simplified: use raw max for now, or implement top-k avg if
|
||||
// preferred) Sticking to Max for simplicity as requested, but previous
|
||||
// code used Average. Let's reimplement Top 3 Average roughly around max
|
||||
// peak? Actually, just returning max_def is cleaner for "maximum sag".
|
||||
|
||||
#ifdef DEBUG_ROI_SELECTION
|
||||
// Draw Baseline
|
||||
double u_start = disp_min_u;
|
||||
double v_start = slope * u_start + intercept;
|
||||
double u_end = disp_max_u;
|
||||
double v_end = slope * u_end + intercept;
|
||||
cv::line(viz_img, cv::Point(map_u(u_start), map_v(v_start)),
|
||||
cv::Point(map_u(u_end), map_v(v_end)), cv::Scalar(0, 255, 0), 2);
|
||||
|
||||
// Draw Max Deflection
|
||||
if (max_def != 0.0) {
|
||||
double u_d = is_beam_y_check ? max_pt.x() : max_pt.y();
|
||||
double v_d = is_beam_y_check ? max_pt.y() : max_pt.x();
|
||||
cv::line(viz_img, cv::Point(map_u(u_d), map_v(v_d)),
|
||||
cv::Point(map_u(u_d), map_v(max_theoretical_v)),
|
||||
cv::Scalar(0, 0, 255), 2);
|
||||
cv::putText(viz_img, "Max: " + std::to_string(max_def),
|
||||
cv::Point(viz_w / 2, 50), cv::FONT_HERSHEY_SIMPLEX, 0.8,
|
||||
cv::Scalar(0, 0, 255), 2);
|
||||
}
|
||||
|
||||
cv::imshow("Robust Deflection: " + label, viz_img);
|
||||
cv::waitKey(100);
|
||||
#endif
|
||||
|
||||
return (float)max_def;
|
||||
};
|
||||
|
||||
// 6.10 Run Calculation logic
|
||||
|
||||
// --- 横梁变形(Y+ 方向)---
|
||||
|
||||
max_beam_deflection =
|
||||
calculate_deflection_binned(beam_points_3d, true, "Beam");
|
||||
|
||||
// --- 立柱变形(X 方向)---
|
||||
max_rack_deflection =
|
||||
calculate_deflection_binned(rack_points_3d, false, "Rack");
|
||||
|
||||
// 存储结果
|
||||
result.beam_def_mm_value = max_beam_deflection;
|
||||
result.rack_def_mm_value = max_rack_deflection;
|
||||
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] Results: Beam="
|
||||
<< max_beam_deflection << "mm, Rack=" << max_rack_deflection
|
||||
<< "mm"
|
||||
<< " (Beam Points: " << beam_points_3d.size()
|
||||
<< ", Rack Points: " << rack_points_3d.size() << ")" << std::endl;
|
||||
|
||||
// 使用默认或提供的阈值
|
||||
// std::vector<float> actual_beam_thresh = beam_thresholds.empty() ?
|
||||
// DEFAULT_BEAM_THRESHOLDS : beam_thresholds; // OLD std::vector<float>
|
||||
// actual_rack_thresh = rack_thresholds.empty() ? DEFAULT_RACK_THRESHOLDS :
|
||||
// rack_thresholds; // OLD
|
||||
|
||||
// NEW: Use ConfigManager
|
||||
std::vector<float> actual_beam_thresh =
|
||||
ConfigManager::getInstance().getBeamThresholds();
|
||||
std::vector<float> actual_rack_thresh =
|
||||
ConfigManager::getInstance().getRackThresholds();
|
||||
|
||||
// Fallback if empty (should not happen with getBeamThresholds defaults)
|
||||
if (actual_beam_thresh.size() < 4)
|
||||
actual_beam_thresh = DEFAULT_BEAM_THRESHOLDS;
|
||||
if (actual_rack_thresh.size() < 4)
|
||||
actual_rack_thresh = DEFAULT_RACK_THRESHOLDS;
|
||||
|
||||
// 制作 json 阈值字符串的辅助函数
|
||||
auto make_json_thresh = [](const std::vector<float> &t) {
|
||||
return "{\"A\":" + std::to_string(t[0]) +
|
||||
",\"B\":" + std::to_string(t[1]) +
|
||||
",\"C\":" + std::to_string(t[2]) +
|
||||
",\"D\":" + std::to_string(t[3]) + "}";
|
||||
};
|
||||
|
||||
if (actual_beam_thresh.size() >= 4) {
|
||||
result.beam_def_mm_threshold = make_json_thresh(actual_beam_thresh);
|
||||
}
|
||||
if (actual_rack_thresh.size() >= 4) {
|
||||
result.rack_def_mm_threshold = make_json_thresh(actual_rack_thresh);
|
||||
}
|
||||
|
||||
// 检查状态
|
||||
// 横梁:正值为向下(Y+)。检查 C 和 D。
|
||||
// 负值为向上(Y-)。检查 A 和 B?
|
||||
// 用户要求:“横梁由于货物仅向下弯曲...(Y 正方向)”
|
||||
// 所以我们主要使用 max_beam_deflection检查 C(警告)和 D(报警)(这是
|
||||
// >0)。 遗留阈值具有负值,可能用于范围检查。 我们将假设标准 [A(neg),
|
||||
// B(neg), C(pos), D(pos)] 格式。
|
||||
|
||||
bool beam_warn = (max_beam_deflection >= actual_beam_thresh[2]); // > C
|
||||
bool beam_alrm = (max_beam_deflection >= actual_beam_thresh[3]); // > D
|
||||
|
||||
// Rack: Bends Left or Right. We took Abs() -> always positive.
|
||||
// So we check against C and D.
|
||||
bool rack_warn = (max_rack_deflection >= actual_rack_thresh[2]);
|
||||
bool rack_alrm = (max_rack_deflection >= actual_rack_thresh[3]);
|
||||
|
||||
auto make_json_status = [](bool w, bool a) {
|
||||
return "{\"warning\":" + std::string(w ? "true" : "false") +
|
||||
",\"alarm\":" + std::string(a ? "true" : "false") + "}";
|
||||
};
|
||||
|
||||
result.beam_def_mm_warning_alarm = make_json_status(beam_warn, beam_alrm);
|
||||
result.rack_def_mm_warning_alarm = make_json_status(rack_warn, rack_alrm);
|
||||
|
||||
// 标记为成功
|
||||
result.success = true;
|
||||
|
||||
#ifdef DEBUG_ROI_SELECTION
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] Press ANY KEY to close graphs "
|
||||
"and continue..."
|
||||
<< std::endl;
|
||||
cv::waitKey(0);
|
||||
cv::destroyAllWindows();
|
||||
#endif
|
||||
|
||||
return result.success;
|
||||
} else {
|
||||
// --- 模拟数据逻辑 ---
|
||||
std::cout << "[BeamRackDeflectionAlgorithm] Using FAKE DATA implementation "
|
||||
"(Switch OFF)."
|
||||
<< std::endl;
|
||||
|
||||
result.beam_def_mm_value = 5.5f; // 模拟横梁弯曲
|
||||
result.rack_def_mm_value = 2.2f; // 模拟立柱弯曲
|
||||
result.success = true;
|
||||
|
||||
// 设置模拟阈值
|
||||
// std::vector<float> actual_beam_thresh = beam_thresholds.empty() ?
|
||||
// DEFAULT_BEAM_THRESHOLDS : beam_thresholds; std::vector<float>
|
||||
// actual_rack_thresh = rack_thresholds.empty() ? DEFAULT_RACK_THRESHOLDS :
|
||||
// rack_thresholds;
|
||||
std::vector<float> actual_beam_thresh =
|
||||
ConfigManager::getInstance().getBeamThresholds();
|
||||
std::vector<float> actual_rack_thresh =
|
||||
ConfigManager::getInstance().getRackThresholds();
|
||||
if (actual_beam_thresh.size() < 4)
|
||||
actual_beam_thresh = DEFAULT_BEAM_THRESHOLDS;
|
||||
if (actual_rack_thresh.size() < 4)
|
||||
actual_rack_thresh = DEFAULT_RACK_THRESHOLDS;
|
||||
|
||||
auto make_json_thresh = [](const std::vector<float> &t) {
|
||||
return "{\"A\":" + std::to_string(t[0]) +
|
||||
",\"B\":" + std::to_string(t[1]) +
|
||||
",\"C\":" + std::to_string(t[2]) +
|
||||
",\"D\":" + std::to_string(t[3]) + "}";
|
||||
};
|
||||
if (actual_beam_thresh.size() >= 4)
|
||||
result.beam_def_mm_threshold = make_json_thresh(actual_beam_thresh);
|
||||
if (actual_rack_thresh.size() >= 4)
|
||||
result.rack_def_mm_threshold = make_json_thresh(actual_rack_thresh);
|
||||
|
||||
// 设置模拟警告/报警状态
|
||||
result.beam_def_mm_warning_alarm = "{\"warning\":false,\"alarm\":false}";
|
||||
result.rack_def_mm_warning_alarm = "{\"warning\":false,\"alarm\":false}";
|
||||
|
||||
return result.success;
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,134 @@
|
||||
#pragma once
|
||||
|
||||
#include "../../../common_types.h"
|
||||
#include <Eigen/Dense>
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include <string>
|
||||
|
||||
|
||||
/**
|
||||
* @brief 四边形ROI结构(四个点定义)
|
||||
*/
|
||||
struct QuadrilateralROI {
|
||||
cv::Point2i points[4]; // 四个点:左上、右上、右下、左下(按顺序)
|
||||
|
||||
QuadrilateralROI() {
|
||||
for (int i = 0; i < 4; ++i) {
|
||||
points[i] = cv::Point2i(0, 0);
|
||||
}
|
||||
}
|
||||
|
||||
QuadrilateralROI(const cv::Point2i &pt0, const cv::Point2i &pt1,
|
||||
const cv::Point2i &pt2, const cv::Point2i &pt3) {
|
||||
points[0] = pt0;
|
||||
points[1] = pt1;
|
||||
points[2] = pt2;
|
||||
points[3] = pt3;
|
||||
}
|
||||
|
||||
bool isValid() const {
|
||||
// 检查是否有有效的点(不全为0)
|
||||
for (int i = 0; i < 4; ++i) {
|
||||
if (points[i].x > 0 || points[i].y > 0) {
|
||||
return true;
|
||||
}
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
cv::Rect getBoundingRect() const {
|
||||
if (!isValid()) {
|
||||
return cv::Rect();
|
||||
}
|
||||
|
||||
int min_x = points[0].x, max_x = points[0].x;
|
||||
int min_y = points[0].y, max_y = points[0].y;
|
||||
|
||||
for (int i = 1; i < 4; ++i) {
|
||||
min_x = std::min(min_x, points[i].x);
|
||||
max_x = std::max(max_x, points[i].x);
|
||||
min_y = std::min(min_y, points[i].y);
|
||||
max_y = std::max(max_y, points[i].y);
|
||||
}
|
||||
|
||||
return cv::Rect(min_x, min_y, max_x - min_x, max_y - min_y);
|
||||
}
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief 横梁变形检测算法结果
|
||||
*/
|
||||
struct BeamRackDeflectionResult {
|
||||
// 变形量
|
||||
float beam_def_mm_value; // 横梁弯曲量(mm)
|
||||
float rack_def_mm_value; // 立柱弯曲量(mm)
|
||||
|
||||
// 阈值(JSON字符串)
|
||||
std::string beam_def_mm_threshold;
|
||||
std::string rack_def_mm_threshold;
|
||||
|
||||
// 警告和报警信号(JSON字符串)
|
||||
std::string beam_def_mm_warning_alarm;
|
||||
std::string rack_def_mm_warning_alarm;
|
||||
|
||||
bool success; // 算法是否执行成功
|
||||
|
||||
BeamRackDeflectionResult()
|
||||
: beam_def_mm_value(0.0f), rack_def_mm_value(0.0f), success(false) {}
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief 横梁变形检测算法
|
||||
*
|
||||
* 检测横梁和货架立柱的变形
|
||||
*/
|
||||
class BeamRackDeflectionAlgorithm {
|
||||
public:
|
||||
// 默认ROI点定义(四个点:左上、右上、右下、左下)
|
||||
// 横梁ROI默认点
|
||||
static const std::vector<cv::Point2i> DEFAULT_BEAM_ROI_POINTS;
|
||||
// 立柱ROI默认点
|
||||
static const std::vector<cv::Point2i> DEFAULT_RACK_ROI_POINTS;
|
||||
|
||||
// 默认阈值定义(四个值:A负方向报警, B负方向警告, C正方向警告, D正方向报警)
|
||||
// 横梁阈值默认值
|
||||
static const std::vector<float> DEFAULT_BEAM_THRESHOLDS;
|
||||
// 立柱阈值默认值
|
||||
static const std::vector<float> DEFAULT_RACK_THRESHOLDS;
|
||||
|
||||
// 2180mm 横梁 ROI (Placeholder)
|
||||
static const std::vector<cv::Point2i> BEAM_ROI_2180;
|
||||
// 1380mm 横梁 ROI (Placeholder)
|
||||
static const std::vector<cv::Point2i> BEAM_ROI_1380;
|
||||
|
||||
// ... (keep class definition)
|
||||
|
||||
/**
|
||||
* 执行横梁变形检测(使用深度图方案)
|
||||
* @param depth_img 深度图像
|
||||
* @param color_img 彩色图像
|
||||
* @param side 货架侧("left"或"right")
|
||||
* @param result [输出] 检测结果
|
||||
* @param point_cloud [可选] 点云数据
|
||||
* @param beam_roi_points
|
||||
* 横梁ROI的四个点(左上、右上、右下、左下),为空时使用默认值
|
||||
* @param rack_roi_points
|
||||
* 立柱ROI的四个点(左上、右上、右下、左下),为空时使用默认值
|
||||
* @param beam_thresholds 横梁阈值四个值[A,B,C,D],为空时使用默认值
|
||||
* @param rack_thresholds 立柱阈值四个值[A,B,C,D],为空时使用默认值
|
||||
* @return 是否检测成功
|
||||
*/
|
||||
static bool
|
||||
detect(const cv::Mat &depth_img, const cv::Mat &color_img,
|
||||
const std::string &side, BeamRackDeflectionResult &result,
|
||||
const std::vector<Point3D> *point_cloud = nullptr,
|
||||
const std::vector<cv::Point2i> &beam_roi_points =
|
||||
std::vector<cv::Point2i>(),
|
||||
const std::vector<cv::Point2i> &rack_roi_points =
|
||||
std::vector<cv::Point2i>(),
|
||||
const std::vector<float> &beam_thresholds = std::vector<float>(),
|
||||
const std::vector<float> &rack_thresholds = std::vector<float>());
|
||||
|
||||
private:
|
||||
static bool loadCalibration(Eigen::Matrix4d &transform);
|
||||
};
|
||||
File diff suppressed because it is too large
Load Diff
@@ -0,0 +1,104 @@
|
||||
#pragma once
|
||||
|
||||
#include <string>
|
||||
#include <vector>
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include <Eigen/Dense>
|
||||
|
||||
#include "../../../common_types.h"
|
||||
|
||||
namespace cv { class Mat; }
|
||||
|
||||
/**
|
||||
* @brief 托盘位置偏移检测算法结果
|
||||
*/
|
||||
struct PalletOffsetResult {
|
||||
// 位置偏移量 (相对于参考得出的世界坐标系下的差异)
|
||||
float offset_lat_mm_value; // 左右偏移量(mm)- X轴 World
|
||||
float offset_lon_mm_value; // 前后偏移量(mm)- Z轴 World
|
||||
float rotation_angle_value; // 旋转角度(度)- 绕 Y轴 World
|
||||
|
||||
// 插孔变形
|
||||
float hole_def_mm_left_value; // 左侧插孔变形(mm)
|
||||
float hole_def_mm_right_value; // 右侧插孔变形(mm)
|
||||
|
||||
// 绝对坐标 (用于生成参考模板)
|
||||
float abs_x;
|
||||
float abs_y;
|
||||
float abs_z;
|
||||
|
||||
// 个体插孔坐标 (可选,用于调试或Reference生成)
|
||||
Point3D left_hole_pos;
|
||||
Point3D right_hole_pos;
|
||||
|
||||
// 阈值(JSON字符串)
|
||||
std::string offset_lat_mm_threshold;
|
||||
std::string offset_lon_mm_threshold;
|
||||
std::string rotation_angle_threshold;
|
||||
std::string hole_def_mm_left_threshold;
|
||||
std::string hole_def_mm_right_threshold;
|
||||
|
||||
// 警告和报警信号(JSON字符串)
|
||||
std::string offset_lat_mm_warning_alarm;
|
||||
std::string offset_lon_mm_warning_alarm;
|
||||
std::string rotation_angle_warning_alarm;
|
||||
std::string hole_def_mm_left_warning_alarm;
|
||||
std::string hole_def_mm_right_warning_alarm;
|
||||
|
||||
bool success; // 算法是否执行成功
|
||||
|
||||
PalletOffsetResult()
|
||||
: offset_lat_mm_value(0.0f)
|
||||
, offset_lon_mm_value(0.0f)
|
||||
, rotation_angle_value(0.0f)
|
||||
, hole_def_mm_left_value(0.0f)
|
||||
, hole_def_mm_right_value(0.0f)
|
||||
, abs_x(0.0f), abs_y(0.0f), abs_z(0.0f)
|
||||
, success(false)
|
||||
{}
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief 托盘位置偏移检测算法
|
||||
*
|
||||
* 检测托盘位置偏移和插孔变形
|
||||
* 核心逻辑:
|
||||
* 1. 纯深度图输入,不依赖点云。
|
||||
* 2. 交互式 ROI 选择(当未提供 ROI 时)。
|
||||
* 3. 2D 特征检测 + 稀疏 3D 转换(利用内参和标定矩阵)。
|
||||
* 4. 世界坐标系下的偏移与变形计算。
|
||||
*/
|
||||
class PalletOffsetAlgorithm {
|
||||
public:
|
||||
/**
|
||||
* @brief 执行托盘位置偏移检测
|
||||
*
|
||||
* @param depth_img 深度图像 (CV_16U or CV_32F)
|
||||
* @param color_img 彩色图像 (仅用于显示,可选)
|
||||
* @param side 货架侧("left"或"right")
|
||||
* @param result [输出] 检测结果
|
||||
* @param point_cloud [可选] 点云数据 (若为空,则使用 depth + intrinsics 计算)
|
||||
* @param roi_points [可选] ROI区域,若为空则触发交互式选择
|
||||
* @param intrinsics [可选] 相机内参,用于 2D->3D 转换。若为0则尝试自动获取。
|
||||
* @return 是否检测成功
|
||||
*/
|
||||
static bool detect(const cv::Mat& depth_img,
|
||||
const cv::Mat& color_img,
|
||||
const std::string& side,
|
||||
PalletOffsetResult& result,
|
||||
const std::vector<Point3D>* point_cloud = nullptr,
|
||||
const std::vector<cv::Point2i>& roi_points = {},
|
||||
const CameraIntrinsics& intrinsics = CameraIntrinsics(),
|
||||
const cv::Mat* calib_mat_override = nullptr);
|
||||
|
||||
private:
|
||||
/**
|
||||
* @brief 从 JSON 文件加载标定矩阵
|
||||
*/
|
||||
static bool loadCalibration(Eigen::Matrix4d& transform);
|
||||
|
||||
/**
|
||||
* @brief 交互式 ROI 选择
|
||||
*/
|
||||
static std::vector<cv::Point2i> selectPolygonROI(const cv::Mat& visual_img);
|
||||
};
|
||||
@@ -0,0 +1,261 @@
|
||||
#include "slot_occupancy_detection.h"
|
||||
#include <iostream>
|
||||
#include <mutex>
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include <vector>
|
||||
|
||||
|
||||
//====================
|
||||
// 步骤1:配置参数
|
||||
//====================
|
||||
namespace Config {
|
||||
// 基准图文件相对路径列表 (按顺序尝试)
|
||||
const std::vector<std::string> TEMPLATE_PATHS = {
|
||||
"src\\images_template\\temp.bmp",
|
||||
"..\\src\\images_template\\temp.bmp",
|
||||
"..\\..\\src\\images_template\\temp.bmp",
|
||||
"..\\..\\..\\src\\images_template\\temp.bmp",
|
||||
"..\\..\\..\\..\\src\\images_template\\temp.bmp",
|
||||
"d:\\Git\\stereo_warehouse_inspection\\image_capture\\src\\images_"
|
||||
"template\\temp.bmp"};
|
||||
|
||||
// 差异阈值:当前像素与背景像素相差多少算“有变化” (0-255)
|
||||
// 建议:如果环境光稳定,设为 20-30;如果光照波动大,设为 40-50
|
||||
const int DIFF_THRESHOLD = 28;
|
||||
|
||||
// 面积阈值:差异像素总数超过多少算“有货”
|
||||
// 建议:根据 ROI 大小调整,通常设为 ROI 面积的 5% - 10%
|
||||
const int AREA_THRESHOLD = 1000000;
|
||||
|
||||
// 高斯模糊核大小 (必须是奇数)
|
||||
const int BLUR_SIZE = 7;
|
||||
|
||||
// 目标工作分辨率 (相机分辨率)
|
||||
const cv::Size TARGET_SIZE(4024, 3036);
|
||||
|
||||
// ROI (感兴趣区域) - 默认值
|
||||
const cv::Rect ROI_DEFAULT(1400, 600, 1200, 1800);
|
||||
} // namespace Config
|
||||
|
||||
//====================
|
||||
// 步骤2:算法上下文(用于管理静态资源)
|
||||
//====================
|
||||
|
||||
class SlotAlgoContext {
|
||||
public:
|
||||
SlotAlgoContext() : initialized_(false) {}
|
||||
|
||||
// 初始化:加载并预处理基准图
|
||||
// 返回值:是否初始化成功
|
||||
bool ensureInitialized() {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
if (initialized_)
|
||||
return true;
|
||||
|
||||
cv::Mat raw_ref;
|
||||
// 1. 尝试加载基准图
|
||||
for (const auto &path : Config::TEMPLATE_PATHS) {
|
||||
raw_ref = cv::imread(path, cv::IMREAD_GRAYSCALE);
|
||||
if (!raw_ref.empty()) {
|
||||
std::cout << "[SlotAlgo] Loaded template from: " << path << std::endl;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (raw_ref.empty()) {
|
||||
std::cerr << "[SlotAlgo] CRITICAL: Failed to load template image."
|
||||
<< std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 2. 尺寸对齐 (Resize)
|
||||
// 只有当尺寸不匹配时才执行 Resize,确保 ref_img_processed_ 始终是
|
||||
// TARGET_SIZE
|
||||
if (raw_ref.size() != Config::TARGET_SIZE) {
|
||||
std::cout << "[SlotAlgo] Resizing template from " << raw_ref.cols << "x"
|
||||
<< raw_ref.rows << " to " << Config::TARGET_SIZE.width << "x"
|
||||
<< Config::TARGET_SIZE.height << std::endl;
|
||||
cv::resize(raw_ref, ref_img_processed_, Config::TARGET_SIZE);
|
||||
} else {
|
||||
ref_img_processed_ = raw_ref;
|
||||
}
|
||||
|
||||
// 3. 预处理:高斯模糊
|
||||
// 提前对整张基准图进行模糊,避免每帧对 ROI 进行模糊,减少计算量
|
||||
// (注:如果内存紧张,可以只存原图,但为了速度建议存模糊后的图)
|
||||
cv::GaussianBlur(ref_img_processed_, ref_img_processed_,
|
||||
cv::Size(Config::BLUR_SIZE, Config::BLUR_SIZE), 0);
|
||||
|
||||
// 4. 初始化形态学核
|
||||
morph_kernel_ = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
|
||||
|
||||
std::cout << "[SlotAlgo] Initialization complete. Reference size: "
|
||||
<< ref_img_processed_.cols << "x" << ref_img_processed_.rows
|
||||
<< std::endl;
|
||||
|
||||
initialized_ = true;
|
||||
return true;
|
||||
}
|
||||
|
||||
// 获取处理后的基准图 (只读引用)
|
||||
const cv::Mat &getRefImage() const { return ref_img_processed_; }
|
||||
|
||||
// 获取形态学核
|
||||
const cv::Mat &getMorphKernel() const { return morph_kernel_; }
|
||||
|
||||
bool isInitialized() const { return initialized_; }
|
||||
|
||||
private:
|
||||
std::mutex mutex_;
|
||||
bool initialized_;
|
||||
cv::Mat ref_img_processed_; // 存储 Resize 并 Blur 后的基准图
|
||||
cv::Mat morph_kernel_;
|
||||
};
|
||||
|
||||
// 全局静态上下文实例
|
||||
static SlotAlgoContext g_algo_context;
|
||||
|
||||
//====================
|
||||
// 步骤3:辅助函数
|
||||
//====================
|
||||
|
||||
static cv::Rect getSafeROI(const cv::Rect &request_roi, int img_width,
|
||||
int img_height) {
|
||||
cv::Rect roi = request_roi;
|
||||
roi.x = std::max(0, roi.x);
|
||||
roi.y = std::max(0, roi.y);
|
||||
roi.width = std::min(roi.width, img_width - roi.x);
|
||||
roi.height = std::min(roi.height, img_height - roi.y);
|
||||
return roi;
|
||||
}
|
||||
|
||||
//====================
|
||||
// 步骤4:核心算法实现
|
||||
//====================
|
||||
|
||||
bool SlotOccupancyAlgorithm::detect(const cv::Mat &depth_img,
|
||||
const cv::Mat &color_img,
|
||||
const std::string &side,
|
||||
SlotOccupancyResult &result) {
|
||||
// 算法启用开关
|
||||
const bool USE_ALGORITHM = false;
|
||||
|
||||
if (USE_ALGORITHM) {
|
||||
// --- 真实算法逻辑 ---
|
||||
// 初始化结果
|
||||
result.success = false;
|
||||
result.slot_occupied = false;
|
||||
|
||||
// 1. 确保算法所需的资源已加载 (懒加载模式)
|
||||
if (!g_algo_context.ensureInitialized()) {
|
||||
std::cerr << "[SlotAlgo] Algorithm not initialized, skipping detection."
|
||||
<< std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 2. 输入检查
|
||||
if (color_img.empty()) {
|
||||
std::cerr << "[SlotAlgo] Input image is empty." << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
try {
|
||||
// 3. 准备当前帧灰度图 (高效转换)
|
||||
cv::Mat curr_gray;
|
||||
if (color_img.channels() == 3) {
|
||||
cv::cvtColor(color_img, curr_gray, cv::COLOR_BGR2GRAY);
|
||||
} else {
|
||||
// 如果已经是灰度图,直接引用,避免拷贝
|
||||
curr_gray = color_img;
|
||||
}
|
||||
|
||||
// 验证输入尺寸 (假设输入应该匹配目标分辨率)
|
||||
if (curr_gray.size() != Config::TARGET_SIZE) {
|
||||
// 如果输入尺寸与 TARGET_SIZE 不一致,可以选择报错或者 Resize。
// 工业场景下分辨率突变通常属于异常,建议打印警告;为了效率这里不做
// 每帧 Resize,暂时依靠 getSafeROI 对 ROI 做越界保护。
// std::cout << "[SlotAlgo] Warning: Input size mismatch." << std::endl;
|
||||
}
|
||||
|
||||
// 4. 确定 ROI
|
||||
// 实际应用中根据 side 选择 ROI; 目前使用默认
|
||||
cv::Rect roi =
|
||||
getSafeROI(Config::ROI_DEFAULT, curr_gray.cols, curr_gray.rows);
|
||||
|
||||
// 5. 截取 ROI
|
||||
// 直接从 input 和 cached reference 中截取,无需 clone
|
||||
cv::Mat img_roi = curr_gray(roi);
|
||||
const cv::Mat &ref_full = g_algo_context.getRefImage();
|
||||
|
||||
// 确保 ref_full 够大覆盖 ROI (理论上 init 中已经 resize 到了 TARGET_SIZE)
|
||||
// 双重保险
|
||||
cv::Rect ref_roi_rect = getSafeROI(roi, ref_full.cols, ref_full.rows);
|
||||
if (ref_roi_rect != roi) {
|
||||
std::cerr << "[SlotAlgo] Error: Reference image size mismatch with ROI."
|
||||
<< std::endl;
|
||||
return false;
|
||||
}
|
||||
cv::Mat ref_roi = ref_full(ref_roi_rect);
|
||||
|
||||
// 6. 图像处理 pipeline
|
||||
// 只对当前帧 ROI 做高斯模糊 (基准图已经预处理过了)
|
||||
cv::Mat img_roi_blurred;
|
||||
cv::GaussianBlur(img_roi, img_roi_blurred,
|
||||
cv::Size(Config::BLUR_SIZE, Config::BLUR_SIZE), 0);
|
||||
|
||||
// 绝对差分
|
||||
cv::Mat diff;
|
||||
cv::absdiff(img_roi_blurred, ref_roi, diff);
|
||||
|
||||
// 二值化
|
||||
cv::Mat mask;
|
||||
cv::threshold(diff, mask, Config::DIFF_THRESHOLD, 255, cv::THRESH_BINARY);
|
||||
|
||||
// 形态学滤波 (去除噪点)
|
||||
cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
|
||||
g_algo_context.getMorphKernel());
|
||||
|
||||
// 7. 统计判定
|
||||
int non_zero_pixels = cv::countNonZero(mask);
|
||||
|
||||
// 可选:仅在状态变化时打印,避免刷屏
|
||||
// std::cout << "[SlotAlgo] Diff pixels: " << non_zero_pixels <<
|
||||
// std::endl;
|
||||
|
||||
if (non_zero_pixels > Config::AREA_THRESHOLD) {
|
||||
result.slot_occupied = true;
|
||||
} else {
|
||||
result.slot_occupied = false;
|
||||
}
|
||||
|
||||
result.success = true;
|
||||
// std::cout << "[SlotAlgo] Result: " << (result.slot_occupied ?
|
||||
// "Occupied" : "Empty") << std::endl;
|
||||
|
||||
} catch (const cv::Exception &e) {
|
||||
std::cerr << "[SlotAlgo] OpenCV Exception: " << e.what() << std::endl;
|
||||
result.success = false;
|
||||
return false;
|
||||
} catch (...) {
|
||||
std::cerr << "[SlotAlgo] Unknown Exception during detection."
|
||||
<< std::endl;
|
||||
result.success = false;
|
||||
return false;
|
||||
}
|
||||
|
||||
return result.success;
|
||||
} else {
|
||||
// --- 模拟数据逻辑 ---
|
||||
std::cout << "[SlotOccupancyAlgorithm] Using FAKE DATA implementation "
|
||||
"(Switch OFF)."
|
||||
<< std::endl;
|
||||
|
||||
result.slot_occupied = false; // 模拟无货
|
||||
result.success = true;
|
||||
|
||||
return result.success;
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,37 @@
|
||||
#pragma once
|
||||
|
||||
#include <string>
|
||||
|
||||
namespace cv { class Mat; }
|
||||
|
||||
/**
|
||||
* @brief 货位有无检测算法结果
|
||||
*/
|
||||
struct SlotOccupancyResult {
|
||||
bool slot_occupied; // 货位是否有托盘/货物
|
||||
bool success; // 算法是否执行成功
|
||||
|
||||
SlotOccupancyResult() : slot_occupied(false), success(false) {}
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief 货位有无检测算法
|
||||
*
|
||||
* 分析深度图或彩色图,判断货位是否有托盘/货物
|
||||
*/
|
||||
class SlotOccupancyAlgorithm {
|
||||
public:
|
||||
/**
|
||||
* 执行货位有无检测
|
||||
* @param depth_img 深度图像
|
||||
* @param color_img 彩色图像
|
||||
* @param side 货架侧("left"或"right")
|
||||
* @param result [输出] 检测结果
|
||||
* @return 是否检测成功
|
||||
*/
|
||||
static bool detect(const cv::Mat& depth_img,
|
||||
const cv::Mat& color_img,
|
||||
const std::string& side,
|
||||
SlotOccupancyResult& result);
|
||||
};
|
||||
|
||||
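// 调用示例(草稿,仅作说明;图像路径与 main 函数均为假设,非项目实际代码):
// 展示 SlotOccupancyAlgorithm::detect 的典型调用方式与结果判读。
#if 0
#include <iostream>
#include <opencv2/opencv.hpp>
#include "slot_occupancy_detection.h"

int main() {
  cv::Mat color = cv::imread("current_frame.bmp");  // 假设的当前帧彩色图
  cv::Mat depth;                                     // 当前实现未使用深度图,可传空
  SlotOccupancyResult result;
  if (SlotOccupancyAlgorithm::detect(depth, color, "left", result) && result.success) {
    std::cout << (result.slot_occupied ? "Occupied" : "Empty") << std::endl;
  }
  return 0;
}
#endif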
@@ -0,0 +1,129 @@
|
||||
#include "visual_inventory_detection.h"
|
||||
#include "HalconCpp.h"
|
||||
#include <iostream>
|
||||
#include <opencv2/opencv.hpp>
|
||||
|
||||
using namespace HalconCpp;
|
||||
|
||||
// Helper to convert cv::Mat to Halcon HImage
|
||||
HImage MatToHImage(const cv::Mat &image) {
|
||||
HImage hImage;
|
||||
if (image.empty())
|
||||
return hImage;
|
||||
|
||||
cv::Mat gray;
|
||||
if (image.channels() == 3) {
|
||||
cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
|
||||
} else {
|
||||
gray = image.clone();
|
||||
}
|
||||
|
||||
// GenImage1 ("byte") copies the pixel data into the newly created HImage,
// so no separate heap copy is needed and no ownership is transferred to
// HALCON. The original extra new/memcpy was never freed on the success
// path and therefore leaked.
// `gray` comes from cvtColor()/clone(), so its memory is continuous and
// can be passed to GenImage1 directly.
hImage.GenImage1("byte", gray.cols, gray.rows, gray.data);
|
||||
return hImage;
|
||||
}
|
||||
|
||||
bool VisualInventoryAlgorithm::detect(const cv::Mat &depth_img,
|
||||
const cv::Mat &color_img,
|
||||
const std::string &side,
|
||||
VisualInventoryResult &result) {
|
||||
result.success = false;
|
||||
|
||||
try {
|
||||
if (color_img.empty()) {
|
||||
std::cerr << "[VisualInventoryAlgorithm] Error: Empty image input."
|
||||
<< std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 1. Convert to HImage
|
||||
HImage hImage = MatToHImage(color_img);
|
||||
|
||||
// 2. Setup Halcon QR Code Model
|
||||
HDataCode2D dataCode2d;
|
||||
dataCode2d.CreateDataCode2dModel("QR Code", HTuple(), HTuple());
|
||||
dataCode2d.SetDataCode2dParam("default_parameters", "enhanced_recognition");
|
||||
|
||||
HTuple resultHandles, decodedDataStrings;
|
||||
|
||||
// 3. Detect
|
||||
// stop_after_result_num: 100 ensures we get up to 100 codes
|
||||
HXLDCont symbolXLDs =
|
||||
dataCode2d.FindDataCode2d(hImage, "stop_after_result_num", 100,
|
||||
&resultHandles, &decodedDataStrings);
|
||||
|
||||
// 4. Transform Results to JSON
|
||||
// Target format when slot positions are known: {"A01":["BOX111","BOX112"], "A02":["BOX210"]}
// Since we don't have position information yet, all codes are grouped under
// the camera side key, e.g. {"left":["BOX111","BOX112"]}
|
||||
std::string json_barcodes = "\"" + side + "\":[";
|
||||
Hlong count = decodedDataStrings.Length();
|
||||
|
||||
for (Hlong i = 0; i < count; i++) {
|
||||
if (i > 0)
|
||||
json_barcodes += ",";
|
||||
// Access string from HTuple using S() which returns const char*
|
||||
HTuple s = decodedDataStrings[i];
|
||||
std::string code = std::string(s.S());
|
||||
|
||||
// Save raw code for deduplication
|
||||
result.codes.push_back(code);
|
||||
|
||||
// Escape special characters in JSON strings
|
||||
// Replace backslashes first, then quotes
|
||||
size_t pos = 0;
|
||||
while ((pos = code.find('\\', pos)) != std::string::npos) {
|
||||
code.replace(pos, 1, "\\\\");
|
||||
pos += 2;
|
||||
}
|
||||
pos = 0;
|
||||
while ((pos = code.find('"', pos)) != std::string::npos) {
|
||||
code.replace(pos, 1, "\\\"");
|
||||
pos += 2;
|
||||
}
|
||||
json_barcodes += "\"" + code + "\"";
|
||||
}
|
||||
json_barcodes += "]";
|
||||
|
||||
result.result_barcodes = "{" + json_barcodes + "}";
|
||||
result.success = true;
|
||||
|
||||
std::cout << "[VisualInventoryAlgorithm] Side: " << side
|
||||
<< " | Detected: " << count << " codes." << std::endl;
|
||||
|
||||
} catch (HException &except) {
|
||||
std::cerr << "[VisualInventoryAlgorithm] Halcon Exception: "
|
||||
<< except.ErrorMessage().Text() << std::endl;
|
||||
result.result_barcodes = "{\"" + side +
|
||||
"\":[], \"error\":\"Halcon Exception: " +
|
||||
std::string(except.ErrorMessage().Text()) + "\"}";
|
||||
result.success = false;
|
||||
} catch (std::exception &e) {
|
||||
std::cerr << "[VisualInventoryAlgorithm] Exception: " << e.what()
|
||||
<< std::endl;
|
||||
result.result_barcodes =
|
||||
"{\"" + side + "\":[], \"error\":\"" + std::string(e.what()) + "\"}";
|
||||
result.success = false;
|
||||
} catch (...) {
|
||||
std::cerr
|
||||
<< "[VisualInventoryAlgorithm] Unknown Exception during detection."
|
||||
<< std::endl;
|
||||
result.result_barcodes =
|
||||
"{\"" + side + "\":[], \"error\":\"Unknown exception\"}";
|
||||
result.success = false;
|
||||
}
|
||||
|
||||
return result.success;
|
||||
}
|
||||
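// Usage sketch (illustrative only; the image path and main() below are
// assumptions, not project code). It shows how a caller would invoke
// VisualInventoryAlgorithm::detect and what result_barcodes looks like,
// e.g. {"left":["BOX111","BOX112"]}.
#if 0
#include <iostream>
#include <opencv2/opencv.hpp>
#include "visual_inventory_detection.h"

int main() {
  cv::Mat color = cv::imread("shelf_left.png");  // hypothetical input image
  cv::Mat depth;                                 // depth is not used by this algorithm
  VisualInventoryResult result;
  if (VisualInventoryAlgorithm::detect(depth, color, "left", result)) {
    std::cout << result.result_barcodes << std::endl;   // JSON string
    std::cout << "raw codes: " << result.codes.size() << std::endl;
  }
  return 0;
}
#endif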
@@ -0,0 +1,40 @@
|
||||
#pragma once
|
||||
|
||||
#include <string>
|
||||
#include <vector>
|
||||
|
||||
namespace cv {
|
||||
class Mat;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 视觉盘点检测算法结果
|
||||
*/
|
||||
struct VisualInventoryResult {
|
||||
std::string
|
||||
result_barcodes; // 条码扫描结果JSON: {"left":["BOX111","BOX112"]} 或
|
||||
// 错误时: {"left":[], "error":"error message"}
|
||||
std::vector<std::string> codes; // 原始条码列表,便于去重
|
||||
bool success; // 算法是否执行成功
|
||||
|
||||
VisualInventoryResult() : success(false) {}
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief 视觉盘点检测算法
|
||||
*
|
||||
* 识别货位位置并扫描条码
|
||||
*/
|
||||
class VisualInventoryAlgorithm {
|
||||
public:
|
||||
/**
|
||||
* 执行视觉盘点检测
|
||||
* @param depth_img 深度图像
|
||||
* @param color_img 彩色图像
|
||||
* @param side 货架侧("left"或"right")
|
||||
* @param result [输出] 检测结果
|
||||
* @return 是否检测成功
|
||||
*/
|
||||
static bool detect(const cv::Mat &depth_img, const cv::Mat &color_img,
|
||||
const std::string &side, VisualInventoryResult &result);
|
||||
};
|
||||
150
image_capture/src/algorithm/utils/image_processor.cpp
Normal file
@@ -0,0 +1,150 @@
|
||||
#include "image_processor.h"
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include <iomanip>
|
||||
#include <iostream>
|
||||
#include <sstream>
|
||||
#include <chrono>
|
||||
#include <ctime>
|
||||
#include <filesystem>
|
||||
|
||||
// ========== ImageProcessor 类实现 ==========
|
||||
|
||||
/**
|
||||
* 构造函数
|
||||
*/
|
||||
ImageProcessor::ImageProcessor() {}
|
||||
|
||||
/**
|
||||
* 处理深度图像
|
||||
*
|
||||
* @param depth_img 输入的深度图像(16位无符号整数,CV_16U类型)
|
||||
* @return cv::Mat 处理后的深度图(已应用伪彩色映射)
|
||||
*/
|
||||
cv::Mat ImageProcessor::processDepthImage(const cv::Mat& depth_img) {
|
||||
if (depth_img.empty())
|
||||
return cv::Mat();
|
||||
|
||||
// 确保输入是16位深度图
|
||||
cv::Mat depthMap;
|
||||
if (depth_img.type() == CV_16U) {
|
||||
depthMap = depth_img;
|
||||
} else {
|
||||
// 如果不是16位,尝试转换或返回空
|
||||
return cv::Mat();
|
||||
}
|
||||
|
||||
// 创建掩码,标记无效深度值(0值表示无效/无数据)
|
||||
cv::Mat invalid_mask = (depthMap == 0);
|
||||
|
||||
// 性能优化:避免不必要的clone,直接使用depthMap的视图
|
||||
// 只有在需要修改时才创建副本
|
||||
cv::Mat depthProcessed;
|
||||
|
||||
// 检查是否有无效值需要处理
|
||||
int invalid_count = cv::countNonZero(invalid_mask);
|
||||
if (invalid_count > 0) {
|
||||
// 有无效值,需要创建副本并修改
|
||||
depthProcessed = depthMap.clone();
|
||||
// 将无效值设置为一个很大的值,这样在归一化时会被排除
|
||||
depthProcessed.setTo(65535, invalid_mask);
|
||||
} else {
|
||||
// 没有无效值,直接使用原图(避免不必要的复制)
|
||||
depthProcessed = depthMap;
|
||||
}
|
||||
|
||||
// 计算有效深度值的范围(排除无效值)
|
||||
double minVal, maxVal;
|
||||
cv::minMaxLoc(depthProcessed, &minVal, &maxVal, nullptr, nullptr,
|
||||
~invalid_mask);
|
||||
|
||||
// 如果所有值都无效,返回黑色图像
|
||||
if (maxVal == 0 || minVal == 65535) {
|
||||
cv::Mat blackImg = cv::Mat::zeros(depthMap.size(), CV_8UC3);
|
||||
return blackImg;
|
||||
}
|
||||
|
||||
// 如果所有有效深度值都相同(maxVal == minVal),避免除零错误
|
||||
// 返回一个统一颜色的深度图(中等灰色,对应JET色图的中间值)
|
||||
if (maxVal == minVal) {
|
||||
// 创建单通道灰度图,有效区域设置为中等灰度值(128对应JET色图的中间颜色)
|
||||
cv::Mat grayMat = cv::Mat::zeros(depthMap.size(), CV_8UC1);
|
||||
grayMat.setTo(128, ~invalid_mask);
|
||||
// 应用伪彩色映射
|
||||
cv::Mat uniformImg;
|
||||
cv::applyColorMap(grayMat, uniformImg, cv::COLORMAP_JET);
|
||||
// 确保无效区域保持黑色
|
||||
uniformImg.setTo(cv::Scalar(0, 0, 0), invalid_mask);
|
||||
return uniformImg;
|
||||
}
|
||||
|
||||
// 归一化有效深度值到0-255范围
|
||||
cv::Mat depthVis;
|
||||
depthProcessed.convertTo(depthVis, CV_8U, 255.0 / (maxVal - minVal),
|
||||
-minVal * 255.0 / (maxVal - minVal));
|
||||
|
||||
// 将无效区域设置为0(黑色)
|
||||
depthVis.setTo(0, invalid_mask);
|
||||
|
||||
// 应用伪彩色映射以提高可视性(JET色图:蓝色=近,红色=远)
|
||||
cv::applyColorMap(depthVis, depthVis, cv::COLORMAP_JET);
|
||||
|
||||
// 确保无效区域保持黑色(伪彩色映射后可能改变,需要再次设置)
|
||||
depthVis.setTo(cv::Scalar(0, 0, 0), invalid_mask);
|
||||
|
||||
return depthVis;
|
||||
}
|
||||
|
||||
/**
|
||||
* 保存图像到文件
|
||||
*
|
||||
* @param depth_img 深度图像(可选)
|
||||
* @param color_img 彩色图像(可选)
|
||||
* @param frame_num 帧编号,用于文件命名
|
||||
* @param save_dir 保存目录(可选,默认为当前目录)
|
||||
*/
|
||||
void ImageProcessor::saveImages(const cv::Mat& depth_img, const cv::Mat& color_img,
|
||||
int frame_num, const std::string& save_dir) {
|
||||
// 创建保存目录(如果指定了且不存在)
|
||||
std::string actual_dir = save_dir.empty() ? "." : save_dir;
|
||||
if (!save_dir.empty()) {
|
||||
try {
|
||||
std::filesystem::create_directories(actual_dir);
|
||||
} catch (const std::exception& e) {
|
||||
std::cerr << "[Save] Failed to create directory: " << e.what() << std::endl;
|
||||
actual_dir = "."; // 回退到当前目录
|
||||
}
|
||||
}
|
||||
|
||||
// 获取当前系统时间用于文件命名
|
||||
auto now = std::chrono::system_clock::now();
|
||||
auto time_t = std::chrono::system_clock::to_time_t(now);
|
||||
std::tm* tm = std::localtime(&time_t);
|
||||
|
||||
char time_str[64];
|
||||
std::strftime(time_str, sizeof(time_str), "%Y%m%d_%H%M%S", tm);
|
||||
|
||||
// 保存深度图
|
||||
if (!depth_img.empty()) {
|
||||
std::stringstream depth_filename;
|
||||
depth_filename << actual_dir << "/depth_" << time_str << "_frame"
|
||||
<< std::setfill('0') << std::setw(6) << frame_num << ".png";
|
||||
if (cv::imwrite(depth_filename.str(), depth_img)) {
|
||||
std::cout << "[Save] Depth image saved: " << depth_filename.str() << std::endl;
|
||||
} else {
|
||||
std::cerr << "[Save] Failed to save depth image: " << depth_filename.str() << std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
// 保存彩色图
|
||||
if (!color_img.empty()) {
|
||||
std::stringstream color_filename;
|
||||
color_filename << actual_dir << "/color_" << time_str << "_frame"
|
||||
<< std::setfill('0') << std::setw(6) << frame_num << ".png";
|
||||
if (cv::imwrite(color_filename.str(), color_img)) {
|
||||
std::cout << "[Save] Color image saved: " << color_filename.str() << std::endl;
|
||||
} else {
|
||||
std::cerr << "[Save] Failed to save color image: " << color_filename.str() << std::endl;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
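// 使用示例(草稿,仅作说明;depth_16bit.png 等路径为假设):
// 演示 processDepthImage / saveImages 的典型调用,深度图需为 CV_16U。
#if 0
#include <opencv2/opencv.hpp>
#include "image_processor.h"

int main() {
  // 以原始位深读取 16 位深度图(CV_16U)
  cv::Mat depth = cv::imread("depth_16bit.png", cv::IMREAD_UNCHANGED);
  cv::Mat color = cv::imread("color.png");

  ImageProcessor processor;
  cv::Mat depth_vis = processor.processDepthImage(depth);  // 伪彩色 BGR 图
  if (!depth_vis.empty()) {
    cv::imwrite("depth_vis.png", depth_vis);
  }
  processor.saveImages(depth, color, /*frame_num=*/1, "./captures");
  return 0;
}
#endif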
63
image_capture/src/algorithm/utils/image_processor.h
Normal file
@@ -0,0 +1,63 @@
|
||||
#pragma once
|
||||
|
||||
// OpenCV图像处理模块头文件
|
||||
// 负责深度图的处理(伪彩色映射)、显示和保存
|
||||
// 注意:此模块不依赖SDK,只使用OpenCV标准类型
|
||||
// 注意:彩色图的颜色空间转换已在图像采集层完成,此处不再处理
|
||||
|
||||
#include <string>
|
||||
|
||||
namespace cv { class Mat; }
|
||||
|
||||
/**
|
||||
* 图像处理器类
|
||||
*
|
||||
* 功能说明:
|
||||
* - 处理深度图(归一化、伪彩色映射)
|
||||
* - 保存图像到文件
|
||||
*
|
||||
* 注意:彩色图的颜色空间转换已在图像采集层(camera层)完成,
|
||||
* 统一输出BGR格式,此处不再需要处理
|
||||
*
|
||||
* 设计原则:
|
||||
* - 此模块完全独立于SDK,只使用OpenCV的cv::Mat类型
|
||||
* - SDK到OpenCV的转换应在图像采集层(camera层)完成
|
||||
* - 图像显示由GUI层(MainWindow)负责,使用Qt的QLabel分别显示
|
||||
*/
|
||||
class ImageProcessor
|
||||
{
|
||||
public:
|
||||
/**
|
||||
* 构造函数
|
||||
*/
|
||||
ImageProcessor();
|
||||
|
||||
/**
|
||||
* 处理深度图像
|
||||
*
|
||||
* @param depth_img 输入的深度图像(16位无符号整数,CV_16U类型)
|
||||
* @return cv::Mat 处理后的深度图(已应用伪彩色映射,BGR格式)
|
||||
*
|
||||
* 功能说明:
|
||||
* - 将深度值归一化到0-255范围
|
||||
* - 应用JET伪彩色映射(蓝色=近,红色=远)
|
||||
* - 处理无效深度值(0值)
|
||||
*/
|
||||
cv::Mat processDepthImage(const cv::Mat& depth_img);
|
||||
|
||||
/**
|
||||
* 保存图像到文件
|
||||
*
|
||||
* @param depth_img 深度图像(可选)
|
||||
* @param color_img 彩色图像(可选)
|
||||
* @param frame_num 帧编号,用于文件命名
|
||||
* @param save_dir 保存目录(可选,默认为当前目录)
|
||||
*
|
||||
* 功能说明:
|
||||
* - 获取当前时间戳用于文件命名
|
||||
* - 保存深度图和彩色图到文件
|
||||
*/
|
||||
void saveImages(const cv::Mat& depth_img, const cv::Mat& color_img,
|
||||
int frame_num, const std::string& save_dir = "");
|
||||
};
|
||||
|
||||
306
image_capture/src/camera/mvs_multi_camera_capture.cpp
Normal file
@@ -0,0 +1,306 @@
|
||||
/**
|
||||
* @file mvs_multi_camera_capture.cpp
|
||||
* @brief 海康 MVS 相机采集实现文件
|
||||
*
|
||||
* 此文件包含了 MvsMultiCameraCapture 类的完整实现
|
||||
* - 封装 MVS SDK (MvCameraControl.h),管理多相机采集
|
||||
* - 将 SDK 的原始帧数据转换为 OpenCV 的 cv::Mat 格式
|
||||
* - 管理采集线程和缓冲区
|
||||
*
|
||||
* 设计说明:
|
||||
* - 每个相机使用独立的采集线程,避免阻塞
|
||||
* - 使用线程安全的缓冲区存储最新图像
|
||||
* - 统一输出 BGR 格式的彩色图
|
||||
*/
|
||||
|
||||
#include "mvs_multi_camera_capture.h"
|
||||
#include "MvCameraControl.h"
|
||||
#include <iostream>
|
||||
#include <chrono>
|
||||
#include <cstring>
|
||||
|
||||
/**
|
||||
* @brief 构造函数
|
||||
* 初始化运行标志和状态
|
||||
*/
|
||||
MvsMultiCameraCapture::MvsMultiCameraCapture() : running_(false), initialized_(false) {}
|
||||
|
||||
/**
|
||||
* @brief 析构函数
|
||||
* 确保在对象销毁时正确停止所有采集线程和相机,并清理资源
|
||||
*/
|
||||
MvsMultiCameraCapture::~MvsMultiCameraCapture() {
|
||||
stop();
|
||||
// 清理句柄
|
||||
for (auto& cam : cameras_) {
|
||||
if (cam.handle) {
|
||||
MV_CC_DestroyHandle(cam.handle);
|
||||
cam.handle = nullptr;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 初始化相机
|
||||
*
|
||||
* 此函数完成以下工作:
|
||||
* 1. 枚举所有连接的 GenTL GigE 和 USB 设备
|
||||
* 2. 创建并打开相机句柄
|
||||
* 3. 配置相机参数(如触发模式、包大小等)
|
||||
* 4. 初始化图像缓冲区
|
||||
*
|
||||
* @return true 初始化成功且至少找到一个设备, false 失败
|
||||
*/
|
||||
bool MvsMultiCameraCapture::initialize() {
|
||||
if (initialized_) return true;
|
||||
|
||||
MV_CC_DEVICE_INFO_LIST stDeviceList;
|
||||
memset(&stDeviceList, 0, sizeof(MV_CC_DEVICE_INFO_LIST));
|
||||
|
||||
// 枚举 GenTL GigE 和 USB 设备
|
||||
int nRet = MV_CC_EnumDevices(MV_GIGE_DEVICE | MV_USB_DEVICE, &stDeviceList);
|
||||
if (MV_OK != nRet) {
|
||||
std::cerr << "[MVS] EnumDevices failed: " << std::hex << nRet << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
if (stDeviceList.nDeviceNum == 0) {
|
||||
std::cout << "[MVS] No devices found." << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
std::cout << "[MVS] Found " << stDeviceList.nDeviceNum << " devices." << std::endl;
|
||||
|
||||
for (unsigned int i = 0; i < stDeviceList.nDeviceNum; i++) {
|
||||
MV_CC_DEVICE_INFO* pDeviceInfo = stDeviceList.pDeviceInfo[i];
|
||||
if (NULL == pDeviceInfo) continue;
|
||||
|
||||
CameraInfo camInfo;
|
||||
camInfo.index = static_cast<int>(cameras_.size());
|
||||
|
||||
// 获取序列号
|
||||
if (pDeviceInfo->nTLayerType == MV_GIGE_DEVICE) {
|
||||
camInfo.serial_number = std::string((char*)pDeviceInfo->SpecialInfo.stGigEInfo.chSerialNumber);
|
||||
} else if (pDeviceInfo->nTLayerType == MV_USB_DEVICE) {
|
||||
camInfo.serial_number = std::string((char*)pDeviceInfo->SpecialInfo.stUsb3VInfo.chSerialNumber);
|
||||
}
|
||||
|
||||
// 创建句柄
|
||||
nRet = MV_CC_CreateHandle(&camInfo.handle, pDeviceInfo);
|
||||
if (MV_OK != nRet) {
|
||||
std::cerr << "[MVS] CreateHandle failed for device " << i << std::endl;
|
||||
continue;
|
||||
}
|
||||
|
||||
// 打开设备
|
||||
nRet = MV_CC_OpenDevice(camInfo.handle);
|
||||
if (MV_OK != nRet) {
|
||||
std::cerr << "[MVS] OpenDevice failed for device " << i << std::endl;
|
||||
MV_CC_DestroyHandle(camInfo.handle);
|
||||
continue;
|
||||
}
|
||||
|
||||
// 确保触发模式为 OFF 以进行连续采集
|
||||
nRet = MV_CC_SetEnumValue(camInfo.handle, "TriggerMode", MV_TRIGGER_MODE_OFF);
|
||||
if (MV_OK != nRet) {
|
||||
std::cerr << "[MVS] Warning: Failed to set TriggerMode to Off. Ret = " << std::hex << nRet << std::endl;
|
||||
}
|
||||
|
||||
// 检查 GigE 的最佳包大小并设置
|
||||
if (pDeviceInfo->nTLayerType == MV_GIGE_DEVICE) {
|
||||
int nPacketSize = MV_CC_GetOptimalPacketSize(camInfo.handle);
|
||||
if (nPacketSize > 0) {
|
||||
MV_CC_SetIntValue(camInfo.handle, "GevSCPSPacketSize", nPacketSize);
|
||||
}
|
||||
}
|
||||
|
||||
cameras_.push_back(camInfo);
|
||||
buffers_.push_back(std::make_shared<ImageBuffer>());
|
||||
|
||||
// 记录相机分辨率
|
||||
MVCC_INTVALUE stWidth = {0};
|
||||
MVCC_INTVALUE stHeight = {0};
|
||||
int nRetW = MV_CC_GetIntValue(camInfo.handle, "Width", &stWidth);
|
||||
int nRetH = MV_CC_GetIntValue(camInfo.handle, "Height", &stHeight);
|
||||
|
||||
std::cout << "[MVS] Initialized camera " << camInfo.index << ": " << camInfo.serial_number;
|
||||
if (MV_OK == nRetW && MV_OK == nRetH) {
|
||||
std::cout << " (Resolution: " << stWidth.nCurValue << "x" << stHeight.nCurValue << ")";
|
||||
}
|
||||
std::cout << std::endl;
|
||||
}
|
||||
|
||||
initialized_ = true;
|
||||
return !cameras_.empty();
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 开始采集
|
||||
* 启动所有相机的抓图,并为每个相机创建一个采集线程
|
||||
* @return true 启动成功, false 失败
|
||||
*/
|
||||
bool MvsMultiCameraCapture::start() {
|
||||
if (!initialized_ || running_) return false;
|
||||
|
||||
running_ = true;
|
||||
for (const auto& cam : cameras_) {
|
||||
// 开始抓取
|
||||
int nRet = MV_CC_StartGrabbing(cam.handle);
|
||||
if (MV_OK != nRet) {
|
||||
std::cerr << "[MVS] StartGrabbing failed for camera " << cam.index << std::endl;
|
||||
}
|
||||
|
||||
threads_.emplace_back(&MvsMultiCameraCapture::captureThreadFunc, this, cam.index);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 停止采集
|
||||
* 停止所有采集线程和相机抓图
|
||||
*/
|
||||
void MvsMultiCameraCapture::stop() {
|
||||
running_ = false;
|
||||
for (auto& t : threads_) {
|
||||
if (t.joinable()) t.join();
|
||||
}
|
||||
threads_.clear();
|
||||
|
||||
for (const auto& cam : cameras_) {
|
||||
MV_CC_StopGrabbing(cam.handle);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 获取相机 ID (序列号)
|
||||
* @param camera_index 相机索引
|
||||
* @return 相机序列号
|
||||
*/
|
||||
std::string MvsMultiCameraCapture::getCameraId(int camera_index) const {
|
||||
if (camera_index >= 0 && camera_index < cameras_.size()) {
|
||||
return cameras_[camera_index].serial_number;
|
||||
}
|
||||
return "";
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 获取指定相机的最新图像
|
||||
* 从线程安全的缓冲区中读取最新图像数据
|
||||
*
|
||||
* @param camera_index 相机索引
|
||||
* @param[out] image 输出图像
|
||||
* @param[out] fps 当前帧率
|
||||
* @return true 成功获取, false 索引无效或无新图像
|
||||
*/
|
||||
bool MvsMultiCameraCapture::getLatestImage(int camera_index, cv::Mat& image, double& fps) {
|
||||
if (camera_index < 0 || camera_index >= buffers_.size()) return false;
|
||||
|
||||
auto& buffer = buffers_[camera_index];
|
||||
std::lock_guard<std::mutex> lock(buffer->mtx);
|
||||
|
||||
if (buffer->image.empty()) return false;
|
||||
|
||||
image = buffer->image.clone();
|
||||
fps = buffer->fps;
|
||||
buffer->updated = false;
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 转换为 OpenCV Mat 格式
|
||||
*
|
||||
* 将 SDK 返回的帧数据转换为 OpenCV 的 BGR8 Mat。
|
||||
*
|
||||
* @param handle 相机句柄
|
||||
* @param pFrame MVS 帧信息结构体指针 (MV_FRAME_OUT*)
|
||||
* @param pUser 用户数据 (未使用)
|
||||
* @return cv::Mat 转换后的 OpenCV 图像
|
||||
*/
|
||||
cv::Mat MvsMultiCameraCapture::convertToMat(void* handle, void* pFrame, void* pUser) {
|
||||
// pFrame 在 captureThreadFunc 中传入的是 MV_FRAME_OUT* 指针
if (!handle || !pFrame) return cv::Mat();

MV_FRAME_OUT* stFrameOut = (MV_FRAME_OUT*)pFrame;
MV_FRAME_OUT_INFO_EX* stUserInfo = &stFrameOut->stFrameInfo;
// 像素格式为 0(未定义)时视为无效帧
if (stUserInfo->enPixelType == 0) return cv::Mat();
|
||||
|
||||
cv::Mat image;
|
||||
|
||||
MV_CC_PIXEL_CONVERT_PARAM stConvertParam = {0};
|
||||
stConvertParam.nWidth = stUserInfo->nWidth;
|
||||
stConvertParam.nHeight = stUserInfo->nHeight;
|
||||
stConvertParam.pSrcData = stFrameOut->pBufAddr;
|
||||
stConvertParam.nSrcDataLen = stUserInfo->nFrameLen;
|
||||
stConvertParam.enSrcPixelType = stUserInfo->enPixelType;
|
||||
stConvertParam.enDstPixelType = PixelType_Gvsp_BGR8_Packed; // 转换为 OpenCV 的 BGR8
|
||||
stConvertParam.nDstBufferSize = stUserInfo->nWidth * stUserInfo->nHeight * 3;
|
||||
|
||||
// 分配目标缓冲区
|
||||
image.create(stUserInfo->nHeight, stUserInfo->nWidth, CV_8UC3);
|
||||
stConvertParam.pDstBuffer = image.data;
|
||||
|
||||
int nRet = MV_CC_ConvertPixelType(handle, &stConvertParam);
|
||||
if (MV_OK != nRet) {
|
||||
std::cerr << "[MVS] ConvertPixelType failed: " << std::hex << nRet << std::endl;
|
||||
return cv::Mat();
|
||||
}
|
||||
|
||||
return image;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 采集线程函数
|
||||
*
|
||||
* 每个相机的独立工作线程:
|
||||
* 1. 循环调用 MV_CC_GetImageBuffer 获取图像
|
||||
* 2. 调用 convertToMat 转换为 cv::Mat
|
||||
* 3. 更新线程安全缓冲区
|
||||
* 4. 计算并更新 FPS
|
||||
*
|
||||
* @param camera_index 相机索引
|
||||
*/
|
||||
void MvsMultiCameraCapture::captureThreadFunc(int camera_index) {
|
||||
auto& cam = cameras_[camera_index];
|
||||
auto& buffer = buffers_[camera_index];
|
||||
|
||||
MV_FRAME_OUT stFrameOut;
|
||||
memset(&stFrameOut, 0, sizeof(MV_FRAME_OUT));
|
||||
|
||||
auto start_time = std::chrono::steady_clock::now();
|
||||
int frame_count = 0;
|
||||
|
||||
while (running_) {
|
||||
// 获取图像缓冲区,超时 1000ms
|
||||
int nRet = MV_CC_GetImageBuffer(cam.handle, &stFrameOut, 1000);
|
||||
if (MV_OK == nRet) {
|
||||
|
||||
try {
|
||||
// 传递 stFrameOut 指针进行转换
|
||||
cv::Mat image = convertToMat(cam.handle, &stFrameOut, nullptr);
|
||||
|
||||
if (!image.empty()) {
|
||||
std::lock_guard<std::mutex> lock(buffer->mtx);
|
||||
buffer->image = image;
|
||||
buffer->updated = true;
|
||||
|
||||
frame_count++;
|
||||
auto now = std::chrono::steady_clock::now();
|
||||
double elapsed = std::chrono::duration<double>(now - start_time).count();
|
||||
if (elapsed >= 1.0) {
|
||||
buffer->fps = frame_count / elapsed;
|
||||
frame_count = 0;
|
||||
start_time = now;
|
||||
// std::cout << "[MVS] Cam " << camera_index << " FPS: " << buffer->fps << std::endl;
|
||||
}
|
||||
}
|
||||
} catch (const std::exception& e) {
|
||||
std::cerr << "[MVS] Exception in conversion: " << e.what() << std::endl;
|
||||
}
|
||||
|
||||
// 释放图像缓冲区
|
||||
MV_CC_FreeImageBuffer(cam.handle, &stFrameOut);
|
||||
} else {
|
||||
// 如果触发器正在等待,超时是预期的,但我们设置的是连续采集。
|
||||
std::cerr << "[MVS] GetImageBuffer failed: " << std::hex << nRet << std::endl;
|
||||
}
|
||||
}
|
||||
}
|
||||
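// 使用示例(草稿,仅作说明;main 函数与轮询逻辑为假设的调用方式):
// 演示 MvsMultiCameraCapture 的典型生命周期:initialize -> start ->
// 轮询 getLatestImage -> stop。
#if 0
#include <chrono>
#include <iostream>
#include <thread>
#include "mvs_multi_camera_capture.h"

int main() {
  MvsMultiCameraCapture capture;
  if (!capture.initialize() || !capture.start()) {
    std::cerr << "MVS init/start failed" << std::endl;
    return -1;
  }

  for (int frame = 0; frame < 100; ++frame) {
    for (int i = 0; i < capture.getCameraCount(); ++i) {
      cv::Mat image;
      double fps = 0.0;
      if (capture.getLatestImage(i, image, fps)) {
        std::cout << capture.getCameraId(i) << " fps=" << fps << std::endl;
      }
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(33));
  }

  capture.stop();
  return 0;
}
#endif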
122
image_capture/src/camera/mvs_multi_camera_capture.h
Normal file
@@ -0,0 +1,122 @@
|
||||
#pragma once
|
||||
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include <vector>
|
||||
#include <string>
|
||||
#include <atomic>
|
||||
#include <thread>
|
||||
#include <mutex>
|
||||
#include <memory>
|
||||
|
||||
/**
|
||||
* @file mvs_multi_camera_capture.h
|
||||
* @brief 海康 MVS 相机采集类定义
|
||||
*/
|
||||
|
||||
/**
|
||||
* @brief 图像缓冲区结构体
|
||||
* 存储从相机采集到的最新图像及相关元数据
|
||||
*/
|
||||
struct ImageBuffer {
|
||||
cv::Mat image; ///< 存储图像数据 (BGR格式)
|
||||
std::mutex mtx; ///< 互斥锁,保证多线程访问安全
|
||||
bool updated = false; ///< 标记图像是否已更新
|
||||
double fps = 0.0; ///< 当前帧率
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief 相机信息结构体
|
||||
* 存储相机的句柄和序列号等信息
|
||||
*/
|
||||
struct CameraInfo {
|
||||
void* handle = nullptr; ///< MVS SDK 相机句柄
|
||||
std::string serial_number; ///< 相机序列号
|
||||
int index = -1; ///< 相机索引
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief MVS 多相机采集类
|
||||
*
|
||||
* 功能说明:
|
||||
* - 封装海康 MVS SDK,管理多相机采集
|
||||
* - 将 SDK 原始图像转换为 OpenCV Mat 格式
|
||||
* - 管理采集线程和缓冲区
|
||||
* - 提供最新的图像数据供上层调用
|
||||
*/
|
||||
class MvsMultiCameraCapture {
|
||||
public:
|
||||
MvsMultiCameraCapture();
|
||||
~MvsMultiCameraCapture();
|
||||
|
||||
/**
|
||||
* @brief 初始化相机
|
||||
* 枚举并打开所有连接的 GenTL GigE 和 USB 设备
|
||||
* @return true 初始化成功且至少找到一个设备, false 失败
|
||||
*/
|
||||
bool initialize();
|
||||
|
||||
/**
|
||||
* @brief 开始采集
|
||||
* 启动所有相机的抓图,并开启采集线程
|
||||
* @return true 启动成功, false 失败
|
||||
*/
|
||||
bool start();
|
||||
|
||||
/**
|
||||
* @brief 停止采集
|
||||
* 停止抓图并关闭所有线程
|
||||
*/
|
||||
void stop();
|
||||
|
||||
/**
|
||||
* @brief 获取相机数量
|
||||
* @return 已初始化的相机数量
|
||||
*/
|
||||
int getCameraCount() const { return static_cast<int>(cameras_.size()); }
|
||||
|
||||
/**
|
||||
* @brief 获取指定相机的最新图像
|
||||
* @param camera_index 相机索引
|
||||
* @param[out] image 输出图像 (BGR格式)
|
||||
* @param[out] fps 当前帧率
|
||||
* @return true 成功获取, false 索引无效或无新图像
|
||||
*/
|
||||
bool getLatestImage(int camera_index, cv::Mat& image, double& fps);
|
||||
|
||||
/**
|
||||
* @brief 获取相机 ID (序列号)
|
||||
* @param camera_index 相机索引
|
||||
* @return 相机序列号字符串
|
||||
*/
|
||||
std::string getCameraId(int camera_index) const;
|
||||
|
||||
/**
|
||||
* @brief 检查是否正在运行
|
||||
* @return true 运行中, false 已停止
|
||||
*/
|
||||
bool isRunning() const { return running_; }
|
||||
|
||||
private:
|
||||
/**
|
||||
* @brief 采集线程函数
|
||||
* 每个相机运行在独立的线程中,持续从 SDK 获取图像
|
||||
* @param camera_index 相机索引
|
||||
*/
|
||||
void captureThreadFunc(int camera_index);
|
||||
|
||||
/**
|
||||
* @brief 转换为 OpenCV Mat 格式
|
||||
* 使用 SDK 的 MV_CC_ConvertPixelType 函数将原始数据转换为 BGR8 格式
|
||||
* @param handle 相机句柄
|
||||
* @param pFrame 帧数据指针 (MV_FRAME_OUT*)
|
||||
* @param pUser 用户数据 (保留,未使用)
|
||||
* @return cv::Mat 转换后的图像
|
||||
*/
|
||||
static cv::Mat convertToMat(void* handle, void* pFrame, void* pUser);
|
||||
|
||||
std::vector<CameraInfo> cameras_; ///< 相机列表
|
||||
std::vector<std::shared_ptr<ImageBuffer>> buffers_; ///< 缓冲区列表
|
||||
std::vector<std::thread> threads_; ///< 采集线程列表
|
||||
std::atomic<bool> running_; ///< 运行状态标志
|
||||
bool initialized_ = false; ///< 初始化状态标志
|
||||
};
|
||||
950
image_capture/src/camera/ty_multi_camera_capture.cpp
Normal file
@@ -0,0 +1,950 @@
|
||||
/**
|
||||
* @file ty_multi_camera_capture.cpp
|
||||
* @brief TY相机采集实现文件
|
||||
*
|
||||
* 此文件包含了 CameraCapture 类的完整实现
|
||||
* - 封装TY相机SDK,管理多相机采集
|
||||
* - 将SDK的TYImage转换为OpenCV的cv::Mat格式
|
||||
* - 管理采集线程和缓冲区
|
||||
* - 输出原始cv::Mat格式的图像,供上层使用
|
||||
*
|
||||
* 设计说明:
|
||||
* - 每个相机使用独立的采集线程,避免阻塞
|
||||
* - 使用线程安全的缓冲区存储最新图像
|
||||
* - 使用clone()确保数据安全,避免悬空指针
|
||||
* - 统一输出BGR格式的彩色图,便于上层处理
|
||||
*/
|
||||
|
||||
#include "ty_multi_camera_capture.h"
|
||||
#include "TYCoordinateMapper.h"
|
||||
#include <chrono>
|
||||
#include <iostream>
|
||||
|
||||
#ifdef _WIN32
|
||||
#include <windows.h>
|
||||
#endif
|
||||
|
||||
/**
|
||||
* @brief 构造函数
|
||||
*
|
||||
* 使用初始化列表将所有成员变量初始化为默认值(相比在构造函数体中赋值更清晰、高效):
|
||||
* - streams_configured_: 流未配置
|
||||
* - depth_enabled_: 深度流未启用
|
||||
* - color_enabled_: 彩色流未启用
|
||||
* - running_: 未运行状态
|
||||
*/
|
||||
CameraCapture::CameraCapture()
|
||||
: streams_configured_(false), depth_enabled_(false), color_enabled_(false),
|
||||
running_(false) {}
|
||||
|
||||
/**
|
||||
* @brief 析构函数
|
||||
*
|
||||
* 确保在对象销毁时正确停止所有采集线程和相机
|
||||
* 调用stop()来清理资源,避免资源泄漏
|
||||
*/
|
||||
CameraCapture::~CameraCapture() { stop(); }
|
||||
|
||||
/**
|
||||
* @brief 初始化并配置相机
|
||||
*
|
||||
* 此函数完成以下工作:
|
||||
* 1. 清理现有资源(如果之前已初始化)
|
||||
* 2. 查询并打开所有可用的相机设备
|
||||
* 3. 为每个相机配置深度流和彩色流
|
||||
* 4. 创建图像处理器和缓冲区
|
||||
*
|
||||
* @param enable_depth 是否启用深度流,true表示启用深度图采集
|
||||
* @param enable_color 是否启用彩色流,true表示启用彩色图采集
|
||||
* @return true 初始化成功,false 初始化失败(无设备或打开失败)
|
||||
*
|
||||
* @note 如果部分相机打开失败,只要至少有一个相机成功打开,函数仍返回true
|
||||
*/
|
||||
bool CameraCapture::initialize(bool enable_depth, bool enable_color) {
|
||||
// 设置控制台代码页为UTF-8,确保中文正确显示
|
||||
#ifdef _WIN32
|
||||
SetConsoleOutputCP(65001); // UTF-8代码页
|
||||
SetConsoleCP(65001); // 设置输入代码页也为UTF-8
|
||||
#endif
|
||||
|
||||
// 清理现有资源,确保重新初始化时状态干净
|
||||
stop();
|
||||
cameras_.clear();
|
||||
depth_processers_.clear();
|
||||
color_processers_.clear();
|
||||
camera_running_.clear();
|
||||
buffers_.clear();
|
||||
calib_infos_.clear();
|
||||
has_calib_info_.clear();
|
||||
|
||||
// 通过TY SDK的上下文查询设备列表
|
||||
// TYContext是单例模式,获取全局唯一的上下文实例
|
||||
auto &context = TYContext::getInstance();
|
||||
auto device_list = context.queryDeviceList();
|
||||
|
||||
// 检查是否找到设备-检查device_list是否为空或者设备数量为0
|
||||
if (!device_list || device_list->empty()) {
|
||||
std::cerr << "[CameraCapture] No devices found!" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 获取所有可用设备数量,使用所有找到的设备
|
||||
int device_count = device_list->devCount();
|
||||
|
||||
std::cout << "[CameraCapture] Found " << device_count
|
||||
<< " device(s), will use all available devices" << std::endl;
|
||||
|
||||
// 遍历设备列表,逐个打开所有可用相机
|
||||
for (int i = 0; i < device_count; i++) {
|
||||
// 获取设备信息(包含设备ID等)
|
||||
auto device_info = device_list->getDeviceInfo(i);
|
||||
if (!device_info) {
|
||||
std::cerr << "[CameraCapture] Failed to get camera device info! " << i << std::endl;
|
||||
continue; // 跳过此设备,继续处理下一个
|
||||
}
|
||||
|
||||
std::cout << "[CameraCapture] Preparing to open camera! " << i << ": "
|
||||
<< device_info->id() << std::endl;
|
||||
|
||||
// 创建FastCamera对象(SDK提供的相机封装类)
|
||||
auto camera = std::make_shared<FastCamera>();
|
||||
// 使用设备ID打开相机
|
||||
TY_STATUS status = camera->open(device_info->id());
|
||||
|
||||
// 检查打开是否成功
|
||||
if (status != TY_STATUS_OK) {
|
||||
std::cerr << "[CameraCapture] Failed to open camera! " << i << ": "
|
||||
<< device_info->id() << std::endl;
|
||||
continue; // 打开失败,跳过此设备
|
||||
} else {
|
||||
std::cout << "[CameraCapture] Successfully opened camera! " << i << ": "
|
||||
<< device_info->id() << std::endl;
|
||||
}
|
||||
|
||||
// 成功打开,添加到相机列表
|
||||
cameras_.push_back(camera);
|
||||
camera_running_.push_back(false); // 初始状态为未运行
|
||||
|
||||
// 图像处理器稍后在配置流时创建,这里先占位
|
||||
depth_processers_.push_back(nullptr);
|
||||
color_processers_.push_back(nullptr);
|
||||
|
||||
// 获取并保存标定信息
|
||||
TY_CAMERA_CALIB_INFO calib_info;
|
||||
TY_STATUS calib_status = TYGetStruct(camera->handle(), TY_COMPONENT_DEPTH_CAM, TY_STRUCT_CAM_CALIB_DATA,
|
||||
&calib_info, sizeof(calib_info));
|
||||
if (calib_status == TY_STATUS_OK) {
|
||||
calib_infos_.push_back(calib_info);
|
||||
has_calib_info_.push_back(true);
|
||||
std::cout << "[CameraCapture] Camera " << i << " calibration info fetched." << std::endl;
|
||||
} else {
|
||||
calib_infos_.push_back(TY_CAMERA_CALIB_INFO());
|
||||
has_calib_info_.push_back(false);
|
||||
std::cerr << "[CameraCapture] Camera " << i << " failed to fetch calibration info: " << calib_status << std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
// 检查是否至少成功打开一个相机
|
||||
if (cameras_.empty()) {
|
||||
std::cerr << "[CameraCapture] No cameras opened successfully!" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// ========== 配置流 ==========
|
||||
// 保存流配置标志,供后续使用
|
||||
depth_enabled_ = enable_depth;
|
||||
color_enabled_ = enable_color;
|
||||
|
||||
// 为每个已打开的相机配置流
|
||||
for (size_t i = 0; i < cameras_.size(); i++) {
|
||||
auto &camera = cameras_[i];
|
||||
|
||||
// 启用深度流
|
||||
if (enable_depth) {
|
||||
// 调用SDK接口启用深度流
|
||||
TY_STATUS status = camera->stream_enable(FastCamera::stream_depth);
|
||||
if (status != TY_STATUS_OK) {
|
||||
std::cerr << "[CameraCapture] Camera " << i << " failed to enable depth stream"
|
||||
<< std::endl;
|
||||
} else {
|
||||
// 创建深度图像处理器
|
||||
// ImageProcesser用于处理SDK返回的原始图像数据
|
||||
std::string depth_win_name = "depth_" + std::to_string(i);
|
||||
depth_processers_[i] =
|
||||
std::make_shared<ImageProcesser>(depth_win_name.c_str());
|
||||
std::cout << "[CameraCapture] Camera " << i << " depth stream enabled"
|
||||
<< std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
// 启用彩色流
|
||||
if (enable_color) {
|
||||
// 调用SDK接口启用彩色流
|
||||
TY_STATUS status = camera->stream_enable(FastCamera::stream_color);
|
||||
if (status != TY_STATUS_OK) {
|
||||
std::cerr << "[CameraCapture] Camera " << i << " failed to enable color stream"
|
||||
<< std::endl;
|
||||
} else {
|
||||
// 创建彩色图像处理器
|
||||
std::string color_win_name = "color_" + std::to_string(i);
|
||||
color_processers_[i] =
|
||||
std::make_shared<ImageProcesser>(color_win_name.c_str());
|
||||
std::cout << "[CameraCapture] Camera " << i << " color stream enabled"
|
||||
<< std::endl;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ========== 设置分辨率 ==========
|
||||
// 为每个相机设置深度图和彩色图分辨率为1280x960
|
||||
for (size_t i = 0; i < cameras_.size(); i++) {
|
||||
auto &camera = cameras_[i];
|
||||
TY_DEV_HANDLE hDevice = camera->handle();
|
||||
|
||||
if (hDevice == 0) {
|
||||
std::cerr << "[CameraCapture] Camera " << i << " handle is invalid, skip resolution setting"
|
||||
<< std::endl;
|
||||
continue;
|
||||
}
|
||||
|
||||
// 设置深度图分辨率为1280x960(使用图像模式)
|
||||
if (enable_depth) {
|
||||
// 方法1:尝试使用图像模式(推荐,同时设置分辨率和格式)
|
||||
TY_IMAGE_MODE depth_mode = TY_IMAGE_MODE_DEPTH16_1280x960;
|
||||
TY_STATUS status = TYSetEnum(hDevice, TY_COMPONENT_DEPTH_CAM, TY_ENUM_IMAGE_MODE, depth_mode);
|
||||
if (status != TY_STATUS_OK) {
|
||||
// 方法2:如果图像模式不支持,回退到单独设置宽高
|
||||
std::cerr << "[CameraCapture] Camera " << i << " failed to set depth image mode 1280x960, trying width/height: "
|
||||
<< status << "(" << TYErrorString(status) << ")" << std::endl;
|
||||
status = TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_WIDTH, 1280);
|
||||
if (status == TY_STATUS_OK) {
|
||||
status = TYSetInt(hDevice, TY_COMPONENT_DEPTH_CAM, TY_INT_HEIGHT, 960);
|
||||
}
|
||||
if (status != TY_STATUS_OK) {
|
||||
std::cerr << "[CameraCapture] Camera " << i << " failed to set depth resolution: "
|
||||
<< status << "(" << TYErrorString(status) << ")" << std::endl;
|
||||
} else {
|
||||
std::cout << "[CameraCapture] Camera " << i << " depth resolution set to 1280x960 (via width/height)"
|
||||
<< std::endl;
|
||||
}
|
||||
} else {
|
||||
std::cout << "[CameraCapture] Camera " << i << " depth resolution set to 1280x960"
|
||||
<< std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
// 设置彩色图分辨率为1280x960,使用YUYV格式
|
||||
if (enable_color) {
|
||||
// 方法1:尝试使用YUYV格式的1280x960图像模式(推荐)
|
||||
TY_IMAGE_MODE color_mode = TY_IMAGE_MODE_YUYV_1280x960;
|
||||
TY_STATUS status = TYSetEnum(hDevice, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, color_mode);
|
||||
if (status != TY_STATUS_OK) {
|
||||
// 方法2:如果YUYV模式不支持,尝试其他格式
|
||||
std::cerr << "[CameraCapture] Camera " << i << " failed to set YUYV_1280x960 mode, trying alternatives: "
|
||||
<< status << "(" << TYErrorString(status) << ")" << std::endl;
|
||||
|
||||
// 尝试RGB格式
|
||||
color_mode = TY_IMAGE_MODE_RGB_1280x960;
|
||||
status = TYSetEnum(hDevice, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, color_mode);
|
||||
if (status != TY_STATUS_OK) {
|
||||
// 方法3:如果图像模式都不支持,回退到单独设置宽高
|
||||
std::cerr << "[CameraCapture] Camera " << i << " failed to set RGB_1280x960 mode, trying width/height: "
|
||||
<< status << "(" << TYErrorString(status) << ")" << std::endl;
|
||||
status = TYSetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_WIDTH, 1280);
|
||||
if (status == TY_STATUS_OK) {
|
||||
status = TYSetInt(hDevice, TY_COMPONENT_RGB_CAM, TY_INT_HEIGHT, 960);
|
||||
}
|
||||
if (status != TY_STATUS_OK) {
|
||||
std::cerr << "[CameraCapture] Camera " << i << " failed to set color resolution: "
|
||||
<< status << "(" << TYErrorString(status) << ")" << std::endl;
|
||||
} else {
|
||||
std::cout << "[CameraCapture] Camera " << i << " color resolution set to 1280x960 (via width/height)"
|
||||
<< std::endl;
|
||||
}
|
||||
} else {
|
||||
std::cout << "[CameraCapture] Camera " << i << " color resolution set to 1280x960 (RGB format)"
|
||||
<< std::endl;
|
||||
}
|
||||
} else {
|
||||
std::cout << "[CameraCapture] Camera " << i << " color resolution set to 1280x960 YUYV"
|
||||
<< std::endl;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ========== 设置帧率 ==========
|
||||
// 根据相机技术参数(640x480 分辨率下的参考值):
// - 深度图:19 fps
// - RGB图(YUYV格式):25 fps
// 当前配置为 1280x960,实际帧率可能低于上述参考值;
// 使用连续模式以获得当前分辨率下的最高帧率
|
||||
for (size_t i = 0; i < cameras_.size(); i++) {
|
||||
auto &camera = cameras_[i];
|
||||
TY_DEV_HANDLE hDevice = camera->handle();
|
||||
|
||||
if (hDevice == 0) {
|
||||
std::cerr << "[CameraCapture] Camera " << i << " handle is invalid, skip frame rate setting"
|
||||
<< std::endl;
|
||||
continue;
|
||||
}
|
||||
|
||||
// 方法1:使用连续模式(TY_TRIGGER_MODE_OFF),让相机以最大帧率连续采集
|
||||
// 连续模式下相机以当前分辨率允许的最大帧率采集(640x480 参考值:深度 19fps,RGB YUYV 25fps)
|
||||
TY_TRIGGER_PARAM trigger_param;
|
||||
trigger_param.mode = TY_TRIGGER_MODE_OFF; // 连续模式,不使用触发
|
||||
trigger_param.fps = 0; // 连续模式下fps参数无效
|
||||
trigger_param.rsvd = 0;
|
||||
|
||||
TY_STATUS status = TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM,
|
||||
&trigger_param, sizeof(trigger_param));
|
||||
if (status != TY_STATUS_OK) {
|
||||
std::cerr << "[CameraCapture] Camera " << i << " failed to set trigger mode (continuous): "
|
||||
<< status << "(" << TYErrorString(status) << ")" << std::endl;
|
||||
|
||||
// 方法2:如果连续模式不支持,尝试使用周期性触发模式
|
||||
// 根据技术参数,640x480下RGB可达25fps,深度可达19fps
|
||||
// 设置为25fps以匹配RGB的最高帧率
|
||||
trigger_param.mode = TY_TRIGGER_MODE_M_PER; // 主模式,周期性触发
|
||||
trigger_param.fps = 25; // 设置帧率为25fps(匹配RGB YUYV格式的最高帧率)
|
||||
trigger_param.rsvd = 0;
|
||||
|
||||
status = TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM,
|
||||
&trigger_param, sizeof(trigger_param));
|
||||
if (status != TY_STATUS_OK) {
|
||||
std::cerr << "[CameraCapture] Camera " << i << " failed to set trigger mode (25fps): "
|
||||
<< status << "(" << TYErrorString(status) << ")" << std::endl;
|
||||
} else {
|
||||
std::cout << "[CameraCapture] Camera " << i << " frame rate set to 25fps (trigger mode)"
|
||||
<< std::endl;
|
||||
}
|
||||
} else {
|
||||
std::cout << "[CameraCapture] Camera " << i << " set to continuous mode"
|
||||
<< std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
// 标记流已配置
|
||||
streams_configured_ = true;
|
||||
|
||||
// ========== 创建图像缓冲区 ==========
|
||||
// 为每个相机创建一个独立的图像缓冲区
|
||||
// 缓冲区用于存储采集线程获取的最新图像,供上层读取
|
||||
for (size_t i = 0; i < cameras_.size(); i++) {
|
||||
buffers_.push_back(std::make_shared<ImageBuffer>());
|
||||
}
|
||||
|
||||
std::cout << "[CameraCapture] Initialization complete! Total " << cameras_.size() << " camera(s)"
|
||||
<< std::endl;
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 启动采集
|
||||
*
|
||||
* 此函数完成以下工作:
|
||||
* 1. 检查相机和流配置状态
|
||||
* 2. 启动所有相机的数据流
|
||||
* 3. 为每个相机创建独立的采集线程
|
||||
*
|
||||
* @return true 启动成功,false 启动失败(无相机或流未配置或启动失败)
|
||||
*
|
||||
* @note 如果部分相机启动失败,函数返回false,但已启动的相机需要手动停止
|
||||
* @note 每个相机使用独立的线程,避免相互阻塞
|
||||
*/
|
||||
bool CameraCapture::start() {
|
||||
// 检查是否已经在运行
|
||||
if (running_) {
|
||||
std::cout << "[CameraCapture] System already running" << std::endl;
|
||||
return true;
|
||||
}
|
||||
|
||||
// 检查是否有相机
|
||||
if (cameras_.empty()) {
|
||||
std::cerr << "[CameraCapture] No cameras to start!" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 检查流是否已配置(必须先调用initialize)
|
||||
if (!streams_configured_) {
|
||||
std::cerr << "[CameraCapture] Streams not configured!" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// ========== 启动所有相机 ==========
|
||||
bool all_started = true;
|
||||
for (size_t i = 0; i < cameras_.size(); i++) {
|
||||
auto &camera = cameras_[i];
|
||||
// 调用SDK接口启动相机数据流
|
||||
TY_STATUS status = camera->start();
|
||||
|
||||
if (status == TY_STATUS_OK) {
|
||||
camera_running_[i] = true; // 标记相机为运行状态
|
||||
std::cout << "[CameraCapture] Camera " << i << " started" << std::endl;
|
||||
} else {
|
||||
camera_running_[i] = false;
|
||||
std::cerr << "[CameraCapture] Camera " << i << " failed to start" << std::endl;
|
||||
all_started = false; // 记录有相机启动失败
|
||||
}
|
||||
}
|
||||
|
||||
// 如果有相机启动失败,返回false
|
||||
if (!all_started) {
|
||||
return false;
|
||||
}
|
||||
|
||||
// ========== 启动采集线程 ==========
|
||||
// 设置运行标志,采集线程会检查此标志来决定是否开启
|
||||
running_ = true;
|
||||
|
||||
// 为每个相机创建独立的采集线程
|
||||
// 线程函数:captureThreadFunc
|
||||
// 参数:this 指针(指向调用 start() 的 CameraCapture 对象)和相机索引
// (用 static_cast 显式转换为 int)
|
||||
for (size_t i = 0; i < cameras_.size(); i++) {
|
||||
capture_threads_.emplace_back(&CameraCapture::captureThreadFunc, this,
|
||||
static_cast<int>(i));
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 停止采集
|
||||
*
|
||||
* 此函数完成以下工作:
|
||||
* 1. 设置运行标志为false,通知采集线程退出
|
||||
* 2. 等待所有采集线程结束(join)
|
||||
* 3. 停止所有相机的数据流
|
||||
*
|
||||
* @note 此函数是线程安全的,可以在任何线程中调用
|
||||
* @note 析构函数会自动调用此函数,确保资源正确释放
|
||||
*/
|
||||
void CameraCapture::stop() {
|
||||
// 设置运行标志为false,通知所有采集线程退出循环
|
||||
running_ = false;
|
||||
|
||||
// 等待所有采集线程结束
|
||||
// join()会阻塞直到线程执行完毕,确保线程安全退出
|
||||
for (auto &t : capture_threads_) {
|
||||
if (t.joinable()) {
|
||||
t.join();
|
||||
}
|
||||
}
|
||||
capture_threads_.clear(); // 清空线程列表
|
||||
|
||||
// 停止所有相机的数据流
|
||||
for (size_t i = 0; i < cameras_.size(); i++) {
|
||||
if (camera_running_[i] && cameras_[i]) {
|
||||
cameras_[i]->stop(); // 调用SDK接口停止相机
|
||||
camera_running_[i] = false; // 标记相机为停止状态
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 获取相机数量
|
||||
*
|
||||
* @return 当前已初始化的相机数量
|
||||
*/
|
||||
int CameraCapture::getCameraCount() const {
|
||||
return static_cast<int>(cameras_.size());
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 获取指定相机的设备ID
|
||||
*
|
||||
* @param index 相机索引,从0开始
|
||||
* @return 相机设备ID字符串,如果索引无效则返回空字符串
|
||||
*
|
||||
* @note 此函数会重新查询设备列表,确保返回最新的设备ID
|
||||
*/
|
||||
std::string CameraCapture::getCameraId(int index) const {
|
||||
// 检查索引有效性
|
||||
if (index < 0 || index >= static_cast<int>(cameras_.size())) {
|
||||
return "";
|
||||
}
|
||||
|
||||
// 重新查询设备列表获取ID
|
||||
// 注意:这里使用SDK的设备列表而不是内部存储,确保ID是最新的
|
||||
auto &context = TYContext::getInstance();
|
||||
auto device_list = context.queryDeviceList();
|
||||
|
||||
if (!device_list || index >= device_list->devCount()) {
|
||||
return "";
|
||||
}
|
||||
|
||||
auto device_info = device_list->getDeviceInfo(index);
|
||||
if (!device_info) {
|
||||
return "";
|
||||
}
|
||||
|
||||
return std::string(device_info->id());
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 获取指定相机的最新图像
|
||||
*
|
||||
* 从线程安全的缓冲区中读取最新采集的图像数据
|
||||
*
|
||||
* @param camera_index 相机索引,从0开始
|
||||
* @param depth [输出] 深度图,CV_16U格式,包含原始深度值(单位:毫米)
|
||||
* @param color [输出] 彩色图,BGR格式,CV_8UC3类型
|
||||
* @param fps [输出] 当前帧率(帧/秒)
|
||||
* @return true 成功获取图像,false 索引无效或缓冲区为空
|
||||
*
|
||||
* @note 此函数是线程安全的,使用互斥锁保护缓冲区访问
|
||||
* @note 如果某个图像流未启用或尚未采集到数据,对应的Mat将为空
|
||||
* @note 使用copyTo()复制数据,确保返回的图像数据独立于缓冲区
|
||||
*/
|
||||
bool CameraCapture::getLatestImages(int camera_index, cv::Mat &depth,
|
||||
cv::Mat &color, double &fps) {
|
||||
// 检查索引有效性
|
||||
if (camera_index < 0 || camera_index >= static_cast<int>(buffers_.size())) {
|
||||
return false;
|
||||
}
|
||||
|
||||
auto buffer = buffers_[camera_index];
|
||||
|
||||
// 使用互斥锁保护缓冲区访问,确保线程安全
|
||||
// lock_guard自动管理锁的获取和释放
|
||||
std::lock_guard<std::mutex> lock(buffer->mtx);
|
||||
|
||||
// 复制深度图数据
|
||||
if (!buffer->depth.empty()) {
|
||||
buffer->depth.copyTo(depth); // 深拷贝,确保数据独立
|
||||
} else {
|
||||
depth = cv::Mat(); // 如果为空,返回空Mat
|
||||
}
|
||||
|
||||
// 复制彩色图数据
|
||||
if (!buffer->color.empty()) {
|
||||
buffer->color.copyTo(color); // 深拷贝,确保数据独立
|
||||
} else {
|
||||
color = cv::Mat(); // 如果为空,返回空Mat
|
||||
}
|
||||
|
||||
// 复制FPS值
|
||||
fps = buffer->fps;
|
||||
return true;
|
||||
}
|
||||
|
||||
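// 使用示例(草稿,仅作说明;main 函数与轮询逻辑为假设的调用方式,
// 接口以本文件实现为准):演示 initialize -> start -> 轮询
// getLatestImages -> stop 的典型流程。
#if 0
#include <chrono>
#include <iostream>
#include <thread>
#include <opencv2/opencv.hpp>
#include "ty_multi_camera_capture.h"

int main() {
  CameraCapture capture;
  if (!capture.initialize(/*enable_depth=*/true, /*enable_color=*/true) ||
      !capture.start()) {
    std::cerr << "TY camera init/start failed" << std::endl;
    return -1;
  }

  for (int frame = 0; frame < 100; ++frame) {
    for (int i = 0; i < capture.getCameraCount(); ++i) {
      cv::Mat depth, color;
      double fps = 0.0;
      if (capture.getLatestImages(i, depth, color, fps)) {
        std::cout << "camera " << i << " fps=" << fps << std::endl;
      }
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
  }

  capture.stop();
  return 0;
}
#endif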
/**
|
||||
* @brief 检查是否正在运行
|
||||
*
|
||||
* @return true 正在运行,false 已停止
|
||||
*/
|
||||
bool CameraCapture::isRunning() const { return running_; }
|
||||
|
||||
/**
|
||||
* @brief 获取指定相机的深度相机内参
|
||||
*
|
||||
* 从图漾相机SDK获取深度相机的内参(fx, fy, cx, cy)
|
||||
* 内参存储在相机的标定数据中,通过TYGetStruct API获取
|
||||
*
|
||||
* @param camera_index 相机索引,从0开始
|
||||
* @param fx [输出] 焦距x(像素单位)
|
||||
* @param fy [输出] 焦距y(像素单位)
|
||||
* @param cx [输出] 主点x坐标(像素单位)
|
||||
* @param cy [输出] 主点y坐标(像素单位)
|
||||
* @return true 成功获取内参,false 索引无效或获取失败
|
||||
*
|
||||
* @note 内参矩阵格式为3x3:
|
||||
* | fx 0 cx |
|
||||
* | 0 fy cy |
|
||||
* | 0 0 1 |
|
||||
* @note 此函数需要在相机初始化后调用(initialize之后)
|
||||
*/
|
||||
bool CameraCapture::getDepthCameraIntrinsics(int camera_index, float& fx, float& fy, float& cx, float& cy) {
|
||||
// 检查索引有效性
|
||||
if (camera_index < 0 || camera_index >= static_cast<int>(cameras_.size())) {
|
||||
std::cerr << "[CameraCapture] Invalid camera index: " << camera_index << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 检查相机是否已打开
|
||||
auto camera = cameras_[camera_index];
|
||||
if (!camera) {
|
||||
std::cerr << "[CameraCapture] Camera " << camera_index << " is not opened" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 获取相机设备句柄
|
||||
TY_DEV_HANDLE hDevice = camera->handle();
|
||||
if (hDevice == 0) {
|
||||
std::cerr << "[CameraCapture] Camera " << camera_index << " handle is invalid" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 获取深度相机的内参
|
||||
TY_CAMERA_INTRINSIC intrinsic;
|
||||
TY_STATUS status = TYGetStruct(hDevice, TY_COMPONENT_DEPTH_CAM, TY_STRUCT_CAM_INTRINSIC,
|
||||
&intrinsic, sizeof(intrinsic));
|
||||
|
||||
if (status != TY_STATUS_OK) {
|
||||
std::cerr << "[CameraCapture] Failed to get depth camera intrinsics for camera "
|
||||
<< camera_index << ", error: " << status << "(" << TYErrorString(status) << ")" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 内参矩阵是3x3,按行主序存储:
|
||||
// data[0] = fx, data[1] = 0, data[2] = cx
|
||||
// data[3] = 0, data[4] = fy, data[5] = cy
|
||||
// data[6] = 0, data[7] = 0, data[8] = 1
|
||||
fx = intrinsic.data[0]; // fx
|
||||
fy = intrinsic.data[4]; // fy
|
||||
cx = intrinsic.data[2]; // cx
|
||||
cy = intrinsic.data[5]; // cy
|
||||
|
||||
std::cout << "[CameraCapture] Camera " << camera_index
|
||||
<< " depth intrinsics: fx=" << fx << ", fy=" << fy
|
||||
<< ", cx=" << cx << ", cy=" << cy << std::endl;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
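// 补充示例(草稿,仅作说明;pixelTo3D 为本说明新增的假设命名):
// 演示如何用上面获取的内参 (fx, fy, cx, cy) 按针孔模型把深度图中的像素
// (u, v) 反投影为相机坐标系下的三维点,深度 z 的单位与深度图一致(毫米)。
#if 0
struct Point3f_mm { float x, y, z; };

inline Point3f_mm pixelTo3D(int u, int v, float depth_mm,
                            float fx, float fy, float cx, float cy) {
  Point3f_mm p;
  p.z = depth_mm;                    // Z:深度值(毫米)
  p.x = (u - cx) * depth_mm / fx;    // X = (u - cx) * Z / fx
  p.y = (v - cy) * depth_mm / fy;    // Y = (v - cy) * Z / fy
  return p;
}
// 用法示例:
// float fx, fy, cx, cy;
// if (capture.getDepthCameraIntrinsics(0, fx, fy, cx, cy)) {
//   uint16_t d = depth.at<uint16_t>(v, u);           // CV_16U 深度图
//   Point3f_mm pt = pixelTo3D(u, v, static_cast<float>(d), fx, fy, cx, cy);
// }
#endif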
/**
|
||||
* @brief 采集线程函数
|
||||
*
|
||||
* 这是每个相机独立运行的采集线程的主函数
|
||||
* 主要工作:
|
||||
* 1. 从SDK获取原始帧数据(TYFrame)
|
||||
* 2. 提取深度图和彩色图(TYImage)
|
||||
* 3. 使用图像处理器处理原始数据(如果需要)
|
||||
* 4. 转换为OpenCV格式(cv::Mat)
|
||||
* 5. 进行颜色空间转换(统一为BGR)
|
||||
* 6. 计算帧率
|
||||
* 7. 更新线程安全的缓冲区
|
||||
*
|
||||
* @param camera_index 相机索引,标识此线程负责哪个相机
|
||||
*
|
||||
* @note 此函数运行在独立的线程中,每个相机一个线程
|
||||
* @note 使用异常处理确保线程异常不会导致程序崩溃
|
||||
* @note 使用超时机制避免长时间阻塞
|
||||
* @note 缓冲区更新使用互斥锁保护,确保线程安全
|
||||
*/
|
||||
void CameraCapture::captureThreadFunc(int camera_index) {
|
||||
// 检查相机索引有效性
|
||||
if (camera_index < 0 || camera_index >= static_cast<int>(cameras_.size())) {
|
||||
return;
|
||||
}
|
||||
|
||||
// 获取此相机对应的缓冲区
|
||||
auto buffer = buffers_[camera_index];
|
||||
if (!buffer) {
|
||||
std::cerr << "[CameraCapture] Camera " << camera_index << " buffer invalid"
|
||||
<< std::endl;
|
||||
return;
|
||||
}
|
||||
|
||||
// 初始化帧计数和FPS计算相关变量
|
||||
int frame_count = 0;
|
||||
auto start_time =
|
||||
std::chrono::steady_clock::now(); // 记录开始时间,用于计算FPS
|
||||
int consecutive_timeouts = 0; // 连续超时计数,用于检测相机是否异常
|
||||
|
||||
// 使用try-catch捕获异常,确保线程异常不会导致程序崩溃
|
||||
try {
|
||||
// 主循环:持续采集图像直到停止标志被设置或相机停止运行
|
||||
while (running_ && camera_running_[camera_index]) {
|
||||
// 再次检查相机索引有效性(防止在运行过程中相机被移除)
|
||||
if (camera_index >= static_cast<int>(cameras_.size()) ||
|
||||
!cameras_[camera_index]) {
|
||||
break;
|
||||
}
|
||||
|
||||
// 从相机获取帧数据,超时时间500ms
|
||||
// tryGetFrames 带超时等待:如果 500ms 内没有新帧则返回 nullptr
|
||||
// 注意:根据实际测试,单帧处理时间约155ms,加上相机采集时间,500ms是合理的超时值
|
||||
// 如果相机帧率很低(<2fps),可以适当增加到1000ms
|
||||
auto frame = cameras_[camera_index]->tryGetFrames(500);
|
||||
if (!frame) {
|
||||
// 获取帧失败(超时或错误)
|
||||
consecutive_timeouts++;
|
||||
// 如果连续超时超过10次,输出警告并短暂休眠
|
||||
// 这样可以减少错误日志的噪音,同时避免CPU占用过高
|
||||
if (consecutive_timeouts == 10) {
|
||||
std::cerr << "[CameraCapture] Camera " << camera_index
|
||||
<< " consecutive timeout 10 times, may be low frame rate or connection issue" << std::endl;
|
||||
}
|
||||
if (consecutive_timeouts > 10) {
|
||||
// 连续超时超过10次后,每次超时都休眠100ms,避免CPU空转
|
||||
std::this_thread::sleep_for(std::chrono::milliseconds(100));
|
||||
}
|
||||
continue; // 继续下一次循环
|
||||
}
|
||||
|
||||
// 成功获取帧,重置超时计数
|
||||
consecutive_timeouts = 0;
|
||||
frame_count++; // 帧计数加1
|
||||
|
||||
// 记录帧获取时间,用于性能分析
|
||||
auto frame_start_time = std::chrono::steady_clock::now();
|
||||
|
||||
// ========== 处理深度图 ==========
|
||||
cv::Mat depthMat;
|
||||
auto depth_img = frame->depthImage(); // 从帧中提取深度图
|
||||
if (depth_img) {
|
||||
// 如果深度流已启用,使用图像处理器处理原始数据
|
||||
auto depth_processer = depth_processers_[camera_index];
|
||||
if (depth_processer) {
|
||||
// parse()处理原始TYImage数据,可能进行格式转换或校正
|
||||
depth_processer->parse(depth_img);
|
||||
// image()返回处理后的TYImage
|
||||
depth_img = depth_processer->image();
|
||||
}
|
||||
// 将TYImage转换为OpenCV的Mat格式
|
||||
// TYImageToMat内部使用clone(),确保数据安全
|
||||
depthMat = TYImageToMat(depth_img);
|
||||
}
|
||||
|
||||
// ========== 处理彩色图 ==========
|
||||
cv::Mat colorMat;
|
||||
auto color_img = frame->colorImage(); // 从帧中提取彩色图
|
||||
if (color_img) {
|
||||
// 如果彩色流已启用,使用图像处理器处理原始数据
|
||||
auto color_processer = color_processers_[camera_index];
|
||||
if (color_processer) {
|
||||
// parse()处理原始TYImage数据
|
||||
color_processer->parse(color_img);
|
||||
// image()返回处理后的TYImage
|
||||
color_img = color_processer->image();
|
||||
}
|
||||
|
||||
// 将TYImage转换为OpenCV的Mat格式
|
||||
cv::Mat rawColorMat = TYImageToMat(color_img);
|
||||
if (!rawColorMat.empty()) {
|
||||
// 获取像素格式标识,用于确定颜色空间转换方式
|
||||
int pixel_format = getPixelFormatId(color_img->pixelFormat());
|
||||
|
||||
// 性能优化:根据像素格式进行颜色空间转换,统一输出BGR格式
|
||||
// 优化:直接转换到目标Mat,避免中间变量
|
||||
if (pixel_format == 1) {
|
||||
// RGB格式,转换为BGR
|
||||
cv::cvtColor(rawColorMat, colorMat, cv::COLOR_RGB2BGR);
|
||||
} else if (pixel_format == 2 || pixel_format == 3) {
|
||||
// YUYV或YVYU格式(YUV422),转换为BGR
|
||||
// 注意:YVYU和YUYV使用相同的转换代码
|
||||
cv::cvtColor(rawColorMat, colorMat, cv::COLOR_YUV2BGR_YUYV);
|
||||
} else {
|
||||
// BGR格式或其他,直接使用(假设已经是BGR)
|
||||
// 性能优化:使用move语义,避免不必要的复制
|
||||
colorMat = std::move(rawColorMat);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ========== 计算FPS ==========
|
||||
// 使用已采集的帧数和经过的时间计算平均帧率
|
||||
auto current_time = std::chrono::steady_clock::now();
|
||||
auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
|
||||
current_time - start_time)
|
||||
.count();
|
||||
double fps = 0.0;
|
||||
if (elapsed > 0) {
|
||||
// FPS = 帧数 * 1000 / 经过的毫秒数
|
||||
fps = (frame_count * 1000.0) / elapsed;
|
||||
}
|
||||
|
||||
// ========== 更新缓冲区 ==========
|
||||
// 使用互斥锁保护缓冲区,确保线程安全
|
||||
// 注意:这里使用独立的lock_guard作用域,确保锁在更新完成后立即释放
|
||||
{
|
||||
std::lock_guard<std::mutex> lock(buffer->mtx);
|
||||
|
||||
// 更新深度图缓冲区
|
||||
// 性能优化:使用swap代替copyTo,避免数据复制,直接交换指针
|
||||
if (!depthMat.empty()) {
|
||||
// 使用swap交换数据,这是零拷贝操作,只交换内部指针
|
||||
std::swap(buffer->depth, depthMat);
|
||||
} else {
|
||||
// 如果深度图为空,清空缓冲区
|
||||
buffer->depth = cv::Mat();
|
||||
}
|
||||
|
||||
// 更新彩色图缓冲区
|
||||
// 性能优化:使用swap代替copyTo,避免数据复制
|
||||
if (!colorMat.empty()) {
|
||||
// 使用swap交换数据,这是零拷贝操作,只交换内部指针
|
||||
std::swap(buffer->color, colorMat);
|
||||
} else {
|
||||
// 如果彩色图为空,清空缓冲区
|
||||
buffer->color = cv::Mat();
|
||||
}
|
||||
|
||||
// 更新FPS和帧计数
|
||||
buffer->fps = fps;
|
||||
buffer->frame_count = frame_count;
|
||||
buffer->updated = true; // 标记缓冲区已更新
|
||||
}
|
||||
// lock_guard在这里自动释放锁
|
||||
|
||||
// 性能监控:每100帧输出一次处理时间(可选,用于调试)
|
||||
if (frame_count % 100 == 0) {
|
||||
auto frame_end_time = std::chrono::steady_clock::now();
|
||||
auto frame_process_time =
|
||||
std::chrono::duration_cast<std::chrono::milliseconds>(
|
||||
frame_end_time - frame_start_time)
|
||||
.count();
|
||||
// if (frame_process_time > 50) { // 如果单帧处理时间超过50ms,输出警告
|
||||
// std::cout << "[CameraCapture] 相机 " << camera_index
|
||||
// << " 单帧处理时间: " << frame_process_time << "ms"
|
||||
// << std::endl;
|
||||
// }
|
||||
}
|
||||
} // while循环结束
|
||||
} catch (const std::exception &e) {
|
||||
// 捕获标准异常,记录错误信息
|
||||
std::cerr << "[CameraCapture] Camera " << camera_index << " capture thread exception: "
|
||||
<< e.what() << std::endl;
|
||||
} catch (...) {
|
||||
// 捕获所有其他异常(非标准异常)
|
||||
std::cerr << "[CameraCapture] Camera " << camera_index
|
||||
<< " capture thread unknown exception" << std::endl;
|
||||
}
|
||||
// 线程函数结束,线程自动退出
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 将TYImage转换为OpenCV的Mat格式
|
||||
*
|
||||
* 此函数将SDK的TYImage格式转换为OpenCV的cv::Mat格式
|
||||
* 关键点:
|
||||
* 1. 根据像素格式确定OpenCV的Mat类型
|
||||
* 2. 创建临时Mat包装原始缓冲区(零拷贝视图)
|
||||
* 3. 使用clone()创建数据副本,确保数据安全
|
||||
*
|
||||
* @param img SDK的TYImage智能指针
|
||||
* @return cv::Mat OpenCV格式的图像矩阵,如果输入无效则返回空Mat
|
||||
*
|
||||
* @note 使用clone()创建数据副本是必要的,因为:
|
||||
* - TYImage的数据可能在frame对象销毁后失效
|
||||
* - 如果不clone,返回的Mat会引用已释放的内存,导致悬空指针
|
||||
* - clone()虽然增加内存和CPU开销,但确保了数据安全
|
||||
*
|
||||
* @note 支持的像素格式:
|
||||
* - TY_PIXEL_FORMAT_DEPTH16: 16位深度图 -> CV_16U
|
||||
* - TY_PIXEL_FORMAT_RGB: RGB彩色图 -> CV_8UC3
|
||||
* - TY_PIXEL_FORMAT_BGR: BGR彩色图 -> CV_8UC3
|
||||
* - TY_PIXEL_FORMAT_MONO: 单色图 -> CV_8U
|
||||
* - TY_PIXEL_FORMAT_YUYV/YVYU: YUV422格式 -> CV_8UC2
|
||||
*/
|
||||
cv::Mat CameraCapture::TYImageToMat(const std::shared_ptr<TYImage> &img) {
|
||||
// 检查输入有效性
|
||||
if (!img || !img->buffer())
|
||||
return cv::Mat();
|
||||
|
||||
// 根据SDK的像素格式确定OpenCV的Mat数据类型
|
||||
int type = -1;
|
||||
switch (img->pixelFormat()) {
|
||||
case TY_PIXEL_FORMAT_DEPTH16:
|
||||
type = CV_16U; // 16位无符号整数,用于深度值
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_RGB:
|
||||
type = CV_8UC3; // 8位无符号整数,3通道(RGB)
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_MONO:
|
||||
type = CV_8U; // 8位无符号整数,单通道(灰度)
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_YVYU:
|
||||
case TY_PIXEL_FORMAT_YUYV:
|
||||
type = CV_8UC2; // 8位无符号整数,2通道(YUV422)
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_BGR:
|
||||
type = CV_8UC3; // 8位无符号整数,3通道(BGR)
|
||||
break;
|
||||
default:
|
||||
type = CV_8U; // 默认单通道
|
||||
break;
|
||||
}
|
||||
|
||||
// 创建临时Mat对象,直接包装原始缓冲区(零拷贝)
|
||||
// 注意:这只是创建一个视图,不复制数据
|
||||
// 参数:高度、宽度、数据类型、原始数据指针
|
||||
cv::Mat tempMat(img->height(), img->width(), type, img->buffer());
|
||||
|
||||
// 使用clone()创建数据副本
|
||||
// 这是关键步骤:确保返回的Mat拥有独立的数据副本
|
||||
// 即使原始的TYImage被销毁,返回的Mat仍然有效
|
||||
// 虽然会增加内存和CPU开销,但这是确保数据安全的必要代价
|
||||
return tempMat.clone();
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 获取像素格式标识
|
||||
*
|
||||
* 将SDK的像素格式枚举转换为简单的整数标识
|
||||
* 用于后续的颜色空间转换判断
|
||||
*
|
||||
* @param pixel_format SDK的像素格式枚举值
|
||||
* @return 像素格式标识:
|
||||
* - 0: BGR格式(或默认)
|
||||
* - 1: RGB格式
|
||||
* - 2: YUYV格式(YUV422)
|
||||
* - 3: YVYU格式(YUV422)
|
||||
*
|
||||
* @note 此函数用于简化颜色空间转换的判断逻辑
|
||||
*/
|
||||
int CameraCapture::getPixelFormatId(TY_PIXEL_FORMAT pixel_format) {
|
||||
switch (pixel_format) {
|
||||
case TY_PIXEL_FORMAT_RGB:
|
||||
return 1; // RGB格式
|
||||
case TY_PIXEL_FORMAT_YUYV:
|
||||
return 2; // YUYV格式(YUV422)
|
||||
case TY_PIXEL_FORMAT_YVYU:
|
||||
return 3; // YVYU格式(YUV422)
|
||||
case TY_PIXEL_FORMAT_BGR:
|
||||
default:
|
||||
return 0; // BGR格式或默认
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 利用SDK生成点云
|
||||
* @param camera_index 相机索引
|
||||
* @param depth_img 深度图
|
||||
* @param out_points 输出点云
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool CameraCapture::computePointCloud(int camera_index, const cv::Mat& depth_img, std::vector<Point3D>& out_points) {
|
||||
if (camera_index < 0 || camera_index >= static_cast<int>(cameras_.size())) {
|
||||
return false;
|
||||
}
|
||||
|
||||
if (!has_calib_info_[camera_index]) {
|
||||
std::cerr << "[CameraCapture] No calibration info for camera " << camera_index << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
if (depth_img.empty()) {
|
||||
return false;
|
||||
}
|
||||
|
||||
// Check for valid intrinsics to prevent division by zero crash
|
||||
float fx = calib_infos_[camera_index].intrinsic.data[0];
|
||||
float fy = calib_infos_[camera_index].intrinsic.data[4];
|
||||
|
||||
if (std::abs(fx) < 1e-6 || std::abs(fy) < 1e-6) {
|
||||
std::cerr << "[CameraCapture] Invalid intrinsics for camera " << camera_index
|
||||
<< " (fx=" << fx << ", fy=" << fy << "). Cannot compute point cloud." << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 调整输出容器大小
|
||||
out_points.resize(depth_img.cols * depth_img.rows);
|
||||
|
||||
// TY_VECT_3F {float x, y, z} 与 Point3D {float x, y, z} 内存布局兼容
|
||||
// 直接使用 SDK 函数生成点云
|
||||
TY_VECT_3F* p3d = reinterpret_cast<TY_VECT_3F*>(out_points.data());
|
||||
|
||||
TY_STATUS status = TYMapDepthImageToPoint3d(&calib_infos_[camera_index],
|
||||
depth_img.cols, depth_img.rows,
|
||||
(const uint16_t*)depth_img.data,
|
||||
p3d);
|
||||
|
||||
if (status != TY_STATUS_OK) {
|
||||
std::cerr << "[CameraCapture] TYMapDepthImageToPoint3d failed: " << status << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
159
image_capture/src/camera/ty_multi_camera_capture.h
Normal file
159
image_capture/src/camera/ty_multi_camera_capture.h
Normal file
@@ -0,0 +1,159 @@
|
||||
#pragma once
|
||||
|
||||
#include "Frame.hpp"
|
||||
#include "Device.hpp"
|
||||
#include "TYApi.h"
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include <vector>
|
||||
#include <memory>
|
||||
#include <thread>
|
||||
#include <atomic>
|
||||
#include <mutex>
|
||||
#include <string>
|
||||
#include "../common_types.h"
|
||||
|
||||
using namespace percipio_layer;
|
||||
|
||||
/**
|
||||
* @brief CameraCapture
|
||||
* 图像采集层,负责从SDK获取图像并转换为OpenCV格式
|
||||
*
|
||||
* 功能说明:
|
||||
* - 封装TY相机SDK,管理多相机采集
|
||||
* - 将SDK的TYImage转换为OpenCV的cv::Mat格式
|
||||
* - 管理采集线程和缓冲区
|
||||
* - 输出原始cv::Mat格式的图像,供上层使用
|
||||
*
|
||||
* 设计原则:
|
||||
* - 此模块属于图像采集层,可以依赖SDK
|
||||
* - 输出标准OpenCV格式,实现SDK与算法层的隔离
|
||||
* - 不进行图像处理(如伪彩色映射等),只负责采集和格式转换
|
||||
*/
|
||||
class CameraCapture
|
||||
{
|
||||
public:
|
||||
/**
|
||||
* @brief 图像缓冲区结构
|
||||
* 存储原始采集的图像数据(cv::Mat格式)
|
||||
*/
|
||||
struct ImageBuffer {
|
||||
cv::Mat depth; // 原始深度图(CV_16U格式)
|
||||
cv::Mat color; // 原始彩色图(BGR格式)
|
||||
double fps = 0.0; // 当前帧率
|
||||
int frame_count = 0; // 帧计数
|
||||
std::mutex mtx; // 互斥锁
|
||||
std::atomic<bool> updated{false}; // 更新标志、std::atomic 确保在多线程环境中对 updated 的操作是原子的,不会发生竞争条件。
|
||||
};
|
||||
|
||||
CameraCapture();
|
||||
~CameraCapture();
|
||||
|
||||
/**
|
||||
* 初始化并配置相机
|
||||
* @param enable_depth 是否启用深度流
|
||||
* @param enable_color 是否启用彩色流
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool initialize(bool enable_depth = true, bool enable_color = true);
|
||||
|
||||
/**
|
||||
* 启动采集
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool start();
|
||||
|
||||
/**
|
||||
* 停止采集
|
||||
*/
|
||||
void stop();
|
||||
|
||||
/**
|
||||
* 获取相机数量
|
||||
* @return 相机数量
|
||||
*/
|
||||
int getCameraCount() const;
|
||||
|
||||
/**
|
||||
* 获取相机ID
|
||||
* @param index 相机索引
|
||||
* @return 相机ID字符串
|
||||
*/
|
||||
std::string getCameraId(int index) const;
|
||||
|
||||
/**
|
||||
* 获取指定相机的最新图像
|
||||
* @param camera_index 相机索引
|
||||
* @param depth 输出的深度图(CV_16U格式,原始深度值)
|
||||
* @param color 输出的彩色图(BGR格式)
|
||||
* @param fps 输出的帧率
|
||||
* @return 是否成功获取到图像
|
||||
*/
|
||||
bool getLatestImages(int camera_index, cv::Mat& depth, cv::Mat& color, double& fps);
|
||||
|
||||
/**
|
||||
* 检查是否正在运行
|
||||
* @return 是否运行中
|
||||
*/
|
||||
bool isRunning() const;
|
||||
|
||||
/**
|
||||
* 获取指定相机的深度相机内参
|
||||
* @param camera_index 相机索引
|
||||
* @param fx [输出] 焦距x
|
||||
* @param fy [输出] 焦距y
|
||||
* @param cx [输出] 主点x
|
||||
* @param cy [输出] 主点y
|
||||
* @return 是否成功获取内参
|
||||
*/
|
||||
// Added method for depth camera intrinsics
|
||||
bool getDepthCameraIntrinsics(int camera_index, float& fx, float& fy, float& cx, float& cy);
|
||||
|
||||
/**
|
||||
* @brief 利用SDK生成点云
|
||||
* @param camera_index 相机索引
|
||||
* @param depth_img 深度图
|
||||
* @param out_points 输出点云
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool computePointCloud(int camera_index, const cv::Mat& depth_img, std::vector<Point3D>& out_points);
|
||||
|
||||
private:
|
||||
/**
|
||||
* 采集线程函数
|
||||
* @param camera_index 相机索引
|
||||
*/
|
||||
void captureThreadFunc(int camera_index);
|
||||
|
||||
/**
|
||||
* 将TYImage转换为OpenCV的Mat格式
|
||||
* @param img 输入的TYImage智能指针
|
||||
* @return cv::Mat OpenCV格式的图像矩阵
|
||||
*/
|
||||
static cv::Mat TYImageToMat(const std::shared_ptr<TYImage> &img);
|
||||
|
||||
/**
|
||||
* 获取像素格式标识,用于颜色空间转换
|
||||
* @param pixel_format SDK的像素格式枚举
|
||||
* @return 像素格式标识(0: BGR, 1: RGB, 2: YUYV, 3: YVYU)
|
||||
*/
|
||||
static int getPixelFormatId(TY_PIXEL_FORMAT pixel_format);
|
||||
|
||||
// SDK相关成员(原MultiCameraCapture的功能)
|
||||
std::vector<std::shared_ptr<FastCamera>> cameras_; // 相机对象列表
|
||||
std::vector<std::shared_ptr<ImageProcesser>> depth_processers_; // 深度图像处理器
|
||||
std::vector<std::shared_ptr<ImageProcesser>> color_processers_; // 彩色图像处理器
|
||||
|
||||
std::vector<bool> camera_running_; // 相机运行状态
|
||||
bool streams_configured_; // 流是否已配置
|
||||
bool depth_enabled_; // 是否启用深度流
|
||||
bool color_enabled_; // 是否启用彩色流
|
||||
|
||||
// 采集线程和缓冲区
|
||||
std::vector<std::shared_ptr<ImageBuffer>> buffers_; // 图像缓冲区
|
||||
std::vector<std::thread> capture_threads_; // 采集线程
|
||||
std::atomic<bool> running_; // 运行标志
|
||||
|
||||
// 标定信息
|
||||
std::vector<TY_CAMERA_CALIB_INFO> calib_infos_;
|
||||
std::vector<bool> has_calib_info_;
|
||||
};
|
||||
734
image_capture/src/common/config_manager.cpp
Normal file
734
image_capture/src/common/config_manager.cpp
Normal file
@@ -0,0 +1,734 @@
|
||||
#include "config_manager.h"
|
||||
#include <fstream>
|
||||
#include <iostream>
|
||||
#include <sstream>
|
||||
|
||||
|
||||
ConfigManager &ConfigManager::getInstance() {
|
||||
static ConfigManager instance;
|
||||
return instance;
|
||||
}
|
||||
|
||||
ConfigManager::ConfigManager() {
|
||||
// 默认配置
|
||||
config_json_ = json11::Json::object{};
|
||||
}
|
||||
|
||||
bool ConfigManager::loadConfig(const std::string &config_path) {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
|
||||
std::ifstream file(config_path);
|
||||
if (!file.is_open()) {
|
||||
std::cerr << "ConfigManager: Failed to open config file: " << config_path
|
||||
<< std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
std::stringstream buffer;
|
||||
buffer << file.rdbuf();
|
||||
std::string content = buffer.str();
|
||||
|
||||
std::string err;
|
||||
config_json_ = json11::Json::parse(content, err);
|
||||
|
||||
if (!err.empty()) {
|
||||
std::cerr << "ConfigManager: Failed to parse JSON: " << err << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
std::cout << "ConfigManager: Successfully loaded config from " << config_path
|
||||
<< std::endl;
|
||||
return true;
|
||||
}
|
||||
|
||||
bool ConfigManager::saveConfig(const std::string &config_path) {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
|
||||
std::string json_str = config_json_.dump();
|
||||
|
||||
std::ofstream file(config_path);
|
||||
if (!file.is_open()) {
|
||||
std::cerr << "ConfigManager: Failed to open config file for writing: " << config_path
|
||||
<< std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
file << json_str;
|
||||
file.close();
|
||||
|
||||
std::cout << "ConfigManager: Successfully saved config to " << config_path << std::endl;
|
||||
return true;
|
||||
}
|
||||
|
||||
// --- Accessors & Setters ---
|
||||
|
||||
std::string ConfigManager::getRedisHost() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
if (config_json_["redis"].is_object()) {
|
||||
return config_json_["redis"]["host"].string_value();
|
||||
}
|
||||
return "127.0.0.1"; // Default
|
||||
}
|
||||
|
||||
int ConfigManager::getRedisPort() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
if (config_json_["redis"].is_object()) {
|
||||
int port = config_json_["redis"]["port"].int_value();
|
||||
return port > 0 ? port : 6379;
|
||||
}
|
||||
return 6379;
|
||||
}
|
||||
|
||||
int ConfigManager::getRedisDb() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
if (config_json_["redis"].is_object()) {
|
||||
return config_json_["redis"]["db"].int_value();
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
bool ConfigManager::isDepthEnabled() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
if (config_json_["cameras"].is_object()) {
|
||||
auto val = config_json_["cameras"]["depth_enabled"];
|
||||
if (val.is_bool())
|
||||
return val.bool_value();
|
||||
}
|
||||
return true; // Default
|
||||
}
|
||||
|
||||
bool ConfigManager::isColorEnabled() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
if (config_json_["cameras"].is_object()) {
|
||||
auto val = config_json_["cameras"]["color_enabled"];
|
||||
if (val.is_bool())
|
||||
return val.bool_value();
|
||||
}
|
||||
return true; // Default
|
||||
}
|
||||
|
||||
std::vector<ConfigManager::CameraMapping>
|
||||
ConfigManager::getCameraMappings() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
std::vector<CameraMapping> mappings;
|
||||
if (config_json_["cameras"].is_object() &&
|
||||
config_json_["cameras"]["mapping"].is_array()) {
|
||||
for (const auto &item : config_json_["cameras"]["mapping"].array_items()) {
|
||||
CameraMapping m;
|
||||
m.id = item["id"].string_value();
|
||||
m.index = item["index"].int_value();
|
||||
mappings.push_back(m);
|
||||
}
|
||||
}
|
||||
return mappings;
|
||||
}
|
||||
|
||||
std::string ConfigManager::getSavePath() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
if (config_json_["vision"].is_object()) {
|
||||
return config_json_["vision"]["save_path"].string_value();
|
||||
}
|
||||
return "./";
|
||||
}
|
||||
|
||||
int ConfigManager::getLogLevel() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
if (config_json_["vision"].is_object()) {
|
||||
return config_json_["vision"]["log_level"].int_value();
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
// --- Algorithm Config Accessors ---
|
||||
|
||||
// Beam/Rack Deflection - ROI Points
|
||||
std::vector<cv::Point2i> ConfigManager::getBeamROIPoints() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
std::vector<cv::Point2i> points;
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["beam_rack_deflection"].is_object() &&
|
||||
config_json_["algorithms"]["beam_rack_deflection"]["beam_roi_points"].is_array()) {
|
||||
|
||||
const auto& points_array = config_json_["algorithms"]["beam_rack_deflection"]["beam_roi_points"].array_items();
|
||||
for (const auto& point : points_array) {
|
||||
if (point.is_object()) {
|
||||
int x = point["x"].int_value();
|
||||
int y = point["y"].int_value();
|
||||
points.push_back(cv::Point2i(x, y));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Default values if not configured
|
||||
if (points.empty()) {
|
||||
points = {
|
||||
cv::Point2i(100, 50),
|
||||
cv::Point2i(540, 80),
|
||||
cv::Point2i(540, 280),
|
||||
cv::Point2i(100, 280)
|
||||
};
|
||||
}
|
||||
|
||||
return points;
|
||||
}
|
||||
|
||||
std::vector<cv::Point2i> ConfigManager::getRackROIPoints() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
std::vector<cv::Point2i> points;
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["beam_rack_deflection"].is_object() &&
|
||||
config_json_["algorithms"]["beam_rack_deflection"]["rack_roi_points"].is_array()) {
|
||||
|
||||
const auto& points_array = config_json_["algorithms"]["beam_rack_deflection"]["rack_roi_points"].array_items();
|
||||
for (const auto& point : points_array) {
|
||||
if (point.is_object()) {
|
||||
int x = point["x"].int_value();
|
||||
int y = point["y"].int_value();
|
||||
points.push_back(cv::Point2i(x, y));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Default values if not configured
|
||||
if (points.empty()) {
|
||||
points = {
|
||||
cv::Point2i(50, 50),
|
||||
cv::Point2i(150, 50),
|
||||
cv::Point2i(150, 430),
|
||||
cv::Point2i(50, 430)
|
||||
};
|
||||
}
|
||||
|
||||
return points;
|
||||
}
|
||||
|
||||
// Beam/Rack Deflection - Thresholds
|
||||
std::vector<float> ConfigManager::getBeamThresholds() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
std::vector<float> thresholds;
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["beam_rack_deflection"].is_object() &&
|
||||
config_json_["algorithms"]["beam_rack_deflection"]["beam_thresholds"].is_object()) {
|
||||
|
||||
const auto& thresh = config_json_["algorithms"]["beam_rack_deflection"]["beam_thresholds"];
|
||||
thresholds.push_back(static_cast<float>(thresh["A"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["B"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["C"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["D"].number_value()));
|
||||
}
|
||||
|
||||
// Default values if not configured
|
||||
if (thresholds.size() != 4) {
|
||||
thresholds = {-10.0f, -5.0f, 5.0f, 10.0f};
|
||||
}
|
||||
|
||||
return thresholds;
|
||||
}
|
||||
|
||||
std::vector<float> ConfigManager::getRackThresholds() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
std::vector<float> thresholds;
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["beam_rack_deflection"].is_object() &&
|
||||
config_json_["algorithms"]["beam_rack_deflection"]["rack_thresholds"].is_object()) {
|
||||
|
||||
const auto& thresh = config_json_["algorithms"]["beam_rack_deflection"]["rack_thresholds"];
|
||||
thresholds.push_back(static_cast<float>(thresh["A"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["B"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["C"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["D"].number_value()));
|
||||
}
|
||||
|
||||
// Default values if not configured
|
||||
if (thresholds.size() != 4) {
|
||||
thresholds = {-6.0f, -3.0f, 3.0f, 6.0f};
|
||||
}
|
||||
|
||||
return thresholds;
|
||||
}
|
||||
|
||||
// Pallet Offset - Thresholds
|
||||
std::vector<float> ConfigManager::getPalletOffsetLatThresholds() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
std::vector<float> thresholds;
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["pallet_offset"].is_object() &&
|
||||
config_json_["algorithms"]["pallet_offset"]["offset_lat_mm_thresholds"].is_object()) {
|
||||
|
||||
const auto& thresh = config_json_["algorithms"]["pallet_offset"]["offset_lat_mm_thresholds"];
|
||||
thresholds.push_back(static_cast<float>(thresh["A"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["B"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["C"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["D"].number_value()));
|
||||
}
|
||||
|
||||
if (thresholds.size() != 4) {
|
||||
thresholds = {-20.0f, -10.0f, 10.0f, 20.0f};
|
||||
}
|
||||
|
||||
return thresholds;
|
||||
}
|
||||
|
||||
std::vector<float> ConfigManager::getPalletOffsetLonThresholds() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
std::vector<float> thresholds;
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["pallet_offset"].is_object() &&
|
||||
config_json_["algorithms"]["pallet_offset"]["offset_lon_mm_thresholds"].is_object()) {
|
||||
|
||||
const auto& thresh = config_json_["algorithms"]["pallet_offset"]["offset_lon_mm_thresholds"];
|
||||
thresholds.push_back(static_cast<float>(thresh["A"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["B"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["C"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["D"].number_value()));
|
||||
}
|
||||
|
||||
if (thresholds.size() != 4) {
|
||||
thresholds = {-20.0f, -10.0f, 10.0f, 20.0f};
|
||||
}
|
||||
|
||||
return thresholds;
|
||||
}
|
||||
|
||||
std::vector<float> ConfigManager::getPalletRotationAngleThresholds() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
std::vector<float> thresholds;
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["pallet_offset"].is_object() &&
|
||||
config_json_["algorithms"]["pallet_offset"]["rotation_angle_thresholds"].is_object()) {
|
||||
|
||||
const auto& thresh = config_json_["algorithms"]["pallet_offset"]["rotation_angle_thresholds"];
|
||||
thresholds.push_back(static_cast<float>(thresh["A"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["B"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["C"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["D"].number_value()));
|
||||
}
|
||||
|
||||
if (thresholds.size() != 4) {
|
||||
thresholds = {-5.0f, -2.5f, 2.5f, 5.0f};
|
||||
}
|
||||
|
||||
return thresholds;
|
||||
}
|
||||
|
||||
std::vector<float> ConfigManager::getPalletHoleDefLeftThresholds() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
std::vector<float> thresholds;
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["pallet_offset"].is_object() &&
|
||||
config_json_["algorithms"]["pallet_offset"]["hole_def_mm_left_thresholds"].is_object()) {
|
||||
|
||||
const auto& thresh = config_json_["algorithms"]["pallet_offset"]["hole_def_mm_left_thresholds"];
|
||||
thresholds.push_back(static_cast<float>(thresh["A"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["B"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["C"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["D"].number_value()));
|
||||
}
|
||||
|
||||
if (thresholds.size() != 4) {
|
||||
thresholds = {-8.0f, -4.0f, 4.0f, 8.0f};
|
||||
}
|
||||
|
||||
return thresholds;
|
||||
}
|
||||
|
||||
std::vector<float> ConfigManager::getPalletHoleDefRightThresholds() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
std::vector<float> thresholds;
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["pallet_offset"].is_object() &&
|
||||
config_json_["algorithms"]["pallet_offset"]["hole_def_mm_right_thresholds"].is_object()) {
|
||||
|
||||
const auto& thresh = config_json_["algorithms"]["pallet_offset"]["hole_def_mm_right_thresholds"];
|
||||
thresholds.push_back(static_cast<float>(thresh["A"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["B"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["C"].number_value()));
|
||||
thresholds.push_back(static_cast<float>(thresh["D"].number_value()));
|
||||
}
|
||||
|
||||
if (thresholds.size() != 4) {
|
||||
thresholds = {-8.0f, -4.0f, 4.0f, 8.0f};
|
||||
}
|
||||
|
||||
return thresholds;
|
||||
}
|
||||
|
||||
// Slot Occupancy
|
||||
float ConfigManager::getSlotOccupancyDepthThreshold() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["slot_occupancy"].is_object()) {
|
||||
return static_cast<float>(config_json_["algorithms"]["slot_occupancy"]["depth_threshold_mm"].number_value());
|
||||
}
|
||||
|
||||
return 100.0f; // Default
|
||||
}
|
||||
|
||||
float ConfigManager::getSlotOccupancyConfidenceThreshold() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["slot_occupancy"].is_object()) {
|
||||
return static_cast<float>(config_json_["algorithms"]["slot_occupancy"]["confidence_threshold"].number_value());
|
||||
}
|
||||
|
||||
return 0.8f; // Default
|
||||
}
|
||||
|
||||
// Visual Inventory
|
||||
float ConfigManager::getVisualInventoryBarcodeConfidence() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["visual_inventory"].is_object()) {
|
||||
return static_cast<float>(config_json_["algorithms"]["visual_inventory"]["barcode_confidence_threshold"].number_value());
|
||||
}
|
||||
|
||||
return 0.7f; // Default
|
||||
}
|
||||
|
||||
bool ConfigManager::getVisualInventoryROIEnabled() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["visual_inventory"].is_object()) {
|
||||
return config_json_["algorithms"]["visual_inventory"]["roi_enabled"].bool_value();
|
||||
}
|
||||
|
||||
return true; // Default
|
||||
}
|
||||
|
||||
// General Algorithm Parameters
|
||||
float ConfigManager::getAlgorithmMinDepth() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["general"].is_object()) {
|
||||
return static_cast<float>(config_json_["algorithms"]["general"]["min_depth_mm"].number_value());
|
||||
}
|
||||
|
||||
return 800.0f; // Default
|
||||
}
|
||||
|
||||
float ConfigManager::getAlgorithmMaxDepth() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["general"].is_object()) {
|
||||
return static_cast<float>(config_json_["algorithms"]["general"]["max_depth_mm"].number_value());
|
||||
}
|
||||
|
||||
return 3000.0f; // Default
|
||||
}
|
||||
|
||||
int ConfigManager::getAlgorithmSamplePoints() const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
|
||||
if (config_json_["algorithms"].is_object() &&
|
||||
config_json_["algorithms"]["general"].is_object()) {
|
||||
return config_json_["algorithms"]["general"]["sample_points"].int_value();
|
||||
}
|
||||
|
||||
return 50; // Default
|
||||
}
|
||||
|
||||
// Beam/Rack Deflection - Setters
|
||||
void ConfigManager::setBeamROIPoints(const std::vector<cv::Point2i>& points) {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
// Note: json11 is immutable, we need to reconstruct the object or use a mutable json library.
|
||||
// For this project using json11, we have to rebuild the part of the json tree.
|
||||
// This is a bit expensive but config saving is rare.
|
||||
// Actually, to make it easier with json11, we might need to parse, modify and dump if we want to keep comments?
|
||||
// But json11 parser doesn't keep comments.
|
||||
// Let's just modify the internal map if possible or rebuild.
|
||||
// json11::Json is const. We need to cast it away or rebuild the whole structure?
|
||||
// Rebuilding is safer.
|
||||
|
||||
// Implementation Note: Since json11 is immutable, proper way is to create new Json objects.
|
||||
// For simplicity in this context, we will use a "deep update" strategy helper if we had one.
|
||||
// But here we need to do it manually.
|
||||
|
||||
// Let's cheat a bit and use const_cast for the "value" if it was a simpler lib, but json11 uses shared_ptr...
|
||||
// Okay, we will use a temporary mutable map approach for the 'algorithms' section.
|
||||
|
||||
// Helper to get mutable map from Json object
|
||||
auto get_mutable_map = [](const json11::Json& j) -> json11::Json::object {
|
||||
return j.object_items();
|
||||
};
|
||||
|
||||
json11::Json::object root_map = get_mutable_map(config_json_);
|
||||
json11::Json::object algo_map = get_mutable_map(root_map["algorithms"]);
|
||||
json11::Json::object beam_rack_map = get_mutable_map(algo_map["beam_rack_deflection"]);
|
||||
|
||||
std::vector<json11::Json> points_json;
|
||||
for(const auto& p : points) {
|
||||
points_json.push_back(json11::Json::object{{"x", p.x}, {"y", p.y}});
|
||||
}
|
||||
beam_rack_map["beam_roi_points"] = points_json;
|
||||
|
||||
algo_map["beam_rack_deflection"] = beam_rack_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
void ConfigManager::setRackROIPoints(const std::vector<cv::Point2i>& points) {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object beam_rack_map = algo_map["beam_rack_deflection"].object_items();
|
||||
|
||||
std::vector<json11::Json> points_json;
|
||||
for(const auto& p : points) {
|
||||
points_json.push_back(json11::Json::object{{"x", p.x}, {"y", p.y}});
|
||||
}
|
||||
beam_rack_map["rack_roi_points"] = points_json;
|
||||
|
||||
algo_map["beam_rack_deflection"] = beam_rack_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
void ConfigManager::setBeamThresholds(const std::vector<float>& thresholds) {
|
||||
if(thresholds.size() < 4) return;
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object beam_rack_map = algo_map["beam_rack_deflection"].object_items();
|
||||
|
||||
beam_rack_map["beam_thresholds"] = json11::Json::object{
|
||||
{"A", thresholds[0]}, {"B", thresholds[1]}, {"C", thresholds[2]}, {"D", thresholds[3]}
|
||||
};
|
||||
|
||||
algo_map["beam_rack_deflection"] = beam_rack_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
void ConfigManager::setRackThresholds(const std::vector<float>& thresholds) {
|
||||
if(thresholds.size() < 4) return;
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object beam_rack_map = algo_map["beam_rack_deflection"].object_items();
|
||||
|
||||
beam_rack_map["rack_thresholds"] = json11::Json::object{
|
||||
{"A", thresholds[0]}, {"B", thresholds[1]}, {"C", thresholds[2]}, {"D", thresholds[3]}
|
||||
};
|
||||
|
||||
algo_map["beam_rack_deflection"] = beam_rack_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
// Pallet Offset Setters
|
||||
void ConfigManager::setPalletOffsetLatThresholds(const std::vector<float>& thresholds) {
|
||||
if(thresholds.size() < 4) return;
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object pallet_map = algo_map["pallet_offset"].object_items();
|
||||
|
||||
pallet_map["offset_lat_mm_thresholds"] = json11::Json::object{
|
||||
{"A", thresholds[0]}, {"B", thresholds[1]}, {"C", thresholds[2]}, {"D", thresholds[3]}
|
||||
};
|
||||
|
||||
algo_map["pallet_offset"] = pallet_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
void ConfigManager::setPalletOffsetLonThresholds(const std::vector<float>& thresholds) {
|
||||
if(thresholds.size() < 4) return;
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object pallet_map = algo_map["pallet_offset"].object_items();
|
||||
|
||||
pallet_map["offset_lon_mm_thresholds"] = json11::Json::object{
|
||||
{"A", thresholds[0]}, {"B", thresholds[1]}, {"C", thresholds[2]}, {"D", thresholds[3]}
|
||||
};
|
||||
|
||||
algo_map["pallet_offset"] = pallet_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
void ConfigManager::setPalletRotationAngleThresholds(const std::vector<float>& thresholds) {
|
||||
if(thresholds.size() < 4) return;
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object pallet_map = algo_map["pallet_offset"].object_items();
|
||||
|
||||
pallet_map["rotation_angle_thresholds"] = json11::Json::object{
|
||||
{"A", thresholds[0]}, {"B", thresholds[1]}, {"C", thresholds[2]}, {"D", thresholds[3]}
|
||||
};
|
||||
|
||||
algo_map["pallet_offset"] = pallet_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
void ConfigManager::setPalletHoleDefLeftThresholds(const std::vector<float>& thresholds) {
|
||||
if(thresholds.size() < 4) return;
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object pallet_map = algo_map["pallet_offset"].object_items();
|
||||
|
||||
pallet_map["hole_def_mm_left_thresholds"] = json11::Json::object{
|
||||
{"A", thresholds[0]}, {"B", thresholds[1]}, {"C", thresholds[2]}, {"D", thresholds[3]}
|
||||
};
|
||||
|
||||
algo_map["pallet_offset"] = pallet_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
void ConfigManager::setPalletHoleDefRightThresholds(const std::vector<float>& thresholds) {
|
||||
if(thresholds.size() < 4) return;
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object pallet_map = algo_map["pallet_offset"].object_items();
|
||||
|
||||
pallet_map["hole_def_mm_right_thresholds"] = json11::Json::object{
|
||||
{"A", thresholds[0]}, {"B", thresholds[1]}, {"C", thresholds[2]}, {"D", thresholds[3]}
|
||||
};
|
||||
|
||||
algo_map["pallet_offset"] = pallet_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
// Slot Occupancy Setters
|
||||
void ConfigManager::setSlotOccupancyDepthThreshold(float value) {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object slot_map = algo_map["slot_occupancy"].object_items();
|
||||
|
||||
slot_map["depth_threshold_mm"] = value;
|
||||
|
||||
algo_map["slot_occupancy"] = slot_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
void ConfigManager::setSlotOccupancyConfidenceThreshold(float value) {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object slot_map = algo_map["slot_occupancy"].object_items();
|
||||
|
||||
slot_map["confidence_threshold"] = value;
|
||||
|
||||
algo_map["slot_occupancy"] = slot_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
// Visual Inventory Setters
|
||||
void ConfigManager::setVisualInventoryBarcodeConfidence(float value) {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object vis_map = algo_map["visual_inventory"].object_items();
|
||||
|
||||
vis_map["barcode_confidence_threshold"] = value;
|
||||
|
||||
algo_map["visual_inventory"] = vis_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
void ConfigManager::setVisualInventoryROIEnabled(bool value) {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object vis_map = algo_map["visual_inventory"].object_items();
|
||||
|
||||
vis_map["roi_enabled"] = value;
|
||||
|
||||
algo_map["visual_inventory"] = vis_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
// General Setters
|
||||
void ConfigManager::setAlgorithmMinDepth(float value) {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object gen_map = algo_map["general"].object_items();
|
||||
|
||||
gen_map["min_depth_mm"] = value;
|
||||
|
||||
algo_map["general"] = gen_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
void ConfigManager::setAlgorithmMaxDepth(float value) {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object gen_map = algo_map["general"].object_items();
|
||||
|
||||
gen_map["max_depth_mm"] = value;
|
||||
|
||||
algo_map["general"] = gen_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
void ConfigManager::setAlgorithmSamplePoints(int value) {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
json11::Json::object root_map = config_json_.object_items();
|
||||
json11::Json::object algo_map = root_map["algorithms"].object_items();
|
||||
json11::Json::object gen_map = algo_map["general"].object_items();
|
||||
|
||||
gen_map["sample_points"] = value;
|
||||
|
||||
algo_map["general"] = gen_map;
|
||||
root_map["algorithms"] = algo_map;
|
||||
config_json_ = root_map;
|
||||
}
|
||||
|
||||
// Generic access
|
||||
std::string ConfigManager::getString(const std::string &key,
|
||||
const std::string &default_value) const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
// Simple top-level access, or implement dot notation parsing if needed.
|
||||
// For now assuming top level.
|
||||
if (config_json_[key].is_string()) {
|
||||
return config_json_[key].string_value();
|
||||
}
|
||||
return default_value;
|
||||
}
|
||||
|
||||
int ConfigManager::getInt(const std::string &key, int default_value) const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
if (config_json_[key].is_number()) {
|
||||
return config_json_[key].int_value();
|
||||
}
|
||||
return default_value;
|
||||
}
|
||||
|
||||
bool ConfigManager::getBool(const std::string &key, bool default_value) const {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
if (config_json_[key].is_bool()) {
|
||||
return config_json_[key].bool_value();
|
||||
}
|
||||
return default_value;
|
||||
}
|
||||
124
image_capture/src/common/config_manager.h
Normal file
124
image_capture/src/common/config_manager.h
Normal file
@@ -0,0 +1,124 @@
|
||||
#pragma once
|
||||
|
||||
#include "json11.hpp"
|
||||
#include <map>
|
||||
#include <mutex>
|
||||
#include <string>
|
||||
#include <vector>
|
||||
#include <opencv2/core.hpp>
|
||||
|
||||
|
||||
/**
|
||||
* @brief ConfigManager
|
||||
* 全局配置管理器,单例模式
|
||||
* 负责加载和提供系统配置参数
|
||||
*/
|
||||
class ConfigManager {
|
||||
public:
|
||||
static ConfigManager &getInstance();
|
||||
|
||||
// 禁止拷贝
|
||||
ConfigManager(const ConfigManager &) = delete;
|
||||
ConfigManager &operator=(const ConfigManager &) = delete;
|
||||
|
||||
/**
|
||||
* 加载配置文件
|
||||
* @param config_path 配置文件路径,默认在当前目录查找 config.json
|
||||
* @return 是否成功加载
|
||||
*/
|
||||
bool loadConfig(const std::string &config_path = "config.json");
|
||||
|
||||
/**
|
||||
* 保存配置文件
|
||||
* @param config_path 配置文件路径,默认在当前目录查找 config.json
|
||||
* @return 是否成功保存
|
||||
*/
|
||||
bool saveConfig(const std::string &config_path = "config.json");
|
||||
|
||||
// --- Accessors & Setters ---
|
||||
|
||||
// Redis Config
|
||||
std::string getRedisHost() const;
|
||||
int getRedisPort() const;
|
||||
int getRedisDb() const;
|
||||
|
||||
// Camera Config
|
||||
bool isDepthEnabled() const;
|
||||
bool isColorEnabled() const;
|
||||
struct CameraMapping {
|
||||
std::string id;
|
||||
int index;
|
||||
};
|
||||
std::vector<CameraMapping> getCameraMappings() const;
|
||||
|
||||
// Vision/Global Config
|
||||
std::string getSavePath() const;
|
||||
int getLogLevel() const;
|
||||
|
||||
// Algorithm Config - Beam/Rack Deflection
|
||||
std::vector<cv::Point2i> getBeamROIPoints() const;
|
||||
void setBeamROIPoints(const std::vector<cv::Point2i>& points);
|
||||
|
||||
std::vector<cv::Point2i> getRackROIPoints() const;
|
||||
void setRackROIPoints(const std::vector<cv::Point2i>& points);
|
||||
|
||||
std::vector<float> getBeamThresholds() const; // Returns [A, B, C, D]
|
||||
void setBeamThresholds(const std::vector<float>& thresholds);
|
||||
|
||||
std::vector<float> getRackThresholds() const; // Returns [A, B, C, D]
|
||||
void setRackThresholds(const std::vector<float>& thresholds);
|
||||
|
||||
// Algorithm Config - Pallet Offset
|
||||
std::vector<float> getPalletOffsetLatThresholds() const;
|
||||
void setPalletOffsetLatThresholds(const std::vector<float>& thresholds);
|
||||
|
||||
std::vector<float> getPalletOffsetLonThresholds() const;
|
||||
void setPalletOffsetLonThresholds(const std::vector<float>& thresholds);
|
||||
|
||||
std::vector<float> getPalletRotationAngleThresholds() const;
|
||||
void setPalletRotationAngleThresholds(const std::vector<float>& thresholds);
|
||||
|
||||
std::vector<float> getPalletHoleDefLeftThresholds() const;
|
||||
void setPalletHoleDefLeftThresholds(const std::vector<float>& thresholds);
|
||||
|
||||
std::vector<float> getPalletHoleDefRightThresholds() const;
|
||||
void setPalletHoleDefRightThresholds(const std::vector<float>& thresholds);
|
||||
|
||||
// Algorithm Config - Slot Occupancy
|
||||
float getSlotOccupancyDepthThreshold() const;
|
||||
void setSlotOccupancyDepthThreshold(float value);
|
||||
|
||||
float getSlotOccupancyConfidenceThreshold() const;
|
||||
void setSlotOccupancyConfidenceThreshold(float value);
|
||||
|
||||
// Algorithm Config - Visual Inventory
|
||||
float getVisualInventoryBarcodeConfidence() const;
|
||||
void setVisualInventoryBarcodeConfidence(float value);
|
||||
|
||||
bool getVisualInventoryROIEnabled() const;
|
||||
void setVisualInventoryROIEnabled(bool value);
|
||||
|
||||
// Algorithm Config - General
|
||||
float getAlgorithmMinDepth() const;
|
||||
void setAlgorithmMinDepth(float value);
|
||||
|
||||
float getAlgorithmMaxDepth() const;
|
||||
void setAlgorithmMaxDepth(float value);
|
||||
|
||||
int getAlgorithmSamplePoints() const;
|
||||
void setAlgorithmSamplePoints(int value);
|
||||
|
||||
|
||||
// Generic access (for dynamic access)
|
||||
std::string getString(const std::string &key,
|
||||
const std::string &default_value = "") const;
|
||||
int getInt(const std::string &key, int default_value = 0) const;
|
||||
bool getBool(const std::string &key, bool default_value = false) const;
|
||||
|
||||
private:
|
||||
ConfigManager();
|
||||
~ConfigManager() = default;
|
||||
|
||||
json11::Json config_json_;
|
||||
mutable std::mutex mutex_;
|
||||
};
|
||||
104
image_capture/src/common/log_manager.cpp
Normal file
104
image_capture/src/common/log_manager.cpp
Normal file
@@ -0,0 +1,104 @@
|
||||
#include "log_manager.h"
|
||||
#include <iostream>
|
||||
|
||||
LogManager &LogManager::getInstance() {
|
||||
static LogManager instance;
|
||||
return instance;
|
||||
}
|
||||
|
||||
void LogManager::setCallback(LogCallback callback) {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
callback_ = callback;
|
||||
}
|
||||
|
||||
#include "config_manager.h"
|
||||
#include <cstdarg>
|
||||
#include <cstdio>
|
||||
#include <ctime>
|
||||
#include <iomanip>
|
||||
#include <sstream>
|
||||
#include <vector>
|
||||
|
||||
|
||||
void LogManager::logFormat(LogLevel level, const char *fmt, ...) {
|
||||
// 1. Level Filter
|
||||
if (static_cast<int>(level) < ConfigManager::getInstance().getLogLevel()) {
|
||||
return;
|
||||
}
|
||||
|
||||
// 2. Format Message
|
||||
va_list args;
|
||||
va_start(args, fmt);
|
||||
|
||||
// Determine required size
|
||||
va_list args_copy;
|
||||
va_copy(args_copy, args);
|
||||
int size = std::vsnprintf(nullptr, 0, fmt, args_copy);
|
||||
va_end(args_copy);
|
||||
|
||||
if (size < 0) {
|
||||
va_end(args);
|
||||
return; // Encoding error
|
||||
}
|
||||
|
||||
std::vector<char> buffer(size + 1);
|
||||
std::vsnprintf(buffer.data(), buffer.size(), fmt, args);
|
||||
va_end(args);
|
||||
|
||||
std::string message(buffer.data(), size);
|
||||
|
||||
// 3. Delegate
|
||||
logInternal(level, message);
|
||||
}
|
||||
|
||||
void LogManager::logInternal(LogLevel level, const std::string &message) {
|
||||
// 1. Add Timestamp and Level Prefix
|
||||
const char *levelStr = "[INFO] ";
|
||||
bool isError = false;
|
||||
|
||||
switch (level) {
|
||||
case LogLevel::DEBUG:
|
||||
levelStr = "[DEBUG] ";
|
||||
break;
|
||||
case LogLevel::INFO:
|
||||
levelStr = "[INFO] ";
|
||||
break;
|
||||
case LogLevel::WARNING:
|
||||
levelStr = "[WARN] ";
|
||||
break;
|
||||
case LogLevel::ERROR:
|
||||
levelStr = "[ERROR] ";
|
||||
isError = true;
|
||||
break;
|
||||
}
|
||||
|
||||
auto t = std::time(nullptr);
|
||||
auto tm = *std::localtime(&t);
|
||||
|
||||
std::ostringstream oss;
|
||||
oss << std::put_time(&tm, "[%Y-%m-%d %H:%M:%S] ") << levelStr << message
|
||||
<< "\n";
|
||||
std::string formattedMsg = oss.str();
|
||||
|
||||
// 2. Output
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
if (callback_) {
|
||||
callback_(formattedMsg, isError);
|
||||
} else {
|
||||
if (isError) {
|
||||
std::cerr << formattedMsg;
|
||||
} else {
|
||||
std::cout << formattedMsg;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// 兼容旧接口
|
||||
void LogManager::log(const std::string &message, bool isError) {
|
||||
logInternal(isError ? LogLevel::ERROR : LogLevel::INFO, message);
|
||||
}
|
||||
|
||||
void LogManager::clearCallback() {
|
||||
std::lock_guard<std::mutex> lock(mutex_);
|
||||
callback_ = nullptr;
|
||||
}
|
||||
80
image_capture/src/common/log_manager.h
Normal file
80
image_capture/src/common/log_manager.h
Normal file
@@ -0,0 +1,80 @@
|
||||
#pragma once
|
||||
|
||||
#include <functional>
|
||||
#include <memory>
|
||||
#include <mutex>
|
||||
#include <string>
|
||||
|
||||
|
||||
/**
|
||||
* @brief 全局日志管理器(单例模式)
|
||||
*
|
||||
* 用于将标准输出(cout/cerr)重定向到Qt日志系统
|
||||
* 任何模块都可以通过LogManager输出日志,这些日志会被重定向到MainWindow的日志文本框
|
||||
*/
|
||||
// 日志级别枚举
|
||||
enum class LogLevel { DEBUG = 0, INFO = 1, WARNING = 2, ERROR = 3 };
|
||||
|
||||
/**
|
||||
* @brief 全局日志管理器(单例模式)
|
||||
*
|
||||
* 用于将标准输出(cout/cerr)重定向到Qt日志系统
|
||||
* 支持格式化输出和日志级别过滤
|
||||
*/
|
||||
class LogManager {
|
||||
public:
|
||||
// 日志回调函数类型
|
||||
using LogCallback =
|
||||
std::function<void(const std::string &message, bool isError)>;
|
||||
|
||||
/**
|
||||
* @brief 获取单例实例
|
||||
*/
|
||||
static LogManager &getInstance();
|
||||
|
||||
/**
|
||||
* @brief 设置日志回调函数
|
||||
* @param callback 回调函数,接收日志消息和错误标志
|
||||
*/
|
||||
void setCallback(LogCallback callback);
|
||||
|
||||
/**
|
||||
* @brief 格式化并输出日志消息
|
||||
* @param level 日志级别
|
||||
* @param fmt 格式化字符串 (printf style)
|
||||
* @param ... 可变参数
|
||||
*/
|
||||
void logFormat(LogLevel level, const char *fmt, ...);
|
||||
|
||||
/**
|
||||
* @brief 兼容旧接口的日志输出
|
||||
* @param message 日志消息
|
||||
* @param isError 是否为错误消息
|
||||
*/
|
||||
void log(const std::string &message, bool isError = false);
|
||||
|
||||
/**
|
||||
* @brief 清除回调函数
|
||||
*/
|
||||
void clearCallback();
|
||||
|
||||
private:
|
||||
LogManager() = default;
|
||||
~LogManager() = default;
|
||||
|
||||
// 内部实际执行日志输出的方法
|
||||
void logInternal(LogLevel level, const std::string &message);
|
||||
|
||||
LogCallback callback_;
|
||||
std::mutex mutex_; // 保护回调函数的线程安全
|
||||
};
|
||||
|
||||
// 宏定义以便于使用
|
||||
#define LOG_DEBUG(fmt, ...) \
|
||||
LogManager::getInstance().logFormat(LogLevel::DEBUG, fmt, ##__VA_ARGS__)
|
||||
#define LOG_INFO(fmt, ...) \
|
||||
LogManager::getInstance().logFormat(LogLevel::INFO, fmt, ##__VA_ARGS__)
|
||||
#define LOG_WARN(fmt, ...) \
|
||||
LogManager::getInstance().logFormat(LogLevel::WARNING, fmt, ##__VA_ARGS__)
|
||||
#define LOG_ERROR(fmt, ...) \
|
||||
LogManager::getInstance().logFormat(LogLevel::ERROR, fmt, ##__VA_ARGS__)
|
||||
39
image_capture/src/common/log_streambuf.h
Normal file
39
image_capture/src/common/log_streambuf.h
Normal file
@@ -0,0 +1,39 @@
|
||||
#pragma once
|
||||
|
||||
#include <streambuf>
|
||||
#include <string>
|
||||
#include "log_manager.h"
|
||||
|
||||
/**
|
||||
* @brief 自定义streambuf,用于重定向cout/cerr到LogManager
|
||||
*/
|
||||
class LogStreamBuf : public std::streambuf {
|
||||
public:
|
||||
LogStreamBuf(bool isError) : isError_(isError), buffer_() {}
|
||||
|
||||
protected:
|
||||
virtual int_type overflow(int_type c) override {
|
||||
if (c != EOF) {
|
||||
buffer_ += static_cast<char>(c);
|
||||
if (c == '\n') {
|
||||
// 遇到换行符,输出完整行
|
||||
LogManager::getInstance().log(buffer_, isError_);
|
||||
buffer_.clear();
|
||||
}
|
||||
}
|
||||
return c;
|
||||
}
|
||||
|
||||
virtual int sync() override {
|
||||
if (!buffer_.empty()) {
|
||||
LogManager::getInstance().log(buffer_, isError_);
|
||||
buffer_.clear();
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
private:
|
||||
bool isError_;
|
||||
std::string buffer_;
|
||||
};
|
||||
|
||||
27
image_capture/src/common_types.h
Normal file
27
image_capture/src/common_types.h
Normal file
@@ -0,0 +1,27 @@
|
||||
#pragma once
|
||||
|
||||
#include <vector>
|
||||
|
||||
/**
|
||||
* @brief 点云数据结构
|
||||
* 表示一个三维点
|
||||
*/
|
||||
struct Point3D {
|
||||
float x, y, z;
|
||||
Point3D() : x(0), y(0), z(0) {}
|
||||
Point3D(float x, float y, float z) : x(x), y(y), z(z) {}
|
||||
};
|
||||
|
||||
/**
|
||||
* @brief 相机内参结构
|
||||
*/
|
||||
struct CameraIntrinsics {
|
||||
float fx; // 焦距x
|
||||
float fy; // 焦距y
|
||||
float cx; // 主点x
|
||||
float cy; // 主点y
|
||||
|
||||
CameraIntrinsics() : fx(0), fy(0), cx(0), cy(0) {}
|
||||
CameraIntrinsics(float fx, float fy, float cx, float cy)
|
||||
: fx(fx), fy(fy), cx(cx), cy(cy) {}
|
||||
};
|
||||
300
image_capture/src/device/device_manager.cpp
Normal file
300
image_capture/src/device/device_manager.cpp
Normal file
@@ -0,0 +1,300 @@
|
||||
/**
|
||||
* @file device_manager.cpp
|
||||
* @brief 设备管理器实现文件
|
||||
*
|
||||
* 此文件实现了DeviceManager类的完整功能:
|
||||
* - 设备初始化(扫描和配置相机)
|
||||
* - 设备启动和停止
|
||||
* - 图像获取接口
|
||||
* - 设备信息查询
|
||||
*
|
||||
* 设计说明:
|
||||
* - DeviceManager是对CameraCapture的封装,提供统一的设备管理接口
|
||||
* - 不涉及业务逻辑,只负责设备层的管理
|
||||
* - 使用智能指针管理CameraCapture,自动释放资源
|
||||
*/
|
||||
|
||||
#include "device_manager.h"
|
||||
#include "../camera/ty_multi_camera_capture.h"
|
||||
#include "../camera/mvs_multi_camera_capture.h"
|
||||
#include <iostream>
|
||||
|
||||
/**
|
||||
* @brief 获取单例实例
|
||||
*
|
||||
* @return DeviceManager单例引用-DeviceManager&返回的是实例的引用
|
||||
*/
|
||||
DeviceManager& DeviceManager::getInstance() {
|
||||
static DeviceManager instance; // C++11保证线程安全的单例
|
||||
return instance;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 构造函数(私有)
|
||||
*
|
||||
* 初始化设备管理器,设置初始状态为未初始化
|
||||
*/
|
||||
DeviceManager::DeviceManager() : initialized_(false) {}
|
||||
|
||||
/**
|
||||
* @brief 析构函数
|
||||
*
|
||||
* 确保在对象销毁时正确停止所有设备
|
||||
* 调用stopAll()清理资源
|
||||
*/
|
||||
DeviceManager::~DeviceManager() {
|
||||
stopAll();
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 初始化并扫描设备
|
||||
*
|
||||
* 初始化相机采集模块,扫描并配置所有可用的相机设备
|
||||
*
|
||||
* @param enable_depth 是否启用深度流,true表示启用深度图采集
|
||||
* @param enable_color 是否启用彩色流,true表示启用彩色图采集
|
||||
* @return 发现的设备数量,0表示初始化失败或未找到设备
|
||||
*
|
||||
* @note 如果已经初始化,直接返回当前设备数量(避免重复初始化)
|
||||
* @note 初始化失败时,capture_会被重置为nullptr
|
||||
*/
|
||||
int DeviceManager::initialize(bool enable_depth, bool enable_color) {
|
||||
// 如果已经初始化,直接返回当前设备数量
|
||||
if (initialized_) {
|
||||
return getDeviceCount();
|
||||
}
|
||||
|
||||
int total_count = 0;
|
||||
|
||||
// 创建深度相机采集对象
|
||||
capture_ = std::make_shared<CameraCapture>();
|
||||
|
||||
// 初始化深度相机采集(扫描设备、配置流)
|
||||
if (capture_->initialize(enable_depth, enable_color)) {
|
||||
total_count += capture_->getCameraCount();
|
||||
std::cout << "[DeviceManager] Initialized " << capture_->getCameraCount() << " depth camera(s)" << std::endl;
|
||||
} else {
|
||||
std::cerr << "[DeviceManager] Failed to initialize depth cameras" << std::endl;
|
||||
capture_.reset(); // 重置智能指针,释放资源
|
||||
}
|
||||
|
||||
// 初始化MVS 2D相机
|
||||
mvs_cameras_ = std::make_unique<MvsMultiCameraCapture>();
|
||||
if (mvs_cameras_->initialize()) {
|
||||
total_count += mvs_cameras_->getCameraCount();
|
||||
std::cout << "[DeviceManager] Initialized " << mvs_cameras_->getCameraCount() << " 2D camera(s)" << std::endl;
|
||||
} else {
|
||||
std::cout << "[DeviceManager] No 2D cameras found or initialization failed" << std::endl;
|
||||
mvs_cameras_.reset();
|
||||
}
|
||||
|
||||
|
||||
|
||||
// 获取设备数量并标记为已初始化
|
||||
initialized_ = true;
|
||||
std::cout << "[DeviceManager] Total devices initialized: " << total_count << std::endl;
|
||||
return total_count;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 启动所有设备
|
||||
*
|
||||
* 启动所有相机的数据采集
|
||||
*
|
||||
* @return true 启动成功,false 启动失败(capture_为空或启动失败)
|
||||
*
|
||||
* @note 必须先调用initialize()初始化设备
|
||||
*/
|
||||
bool DeviceManager::startAll() {
|
||||
bool success = true;
|
||||
|
||||
// 启动深度相机
|
||||
if (capture_) {
|
||||
if (!capture_->start()) {
|
||||
success = false;
|
||||
std::cerr << "[DeviceManager] Failed to start depth cameras" << std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
||||
// 启动2D相机
|
||||
if (mvs_cameras_) {
|
||||
if (!mvs_cameras_->start()) {
|
||||
success = false;
|
||||
std::cerr << "[DeviceManager] Failed to start 2D cameras" << std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
return success;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 停止所有设备
|
||||
*
|
||||
* 停止所有相机的数据采集
|
||||
*
|
||||
* @note 此函数是幂等的,可以安全地多次调用
|
||||
*/
|
||||
void DeviceManager::stopAll() {
|
||||
if (capture_) {
|
||||
capture_->stop();
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 获取设备数量
|
||||
*
|
||||
* @return 当前已初始化的设备数量,0表示未初始化或无设备
|
||||
*/
|
||||
int DeviceManager::getDeviceCount() const {
|
||||
int count = 0;
|
||||
if (capture_) {
|
||||
count += capture_->getCameraCount();
|
||||
}
|
||||
|
||||
if (mvs_cameras_) {
|
||||
count += mvs_cameras_->getCameraCount();
|
||||
}
|
||||
return count;
|
||||
}
|
||||
|
||||
int DeviceManager::getDepthCameraCount() const {
|
||||
return capture_ ? capture_->getCameraCount() : 0;
|
||||
}
|
||||
|
||||
|
||||
|
||||
/**
|
||||
* @brief 获取设备ID
|
||||
*
|
||||
* @param index 设备索引,从0开始
|
||||
* @return 设备ID字符串,如果索引无效或未初始化则返回空字符串
|
||||
*/
|
||||
std::string DeviceManager::getDeviceId(int index) const {
|
||||
int percipio_count = capture_ ? capture_->getCameraCount() : 0;
|
||||
|
||||
if (index < percipio_count) {
|
||||
return capture_->getCameraId(index);
|
||||
}
|
||||
|
||||
int mvs_index = index - percipio_count;
|
||||
if (mvs_cameras_ && mvs_index >= 0 && mvs_index < mvs_cameras_->getCameraCount()) {
|
||||
return "2D-" + mvs_cameras_->getCameraId(mvs_index);
|
||||
}
|
||||
|
||||
return "";
|
||||
}
|
||||
|
||||
|
||||
|
||||
/**
|
||||
* @brief 获取指定设备的最新图像
|
||||
*
|
||||
* 从设备缓冲区中获取最新采集的图像数据
|
||||
*
|
||||
* @param device_index 设备索引,从0开始
|
||||
* @param depth [输出] 深度图,CV_16U格式,包含原始深度值(单位:毫米)
|
||||
* @param color [输出] 彩色图,BGR格式,CV_8UC3类型
|
||||
* @param fps [输出] 当前帧率(帧/秒)
|
||||
* @return true 成功获取图像,false 获取失败(设备未初始化、索引无效、缓冲区为空)
|
||||
*
|
||||
* @note 此函数是线程安全的,使用互斥锁保护缓冲区访问
|
||||
* @note 如果某个图像流未启用或尚未采集到数据,对应的Mat将为空
|
||||
*/
|
||||
bool DeviceManager::getLatestImages(int device_index, cv::Mat& depth, cv::Mat& color, double& fps) {
|
||||
int percipio_count = capture_ ? capture_->getCameraCount() : 0;
|
||||
|
||||
// 深度相机
|
||||
if (device_index < percipio_count) {
|
||||
if (!capture_) return false;
|
||||
return capture_->getLatestImages(device_index, depth, color, fps);
|
||||
}
|
||||
|
||||
// 2D相机
|
||||
int mvs_index = device_index - percipio_count;
|
||||
if (mvs_cameras_ && mvs_index >= 0 && mvs_index < mvs_cameras_->getCameraCount()) {
|
||||
depth = cv::Mat(); // 2D相机没有深度图
|
||||
return mvs_cameras_->getLatestImage(mvs_index, color, fps);
|
||||
}
|
||||
|
||||
|
||||
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
|
||||
|
||||
/**
|
||||
* @brief 检查是否正在运行
|
||||
*
|
||||
* @return true 正在运行,false 已停止或未初始化
|
||||
*/
|
||||
bool DeviceManager::isRunning() const {
|
||||
  bool anyDepthRunning = capture_ && capture_->isRunning();
  bool anyMvsRunning = mvs_cameras_ && mvs_cameras_->isRunning();
  return anyDepthRunning || anyMvsRunning;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 获取指定设备的深度相机内参
|
||||
*
|
||||
* 从相机SDK获取深度相机的内参(fx, fy, cx, cy)
|
||||
* 内参存储在相机的标定数据中
|
||||
*
|
||||
 * @param device_index 设备索引,从0开始
 * @param fx [输出] 焦距x(像素单位)
 * @param fy [输出] 焦距y(像素单位)
 * @param cx [输出] 主点x坐标(像素单位)
 * @param cy [输出] 主点y坐标(像素单位)
|
||||
* @return 是否成功获取内参
|
||||
*/
|
||||
bool DeviceManager::getDepthCameraIntrinsics(int device_index, float& fx, float& fy, float& cx, float& cy) {
|
||||
if (!capture_) return false;
|
||||
|
||||
// 只有深度相机有内参
|
||||
int percipio_count = capture_->getCameraCount();
|
||||
if (device_index < percipio_count) {
|
||||
return capture_->getDepthCameraIntrinsics(device_index, fx, fy, cx, cy);
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 利用SDK生成点云
|
||||
* @param device_index 设备索引
|
||||
* @param depth_img 深度图
|
||||
* @param out_points 输出点云
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool DeviceManager::computePointCloud(int device_index, const cv::Mat& depth_img, std::vector<Point3D>& out_points) {
|
||||
if (!capture_) return false;
|
||||
|
||||
// 只有深度相机可以生成点云
|
||||
int percipio_count = capture_->getCameraCount();
|
||||
if (device_index < percipio_count) {
|
||||
return capture_->computePointCloud(device_index, depth_img, out_points);
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
|
||||
int DeviceManager::get2DCameraCount() const {
|
||||
return mvs_cameras_ ? mvs_cameras_->getCameraCount() : 0;
|
||||
}
|
||||
|
||||
bool DeviceManager::get2DCameraImage(int camera_index, cv::Mat& image, double& fps) {
|
||||
if (!mvs_cameras_) {
|
||||
return false;
|
||||
}
|
||||
return mvs_cameras_->getLatestImage(camera_index, image, fps);
|
||||
}
|
||||
|
||||
std::string DeviceManager::get2DCameraId(int camera_index) const {
|
||||
if (!mvs_cameras_) {
|
||||
return "";
|
||||
}
|
||||
return mvs_cameras_->getCameraId(camera_index);
|
||||
}
|
||||
|
||||
154
image_capture/src/device/device_manager.h
Normal file
@@ -0,0 +1,154 @@
|
||||
#pragma once
|
||||
|
||||
// #include "../camera/ty_multi_camera_capture.h" -> Moved to cpp
|
||||
// #include "../camera/mvs_multi_camera_capture.h" -> Moved to cpp
|
||||
|
||||
#include <opencv2/opencv.hpp>

#include <memory>
#include <string>
#include <vector>

#include "../common_types.h"

class CameraCapture;
class MvsMultiCameraCapture;
|
||||
|
||||
/**
|
||||
* @brief DeviceManager
|
||||
* 设备管理器(Device Manager),负责管理硬件设备(相机、读码器等)
|
||||
*
|
||||
* 采用单例模式,确保全局只有一个设备管理器实例
|
||||
* 任何模块都可以通过getInstance()访问设备
|
||||
*
|
||||
* 功能说明:
|
||||
* - 管理相机采集设备的初始化、启动、停止
|
||||
* - 管理读码器设备的初始化、启动、停止
|
||||
* - 提供设备访问接口(获取图像、设备信息等)
|
||||
* - 支持未来扩展其他设备类型
|
||||
*
|
||||
* 职责范围:
|
||||
* - 设备生命周期管理(初始化、启动、停止)
|
||||
* - 设备数据获取(图像、设备信息)
|
||||
* - 不涉及业务逻辑(任务管理、结果处理等)
|
||||
*/
|
||||
class DeviceManager {
|
||||
public:
|
||||
/**
|
||||
* 获取单例实例
|
||||
* @return DeviceManager单例引用
|
||||
*/
|
||||
static DeviceManager& getInstance();
|
||||
|
||||
// 禁止拷贝和赋值
|
||||
DeviceManager(const DeviceManager&) = delete;
|
||||
DeviceManager& operator=(const DeviceManager&) = delete;
|
||||
|
||||
~DeviceManager();
|
||||
|
||||
/**
|
||||
* 初始化并扫描设备
|
||||
* @param enable_depth 是否启用深度流
|
||||
* @param enable_color 是否启用彩色流
|
||||
* @return 发现的设备数量
|
||||
*/
|
||||
int initialize(bool enable_depth = true, bool enable_color = true);
|
||||
|
||||
/**
|
||||
* 启动所有设备
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool startAll();
|
||||
|
||||
/**
|
||||
* 停止所有设备
|
||||
*/
|
||||
void stopAll();
|
||||
|
||||
/**
|
||||
* 获取设备数量
|
||||
* @return 设备数量
|
||||
*/
|
||||
int getDeviceCount() const;
|
||||
|
||||
/**
|
||||
* 获取设备ID
|
||||
* @param index 设备索引
|
||||
* @return 设备ID字符串
|
||||
*/
|
||||
std::string getDeviceId(int index) const;
|
||||
|
||||
/**
|
||||
* 获取指定设备的最新图像
|
||||
* @param device_index 设备索引
|
||||
* @param depth 输出的深度图
|
||||
* @param color 输出的彩色图
|
||||
* @param fps 输出的帧率
|
||||
* @return 是否成功获取到图像
|
||||
*/
|
||||
bool getLatestImages(int device_index, cv::Mat& depth, cv::Mat& color, double& fps);
|
||||
|
||||
/**
|
||||
* 检查是否正在运行
|
||||
* @return 是否运行中
|
||||
*/
|
||||
bool isRunning() const;
|
||||
|
||||
/**
|
||||
* 获取指定设备的深度相机内参
|
||||
* @param device_index 设备索引
|
||||
* @param fx [输出] 焦距x
|
||||
* @param fy [输出] 焦距y
|
||||
* @param cx [输出] 主点x
|
||||
* @param cy [输出] 主点y
|
||||
* @return 是否成功获取内参
|
||||
*/
|
||||
bool getDepthCameraIntrinsics(int device_index, float& fx, float& fy, float& cx, float& cy);
|
||||
|
||||
/**
|
||||
* @brief 利用SDK生成点云
|
||||
* @param device_index 设备索引
|
||||
* @param depth_img 深度图
|
||||
* @param out_points 输出点云
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool computePointCloud(int device_index, const cv::Mat& depth_img, std::vector<Point3D>& out_points);
|
||||
|
||||
/**
|
||||
* 获取深度相机数量
|
||||
* @return 深度相机数量
|
||||
*/
|
||||
int getDepthCameraCount() const;
|
||||
|
||||
/**
|
||||
* 获取2D (MVS)相机数量
|
||||
* @return 2D相机数量
|
||||
*/
|
||||
int get2DCameraCount() const;
|
||||
|
||||
/**
|
||||
* 获取2D相机图像
|
||||
* @param camera_index 2D相机索引(从0开始)
|
||||
* @param image 输出的彩色图
|
||||
* @param fps 输出的帧率
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool get2DCameraImage(int camera_index, cv::Mat& image, double& fps);
|
||||
|
||||
/**
|
||||
* 获取2D相机ID
|
||||
* @param camera_index 2D相机索引
|
||||
* @return 相机ID字符串
|
||||
*/
|
||||
std::string get2DCameraId(int camera_index) const;
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
private:
|
||||
DeviceManager(); // 私有构造函数,确保单例
|
||||
|
||||
std::shared_ptr<CameraCapture> capture_; // Percipio深度相机采集对象
|
||||
std::unique_ptr<MvsMultiCameraCapture> mvs_cameras_; // MVS 2D相机采集对象
|
||||
bool initialized_; // 是否已初始化
|
||||
};
|
||||
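// 使用示意(仅为草图,展示上述接口的典型调用顺序,错误处理从简):
//
//   auto& dm = DeviceManager::getInstance();
//   int count = dm.initialize(/*enable_depth=*/true, /*enable_color=*/true);
//   if (count > 0 && dm.startAll()) {
//       cv::Mat depth, color;
//       double fps = 0.0;
//       for (int i = 0; i < dm.getDeviceCount(); ++i) {
//           if (dm.getLatestImages(i, depth, color, fps)) {
//               std::cout << dm.getDeviceId(i) << " fps=" << fps << std::endl;
//           }
//       }
//       dm.stopAll();
//   }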
|
||||
1014
image_capture/src/gui/mainwindow.cpp
Normal file
File diff suppressed because it is too large
106
image_capture/src/gui/mainwindow.h
Normal file
@@ -0,0 +1,106 @@
|
||||
#pragma once
|
||||
|
||||
#include <QMainWindow>
|
||||
#include <QLabel>
|
||||
#include <QPushButton>
|
||||
#include <QVBoxLayout>
|
||||
#include <QHBoxLayout>
|
||||
#include <QTimer>
|
||||
#include <QPlainTextEdit>
|
||||
#include <QTextStream>
|
||||
#include <QTabWidget>
|
||||
#include <QDoubleSpinBox>
|
||||
#include <QSpinBox>
|
||||
#include <QFormLayout>
|
||||
#include <QGroupBox>
|
||||
#include <QScrollArea>
|
||||
#include <memory>
|
||||
#include <vector>
|
||||
#include <streambuf>
|
||||
#include <fstream>
|
||||
#include <opencv2/core/mat.hpp>
|
||||
|
||||
// Forward declarations
|
||||
class SettingsWidget;
|
||||
class ImageProcessor;
|
||||
class VisionController;
|
||||
class LogStreamBuf;
|
||||
|
||||
// 必须在QT_BEGIN_NAMESPACE之前包含,确保MOC能看到完整类型定义
|
||||
#include "../common/log_streambuf.h"
|
||||
|
||||
QT_BEGIN_NAMESPACE
|
||||
namespace Ui { class MainWindow; }
|
||||
QT_END_NAMESPACE
|
||||
|
||||
class MainWindow : public QMainWindow
|
||||
{
|
||||
Q_OBJECT
|
||||
|
||||
public:
|
||||
MainWindow(QWidget *parent = nullptr);
|
||||
~MainWindow();
|
||||
|
||||
private slots:
|
||||
void onStartCapture();
|
||||
void onStopCapture();
|
||||
void onSaveImage();
|
||||
void onSavePointCloud();
|
||||
void updateImage();
|
||||
|
||||
private:
|
||||
// Helper to reduce redundancy in save functions
|
||||
bool prepareCapturedData(cv::Mat& depth, cv::Mat& color, QString& timestamp);
|
||||
|
||||
Ui::MainWindow *ui;
|
||||
std::shared_ptr<VisionController> visionController_; // Vision系统控制器(Redis监控和算法触发)
|
||||
std::vector<std::shared_ptr<ImageProcessor>> processors_;
|
||||
|
||||
// Supports max 4 depth cameras
|
||||
static const int MAX_DEPTH_CAMERAS = 4;
|
||||
static const int MAX_2D_CAMERAS = 5;
|
||||
QLabel* depthImageLabels_[MAX_DEPTH_CAMERAS];
|
||||
QLabel* depthInfoLabels_[MAX_DEPTH_CAMERAS];
|
||||
|
||||
// 2D camera display control array
|
||||
QLabel* twoDImageLabels_[MAX_2D_CAMERAS];
|
||||
QLabel* twoDInfoLabels_[MAX_2D_CAMERAS];
|
||||
|
||||
void update2DDisplay();
|
||||
void update2DCameraDisplay(int camera_index, const cv::Mat& image, double fps);
|
||||
// 深度图信息标签(显示相机编号、FPS等)
|
||||
|
||||
|
||||
|
||||
QPushButton* startButton_;
|
||||
QPushButton* stopButton_;
|
||||
QPushButton* saveButton_;
|
||||
QPushButton* savePointCloudButton_;
|
||||
QPlainTextEdit* logTextEdit_;
|
||||
|
||||
QTimer* updateTimer_;
|
||||
bool isCapturing_;
|
||||
int currentDeviceIndex_;
|
||||
|
||||
// 日志重定向相关
|
||||
std::unique_ptr<LogStreamBuf> coutBuf_;
|
||||
std::unique_ptr<LogStreamBuf> cerrBuf_;
|
||||
std::streambuf* originalCout_;
|
||||
std::streambuf* originalCerr_;
|
||||
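    // 重定向的典型写法(仅为示意,以 mainwindow.cpp 中的实际实现为准):
    //   originalCout_ = std::cout.rdbuf(coutBuf_.get());   // 构造时接管 std::cout
    //   std::cout.rdbuf(originalCout_);                    // 析构时恢复原缓冲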
|
||||
QImage cvMatToQImage(const cv::Mat& mat);
|
||||
void updateDepthDisplay();
|
||||
void updateCameraDisplay(int cameraIndex);
|
||||
|
||||
void appendLog(const QString& message);
|
||||
|
||||
// 辅助函数:简化代码
|
||||
void startImageDisplay(); // 启动图像显示(更新按钮状态和定时器)
|
||||
void stopImageDisplay(); // 停止图像显示(更新按钮状态、停止定时器、清空显示)
|
||||
|
||||
    // Settings tab widget
|
||||
SettingsWidget* settingsWidget_;
|
||||
};
|
||||
|
||||
1205
image_capture/src/gui/mainwindow.ui
Normal file
File diff suppressed because it is too large
305
image_capture/src/gui/settings_widget.cpp
Normal file
@@ -0,0 +1,305 @@
|
||||
#include "settings_widget.h"
|
||||
#include "../common/config_manager.h"
|
||||
#include <QSpinBox>
|
||||
#include <QDoubleSpinBox>
|
||||
#include <QVBoxLayout>
|
||||
#include <QHBoxLayout>
|
||||
#include <QTabWidget>
|
||||
#include <QPushButton>
|
||||
#include <QGroupBox>
|
||||
#include <QFormLayout>
|
||||
#include <QLabel>
|
||||
#include <QScrollArea>
|
||||
#include <QMessageBox>
|
||||
|
||||
SettingsWidget::SettingsWidget(QWidget *parent) : QWidget(parent) {
|
||||
setupUi();
|
||||
loadSettings();
|
||||
}
|
||||
|
||||
void SettingsWidget::setupUi() {
|
||||
auto mainLayout = new QVBoxLayout(this);
|
||||
|
||||
auto tabWidget = new QTabWidget(this);
|
||||
tabWidget->addTab(createBeamRackTab(), "Beam/Rack Deflection");
|
||||
tabWidget->addTab(createPalletOffsetTab(), "Pallet Offset");
|
||||
tabWidget->addTab(createOtherAlgorithmsTab(), "Other Algorithms");
|
||||
tabWidget->addTab(createGeneralTab(), "General");
|
||||
|
||||
mainLayout->addWidget(tabWidget);
|
||||
|
||||
auto buttonLayout = new QHBoxLayout();
|
||||
buttonLayout->addStretch();
|
||||
|
||||
auto saveButton = new QPushButton("Save Settings", this);
|
||||
saveButton->setMinimumSize(120, 35);
|
||||
QFont font;
|
||||
font.setPointSize(12);
|
||||
saveButton->setFont(font);
|
||||
connect(saveButton, &QPushButton::clicked, this, &SettingsWidget::saveSettings);
|
||||
buttonLayout->addWidget(saveButton);
|
||||
|
||||
mainLayout->addLayout(buttonLayout);
|
||||
}
|
||||
|
||||
QWidget* SettingsWidget::createBeamRackTab() {
|
||||
auto widget = new QWidget(this);
|
||||
auto layout = new QVBoxLayout(widget);
|
||||
|
||||
auto roiGroup = new QGroupBox("感兴趣区域点", this);
|
||||
auto roiLayout = new QGridLayout(roiGroup);
|
||||
|
||||
roiLayout->addWidget(new QLabel("点", this), 0, 0);
|
||||
roiLayout->addWidget(new QLabel("横梁 X", this), 0, 1);
|
||||
roiLayout->addWidget(new QLabel("横梁 Y", this), 0, 2);
|
||||
roiLayout->addWidget(new QLabel("立柱 X", this), 0, 3);
|
||||
roiLayout->addWidget(new QLabel("立柱 Y", this), 0, 4);
|
||||
|
||||
const QStringList pointNames = {"左上", "右上", "右下", "左下"};
|
||||
for (int i = 0; i < 4; ++i) {
|
||||
beamRoiX_[i] = new QSpinBox(this); beamRoiX_[i]->setRange(0, 5000);
|
||||
beamRoiY_[i] = new QSpinBox(this); beamRoiY_[i]->setRange(0, 5000);
|
||||
rackRoiX_[i] = new QSpinBox(this); rackRoiX_[i]->setRange(0, 5000);
|
||||
rackRoiY_[i] = new QSpinBox(this); rackRoiY_[i]->setRange(0, 5000);
|
||||
|
||||
roiLayout->addWidget(new QLabel(pointNames[i], this), i+1, 0);
|
||||
roiLayout->addWidget(beamRoiX_[i], i+1, 1);
|
||||
roiLayout->addWidget(beamRoiY_[i], i+1, 2);
|
||||
roiLayout->addWidget(rackRoiX_[i], i+1, 3);
|
||||
roiLayout->addWidget(rackRoiY_[i], i+1, 4);
|
||||
}
|
||||
layout->addWidget(roiGroup);
|
||||
|
||||
auto threshGroup = new QGroupBox("阈值 (mm)", this);
|
||||
auto threshLayout = new QFormLayout(threshGroup);
|
||||
|
||||
auto createDoubleSpin = [this](QDoubleSpinBox*& box) {
|
||||
box = new QDoubleSpinBox(this);
|
||||
box->setRange(-1000.0, 1000.0);
|
||||
box->setSingleStep(0.1);
|
||||
box->setDecimals(1);
|
||||
};
|
||||
|
||||
createDoubleSpin(beamThresholdA_);
|
||||
createDoubleSpin(beamThresholdB_);
|
||||
createDoubleSpin(beamThresholdC_);
|
||||
createDoubleSpin(beamThresholdD_);
|
||||
createDoubleSpin(rackThresholdA_);
|
||||
createDoubleSpin(rackThresholdB_);
|
||||
createDoubleSpin(rackThresholdC_);
|
||||
createDoubleSpin(rackThresholdD_);
|
||||
|
||||
threshLayout->addRow("横梁负向报警 (A):", beamThresholdA_);
|
||||
threshLayout->addRow("横梁负向预警 (B):", beamThresholdB_);
|
||||
threshLayout->addRow("横梁正向预警 (C):", beamThresholdC_);
|
||||
threshLayout->addRow("横梁正向报警 (D):", beamThresholdD_);
|
||||
threshLayout->addRow("立柱负向报警 (A):", rackThresholdA_);
|
||||
threshLayout->addRow("立柱负向预警 (B):", rackThresholdB_);
|
||||
threshLayout->addRow("立柱正向预警 (C):", rackThresholdC_);
|
||||
threshLayout->addRow("立柱正向报警 (D):", rackThresholdD_);
|
||||
|
||||
layout->addWidget(threshGroup);
|
||||
layout->addStretch();
|
||||
return widget;
|
||||
}
|
||||
|
||||
QWidget* SettingsWidget::createPalletOffsetTab() {
|
||||
auto widget = new QWidget(this);
|
||||
auto scroll = new QScrollArea(widget);
|
||||
auto contentWidget = new QWidget();
|
||||
auto layout = new QVBoxLayout(contentWidget);
|
||||
|
||||
auto createThreshGroup = [this](const QString& title, QDoubleSpinBox*& A, QDoubleSpinBox*& B, QDoubleSpinBox*& C, QDoubleSpinBox*& D) {
|
||||
auto group = new QGroupBox(title, this);
|
||||
auto flo = new QFormLayout(group);
|
||||
|
||||
auto createDS = [this](QDoubleSpinBox*& box) {
|
||||
box = new QDoubleSpinBox(this);
|
||||
box->setRange(-1000.0, 1000.0);
|
||||
box->setSingleStep(0.1);
|
||||
box->setDecimals(1);
|
||||
};
|
||||
|
||||
createDS(A); createDS(B); createDS(C); createDS(D);
|
||||
flo->addRow("低位报警 (A):", A);
|
||||
flo->addRow("低位预警 (B):", B);
|
||||
flo->addRow("高位预警 (C):", C);
|
||||
flo->addRow("高位报警 (D):", D);
|
||||
return group;
|
||||
};
|
||||
|
||||
layout->addWidget(createThreshGroup("横向偏移 (mm)", palletLatA_, palletLatB_, palletLatC_, palletLatD_));
|
||||
layout->addWidget(createThreshGroup("纵向偏移 (mm)", palletLonA_, palletLonB_, palletLonC_, palletLonD_));
|
||||
layout->addWidget(createThreshGroup("旋转角度 (deg)", palletRotA_, palletRotB_, palletRotC_, palletRotD_));
|
||||
layout->addWidget(createThreshGroup("左孔变形 (mm)", palletHoleLeftA_, palletHoleLeftB_, palletHoleLeftC_, palletHoleLeftD_));
|
||||
layout->addWidget(createThreshGroup("右孔变形 (mm)", palletHoleRightA_, palletHoleRightB_, palletHoleRightC_, palletHoleRightD_));
|
||||
|
||||
layout->addStretch();
|
||||
|
||||
auto mainLayout = new QVBoxLayout(widget);
|
||||
scroll->setWidget(contentWidget);
|
||||
scroll->setWidgetResizable(true);
|
||||
mainLayout->addWidget(scroll);
|
||||
|
||||
return widget;
|
||||
}
|
||||
|
||||
QWidget* SettingsWidget::createOtherAlgorithmsTab() {
|
||||
auto widget = new QWidget(this);
|
||||
auto layout = new QVBoxLayout(widget);
|
||||
|
||||
auto slotGroup = new QGroupBox("库位占用", this);
|
||||
auto slotLayout = new QFormLayout(slotGroup);
|
||||
|
||||
slotDepthThreshold_ = new QDoubleSpinBox(this);
|
||||
slotDepthThreshold_->setRange(0.0, 10000.0);
|
||||
|
||||
slotConfidenceThreshold_ = new QDoubleSpinBox(this);
|
||||
slotConfidenceThreshold_->setRange(0.0, 1.0);
|
||||
slotConfidenceThreshold_->setSingleStep(0.05);
|
||||
|
||||
slotLayout->addRow("深度阈值 (mm):", slotDepthThreshold_);
|
||||
slotLayout->addRow("置信度阈值:", slotConfidenceThreshold_);
|
||||
layout->addWidget(slotGroup);
|
||||
|
||||
auto visGroup = new QGroupBox("视觉盘点", this);
|
||||
auto visLayout = new QFormLayout(visGroup);
|
||||
|
||||
visualBarcodeConfidence_ = new QDoubleSpinBox(this);
|
||||
visualBarcodeConfidence_->setRange(0.0, 1.0);
|
||||
visualBarcodeConfidence_->setSingleStep(0.05);
|
||||
|
||||
visLayout->addRow("条码置信度:", visualBarcodeConfidence_);
|
||||
layout->addWidget(visGroup);
|
||||
|
||||
layout->addStretch();
|
||||
return widget;
|
||||
}
|
||||
|
||||
QWidget* SettingsWidget::createGeneralTab() {
|
||||
auto widget = new QWidget(this);
|
||||
auto layout = new QFormLayout(widget);
|
||||
|
||||
minDepth_ = new QDoubleSpinBox(this);
|
||||
minDepth_->setRange(0.0, 10000.0);
|
||||
|
||||
maxDepth_ = new QDoubleSpinBox(this);
|
||||
maxDepth_->setRange(0.0, 10000.0);
|
||||
|
||||
samplePoints_ = new QSpinBox(this);
|
||||
samplePoints_->setRange(1, 1000);
|
||||
|
||||
layout->addRow("最小深度 (mm):", minDepth_);
|
||||
layout->addRow("最大深度 (mm):", maxDepth_);
|
||||
layout->addRow("采样点数:", samplePoints_);
|
||||
|
||||
return widget;
|
||||
}
|
||||
|
||||
void SettingsWidget::loadSettings() {
|
||||
auto& config = ConfigManager::getInstance();
|
||||
|
||||
auto beamPoints = config.getBeamROIPoints();
|
||||
for(int i=0; i<4 && i<beamPoints.size(); ++i) {
|
||||
beamRoiX_[i]->setValue(beamPoints[i].x);
|
||||
beamRoiY_[i]->setValue(beamPoints[i].y);
|
||||
}
|
||||
auto rackPoints = config.getRackROIPoints();
|
||||
for(int i=0; i<4 && i<rackPoints.size(); ++i) {
|
||||
rackRoiX_[i]->setValue(rackPoints[i].x);
|
||||
rackRoiY_[i]->setValue(rackPoints[i].y);
|
||||
}
|
||||
|
||||
auto beamT = config.getBeamThresholds();
|
||||
if(beamT.size() >= 4) {
|
||||
beamThresholdA_->setValue(beamT[0]);
|
||||
beamThresholdB_->setValue(beamT[1]);
|
||||
beamThresholdC_->setValue(beamT[2]);
|
||||
beamThresholdD_->setValue(beamT[3]);
|
||||
}
|
||||
|
||||
auto rackT = config.getRackThresholds();
|
||||
if(rackT.size() >= 4) {
|
||||
rackThresholdA_->setValue(rackT[0]);
|
||||
rackThresholdB_->setValue(rackT[1]);
|
||||
rackThresholdC_->setValue(rackT[2]);
|
||||
rackThresholdD_->setValue(rackT[3]);
|
||||
}
|
||||
|
||||
auto setThresh = [](std::vector<float> v, QDoubleSpinBox* a, QDoubleSpinBox* b, QDoubleSpinBox* c, QDoubleSpinBox* d) {
|
||||
if(v.size() >= 4) {
|
||||
a->setValue(v[0]); b->setValue(v[1]); c->setValue(v[2]); d->setValue(v[3]);
|
||||
}
|
||||
};
|
||||
|
||||
setThresh(config.getPalletOffsetLatThresholds(), palletLatA_, palletLatB_, palletLatC_, palletLatD_);
|
||||
setThresh(config.getPalletOffsetLonThresholds(), palletLonA_, palletLonB_, palletLonC_, palletLonD_);
|
||||
setThresh(config.getPalletRotationAngleThresholds(), palletRotA_, palletRotB_, palletRotC_, palletRotD_);
|
||||
setThresh(config.getPalletHoleDefLeftThresholds(), palletHoleLeftA_, palletHoleLeftB_, palletHoleLeftC_, palletHoleLeftD_);
|
||||
setThresh(config.getPalletHoleDefRightThresholds(), palletHoleRightA_, palletHoleRightB_, palletHoleRightC_, palletHoleRightD_);
|
||||
|
||||
slotDepthThreshold_->setValue(config.getSlotOccupancyDepthThreshold());
|
||||
slotConfidenceThreshold_->setValue(config.getSlotOccupancyConfidenceThreshold());
|
||||
visualBarcodeConfidence_->setValue(config.getVisualInventoryBarcodeConfidence());
|
||||
|
||||
minDepth_->setValue(config.getAlgorithmMinDepth());
|
||||
maxDepth_->setValue(config.getAlgorithmMaxDepth());
|
||||
samplePoints_->setValue(config.getAlgorithmSamplePoints());
|
||||
}
|
||||
|
||||
void SettingsWidget::saveSettings() {
|
||||
auto& config = ConfigManager::getInstance();
|
||||
|
||||
std::vector<cv::Point2i> beamPts, rackPts;
|
||||
for(int i=0; i<4; ++i) {
|
||||
beamPts.push_back(cv::Point2i(beamRoiX_[i]->value(), beamRoiY_[i]->value()));
|
||||
rackPts.push_back(cv::Point2i(rackRoiX_[i]->value(), rackRoiY_[i]->value()));
|
||||
}
|
||||
config.setBeamROIPoints(beamPts);
|
||||
config.setRackROIPoints(rackPts);
|
||||
|
||||
config.setBeamThresholds({
|
||||
(float)beamThresholdA_->value(), (float)beamThresholdB_->value(),
|
||||
(float)beamThresholdC_->value(), (float)beamThresholdD_->value()
|
||||
});
|
||||
config.setRackThresholds({
|
||||
(float)rackThresholdA_->value(), (float)rackThresholdB_->value(),
|
||||
(float)rackThresholdC_->value(), (float)rackThresholdD_->value()
|
||||
});
|
||||
|
||||
config.setPalletOffsetLatThresholds({
|
||||
(float)palletLatA_->value(), (float)palletLatB_->value(),
|
||||
(float)palletLatC_->value(), (float)palletLatD_->value()
|
||||
});
|
||||
config.setPalletOffsetLonThresholds({
|
||||
(float)palletLonA_->value(), (float)palletLonB_->value(),
|
||||
(float)palletLonC_->value(), (float)palletLonD_->value()
|
||||
});
|
||||
config.setPalletRotationAngleThresholds({
|
||||
(float)palletRotA_->value(), (float)palletRotB_->value(),
|
||||
(float)palletRotC_->value(), (float)palletRotD_->value()
|
||||
});
|
||||
config.setPalletHoleDefLeftThresholds({
|
||||
(float)palletHoleLeftA_->value(), (float)palletHoleLeftB_->value(),
|
||||
(float)palletHoleLeftC_->value(), (float)palletHoleLeftD_->value()
|
||||
});
|
||||
config.setPalletHoleDefRightThresholds({
|
||||
(float)palletHoleRightA_->value(), (float)palletHoleRightB_->value(),
|
||||
(float)palletHoleRightC_->value(), (float)palletHoleRightD_->value()
|
||||
});
|
||||
|
||||
config.setSlotOccupancyDepthThreshold((float)slotDepthThreshold_->value());
|
||||
config.setSlotOccupancyConfidenceThreshold((float)slotConfidenceThreshold_->value());
|
||||
config.setVisualInventoryBarcodeConfidence((float)visualBarcodeConfidence_->value());
|
||||
|
||||
config.setAlgorithmMinDepth((float)minDepth_->value());
|
||||
config.setAlgorithmMaxDepth((float)maxDepth_->value());
|
||||
config.setAlgorithmSamplePoints(samplePoints_->value());
|
||||
|
||||
if (config.saveConfig()) {
|
||||
QMessageBox::information(this, "Success", "Configuration saved successfully.");
|
||||
emit settingsSaved();
|
||||
} else {
|
||||
QMessageBox::critical(this, "Error", "Failed to save configuration.");
|
||||
}
|
||||
}
|
||||
75
image_capture/src/gui/settings_widget.h
Normal file
@@ -0,0 +1,75 @@
|
||||
#pragma once
|
||||
|
||||
#include <QWidget>
|
||||
#include <vector>
|
||||
|
||||
class QSpinBox;
|
||||
class QDoubleSpinBox;
|
||||
|
||||
class SettingsWidget : public QWidget {
|
||||
Q_OBJECT
|
||||
|
||||
public:
|
||||
explicit SettingsWidget(QWidget *parent = nullptr);
|
||||
~SettingsWidget() = default;
|
||||
|
||||
public slots:
|
||||
void loadSettings();
|
||||
void saveSettings();
|
||||
|
||||
signals:
|
||||
void settingsSaved();
|
||||
|
||||
private:
|
||||
void setupUi();
|
||||
QWidget* createBeamRackTab();
|
||||
QWidget* createPalletOffsetTab();
|
||||
QWidget* createOtherAlgorithmsTab();
|
||||
QWidget* createGeneralTab();
|
||||
|
||||
// Beam/Rack Deflection
|
||||
QSpinBox* beamRoiX_[4];
|
||||
QSpinBox* beamRoiY_[4];
|
||||
QSpinBox* rackRoiX_[4];
|
||||
QSpinBox* rackRoiY_[4];
|
||||
QDoubleSpinBox* beamThresholdA_;
|
||||
QDoubleSpinBox* beamThresholdB_;
|
||||
QDoubleSpinBox* beamThresholdC_;
|
||||
QDoubleSpinBox* beamThresholdD_;
|
||||
QDoubleSpinBox* rackThresholdA_;
|
||||
QDoubleSpinBox* rackThresholdB_;
|
||||
QDoubleSpinBox* rackThresholdC_;
|
||||
QDoubleSpinBox* rackThresholdD_;
|
||||
|
||||
// Pallet Offset
|
||||
QDoubleSpinBox* palletLatA_;
|
||||
QDoubleSpinBox* palletLatB_;
|
||||
QDoubleSpinBox* palletLatC_;
|
||||
QDoubleSpinBox* palletLatD_;
|
||||
QDoubleSpinBox* palletLonA_;
|
||||
QDoubleSpinBox* palletLonB_;
|
||||
QDoubleSpinBox* palletLonC_;
|
||||
QDoubleSpinBox* palletLonD_;
|
||||
QDoubleSpinBox* palletRotA_;
|
||||
QDoubleSpinBox* palletRotB_;
|
||||
QDoubleSpinBox* palletRotC_;
|
||||
QDoubleSpinBox* palletRotD_;
|
||||
QDoubleSpinBox* palletHoleLeftA_;
|
||||
QDoubleSpinBox* palletHoleLeftB_;
|
||||
QDoubleSpinBox* palletHoleLeftC_;
|
||||
QDoubleSpinBox* palletHoleLeftD_;
|
||||
QDoubleSpinBox* palletHoleRightA_;
|
||||
QDoubleSpinBox* palletHoleRightB_;
|
||||
QDoubleSpinBox* palletHoleRightC_;
|
||||
QDoubleSpinBox* palletHoleRightD_;
|
||||
|
||||
// Other Algorithms
|
||||
QDoubleSpinBox* slotDepthThreshold_;
|
||||
QDoubleSpinBox* slotConfidenceThreshold_;
|
||||
QDoubleSpinBox* visualBarcodeConfidence_;
|
||||
|
||||
// General Parameters
|
||||
QDoubleSpinBox* minDepth_;
|
||||
QDoubleSpinBox* maxDepth_;
|
||||
QSpinBox* samplePoints_;
|
||||
};
|
||||
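// 使用示意(仅为草图,假设作为主窗口中的一个标签页嵌入;onSettingsSaved 为假设的槽函数名):
//   auto settings = new SettingsWidget(parent);
//   QObject::connect(settings, &SettingsWidget::settingsSaved,
//                    parent, &MainWindow::onSettingsSaved);
//   tabWidget->addTab(settings, "Settings");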
18
image_capture/src/main.cpp
Normal file
@@ -0,0 +1,18 @@
|
||||
#include "gui/mainwindow.h"
|
||||
#include <QApplication>
|
||||
#include <opencv2/core/utils/logger.hpp>
|
||||
#include <qapplication.h>
|
||||
|
||||
int main(int argc, char *argv[])
|
||||
{
|
||||
// 设置OpenCV日志级别
|
||||
cv::utils::logging::setLogLevel(cv::utils::logging::LOG_LEVEL_SILENT);
|
||||
|
||||
QApplication app(argc, argv);
|
||||
app.setStyle("Fusion");
|
||||
//创建MainWindow对象(此时会执行构造函数,初始化所有功能)
|
||||
MainWindow window;
|
||||
window.show();
|
||||
|
||||
return app.exec();
|
||||
}
|
||||
346
image_capture/src/redis/redis_communicator.cpp
Normal file
@@ -0,0 +1,346 @@
|
||||
/**
|
||||
* @file redis_communicator.cpp
|
||||
* @brief Redis通信模块实现文件
|
||||
*/
|
||||
#include "redis_communicator.h"
|
||||
#include <iostream>
|
||||
#include <sstream>
|
||||
#include <vector>
|
||||
#include <winsock2.h>
|
||||
#include <ws2tcpip.h>
|
||||
|
||||
#pragma comment(lib, "ws2_32.lib")
|
||||
|
||||
RedisCommunicator::RedisCommunicator()
|
||||
: redis_port_(6379), redis_db_(0), redis_context_(nullptr),
|
||||
listening_(false), connected_(false), last_flag_(0),
|
||||
socket_fd_(INVALID_SOCKET) {
|
||||
// Initialize Winsock
|
||||
WSADATA wsaData;
|
||||
int iResult = WSAStartup(MAKEWORD(2, 2), &wsaData);
|
||||
if (iResult != 0) {
|
||||
std::cerr << "WSAStartup failed: " << iResult << std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
RedisCommunicator::~RedisCommunicator() {
|
||||
stopListening();
|
||||
disconnectSocket();
|
||||
WSACleanup();
|
||||
}
|
||||
|
||||
bool RedisCommunicator::initialize(const std::string &host, int port, int db,
|
||||
const std::string &password) {
|
||||
redis_host_ = host;
|
||||
redis_port_ = port;
|
||||
redis_db_ = db;
|
||||
redis_password_ = password;
|
||||
|
||||
// Disconnect if already connected
|
||||
disconnectSocket();
|
||||
|
||||
if (connectSocket()) {
|
||||
connected_ = true;
|
||||
|
||||
// Authenticate if password provided
|
||||
if (!redis_password_.empty()) {
|
||||
std::string cmd = "AUTH " + redis_password_ + "\r\n";
|
||||
std::string response = sendCommand(cmd);
|
||||
if (response.find("+OK") != 0) {
|
||||
std::cerr << "Redis authentication failed: " << response << std::endl;
|
||||
disconnectSocket();
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
// Select database if needed
|
||||
if (db > 0) {
|
||||
std::string cmd = "SELECT " + std::to_string(db) + "\r\n";
|
||||
sendCommand(cmd);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
void RedisCommunicator::setTaskCallback(TaskCallback callback) {
|
||||
std::lock_guard<std::mutex> lock(callback_mutex_);
|
||||
task_callback_ = callback;
|
||||
}
|
||||
|
||||
bool RedisCommunicator::startListening() {
|
||||
if (listening_)
|
||||
return true;
|
||||
if (!connected_)
|
||||
return false;
|
||||
|
||||
listening_ = true;
|
||||
listening_thread_ =
|
||||
std::thread(&RedisCommunicator::listeningThreadFunc, this);
|
||||
std::cout << "[RedisCommunicator] Started listening for task flag changes"
|
||||
<< std::endl;
|
||||
return true;
|
||||
}
|
||||
|
||||
void RedisCommunicator::stopListening() {
|
||||
if (!listening_)
|
||||
return;
|
||||
listening_ = false;
|
||||
if (listening_thread_.joinable()) {
|
||||
listening_thread_.join();
|
||||
}
|
||||
std::cout << "[RedisCommunicator] Stopped listening" << std::endl;
|
||||
}
|
||||
|
||||
bool RedisCommunicator::connectSocket() {
|
||||
struct addrinfo *result = NULL, *ptr = NULL, hints;
|
||||
|
||||
ZeroMemory(&hints, sizeof(hints));
|
||||
hints.ai_family = AF_UNSPEC;
|
||||
hints.ai_socktype = SOCK_STREAM;
|
||||
hints.ai_protocol = IPPROTO_TCP;
|
||||
|
||||
// Resolve the server address and port
|
||||
std::string port_str = std::to_string(redis_port_);
|
||||
int iResult =
|
||||
getaddrinfo(redis_host_.c_str(), port_str.c_str(), &hints, &result);
|
||||
if (iResult != 0) {
|
||||
std::cerr << "getaddrinfo failed: " << iResult << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
SOCKET connectSocket = INVALID_SOCKET;
|
||||
|
||||
// Attempt to connect to an address until one succeeds
|
||||
for (ptr = result; ptr != NULL; ptr = ptr->ai_next) {
|
||||
connectSocket = socket(ptr->ai_family, ptr->ai_socktype, ptr->ai_protocol);
|
||||
if (connectSocket == INVALID_SOCKET) {
|
||||
std::cerr << "socket failed with error: " << WSAGetLastError()
|
||||
<< std::endl;
|
||||
continue;
|
||||
}
|
||||
|
||||
iResult = connect(connectSocket, ptr->ai_addr, (int)ptr->ai_addrlen);
|
||||
if (iResult == SOCKET_ERROR) {
|
||||
closesocket(connectSocket);
|
||||
connectSocket = INVALID_SOCKET;
|
||||
continue;
|
||||
}
|
||||
break;
|
||||
}
|
||||
|
||||
freeaddrinfo(result);
|
||||
|
||||
if (connectSocket == INVALID_SOCKET) {
|
||||
std::cerr << "Unable to connect to server!" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// Set timeout
|
||||
DWORD timeout = 2000; // 2 seconds
|
||||
setsockopt(connectSocket, SOL_SOCKET, SO_RCVTIMEO, (const char *)&timeout,
|
||||
sizeof(timeout));
|
||||
setsockopt(connectSocket, SOL_SOCKET, SO_SNDTIMEO, (const char *)&timeout,
|
||||
sizeof(timeout));
|
||||
|
||||
socket_fd_ = (unsigned long long)connectSocket;
|
||||
return true;
|
||||
}
|
||||
|
||||
void RedisCommunicator::disconnectSocket() {
|
||||
if (socket_fd_ != (unsigned long long)INVALID_SOCKET) {
|
||||
closesocket((SOCKET)socket_fd_);
|
||||
socket_fd_ = (unsigned long long)INVALID_SOCKET;
|
||||
connected_ = false;
|
||||
}
|
||||
}
|
||||
|
||||
std::string RedisCommunicator::sendCommand(const std::string &cmd) {
|
||||
if (socket_fd_ == (unsigned long long)INVALID_SOCKET)
|
||||
return "";
|
||||
|
||||
// Send the command
|
||||
int iResult = send((SOCKET)socket_fd_, cmd.c_str(), (int)cmd.length(), 0);
|
||||
if (iResult == SOCKET_ERROR) {
|
||||
std::cerr << "send failed: " << WSAGetLastError() << std::endl;
|
||||
disconnectSocket();
|
||||
return "";
|
||||
}
|
||||
|
||||
// Read response (simple blocking read for now)
|
||||
char recvbuf[4096];
|
||||
iResult = recv((SOCKET)socket_fd_, recvbuf, 4096, 0);
|
||||
if (iResult > 0) {
|
||||
return std::string(recvbuf, iResult);
|
||||
} else if (iResult == 0) {
|
||||
std::cout << "Connection closed" << std::endl;
|
||||
disconnectSocket();
|
||||
} else {
|
||||
// std::cerr << "recv failed: " << WSAGetLastError() << std::endl;
|
||||
}
|
||||
return "";
|
||||
}
|
||||
|
||||
bool RedisCommunicator::readString(const std::string &key, std::string &value) {
|
||||
if (!connected_)
|
||||
return false;
|
||||
// Simple inline command: GET key
|
||||
std::string cmd = "GET " + key + "\r\n";
|
||||
std::string response = sendCommand(cmd);
|
||||
return parseRedisResponse(response, value);
|
||||
}
|
||||
|
||||
bool RedisCommunicator::writeString(const std::string &key,
|
||||
const std::string &value) {
|
||||
if (!connected_)
|
||||
return false;
|
||||
|
||||
// Using RESP for SET to be safe with spaces in JSON
|
||||
// *3\r\n$3\r\nSET\r\n$<klen>\r\nkey\r\n$<vlen>\r\nvalue\r\n
|
||||
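  // 编码示例(仅作说明):SET foo bar 按上述 RESP 格式编码为
  //   "*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n"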
std::string cmd = "*3\r\n$3\r\nSET\r\n$" + std::to_string(key.length()) +
|
||||
"\r\n" + key + "\r\n$" + std::to_string(value.length()) +
|
||||
"\r\n" + value + "\r\n";
|
||||
|
||||
std::string response = sendCommand(cmd);
|
||||
return (response.find("+OK") == 0);
|
||||
}
|
||||
|
||||
bool RedisCommunicator::readInt(const std::string &key, int &value) {
|
||||
std::string str_value;
|
||||
if (readString(key, str_value)) {
|
||||
try {
|
||||
value = std::stoi(str_value);
|
||||
return true;
|
||||
} catch (...) {
|
||||
}
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
bool RedisCommunicator::parseRedisResponse(const std::string &response,
|
||||
std::string &value) {
|
||||
if (response.empty())
|
||||
return false;
|
||||
|
||||
// RESP Bulk String: $5\r\nvalue\r\n
|
||||
// RESP Null: $-1\r\n
|
||||
// RESP Simple String: +OK\r\n
|
||||
|
||||
if (response[0] == '$') {
|
||||
size_t rn1 = response.find("\r\n");
|
||||
if (rn1 == std::string::npos)
|
||||
return false;
|
||||
|
||||
std::string lenStr = response.substr(1, rn1 - 1);
|
||||
int len = std::stoi(lenStr);
|
||||
|
||||
if (len == -1)
|
||||
return false; // Key not found
|
||||
|
||||
size_t rn2 = response.find("\r\n", rn1 + 2);
|
||||
if (rn2 == std::string::npos) {
|
||||
// Maybe response is truncated, simplistic check
|
||||
if (response.length() >= rn1 + 2 + len) {
|
||||
value = response.substr(rn1 + 2, len);
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
value = response.substr(rn1 + 2, len);
|
||||
return true;
|
||||
} else if (response[0] == '+') {
|
||||
size_t rn = response.find("\r\n");
|
||||
if (rn != std::string::npos) {
|
||||
value = response.substr(1, rn - 1);
|
||||
return true;
|
||||
}
|
||||
} else if (response[0] == ':') {
|
||||
size_t rn = response.find("\r\n");
|
||||
if (rn != std::string::npos) {
|
||||
value = response.substr(1, rn - 1);
|
||||
return true;
|
||||
}
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
bool RedisCommunicator::readTaskData(RedisTaskData &task_data) {
|
||||
if (!connected_)
|
||||
return false;
|
||||
|
||||
int flag = 0;
|
||||
if (!readInt("vision_task_flag", flag))
|
||||
return false;
|
||||
task_data.flag = flag;
|
||||
|
||||
readString("vision_task_side", task_data.side);
|
||||
readString("vision_task_time", task_data.task_time);
|
||||
|
||||
int beam_length = 0;
|
||||
readInt("vision_task_beam_length", beam_length);
|
||||
task_data.beam_length = beam_length;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
bool RedisCommunicator::writeDetectionResult(const std::string &result_json) {
|
||||
  // 将检测结果 JSON 写入固定键 "vision_task_result",由 WMS 侧读取
|
||||
return writeString("vision_task_result", result_json);
|
||||
}
|
||||
|
||||
bool RedisCommunicator::isConnected() const { return connected_; }
|
||||
|
||||
void RedisCommunicator::listeningThreadFunc() {
|
||||
std::cout << "[RedisCommunicator] Listening thread started" << std::endl;
|
||||
// Attempt to reconnect loop if needed?
|
||||
// For now simple polling
|
||||
while (listening_) {
|
||||
try {
|
||||
if (!connected_) {
|
||||
// Try to reconnect?
|
||||
// For now just wait
|
||||
std::this_thread::sleep_for(std::chrono::seconds(1));
|
||||
continue;
|
||||
}
|
||||
|
||||
RedisTaskData task_data;
|
||||
if (readTaskData(task_data)) {
|
||||
int current_flag = task_data.flag;
|
||||
// Only notify on new non-zero flag
|
||||
if (current_flag != last_flag_ && current_flag > 0) {
|
||||
last_flag_ = current_flag;
|
||||
std::lock_guard<std::mutex> lock(callback_mutex_);
|
||||
if (task_callback_) {
|
||||
task_callback_(task_data);
|
||||
}
|
||||
std::cout << "[RedisCommunicator] Detected new task flag: "
|
||||
<< current_flag << std::endl;
|
||||
} else if (current_flag == 0 && last_flag_ != 0) {
|
||||
// Reset last flag if it goes back to 0
|
||||
last_flag_ = 0;
|
||||
}
|
||||
}
|
||||
} catch (const std::exception &e) {
|
||||
std::cerr << "[RedisCommunicator] Exception in listeningThreadFunc: "
|
||||
<< e.what() << std::endl;
|
||||
} catch (...) {
|
||||
std::cerr
|
||||
<< "[RedisCommunicator] Unknown exception in listeningThreadFunc."
|
||||
<< std::endl;
|
||||
}
|
||||
|
||||
std::this_thread::sleep_for(
|
||||
std::chrono::milliseconds(100)); // Poll every 100ms
|
||||
}
|
||||
std::cout << "[RedisCommunicator] Listening thread ended" << std::endl;
|
||||
}
|
||||
145
image_capture/src/redis/redis_communicator.h
Normal file
@@ -0,0 +1,145 @@
|
||||
#pragma once
|
||||
|
||||
#include <string>
|
||||
#include <memory>
|
||||
#include <functional>
|
||||
#include <thread>
|
||||
#include <atomic>
|
||||
#include <mutex>
|
||||
|
||||
/**
|
||||
* @brief Redis通信模块
|
||||
*
|
||||
* 功能说明:
|
||||
* - 监听Redis中vision_task_flag的变化
|
||||
* - 读取WMS写入的任务数据(flag, side, time)
|
||||
* - 将Vision系统的检测结果写入Redis
|
||||
 * - 当前实现通过后台线程周期轮询(约100ms)检测任务标志位变化
|
||||
*
|
||||
* 设计原则:
|
||||
* - 线程安全
|
||||
* - 异步监听,不阻塞主线程
|
||||
* - 提供回调接口,当任务标志位变化时通知上层
|
||||
*/
|
||||
#include "task_data.h"
|
||||
|
||||
class RedisCommunicator {
|
||||
public:
|
||||
/**
|
||||
* 任务标志位变化回调函数类型
|
||||
* @param task_data 任务数据
|
||||
*/
|
||||
using TaskCallback = std::function<void(const RedisTaskData&)>;
|
||||
|
||||
RedisCommunicator();
|
||||
~RedisCommunicator();
|
||||
|
||||
/**
|
||||
* 初始化Redis连接
|
||||
* @param host Redis服务器地址(默认"127.0.0.1")
|
||||
* @param port Redis服务器端口(默认6379)
|
||||
* @param db Redis数据库编号(默认0)
|
||||
* @param password Redis密码(可选)
|
||||
* @return 是否成功连接
|
||||
*/
|
||||
bool initialize(const std::string& host = "127.0.0.1",
|
||||
int port = 6379,
|
||||
int db = 0,
|
||||
const std::string& password = "");
|
||||
|
||||
/**
|
||||
* 设置任务标志位变化回调函数
|
||||
* @param callback 回调函数
|
||||
*/
|
||||
void setTaskCallback(TaskCallback callback);
|
||||
|
||||
/**
|
||||
* 开始监听任务标志位变化
|
||||
* @return 是否成功启动监听
|
||||
*/
|
||||
bool startListening();
|
||||
|
||||
/**
|
||||
* 停止监听
|
||||
*/
|
||||
void stopListening();
|
||||
|
||||
/**
|
||||
* 读取任务数据
|
||||
* @param task_data [输出] 任务数据
|
||||
* @return 是否成功读取
|
||||
*/
|
||||
bool readTaskData(RedisTaskData& task_data);
|
||||
|
||||
/**
|
||||
* 写入检测结果到Redis
|
||||
* @param result_json 检测结果的JSON字符串
|
||||
* @return 是否成功写入
|
||||
*/
|
||||
bool writeDetectionResult(const std::string& result_json);
|
||||
|
||||
/**
|
||||
* 向Redis写入字符串值
|
||||
* @param key Redis键名
|
||||
* @param value 值
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool writeString(const std::string& key, const std::string& value);
|
||||
|
||||
/**
|
||||
* 检查Redis连接状态
|
||||
* @return 是否已连接
|
||||
*/
|
||||
bool isConnected() const;
|
||||
|
||||
private:
|
||||
/**
|
||||
* 监听线程函数
|
||||
*/
|
||||
void listeningThreadFunc();
|
||||
|
||||
/**
|
||||
* 从Redis读取字符串值
|
||||
* @param key Redis键名
|
||||
* @param value [输出] 值
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool readString(const std::string& key, std::string& value);
|
||||
|
||||
|
||||
|
||||
/**
|
||||
* 从Redis读取整数值
|
||||
* @param key Redis键名
|
||||
* @param value [输出] 值
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool readInt(const std::string& key, int& value);
|
||||
|
||||
// Redis连接参数
|
||||
std::string redis_host_;
|
||||
int redis_port_;
|
||||
int redis_db_;
|
||||
std::string redis_password_;
|
||||
|
||||
// Redis连接对象(使用void*避免暴露具体实现)
|
||||
void* redis_context_; // 实际类型为redisContext*或类似
|
||||
|
||||
// 监听相关
|
||||
std::atomic<bool> listening_;
|
||||
std::thread listening_thread_;
|
||||
std::mutex callback_mutex_;
|
||||
TaskCallback task_callback_;
|
||||
|
||||
// 状态
|
||||
std::atomic<bool> connected_;
|
||||
std::atomic<int> last_flag_; // 上次读取的flag值,用于检测变化
|
||||
|
||||
  // Socket handle (stored as unsigned long long to avoid including winsock headers here)
|
||||
unsigned long long socket_fd_;
|
||||
bool connectSocket();
|
||||
void disconnectSocket();
|
||||
std::string sendCommand(const std::string& cmd);
|
||||
bool parseRedisResponse(const std::string& response, std::string& value);
|
||||
};
|
||||
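// 使用示意(仅为草图,展示上述接口的典型调用顺序;地址、端口与 JSON 内容按实际部署填写):
//
//   RedisCommunicator redis;
//   if (redis.initialize("127.0.0.1", 6379, 0)) {
//       redis.setTaskCallback([](const RedisTaskData& task) {
//           std::cout << "task flag=" << task.flag << " side=" << task.side << std::endl;
//       });
//       redis.startListening();                       // 后台线程轮询任务标志位
//       // ... 检测完成后写回结果 ...
//       redis.writeDetectionResult("{\"result\":\"ok\"}");
//       redis.stopListening();
//   }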
|
||||
16
image_capture/src/redis/task_data.h
Normal file
@@ -0,0 +1,16 @@
|
||||
#pragma once
|
||||
#include <string>
|
||||
|
||||
/**
|
||||
* @brief Redis任务数据结构
|
||||
*
|
||||
* 独立定义的任务数据结构,包含flag、side和time
|
||||
*/
|
||||
struct RedisTaskData {
|
||||
int flag; // 任务功能编号(1~5)
|
||||
std::string side; // 货架侧(left/right)
|
||||
std::string task_time; // 任务触发时间("YYYY-MM-DD HH:MM:SS")
|
||||
int beam_length; // 横梁长度(mm),仅flag=3时有效,可选值:2180 / 1380
|
||||
|
||||
RedisTaskData() : flag(0), beam_length(0) {}
|
||||
};
|
||||
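// 取值示意(仅为举例):flag=3 的横梁/立柱变形检测任务通常形如
//   { flag = 3, side = "left", task_time = "2025-01-01 10:00:00", beam_length = 2180 }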
855
image_capture/src/task/task_manager.cpp
Normal file
@@ -0,0 +1,855 @@
|
||||
/**
|
||||
* @file task_manager.cpp
|
||||
* @brief 任务管理器实现文件(合并了结果处理功能)
|
||||
*
|
||||
* 此文件实现了TaskManager类的完整功能:
|
||||
* - 任务接收和队列管理
|
||||
* - 任务分发和执行(根据flag选择对应的检测算法)
|
||||
* - 检测结果处理(格式化、计算警告/报警、写入Redis)
|
||||
* - 线程安全的任务执行
|
||||
*
|
||||
* 设计说明:
|
||||
* - 使用任务队列 + 执行线程的模式,实现异步任务处理
|
||||
* - 直接使用DeviceManager单例获取图像,简化架构
|
||||
* - 合并了结果处理功能,简化架构
|
||||
* - 所有共享数据使用互斥锁保护,确保线程安全
|
||||
*/
|
||||
|
||||
#include "task_manager.h"
|
||||
#include "../algorithm/core/detection_base.h"
|
||||
#include "../common_types.h"
|
||||
#include "../device/device_manager.h"
|
||||
#include <chrono>
|
||||
#include <iostream>
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include <set>
|
||||
#include <sstream>
|
||||
#include <thread>
|
||||
|
||||
/**
|
||||
* @brief 构造函数
|
||||
*
|
||||
* 初始化任务管理器,创建所有检测器实例
|
||||
* 检测器映射关系:
|
||||
* - Flag 1: SlotOccupancyDetection (货位有无检测)
|
||||
* - Flag 2: PalletOffsetDetection (托盘位置偏移检测)
|
||||
* - Flag 3: BeamRackDeflectionDetection (横梁变形检测)
|
||||
* - Flag 4: VisualInventoryDetection (视觉盘点)
|
||||
|
||||
*
|
||||
* @note 所有检测器在构造函数中创建,避免运行时创建的开销
|
||||
* @note 使用智能指针管理检测器生命周期,自动释放资源
|
||||
*/
|
||||
TaskManager::TaskManager()
|
||||
    : current_status_(TaskStatus::IDLE), // 初始状态为空闲
      running_(false)                    // 初始状态为未运行
{
|
||||
// 创建所有检测器,建立flag到检测器的映射关系
|
||||
// 使用std::make_shared创建智能指针,自动管理内存
|
||||
detectors_[1] = std::make_shared<SlotOccupancyDetection>();
|
||||
detectors_[2] = std::make_shared<PalletOffsetDetection>();
|
||||
detectors_[3] = std::make_shared<BeamRackDeflectionDetection>();
|
||||
detectors_[4] = std::make_shared<VisualInventoryDetection>();
|
||||
|
||||
std::cout << "[TaskManager] Initialization complete, created "
|
||||
<< detectors_.size() << " detector(s)" << std::endl;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 析构函数
|
||||
*
|
||||
* 确保在对象销毁时正确停止任务执行线程
|
||||
* 清理所有资源,避免资源泄漏
|
||||
*
|
||||
* @note 必须先停止当前任务,再等待线程结束
|
||||
* @note 使用join()确保线程安全退出
|
||||
*/
|
||||
TaskManager::~TaskManager() {
|
||||
stopCurrentTask(); // 停止当前任务,清空任务队列
|
||||
if (execution_thread_.joinable()) {
|
||||
running_ = false; // 设置运行标志为false,通知线程退出
|
||||
execution_thread_.join(); // 等待线程结束
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 初始化任务管理器
|
||||
*
|
||||
* 初始化任务管理器,启动任务执行线程
|
||||
*
|
||||
 * @param redis_result_comm Redis通信对象(用于写入检测结果),可以为nullptr(如果不需要写入结果)
 * @param redis_task_comm Redis通信对象(用于读写任务触发键,结果写入后将其清零),可以为nullptr
|
||||
* @return true 初始化成功,false 初始化失败(Redis未连接)
|
||||
*
|
||||
* @note 如果redis_comm为nullptr,则不会写入结果到Redis,但任务仍可正常执行
|
||||
* @note 任务执行线程在后台运行,持续从队列中取出任务并执行
|
||||
*/
|
||||
bool TaskManager::initialize(
|
||||
std::shared_ptr<RedisCommunicator> redis_result_comm,
|
||||
std::shared_ptr<RedisCommunicator> redis_task_comm) {
|
||||
// 如果已经在运行,直接返回成功(避免重复初始化)
|
||||
if (running_) {
|
||||
return true;
|
||||
}
|
||||
|
||||
// 保存Redis通信对象引用(用于后续写入结果和清空触发键)
|
||||
redis_result_comm_ = redis_result_comm;
|
||||
redis_task_comm_ = redis_task_comm;
|
||||
|
||||
// 如果提供了Redis对象,检查连接状态
|
||||
if (redis_result_comm_ && !redis_result_comm_->isConnected()) {
|
||||
std::cerr << "[TaskManager] Redis not connected" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 设置运行标志为true,启动任务执行线程
|
||||
running_ = true;
|
||||
// 创建线程,执行taskExecutionThreadFunc函数
|
||||
// this指针指向当前TaskManager对象,用于在线程中访问成员函数
|
||||
execution_thread_ = std::thread(&TaskManager::taskExecutionThreadFunc, this);
|
||||
|
||||
std::cout << "[TaskManager] Task manager initialized" << std::endl;
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 处理新任务
|
||||
*
|
||||
* 将新任务加入任务队列,等待执行线程处理
|
||||
* 此函数是线程安全的,可以在任何线程中调用
|
||||
*
|
||||
* @param task_data
|
||||
* 任务数据,包含flag(任务类型)、side(货架侧)、task_time(任务时间)
|
||||
*
|
||||
* @note flag必须在1-5范围内,对应5种不同的检测任务
|
||||
* @note 使用互斥锁保护任务队列,确保线程安全
|
||||
* @note 任务队列采用FIFO(先进先出)模式
|
||||
*/
|
||||
void TaskManager::handleTask(const RedisTaskData &task_data) {
|
||||
// 验证任务标志位有效性(1-5对应5种检测任务)
|
||||
if (task_data.flag < 1 || task_data.flag > 5) {
|
||||
std::cerr << "[TaskManager] Invalid task flag: " << task_data.flag
|
||||
<< std::endl;
|
||||
return;
|
||||
}
|
||||
|
||||
// 使用互斥锁保护任务队列,确保线程安全
|
||||
// lock_guard自动管理锁的获取和释放
|
||||
{
|
||||
std::lock_guard<std::mutex> lock(task_queue_mutex_);
|
||||
task_queue_.push(task_data); // 将任务加入队列
|
||||
} // 释放锁后再通知,避免执行线程被唤醒后立即阻塞
|
||||
|
||||
// 性能优化:使用条件变量通知等待的线程,避免轮询
|
||||
task_queue_cv_.notify_one();
|
||||
|
||||
std::cout << "[TaskManager] Received new task: flag=" << task_data.flag
|
||||
<< ", side=" << task_data.side << ", time=" << task_data.task_time
|
||||
<< std::endl;
|
||||
}
|
||||
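// 调用示意(仅为草图,通常由 Redis 监听回调触发;task_manager 指已完成 initialize() 的实例):
//   RedisTaskData task;
//   task.flag = 2;                          // 托盘位置偏移检测
//   task.side = "left";
//   task.task_time = "2025-01-01 10:00:00";
//   task_manager.handleTask(task);          // 入队后由执行线程异步处理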
|
||||
TaskManager::TaskStatus TaskManager::getCurrentTaskStatus() const {
|
||||
return current_status_;
|
||||
}
|
||||
|
||||
bool TaskManager::getLatestResult(DetectionResult &result) {
|
||||
std::lock_guard<std::mutex> lock(result_mutex_);
|
||||
if (current_status_ == TaskStatus::COMPLETED ||
|
||||
current_status_ == TaskStatus::FAILED) {
|
||||
result = latest_result_;
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
void TaskManager::stopCurrentTask() {
|
||||
current_status_ = TaskStatus::IDLE;
|
||||
{
|
||||
std::lock_guard<std::mutex> lock(task_queue_mutex_);
|
||||
while (!task_queue_.empty()) {
|
||||
task_queue_.pop();
|
||||
}
|
||||
}
|
||||
// 通知等待的线程,避免在停止时阻塞
|
||||
task_queue_cv_.notify_one();
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 任务执行线程函数
|
||||
*
|
||||
* 这是任务执行线程的主函数,在后台持续运行
|
||||
* 主要工作流程:
|
||||
* 1. 从任务队列中取出任务(如果队列为空则休眠等待)
|
||||
* 2. 执行检测任务(获取图像、调用检测算法)
|
||||
* 3. 处理检测结果(格式化、计算警告/报警、写入Redis)
|
||||
* 4. 更新任务状态
|
||||
*
|
||||
* @note 此函数运行在独立的线程中,不会阻塞主线程
|
||||
 * @note 使用条件变量等待任务队列,队列为空时线程阻塞,避免CPU空转
|
||||
* @note 任务执行失败时,状态设置为FAILED,但不会影响后续任务执行
|
||||
*/
|
||||
void TaskManager::taskExecutionThreadFunc() {
|
||||
std::cout << "[TaskManager] Task execution thread started" << std::endl;
|
||||
|
||||
// 主循环:持续处理任务直到running_为false
|
||||
while (running_) {
|
||||
// 从队列中取出任务
|
||||
RedisTaskData task_data;
|
||||
{
|
||||
// 性能优化:使用条件变量代替轮询,减少CPU占用
|
||||
// unique_lock支持条件变量,lock_guard不支持
|
||||
std::unique_lock<std::mutex> lock(task_queue_mutex_);
|
||||
|
||||
// 等待条件:队列非空或running_为false
|
||||
// 如果队列为空,线程会阻塞在这里,直到有新任务到达或running_变为false
|
||||
task_queue_cv_.wait(lock,
|
||||
[this] { return !task_queue_.empty() || !running_; });
|
||||
|
||||
// 检查是否因为running_变为false而退出
|
||||
if (!running_ && task_queue_.empty()) {
|
||||
break;
|
||||
}
|
||||
|
||||
// 队列非空,取出队列头部的任务(FIFO模式)
|
||||
if (!task_queue_.empty()) {
|
||||
task_data = task_queue_.front();
|
||||
task_queue_.pop(); // 从队列中移除
|
||||
} else {
|
||||
continue; // 队列为空但running_仍为true,继续等待
|
||||
}
|
||||
} // unique_lock在这里自动释放锁
|
||||
|
||||
// 执行任务
|
||||
current_status_ = TaskStatus::RUNNING; // 设置状态为运行中
|
||||
DetectionResult result; // 用于存储检测结果
|
||||
|
||||
std::cout << "[TaskManager] Starting task execution: flag="
|
||||
<< task_data.flag << std::endl;
|
||||
|
||||
// 执行检测任务(获取图像、调用检测算法)
|
||||
if (executeDetectionTask(task_data, result)) {
|
||||
// 任务执行成功
|
||||
current_status_ = TaskStatus::COMPLETED;
|
||||
|
||||
// 保存结果到latest_result_(使用互斥锁保护)
|
||||
{
|
||||
std::lock_guard<std::mutex> lock(result_mutex_);
|
||||
latest_result_ = result;
|
||||
}
|
||||
|
||||
// 处理结果(格式化、计算警告/报警、写入Redis)
|
||||
// 注意:processResult内部会处理所有结果相关的操作
|
||||
processResult(result);
|
||||
|
||||
std::cout << "[TaskManager] Task execution completed: flag="
|
||||
<< task_data.flag << std::endl;
|
||||
} else {
|
||||
// 任务执行失败
|
||||
current_status_ = TaskStatus::FAILED;
|
||||
std::cerr << "[TaskManager] Task execution failed: flag="
|
||||
<< task_data.flag << std::endl;
|
||||
}
|
||||
} // while循环结束
|
||||
|
||||
std::cout << "[TaskManager] Task execution thread exited" << std::endl;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 执行检测任务
|
||||
*
|
||||
* 执行具体的检测任务,包括:
|
||||
* 1. 根据flag获取对应的检测器
|
||||
* 2. 通过DeviceManager单例获取相机图像(根据side选择相机)
|
||||
* 3. 调用检测器的execute()方法执行检测算法
|
||||
*
|
||||
* @param task_data 任务数据,包含flag、side、task_time
|
||||
* @param result [输出] 检测结果,由检测算法填充
|
||||
* @return true 检测成功,false
|
||||
* 检测失败(检测器不存在、图像获取失败、算法执行失败)
|
||||
*
|
||||
* @note 直接使用DeviceManager单例获取图像,简化架构
|
||||
* @note side参数用于选择相机:left对应索引0,right对应索引1
|
||||
* @note 如果图像获取失败,depth_img和color_img将为空,检测算法需要处理这种情况
|
||||
*/
|
||||
bool TaskManager::executeDetectionTask(const RedisTaskData &task_data,
|
||||
DetectionResult &result) {
|
||||
// 根据flag获取对应的检测器
|
||||
auto detector = getDetector(task_data.flag);
|
||||
if (!detector) {
|
||||
std::cerr << "[TaskManager] Detector not found for flag=" << task_data.flag
|
||||
<< std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 针对 flag=4 (Visual Inventory) 使用专门的循环逻辑
|
||||
if (task_data.flag == 4) {
|
||||
    // 视觉盘点需要在多个位置循环采图与识别,转交专用流程处理,结果直接写入 result
return executeVisualInventoryLoop(task_data, detector, result);
|
||||
}
|
||||
|
||||
cv::Mat depth_img, color_img;
|
||||
double fps = 0.0;
|
||||
bool image_acquired = false; // 标记图像是否成功获取
|
||||
std::vector<Point3D> point_cloud;
|
||||
|
||||
// 根据任务标志选择不同的相机采集策略
|
||||
if (task_data.flag == 1) {
|
||||
// Flag 1: Slot Occupancy Detection
|
||||
// 需求:vision_task_side=left选择相机:sn:DA8743029,vision_task_side=right选择相机:sn:DA8742900
|
||||
// 使用 MVS 2D 相机
|
||||
|
||||
std::string target_sn;
|
||||
if (task_data.side == "left") {
|
||||
target_sn = "DA8743029";
|
||||
} else if (task_data.side == "right") {
|
||||
target_sn = "DA8742900";
|
||||
} else {
|
||||
std::cerr << "[TaskManager] Invalid side for flag 1: " << task_data.side
|
||||
<< std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 查找匹配 SN 的相机索引
|
||||
int mvs_count = DeviceManager::getInstance().get2DCameraCount();
|
||||
int found_index = -1;
|
||||
|
||||
for (int i = 0; i < mvs_count; ++i) {
|
||||
std::string sn = DeviceManager::getInstance().get2DCameraId(i);
|
||||
// 简单的字符串匹配 (包含关系或相等)
|
||||
if (sn.find(target_sn) != std::string::npos) {
|
||||
found_index = i;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (found_index >= 0) {
|
||||
std::cout << "[TaskManager] Using MVS Camera Index " << found_index
|
||||
<< " (SN: " << target_sn << ") for Flag 1" << std::endl;
|
||||
|
||||
// 重试逻辑:尝试多次获取图像,应对相机刚启动第一次抓拍可能为空的情况
|
||||
for (int retry = 0; retry < 15; ++retry) {
|
||||
image_acquired = DeviceManager::getInstance().get2DCameraImage(
|
||||
found_index, color_img, fps);
|
||||
if (image_acquired && !color_img.empty()) {
|
||||
break;
|
||||
}
|
||||
std::cout << "[TaskManager] Waiting for image from camera "
|
||||
<< found_index << " (Retry " << retry + 1 << "/15)..."
|
||||
<< std::endl;
|
||||
std::this_thread::sleep_for(std::chrono::milliseconds(200));
|
||||
}
|
||||
|
||||
// depth_img 为空,Flag 1 算法只需要 color_img (或处理空 depth)
|
||||
} else {
|
||||
std::cerr << "[TaskManager] Camera with SN " << target_sn
|
||||
<< " not found for Flag 1!" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
} else if (task_data.flag == 2 || task_data.flag == 3) {
|
||||
// Flag 2: PalletOffsetDetection (需要深度图)
|
||||
// Flag 3: BeamRackDeflectionDetection (需要深度图)
|
||||
|
||||
std::string target_sn;
|
||||
if (task_data.side == "left") {
|
||||
target_sn = "207000146458";
|
||||
} else if (task_data.side == "right") {
|
||||
target_sn = "207000146703";
|
||||
} else {
|
||||
std::cerr << "[TaskManager] Invalid side for flag " << task_data.flag
|
||||
<< ": " << task_data.side << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 查找匹配 SN 的深度相机索引
|
||||
int depth_count = DeviceManager::getInstance().getDepthCameraCount();
|
||||
int found_index = -1;
|
||||
|
||||
for (int i = 0; i < depth_count; ++i) {
|
||||
// 注意:DeviceManager::getDeviceId 对于深度相机直接返回 SN
|
||||
std::string sn = DeviceManager::getInstance().getDeviceId(i);
|
||||
if (sn.find(target_sn) != std::string::npos) {
|
||||
found_index = i;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (found_index >= 0) {
|
||||
std::cout << "[TaskManager] Using Depth Camera Index " << found_index
|
||||
<< " (SN: " << target_sn << ") for Flag " << task_data.flag
|
||||
<< std::endl;
|
||||
|
||||
// 重试逻辑
|
||||
for (int retry = 0; retry < 15; ++retry) {
|
||||
image_acquired = DeviceManager::getInstance().getLatestImages(
|
||||
found_index, depth_img, color_img, fps);
|
||||
if (image_acquired &&
|
||||
!depth_img.empty()) { // 深度图任务通常更关注深度图
|
||||
break;
|
||||
}
|
||||
std::cout << "[TaskManager] Waiting for depth/color images from camera "
|
||||
<< found_index << " (Retry " << retry + 1 << "/15)..."
|
||||
<< std::endl;
|
||||
std::this_thread::sleep_for(std::chrono::milliseconds(200));
|
||||
}
|
||||
|
||||
// 获取点云
|
||||
if (image_acquired) {
|
||||
if (DeviceManager::getInstance().computePointCloud(
|
||||
found_index, depth_img, point_cloud)) {
|
||||
std::cout << "[TaskManager] Computed Point Cloud for Camera "
|
||||
<< found_index << ", Points: " << point_cloud.size()
|
||||
<< std::endl;
|
||||
} else {
|
||||
std::cerr << "[TaskManager] Failed to compute point cloud for camera "
|
||||
<< found_index << std::endl;
|
||||
}
|
||||
}
|
||||
} else {
|
||||
std::cerr << "[TaskManager] Depth Camera with SN " << target_sn
|
||||
<< " not found for Flag " << task_data.flag << "!" << std::endl;
|
||||
// 调试输出所有可用相机
|
||||
std::cout << "[TaskManager] Available Depth Cameras: ";
|
||||
for (int i = 0; i < depth_count; ++i) {
|
||||
std::cout << DeviceManager::getInstance().getDeviceId(i) << " ";
|
||||
}
|
||||
std::cout << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
} else if (task_data.flag == 4) {
|
||||
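    // 注意:flag==4 已在本函数开头转交 executeVisualInventoryLoop 并提前返回,
    // 此分支在当前流程中不会被执行,保留仅作参考。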
// Flag 4: VisualInventoryDetection
|
||||
// 使用指定的2D相机进行视觉盘库检测
|
||||
std::string target_sn = "DA8789631";
|
||||
|
||||
// 查找匹配 SN 的2D相机索引
|
||||
int mvs_count = DeviceManager::getInstance().get2DCameraCount();
|
||||
int found_index = -1;
|
||||
|
||||
for (int i = 0; i < mvs_count; ++i) {
|
||||
std::string sn = DeviceManager::getInstance().get2DCameraId(i);
|
||||
// 简单的字符串匹配 (包含关系或相等)
|
||||
if (sn.find(target_sn) != std::string::npos) {
|
||||
found_index = i;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (found_index >= 0) {
|
||||
std::cout << "[TaskManager] Using MVS Camera Index " << found_index
|
||||
<< " (SN: " << target_sn << ") for Flag " << task_data.flag
|
||||
<< std::endl;
|
||||
|
||||
// 重试逻辑:尝试多次获取图像,应对相机刚启动第一次抓拍可能为空的情况
|
||||
for (int retry = 0; retry < 15; ++retry) {
|
||||
image_acquired = DeviceManager::getInstance().get2DCameraImage(
|
||||
found_index, color_img, fps);
|
||||
if (image_acquired && !color_img.empty()) {
|
||||
break;
|
||||
}
|
||||
std::cout << "[TaskManager] Waiting for image from camera "
|
||||
<< found_index << " (Retry " << retry + 1 << "/15)..."
|
||||
<< std::endl;
|
||||
std::this_thread::sleep_for(std::chrono::milliseconds(200));
|
||||
}
|
||||
|
||||
// depth_img 为空,视觉盘库算法只需要 color_img
|
||||
} else {
|
||||
std::cerr << "[TaskManager] Camera with SN " << target_sn
|
||||
<< " not found for Flag " << task_data.flag << "!" << std::endl;
|
||||
return false;
|
||||
}
|
||||
} else {
|
||||
// 未知 Flag
|
||||
std::cerr << "[TaskManager] Unknown task flag: " << task_data.flag
|
||||
<< std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 检查图像获取结果
|
||||
if (!image_acquired) {
|
||||
std::cerr
|
||||
<< "[TaskManager] Failed to get images from DeviceManager for flag "
|
||||
<< task_data.flag << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 执行检测算法
|
||||
bool success = false;
|
||||
try {
|
||||
std::cout << "[TaskManager] Invoking detector->execute..." << std::endl;
|
||||
success = detector->execute(depth_img, color_img, task_data.side, result,
|
||||
!point_cloud.empty() ? &point_cloud : nullptr,
|
||||
task_data.beam_length);
|
||||
std::cout << "[TaskManager] Detector returned: "
|
||||
<< (success ? "Success" : "Failure") << std::endl;
|
||||
} catch (const std::exception &e) {
|
||||
std::cerr << "[TaskManager] Exception during detection: " << e.what()
|
||||
<< std::endl;
|
||||
success = false;
|
||||
} catch (...) {
|
||||
std::cerr << "[TaskManager] Unknown exception during detection."
|
||||
<< std::endl;
|
||||
success = false;
|
||||
}
|
||||
|
||||
return success;
|
||||
}
|
||||
|
||||
std::shared_ptr<DetectionBase> TaskManager::getDetector(int flag) {
|
||||
auto it = detectors_.find(flag);
|
||||
if (it != detectors_.end()) {
|
||||
return it->second;
|
||||
}
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 处理检测结果
|
||||
*
|
||||
* 处理检测结果,包括:
|
||||
* 1. 添加警告和报警信号(根据阈值计算)
|
||||
* 2. 格式化为JSON字符串
|
||||
* 3. 写入Redis
|
||||
*
|
||||
* @param result 检测结果(原始结果,不会被修改)
|
||||
* @return true 处理成功,false 处理失败(Redis未连接、写入失败)
|
||||
*
|
||||
* @note 此函数会创建结果的副本,在副本上添加警告/报警信号,不修改原始结果
|
||||
* @note 结果写入Redis后,WMS系统可以读取并处理
|
||||
*/
|
||||
bool TaskManager::processResult(const DetectionResult &result) {
|
||||
// 检查Redis连接状态
|
||||
if (!redis_result_comm_ || !redis_result_comm_->isConnected()) {
|
||||
std::cerr << "[TaskManager] Redis not connected, cannot write result"
|
||||
<< std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// 创建结果副本,用于添加警告和报警信号
|
||||
// 使用副本避免修改原始结果,保持数据完整性
|
||||
DetectionResult processed_result = result;
|
||||
|
||||
// 添加警告和报警信号
|
||||
// 根据检测类型(flag)和阈值,计算每个测量值的警告/报警状态
|
||||
addWarningAlarmSignals(processed_result);
|
||||
|
||||
// 写入Redis
|
||||
// 将结果拆分为单独的Key写入Redis
|
||||
std::map<std::string, std::string> result_map = processed_result.toMap();
|
||||
bool success = true;
|
||||
|
||||
for (const auto &pair : result_map) {
|
||||
if (!redis_result_comm_->writeString(pair.first, pair.second)) {
|
||||
std::cerr << "[TaskManager] Failed to write key: " << pair.first
|
||||
<< std::endl;
|
||||
success = false;
|
||||
}
|
||||
}
|
||||
|
||||
// 结果写入完成后,清空触发键,避免程序重启后被旧任务自动触发
|
||||
// 约定:flag 置 0,side/time 置空字符串
|
||||
bool clear_ok = true;
|
||||
if (redis_task_comm_ && redis_task_comm_->isConnected()) {
|
||||
clear_ok &= redis_task_comm_->writeString("vision_task_flag", "0");
|
||||
clear_ok &= redis_task_comm_->writeString("vision_task_side", "");
|
||||
clear_ok &= redis_task_comm_->writeString("vision_task_time", "");
|
||||
} else {
|
||||
// 如果没有提供 task DB 的连接,则尝试用结果 DB 连接,但可能写不到正确DB
|
||||
clear_ok &= redis_result_comm_->writeString("vision_task_flag", "0");
|
||||
clear_ok &= redis_result_comm_->writeString("vision_task_side", "");
|
||||
clear_ok &= redis_result_comm_->writeString("vision_task_time", "");
|
||||
}
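  // 清空完成后,三个触发键的期望状态示例(用 redis-cli 查看;键名与上方写入一致,其余仅为示意):
  //   GET vision_task_flag  -> "0"
  //   GET vision_task_side  -> ""
  //   GET vision_task_time  -> ""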
|
||||
|
||||
if (success) {
|
||||
std::cout
|
||||
<< "[TaskManager] Detection result written to Redis (26 keys): type="
|
||||
<< processed_result.result_type << std::endl;
|
||||
} else {
|
||||
std::cerr << "[TaskManager] Failed to write some detection results"
|
||||
<< std::endl;
|
||||
}
|
||||
|
||||
if (!clear_ok) {
|
||||
std::cerr << "[TaskManager] Warning: failed to clear task trigger keys "
|
||||
"(vision_task_flag/side/time)."
|
||||
<< std::endl;
|
||||
}
|
||||
|
||||
return success;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 计算警告和报警信号(静态方法)
|
||||
*
|
||||
* 根据当前值和阈值,计算警告和报警状态
|
||||
* 阈值范围定义:
|
||||
* - A < B < C < D
|
||||
* - 正常范围:[B, C]
|
||||
* - 警告范围:[A, B) 或 (C, D]
|
||||
* - 报警范围:< A 或 > D
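 *
 * 例如,按当前实现中的默认阈值 A=-5, B=-3, C=3, D=5(示例值与下方硬编码一致):
 *   - value = 2  → 正常(warning=false, alarm=false)
 *   - value = 4  → 警告(warning=true, alarm=false)
 *   - value = 6  → 报警(warning=true, alarm=true)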
|
||||
*
|
||||
* @param value 当前测量值
|
||||
* @param threshold 阈值JSON字符串,格式:{"A":-5.0,"B":-3.0,"C":3.0,"D":5.0}
|
||||
* @param warning_alarm [输出]
|
||||
* 警告和报警信号JSON字符串,格式:{"warning":false,"alarm":false}
|
||||
*
|
||||
* @note 当前实现使用硬编码的阈值,后续应使用JSON库解析threshold参数
|
||||
* @note 报警时warning也设置为true(报警包含警告)
|
||||
* @note 这是静态方法,可以在不创建TaskManager实例的情况下调用
|
||||
*
|
||||
* @todo 使用JSON库(如nlohmann::json)解析threshold字符串
|
||||
*/
|
||||
void TaskManager::calculateWarningAlarm(float value,
|
||||
const std::string &threshold,
|
||||
std::string &warning_alarm) {
|
||||
// TODO: 使用JSON库解析threshold字符串
|
||||
// 当前使用简单的字符串解析方式
|
||||
// 假设threshold格式为: {"A":-5.0,"B":-3.0,"C":3.0,"D":5.0}
|
||||
|
||||
// 临时实现:使用硬编码的阈值(实际应使用JSON库解析threshold参数)
|
||||
// 后续实现示例:
|
||||
// nlohmann::json j = nlohmann::json::parse(threshold);
|
||||
// float A = j.value("A", -5.0f);
|
||||
// float B = j.value("B", -3.0f);
|
||||
// float C = j.value("C", 3.0f);
|
||||
// float D = j.value("D", 5.0f);
|
||||
float A = -5.0f, B = -3.0f, C = 3.0f, D = 5.0f;
|
||||
|
||||
bool warning = false; // 警告标志
|
||||
bool alarm = false; // 报警标志
|
||||
|
||||
// 判断警告和报警
|
||||
// 阈值范围:A < B < C < D
|
||||
// 正常范围:[B, C] - 无警告无报警
|
||||
// 警告范围:[A, B) 或 (C, D] - 有警告无报警
|
||||
// 报警范围:< A 或 > D - 有警告有报警
|
||||
|
||||
if (value < A || value > D) {
|
||||
// 超出报警阈值范围
|
||||
alarm = true; // 设置报警标志
|
||||
warning = true; // 报警时也设置警告标志(报警包含警告)
|
||||
} else if (value < B || value > C) {
|
||||
// 超出正常范围但在报警阈值内(警告范围)
|
||||
warning = true; // 只设置警告标志
|
||||
}
|
||||
// else: 在正常范围内 [B, C],warning和alarm都保持false
|
||||
|
||||
// 生成JSON字符串
|
||||
// 格式:{"warning":true/false,"alarm":true/false}
|
||||
std::ostringstream oss;
|
||||
oss << "{\"warning\":" << (warning ? "true" : "false")
|
||||
<< ",\"alarm\":" << (alarm ? "true" : "false") << "}";
|
||||
warning_alarm = oss.str();
|
||||
}
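// 下面是一段按上述 TODO 思路扩展的解析草稿(仅为示意,保持注释状态:
// 假设项目引入了 nlohmann::json 依赖,parseThresholds 为假想的辅助函数,并非现有接口):
//
// #include <nlohmann/json.hpp>
//
// // 从 {"A":-5.0,"B":-3.0,"C":3.0,"D":5.0} 解析四个阈值;
// // 解析失败时回退到当前实现使用的硬编码默认值。
// static bool parseThresholds(const std::string &threshold, float &A, float &B,
//                             float &C, float &D) {
//   try {
//     nlohmann::json j = nlohmann::json::parse(threshold);
//     A = j.value("A", -5.0f);
//     B = j.value("B", -3.0f);
//     C = j.value("C", 3.0f);
//     D = j.value("D", 5.0f);
//     return A < B && B < C && C < D; // 校验阈值顺序 A < B < C < D
//   } catch (const nlohmann::json::exception &) {
//     A = -5.0f; B = -3.0f; C = 3.0f; D = 5.0f;
//     return false;
//   }
// }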
|
||||
|
||||
/**
|
||||
* @brief 为结果添加警告和报警信号
|
||||
*
|
||||
* 根据检测类型(flag),为相应的测量值添加警告和报警信号
|
||||
* 支持的检测类型:
|
||||
* - Flag 2: 托盘位置偏移检测(5个测量值)
|
||||
* - Flag 3: 横梁变形检测(2个测量值)
|
||||
*
|
||||
* @param result [输入输出] 检测结果,会被修改(添加warning_alarm字段)
|
||||
*
|
||||
* @note 只有Flag 2和Flag 3需要计算警告/报警,其他flag的结果不处理
|
||||
* @note 每个测量值都有独立的阈值和警告/报警状态
|
||||
*/
|
||||
void TaskManager::addWarningAlarmSignals(DetectionResult &result) {
|
||||
// Flag 2: 托盘位置偏移检测
|
||||
// 包含5个测量值:左右偏移、前后偏移、左侧插孔变形、右侧插孔变形、旋转角度
|
||||
if (result.result_type == 2) {
|
||||
// 左右偏移量
|
||||
if (!result.offset_lat_mm_threshold.empty()) {
|
||||
calculateWarningAlarm(result.offset_lat_mm_value,
|
||||
result.offset_lat_mm_threshold,
|
||||
result.offset_lat_mm_warning_alarm);
|
||||
}
|
||||
|
||||
// 前后偏移量
|
||||
if (!result.offset_lon_mm_threshold.empty()) {
|
||||
calculateWarningAlarm(result.offset_lon_mm_value,
|
||||
result.offset_lon_mm_threshold,
|
||||
result.offset_lon_mm_warning_alarm);
|
||||
}
|
||||
|
||||
// 左侧插孔变形
|
||||
if (!result.hole_def_mm_left_threshold.empty()) {
|
||||
calculateWarningAlarm(result.hole_def_mm_left_value,
|
||||
result.hole_def_mm_left_threshold,
|
||||
result.hole_def_mm_left_warning_alarm);
|
||||
}
|
||||
|
||||
// 右侧插孔变形
|
||||
if (!result.hole_def_mm_right_threshold.empty()) {
|
||||
calculateWarningAlarm(result.hole_def_mm_right_value,
|
||||
result.hole_def_mm_right_threshold,
|
||||
result.hole_def_mm_right_warning_alarm);
|
||||
}
|
||||
|
||||
// 旋转角度
|
||||
if (!result.rotation_angle_threshold.empty()) {
|
||||
calculateWarningAlarm(result.rotation_angle_value,
|
||||
result.rotation_angle_threshold,
|
||||
result.rotation_angle_warning_alarm);
|
||||
}
|
||||
}
|
||||
|
||||
// Flag 3: 横梁和立柱变形检测
|
||||
if (result.result_type == 3) {
|
||||
// 横梁弯曲量
|
||||
if (!result.beam_def_mm_threshold.empty()) {
|
||||
calculateWarningAlarm(result.beam_def_mm_value,
|
||||
result.beam_def_mm_threshold,
|
||||
result.beam_def_mm_warning_alarm);
|
||||
}
|
||||
|
||||
// 立柱弯曲量
|
||||
if (!result.rack_def_mm_threshold.empty()) {
|
||||
calculateWarningAlarm(result.rack_def_mm_value,
|
||||
result.rack_def_mm_threshold,
|
||||
result.rack_def_mm_warning_alarm);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 视觉盘点循环执行函数
|
||||
*
|
||||
* 专门处理 flag=4 的视觉盘点任务,实现连续抓拍和去重逻辑
|
||||
* 直到收到 flag=5 的任务(或系统停止)时退出循环
|
||||
*/
|
||||
bool TaskManager::executeVisualInventoryLoop(
|
||||
const RedisTaskData &task_data, std::shared_ptr<DetectionBase> detector,
|
||||
DetectionResult &final_result) {
|
||||
if (!detector)
|
||||
return false;
|
||||
|
||||
// 1. 相机准备
|
||||
std::string target_sn = "DA8789631";
|
||||
int mvs_count = DeviceManager::getInstance().get2DCameraCount();
|
||||
int found_index = -1;
|
||||
|
||||
for (int i = 0; i < mvs_count; ++i) {
|
||||
std::string sn = DeviceManager::getInstance().get2DCameraId(i);
|
||||
if (sn.find(target_sn) != std::string::npos) {
|
||||
found_index = i;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (found_index < 0) {
|
||||
std::cerr << "[TaskManager] Camera with SN " << target_sn
|
||||
<< " not found for Visual Inventory!" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
std::cout << "[TaskManager] Starting Visual Inventory Loop using Camera "
|
||||
<< found_index << std::endl;
|
||||
|
||||
std::set<std::string> seen_codes;
|
||||
bool loop_running = true;
|
||||
|
||||
while (loop_running && running_) {
|
||||
// 2. 检查停止信号 (flag=5)
|
||||
{
|
||||
std::lock_guard<std::mutex> lock(task_queue_mutex_);
|
||||
if (!task_queue_.empty()) {
|
||||
// 检查队列头部是否为停止信号
|
||||
if (task_queue_.front().flag == 5) {
|
||||
task_queue_.pop(); // 消费停止信号
|
||||
std::cout << "[TaskManager] Visual Inventory Stopped by flag=5"
|
||||
<< std::endl;
|
||||
loop_running = false;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
DetectionResult result;
|
||||
cv::Mat color_img, depth_img; // depth为空
|
||||
double fps = 0.0;
|
||||
|
||||
// 3. 获取图像
|
||||
// 尝试获取图像,如果不成功则跳过本次检测
|
||||
bool image_acquired = DeviceManager::getInstance().get2DCameraImage(
|
||||
found_index, color_img, fps);
|
||||
|
||||
if (image_acquired && !color_img.empty()) {
|
||||
// 4. 执行检测
|
||||
try {
|
||||
if (detector->execute(depth_img, color_img, task_data.side, result)) {
|
||||
// 5. 结果处理与去重
|
||||
// 解析 result.result_barcodes (JSON格式)
|
||||
std::string json = result.result_barcodes;
|
||||
std::vector<std::string> new_codes;
|
||||
|
||||
// 简单的字符串查找解析
|
||||
size_t array_start = json.find('[');
|
||||
size_t array_end = json.find(']');
|
||||
|
||||
if (array_start != std::string::npos &&
|
||||
array_end != std::string::npos && array_end > array_start) {
|
||||
size_t pos = array_start + 1;
|
||||
while (pos < array_end) {
|
||||
size_t quote_start = json.find('"', pos);
|
||||
if (quote_start == std::string::npos || quote_start >= array_end)
|
||||
break;
|
||||
|
||||
size_t quote_end = json.find('"', quote_start + 1);
|
||||
if (quote_end == std::string::npos || quote_end >= array_end)
|
||||
break;
|
||||
|
||||
std::string code =
|
||||
json.substr(quote_start + 1, quote_end - quote_start - 1);
|
||||
|
||||
            // 未做反转义处理,直接按原始字符串进行去重匹配
|
||||
if (seen_codes.find(code) == seen_codes.end()) {
|
||||
seen_codes.insert(code);
|
||||
new_codes.push_back(code);
|
||||
}
|
||||
|
||||
pos = quote_end + 1;
|
||||
}
|
||||
}
|
||||
|
||||
// 如果有新识别的码,则上报
|
||||
if (!new_codes.empty()) {
|
||||
std::cout << "[TaskManager] Detected " << new_codes.size()
|
||||
<< " NEW codes." << std::endl;
|
||||
|
||||
// 重新构建JSON结果
|
||||
std::string new_json = "{\"" + task_data.side + "\":[";
|
||||
int idx = 0;
|
||||
for (const auto &code : seen_codes) {
|
||||
if (idx > 0)
|
||||
new_json += ",";
|
||||
new_json += "\"" + code + "\"";
|
||||
idx++;
|
||||
}
|
||||
new_json += "]}";
|
||||
|
||||
result.result_barcodes = new_json;
|
||||
processResult(result);
|
||||
}
|
||||
}
|
||||
} catch (const std::exception &e) {
|
||||
std::cerr << "[TaskManager] Exception during inventory detection: "
|
||||
<< e.what() << std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
// 6. 控制循环频率
|
||||
std::this_thread::sleep_for(std::chrono::milliseconds(200));
|
||||
}
|
||||
|
||||
final_result.result_status = "success";
|
||||
return true;
|
||||
}
|
||||
148
image_capture/src/task/task_manager.h
Normal file
148
image_capture/src/task/task_manager.h
Normal file
@@ -0,0 +1,148 @@
|
||||
#pragma once
|
||||
|
||||
#include "../algorithm/core/detection_result.h"
|
||||
#include "../redis/redis_communicator.h"
|
||||
#include "../redis/task_data.h"
|
||||
#include <opencv2/opencv.hpp>
|
||||
|
||||
class DetectionBase;
|
||||
#include <atomic>
|
||||
#include <condition_variable>
|
||||
#include <map>
|
||||
#include <memory>
|
||||
#include <mutex>
|
||||
#include <queue>
|
||||
#include <string>
|
||||
#include <thread>
|
||||
|
||||
|
||||
/**
|
||||
* @brief 任务管理器(合并了结果处理功能)
|
||||
*
|
||||
* 功能说明:
|
||||
* - 接收Redis任务触发
|
||||
* - 根据flag分发到对应的检测任务
|
||||
* - 管理任务执行状态
|
||||
* - 协调检测算法执行
|
||||
* - 处理检测结果(格式化、计算警告/报警、写入Redis)
|
||||
*/
|
||||
class TaskManager {
|
||||
public:
|
||||
/**
|
||||
* 任务执行状态
|
||||
*/
|
||||
enum class TaskStatus {
|
||||
IDLE, // 空闲
|
||||
RUNNING, // 执行中
|
||||
COMPLETED, // 已完成
|
||||
FAILED // 失败
|
||||
};
|
||||
|
||||
TaskManager();
|
||||
~TaskManager();
|
||||
|
||||
/**
|
||||
* 初始化任务管理器
|
||||
 * @param redis_result_comm Redis通信对象(用于写入检测结果)
 * @param redis_task_comm Redis通信对象(用于清空任务触发键,可为空)
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool
|
||||
initialize(std::shared_ptr<RedisCommunicator> redis_result_comm = nullptr,
|
||||
std::shared_ptr<RedisCommunicator> redis_task_comm = nullptr);
|
||||
|
||||
/**
|
||||
* 处理新任务
|
||||
* @param task_data 任务数据
|
||||
*/
|
||||
void handleTask(const RedisTaskData &task_data);
|
||||
|
||||
/**
|
||||
* 获取当前任务状态
|
||||
*/
|
||||
TaskStatus getCurrentTaskStatus() const;
|
||||
|
||||
/**
|
||||
* 获取最新检测结果
|
||||
* @param result [输出] 检测结果
|
||||
* @return 是否有结果
|
||||
*/
|
||||
bool getLatestResult(DetectionResult &result);
|
||||
|
||||
/**
|
||||
* 停止当前任务
|
||||
*/
|
||||
void stopCurrentTask();
|
||||
|
||||
/**
|
||||
* 计算警告和报警信号(静态方法,供外部调用)
|
||||
* @param value 当前值
|
||||
* @param threshold 阈值JSON字符串 {"A": -5.0, "B": -3.0, "C": 3.0, "D": 5.0}
|
||||
* @param warning_alarm [输出] 警告和报警信号JSON字符串 {"warning": false,
|
||||
* "alarm": false}
|
||||
*/
|
||||
static void calculateWarningAlarm(float value, const std::string &threshold,
|
||||
std::string &warning_alarm);
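  // 使用示例(输出格式与上方说明一致;当前实现忽略 threshold 内容,使用默认阈值):
  //   std::string wa;
  //   TaskManager::calculateWarningAlarm(
  //       4.2f, R"({"A":-5.0,"B":-3.0,"C":3.0,"D":5.0})", wa);
  //   // wa == "{\"warning\":true,\"alarm\":false}"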
|
||||
|
||||
private:
|
||||
/**
|
||||
* 任务执行线程函数
|
||||
*/
|
||||
void taskExecutionThreadFunc();
|
||||
|
||||
/**
|
||||
* 执行检测任务
|
||||
* @param task_data 任务数据
|
||||
* @param result [输出] 检测结果
|
||||
* @return 是否成功
|
||||
*/
|
||||
bool executeDetectionTask(const RedisTaskData &task_data,
|
||||
DetectionResult &result);
|
||||
|
||||
// 视觉盘点循环执行函数
|
||||
bool executeVisualInventoryLoop(const RedisTaskData &task_data,
|
||||
std::shared_ptr<DetectionBase> detector,
|
||||
DetectionResult &final_result);
|
||||
|
||||
/**
|
||||
* 获取指定类型的检测器
|
||||
* @param flag 任务类型(1~5)
|
||||
* @return 检测器指针,如果不存在返回nullptr
|
||||
*/
|
||||
std::shared_ptr<DetectionBase> getDetector(int flag);
|
||||
|
||||
/**
|
||||
* 处理检测结果(格式化、计算警告/报警、写入Redis)
|
||||
* @param result 检测结果
|
||||
* @return 是否成功处理
|
||||
*/
|
||||
bool processResult(const DetectionResult &result);
|
||||
|
||||
/**
|
||||
* 为结果添加警告和报警信号
|
||||
* @param result 检测结果(会被修改)
|
||||
*/
|
||||
void addWarningAlarmSignals(DetectionResult &result);
|
||||
|
||||
// 检测器映射表(flag -> detector)
|
||||
std::map<int, std::shared_ptr<DetectionBase>> detectors_;
|
||||
|
||||
// 任务队列
|
||||
std::queue<RedisTaskData> task_queue_;
|
||||
std::mutex task_queue_mutex_;
|
||||
std::condition_variable task_queue_cv_; // 条件变量,用于通知任务到达
|
||||
|
||||
// 当前任务状态
|
||||
std::atomic<TaskStatus> current_status_;
|
||||
std::mutex result_mutex_;
|
||||
DetectionResult latest_result_;
|
||||
|
||||
// 任务执行线程
|
||||
std::atomic<bool> running_;
|
||||
std::thread execution_thread_;
|
||||
|
||||
// Redis通信对象(用于写入结果和清空触发键)
|
||||
std::shared_ptr<RedisCommunicator>
|
||||
redis_result_comm_; // 写结果(通常在输出DB)
|
||||
std::shared_ptr<RedisCommunicator>
|
||||
redis_task_comm_; // 清空触发键(通常在输入DB)
|
||||
};
|
||||
36
image_capture/src/tools/calibration_tool/README.md
Normal file
36
image_capture/src/tools/calibration_tool/README.md
Normal file
@@ -0,0 +1,36 @@
|
||||
# Calibration Tool (标定工具)

## 简介 (Introduction)

本工具用于计算相机相对于特定平面(如地面或货架表面)的位姿(外参)。它通过读取深度图中的平面区域,拟合平面方程,计算出校正矩阵(Transformation Matrix)。

## 功能 (Features)

* **图像加载**:支持加载深度图(16-bit PNG/TIFF)和彩色图。
* **ROI 选择**:在彩色图上交互式选择矩形区域(4个点)。
* **自动映射**:将彩色图的 ROI 自动映射到深度图坐标系(支持手动缩放回退模式)。
* **平面拟合**:使用 RANSAC 算法从点云中拟合最佳平面。
* **结果保存**:将计算得到的变换矩阵保存为 JSON 文件,供主程序使用。

## 使用步骤 (Usage)

1. **加载参数**:点击 `加载标定参数` (Load Intrinsics),选择由 `intrinsic_dumper` 生成的 `intrinsics_<SN>.json` 文件。
2. **加载图像**:分别加载同一场景拍摄的 Color 图像和 Depth 图像。
3. **选择区域**:在 Color 图像显示区域,依次点击 4 个点,围成一个矩形区域(目标平面)。
4. **执行标定**:点击 `执行标定` (Execute Calibration)。
   * 工具会显示拟合的点数和平面方程。
   * 状态栏显示 `Calibration SUCCESS` 表示成功。
5. **保存结果**:点击 `保存结果` (Save Result)。
   * 默认文件名为 `calibration_result_<SN>.json`。

## 输出格式 (Output)

JSON 文件包含:

* `camera_id`: 相机序列号 (SN)。
* `transformation_matrix`: 4x4 变换矩阵(Row-major)。该矩阵表示从相机坐标系到世界坐标系(Reference Plane)的刚体变换 (Rotation + Translation)。
* `roi_points_depth`: 深度图上的有效 ROI 区域顶点。
* `calibration_time`: 标定时间。
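
下面是一个输出文件示例(字段取值仅为示意,并非真实标定数据):

```json
{
  "camera_id": "<SN>",
  "transformation_matrix": [
    1.0, 0.0, 0.0, 0.0,
    0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0,
    0.0, 0.0, 0.0, 1.0
  ],
  "roi_points_depth": [
    {"x": 120, "y": 80},
    {"x": 520, "y": 80},
    {"x": 520, "y": 400},
    {"x": 120, "y": 400}
  ],
  "calibration_time": "2025-01-01T12:00:00"
}
```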

### 术语解释

* **Extrinsics (外参)**:在计算机视觉中,通常指相机相对于世界坐标系(或另一个相机)的旋转和平移关系。本工具生成的 `transformation_matrix` 即描述相机相对于地面/货架的位姿,因此在广义上属于“外参”。

## 注意事项 (Notes)

* 如果相机缺少 RGB-Depth 外参,工具会自动使用 "Manual Scaling" 模式进行近似映射。
* 请确保选取区域平整且深度数据有效(避免全黑区域)。
731
image_capture/src/tools/calibration_tool/calibration_widget.cpp
Normal file
731
image_capture/src/tools/calibration_tool/calibration_widget.cpp
Normal file
@@ -0,0 +1,731 @@
|
||||
#ifndef NOMINMAX
|
||||
#define NOMINMAX
|
||||
#endif
|
||||
|
||||
#include "calibration_widget.h"
|
||||
#include <TYCoordinateMapper.h> // Include Mapper
|
||||
#include <open3d/Open3D.h>
|
||||
#include <QFileDialog>
|
||||
#include <QMessageBox>
|
||||
#include <QDebug>
|
||||
#include <QMouseEvent>
|
||||
#include <QPainter>
|
||||
#include <QJsonDocument>
|
||||
#include <QJsonObject>
|
||||
#include <QJsonArray>
|
||||
#include <QDateTime>
|
||||
#include <QApplication>
|
||||
#include <thread> // New
|
||||
|
||||
// Placeholder for now
|
||||
CalibrationWidget::CalibrationWidget(QWidget *parent) : QWidget(parent) {
|
||||
setupUi();
|
||||
|
||||
// Init SDK for CoordinateMapper math functions
|
||||
TY_STATUS status = TYInitLib();
|
||||
if (status != TY_STATUS_OK) {
|
||||
QMessageBox::warning(this, "Error", "Failed to initialize TY SDK. Calibration might crash.");
|
||||
}
|
||||
|
||||
has_calibration_result_ = false;
|
||||
is_selecting_roi_ = true;
|
||||
has_calib_params_ = false;
|
||||
|
||||
std::memset(calibration_matrix_, 0, sizeof(calibration_matrix_));
|
||||
|
||||
// Install event filter for mouse interaction
|
||||
label_color_display_->installEventFilter(this);
|
||||
}
|
||||
|
||||
CalibrationWidget::~CalibrationWidget() {
|
||||
TYDeinitLib();
|
||||
}
|
||||
|
||||
bool CalibrationWidget::eventFilter(QObject *obj, QEvent *event) {
|
||||
if (obj == label_color_display_ && event->type() == QEvent::MouseButtonPress) {
|
||||
QMouseEvent *mouseEvent = static_cast<QMouseEvent*>(event);
|
||||
if (mouseEvent->button() == Qt::LeftButton && is_selecting_roi_) {
|
||||
// Coordinate mapping: Label -> Image
|
||||
if (mat_color_raw_.empty()) return false;
|
||||
|
||||
// Calculate scale
|
||||
double scale_x = (double)label_color_display_->width() / mat_color_raw_.cols;
|
||||
double scale_y = (double)label_color_display_->height() / mat_color_raw_.rows;
|
||||
double scale = std::min(scale_x, scale_y);
|
||||
|
||||
int offset_x = (label_color_display_->width() - mat_color_raw_.cols * scale) / 2;
|
||||
int offset_y = (label_color_display_->height() - mat_color_raw_.rows * scale) / 2;
|
||||
|
||||
int img_x = (mouseEvent->pos().x() - offset_x) / scale;
|
||||
int img_y = (mouseEvent->pos().y() - offset_y) / scale;
|
||||
|
||||
// Append point
|
||||
if (img_x >= 0 && img_x < mat_color_raw_.cols && img_y >= 0 && img_y < mat_color_raw_.rows) {
|
||||
if (roi_points_color_.size() >= 4) roi_points_color_.clear(); // Reset if full
|
||||
roi_points_color_.push_back(cv::Point(img_x, img_y));
|
||||
updateDisplay();
|
||||
log("Added ROI point: (" + QString::number(img_x) + ", " + QString::number(img_y) + ")");
|
||||
}
|
||||
}
|
||||
return true;
|
||||
}
|
||||
return QWidget::eventFilter(obj, event);
|
||||
}
|
||||
|
||||
|
||||
void CalibrationWidget::setupUi() {
|
||||
QVBoxLayout *main_layout = new QVBoxLayout(this);
|
||||
|
||||
// Top Control Panel
|
||||
QVBoxLayout *top_panel_layout = new QVBoxLayout(); // Container for 2 rows
|
||||
|
||||
// Row 1: Source / Input
|
||||
QHBoxLayout *input_layout = new QHBoxLayout();
|
||||
|
||||
// Removed Camera Controls
|
||||
|
||||
btn_load_color_ = new QPushButton("加载彩色图", this);
|
||||
btn_load_depth_ = new QPushButton("加载深度图", this);
|
||||
btn_load_calib_ = new QPushButton("加载标定参数", this);
|
||||
|
||||
input_layout->addWidget(btn_load_color_);
|
||||
input_layout->addWidget(btn_load_depth_);
|
||||
input_layout->addWidget(btn_load_calib_);
|
||||
input_layout->addStretch();
|
||||
|
||||
// Row 2: Actions
|
||||
QHBoxLayout *action_layout = new QHBoxLayout();
|
||||
btn_run_calibration_ = new QPushButton("执行标定", this);
|
||||
btn_view_3d_ = new QPushButton("查看3D", this);
|
||||
btn_save_result_ = new QPushButton("保存结果", this);
|
||||
|
||||
action_layout->addWidget(btn_run_calibration_);
|
||||
action_layout->addWidget(btn_view_3d_);
|
||||
action_layout->addWidget(btn_save_result_);
|
||||
action_layout->addStretch(); // Align Left
|
||||
|
||||
top_panel_layout->addLayout(input_layout);
|
||||
top_panel_layout->addLayout(action_layout);
|
||||
|
||||
main_layout->addLayout(top_panel_layout);
|
||||
|
||||
// Image Display Area
|
||||
QHBoxLayout *display_layout = new QHBoxLayout();
|
||||
label_color_display_ = new QLabel("Color Image (Click to select ROI)", this);
|
||||
label_color_display_->setMinimumSize(640, 480);
|
||||
label_color_display_->setAlignment(Qt::AlignCenter);
|
||||
label_color_display_->setStyleSheet("border: 1px solid gray;");
|
||||
|
||||
label_depth_display_ = new QLabel("Depth Image", this);
|
||||
label_depth_display_->setMinimumSize(640, 480);
|
||||
label_depth_display_->setAlignment(Qt::AlignCenter);
|
||||
label_depth_display_->setStyleSheet("border: 1px solid gray;");
|
||||
|
||||
display_layout->addWidget(label_color_display_);
|
||||
display_layout->addWidget(label_depth_display_);
|
||||
|
||||
main_layout->addLayout(display_layout);
|
||||
|
||||
// Log Area
|
||||
text_log_ = new QTextEdit(this);
|
||||
text_log_->setMaximumHeight(150);
|
||||
text_log_->setReadOnly(true);
|
||||
main_layout->addWidget(text_log_);
|
||||
|
||||
// Connections
|
||||
connect(btn_load_depth_, &QPushButton::clicked, this, &CalibrationWidget::loadDepthImage);
|
||||
connect(btn_load_color_, &QPushButton::clicked, this, &CalibrationWidget::loadColorImage);
|
||||
// connect(btn_capture_, &QPushButton::clicked, this, &CalibrationWidget::captureImage); // Removed
|
||||
// connect(btn_refresh_cameras_, &QPushButton::clicked, this, &CalibrationWidget::refreshCameraList); // Removed
|
||||
connect(btn_load_calib_, &QPushButton::clicked, this, &CalibrationWidget::loadCalibParams);
|
||||
connect(btn_run_calibration_, &QPushButton::clicked, this, &CalibrationWidget::runCalibration);
|
||||
connect(btn_save_result_, &QPushButton::clicked, this, &CalibrationWidget::saveCalibrationResult);
|
||||
connect(btn_view_3d_, &QPushButton::clicked, this, &CalibrationWidget::view3DCloud);
|
||||
|
||||
log("Ready. Please load images.");
|
||||
}
|
||||
|
||||
void CalibrationWidget::log(const QString& msg) {
|
||||
text_log_->append(msg);
|
||||
}
|
||||
|
||||
// Helper for locking UI
|
||||
void CalibrationWidget::setUiLocked(bool locked) {
|
||||
bool enabled = !locked;
|
||||
btn_load_depth_->setEnabled(enabled);
|
||||
btn_load_color_->setEnabled(enabled);
|
||||
// btn_capture_->setEnabled(enabled);
|
||||
btn_load_calib_->setEnabled(enabled);
|
||||
btn_run_calibration_->setEnabled(enabled);
|
||||
btn_view_3d_->setEnabled(enabled);
|
||||
btn_save_result_->setEnabled(enabled);
|
||||
// btn_refresh_cameras_->setEnabled(enabled);
|
||||
// combo_camera_list_->setEnabled(enabled);
|
||||
}
|
||||
|
||||
void CalibrationWidget::loadDepthImage() {
|
||||
QString fileName = QFileDialog::getOpenFileName(this, "Open Depth Image", "", "Images (*.png *.tif *.tiff)");
|
||||
if (fileName.isEmpty()) return;
|
||||
|
||||
setUiLocked(true);
|
||||
log("Loading depth image...");
|
||||
|
||||
std::thread([this, fileName]() {
|
||||
cv::Mat loaded = cv::imread(fileName.toStdString(), cv::IMREAD_UNCHANGED);
|
||||
|
||||
QMetaObject::invokeMethod(this, [this, fileName, loaded]() {
|
||||
if (loaded.empty()) {
|
||||
log("Error: Failed to load depth image.");
|
||||
} else {
|
||||
mat_depth_raw_ = loaded;
|
||||
log("Loaded depth image: " + fileName + " (" + QString::number(mat_depth_raw_.cols) + "x" + QString::number(mat_depth_raw_.rows) + ")");
|
||||
updateDisplay();
|
||||
}
|
||||
setUiLocked(false);
|
||||
}, Qt::QueuedConnection);
|
||||
}).detach();
|
||||
}
|
||||
|
||||
void CalibrationWidget::loadColorImage() {
|
||||
QString fileName = QFileDialog::getOpenFileName(this, "Open Color Image", "", "Images (*.png *.jpg *.jpeg *.bmp)");
|
||||
if (fileName.isEmpty()) return;
|
||||
|
||||
setUiLocked(true);
|
||||
log("Loading color image...");
|
||||
|
||||
std::thread([this, fileName]() {
|
||||
cv::Mat loaded = cv::imread(fileName.toStdString());
|
||||
|
||||
QMetaObject::invokeMethod(this, [this, fileName, loaded]() {
|
||||
if (loaded.empty()) {
|
||||
log("Error: Failed to load color image.");
|
||||
} else {
|
||||
mat_color_raw_ = loaded;
|
||||
log("Loaded color image: " + fileName);
|
||||
updateDisplay();
|
||||
}
|
||||
setUiLocked(false);
|
||||
}, Qt::QueuedConnection);
|
||||
}).detach();
|
||||
}
|
||||
|
||||
void CalibrationWidget::updateDisplay() {
|
||||
if (!mat_color_raw_.empty()) {
|
||||
// Draw ROI if exists
|
||||
cv::Mat display = mat_color_raw_.clone();
|
||||
if (roi_points_color_.size() > 0) {
|
||||
for (size_t i = 0; i < roi_points_color_.size(); ++i) {
|
||||
cv::circle(display, roi_points_color_[i], 5, cv::Scalar(0, 0, 255), -1);
|
||||
if (i > 0) {
|
||||
cv::line(display, roi_points_color_[i-1], roi_points_color_[i], cv::Scalar(0, 255, 0), 2);
|
||||
}
|
||||
}
|
||||
if (roi_points_color_.size() == 4) {
|
||||
cv::line(display, roi_points_color_[3], roi_points_color_[0], cv::Scalar(0, 255, 0), 2);
|
||||
}
|
||||
}
|
||||
|
||||
QImage img = cvMatToQImage(display);
|
||||
label_color_display_->setPixmap(QPixmap::fromImage(img).scaled(label_color_display_->size(), Qt::KeepAspectRatio));
|
||||
}
|
||||
|
||||
if (!mat_depth_raw_.empty()) {
|
||||
// Normalize for display
|
||||
cv::Mat display;
|
||||
cv::normalize(mat_depth_raw_, display, 0, 255, cv::NORM_MINMAX, CV_8U);
|
||||
cv::cvtColor(display, display, cv::COLOR_GRAY2BGR); // Fake color
|
||||
|
||||
QImage img = cvMatToQImage(display);
|
||||
label_depth_display_->setPixmap(QPixmap::fromImage(img).scaled(label_depth_display_->size(), Qt::KeepAspectRatio));
|
||||
}
|
||||
}
|
||||
|
||||
QImage CalibrationWidget::cvMatToQImage(const cv::Mat& mat) {
|
||||
if (mat.type() == CV_8UC3) {
|
||||
// BGR -> RGB
|
||||
cv::Mat rgb;
|
||||
cv::cvtColor(mat, rgb, cv::COLOR_BGR2RGB);
|
||||
QImage img((const uchar*)rgb.data, rgb.cols, rgb.rows, rgb.step, QImage::Format_RGB888);
|
||||
return img.copy();
|
||||
} else if (mat.type() == CV_8UC1) {
|
||||
QImage img((const uchar*)mat.data, mat.cols, mat.rows, mat.step, QImage::Format_Grayscale8);
|
||||
return img.copy();
|
||||
}
|
||||
return QImage();
|
||||
}
|
||||
|
||||
|
||||
// Helper struct for camera info - REMOVED
|
||||
// refreshCameraList - REMOVED
|
||||
// captureImage - REMOVED
|
||||
|
||||
void CalibrationWidget::loadCalibParams() {
|
||||
QString fileName = QFileDialog::getOpenFileName(this, "Load Intrinsics JSON", "", "JSON (*.json)");
|
||||
if (fileName.isEmpty()) return;
|
||||
|
||||
QFile file(fileName);
|
||||
if (!file.open(QIODevice::ReadOnly)) {
|
||||
log("Error: Could not open file: " + fileName);
|
||||
return;
|
||||
}
|
||||
|
||||
QByteArray data = file.readAll();
|
||||
QJsonDocument doc = QJsonDocument::fromJson(data);
|
||||
if (doc.isNull()) {
|
||||
log("Error: Invalid JSON format.");
|
||||
return;
|
||||
}
|
||||
|
||||
QJsonObject root = doc.object();
|
||||
|
||||
auto parseCalib = [](const QJsonObject& obj, TY_CAMERA_CALIB_INFO& info) {
|
||||
if (obj.contains("intrinsic")) {
|
||||
QJsonArray arr = obj["intrinsic"].toArray();
|
||||
for(int i=0; i<9 && i<arr.size(); ++i) info.intrinsic.data[i] = (float)arr[i].toDouble();
|
||||
}
|
||||
if (obj.contains("extrinsic")) {
|
||||
QJsonArray arr = obj["extrinsic"].toArray();
|
||||
for(int i=0; i<16 && i<arr.size(); ++i) info.extrinsic.data[i] = (float)arr[i].toDouble();
|
||||
}
|
||||
if (obj.contains("distortion")) {
|
||||
QJsonArray arr = obj["distortion"].toArray();
|
||||
for(int i=0; i<12 && i<arr.size(); ++i) info.distortion.data[i] = (float)arr[i].toDouble();
|
||||
}
|
||||
};
|
||||
|
||||
if (root.contains("depth")) {
|
||||
parseCalib(root["depth"].toObject(), depth_calib_);
|
||||
} else {
|
||||
log("Warning: JSON missing 'depth' node.");
|
||||
}
|
||||
|
||||
if (root.contains("color")) {
|
||||
parseCalib(root["color"].toObject(), color_calib_);
|
||||
} else {
|
||||
log("Warning: JSON missing 'color' node.");
|
||||
}
|
||||
|
||||
// Parse SN from JSON if available, or fall back to filename
|
||||
QString sn = "";
|
||||
if (root.contains("camera_id")) {
|
||||
// Try getting it directly from JSON first
|
||||
sn = root["camera_id"].toString();
|
||||
}
|
||||
|
||||
// If not in JSON, try parsing from filename (e.g., intrinsics_207000146458.json)
|
||||
if (sn.isEmpty()) {
|
||||
QFileInfo fi(fileName);
|
||||
QString baseName = fi.baseName(); // intrinsics_207000146458
|
||||
QStringList parts = baseName.split('_');
|
||||
if (parts.size() >= 2) {
|
||||
sn = parts.last(); // Assume SN is the last part
|
||||
}
|
||||
}
|
||||
current_camera_sn_ = sn;
|
||||
|
||||
has_calib_params_ = true;
|
||||
|
||||
// Log loaded values for verification
|
||||
log(QString("Loaded Calibration Parameters for Camera SN: %1").arg(current_camera_sn_));
|
||||
auto logIntr = [&](const char* name, const TY_CAMERA_CALIB_INFO& info) {
|
||||
log(QString("%1 Intrinsic: fx=%2 fy=%3 cx=%4 cy=%5").arg(name)
|
||||
.arg(info.intrinsic.data[0]).arg(info.intrinsic.data[4])
|
||||
.arg(info.intrinsic.data[2]).arg(info.intrinsic.data[5]));
|
||||
log(QString("%1 Distortion: k1=%2 k2=%3 p1=%4 p2=%5 k3=%6").arg(name)
|
||||
.arg(info.distortion.data[0]).arg(info.distortion.data[1])
|
||||
.arg(info.distortion.data[2]).arg(info.distortion.data[3])
|
||||
.arg(info.distortion.data[4]));
|
||||
};
|
||||
logIntr("Depth", depth_calib_);
|
||||
logIntr("Color", color_calib_);
|
||||
|
||||
log("Loaded Calibration Parameters from " + fileName);
|
||||
}
|
||||
|
||||
|
||||
|
||||
void CalibrationWidget::view3DCloud() {
|
||||
if (!has_calibration_result_ || roi_points_depth_.empty()) {
|
||||
log("Error: No calibration result or ROI points. Run calibration first.");
|
||||
return;
|
||||
}
|
||||
|
||||
log("Generating 3D Visualization...");
|
||||
QApplication::processEvents();
|
||||
|
||||
// 1. Reconstruct Point Cloud from ROI
|
||||
auto pcd_raw = std::make_shared<open3d::geometry::PointCloud>();
|
||||
auto pcd_corrected = std::make_shared<open3d::geometry::PointCloud>();
|
||||
|
||||
float fx = depth_calib_.intrinsic.data[0];
|
||||
float fy = depth_calib_.intrinsic.data[4];
|
||||
float cx = depth_calib_.intrinsic.data[2];
|
||||
float cy = depth_calib_.intrinsic.data[5];
|
||||
|
||||
// Rebuild cloud loop (same as runCalibration)
|
||||
cv::Rect bounding_box = cv::boundingRect(roi_points_depth_);
|
||||
int start_y = std::max(0, bounding_box.y);
|
||||
int end_y = std::min(mat_depth_raw_.rows, bounding_box.y + bounding_box.height);
|
||||
int start_x = std::max(0, bounding_box.x);
|
||||
int end_x = std::min(mat_depth_raw_.cols, bounding_box.x + bounding_box.width);
|
||||
|
||||
// Prepare Transform Matrix
|
||||
Eigen::Matrix4d T_mat = Eigen::Matrix4d::Identity();
|
||||
for(int i=0; i<4; ++i)
|
||||
for(int j=0; j<4; ++j)
|
||||
T_mat(i,j) = (double)calibration_matrix_[i*4+j];
|
||||
|
||||
std::vector<double> z_values;
|
||||
|
||||
// 3D Visualization: Raw (RGB) vs Corrected (Heatmap)
|
||||
double scale_x = (double)mat_depth_raw_.cols / (double)mat_color_raw_.cols;
|
||||
double scale_y = (double)mat_depth_raw_.rows / (double)mat_color_raw_.rows;
|
||||
|
||||
for (int y = start_y; y < end_y; ++y) {
|
||||
for (int x = start_x; x < end_x; ++x) {
|
||||
if (cv::pointPolygonTest(roi_points_depth_, cv::Point2f(x, y), false) < 0) continue;
|
||||
|
||||
uint16_t d = mat_depth_raw_.at<uint16_t>(y, x);
|
||||
if (d == 0) continue;
|
||||
|
||||
double z_mm = (double)d;
|
||||
double x_mm = (x - cx) * z_mm / fx;
|
||||
double y_mm = (y - cy) * z_mm / fy;
|
||||
|
||||
if (std::isnan(x_mm) || std::isnan(y_mm) || std::isnan(z_mm)) continue;
|
||||
|
||||
Eigen::Vector3d pt_raw(x_mm, y_mm, z_mm);
|
||||
|
||||
// Add to Raw Cloud with RGB Colors
|
||||
pcd_raw->points_.push_back(pt_raw);
|
||||
|
||||
// Map depth pixel to color pixel
|
||||
int col_x = std::min(std::max(0, (int)(x / scale_x)), mat_color_raw_.cols - 1);
|
||||
int col_y = std::min(std::max(0, (int)(y / scale_y)), mat_color_raw_.rows - 1);
|
||||
cv::Vec3b bgr = mat_color_raw_.at<cv::Vec3b>(col_y, col_x);
|
||||
pcd_raw->colors_.push_back(Eigen::Vector3d(bgr[2]/255.0, bgr[1]/255.0, bgr[0]/255.0)); // BGR->RGB
|
||||
|
||||
// Transform and Add to Corrected Cloud
|
||||
Eigen::Vector4d pt_h(x_mm, y_mm, z_mm, 1.0);
|
||||
Eigen::Vector4d pt_trans = T_mat * pt_h;
|
||||
|
||||
// Shift corrected cloud to side for comparison (e.g., +1000mm in X)
|
||||
pcd_corrected->points_.push_back(pt_trans.head<3>() + Eigen::Vector3d(1000.0, 0, 0));
|
||||
|
||||
z_values.push_back(pt_trans.z());
|
||||
}
|
||||
}
|
||||
|
||||
if (z_values.empty()) {
|
||||
log("Error: No valid points in ROI.");
|
||||
return;
|
||||
}
|
||||
|
||||
// 2. Compute Statistics & Heatmap Coloring
|
||||
double sum_z = 0.0;
|
||||
for(double z : z_values) sum_z += z;
|
||||
double mean_z = sum_z / z_values.size();
|
||||
|
||||
double sq_sum = 0.0;
|
||||
for(double z : z_values) sq_sum += (z - mean_z) * (z - mean_z);
|
||||
double std_z = std::sqrt(sq_sum / z_values.size());
|
||||
|
||||
// Color Corrected cloud based on deviation from Mean Z
|
||||
for (double z : z_values) {
|
||||
double diff = std::abs(z - mean_z);
|
||||
// Simple Heatmap: Green (0 error) -> Red (error > 2mm)
|
||||
double ratio = std::min(1.0, diff / 2.0);
|
||||
pcd_corrected->colors_.push_back(Eigen::Vector3d(ratio, 1.0 - ratio, 0.0));
|
||||
}
|
||||
|
||||
log(QString("=== Validation Statistics ==="));
|
||||
log(QString("Point Count: %1").arg(z_values.size()));
|
||||
log(QString("Mean Z (Corrected): %1 mm (Target: ~0)").arg(mean_z, 0, 'f', 4));
|
||||
log(QString("StdDev Z (Flatness): %1 mm").arg(std_z, 0, 'f', 4));
|
||||
|
||||
// Quality Assessment (Focus on Flatness only, since Z is distance)
|
||||
if (std_z < 2.0) {
|
||||
log("Result: EXCELLENT. Plane is flat.");
|
||||
} else if (std_z < 5.0) {
|
||||
log("Result: GOOD. Minor noise.");
|
||||
} else {
|
||||
log("Result: WARNING. Plane may be curved or noisy.");
|
||||
|
||||
}
|
||||
|
||||
// 3. Visualize
|
||||
log("Opening 3D Viewer...");
|
||||
open3d::visualization::DrawGeometries(
|
||||
{pcd_raw, pcd_corrected},
|
||||
"Calibration Verification (Red: Raw, Green: Corrected)",
|
||||
1280, 720
|
||||
);
|
||||
log("Viewer closed.");
|
||||
}
|
||||
|
||||
// Synchronous implementation to avoid threading crashes
|
||||
void CalibrationWidget::runCalibration() {
|
||||
log("=== runCalibration() CALLED ===");
|
||||
|
||||
if (roi_points_color_.size() < 4) {
|
||||
log("Error: Please select 4 points for ROI on Color Image.");
|
||||
return;
|
||||
}
|
||||
if (!has_calib_params_) {
|
||||
log("Error: Calibration parameters not loaded.");
|
||||
return;
|
||||
}
|
||||
if (mat_depth_raw_.empty()) {
|
||||
log("Error: Depth image not loaded.");
|
||||
return;
|
||||
}
|
||||
|
||||
setUiLocked(true);
|
||||
log("Starting Calibration (Synchronous)...");
|
||||
|
||||
// Force UI update
|
||||
QApplication::processEvents();
|
||||
|
||||
try {
|
||||
// Validation
|
||||
if (mat_depth_raw_.type() != CV_16UC1) {
|
||||
throw std::runtime_error("Depth image must be 16-bit (CV_16UC1)");
|
||||
}
|
||||
if (depth_calib_.intrinsic.data[0] < 1e-6 || depth_calib_.intrinsic.data[4] < 1e-6) {
|
||||
throw std::runtime_error("Invalid depth intrinsics (fx/fy is zero)");
|
||||
}
|
||||
|
||||
// 1. Map Color ROI to Depth ROI (Manual Fallback)
|
||||
log("Mapping ROI (Manual Scaling)...");
|
||||
QApplication::processEvents();
|
||||
|
||||
std::vector<cv::Point> res_depth_roi;
|
||||
|
||||
if (mat_color_raw_.empty() || mat_depth_raw_.empty()) {
|
||||
throw std::runtime_error("Images empty during mapping");
|
||||
}
|
||||
|
||||
double scale_x = (double)mat_depth_raw_.cols / (double)mat_color_raw_.cols;
|
||||
double scale_y = (double)mat_depth_raw_.rows / (double)mat_color_raw_.rows;
|
||||
|
||||
log(QString("Mapping Scale: X=%1, Y=%2").arg(scale_x).arg(scale_y));
|
||||
|
||||
for (const auto& p : roi_points_color_) {
|
||||
int cx = (int)(p.x * scale_x);
|
||||
int cy = (int)(p.y * scale_y);
|
||||
// Clamp
|
||||
cx = std::max(0, std::min(cx, mat_depth_raw_.cols - 1));
|
||||
cy = std::max(0, std::min(cy, mat_depth_raw_.rows - 1));
|
||||
res_depth_roi.push_back(cv::Point(cx, cy));
|
||||
}
|
||||
|
||||
// 2. Build Point Cloud from ROI
|
||||
log("Building Point Cloud...");
|
||||
QApplication::processEvents();
|
||||
|
||||
auto pcd = std::make_shared<open3d::geometry::PointCloud>();
|
||||
cv::Rect bounding_box = cv::boundingRect(res_depth_roi);
|
||||
|
||||
float fx = depth_calib_.intrinsic.data[0];
|
||||
float fy = depth_calib_.intrinsic.data[4];
|
||||
float cx = depth_calib_.intrinsic.data[2];
|
||||
float cy = depth_calib_.intrinsic.data[5];
|
||||
|
||||
int valid_points = 0;
|
||||
int start_y = std::max(0, bounding_box.y);
|
||||
int end_y = std::min(mat_depth_raw_.rows, bounding_box.y + bounding_box.height);
|
||||
int start_x = std::max(0, bounding_box.x);
|
||||
int end_x = std::min(mat_depth_raw_.cols, bounding_box.x + bounding_box.width);
|
||||
|
||||
for (int y = start_y; y < end_y; ++y) {
|
||||
for (int x = start_x; x < end_x; ++x) {
|
||||
if (cv::pointPolygonTest(res_depth_roi, cv::Point2f(x, y), false) < 0) continue;
|
||||
|
||||
uint16_t d = mat_depth_raw_.at<uint16_t>(y, x);
|
||||
if (d == 0) continue;
|
||||
|
||||
double z_mm = (double)d;
|
||||
double x_mm = (x - cx) * z_mm / fx;
|
||||
double y_mm = (y - cy) * z_mm / fy;
|
||||
|
||||
if (std::isnan(x_mm) || std::isnan(y_mm) || std::isnan(z_mm)) continue;
|
||||
|
||||
pcd->points_.emplace_back(Eigen::Vector3d(x_mm, y_mm, z_mm));
|
||||
valid_points++;
|
||||
}
|
||||
}
|
||||
|
||||
log(QString("Valid Points: %1").arg(valid_points));
|
||||
QApplication::processEvents();
|
||||
|
||||
if (valid_points < 100) {
|
||||
throw std::runtime_error("Too few valid points (<100) in selected ROI");
|
||||
}
|
||||
|
||||
// 3. RANSAC Plane Fitting
|
||||
log("Fitting Plane (RANSAC)...");
|
||||
QApplication::processEvents();
|
||||
|
||||
std::vector<size_t> inliers;
|
||||
Eigen::Vector4d plane_model;
|
||||
|
||||
// Restore RANSAC
|
||||
std::tie(plane_model, inliers) = pcd->SegmentPlane(2.0, 3, 1000);
|
||||
|
||||
log(QString("RANSAC Inliers: %1").arg(inliers.size()));
|
||||
if (inliers.size() < 10) {
|
||||
throw std::runtime_error("RANSAC failed to find a valid plane");
|
||||
}
|
||||
|
||||
|
||||
// 4. Compute Rotation Matrix
|
||||
double A = plane_model[0], B = plane_model[1], C = plane_model[2], D = plane_model[3];
|
||||
log(QString("Plane Equation: %1x + %2y + %3z + %4 = 0")
|
||||
.arg(A).arg(B).arg(C).arg(D));
|
||||
|
||||
Eigen::Vector3d normal(A, B, C);
|
||||
normal.normalize();
|
||||
Eigen::Vector3d target(0, 0, 1); // Z-axis
|
||||
|
||||
Eigen::Matrix4d T_mat = Eigen::Matrix4d::Identity();
|
||||
|
||||
if (std::abs(normal.dot(target)) < 0.999) {
|
||||
Eigen::Matrix3d R = Eigen::Quaterniond::FromTwoVectors(normal, target).toRotationMatrix();
|
||||
T_mat.block<3,3>(0,0) = R;
|
||||
}
|
||||
|
||||
// Z-offset removed as per user request.
|
||||
// The transformation will align the plane normal to Z-axis but keep the original distance.
|
||||
log("Skipping Z offset adjustment (User Requested). Plane remains at original distance.");
|
||||
|
||||
// 5. Update Result
|
||||
roi_points_depth_ = res_depth_roi;
|
||||
for(int i=0; i<4; ++i)
|
||||
for(int j=0; j<4; ++j)
|
||||
calibration_matrix_[i*4+j] = (float)T_mat(i,j);
|
||||
|
||||
has_calibration_result_ = true;
|
||||
log("Calibration SUCCESS!");
|
||||
QMessageBox::information(this, "Success", "Calibration completed successfully.");
|
||||
|
||||
} catch (const std::exception& e) {
|
||||
log(QString("Calibration FAILED: %1").arg(e.what()));
|
||||
QMessageBox::critical(this, "Calibration Failed", e.what());
|
||||
} catch (...) {
|
||||
log("Calibration FAILED: Unknown error");
|
||||
QMessageBox::critical(this, "Calibration Failed", "Unknown error occurred.");
|
||||
}
|
||||
|
||||
setUiLocked(false);
|
||||
}
|
||||
|
||||
|
||||
bool CalibrationWidget::mapColorRoiToDepth(const std::vector<cv::Point>& color_roi, std::vector<cv::Point>& depth_roi) {
|
||||
if (color_roi.empty()) return false;
|
||||
depth_roi.clear();
|
||||
|
||||
// Prepare input for SDK
|
||||
std::vector<TY_PIXEL_COLOR_DESC> src_pixels(color_roi.size());
|
||||
for (size_t i = 0; i < color_roi.size(); ++i) {
|
||||
src_pixels[i].x = color_roi[i].x;
|
||||
src_pixels[i].y = color_roi[i].y;
|
||||
// BGR values are not strictly needed for coordinate mapping but struct requires them
|
||||
src_pixels[i].bgr_ch1 = 0;
|
||||
src_pixels[i].bgr_ch2 = 0;
|
||||
src_pixels[i].bgr_ch3 = 0;
|
||||
}
|
||||
|
||||
std::vector<TY_PIXEL_COLOR_DESC> dst_pixels(color_roi.size());
|
||||
|
||||
// Call SDK Mapping
|
||||
// Note: We need the raw depth buffer. Since mat_depth_raw_ is 16UC1, pointer cast is safe.
|
||||
TY_STATUS status = TYMapRGBPixelsToDepthCoordinate(
|
||||
&depth_calib_,
|
||||
mat_depth_raw_.cols, mat_depth_raw_.rows, (const uint16_t*)mat_depth_raw_.data,
|
||||
&color_calib_,
|
||||
mat_color_raw_.cols, mat_color_raw_.rows,
|
||||
src_pixels.data(), (uint32_t)src_pixels.size(),
|
||||
100, 10000, // min, max dist (mm)
|
||||
dst_pixels.data(),
|
||||
1.0f // scale
|
||||
);
|
||||
|
||||
if (status != TY_STATUS_OK) {
|
||||
log("TYMapRGBPixelsToDepthCoordinate failed: " + QString::number(status));
|
||||
return false;
|
||||
}
|
||||
|
||||
// Extract result
|
||||
for (const auto& p : dst_pixels) {
|
||||
if (p.x >= 0 && p.y >= 0) {
|
||||
depth_roi.push_back(cv::Point(p.x, p.y));
|
||||
} else {
|
||||
            // Invalid mapping (e.g. no depth at this pixel). This is critical for
            // ROI corners; for now we log a warning and keep the point as-is.
            // TODO: fall back to the nearest valid pixel or to manual scaling.
            log("Warning: Invalid depth mapping for point (" + QString::number(p.x) + "," + QString::number(p.y) + ")");
            depth_roi.push_back(cv::Point(p.x, p.y));
|
||||
}
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
void CalibrationWidget::saveCalibrationResult() {
|
||||
if (!has_calibration_result_) {
|
||||
log("Error: No calibration result to save. Run calibration first.");
|
||||
return;
|
||||
}
|
||||
|
||||
// Default filename with SN
|
||||
QString defaultName = "calibration_result_";
|
||||
if (!current_camera_sn_.isEmpty()) {
|
||||
defaultName += current_camera_sn_;
|
||||
} else {
|
||||
defaultName += "unknown";
|
||||
}
|
||||
defaultName += ".json";
|
||||
|
||||
QString fileName = QFileDialog::getSaveFileName(this, "Save Calibration Result", defaultName, "JSON (*.json)");
|
||||
if (fileName.isEmpty()) return;
|
||||
|
||||
QJsonObject root;
|
||||
|
||||
// Save Camera ID (SN)
|
||||
if (!current_camera_sn_.isEmpty()) {
|
||||
root["camera_id"] = current_camera_sn_;
|
||||
} else {
|
||||
root["camera_id"] = "unknown";
|
||||
}
|
||||
|
||||
// Save Matrix (Row-major 4x4)
|
||||
QJsonArray matArr;
|
||||
for(int i=0; i<16; ++i) {
|
||||
matArr.append(calibration_matrix_[i]);
|
||||
}
|
||||
root["transformation_matrix"] = matArr;
|
||||
|
||||
// Save ROI Points (Depth)
|
||||
QJsonArray roiArr;
|
||||
for(const auto& p : roi_points_depth_) {
|
||||
QJsonObject pt;
|
||||
pt["x"] = p.x;
|
||||
pt["y"] = p.y;
|
||||
roiArr.append(pt);
|
||||
}
|
||||
root["roi_points_depth"] = roiArr;
|
||||
|
||||
// Save Timestamp
|
||||
root["calibration_time"] = QDateTime::currentDateTime().toString(Qt::ISODate);
|
||||
|
||||
QJsonDocument doc(root);
|
||||
QFile file(fileName);
|
||||
if (file.open(QIODevice::WriteOnly)) {
|
||||
file.write(doc.toJson());
|
||||
log("Calibration saved to: " + fileName);
|
||||
QMessageBox::information(this, "Saved", "Calibration result saved successfully.");
|
||||
} else {
|
||||
log("Error: Could not write to file: " + fileName);
|
||||
QMessageBox::critical(this, "Error", "Could not save file.");
|
||||
}
|
||||
}
|
||||
85
image_capture/src/tools/calibration_tool/calibration_widget.h
Normal file
85
image_capture/src/tools/calibration_tool/calibration_widget.h
Normal file
@@ -0,0 +1,85 @@
|
||||
#ifndef NOMINMAX
|
||||
#define NOMINMAX
|
||||
#endif
|
||||
|
||||
#pragma once
|
||||
|
||||
#include <QWidget>
|
||||
#include <QLabel>
|
||||
#include <QPushButton>
|
||||
#include <QVBoxLayout>
|
||||
#include <QHBoxLayout>
|
||||
#include <QLineEdit>
|
||||
#include <QTextEdit>
|
||||
#include <QComboBox>
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include <vector>
|
||||
|
||||
#include <TYApi.h> // Ensure this path is in include dirs
|
||||
|
||||
class CalibrationWidget : public QWidget {
|
||||
Q_OBJECT
|
||||
|
||||
public:
|
||||
explicit CalibrationWidget(QWidget *parent = nullptr);
|
||||
~CalibrationWidget();
|
||||
|
||||
protected:
|
||||
bool eventFilter(QObject *obj, QEvent *event) override;
|
||||
|
||||
private slots:
|
||||
void loadDepthImage();
|
||||
void loadColorImage();
|
||||
// Capture slots removed
|
||||
void loadCalibParams();
|
||||
void runCalibration();
|
||||
void saveCalibrationResult();
|
||||
void view3DCloud();
|
||||
|
||||
private:
|
||||
void setupUi();
|
||||
void updateDisplay();
|
||||
QImage cvMatToQImage(const cv::Mat& mat);
|
||||
|
||||
// Internal helpers
|
||||
void log(const QString& msg);
|
||||
void setUiLocked(bool locked); // New
|
||||
// Convert ROI and run logic
|
||||
bool mapColorRoiToDepth(const std::vector<cv::Point>& color_roi, std::vector<cv::Point>& depth_roi);
|
||||
|
||||
// UI Controls
|
||||
QLabel *label_color_display_;
|
||||
QLabel *label_depth_display_;
|
||||
QTextEdit *text_log_;
|
||||
|
||||
QPushButton *btn_load_depth_;
|
||||
QPushButton *btn_load_color_;
|
||||
|
||||
// Camera controls removed
|
||||
QPushButton *btn_load_calib_;
|
||||
QPushButton *btn_run_calibration_;
|
||||
QPushButton *btn_save_result_;
|
||||
QPushButton *btn_view_3d_;
|
||||
|
||||
// Data
|
||||
|
||||
cv::Mat mat_depth_raw_; // 16UC1
|
||||
cv::Mat mat_color_raw_; // BGR
|
||||
|
||||
// Calibration parameters for the camera (Intrinsics + Extrinsics between RGB and Depth)
|
||||
TY_CAMERA_CALIB_INFO depth_calib_;
|
||||
TY_CAMERA_CALIB_INFO color_calib_;
|
||||
bool has_calib_params_;
|
||||
|
||||
// ROI in Color Image
|
||||
std::vector<cv::Point> roi_points_color_;
|
||||
std::vector<cv::Point> roi_points_depth_; // Mapped ROI
|
||||
bool is_selecting_roi_;
|
||||
|
||||
// Calibration Result
|
||||
bool has_calibration_result_;
|
||||
  // 4x4 transformation matrix stored as a flat float[16] array (row-major)
|
||||
float calibration_matrix_[16];
|
||||
|
||||
QString current_camera_sn_;
|
||||
};
|
||||
@@ -0,0 +1 @@
|
||||
// This is a backup before attempting major surgery
|
||||
13
image_capture/src/tools/calibration_tool/main.cpp
Normal file
13
image_capture/src/tools/calibration_tool/main.cpp
Normal file
@@ -0,0 +1,13 @@
|
||||
#include <QApplication>
|
||||
#include "calibration_widget.h"
|
||||
|
||||
int main(int argc, char *argv[]) {
|
||||
QApplication app(argc, argv);
|
||||
|
||||
CalibrationWidget widget;
|
||||
widget.setWindowTitle("Beam/Rack Deflection Calibration Tool");
|
||||
widget.resize(1200, 800);
|
||||
widget.show();
|
||||
|
||||
return app.exec();
|
||||
}
|
||||
280
image_capture/src/tools/generate_reference/main.cpp
Normal file
280
image_capture/src/tools/generate_reference/main.cpp
Normal file
@@ -0,0 +1,280 @@
|
||||
|
||||
#include <atomic>
|
||||
#include <iostream>
|
||||
#include <vector>
|
||||
|
||||
|
||||
#include <QApplication>
|
||||
#include <QFileDialog>
|
||||
#include <QJsonDocument>
|
||||
#include <QJsonObject>
|
||||
#include <QMessageBox>
|
||||
#include <opencv2/opencv.hpp>
|
||||
|
||||
|
||||
#include <QDebug>
|
||||
#include <QDir>
|
||||
#include <QFile>
|
||||
#include <QJsonArray>
|
||||
|
||||
|
||||
// Algorithm
|
||||
#include "../../algorithm/detections/pallet_offset/pallet_offset_detection.h"
|
||||
|
||||
// State
|
||||
std::vector<cv::Point> g_roi_points;
|
||||
cv::Mat g_depth, g_color, g_display_img;
|
||||
bool g_trigger_detect = false;
|
||||
std::string g_win_name = "Offline Reference Generator";
|
||||
|
||||
void onMouse(int event, int x, int y, int flags, void *userdata) {
|
||||
if (event != cv::EVENT_LBUTTONDOWN)
|
||||
return;
|
||||
|
||||
// Add point
|
||||
g_roi_points.push_back(cv::Point(x, y));
|
||||
std::cout << "[Tool] Point " << g_roi_points.size() << ": (" << x << "," << y
|
||||
<< ")" << std::endl;
|
||||
|
||||
if (g_roi_points.size() == 4) {
|
||||
std::cout << "[Tool] ROI Complete. Triggering Detection." << std::endl;
|
||||
g_trigger_detect = true;
|
||||
}
|
||||
}
|
||||
|
||||
int main(int argc, char *argv[]) {
|
||||
QApplication app(argc, argv);
|
||||
|
||||
std::cout << "Select a Depth Image (16-bit PNG/TIFF)..." << std::endl;
|
||||
|
||||
QString fileName = QFileDialog::getOpenFileName(
|
||||
nullptr, "Open Depth Image", "", "Images (*.png *.tif *.tiff *.bmp)");
|
||||
|
||||
if (fileName.isEmpty()) {
|
||||
std::cerr << "No file selected." << std::endl;
|
||||
return -1;
|
||||
}
|
||||
|
||||
std::string depth_path = fileName.toStdString();
|
||||
g_depth = cv::imread(depth_path, cv::IMREAD_UNCHANGED);
|
||||
|
||||
if (g_depth.empty()) {
|
||||
std::cerr << "Failed to load image: " << depth_path << std::endl;
|
||||
return -1;
|
||||
}
|
||||
|
||||
std::cout << "Loaded Image: " << g_depth.cols << "x" << g_depth.rows
|
||||
<< " Type=" << g_depth.type() << std::endl;
|
||||
|
||||
if (g_depth.type() != CV_16UC1) {
|
||||
std::cerr << "[Error] Input image is not a 16-bit single-channel depth map "
|
||||
"(CV_16UC1)."
|
||||
<< std::endl;
|
||||
std::cerr << " Current type=" << g_depth.type()
|
||||
<< " (likely 8-bit 3-channel if 16)." << std::endl;
|
||||
std::cerr << " Please select a valid raw depth image (unit: mm)."
|
||||
<< std::endl;
|
||||
    // We do not exit here so the user can still inspect the loaded image,
    // but without a CV_16UC1 depth map the detection call below will likely
    // fail or produce invalid results.
|
||||
}
|
||||
|
||||
// Optional: Load matching color
|
||||
// ...
|
||||
|
||||
cv::normalize(g_depth, g_display_img, 0, 255, cv::NORM_MINMAX, CV_8U);
|
||||
if (g_display_img.channels() == 1) {
|
||||
cv::cvtColor(g_display_img, g_display_img, cv::COLOR_GRAY2BGR);
|
||||
} else {
|
||||
// If already 3 channels (e.g. user loaded a color image by mistake), ensure
|
||||
// it is 8-bit BGR for display normalize already made it CV_8U. No
|
||||
// conversion needed if it's already BGR.
|
||||
}
|
||||
|
||||
cv::namedWindow(g_win_name, cv::WINDOW_AUTOSIZE);
|
||||
cv::setMouseCallback(g_win_name, onMouse);
|
||||
|
||||
std::cout << "\n=========================================" << std::endl;
|
||||
std::cout << " Controls:" << std::endl;
|
||||
std::cout << " [Click on Image] : Select ROI (4 pts)" << std::endl;
|
||||
std::cout << " [R] : Reset ROI" << std::endl;
|
||||
std::cout << " [ESC] : Exit" << std::endl;
|
||||
std::cout << "=========================================\n" << std::endl;
|
||||
|
||||
bool running = true;
|
||||
while (running) {
|
||||
cv::Mat show = g_display_img.clone();
|
||||
|
||||
// Draw ROI
|
||||
for (size_t i = 0; i < g_roi_points.size(); ++i) {
|
||||
cv::circle(show, g_roi_points[i], 4, cv::Scalar(0, 0, 255), -1);
|
||||
if (i > 0)
|
||||
cv::line(show, g_roi_points[i - 1], g_roi_points[i],
|
||||
cv::Scalar(0, 255, 0), 2);
|
||||
}
|
||||
if (g_roi_points.size() == 4) {
|
||||
cv::line(show, g_roi_points[3], g_roi_points[0], cv::Scalar(0, 255, 0),
|
||||
2);
|
||||
}
|
||||
|
||||
if (g_roi_points.empty()) {
|
||||
cv::putText(show, "Click 4 points to select ROI", cv::Point(20, 30),
|
||||
cv::FONT_HERSHEY_SIMPLEX, 0.7, cv::Scalar(0, 255, 0), 2);
|
||||
}
|
||||
|
||||
cv::imshow(g_win_name, show);
|
||||
|
||||
if (g_trigger_detect) {
|
||||
g_trigger_detect = false;
|
||||
|
||||
// -------------------------------------------------
|
||||
// Load Calibration (Auto-search Multi-path)
|
||||
// -------------------------------------------------
|
||||
cv::Mat calib_mat;
|
||||
QStringList search_dirs;
|
||||
search_dirs << QCoreApplication::applicationDirPath();
|
||||
search_dirs << QDir::currentPath();
|
||||
search_dirs << QDir::currentPath() + "/../"; // Parent
|
||||
search_dirs << "D:/Git/stereo_warehouse_inspection/image_capture/build/"
|
||||
"bin/Debug"; // Hard fallback
|
||||
|
||||
bool found_calib = false;
|
||||
|
||||
for (const QString &dirPath : search_dirs) {
|
||||
QDir dir(dirPath);
|
||||
QStringList filters;
|
||||
filters << "calibration_result_*.json";
|
||||
dir.setNameFilters(filters);
|
||||
QFileInfoList list =
|
||||
dir.entryInfoList(QDir::Files, QDir::Time); // Newest first
|
||||
|
||||
if (!list.isEmpty()) {
|
||||
QString calibPath = list.first().absoluteFilePath();
|
||||
std::cout << "[Tool] Found calibration file: "
|
||||
<< calibPath.toStdString() << " (Newest)" << std::endl;
|
||||
|
||||
QFile f(calibPath);
|
||||
if (f.open(QIODevice::ReadOnly)) {
|
||||
QJsonDocument doc = QJsonDocument::fromJson(f.readAll());
|
||||
QJsonObject obj = doc.object();
|
||||
if (obj.contains("transformation_matrix")) {
|
||||
QJsonArray arr = obj["transformation_matrix"].toArray();
|
||||
if (arr.size() == 16) {
|
||||
calib_mat = cv::Mat::eye(4, 4, CV_64F);
|
||||
std::cout << "[Tool] Matrix Diagonal: ";
|
||||
for (int i = 0; i < 4; ++i) {
|
||||
for (int j = 0; j < 4; ++j) {
|
||||
calib_mat.at<double>(i, j) = arr[i * 4 + j].toDouble();
|
||||
if (i == j)
|
||||
std::cout << calib_mat.at<double>(i, j) << " ";
|
||||
}
|
||||
}
|
||||
std::cout << std::endl;
|
||||
std::cout << "[Tool] Calibration Loaded Successfully."
|
||||
<< std::endl;
|
||||
found_calib = true;
|
||||
break; // Stop searching
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (!found_calib) {
|
||||
std::cout << "\n[Tool] !!! WARNING: NO CALIBRATION FILE FOUND !!!"
|
||||
<< std::endl;
|
||||
std::cout << "[Tool] !!! Running in CAMERA FRAME (Uncalibrated) !!!"
|
||||
<< std::endl;
|
||||
std::cout << "[Tool] !!! Angles will likely match Physical Camera Tilt "
|
||||
"(approx 0 if flat, 18 if tilted) !!!\n"
|
||||
<< std::endl;
|
||||
}
|
||||
|
||||
// Hardcoded Intrinsics from valid logs (User specific)
|
||||
// fx=1053.48, fy=1053.48, cx=640.301, cy=496.681
|
||||
CameraIntrinsics intr;
|
||||
intr.fx = 1053.48f;
|
||||
intr.fy = 1053.48f;
|
||||
intr.cx = 640.301f;
|
||||
intr.cy = 496.681f;
|
||||
|
||||
std::cout << "[Tool] Detecting with Intrinsics: fx=" << intr.fx
|
||||
<< " cx=" << intr.cx << std::endl;
|
||||
|
||||
PalletOffsetResult res;
|
||||
bool success = PalletOffsetAlgorithm::detect(g_depth, cv::Mat(), "left",
|
||||
res, nullptr, g_roi_points,
|
||||
intr, &calib_mat);
|
||||
|
||||
if (success) {
|
||||
std::cout << "[Tool] Success!" << std::endl;
|
||||
std::cout << " Abs Pos: (" << res.abs_x << ", " << res.abs_y << ", "
|
||||
<< res.abs_z << ")" << std::endl;
|
||||
|
||||
// Calculate angle for storage
|
||||
double angle_rad =
|
||||
std::atan2(res.right_hole_pos.z - res.left_hole_pos.z,
|
||||
res.right_hole_pos.x - res.left_hole_pos.x);
|
||||
double angle_deg = angle_rad * 180.0 / CV_PI;
|
||||
|
||||
QJsonObject root;
|
||||
root["x"] = res.abs_x;
|
||||
root["y"] = res.abs_y;
|
||||
root["z"] = res.abs_z;
|
||||
root["reference_angle"] = angle_deg; // Explicitly save angle
|
||||
|
||||
QJsonObject leftObj;
|
||||
leftObj["x"] = res.left_hole_pos.x;
|
||||
leftObj["y"] = res.left_hole_pos.y;
|
||||
leftObj["z"] = res.left_hole_pos.z;
|
||||
root["left_hole"] = leftObj;
|
||||
|
||||
QJsonObject rightObj;
|
||||
rightObj["x"] = res.right_hole_pos.x;
|
||||
rightObj["y"] = res.right_hole_pos.y;
|
||||
rightObj["z"] = res.right_hole_pos.z;
|
||||
root["right_hole"] = rightObj;
|
||||
|
||||
QFile file("reference_pallet.json");
|
||||
if (file.open(QIODevice::WriteOnly)) {
|
||||
file.write(QJsonDocument(root).toJson());
|
||||
file.close();
|
||||
std::cout << "[GenerateReference] SUCCESS: Saved reference to "
|
||||
"reference_pallet.json"
|
||||
<< std::endl;
|
||||
std::cout << " Ref Pos (X,Y,Z): " << res.abs_x
|
||||
<< ", " << res.abs_y << ", " << res.abs_z << std::endl;
|
||||
// Display the angle already computed above (no need to re-derive it here)
std::cout << "  Ref Angle: " << angle_deg << " deg" << std::endl;
|
||||
|
||||
QMessageBox::information(
|
||||
nullptr, "Success",
|
||||
"Reference data saved to reference_pallet.json");
|
||||
}
|
||||
} else {
|
||||
std::cerr << "[Tool] Failed." << std::endl;
|
||||
}
|
||||
|
||||
// Keep the selected ROI points on screen so the user can see what was used;
// press 'r'/'R' to reset the selection manually.
|
||||
}
|
||||
|
||||
int key = cv::waitKey(30);
|
||||
if (key == 27)
|
||||
running = false;
|
||||
if (key == 'r' || key == 'R') {
|
||||
g_roi_points.clear();
|
||||
std::cout << "[Tool] ROI Reset." << std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
31
image_capture/src/tools/intrinsic_dumper/README.md
Normal file
@@ -0,0 +1,31 @@
# Intrinsic Dumper (内参导出工具)

## 简介 (Introduction)
本工具是一个命令行程序 (CLI),用于自动扫描连接的 Percipio 相机,并提取其出厂标定参数(内参、畸变系数、外参)。

## 功能 (Features)
* **自动扫描**:通过 Percipio SDK 枚举所有连接的设备。
* **参数提取**:依次获取 Depth 相机和 Color 相机的标定数据。
* **自动命名**:根据相机序列号 (SN) 生成唯一的文件名。
* **格式规范**:输出符合项目标准的 JSON 格式。

## 使用步骤 (Usage)
1. 确保相机已正确连接并安装驱动。
2. 双击运行 `intrinsic_dumper.exe` 或在命令行中执行。
3. 程序将输出日志,提示发现的设备 SN。
4. 执行完成后,程序会自动关闭(或按任意键退出,视环境而定)。
5. 在当前目录下查找生成的 `intrinsics_<SN>.json` 文件。

## 输出格式 (Output)
文件名示例:`intrinsics_207000146458.json`
```json
{
  "depth": {
    "intrinsic": [fx, 0, cx, 0, fy, cy, 0, 0, 1],
    "distortion": [...],
    "extrinsic": [...]
  },
  "color": { ... },
  "camera_id": "207000146458"
}
```
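
## 读取示例 (Reading the Output)
以下为读取 `intrinsics_<SN>.json` 中深度相机内参的示意代码(说明性示例,非本工具自带;假设使用 Qt 的 JSON 类,字段布局与上文输出格式一致):

```cpp
#include <QFile>
#include <QJsonArray>
#include <QJsonDocument>
#include <QJsonObject>

// 从导出的 JSON 中取出深度相机的 fx/fy/cx/cy(intrinsic 为行优先 3x3:fx,0,cx,0,fy,cy,0,0,1)
bool readDepthIntrinsics(const QString &path, double &fx, double &fy,
                         double &cx, double &cy) {
    QFile f(path);
    if (!f.open(QIODevice::ReadOnly)) return false;
    const QJsonObject root = QJsonDocument::fromJson(f.readAll()).object();
    const QJsonArray k = root["depth"].toObject()["intrinsic"].toArray();
    if (k.size() != 9) return false;
    fx = k[0].toDouble(); cx = k[2].toDouble();
    fy = k[4].toDouble(); cy = k[5].toDouble();
    return true;
}
```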
116
image_capture/src/tools/intrinsic_dumper/main.cpp
Normal file
@@ -0,0 +1,116 @@
|
||||
#ifndef NOMINMAX
|
||||
#define NOMINMAX
|
||||
#endif
|
||||
|
||||
#include <TYApi.h>
|
||||
#include <iostream>
|
||||
#include <fstream>
|
||||
#include <vector>
|
||||
#include <string>
|
||||
|
||||
#include <QJsonDocument>
|
||||
#include <QJsonObject>
|
||||
#include <QJsonArray>
|
||||
#include <QFile>
|
||||
|
||||
int main(int argc, char *argv[]) {
|
||||
// QCoreApplication not needed for basic JSON operations
|
||||
|
||||
std::cout << "Initializing Percipio SDK..." << std::endl;
|
||||
TY_STATUS status = TYInitLib();
|
||||
if (status != TY_STATUS_OK) {
|
||||
std::cerr << "Failed to init SDK: " << status << std::endl;
|
||||
return -1;
|
||||
}
|
||||
|
||||
TYUpdateInterfaceList();
|
||||
uint32_t n = 0;
|
||||
TYGetInterfaceNumber(&n);
|
||||
if (n == 0) {
|
||||
std::cerr << "No interfaces found." << std::endl;
|
||||
TYDeinitLib();
|
||||
return -1;
|
||||
}
|
||||
|
||||
std::vector<TY_INTERFACE_INFO> ifaces(n);
|
||||
TYGetInterfaceList(&ifaces[0], n, &n);
|
||||
|
||||
int device_count = 0;
|
||||
|
||||
for (const auto& ifaceInfo : ifaces) {
|
||||
TY_INTERFACE_HANDLE hIface = nullptr;
|
||||
if (TYOpenInterface(ifaceInfo.id, &hIface) == TY_STATUS_OK) {
|
||||
TYUpdateDeviceList(hIface);
|
||||
uint32_t devCount = 0;
|
||||
TYGetDeviceNumber(hIface, &devCount);
|
||||
if (devCount > 0) {
|
||||
std::vector<TY_DEVICE_BASE_INFO> devs(devCount);
|
||||
TYGetDeviceList(hIface, &devs[0], devCount, &devCount);
|
||||
|
||||
for (uint32_t i = 0; i < devCount; ++i) {
|
||||
std::string sn = devs[i].id;
|
||||
std::cout << "Found Device SN: " << sn << std::endl;
|
||||
|
||||
TY_DEV_HANDLE handle = nullptr;
|
||||
if (TYOpenDevice(hIface, devs[i].id, &handle) == TY_STATUS_OK) {
|
||||
|
||||
// Fetch Calib Info
|
||||
TY_CAMERA_CALIB_INFO depth_info, color_info;
|
||||
bool has_depth = false;
|
||||
bool has_color = false;
|
||||
|
||||
if (TYGetStruct(handle, TY_COMPONENT_DEPTH_CAM, TY_STRUCT_CAM_CALIB_DATA, &depth_info, sizeof(depth_info)) == TY_STATUS_OK) {
|
||||
has_depth = true;
|
||||
std::cout << " - Got Depth Calibration." << std::endl;
|
||||
}
|
||||
|
||||
if (TYGetStruct(handle, TY_COMPONENT_RGB_CAM, TY_STRUCT_CAM_CALIB_DATA, &color_info, sizeof(color_info)) == TY_STATUS_OK) {
|
||||
has_color = true;
|
||||
std::cout << " - Got Color Calibration." << std::endl;
|
||||
}
|
||||
|
||||
TYCloseDevice(handle);
|
||||
|
||||
if (has_depth || has_color) {
|
||||
QJsonObject root;
|
||||
|
||||
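// 说明:TY_CAMERA_CALIB_INFO 的 intrinsic 为行优先 3x3 矩阵
// [fx, 0, cx, 0, fy, cy, 0, 0, 1],distortion 为 12 个系数,extrinsic 为 4x4 矩阵,
// 与 README 中描述的输出格式一一对应。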
auto formatInfo = [](const TY_CAMERA_CALIB_INFO& info) -> QJsonObject {
|
||||
QJsonObject obj;
|
||||
QJsonArray intr, extr, dist;
|
||||
for(int k=0; k<9; k++) intr.append((double)info.intrinsic.data[k]);
|
||||
for(int k=0; k<12; k++) dist.append((double)info.distortion.data[k]);
|
||||
for(int k=0; k<16; k++) extr.append((double)info.extrinsic.data[k]);
|
||||
|
||||
obj["intrinsic"] = intr;
|
||||
obj["distortion"] = dist;
|
||||
obj["extrinsic"] = extr;
|
||||
return obj;
|
||||
};
|
||||
|
||||
if (has_depth) root["depth"] = formatInfo(depth_info);
|
||||
if (has_color) root["color"] = formatInfo(color_info);
root["camera_id"] = QString::fromStdString(sn);  // README 输出格式中声明了该字段
|
||||
|
||||
QString filename = QString("intrinsics_%1.json").arg(QString::fromStdString(sn));
|
||||
QJsonDocument doc(root);
|
||||
QFile file(filename);
|
||||
if (file.open(QIODevice::WriteOnly)) {
|
||||
file.write(doc.toJson());
|
||||
file.close();
|
||||
std::cout << " -> Saved to " << filename.toStdString() << std::endl;
|
||||
}
|
||||
}
|
||||
device_count++;
|
||||
}
|
||||
}
|
||||
}
|
||||
TYCloseInterface(hIface);
|
||||
}
|
||||
}
|
||||
|
||||
if (device_count == 0) {
|
||||
std::cout << "No devices processed." << std::endl;
|
||||
}
|
||||
|
||||
TYDeinitLib();
|
||||
return 0;
|
||||
}
|
||||
29
image_capture/src/tools/slot_algo_tuner/README.md
Normal file
@@ -0,0 +1,29 @@
# Slot Algorithm Tuner (算法调优工具)

## 简介 (Introduction)
本工具是一个可视化参数调试程序,用于在离线环境下对库位占用(Slot Occupancy)检测算法的关键参数进行微调。它通过对比标准图像与输入图像的差异,模拟算法的处理流程。

## 功能 (Features)
* **图像加载**:支持加载 Reference (基准) 图像和 Input (待测) 图像。
* **参数实时调整**:
  * **ROI (x, y, w, h)**:感兴趣区域设置。
  * **Threshold**:差分二值化阈值。
  * **Blur Size**:高斯模糊核大小。
  * **Area Threshold**:连通域面积过滤阈值。
* **可视化反馈**:实时显示 Reference、Input、Difference (差分图) 和 Mask (掩膜) 结果,处理流程可参考文末示意代码。

## 使用步骤 (Usage)
1. **加载基准图**:点击 `Load Reference`,选择一张空库位的标准图像。
2. **加载测试图**:点击 `Load Input`,选择需要检测的现场图像。
3. **调整参数**:
   * 调整控制面板中的 SpinBox 数值。
   * 参数改变时结果会自动刷新,可直接查看效果。
4. **观察结果**:
   * **Diff Image**:显示两幅图像的像素差异。
   * **Mask Image**:显示经过阈值和滤波后的检测结果。
   * 状态文本会显示判定结果(Occupied/Empty)及对应的像素统计。

## 适用场景 (Use Cases)
* 确定现场环境下的最佳二值化阈值。
* 调整检测敏感度以过滤噪声。
* 验证 ROI 区域是否覆盖目标库位。
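
## 处理流程示意 (Pipeline Sketch)
下面是上述参数所对应核心处理步骤的示意代码(说明性示例,非工具源码,函数与变量名仅作说明;完整实现见 `tuner_widget.cpp`):

```cpp
#include <opencv2/opencv.hpp>

// 差分 -> 二值化 -> 形态学开运算 -> 统计非零像素并与面积阈值比较
bool isOccupied(const cv::Mat &ref_roi, const cv::Mat &input_roi,
                int blur, int diff_threshold, int area_threshold) {
    if (blur % 2 == 0) blur++;  // 高斯核必须为奇数
    cv::Mat ref_blur, in_blur, diff, mask;
    cv::GaussianBlur(ref_roi, ref_blur, cv::Size(blur, blur), 0);
    cv::GaussianBlur(input_roi, in_blur, cv::Size(blur, blur), 0);
    cv::absdiff(in_blur, ref_blur, diff);                               // Difference 图
    cv::threshold(diff, mask, diff_threshold, 255, cv::THRESH_BINARY);  // Threshold
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5)));
    return cv::countNonZero(mask) > area_threshold;                     // Area Threshold
}
```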
13
image_capture/src/tools/slot_algo_tuner/main.cpp
Normal file
@@ -0,0 +1,13 @@
#include <QApplication>
#include "tuner_widget.h"

int main(int argc, char *argv[]) {
  QApplication app(argc, argv);

  TunerWidget w;
  w.setWindowTitle("Slot Algorithm Tuner");
  w.resize(1200, 800);
  w.show();

  return app.exec();
}
246
image_capture/src/tools/slot_algo_tuner/tuner_widget.cpp
Normal file
@@ -0,0 +1,246 @@
|
||||
#include "tuner_widget.h"
|
||||
#include <QFileDialog>
|
||||
#include <QHBoxLayout>
|
||||
#include <QGridLayout>
|
||||
#include <QGroupBox>
|
||||
#include <QScrollArea>
|
||||
#include <QDebug>
|
||||
|
||||
TunerWidget::TunerWidget(QWidget *parent) : QWidget(parent) {
|
||||
setupUi();
|
||||
}
|
||||
|
||||
void TunerWidget::setupUi() {
|
||||
QVBoxLayout *outerLayout = new QVBoxLayout(this);
|
||||
|
||||
// Create Scroll Area
|
||||
QScrollArea *scrollArea = new QScrollArea(this);
|
||||
scrollArea->setWidgetResizable(true);
|
||||
|
||||
// Create internal container widget
|
||||
QWidget *contentWidget = new QWidget(scrollArea);
|
||||
QVBoxLayout *mainLayout = new QVBoxLayout(contentWidget);
|
||||
|
||||
// Set content of scroll area
|
||||
scrollArea->setWidget(contentWidget);
|
||||
outerLayout->addWidget(scrollArea);
|
||||
|
||||
// 1. Controls Area
|
||||
QGroupBox *grpControls = new QGroupBox("Controls", this);
|
||||
QGridLayout *layoutControls = new QGridLayout(grpControls);
|
||||
|
||||
btn_load_ref_ = new QPushButton("Load Reference (Empty Slot)", this);
|
||||
btn_load_input_ = new QPushButton("Load Input (Current Test)", this);
|
||||
|
||||
connect(btn_load_ref_, &QPushButton::clicked, this, &TunerWidget::loadReferenceImage);
|
||||
connect(btn_load_input_, &QPushButton::clicked, this, &TunerWidget::loadInputImage);
|
||||
|
||||
layoutControls->addWidget(btn_load_ref_, 0, 0);
|
||||
layoutControls->addWidget(btn_load_input_, 0, 1);
|
||||
|
||||
// ROI Controls
|
||||
QGroupBox *grpROI = new QGroupBox("ROI Settings", this);
|
||||
QHBoxLayout *layoutROI = new QHBoxLayout(grpROI);
|
||||
|
||||
spin_roi_x_ = new QSpinBox(this); spin_roi_x_->setRange(0, 5000); spin_roi_x_->setPrefix("X: "); spin_roi_x_->setValue(100);
|
||||
spin_roi_y_ = new QSpinBox(this); spin_roi_y_->setRange(0, 5000); spin_roi_y_->setPrefix("Y: "); spin_roi_y_->setValue(100);
|
||||
spin_roi_w_ = new QSpinBox(this); spin_roi_w_->setRange(1, 5000); spin_roi_w_->setPrefix("W: "); spin_roi_w_->setValue(800);
|
||||
spin_roi_h_ = new QSpinBox(this); spin_roi_h_->setRange(1, 5000); spin_roi_h_->setPrefix("H: "); spin_roi_h_->setValue(600);
|
||||
|
||||
layoutROI->addWidget(spin_roi_x_);
|
||||
layoutROI->addWidget(spin_roi_y_);
|
||||
layoutROI->addWidget(spin_roi_w_);
|
||||
layoutROI->addWidget(spin_roi_h_);
|
||||
|
||||
layoutControls->addWidget(grpROI, 1, 0, 1, 2);
|
||||
|
||||
// Param Controls
|
||||
QGroupBox *grpParams = new QGroupBox("Algorithm Params", this);
|
||||
QHBoxLayout *layoutParams = new QHBoxLayout(grpParams);
|
||||
|
||||
spin_threshold_ = new QSpinBox(this); spin_threshold_->setRange(0, 255); spin_threshold_->setPrefix("Diff Thresh: "); spin_threshold_->setValue(30);
|
||||
spin_blur_ = new QSpinBox(this); spin_blur_->setRange(1, 21); spin_blur_->setSingleStep(2); spin_blur_->setPrefix("Blur Size: "); spin_blur_->setValue(5);
|
||||
spin_area_threshold_ = new QSpinBox(this); spin_area_threshold_->setRange(0, 1000000); spin_area_threshold_->setPrefix("Area Thresh: "); spin_area_threshold_->setValue(5000);
|
||||
|
||||
connect(spin_roi_x_, QOverload<int>::of(&QSpinBox::valueChanged), this, &TunerWidget::process);
|
||||
connect(spin_roi_y_, QOverload<int>::of(&QSpinBox::valueChanged), this, &TunerWidget::process);
|
||||
connect(spin_roi_w_, QOverload<int>::of(&QSpinBox::valueChanged), this, &TunerWidget::process);
|
||||
connect(spin_roi_h_, QOverload<int>::of(&QSpinBox::valueChanged), this, &TunerWidget::process);
|
||||
connect(spin_threshold_, QOverload<int>::of(&QSpinBox::valueChanged), this, &TunerWidget::process);
|
||||
connect(spin_blur_, QOverload<int>::of(&QSpinBox::valueChanged), this, &TunerWidget::process);
|
||||
connect(spin_area_threshold_, QOverload<int>::of(&QSpinBox::valueChanged), this, &TunerWidget::process);
|
||||
|
||||
layoutParams->addWidget(spin_threshold_);
|
||||
layoutParams->addWidget(spin_blur_);
|
||||
layoutParams->addWidget(spin_area_threshold_);
|
||||
|
||||
layoutControls->addWidget(grpParams, 2, 0, 1, 2);
|
||||
|
||||
// Result Text
|
||||
label_result_text_ = new QLabel("Ready", this);
|
||||
label_result_text_->setStyleSheet("font-size: 16px; font-weight: bold; color: blue;");
|
||||
layoutControls->addWidget(label_result_text_, 3, 0, 1, 2);
|
||||
|
||||
mainLayout->addWidget(grpControls);
|
||||
|
||||
// 2. Images Area
|
||||
QGridLayout *layoutImages = new QGridLayout();
|
||||
|
||||
label_ref_ = new QLabel("Reference", this); label_ref_->setScaledContents(false); label_ref_->setAlignment(Qt::AlignCenter); label_ref_->setStyleSheet("border: 1px solid gray; background: black;");
|
||||
label_input_ = new QLabel("Input + ROI", this); label_input_->setScaledContents(false); label_input_->setAlignment(Qt::AlignCenter); label_input_->setStyleSheet("border: 1px solid gray; background: black;");
|
||||
label_diff_ = new QLabel("AbsDiff", this); label_diff_->setScaledContents(false); label_diff_->setAlignment(Qt::AlignCenter); label_diff_->setStyleSheet("border: 1px solid gray; background: black;");
|
||||
label_mask_ = new QLabel("Result Mask", this); label_mask_->setScaledContents(false); label_mask_->setAlignment(Qt::AlignCenter); label_mask_->setStyleSheet("border: 1px solid gray; background: black;");
|
||||
|
||||
layoutImages->addWidget(new QLabel("Reference Image"), 0, 0);
|
||||
layoutImages->addWidget(label_ref_, 1, 0);
|
||||
|
||||
layoutImages->addWidget(new QLabel("Input Image (Red Box = ROI)"), 0, 1);
|
||||
layoutImages->addWidget(label_input_, 1, 1);
|
||||
|
||||
layoutImages->addWidget(new QLabel("Difference Image"), 2, 0);
|
||||
layoutImages->addWidget(label_diff_, 3, 0);
|
||||
|
||||
layoutImages->addWidget(new QLabel("Detection Mask"), 2, 1);
|
||||
layoutImages->addWidget(label_mask_, 3, 1);
|
||||
|
||||
mainLayout->addLayout(layoutImages);
|
||||
}
|
||||
|
||||
void TunerWidget::loadReferenceImage() {
|
||||
QString path = QFileDialog::getOpenFileName(this, "Open Reference Image", "", "Images (*.png *.jpg *.bmp)");
|
||||
if (path.isEmpty()) return;
|
||||
|
||||
mat_ref_raw_ = cv::imread(path.toStdString(), cv::IMREAD_GRAYSCALE);
|
||||
// Resize to target resolution if needed (simulating the real system)
|
||||
cv::Size target_size(4024, 3036);
|
||||
if (!mat_ref_raw_.empty() && mat_ref_raw_.size() != target_size) {
|
||||
cv::resize(mat_ref_raw_, mat_ref_raw_, target_size);
|
||||
}
|
||||
process();
|
||||
}
|
||||
|
||||
void TunerWidget::loadInputImage() {
|
||||
QString path = QFileDialog::getOpenFileName(this, "Open Input Image", "", "Images (*.png *.jpg *.bmp)");
|
||||
if (path.isEmpty()) return;
|
||||
|
||||
mat_input_raw_ = cv::imread(path.toStdString(), cv::IMREAD_GRAYSCALE);
|
||||
// Resize to target resolution if needed
|
||||
cv::Size target_size(4024, 3036);
|
||||
if (!mat_input_raw_.empty() && mat_input_raw_.size() != target_size) {
|
||||
cv::resize(mat_input_raw_, mat_input_raw_, target_size);
|
||||
}
|
||||
process();
|
||||
}
|
||||
|
||||
void TunerWidget::process() {
|
||||
if (mat_ref_raw_.empty() || mat_input_raw_.empty()) {
|
||||
if (!mat_ref_raw_.empty()) {
|
||||
QImage qimg = cvMatToQImage(mat_ref_raw_);
|
||||
if (!qimg.isNull()) {
|
||||
QPixmap p = QPixmap::fromImage(qimg);
|
||||
// Scale with aspect ratio
|
||||
label_ref_->setPixmap(p.scaled(640, 480, Qt::KeepAspectRatio, Qt::SmoothTransformation));
|
||||
}
|
||||
}
|
||||
if (!mat_input_raw_.empty()) {
|
||||
QImage qimg = cvMatToQImage(mat_input_raw_);
|
||||
if (!qimg.isNull()) {
|
||||
QPixmap p = QPixmap::fromImage(qimg);
|
||||
label_input_->setPixmap(p.scaled(640, 480, Qt::KeepAspectRatio, Qt::SmoothTransformation));
|
||||
}
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
// 1. Get Params
|
||||
int rx = spin_roi_x_->value();
|
||||
int ry = spin_roi_y_->value();
|
||||
int rw = spin_roi_w_->value();
|
||||
int rh = spin_roi_h_->value();
|
||||
int blur = spin_blur_->value();
|
||||
int diff_th = spin_threshold_->value();
|
||||
int area_th = spin_area_threshold_->value();
|
||||
|
||||
if (blur % 2 == 0) blur++; // Ensure odd
|
||||
|
||||
// 2. Validate ROI
|
||||
int img_w = mat_input_raw_.cols;
|
||||
int img_h = mat_input_raw_.rows;
|
||||
rx = std::max(0, rx);
|
||||
ry = std::max(0, ry);
|
||||
rw = std::min(rw, img_w - rx);
|
||||
rh = std::min(rh, img_h - ry);
|
||||
|
||||
cv::Rect roi(rx, ry, rw, rh);
|
||||
|
||||
// 3. Process
|
||||
cv::Mat input_roi = mat_input_raw_(roi);
|
||||
cv::Mat ref_roi = mat_ref_raw_(roi);
|
||||
|
||||
cv::Mat input_blurred, ref_blurred;
|
||||
cv::GaussianBlur(input_roi, input_blurred, cv::Size(blur, blur), 0);
|
||||
cv::GaussianBlur(ref_roi, ref_blurred, cv::Size(blur, blur), 0);
|
||||
|
||||
cv::Mat diff;
|
||||
cv::absdiff(input_blurred, ref_blurred, diff);
|
||||
|
||||
cv::Mat mask;
|
||||
cv::threshold(diff, mask, diff_th, 255, cv::THRESH_BINARY);
|
||||
|
||||
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
|
||||
cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);
|
||||
|
||||
int non_zero = cv::countNonZero(mask);
|
||||
|
||||
// 4. Update Result
|
||||
bool occupied = (non_zero > area_th);
|
||||
if (occupied) {
|
||||
label_result_text_->setText(QString("Occupied (Pixels: %1 > %2)").arg(non_zero).arg(area_th));
|
||||
label_result_text_->setStyleSheet("font-size: 16px; font-weight: bold; color: red;");
|
||||
} else {
|
||||
label_result_text_->setText(QString("Empty (Pixels: %1 <= %2)").arg(non_zero).arg(area_th));
|
||||
label_result_text_->setStyleSheet("font-size: 16px; font-weight: bold; color: green;");
|
||||
}
|
||||
|
||||
// 5. Update Displays
|
||||
// Fixed display width for consistent layout
|
||||
const int DISP_W = 640;
|
||||
const int DISP_H = 480;
|
||||
|
||||
// Ref
|
||||
QImage q_ref = cvMatToQImage(mat_ref_raw_);
|
||||
label_ref_->setPixmap(QPixmap::fromImage(q_ref).scaled(DISP_W, DISP_H, Qt::KeepAspectRatio, Qt::SmoothTransformation));
|
||||
|
||||
// Input with ROI Draw
|
||||
cv::Mat input_vis;
|
||||
// 确保是彩色图以画红框
|
||||
if (mat_input_raw_.channels() == 1) {
|
||||
cv::cvtColor(mat_input_raw_, input_vis, cv::COLOR_GRAY2BGR);
|
||||
} else {
|
||||
input_vis = mat_input_raw_.clone();
|
||||
}
|
||||
cv::rectangle(input_vis, roi, cv::Scalar(0, 0, 255), 10); // Red box
|
||||
QImage q_input = cvMatToQImage(input_vis);
|
||||
label_input_->setPixmap(QPixmap::fromImage(q_input).scaled(DISP_W, DISP_H, Qt::KeepAspectRatio, Qt::SmoothTransformation));
|
||||
|
||||
// Diff
|
||||
QImage q_diff = cvMatToQImage(diff);
|
||||
label_diff_->setPixmap(QPixmap::fromImage(q_diff).scaled(DISP_W, DISP_H, Qt::KeepAspectRatio, Qt::SmoothTransformation));
|
||||
|
||||
// Mask
|
||||
QImage q_mask = cvMatToQImage(mask);
|
||||
label_mask_->setPixmap(QPixmap::fromImage(q_mask).scaled(DISP_W, DISP_H, Qt::KeepAspectRatio, Qt::SmoothTransformation));
|
||||
}
|
||||
|
||||
QImage TunerWidget::cvMatToQImage(const cv::Mat& mat) {
|
||||
if (mat.empty()) return QImage();
|
||||
|
||||
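// 说明:用外部数据指针构造的 QImage 只是包裹 cv::Mat 的内存,
// 下面的 copy() / rgbSwapped() 会生成深拷贝,避免 Mat 释放后出现悬空数据。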
if (mat.type() == CV_8UC1) {
|
||||
QImage image(mat.data, mat.cols, mat.rows, mat.step, QImage::Format_Grayscale8);
|
||||
return image.copy();
|
||||
} else if (mat.type() == CV_8UC3) {
|
||||
QImage image(mat.data, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
|
||||
return image.rgbSwapped();
|
||||
}
|
||||
return QImage();
|
||||
}
|
||||
53
image_capture/src/tools/slot_algo_tuner/tuner_widget.h
Normal file
@@ -0,0 +1,53 @@
|
||||
#pragma once
|
||||
|
||||
#include <QWidget>
|
||||
#include <QLabel>
|
||||
#include <QSpinBox>
|
||||
#include <QPushButton>
|
||||
#include <QVBoxLayout>
|
||||
#include <opencv2/opencv.hpp>
|
||||
|
||||
class TunerWidget : public QWidget {
|
||||
Q_OBJECT
|
||||
|
||||
public:
|
||||
explicit TunerWidget(QWidget *parent = nullptr);
|
||||
~TunerWidget() = default;
|
||||
|
||||
private slots:
|
||||
void loadReferenceImage();
|
||||
void loadInputImage();
|
||||
void process();
|
||||
|
||||
private:
|
||||
void setupUi();
|
||||
void updateDisplay();
|
||||
QImage cvMatToQImage(const cv::Mat& mat);
|
||||
|
||||
// UI Controls
|
||||
QLabel *label_ref_;
|
||||
QLabel *label_input_;
|
||||
QLabel *label_diff_;
|
||||
QLabel *label_mask_;
|
||||
|
||||
QPushButton *btn_load_ref_;
|
||||
QPushButton *btn_load_input_;
|
||||
|
||||
QSpinBox *spin_roi_x_;
|
||||
QSpinBox *spin_roi_y_;
|
||||
QSpinBox *spin_roi_w_;
|
||||
QSpinBox *spin_roi_h_;
|
||||
QSpinBox *spin_threshold_;
|
||||
QSpinBox *spin_blur_;
|
||||
QSpinBox *spin_area_threshold_;
|
||||
|
||||
QLabel *label_result_text_;
|
||||
|
||||
// Data
|
||||
cv::Mat mat_ref_raw_;
|
||||
cv::Mat mat_input_raw_;
|
||||
cv::Mat mat_ref_display_;
|
||||
cv::Mat mat_input_display_;
|
||||
cv::Mat mat_diff_display_;
|
||||
cv::Mat mat_mask_display_;
|
||||
};
|
||||
197
image_capture/src/vision/vision_controller.cpp
Normal file
@@ -0,0 +1,197 @@
|
||||
/**
|
||||
* @file vision_controller.cpp
|
||||
* @brief Vision系统主控制器实现文件
|
||||
*
|
||||
* 此文件实现了VisionController类的完整功能:
|
||||
* - 系统初始化(Redis、任务管理器)
|
||||
* - 系统启动和停止
|
||||
* - 任务接收和分发
|
||||
* - 模块间协调和数据流管理
|
||||
*
|
||||
* 设计说明:
|
||||
* - VisionController是系统唯一控制器,统一管理Redis和任务模块
|
||||
* - 设备由MainWindow初始化,VisionController直接使用DeviceManager单例
|
||||
* - 使用回调函数实现模块间解耦
|
||||
* - 所有模块使用智能指针管理,自动释放资源
|
||||
*/
|
||||
|
||||
#include "vision_controller.h"
|
||||
#include "../redis/redis_communicator.h"
|
||||
#include "../task/task_manager.h"
|
||||
#include "../device/device_manager.h"
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include <iostream>
|
||||
|
||||
/**
|
||||
* @brief 构造函数
|
||||
*
|
||||
* 初始化所有成员变量为默认值
|
||||
* - running_: 系统未运行
|
||||
* - initialized_: 系统未初始化
|
||||
*/
|
||||
VisionController::VisionController()
|
||||
: running_(false) // 系统未运行
|
||||
, initialized_(false) // 系统未初始化
|
||||
{
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 析构函数
|
||||
*
|
||||
* 确保在对象销毁时正确停止系统
|
||||
* 调用stop()清理所有资源
|
||||
*/
|
||||
VisionController::~VisionController() {
|
||||
stop();
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 初始化Vision系统
|
||||
*
|
||||
* @param redis_host Redis服务器地址,默认"127.0.0.1"
|
||||
* @param redis_port Redis服务器端口,默认6379
|
||||
* @param task_db 任务监听Redis数据库编号
|
||||
* @param result_db 结果写入Redis数据库编号
|
||||
* @return true 初始化成功,false 初始化失败
|
||||
*/
|
||||
bool VisionController::initialize(const std::string& redis_host,
|
||||
int redis_port,
|
||||
int task_db,
|
||||
int result_db) {
|
||||
if (initialized_) {
|
||||
std::cout << "[VisionController] System already initialized" << std::endl;
|
||||
return true;
|
||||
}
|
||||
|
||||
std::cout << "[VisionController] Starting Vision system initialization..." << std::endl;
|
||||
std::cout << "[VisionController] Redis config: Input DB=" << task_db << ", Output DB=" << result_db << std::endl;
|
||||
|
||||
// ========== 1. 初始化Redis任务监听模块 (Input DB) ==========
|
||||
redis_comm_ = std::make_shared<RedisCommunicator>();
|
||||
// TODO: move password to config
|
||||
if (!redis_comm_->initialize(redis_host, redis_port, task_db, "123456")) {
|
||||
std::cerr << "[VisionController] Redis task communicator (DB " << task_db << ") initialization failed" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
// ========== 2. 初始化Redis结果写入模块 (Output DB) ==========
|
||||
redis_result_comm_ = std::make_shared<RedisCommunicator>();
|
||||
if (!redis_result_comm_->initialize(redis_host, redis_port, result_db, "123456")) {
|
||||
std::cerr << "[VisionController] Redis result communicator (DB " << result_db << ") initialization failed" << std::endl;
|
||||
return false;
|
||||
}
|
||||
std::cout << "[VisionController] Redis communicators initialized successfully" << std::endl;
|
||||
|
||||
// ========== 3. 初始化任务管理器 ==========
|
||||
task_manager_ = std::make_shared<TaskManager>();
|
||||
|
||||
// 初始化任务管理器,传入结果写入(输出DB)和触发清空(输入DB)两个Redis连接
|
||||
if (!task_manager_->initialize(redis_result_comm_, redis_comm_)) {
|
||||
std::cerr << "[VisionController] Task manager initialization failed" << std::endl;
|
||||
return false;
|
||||
}
|
||||
std::cout << "[VisionController] Task manager initialized successfully" << std::endl;
|
||||
|
||||
// ========== 4. 设置回调函数 ==========
|
||||
redis_comm_->setTaskCallback(
|
||||
[this](const RedisTaskData& task_data) {
|
||||
this->onTaskReceived(task_data);
|
||||
}
|
||||
);
|
||||
|
||||
initialized_ = true;
|
||||
std::cout << "[VisionController] Vision system initialization complete" << std::endl;
|
||||
return true;
|
||||
}
|
||||
|
||||
bool VisionController::start() {
|
||||
if (!initialized_) {
|
||||
std::cerr << "[VisionController] System not initialized, cannot start" << std::endl;
|
||||
return false;
|
||||
}
|
||||
|
||||
if (running_) {
|
||||
return true;
|
||||
}
|
||||
|
||||
std::cout << "[VisionController] Starting Vision system..." << std::endl;
|
||||
|
||||
// 启动Redis任务监听 (只有 Input DB 需要监听)
|
||||
if (!redis_comm_->startListening()) {
|
||||
std::cerr << "[VisionController] Redis listening start failed" << std::endl;
|
||||
return false;
|
||||
}
|
||||
std::cout << "[VisionController] Redis listening started" << std::endl;
|
||||
|
||||
running_ = true;
|
||||
std::cout << "[VisionController] Vision system started successfully" << std::endl;
|
||||
return true;
|
||||
}
|
||||
|
||||
void VisionController::stop() {
|
||||
if (!running_) {
|
||||
return;
|
||||
}
|
||||
|
||||
std::cout << "[VisionController] Stopping Vision system..." << std::endl;
|
||||
|
||||
running_ = false;
|
||||
|
||||
// 停止Redis监听
|
||||
if (redis_comm_) {
|
||||
redis_comm_->stopListening();
|
||||
}
|
||||
// redis_result_comm_ 不需要专门停止,因为它不跑监听线程,析构时会自动断开
|
||||
|
||||
if (task_manager_) {
|
||||
task_manager_->stopCurrentTask();
|
||||
}
|
||||
|
||||
std::cout << "[VisionController] Vision system stopped" << std::endl;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 检查系统是否正在运行
|
||||
*
|
||||
* @return true 系统正在运行,false 系统已停止
|
||||
*/
|
||||
bool VisionController::isRunning() const {
|
||||
return running_;
|
||||
}
|
||||
|
||||
/**
|
||||
* @brief 任务接收回调函数
|
||||
*
|
||||
* 当Redis检测到新任务时,RedisCommunicator会调用此函数
|
||||
* 此函数将任务转发给TaskManager处理
|
||||
*
|
||||
* @param task_data 任务数据,包含flag、side、task_time
|
||||
*
|
||||
* @note 此函数运行在Redis监听线程中,需要快速返回,避免阻塞监听
|
||||
* @note TaskManager使用任务队列异步处理任务,不会阻塞此函数
|
||||
*/
|
||||
void VisionController::onTaskReceived(const RedisTaskData& task_data) {
|
||||
// 如果系统未处于运行状态,忽略任务,防止在初始化/停止阶段拉起任务
|
||||
if (!running_) {
|
||||
std::cout << "[VisionController] Received task while system not running, ignoring. flag="
|
||||
<< task_data.flag << ", side=" << task_data.side << std::endl;
|
||||
return;
|
||||
}
|
||||
|
||||
// 设备未就绪时也忽略任务,避免在相机尚未完全启动时触发算法导致异常
|
||||
if (!DeviceManager::getInstance().isRunning()) {
|
||||
std::cerr << "[VisionController] DeviceManager not running, ignoring task: flag="
|
||||
<< task_data.flag << ", side=" << task_data.side << std::endl;
|
||||
return;
|
||||
}
|
||||
|
||||
std::cout << "[VisionController] Received new task: flag=" << task_data.flag
|
||||
<< ", side=" << task_data.side << std::endl;
|
||||
|
||||
// 将任务转发给TaskManager处理
|
||||
// TaskManager会将任务加入队列,由执行线程异步处理
|
||||
if (task_manager_) {
|
||||
task_manager_->handleTask(task_data);
|
||||
}
|
||||
}
|
||||
|
||||
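// ---------------------------------------------------------------------------
// 使用示意(说明性注释,非本文件原有内容;调用方通常为 MainWindow,参数仅为示例):
//   VisionController controller;
//   if (controller.initialize("127.0.0.1", 6379, /*task_db=*/0, /*result_db=*/1) &&
//       controller.start()) {
//       // 运行期间由 Redis 监听线程通过回调驱动任务,无需轮询
//   }
//   controller.stop();  // 析构函数中也会自动调用
// ---------------------------------------------------------------------------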
81
image_capture/src/vision/vision_controller.h
Normal file
@@ -0,0 +1,81 @@
|
||||
#pragma once
|
||||
|
||||
#include <memory>
|
||||
#include <atomic>
|
||||
#include <string>
|
||||
#include "../redis/task_data.h"
|
||||
|
||||
class RedisCommunicator;
|
||||
class TaskManager;
|
||||
|
||||
/**
|
||||
* @brief Vision系统主控制器(唯一系统级控制器)
|
||||
*
|
||||
* 这是Vision系统的唯一系统级控制器,负责Redis监听和任务处理。
|
||||
*
|
||||
* 功能说明:
|
||||
* - 整合Redis通信和任务管理模块
|
||||
* - 协调各模块之间的交互和数据流
|
||||
* - 提供统一的系统启动和停止接口
|
||||
* - 管理Redis监听和任务处理的生命周期
|
||||
*
|
||||
* 架构层次:
|
||||
* - VisionController(系统控制器)-> 管理Redis和任务模块
|
||||
* - RedisCommunicator(通信模块)
|
||||
* - TaskManager(任务管理模块,包含结果处理)
|
||||
*
|
||||
* 注意:设备由MainWindow初始化和管理,VisionController直接使用DeviceManager单例。
|
||||
* TaskManager也直接使用DeviceManager单例获取图像。
|
||||
*/
|
||||
class VisionController {
|
||||
public:
|
||||
VisionController();
|
||||
~VisionController();
|
||||
|
||||
/**
|
||||
* 初始化Vision系统
|
||||
* @param redis_host Redis服务器地址
|
||||
* @param redis_port Redis服务器端口
|
||||
* @param task_db 任务监听Redis数据库编号
|
||||
* @param result_db 结果写入Redis数据库编号
|
||||
* @return 是否成功初始化
|
||||
*/
|
||||
bool initialize(const std::string& redis_host = "127.0.0.1",
|
||||
int redis_port = 6379,
|
||||
int task_db = 0,
|
||||
int result_db = 1);
|
||||
|
||||
/**
|
||||
* 启动Vision系统
|
||||
* @return 是否成功启动
|
||||
*/
|
||||
bool start();
|
||||
|
||||
/**
|
||||
* 停止Vision系统
|
||||
*/
|
||||
void stop();
|
||||
|
||||
/**
|
||||
* 检查系统是否正在运行
|
||||
*/
|
||||
bool isRunning() const;
|
||||
|
||||
private:
|
||||
/**
|
||||
* 任务回调函数(当Redis检测到新任务时调用)
|
||||
*/
|
||||
void onTaskReceived(const RedisTaskData& task_data);
|
||||
|
||||
// 各模块
|
||||
std::shared_ptr<RedisCommunicator> redis_comm_; // 任务监听 (Input)
|
||||
std::shared_ptr<RedisCommunicator> redis_result_comm_; // 结果写入 (Output)
|
||||
std::shared_ptr<TaskManager> task_manager_;
|
||||
|
||||
// 状态
|
||||
std::atomic<bool> running_;
|
||||
bool initialized_;
|
||||
};
|
||||
|
||||
1441
image_capture/third_party/mvs/Includes/CameraParams.h
vendored
Normal file
File diff suppressed because it is too large
3195
image_capture/third_party/mvs/Includes/MvCameraControl.h
vendored
Normal file
File diff suppressed because it is too large
122
image_capture/third_party/mvs/Includes/MvErrorDefine.h
vendored
Normal file
@@ -0,0 +1,122 @@
|
||||
|
||||
#ifndef _MV_ERROR_DEFINE_H_
|
||||
#define _MV_ERROR_DEFINE_H_
|
||||
|
||||
#include "MvISPErrorDefine.h"
|
||||
|
||||
/********************************************************************/
|
||||
/// \~chinese
|
||||
/// \name 正确码定义
|
||||
/// @{
|
||||
/// \~english
|
||||
/// \name Definition of correct code
|
||||
/// @{
|
||||
#define MV_OK 0x00000000 ///< \~chinese 成功,无错误 \~english Successed, no error
|
||||
/// @}
|
||||
|
||||
/********************************************************************/
|
||||
/// \~chinese
|
||||
/// \name 通用错误码定义:范围0x80000000-0x800000FF
|
||||
/// @{
|
||||
/// \~english
|
||||
/// \name Definition of General error code
|
||||
/// @{
|
||||
#define MV_E_HANDLE 0x80000000 ///< \~chinese 错误或无效的句柄 \~english Error or invalid handle
|
||||
#define MV_E_SUPPORT 0x80000001 ///< \~chinese 不支持的功能 \~english Not supported function
|
||||
#define MV_E_BUFOVER 0x80000002 ///< \~chinese 缓存已满 \~english Buffer overflow
|
||||
#define MV_E_CALLORDER 0x80000003 ///< \~chinese 函数调用顺序错误 \~english Function calling order error
|
||||
#define MV_E_PARAMETER 0x80000004 ///< \~chinese 错误的参数 \~english Incorrect parameter
|
||||
#define MV_E_RESOURCE 0x80000006 ///< \~chinese 资源申请失败 \~english Applying resource failed
|
||||
#define MV_E_NODATA 0x80000007 ///< \~chinese 无数据 \~english No data
|
||||
#define MV_E_PRECONDITION 0x80000008 ///< \~chinese 前置条件有误,或运行环境已发生变化 \~english Precondition error, or running environment changed
|
||||
#define MV_E_VERSION 0x80000009 ///< \~chinese 版本不匹配 \~english Version mismatches
|
||||
#define MV_E_NOENOUGH_BUF 0x8000000A ///< \~chinese 传入的内存空间不足 \~english Insufficient memory
|
||||
#define MV_E_ABNORMAL_IMAGE 0x8000000B ///< \~chinese 异常图像,可能是丢包导致图像不完整 \~english Abnormal image, maybe incomplete image because of lost packet
|
||||
#define MV_E_LOAD_LIBRARY 0x8000000C ///< \~chinese 动态导入DLL失败 \~english Load library failed
|
||||
#define MV_E_NOOUTBUF 0x8000000D ///< \~chinese 没有可输出的缓存 \~english No Avaliable Buffer
|
||||
#define MV_E_ENCRYPT 0x8000000E ///< \~chinese 加密错误 \~english Encryption error
|
||||
#define MV_E_OPENFILE 0x8000000F ///< \~chinese 打开文件出现错误 \~english open file error
|
||||
#define MV_E_BUF_IN_USE 0x80000010 ///< \~chinese 缓存地址已使用 \~english Buffer already in use
|
||||
#define MV_E_BUF_INVALID 0x80000011 ///< \~chinese 无效的缓存地址 \~english Buffer address invalid
|
||||
#define MV_E_NOALIGN_BUF 0x80000012 ///< \~chinese 缓存对齐异常 \~english Buffer alignmenterror error
|
||||
#define MV_E_NOENOUGH_BUF_NUM 0x80000013 ///< \~chinese 缓存个数不足 \~english Insufficient cache count
|
||||
#define MV_E_PORT_IN_USE 0x80000014 ///< \~chinese 串口被占用 \~english Port is in use
|
||||
#define MV_E_IMAGE_DECODEC 0x80000015 ///< \~chinese 解码错误(SDK校验图像异常)\~english Decoding error (SDK verification image exception)
|
||||
#define MV_E_UINT32_LIMIT 0x80000016 /// \~chinese 图像大小超过unsigned int返回,接口不支持
|
||||
#define MV_E_IMAGE_HEIGHT 0x80000017 /// \~chinese 图像高度异常(残帧丢弃) \~english image height anomaly (discard incomplete images)
|
||||
#define MV_E_NOENOUGH_DDR 0x80000018 ///< \~chinese DDR缓存不足 \~english The DDR cache is Insufficient
|
||||
#define MV_E_NOENOUGH_STREAM 0x80000019 ///< \~chinese 流通道不足 \~english The stream channel is Insufficient
|
||||
#define MV_E_NORESPONSE 0x8000001A ///< \~chinese 设备无响应 \~english No response from device
|
||||
|
||||
#define MV_E_UNKNOW 0x800000FF ///< \~chinese 未知的错误 \~english Unknown error
|
||||
/// @}
|
||||
|
||||
/********************************************************************/
|
||||
/// \~chinese
|
||||
/// \name GenICam系列错误:范围0x80000100-0x800001FF
|
||||
/// @{
|
||||
/// \~english
|
||||
/// \name GenICam Series Error Codes: Range from 0x80000100 to 0x800001FF
|
||||
/// @{
|
||||
#define MV_E_GC_GENERIC 0x80000100 ///< \~chinese 通用错误 \~english General error
|
||||
#define MV_E_GC_ARGUMENT 0x80000101 ///< \~chinese 参数非法 \~english Illegal parameters
|
||||
#define MV_E_GC_RANGE 0x80000102 ///< \~chinese 值超出范围 \~english The value is out of range
|
||||
#define MV_E_GC_PROPERTY 0x80000103 ///< \~chinese 属性 \~english Property
|
||||
#define MV_E_GC_RUNTIME 0x80000104 ///< \~chinese 运行环境有问题 \~english Running environment error
|
||||
#define MV_E_GC_LOGICAL 0x80000105 ///< \~chinese 逻辑错误 \~english Logical error
|
||||
#define MV_E_GC_ACCESS 0x80000106 ///< \~chinese 节点访问条件有误 \~english Node accessing condition error
|
||||
#define MV_E_GC_TIMEOUT 0x80000107 ///< \~chinese 超时 \~english Timeout
|
||||
#define MV_E_GC_DYNAMICCAST 0x80000108 ///< \~chinese 转换异常 \~english Transformation exception
|
||||
#define MV_E_GC_UNKNOW 0x800001FF ///< \~chinese GenICam未知错误 \~english GenICam unknown error
|
||||
/// @}
|
||||
|
||||
/********************************************************************/
|
||||
/// \~chinese
|
||||
/// \name GigE_STATUS对应的错误码:范围0x80000200-0x800002FF
|
||||
/// @{
|
||||
/// \~english
|
||||
/// \name GigE_STATUS Error Codes: Range from 0x80000200 to 0x800002FF
|
||||
/// @{
|
||||
#define MV_E_NOT_IMPLEMENTED 0x80000200 ///< \~chinese 命令不被设备支持 \~english The command is not supported by device
|
||||
#define MV_E_INVALID_ADDRESS 0x80000201 ///< \~chinese 访问的目标地址不存在 \~english The target address being accessed does not exist
|
||||
#define MV_E_WRITE_PROTECT 0x80000202 ///< \~chinese 目标地址不可写 \~english The target address is not writable
|
||||
#define MV_E_ACCESS_DENIED 0x80000203 ///< \~chinese 设备无访问权限 \~english No permission
|
||||
#define MV_E_BUSY 0x80000204 ///< \~chinese 设备忙,或网络断开 \~english Device is busy, or network disconnected
|
||||
#define MV_E_PACKET 0x80000205 ///< \~chinese 网络包数据错误 \~english Network data packet error
|
||||
#define MV_E_NETER 0x80000206 ///< \~chinese 网络相关错误 \~english Network error
|
||||
#define MV_E_SUPPORT_MODIFY_DEVICE_IP 0x8000020E ///< 在固定IP模式下不支持修改设备IP模式 \~english Current Mode Not Support Modify Ip
|
||||
#define MV_E_KEY_VERIFICATION 0x8000020F ///< \~chinese 秘钥校验错误 \~english SwitchKey error
|
||||
#define MV_E_IP_CONFLICT 0x80000221 ///< \~chinese 设备IP冲突 \~english Device IP conflict
|
||||
/// @}
|
||||
|
||||
/********************************************************************/
|
||||
/// \~chinese
|
||||
/// \name USB_STATUS对应的错误码:范围0x80000300-0x800003FF
|
||||
/// @{
|
||||
/// \~english
|
||||
/// \name USB_STATUS Error Codes: Range from 0x80000300 to 0x800003FF
|
||||
/// @{
|
||||
#define MV_E_USB_READ 0x80000300 ///< \~chinese 读usb出错 \~english Reading USB error
|
||||
#define MV_E_USB_WRITE 0x80000301 ///< \~chinese 写usb出错 \~english Writing USB error
|
||||
#define MV_E_USB_DEVICE 0x80000302 ///< \~chinese 设备异常 \~english Device exception
|
||||
#define MV_E_USB_GENICAM 0x80000303 ///< \~chinese GenICam相关错误 \~english GenICam error
|
||||
#define MV_E_USB_BANDWIDTH 0x80000304 ///< \~chinese 带宽不足 \~english Insufficient bandwidth
|
||||
#define MV_E_USB_DRIVER 0x80000305 ///< \~chinese 驱动不匹配或者未装驱动 \~english Driver mismatch or unmounted drive
|
||||
#define MV_E_USB_UNKNOW 0x800003FF ///< \~chinese USB未知的错误 \~english USB unknown error
|
||||
/// @}
|
||||
|
||||
/********************************************************************/
|
||||
/// \~chinese
|
||||
/// \name 升级时对应的错误码:范围0x80000400-0x800004FF
|
||||
/// @{
|
||||
/// \~english
|
||||
/// \name Upgrade Error Codes: Range from 0x80000400 to 0x800004FF
|
||||
/// @{
|
||||
#define MV_E_UPG_FILE_MISMATCH 0x80000400 ///< \~chinese 升级固件不匹配 \~english Firmware mismatches
|
||||
#define MV_E_UPG_LANGUSGE_MISMATCH 0x80000401 ///< \~chinese 升级固件语言不匹配 \~english Firmware language mismatches
|
||||
#define MV_E_UPG_CONFLICT 0x80000402 ///< \~chinese 升级冲突(设备已经在升级了再次请求升级即返回此错误) \~english Upgrading conflicted (repeated upgrading requests during device upgrade)
|
||||
#define MV_E_UPG_INNER_ERR 0x80000403 ///< \~chinese 升级时设备内部出现错误 \~english Camera internal error during upgrade
|
||||
#define MV_E_UPG_UNKNOW 0x800004FF ///< \~chinese 升级时未知错误 \~english Unknown error during upgrade
|
||||
/// @}
|
||||
|
||||
#endif //_MV_ERROR_DEFINE_H_
|
||||
98
image_capture/third_party/mvs/Includes/MvISPErrorDefine.h
vendored
Normal file
@@ -0,0 +1,98 @@
|
||||
|
||||
#ifndef _MV_ISP_ERROR_DEFINE_H_
|
||||
#define _MV_ISP_ERROR_DEFINE_H_
|
||||
|
||||
/************************************************************************
|
||||
* 来自ISP算法库的错误码
|
||||
************************************************************************/
|
||||
// 通用类型
|
||||
#define MV_ALG_OK 0x00000000 //处理正确
|
||||
#define MV_ALG_ERR 0x10000000 //不确定类型错误
|
||||
|
||||
// 能力检查
|
||||
#define MV_ALG_E_ABILITY_ARG 0x10000001 //能力集中存在无效参数
|
||||
|
||||
// 内存检查
|
||||
#define MV_ALG_E_MEM_NULL 0x10000002 //内存地址为空
|
||||
#define MV_ALG_E_MEM_ALIGN 0x10000003 //内存对齐不满足要求
|
||||
#define MV_ALG_E_MEM_LACK 0x10000004 //内存空间大小不够
|
||||
#define MV_ALG_E_MEM_SIZE_ALIGN 0x10000005 //内存空间大小不满足对齐要求
|
||||
#define MV_ALG_E_MEM_ADDR_ALIGN 0x10000006 //内存地址不满足对齐要求
|
||||
|
||||
// 图像检查
|
||||
#define MV_ALG_E_IMG_FORMAT 0x10000007 //图像格式不正确或者不支持
|
||||
#define MV_ALG_E_IMG_SIZE 0x10000008 //图像宽高不正确或者超出范围
|
||||
#define MV_ALG_E_IMG_STEP 0x10000009 //图像宽高与step参数不匹配
|
||||
#define MV_ALG_E_IMG_DATA_NULL 0x1000000A //图像数据存储地址为空
|
||||
|
||||
// 输入输出参数检查
|
||||
#define MV_ALG_E_CFG_TYPE 0x1000000B //设置或者获取参数类型不正确
|
||||
#define MV_ALG_E_CFG_SIZE 0x1000000C //设置或者获取参数的输入、输出结构体大小不正确
|
||||
#define MV_ALG_E_PRC_TYPE 0x1000000D //处理类型不正确
|
||||
#define MV_ALG_E_PRC_SIZE 0x1000000E //处理时输入、输出参数大小不正确
|
||||
#define MV_ALG_E_FUNC_TYPE 0x1000000F //子处理类型不正确
|
||||
#define MV_ALG_E_FUNC_SIZE 0x10000010 //子处理时输入、输出参数大小不正确
|
||||
|
||||
// 运行参数检查
|
||||
#define MV_ALG_E_PARAM_INDEX 0x10000011 //index参数不正确
|
||||
#define MV_ALG_E_PARAM_VALUE 0x10000012 //value参数不正确或者超出范围
|
||||
#define MV_ALG_E_PARAM_NUM 0x10000013 //param_num参数不正确
|
||||
|
||||
// 接口调用检查
|
||||
#define MV_ALG_E_NULL_PTR 0x10000014 //函数参数指针为空
|
||||
#define MV_ALG_E_OVER_MAX_MEM 0x10000015 //超过限定的最大内存
|
||||
#define MV_ALG_E_CALL_BACK 0x10000016 //回调函数出错
|
||||
|
||||
// 算法库加密相关检查
|
||||
#define MV_ALG_E_ENCRYPT 0x10000017 //加密错误
|
||||
#define MV_ALG_E_EXPIRE 0x10000018 //算法库使用期限错误
|
||||
|
||||
// 内部模块返回的基本错误类型
|
||||
#define MV_ALG_E_BAD_ARG 0x10000019 //参数范围不正确
|
||||
#define MV_ALG_E_DATA_SIZE 0x1000001A //数据大小不正确
|
||||
#define MV_ALG_E_STEP 0x1000001B //数据step不正确
|
||||
|
||||
// cpu指令集支持错误码
|
||||
#define MV_ALG_E_CPUID 0x1000001C //cpu不支持优化代码中的指令集
|
||||
|
||||
#define MV_ALG_WARNING 0x1000001D //警告
|
||||
|
||||
#define MV_ALG_E_TIME_OUT 0x1000001E //算法库超时
|
||||
#define MV_ALG_E_LIB_VERSION 0x1000001F //算法版本号出错
|
||||
#define MV_ALG_E_MODEL_VERSION 0x10000020 //模型版本号出错
|
||||
#define MV_ALG_E_GPU_MEM_ALLOC 0x10000021 //GPU内存分配错误
|
||||
#define MV_ALG_E_FILE_NON_EXIST 0x10000022 //文件不存在
|
||||
#define MV_ALG_E_NONE_STRING 0x10000023 //字符串为空
|
||||
#define MV_ALG_E_IMAGE_CODEC 0x10000024 //图像解码器错误
|
||||
#define MV_ALG_E_FILE_OPEN 0x10000025 //打开文件错误
|
||||
#define MV_ALG_E_FILE_READ 0x10000026 //文件读取错误
|
||||
#define MV_ALG_E_FILE_WRITE 0x10000027 //文件写错误
|
||||
#define MV_ALG_E_FILE_READ_SIZE 0x10000028 //文件读取大小错误
|
||||
#define MV_ALG_E_FILE_TYPE 0x10000029 //文件类型错误
|
||||
#define MV_ALG_E_MODEL_TYPE 0x1000002A //模型类型错误
|
||||
#define MV_ALG_E_MALLOC_MEM 0x1000002B //分配内存错误
|
||||
#define MV_ALG_E_BIND_CORE_FAILED 0x1000002C //线程绑核失败
|
||||
|
||||
// 降噪特有错误码
|
||||
#define MV_ALG_E_DENOISE_NE_IMG_FORMAT 0x10402001 //噪声特性图像格式错误
|
||||
#define MV_ALG_E_DENOISE_NE_FEATURE_TYPE 0x10402002 //噪声特性类型错误
|
||||
#define MV_ALG_E_DENOISE_NE_PROFILE_NUM 0x10402003 //噪声特性个数错误
|
||||
#define MV_ALG_E_DENOISE_NE_GAIN_NUM 0x10402004 //噪声特性增益个数错误
|
||||
#define MV_ALG_E_DENOISE_NE_GAIN_VAL 0x10402005 //噪声曲线增益值输入错误
|
||||
#define MV_ALG_E_DENOISE_NE_BIN_NUM 0x10402006 //噪声曲线柱数错误
|
||||
#define MV_ALG_E_DENOISE_NE_INIT_GAIN 0x10402007 //噪声估计初始化增益设置错误
|
||||
#define MV_ALG_E_DENOISE_NE_NOT_INIT 0x10402008 //噪声估计未初始化
|
||||
#define MV_ALG_E_DENOISE_COLOR_MODE 0x10402009 //颜色空间模式错误
|
||||
#define MV_ALG_E_DENOISE_ROI_NUM 0x1040200a //图像ROI个数错误
|
||||
#define MV_ALG_E_DENOISE_ROI_ORI_PT 0x1040200b //图像ROI原点错误
|
||||
#define MV_ALG_E_DENOISE_ROI_SIZE 0x1040200c //图像ROI大小错误
|
||||
#define MV_ALG_E_DENOISE_GAIN_NOT_EXIST 0x1040200d //输入的相机增益不存在(增益个数已达上限)
|
||||
#define MV_ALG_E_DENOISE_GAIN_BEYOND_RANGE 0x1040200e //输入的相机增益不在范围内
|
||||
#define MV_ALG_E_DENOISE_NP_BUF_SIZE 0x1040200f //输入的噪声特性内存大小错误
|
||||
|
||||
// 去紫边特有错误码
|
||||
#define MV_ALG_E_PFC_ROI_PT 0x10405000 //去紫边算法ROI原点错误
|
||||
#define MV_ALG_E_PFC_ROI_SIZE 0x10405001 //去紫边算法ROI大小错误
|
||||
#define MV_ALG_E_PFC_KERNEL_SIZE 0x10405002 //去紫边算法滤波核尺寸错误
|
||||
|
||||
#endif //_MV_ISP_ERROR_DEFINE_H_
|
||||
2148
image_capture/third_party/mvs/Includes/MvObsoleteInterfaces.h
vendored
Normal file
File diff suppressed because it is too large
655
image_capture/third_party/mvs/Includes/ObsoleteCamParams.h
vendored
Normal file
@@ -0,0 +1,655 @@
|
||||
|
||||
#ifndef _MV_OBSOLETE_CAM_PARAMS_H_
|
||||
#define _MV_OBSOLETE_CAM_PARAMS_H_
|
||||
|
||||
#include "PixelType.h"
|
||||
|
||||
/// \~chinese 输出帧的信息 \~english Output Frame Information
|
||||
typedef struct _MV_FRAME_OUT_INFO_
|
||||
{
|
||||
unsigned short nWidth; ///< [OUT] \~chinese 图像宽 \~english Image Width
|
||||
unsigned short nHeight; ///< [OUT] \~chinese 图像高 \~english Image Height
|
||||
enum MvGvspPixelType enPixelType; ///< [OUT] \~chinese 像素格式 \~english Pixel Type
|
||||
|
||||
unsigned int nFrameNum; ///< [OUT] \~chinese 帧号 \~english Frame Number
|
||||
unsigned int nDevTimeStampHigh; ///< [OUT] \~chinese 时间戳高32位 \~english Timestamp high 32 bits
|
||||
unsigned int nDevTimeStampLow; ///< [OUT] \~chinese 时间戳低32位 \~english Timestamp low 32 bits
|
||||
unsigned int nReserved0; ///< [OUT] \~chinese 保留,8字节对齐 \~english Reserved, 8-byte aligned
|
||||
int64_t nHostTimeStamp; ///< [OUT] \~chinese 主机生成的时间戳 \~english Host-generated timestamp
|
||||
|
||||
unsigned int nFrameLen;
|
||||
|
||||
unsigned int nLostPacket; // 本帧丢包数
|
||||
unsigned int nReserved[2];
|
||||
}MV_FRAME_OUT_INFO;
|
||||
|
||||
/// \~chinese 保存图片参数 \~english Save image type
|
||||
typedef struct _MV_SAVE_IMAGE_PARAM_T_
|
||||
{
|
||||
unsigned char* pData; ///< [IN] \~chinese 输入数据缓存 \~english Input Data Buffer
|
||||
unsigned int nDataLen; ///< [IN] \~chinese 输入数据大小 \~english Input Data Size
|
||||
enum MvGvspPixelType enPixelType; ///< [IN] \~chinese 输入像素格式 \~english Input Data Pixel Format
|
||||
unsigned short nWidth; ///< [IN] \~chinese 图像宽 \~english Image Width
|
||||
unsigned short nHeight; ///< [IN] \~chinese 图像高 \~english Image Height
|
||||
|
||||
unsigned char* pImageBuffer; ///< [OUT] \~chinese 输出图片缓存 \~english Output Image Buffer
|
||||
unsigned int nImageLen; ///< [OUT] \~chinese 输出图片大小 \~english Output Image Size
|
||||
unsigned int nBufferSize; ///< [IN] \~chinese 提供的输出缓冲区大小 \~english Output buffer size provided
|
||||
enum MV_SAVE_IAMGE_TYPE enImageType; ///< [IN] \~chinese 输出图片格式 \~english Output Image Format
|
||||
|
||||
}MV_SAVE_IMAGE_PARAM;
|
||||
|
||||
typedef struct _MV_IMAGE_BASIC_INFO_
|
||||
{
|
||||
unsigned short nWidthValue;
|
||||
unsigned short nWidthMin;
|
||||
unsigned int nWidthMax;
|
||||
unsigned int nWidthInc;
|
||||
|
||||
unsigned int nHeightValue;
|
||||
unsigned int nHeightMin;
|
||||
unsigned int nHeightMax;
|
||||
unsigned int nHeightInc;
|
||||
|
||||
float fFrameRateValue;
|
||||
float fFrameRateMin;
|
||||
float fFrameRateMax;
|
||||
|
||||
unsigned int enPixelType; ///< [OUT] \~chinese 当前的像素格式 \~english Current pixel format
|
||||
unsigned int nSupportedPixelFmtNum; ///< [OUT] \~chinese 支持的像素格式种类 \~english Support pixel format
|
||||
unsigned int enPixelList[MV_MAX_XML_SYMBOLIC_NUM];
|
||||
unsigned int nReserved[8];
|
||||
|
||||
}MV_IMAGE_BASIC_INFO;
|
||||
|
||||
|
||||
/// \~chinese 噪声特性类型 \~english Noise feature type
|
||||
typedef enum _MV_CC_BAYER_NOISE_FEATURE_TYPE
|
||||
{
|
||||
MV_CC_BAYER_NOISE_FEATURE_TYPE_INVALID = 0, ///< \~chinese 无效值 \~english Invalid
|
||||
MV_CC_BAYER_NOISE_FEATURE_TYPE_PROFILE = 1, ///< \~chinese 噪声曲线 \~english Noise curve
|
||||
MV_CC_BAYER_NOISE_FEATURE_TYPE_LEVEL = 2, ///< \~chinese 噪声水平 \~english Noise level
|
||||
MV_CC_BAYER_NOISE_FEATURE_TYPE_DEFAULT = 1, ///< \~chinese 默认值 \~english Default
|
||||
|
||||
}MV_CC_BAYER_NOISE_FEATURE_TYPE;
|
||||
|
||||
/// \~chinese Bayer格式降噪特性信息 \~english Denoise profile info
|
||||
typedef struct _MV_CC_BAYER_NOISE_PROFILE_INFO_T_
|
||||
{
|
||||
unsigned int nVersion; ///< \~chinese 版本 \~english version
|
||||
MV_CC_BAYER_NOISE_FEATURE_TYPE enNoiseFeatureType; ///< \~chinese 噪声特性类型 \~english noise feature type
|
||||
enum MvGvspPixelType enPixelType; ///< \~chinese 图像格式 \~english image format
|
||||
int nNoiseLevel; ///< \~chinese 平均噪声水平 \~english noise level
|
||||
unsigned int nCurvePointNum; ///< \~chinese 曲线点数 \~english curve point number
|
||||
int* nNoiseCurve; ///< \~chinese 噪声曲线 \~english noise curve
|
||||
int* nLumCurve; ///< \~chinese 亮度曲线 \~english luminance curve
|
||||
|
||||
unsigned int nRes[8]; ///< \~chinese 预留 \~english Reserved
|
||||
|
||||
}MV_CC_BAYER_NOISE_PROFILE_INFO;
|
||||
|
||||
/// \~chinese Bayer格式噪声估计参数 \~english Bayer noise estimate param
|
||||
typedef struct _MV_CC_BAYER_NOISE_ESTIMATE_PARAM_T_
|
||||
{
|
||||
unsigned int nWidth; ///< [IN] \~chinese 图像宽(大于等于8) \~english Width
|
||||
unsigned int nHeight; ///< [IN] \~chinese 图像高(大于等于8) \~english Height
|
||||
enum MvGvspPixelType enPixelType; ///< [IN] \~chinese 像素格式 \~english Pixel format
|
||||
|
||||
unsigned char* pSrcData; ///< [IN] \~chinese 输入数据缓存 \~english Input data buffer
|
||||
unsigned int nSrcDataLen; ///< [IN] \~chinese 输入数据大小 \~english Input data size
|
||||
|
||||
unsigned int nNoiseThreshold; ///< [IN] \~chinese 噪声阈值(0-4095) \~english Noise Threshold
|
||||
|
||||
unsigned char* pCurveBuf; ///< [IN] \~chinese 用于存储噪声曲线和亮度曲线(需要外部分配,缓存大小:4096 * sizeof(int) * 2) \~english Buffer used to store noise and brightness curves, size:4096 * sizeof(int) * 2)
|
||||
MV_CC_BAYER_NOISE_PROFILE_INFO stNoiseProfile; ///< [OUT] \~chinese 降噪特性信息 \~english Denoise profile
|
||||
|
||||
unsigned int nThreadNum; ///< [IN] \~chinese 线程数量,0表示算法库根据硬件自适应;1表示单线程(默认);大于1表示线程数目 \~english Thread number, 0 means that the library is adaptive to the hardware, 1 means single thread(Default value), Greater than 1 indicates the number of threads
|
||||
|
||||
unsigned int nRes[8]; ///< \~chinese 预留 \~english Reserved
|
||||
|
||||
}MV_CC_BAYER_NOISE_ESTIMATE_PARAM;
|
||||
|
||||
/// \~chinese Bayer格式空域降噪参数 \~english Bayer spatial Denoise param
|
||||
typedef struct _MV_CC_BAYER_SPATIAL_DENOISE_PARAM_T_
|
||||
{
|
||||
unsigned int nWidth; ///< [IN] \~chinese 图像宽(大于等于8) \~english Width
|
||||
unsigned int nHeight; ///< [IN] \~chinese 图像高(大于等于8) \~english Height
|
||||
enum MvGvspPixelType enPixelType; ///< [IN] \~chinese 像素格式 \~english Pixel format
|
||||
|
||||
unsigned char* pSrcData; ///< [IN] \~chinese 输入数据缓存 \~english Input data buffer
|
||||
unsigned int nSrcDataLen; ///< [IN] \~chinese 输入数据大小 \~english Input data size
|
||||
|
||||
unsigned char* pDstBuf; ///< [OUT] \~chinese 输出降噪后的数据 \~english Output data buffer
|
||||
unsigned int nDstBufSize; ///< [IN] \~chinese 提供的输出缓冲区大小 \~english Provided output buffer size
|
||||
unsigned int nDstBufLen; ///< [OUT] \~chinese 输出降噪后的数据长度 \~english Output data length
|
||||
|
||||
MV_CC_BAYER_NOISE_PROFILE_INFO stNoiseProfile; ///< [IN] \~chinese 降噪特性信息(来源于噪声估计) \~english Denoise profile
|
||||
unsigned int nDenoiseStrength; ///< [IN] \~chinese 降噪强度(0-100) \~english nDenoise Strength
|
||||
unsigned int nSharpenStrength; ///< [IN] \~chinese 锐化强度(0-32) \~english Sharpen Strength
|
||||
unsigned int nNoiseCorrect; ///< [IN] \~chinese 噪声校正系数(0-1280) \~english Noise Correct
|
||||
|
||||
unsigned int nThreadNum; ///< [IN] \~chinese 线程数量,0表示算法库根据硬件自适应;1表示单线程(默认);大于1表示线程数目 \~english Thread number, 0 means that the library is adaptive to the hardware, 1 means single thread(Default value), Greater than 1 indicates the number of threads
|
||||
|
||||
unsigned int nRes[8]; ///< \~chinese 预留 \~english Reserved
|
||||
|
||||
}MV_CC_BAYER_SPATIAL_DENOISE_PARAM;
|
||||
|
||||
/// \~chinese CLUT参数 \~english CLUT param
|
||||
typedef struct _MV_CC_CLUT_PARAM_T_
|
||||
{
|
||||
bool bCLUTEnable; ///< [IN] \~chinese 是否启用CLUT \~english CLUT enable
|
||||
unsigned int nCLUTScale; ///< [IN] \~chinese 量化系数(2的整数幂,最大65536) \~english Quantitative scale(Integer power of 2, <= 65536)
|
||||
unsigned int nCLUTSize; ///< [IN] \~chinese CLUT大小,目前仅支持17 \~english CLUT size, currently only supports 17
|
||||
unsigned char* pCLUTBuf; ///< [IN] \~chinese 量化CLUT表 \~english CLUT buffer
|
||||
unsigned int nCLUTBufLen; ///< [IN] \~chinese 量化CLUT缓存大小(nCLUTSize*nCLUTSize*nCLUTSize*sizeof(int)*3) \~english CLUT buffer length(nCLUTSize*nCLUTSize*nCLUTSize*sizeof(int)*3)
|
||||
|
||||
unsigned int nRes[8]; ///< \~chinese 预留 \~english Reserved
|
||||
|
||||
}MV_CC_CLUT_PARAM;
|
||||
|
||||
/// \~chinese 锐化结构体 \~english Sharpen structure
|
||||
typedef struct _MV_CC_SHARPEN_PARAM_T_
|
||||
{
|
||||
unsigned int nWidth; ///< [IN] \~chinese 图像宽度(最小8) \~english Image Width
|
||||
unsigned int nHeight; ///< [IN] \~chinese 图像高度(最小8) \~english Image Height
|
||||
unsigned char* pSrcBuf; ///< [IN] \~chinese 输入数据缓存 \~english Input data buffer
|
||||
unsigned int nSrcBufLen; ///< [IN] \~chinese 输入数据大小 \~english Input data length
|
||||
enum MvGvspPixelType enPixelType; ///< [IN] \~chinese 像素格式 \~english Pixel format
|
||||
|
||||
unsigned char* pDstBuf; ///< [OUT] \~chinese 输出数据缓存 \~english Output data buffer
|
||||
unsigned int nDstBufSize; ///< [IN] \~chinese 提供的输出缓冲区大小 \~english Provided output buffer size
|
||||
unsigned int nDstBufLen; ///< [OUT] \~chinese 输出数据长度 \~english Output data length
|
||||
|
||||
unsigned int nSharpenAmount; ///< [IN] \~chinese 锐度调节强度,[0,500] \~english Sharpen amount,[0,500] // [nSharpenAmount 作废, 使用 nSharpenPosAmount & nSharpenNegAmount 替代 ]
|
||||
unsigned int nSharpenRadius; ///< [IN] \~chinese 锐度调节半径(半径越大,耗时越长),[1,21] \~english Sharpen radius(The larger the radius, the longer it takes),[1,21]
|
||||
unsigned int nSharpenThreshold; ///< [IN] \~chinese 锐度调节阈值,[0,255] \~english Sharpen threshold,[0,255]
|
||||
|
||||
|
||||
unsigned int nSharpenPosAmount; // [IN] 锐度调节正向强度,范围:[0, 500]
|
||||
unsigned int nSharpenNegAmount; // [IN] 锐度调节负向强度,范围:[0, 500]
|
||||
|
||||
unsigned int nRes[6]; ///< \~chinese 预留 \~english Reserved
|
||||
|
||||
}MV_CC_SHARPEN_PARAM;
|
||||
|
||||
/// \~chinese 色彩校正结构体 \~english Color correct structure
|
||||
typedef struct _MV_CC_COLOR_CORRECT_PARAM_T_
|
||||
{
|
||||
    unsigned int nWidth;                    ///< [IN] \~chinese 图像宽度 \~english Image Width
    unsigned int nHeight;                   ///< [IN] \~chinese 图像高度 \~english Image Height
    unsigned char* pSrcBuf;                 ///< [IN] \~chinese 输入数据缓存 \~english Input data buffer
    unsigned int nSrcBufLen;                ///< [IN] \~chinese 输入数据大小 \~english Input data length
    enum MvGvspPixelType enPixelType;       ///< [IN] \~chinese 像素格式 \~english Pixel format

    unsigned char* pDstBuf;                 ///< [OUT] \~chinese 输出数据缓存 \~english Output data buffer
    unsigned int nDstBufSize;               ///< [IN] \~chinese 提供的输出缓冲区大小 \~english Provided output buffer size
    unsigned int nDstBufLen;                ///< [OUT] \~chinese 输出数据长度 \~english Output data length

    unsigned int nImageBit;                 ///< [IN] \~chinese 有效图像位数(8,10,12,16) \~english Image bit(8 or 10 or 12 or 16)
    MV_CC_GAMMA_PARAM stGammaParam;         ///< [IN] \~chinese Gamma信息 \~english Gamma info
    MV_CC_CCM_PARAM_EX stCCMParam;          ///< [IN] \~chinese CCM信息 \~english CCM info
    MV_CC_CLUT_PARAM stCLUTParam;           ///< [IN] \~chinese CLUT信息 \~english CLUT info

    unsigned int nRes[8];                   ///< \~chinese 预留 \~english Reserved

}MV_CC_COLOR_CORRECT_PARAM;

/// \~chinese 矩形ROI结构体 \~english Rect ROI structure
typedef struct _MV_CC_RECT_I_
{
    unsigned int nX;                        ///< \~chinese 矩形左上角X轴坐标 \~english X Position
    unsigned int nY;                        ///< \~chinese 矩形左上角Y轴坐标 \~english Y Position
    unsigned int nWidth;                    ///< \~chinese 矩形宽度 \~english Rect Width
    unsigned int nHeight;                   ///< \~chinese 矩形高度 \~english Rect Height

}MV_CC_RECT_I;

/// \~chinese 噪声估计结构体 \~english Noise estimate structure
typedef struct _MV_CC_NOISE_ESTIMATE_PARAM_T_
{
    unsigned int nWidth;                    ///< [IN] \~chinese 图像宽度(最小8) \~english Image Width
    unsigned int nHeight;                   ///< [IN] \~chinese 图像高度(最小8) \~english Image Height
    enum MvGvspPixelType enPixelType;       ///< [IN] \~chinese 像素格式 \~english Pixel format
    unsigned char* pSrcBuf;                 ///< [IN] \~chinese 输入数据缓存 \~english Input data buffer
    unsigned int nSrcBufLen;                ///< [IN] \~chinese 输入数据大小 \~english Input data length

    MV_CC_RECT_I* pstROIRect;               ///< [IN] \~chinese 图像ROI \~english Image ROI
    unsigned int nROINum;                   ///< [IN] \~chinese ROI个数 \~english ROI number

    ///< \~chinese Bayer域噪声估计参数,Mono8/RGB域无效 \~english Bayer Noise estimate param,Mono8/RGB formats are invalid
    unsigned int nNoiseThreshold;           ///< [IN] \~chinese 噪声阈值[0,4095] \~english Noise threshold[0,4095]
                                            ///< \~chinese 建议值:8bit,0xE0;10bit,0x380;12bit,0xE00 \~english Suggestive value:8bit,0xE0;10bit,0x380;12bit,0xE00

    unsigned char* pNoiseProfile;           ///< [OUT] \~chinese 输出噪声特性 \~english Output Noise Profile
    unsigned int nNoiseProfileSize;         ///< [IN] \~chinese 提供的输出缓冲区大小 \~english Provided output buffer size
    unsigned int nNoiseProfileLen;          ///< [OUT] \~chinese 输出噪声特性长度 \~english Output Noise Profile length

    unsigned int nRes[8];                   ///< \~chinese 预留 \~english Reserved

}MV_CC_NOISE_ESTIMATE_PARAM;

/// \~chinese 空域降噪结构体 \~english Spatial denoise structure
typedef struct _MV_CC_SPATIAL_DENOISE_PARAM_T_
{
    unsigned int nWidth;                    ///< [IN] \~chinese 图像宽度(最小8) \~english Image Width
    unsigned int nHeight;                   ///< [IN] \~chinese 图像高度(最小8) \~english Image Height
    enum MvGvspPixelType enPixelType;       ///< [IN] \~chinese 像素格式 \~english Pixel format
    unsigned char* pSrcBuf;                 ///< [IN] \~chinese 输入数据缓存 \~english Input data buffer
    unsigned int nSrcBufLen;                ///< [IN] \~chinese 输入数据大小 \~english Input data length

    unsigned char* pDstBuf;                 ///< [OUT] \~chinese 输出降噪后的数据 \~english Output data buffer
    unsigned int nDstBufSize;               ///< [IN] \~chinese 提供的输出缓冲区大小 \~english Provided output buffer size
    unsigned int nDstBufLen;                ///< [OUT] \~chinese 输出降噪后的数据长度 \~english Output data length

    unsigned char* pNoiseProfile;           ///< [IN] \~chinese 输入噪声特性 \~english Input Noise Profile
    unsigned int nNoiseProfileLen;          ///< [IN] \~chinese 输入噪声特性长度 \~english Input Noise Profile length

    ///< \~chinese Bayer域空域降噪参数,Mono8/RGB域无效 \~english Bayer Spatial denoise param,Mono8/RGB formats are invalid
    unsigned int nBayerDenoiseStrength;     ///< [IN] \~chinese 降噪强度[0,100] \~english Denoise Strength[0,100]
    unsigned int nBayerSharpenStrength;     ///< [IN] \~chinese 锐化强度[0,32] \~english Sharpen Strength[0,32]
    unsigned int nBayerNoiseCorrect;        ///< [IN] \~chinese 噪声校正系数[0,1280] \~english Noise Correct[0,1280]

    ///< \~chinese Mono8/RGB域空域降噪参数,Bayer域无效 \~english Mono8/RGB Spatial denoise param,Bayer formats are invalid
    unsigned int nNoiseCorrectLum;          ///< [IN] \~chinese 亮度校正系数[1,2000] \~english Noise Correct Lum[1,2000]
    unsigned int nNoiseCorrectChrom;        ///< [IN] \~chinese 色调校正系数[1,2000] \~english Noise Correct Chrom[1,2000]
    unsigned int nStrengthLum;              ///< [IN] \~chinese 亮度降噪强度[0,100] \~english Strength Lum[0,100]
    unsigned int nStrengthChrom;            ///< [IN] \~chinese 色调降噪强度[0,100] \~english Strength Chrom[0,100]
    unsigned int nStrengthSharpen;          ///< [IN] \~chinese 锐化强度[1,1000] \~english Strength Sharpen[1,1000]

    unsigned int nRes[8];                   ///< \~chinese 预留 \~english Reserved

}MV_CC_SPATIAL_DENOISE_PARAM;

/// \~chinese LSC标定结构体 \~english LSC calib structure
typedef struct _MV_CC_LSC_CALIB_PARAM_T_
{
    unsigned int nWidth;                    ///< [IN] \~chinese 图像宽度[16,65535] \~english Image Width
    unsigned int nHeight;                   ///< [IN] \~chinese 图像高度[16,65535] \~english Image Height
    enum MvGvspPixelType enPixelType;       ///< [IN] \~chinese 像素格式 \~english Pixel format
    unsigned char* pSrcBuf;                 ///< [IN] \~chinese 输入数据缓存 \~english Input data buffer
    unsigned int nSrcBufLen;                ///< [IN] \~chinese 输入数据长度 \~english Input data length

    unsigned char* pCalibBuf;               ///< [OUT] \~chinese 输出标定表缓存 \~english Output calib buffer
    unsigned int nCalibBufSize;             ///< [IN] \~chinese 提供的标定表缓冲大小(nWidth*nHeight*sizeof(unsigned short)) \~english Provided output buffer size
    unsigned int nCalibBufLen;              ///< [OUT] \~chinese 输出标定表缓存长度 \~english Output calib buffer length

    unsigned int nSecNumW;                  ///< [IN] \~chinese 宽度分块数 \~english Width Sec num
    unsigned int nSecNumH;                  ///< [IN] \~chinese 高度分块数 \~english Height Sec num
    unsigned int nPadCoef;                  ///< [IN] \~chinese 边缘填充系数[1,5] \~english Pad Coef[1,5]
    unsigned int nCalibMethod;              ///< [IN] \~chinese 标定方式(0-中心为基准;1-最亮区域为基准;2-目标亮度为基准) \~english Calib method
    unsigned int nTargetGray;               ///< [IN] \~chinese 目标亮度(标定方式为2时有效) \~english Target Gray
                                            ///< \~chinese 8位,范围:[0,255] \~english 8bit,range:[0,255]
                                            ///< \~chinese 10位,范围:[0,1023] \~english 10bit,range:[0,1023]
                                            ///< \~chinese 12位,范围:[0,4095] \~english 12bit,range:[0,4095]

    unsigned int nRes[8];                   ///< \~chinese 预留 \~english Reserved

}MV_CC_LSC_CALIB_PARAM;

/// \~chinese LSC校正结构体 \~english LSC correct structure
typedef struct _MV_CC_LSC_CORRECT_PARAM_T_
{
    unsigned int nWidth;                    ///< [IN] \~chinese 图像宽度[16,65535] \~english Image Width
    unsigned int nHeight;                   ///< [IN] \~chinese 图像高度[16,65535] \~english Image Height
    enum MvGvspPixelType enPixelType;       ///< [IN] \~chinese 像素格式 \~english Pixel format
    unsigned char* pSrcBuf;                 ///< [IN] \~chinese 输入数据缓存 \~english Input data buffer
    unsigned int nSrcBufLen;                ///< [IN] \~chinese 输入数据长度 \~english Input data length

    unsigned char* pDstBuf;                 ///< [OUT] \~chinese 输出数据缓存 \~english Output data buffer
    unsigned int nDstBufSize;               ///< [IN] \~chinese 提供的输出缓冲区大小 \~english Provided output buffer size
    unsigned int nDstBufLen;                ///< [OUT] \~chinese 输出数据长度 \~english Output data length

    unsigned char* pCalibBuf;               ///< [IN] \~chinese 输入标定表缓存 \~english Input calib buffer
    unsigned int nCalibBufLen;              ///< [IN] \~chinese 输入标定表缓存长度 \~english Input calib buffer length

    unsigned int nRes[8];                   ///< \~chinese 预留 \~english Reserved

}MV_CC_LSC_CORRECT_PARAM;

/// \~chinese 某个节点对应的子节点个数最大值 \~english The maximum number of child nodes corresponding to a node
#define MV_MAX_XML_NODE_NUM_C           128

/// \~chinese 节点名称字符串最大长度 \~english The maximum length of node name string
#define MV_MAX_XML_NODE_STRLEN_C        64

/// \~chinese 节点String值最大长度 \~english The maximum length of Node String
#define MV_MAX_XML_STRVALUE_STRLEN_C    64

/// \~chinese 节点描述字段最大长度 \~english The maximum length of the node description field
#define MV_MAX_XML_DISC_STRLEN_C        512

/// \~chinese 最多的单元数 \~english The maximum number of units
#define MV_MAX_XML_ENTRY_NUM            10

/// \~chinese 父节点个数上限 \~english The maximum number of parent nodes
#define MV_MAX_XML_PARENTS_NUM          8

/// \~chinese 每个已经实现单元的名称长度 \~english The length of the name of each unit that has been implemented
#define MV_MAX_XML_SYMBOLIC_STRLEN_C    64

enum MV_XML_Visibility
{
    V_Beginner  = 0,        ///< Always visible
    V_Expert    = 1,        ///< Visible for experts or Gurus
    V_Guru      = 2,        ///< Visible for Gurus
    V_Invisible = 3,        ///< Not Visible
    V_Undefined = 99        ///< Object is not yet initialized
};

/// \~chinese 单个节点基本属性 | en:Single Node Basic Attributes
typedef struct _MV_XML_NODE_FEATURE_
{
    enum MV_XML_InterfaceType enType;                   ///< \~chinese 节点类型 \~english Node Type
    enum MV_XML_Visibility enVisivility;                ///< \~chinese 是否可见 \~english Is visibility
    char strDescription[MV_MAX_XML_DISC_STRLEN_C];      ///< \~chinese 节点描述,目前暂不支持 \~english Node Description, NOT SUPPORT NOW
    char strDisplayName[MV_MAX_XML_NODE_STRLEN_C];      ///< \~chinese 显示名称 \~english Display Name
    char strName[MV_MAX_XML_NODE_STRLEN_C];             ///< \~chinese 节点名 \~english Node Name
    char strToolTip[MV_MAX_XML_DISC_STRLEN_C];          ///< \~chinese 提示 \~english Notice

    unsigned int nReserved[4];
}MV_XML_NODE_FEATURE;

/// \~chinese 节点列表 | en:Node List
typedef struct _MV_XML_NODES_LIST_
{
    unsigned int nNodeNum;                              ///< \~chinese 节点个数 \~english Node Number
    MV_XML_NODE_FEATURE stNodes[MV_MAX_XML_NODE_NUM_C];
}MV_XML_NODES_LIST;

typedef struct _MV_XML_FEATURE_Value_
{
    enum MV_XML_InterfaceType enType;                   ///< \~chinese 节点类型 \~english Node Type
    char strDescription[MV_MAX_XML_DISC_STRLEN_C];      ///< \~chinese 节点描述,目前暂不支持 \~english Node Description, NOT SUPPORT NOW
    char strDisplayName[MV_MAX_XML_NODE_STRLEN_C];      ///< \~chinese 显示名称 \~english Display Name
    char strName[MV_MAX_XML_NODE_STRLEN_C];             ///< \~chinese 节点名 \~english Node Name
    char strToolTip[MV_MAX_XML_DISC_STRLEN_C];          ///< \~chinese 提示 \~english Notice
    unsigned int nReserved[4];
}MV_XML_FEATURE_Value;

typedef struct _MV_XML_FEATURE_Base_
{
    enum MV_XML_AccessMode enAccessMode;                ///< \~chinese 访问模式 \~english Access Mode
}MV_XML_FEATURE_Base;

typedef struct _MV_XML_FEATURE_Integer_
{
    char strName[MV_MAX_XML_NODE_STRLEN_C];
    char strDisplayName[MV_MAX_XML_NODE_STRLEN_C];
    char strDescription[MV_MAX_XML_DISC_STRLEN_C];      ///< \~chinese 目前暂不支持 \~english NOT SUPPORT NOW
    char strToolTip[MV_MAX_XML_DISC_STRLEN_C];

    enum MV_XML_Visibility enVisivility;                ///< \~chinese 是否可见 \~english Visible
    enum MV_XML_AccessMode enAccessMode;                ///< \~chinese 访问模式 \~english Access Mode
    int bIsLocked;                                      ///< \~chinese 是否锁定。0-否;1-是,目前暂不支持 \~english Locked. 0-NO; 1-YES, NOT SUPPORT NOW
    int64_t nValue;                                     ///< \~chinese 当前值 \~english Current Value
    int64_t nMinValue;                                  ///< \~chinese 最小值 \~english Min Value
    int64_t nMaxValue;                                  ///< \~chinese 最大值 \~english Max Value
    int64_t nIncrement;                                 ///< \~chinese 增量 \~english Increment

    unsigned int nReserved[4];

}MV_XML_FEATURE_Integer;

typedef struct _MV_XML_FEATURE_Boolean_
{
    char strName[MV_MAX_XML_NODE_STRLEN_C];
    char strDisplayName[MV_MAX_XML_NODE_STRLEN_C];
    char strDescription[MV_MAX_XML_DISC_STRLEN_C];      ///< \~chinese 目前暂不支持 \~english NOT SUPPORT NOW
    char strToolTip[MV_MAX_XML_DISC_STRLEN_C];

    enum MV_XML_Visibility enVisivility;                ///< \~chinese 是否可见 \~english Visible
    enum MV_XML_AccessMode enAccessMode;                ///< \~chinese 访问模式 \~english Access Mode
    int bIsLocked;                                      ///< \~chinese 是否锁定。0-否;1-是,目前暂不支持 \~english Locked. 0-NO; 1-YES, NOT SUPPORT NOW
    bool bValue;                                        ///< \~chinese 当前值 \~english Current Value

    unsigned int nReserved[4];
}MV_XML_FEATURE_Boolean;

typedef struct _MV_XML_FEATURE_Command_
{
    char strName[MV_MAX_XML_NODE_STRLEN_C];
    char strDisplayName[MV_MAX_XML_NODE_STRLEN_C];
    char strDescription[MV_MAX_XML_DISC_STRLEN_C];      ///< \~chinese 目前暂不支持 \~english NOT SUPPORT NOW
    char strToolTip[MV_MAX_XML_DISC_STRLEN_C];

    enum MV_XML_Visibility enVisivility;                ///< \~chinese 是否可见 \~english Visible
    enum MV_XML_AccessMode enAccessMode;                ///< \~chinese 访问模式 \~english Access Mode
    int bIsLocked;                                      ///< \~chinese 是否锁定。0-否;1-是,目前暂不支持 \~english Locked. 0-NO; 1-YES, NOT SUPPORT NOW

    unsigned int nReserved[4];
}MV_XML_FEATURE_Command;

typedef struct _MV_XML_FEATURE_Float_
{
    char strName[MV_MAX_XML_NODE_STRLEN_C];
    char strDisplayName[MV_MAX_XML_NODE_STRLEN_C];
    char strDescription[MV_MAX_XML_DISC_STRLEN_C];      ///< \~chinese 目前暂不支持 \~english NOT SUPPORT NOW
    char strToolTip[MV_MAX_XML_DISC_STRLEN_C];

    enum MV_XML_Visibility enVisivility;                ///< \~chinese 是否可见 \~english Visible
    enum MV_XML_AccessMode enAccessMode;                ///< \~chinese 访问模式 \~english Access Mode
    int bIsLocked;                                      ///< \~chinese 是否锁定。0-否;1-是,目前暂不支持 \~english Locked. 0-NO; 1-YES, NOT SUPPORT NOW
    double dfValue;                                     ///< \~chinese 当前值 \~english Current Value
    double dfMinValue;                                  ///< \~chinese 最小值 \~english Min Value
    double dfMaxValue;                                  ///< \~chinese 最大值 \~english Max Value
    double dfIncrement;                                 ///< \~chinese 增量 \~english Increment

    unsigned int nReserved[4];
}MV_XML_FEATURE_Float;

typedef struct _MV_XML_FEATURE_String_
{
    char strName[MV_MAX_XML_NODE_STRLEN_C];
    char strDisplayName[MV_MAX_XML_NODE_STRLEN_C];
    char strDescription[MV_MAX_XML_DISC_STRLEN_C];      ///< \~chinese 目前暂不支持 \~english NOT SUPPORT NOW
    char strToolTip[MV_MAX_XML_DISC_STRLEN_C];

    enum MV_XML_Visibility enVisivility;                ///< \~chinese 是否可见 \~english Visible
    enum MV_XML_AccessMode enAccessMode;                ///< \~chinese 访问模式 \~english Access Mode
    int bIsLocked;                                      ///< \~chinese 是否锁定。0-否;1-是,目前暂不支持 \~english Locked. 0-NO; 1-YES, NOT SUPPORT NOW
    char strValue[MV_MAX_XML_STRVALUE_STRLEN_C];        ///< \~chinese 当前值 \~english Current Value

    unsigned int nReserved[4];
}MV_XML_FEATURE_String;

typedef struct _MV_XML_FEATURE_Register_
{
    char strName[MV_MAX_XML_NODE_STRLEN_C];
    char strDisplayName[MV_MAX_XML_NODE_STRLEN_C];
    char strDescription[MV_MAX_XML_DISC_STRLEN_C];      ///< \~chinese 目前暂不支持 \~english NOT SUPPORT NOW
    char strToolTip[MV_MAX_XML_DISC_STRLEN_C];

    enum MV_XML_Visibility enVisivility;                ///< \~chinese 是否可见 \~english Visible
    enum MV_XML_AccessMode enAccessMode;                ///< \~chinese 访问模式 \~english Access Mode
    int bIsLocked;                                      ///< \~chinese 是否锁定。0-否;1-是,目前暂不支持 \~english Locked. 0-NO; 1-YES, NOT SUPPORT NOW
    int64_t nAddrValue;                                 ///< \~chinese 当前值 \~english Current Value

    unsigned int nReserved[4];
}MV_XML_FEATURE_Register;

typedef struct _MV_XML_FEATURE_Category_
{
    char strDescription[MV_MAX_XML_DISC_STRLEN_C];      ///< \~chinese 节点描述 目前暂不支持 \~english Node Description, NOT SUPPORT NOW
    char strDisplayName[MV_MAX_XML_NODE_STRLEN_C];      ///< \~chinese 显示名称 \~english Display Name
    char strName[MV_MAX_XML_NODE_STRLEN_C];             ///< \~chinese 节点名 \~english Node Name
    char strToolTip[MV_MAX_XML_DISC_STRLEN_C];          ///< \~chinese 提示 \~english Notice

    enum MV_XML_Visibility enVisivility;                ///< \~chinese 是否可见 \~english Visible

    unsigned int nReserved[4];
}MV_XML_FEATURE_Category;

typedef struct _MV_XML_FEATURE_EnumEntry_
{
    char strName[MV_MAX_XML_NODE_STRLEN_C];
    char strDisplayName[MV_MAX_XML_NODE_STRLEN_C];
    char strDescription[MV_MAX_XML_DISC_STRLEN_C];      ///< \~chinese 目前暂不支持 \~english NOT SUPPORT NOW
    char strToolTip[MV_MAX_XML_DISC_STRLEN_C];
    int bIsImplemented;
    int nParentsNum;
    MV_XML_NODE_FEATURE stParentsList[MV_MAX_XML_PARENTS_NUM];

    enum MV_XML_Visibility enVisivility;                ///< \~chinese 是否可见 \~english Visible
    int64_t nValue;                                     ///< \~chinese 当前值 \~english Current Value
    enum MV_XML_AccessMode enAccessMode;                ///< \~chinese 访问模式 \~english Access Mode
    int bIsLocked;                                      ///< \~chinese 是否锁定。0-否;1-是,目前暂不支持 \~english Locked. 0-NO; 1-YES, NOT SUPPORT NOW
    int nReserved[8];

}MV_XML_FEATURE_EnumEntry;

typedef struct _MV_XML_FEATURE_Enumeration_
{
    enum MV_XML_Visibility enVisivility;                ///< \~chinese 是否可见 \~english Visible
    char strDescription[MV_MAX_XML_DISC_STRLEN_C];      ///< \~chinese 节点描述 目前暂不支持 \~english Node Description, NOT SUPPORT NOW
    char strDisplayName[MV_MAX_XML_NODE_STRLEN_C];      ///< \~chinese 显示名称 \~english Display Name
    char strName[MV_MAX_XML_NODE_STRLEN_C];             ///< \~chinese 节点名 \~english Node Name
    char strToolTip[MV_MAX_XML_DISC_STRLEN_C];          ///< \~chinese 提示 \~english Notice

    int nSymbolicNum;                                   ///< \~chinese Symbolic数 \~english Symbolic Number
    char strCurrentSymbolic[MV_MAX_XML_SYMBOLIC_STRLEN_C]; ///< \~chinese 当前Symbolic索引 \~english Current Symbolic Index
    char strSymbolic[MV_MAX_XML_SYMBOLIC_NUM][MV_MAX_XML_SYMBOLIC_STRLEN_C];
    enum MV_XML_AccessMode enAccessMode;                ///< \~chinese 访问模式 \~english Access Mode
    int bIsLocked;                                      ///< \~chinese 是否锁定。0-否;1-是,目前暂不支持 \~english Locked. 0-NO; 1-YES, NOT SUPPORT NOW
    int64_t nValue;                                     ///< \~chinese 当前值 \~english Current Value

    unsigned int nReserved[4];
}MV_XML_FEATURE_Enumeration;

typedef struct _MV_XML_FEATURE_Port_
{
    enum MV_XML_Visibility enVisivility;                ///< \~chinese 是否可见 \~english Visible
    char strDescription[MV_MAX_XML_DISC_STRLEN_C];      ///< \~chinese 节点描述,目前暂不支持 \~english Node Description, NOT SUPPORT NOW
    char strDisplayName[MV_MAX_XML_NODE_STRLEN_C];      ///< \~chinese 显示名称 \~english Display Name
    char strName[MV_MAX_XML_NODE_STRLEN_C];             ///< \~chinese 节点名 \~english Node Name
    char strToolTip[MV_MAX_XML_DISC_STRLEN_C];          ///< \~chinese 提示 \~english Notice

    enum MV_XML_AccessMode enAccessMode;                ///< \~chinese 访问模式 \~english Access Mode
    int bIsLocked;                                      ///< \~chinese 是否锁定。0-否;1-是,目前暂不支持 \~english Locked. 0-NO; 1-YES, NOT SUPPORT NOW

    unsigned int nReserved[4];
}MV_XML_FEATURE_Port;

typedef struct _MV_XML_CAMERA_FEATURE_
{
    enum MV_XML_InterfaceType enType;
    union
    {
        MV_XML_FEATURE_Integer stIntegerFeature;
        MV_XML_FEATURE_Float stFloatFeature;
        MV_XML_FEATURE_Enumeration stEnumerationFeature;
        MV_XML_FEATURE_String stStringFeature;
    }SpecialFeature;

}MV_XML_CAMERA_FEATURE;

/// \~chinese 图片保存参数 \~english Save Image Parameters
typedef struct _MV_SAVE_IMAGE_PARAM_T_EX_
{
    unsigned char* pData;                   ///< [IN] \~chinese 输入数据缓存 \~english Input Data Buffer
    unsigned int nDataLen;                  ///< [IN] \~chinese 输入数据长度 \~english Input Data length
    enum MvGvspPixelType enPixelType;       ///< [IN] \~chinese 输入数据的像素格式 \~english Input Data Pixel Format
    unsigned short nWidth;                  ///< [IN] \~chinese 图像宽 \~english Image Width
    unsigned short nHeight;                 ///< [IN] \~chinese 图像高 \~english Image Height

    unsigned char* pImageBuffer;            ///< [OUT] \~chinese 输出图片缓存 \~english Output Image Buffer
    unsigned int nImageLen;                 ///< [OUT] \~chinese 输出图片长度 \~english Output Image length
    unsigned int nBufferSize;               ///< [IN] \~chinese 提供的输出缓冲区大小 \~english Output buffer size provided
    enum MV_SAVE_IAMGE_TYPE enImageType;    ///< [IN] \~chinese 输出图片格式 \~english Output Image Format
    unsigned int nJpgQuality;               ///< [IN] \~chinese JPG编码质量(50-99],其它格式无效 \~english Encoding quality(50-99],Other formats are invalid

    unsigned int iMethodValue;              ///< [IN] \~chinese 插值方法 0-快速 1-均衡(其它值默认为均衡) 2-最优 3-最优+ \~english Bayer interpolation method 0-Fast 1-Equilibrium 2-Optimal 3-Optimal+

    unsigned int nReserved[3];              ///< \~chinese 预留 \~english Reserved

}MV_SAVE_IMAGE_PARAM_EX;

/// \~chinese 图片保存参数 \~english Save Image Parameters
typedef struct _MV_SAVE_IMG_TO_FILE_PARAM_
{
    enum MvGvspPixelType enPixelType;       ///< [IN] \~chinese 输入数据的像素格式 \~english The pixel format of the input data
    unsigned char* pData;                   ///< [IN] \~chinese 输入数据缓存 \~english Input Data Buffer
    unsigned int nDataLen;                  ///< [IN] \~chinese 输入数据长度 \~english Input Data length
    unsigned short nWidth;                  ///< [IN] \~chinese 图像宽 \~english Image Width
    unsigned short nHeight;                 ///< [IN] \~chinese 图像高 \~english Image Height
    enum MV_SAVE_IAMGE_TYPE enImageType;    ///< [IN] \~chinese 输入图片格式 \~english Input Image Format
    unsigned int nQuality;                  ///< [IN] \~chinese JPG编码质量(50-99],其它格式无效 \~english JPG Encoding quality(50-99],Other formats are invalid
    char pImagePath[256];                   ///< [IN] \~chinese 输入文件路径 \~english Input file path

    int iMethodValue;                       ///< [IN] \~chinese 插值方法 0-快速 1-均衡(其它值默认为均衡) 2-最优 3-最优+ \~english Bayer interpolation method 0-Fast 1-Equilibrium 2-Optimal 3-Optimal+

    unsigned int nReserved[8];              ///< \~chinese 预留 \~english Reserved

}MV_SAVE_IMG_TO_FILE_PARAM;

// \~chinese 像素转换结构体 \~english Pixel convert structure
typedef struct _MV_CC_PIXEL_CONVERT_PARAM_
{
    unsigned short nWidth;                  ///< [IN] \~chinese 图像宽 \~english Width
    unsigned short nHeight;                 ///< [IN] \~chinese 图像高 \~english Height

    enum MvGvspPixelType enSrcPixelType;    ///< [IN] \~chinese 源像素格式 \~english Source pixel format
    unsigned char* pSrcData;                ///< [IN] \~chinese 输入数据缓存 \~english Input data buffer
    unsigned int nSrcDataLen;               ///< [IN] \~chinese 输入数据长度 \~english Input data length

    enum MvGvspPixelType enDstPixelType;    ///< [IN] \~chinese 目标像素格式 \~english Destination pixel format
    unsigned char* pDstBuffer;              ///< [OUT] \~chinese 输出数据缓存 \~english Output data buffer
    unsigned int nDstLen;                   ///< [OUT] \~chinese 输出数据长度 \~english Output data length
    unsigned int nDstBufferSize;            ///< [IN] \~chinese 提供的输出缓冲区大小 \~english Provided output buffer size

    unsigned int nRes[4];                   ///< \~chinese 预留 \~english Reserved

}MV_CC_PIXEL_CONVERT_PARAM;

/// \~chinese 保存的3D数据格式 \~english The saved format for 3D data
enum MV_SAVE_POINT_CLOUD_FILE_TYPE
{
    MV_PointCloudFile_Undefined = 0,        ///< \~chinese 未定义的点云格式 \~english Undefined point cloud format
    MV_PointCloudFile_PLY       = 1,        ///< \~chinese PLY点云格式 \~english The point cloud format named PLY
    MV_PointCloudFile_CSV       = 2,        ///< \~chinese CSV点云格式 \~english The point cloud format named CSV
    MV_PointCloudFile_OBJ       = 3,        ///< \~chinese OBJ点云格式 \~english The point cloud format named OBJ

};

/// \~chinese 保存3D数据到缓存 \~english Save 3D data to buffer
typedef struct _MV_SAVE_POINT_CLOUD_PARAM_
{
    unsigned int nLinePntNum;               ///< [IN] \~chinese 行点数,即图像宽 \~english The number of points in each row,which is the width of the image
    unsigned int nLineNum;                  ///< [IN] \~chinese 行数,即图像高 \~english The number of rows,which is the height of the image

    enum MvGvspPixelType enSrcPixelType;    ///< [IN] \~chinese 输入数据的像素格式 \~english The pixel format of the input data
    unsigned char* pSrcData;                ///< [IN] \~chinese 输入数据缓存 \~english Input data buffer
    unsigned int nSrcDataLen;               ///< [IN] \~chinese 输入数据长度 \~english Input data length

    unsigned char* pDstBuf;                 ///< [OUT] \~chinese 输出像素数据缓存 \~english Output pixel data buffer
    unsigned int nDstBufSize;               ///< [IN] \~chinese 提供的输出缓冲区大小(nLinePntNum * nLineNum * (16*3 + 4) + 2048) \~english Output buffer size provided(nLinePntNum * nLineNum * (16*3 + 4) + 2048)
    unsigned int nDstBufLen;                ///< [OUT] \~chinese 输出像素数据缓存长度 \~english Output pixel data buffer size
    enum MV_SAVE_POINT_CLOUD_FILE_TYPE enPointCloudFileType; ///< [IN] \~chinese 提供输出的点云文件类型 \~english Output point data file type provided

    unsigned int nReserved[8];              ///< \~chinese 保留字段 \~english Reserved

}MV_SAVE_POINT_CLOUD_PARAM;

/// \~chinese 显示帧信息 \~english Display frame information
typedef struct _MV_DISPLAY_FRAME_INFO_
{
    void* hWnd;                             ///< [IN] \~chinese 窗口句柄 \~english HWND
    unsigned char* pData;                   ///< [IN] \~chinese 显示的数据 \~english Data Buffer
    unsigned int nDataLen;                  ///< [IN] \~chinese 数据长度 \~english Data Size
    unsigned short nWidth;                  ///< [IN] \~chinese 图像宽 \~english Width
    unsigned short nHeight;                 ///< [IN] \~chinese 图像高 \~english Height
    enum MvGvspPixelType enPixelType;       ///< [IN] \~chinese 像素格式 \~english Pixel format

    unsigned int enRenderMode;              ///< [IN] \~chinese 图像渲染方式 Windows:0-GDI(默认), 1-D3D, 2-OPENGL Linux: 0-OPENGL(默认) \~english Windows:0-GDI(default), 1-D3D, 2-OPENGL Linux: 0-OPENGL(default)
    unsigned int nRes[3];                   ///< \~chinese 保留 \~english Reserved

}MV_DISPLAY_FRAME_INFO;

#endif /* _MV_OBSOLETE_CAM_PARAMS_H_ */

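For reference, a minimal sketch of how the `MV_CC_PIXEL_CONVERT_PARAM` structure above is typically filled for a Bayer-to-BGR conversion. The helper name, the caller-supplied buffers, and the SDK conversion entry point mentioned in the comments are assumptions for illustration; only the struct fields themselves come from this header.

```cpp
#include <cstring>
#include "PixelType.h"   // MvGvspPixelType values used below

// Sketch only: prepare a BayerGR8 -> BGR8 conversion request; the caller owns both buffers.
// The actual conversion call (e.g. MV_CC_ConvertPixelType) lives in the SDK, not in this header.
static MV_CC_PIXEL_CONVERT_PARAM MakeBayerToBgrRequest(unsigned short width, unsigned short height,
                                                       unsigned char* raw, unsigned int rawLen,
                                                       unsigned char* bgr, unsigned int bgrSize)
{
    MV_CC_PIXEL_CONVERT_PARAM p;
    std::memset(&p, 0, sizeof(p));
    p.nWidth         = width;                       // [IN] image width
    p.nHeight        = height;                      // [IN] image height
    p.enSrcPixelType = PixelType_Gvsp_BayerGR8;     // [IN] source pixel format
    p.pSrcData       = raw;                         // [IN] raw frame buffer
    p.nSrcDataLen    = rawLen;                      // [IN] raw frame length
    p.enDstPixelType = PixelType_Gvsp_BGR8_Packed;  // [IN] target pixel format
    p.pDstBuffer     = bgr;                         // [OUT] caller-allocated output buffer
    p.nDstBufferSize = bgrSize;                     // [IN] should be >= width*height*3 for BGR8
    return p;                                       // the SDK fills p.nDstLen on success
}
```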
201
image_capture/third_party/mvs/Includes/PixelType.h
vendored
Normal file
@@ -0,0 +1,201 @@

#ifndef _MV_PIXEL_TYPE_H_
#define _MV_PIXEL_TYPE_H_

/************************************************************************/
/* GigE Vision (2.0.03) PIXEL FORMATS                                   */
/************************************************************************/

// Indicate if pixel is monochrome or RGB
#define MV_GVSP_PIX_MONO                            0x01000000
#define MV_GVSP_PIX_RGB                             0x02000000 // deprecated in version 1.1
#define MV_GVSP_PIX_COLOR                           0x02000000
#define MV_GVSP_PIX_CUSTOM                          0x80000000
#define MV_GVSP_PIX_COLOR_MASK                      0xFF000000

// Indicate effective number of bits occupied by the pixel (including padding).
// This can be used to compute amount of memory required to store an image.
#define MV_PIXEL_BIT_COUNT(n)                       ((n) << 16)

#define MV_GVSP_PIX_EFFECTIVE_PIXEL_SIZE_MASK       0x00FF0000
#define MV_GVSP_PIX_EFFECTIVE_PIXEL_SIZE_SHIFT      16

// Pixel ID: lower 16-bit of the pixel formats
#define MV_GVSP_PIX_ID_MASK                         0x0000FFFF
#define MV_GVSP_PIX_COUNT                           0x46 // next Pixel ID available

/// \addtogroup 像素格式定义
///@{

///< \~chinese 图片格式定义 \~english Pixel format definitions
enum MvGvspPixelType
{
    // Undefined pixel type
#ifdef WIN32
    PixelType_Gvsp_Undefined = 0xFFFFFFFF,                                                      ///< 未定义的像素类型

#else
    PixelType_Gvsp_Undefined = -1,                                                              ///< 未定义的像素类型

#endif
    // Mono buffer format defines
    PixelType_Gvsp_Mono1p               = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(1) | 0x0037),  ///< Mono1p
    PixelType_Gvsp_Mono2p               = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(2) | 0x0038),  ///< Mono2p
    PixelType_Gvsp_Mono4p               = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(4) | 0x0039),  ///< Mono4p
    PixelType_Gvsp_Mono8                = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x0001),  ///< Mono8
    PixelType_Gvsp_Mono8_Signed         = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x0002),  ///< Mono8_Signed
    PixelType_Gvsp_Mono10               = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0003), ///< Mono10
    PixelType_Gvsp_Mono10_Packed        = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x0004), ///< Mono10_Packed
    PixelType_Gvsp_Mono12               = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0005), ///< Mono12
    PixelType_Gvsp_Mono12_Packed        = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x0006), ///< Mono12_Packed
    PixelType_Gvsp_Mono14               = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0025), ///< Mono14
    PixelType_Gvsp_Mono16               = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0007), ///< Mono16

    // Bayer buffer format defines
    PixelType_Gvsp_BayerGR8             = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x0008),  ///< BayerGR8
    PixelType_Gvsp_BayerRG8             = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x0009),  ///< BayerRG8
    PixelType_Gvsp_BayerGB8             = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x000A),  ///< BayerGB8
    PixelType_Gvsp_BayerBG8             = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x000B),  ///< BayerBG8
    PixelType_Gvsp_BayerRBGG8           = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x0046),  ///< BayerRBGG8
    PixelType_Gvsp_BayerGR10            = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x000C), ///< BayerGR10
    PixelType_Gvsp_BayerRG10            = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x000D), ///< BayerRG10
    PixelType_Gvsp_BayerGB10            = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x000E), ///< BayerGB10
    PixelType_Gvsp_BayerBG10            = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x000F), ///< BayerBG10
    PixelType_Gvsp_BayerGR12            = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0010), ///< BayerGR12
    PixelType_Gvsp_BayerRG12            = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0011), ///< BayerRG12
    PixelType_Gvsp_BayerGB12            = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0012), ///< BayerGB12
    PixelType_Gvsp_BayerBG12            = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0013), ///< BayerBG12
    PixelType_Gvsp_BayerGR10_Packed     = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x0026), ///< BayerGR10_Packed
    PixelType_Gvsp_BayerRG10_Packed     = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x0027), ///< BayerRG10_Packed
    PixelType_Gvsp_BayerGB10_Packed     = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x0028), ///< BayerGB10_Packed
    PixelType_Gvsp_BayerBG10_Packed     = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x0029), ///< BayerBG10_Packed
    PixelType_Gvsp_BayerGR12_Packed     = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x002A), ///< BayerGR12_Packed
    PixelType_Gvsp_BayerRG12_Packed     = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x002B), ///< BayerRG12_Packed
    PixelType_Gvsp_BayerGB12_Packed     = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x002C), ///< BayerGB12_Packed
    PixelType_Gvsp_BayerBG12_Packed     = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x002D), ///< BayerBG12_Packed
    PixelType_Gvsp_BayerGR16            = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x002E), ///< BayerGR16
    PixelType_Gvsp_BayerRG16            = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x002F), ///< BayerRG16
    PixelType_Gvsp_BayerGB16            = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0030), ///< BayerGB16
    PixelType_Gvsp_BayerBG16            = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0031), ///< BayerBG16

    // RGB Packed buffer format defines
    PixelType_Gvsp_RGB8_Packed          = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(24) | 0x0014), ///< RGB8_Packed
    PixelType_Gvsp_BGR8_Packed          = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(24) | 0x0015), ///< BGR8_Packed
    PixelType_Gvsp_RGBA8_Packed         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(32) | 0x0016), ///< RGBA8_Packed
    PixelType_Gvsp_BGRA8_Packed         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(32) | 0x0017), ///< BGRA8_Packed
    PixelType_Gvsp_RGB10_Packed         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(48) | 0x0018), ///< RGB10_Packed
    PixelType_Gvsp_BGR10_Packed         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(48) | 0x0019), ///< BGR10_Packed
    PixelType_Gvsp_RGB12_Packed         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(48) | 0x001A), ///< RGB12_Packed
    PixelType_Gvsp_BGR12_Packed         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(48) | 0x001B), ///< BGR12_Packed
    PixelType_Gvsp_RGB16_Packed         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(48) | 0x0033), ///< RGB16_Packed
    PixelType_Gvsp_BGR16_Packed         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(48) | 0x004B), ///< BGR16_Packed
    PixelType_Gvsp_RGBA16_Packed        = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(64) | 0x0064), ///< RGBA16_Packed
    PixelType_Gvsp_BGRA16_Packed        = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(64) | 0x0051), ///< BGRA16_Packed
    PixelType_Gvsp_RGB10V1_Packed       = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(32) | 0x001C), ///< RGB10V1_Packed
    PixelType_Gvsp_RGB10V2_Packed       = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(32) | 0x001D), ///< RGB10V2_Packed
    PixelType_Gvsp_RGB12V1_Packed       = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(36) | 0X0034), ///< RGB12V1_Packed
    PixelType_Gvsp_RGB565_Packed        = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(16) | 0x0035), ///< RGB565_Packed
    PixelType_Gvsp_BGR565_Packed        = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(16) | 0X0036), ///< BGR565_Packed

    // YUV Packed buffer format defines
    PixelType_Gvsp_YUV411_Packed            = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(12) | 0x001E), ///< YUV411_Packed
    PixelType_Gvsp_YUV422_Packed            = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(16) | 0x001F), ///< YUV422_Packed
    PixelType_Gvsp_YUV422_YUYV_Packed       = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(16) | 0x0032), ///< YUV422_YUYV_Packed
    PixelType_Gvsp_YUV444_Packed            = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(24) | 0x0020), ///< YUV444_Packed
    PixelType_Gvsp_YCBCR8_CBYCR             = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(24) | 0x003A), ///< YCBCR8_CBYCR
    PixelType_Gvsp_YCBCR422_8               = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(16) | 0x003B), ///< YCBCR422_8
    PixelType_Gvsp_YCBCR422_8_CBYCRY        = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(16) | 0x0043), ///< YCBCR422_8_CBYCRY
    PixelType_Gvsp_YCBCR411_8_CBYYCRYY      = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(12) | 0x003C), ///< YCBCR411_8_CBYYCRYY
    PixelType_Gvsp_YCBCR601_8_CBYCR         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(24) | 0x003D), ///< YCBCR601_8_CBYCR
    PixelType_Gvsp_YCBCR601_422_8           = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(16) | 0x003E), ///< YCBCR601_422_8
    PixelType_Gvsp_YCBCR601_422_8_CBYCRY    = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(16) | 0x0044), ///< YCBCR601_422_8_CBYCRY
    PixelType_Gvsp_YCBCR601_411_8_CBYYCRYY  = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(12) | 0x003F), ///< YCBCR601_411_8_CBYYCRYY
    PixelType_Gvsp_YCBCR709_8_CBYCR         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(24) | 0x0040), ///< YCBCR709_8_CBYCR
    PixelType_Gvsp_YCBCR709_422_8           = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(16) | 0x0041), ///< YCBCR709_422_8
    PixelType_Gvsp_YCBCR709_422_8_CBYCRY    = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(16) | 0x0045), ///< YCBCR709_422_8_CBYCRY
    PixelType_Gvsp_YCBCR709_411_8_CBYYCRYY  = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(12) | 0x0042), ///< YCBCR709_411_8_CBYYCRYY

    // YUV420
    PixelType_Gvsp_YUV420SP_NV12        = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(12) | 0x8001), ///< YUV420SP_NV12
    PixelType_Gvsp_YUV420SP_NV21        = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(12) | 0x8002), ///< YUV420SP_NV21

    // RGB Planar buffer format defines
    PixelType_Gvsp_RGB8_Planar          = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(24) | 0x0021), ///< RGB8_Planar
    PixelType_Gvsp_RGB10_Planar         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(48) | 0x0022), ///< RGB10_Planar
    PixelType_Gvsp_RGB12_Planar         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(48) | 0x0023), ///< RGB12_Planar
    PixelType_Gvsp_RGB16_Planar         = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(48) | 0x0024), ///< RGB16_Planar

    // 自定义的图片格式 (custom pixel formats)
    PixelType_Gvsp_Jpeg                 = (MV_GVSP_PIX_CUSTOM | MV_PIXEL_BIT_COUNT(24) | 0x0001), ///< Jpeg

    PixelType_Gvsp_Coord3D_ABC32f           = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(96) | 0x00C0), ///< 0x026000C0X
    PixelType_Gvsp_Coord3D_ABC32f_Planar    = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(96) | 0x00C1), ///< 0x026000C1X

    PixelType_Gvsp_Coord3D_AC32f            = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(40) | 0x00C2), ///< 该值被废弃,请参考PixelType_Gvsp_Coord3D_AC32f_64; the value is discarded
    PixelType_Gvsp_COORD3D_DEPTH_PLUS_MASK  = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(28) | 0x0001), ///< 该值被废弃; the value is discarded (已放入Chunkdata)

    PixelType_Gvsp_Coord3D_ABC32        = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(96) | 0x3001), ///< Coord3D_ABC32
    PixelType_Gvsp_Coord3D_AB32f        = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(64) | 0x3002), ///< Coord3D_AB32f
    PixelType_Gvsp_Coord3D_AB32         = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(64) | 0x3003), ///< Coord3D_AB32
    PixelType_Gvsp_Coord3D_AC32f_64     = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(64) | 0x00C2), ///< Coord3D_AC32f_64
    PixelType_Gvsp_Coord3D_AC32f_Planar = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(64) | 0x00C3), ///< Coord3D_AC32f_Planar
    PixelType_Gvsp_Coord3D_AC32         = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(64) | 0x3004), ///< Coord3D_AC32
    PixelType_Gvsp_Coord3D_A32f         = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(32) | 0x00BD), ///< Coord3D_A32f
    PixelType_Gvsp_Coord3D_A32          = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(32) | 0x3005), ///< Coord3D_A32
    PixelType_Gvsp_Coord3D_C32f         = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(32) | 0x00BF), ///< Coord3D_C32f
    PixelType_Gvsp_Coord3D_C32          = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(32) | 0x3006), ///< Coord3D_C32
    PixelType_Gvsp_Coord3D_ABC16        = (MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(48) | 0x00B9), ///< Coord3D_ABC16
    PixelType_Gvsp_Coord3D_C16          = (MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x00B8), ///< Coord3D_C16

    PixelType_Gvsp_Float32              = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(32) | 0x0001), //0x81200001

    // 无损压缩像素格式定义 (lossless-compression pixel formats)
    PixelType_Gvsp_HB_Mono8             = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x0001),  ///< HB_Mono8
    PixelType_Gvsp_HB_Mono10            = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0003), ///< HB_Mono10
    PixelType_Gvsp_HB_Mono10_Packed     = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x0004), ///< HB_Mono10_Packed
    PixelType_Gvsp_HB_Mono12            = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0005), ///< HB_Mono12
    PixelType_Gvsp_HB_Mono12_Packed     = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x0006), ///< HB_Mono12_Packed
    PixelType_Gvsp_HB_Mono16            = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0007), ///< HB_Mono16
    PixelType_Gvsp_HB_BayerGR8          = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x0008),  ///< HB_BayerGR8
    PixelType_Gvsp_HB_BayerRG8          = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x0009),  ///< HB_BayerRG8
    PixelType_Gvsp_HB_BayerGB8          = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x000A),  ///< HB_BayerGB8
    PixelType_Gvsp_HB_BayerBG8          = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x000B),  ///< HB_BayerBG8
    PixelType_Gvsp_HB_BayerRBGG8        = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(8) | 0x0046),  ///< HB_BayerRBGG8
    PixelType_Gvsp_HB_BayerGR10         = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x000C), ///< HB_BayerGR10
    PixelType_Gvsp_HB_BayerRG10         = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x000D), ///< HB_BayerRG10
    PixelType_Gvsp_HB_BayerGB10         = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x000E), ///< HB_BayerGB10
    PixelType_Gvsp_HB_BayerBG10         = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x000F), ///< HB_BayerBG10
    PixelType_Gvsp_HB_BayerGR12         = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0010), ///< HB_BayerGR12
    PixelType_Gvsp_HB_BayerRG12         = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0011), ///< HB_BayerRG12
    PixelType_Gvsp_HB_BayerGB12         = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0012), ///< HB_BayerGB12
    PixelType_Gvsp_HB_BayerBG12         = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(16) | 0x0013), ///< HB_BayerBG12
    PixelType_Gvsp_HB_BayerGR10_Packed  = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x0026), ///< HB_BayerGR10_Packed
    PixelType_Gvsp_HB_BayerRG10_Packed  = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x0027), ///< HB_BayerRG10_Packed
    PixelType_Gvsp_HB_BayerGB10_Packed  = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x0028), ///< HB_BayerGB10_Packed
    PixelType_Gvsp_HB_BayerBG10_Packed  = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x0029), ///< HB_BayerBG10_Packed
    PixelType_Gvsp_HB_BayerGR12_Packed  = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x002A), ///< HB_BayerGR12_Packed
    PixelType_Gvsp_HB_BayerRG12_Packed  = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x002B), ///< HB_BayerRG12_Packed
    PixelType_Gvsp_HB_BayerGB12_Packed  = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x002C), ///< HB_BayerGB12_Packed
    PixelType_Gvsp_HB_BayerBG12_Packed  = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_MONO | MV_PIXEL_BIT_COUNT(12) | 0x002D), ///< HB_BayerBG12_Packed
    PixelType_Gvsp_HB_YUV422_Packed     = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(16) | 0x001F), ///< HB_YUV422_Packed
    PixelType_Gvsp_HB_YUV422_YUYV_Packed = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(16) | 0x0032), ///< HB_YUV422_YUYV_Packed
    PixelType_Gvsp_HB_RGB8_Packed       = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(24) | 0x0014), ///< HB_RGB8_Packed
    PixelType_Gvsp_HB_BGR8_Packed       = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(24) | 0x0015), ///< HB_BGR8_Packed
    PixelType_Gvsp_HB_RGBA8_Packed      = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(32) | 0x0016), ///< HB_RGBA8_Packed
    PixelType_Gvsp_HB_BGRA8_Packed      = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(32) | 0x0017), ///< HB_BGRA8_Packed
    PixelType_Gvsp_HB_RGB16_Packed      = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(48) | 0x0033), ///< HB_RGB16_Packed
    PixelType_Gvsp_HB_BGR16_Packed      = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(48) | 0x004B), ///< HB_BGR16_Packed
    PixelType_Gvsp_HB_RGBA16_Packed     = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(64) | 0x0064), ///< HB_RGBA16_Packed
    PixelType_Gvsp_HB_BGRA16_Packed     = (MV_GVSP_PIX_CUSTOM | MV_GVSP_PIX_COLOR | MV_PIXEL_BIT_COUNT(64) | 0x0051), ///< HB_BGRA16_Packed

};
///@}

#ifdef WIN32
typedef __int64 int64_t;
typedef unsigned __int64 uint64_t;
#else
#include <stdint.h>
#endif

#endif /* _MV_PIXEL_TYPE_H_ */

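The bit-count macros above encode the effective bits per pixel in bits 16–23 of every `MvGvspPixelType` value. A short sketch of recovering that count and sizing a frame buffer from it; the rounding-up helper is my own addition and not part of this header:

```cpp
#include <cstdint>
#include "PixelType.h"

// Effective bits per pixel live in bits 16..23 of the pixel-format value.
static inline unsigned int BitsPerPixel(enum MvGvspPixelType type)
{
    return ((unsigned int)type & MV_GVSP_PIX_EFFECTIVE_PIXEL_SIZE_MASK)
           >> MV_GVSP_PIX_EFFECTIVE_PIXEL_SIZE_SHIFT;
}

// Minimum buffer size for one frame; rounds up for packed formats such as Mono10_Packed (12 bits/pixel).
static inline uint64_t MinFrameBytes(enum MvGvspPixelType type, uint64_t width, uint64_t height)
{
    return (width * height * BitsPerPixel(type) + 7) / 8;
}

// Example: PixelType_Gvsp_BayerGR8 -> 8 bits/pixel, so a 1280x960 frame needs 1228800 bytes.
```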
218
image_capture/third_party/percipio/common/BayerISP.hpp
vendored
Normal file
@@ -0,0 +1,218 @@
#ifndef SAMPLE_COMMON_ISP_HPP_
#define SAMPLE_COMMON_ISP_HPP_

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <string>
#include <vector>
#include <algorithm>
#include "TyIsp.h"

/**
 * The RGB image data output by some cameras is the original Bayer array.
 * By calling the API provided by this file, Bayer data can be converted to a BGR array.
 * You can refer to the sample code: SimpleView_FetchFrame.
 */

static int __TYCompareFirmwareVersion(const TY_DEVICE_BASE_INFO &info, int major, int minor){
    const TY_VERSION_INFO &v = info.firmwareVersion;
    if (v.major < major){
        return -1;
    }
    if (v.major == major && v.minor < minor){
        return -1;
    }
    if (v.major == major && v.minor == minor){
        return 0;
    }
    return 1;
}

static TY_STATUS __TYDetectOldVer21ColorCam(TY_DEV_HANDLE dev_handle, bool *is_v21_color_device){
    TY_DEVICE_BASE_INFO info;
    TY_STATUS res = TYGetDeviceInfo(dev_handle, &info);
    if (res != TY_STATUS_OK){
        LOGI("get device info failed");
        return res;
    }
    *is_v21_color_device = false;
    if (info.iface.type == TY_INTERFACE_USB){
        *is_v21_color_device = true;
    }
    if ((info.iface.type == TY_INTERFACE_ETHERNET || info.iface.type == TY_INTERFACE_RAW) &&
        __TYCompareFirmwareVersion(info, 2, 2) < 0){
        *is_v21_color_device = true;
    }
    return TY_STATUS_OK;
}

static void __TYParseSizeFromImageMode(TY_IMAGE_MODE mode, int *image_size) {
    const int mask = ((0x01 << 12) - 1);
    int height = mode & mask;
    int width = (mode >> 12) & mask;
    image_size[0] = width;
    image_size[1] = height;

}

/// init color isp setting
/// for bayer raw image process
static TY_STATUS ColorIspInitSetting(TY_ISP_HANDLE isp_handle, TY_DEV_HANDLE dev_handle){
    bool is_v21_color_device;
    TY_STATUS res = __TYDetectOldVer21ColorCam(dev_handle, &is_v21_color_device);//old version device has different config
    if (res != TY_STATUS_OK){
        return res;
    }
    if (is_v21_color_device){
        ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_BLACK_LEVEL, 11));
        ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_BLACK_LEVEL_GAIN, 256.f / (256 - 11)));
    }
    else{
        ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_BLACK_LEVEL, 0));
        ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_BLACK_LEVEL_GAIN, 1.f));
        bool b;
        ASSERT_OK(TYHasFeature(dev_handle, TY_COMPONENT_RGB_CAM, TY_INT_ANALOG_GAIN, &b));
        if (b){
            TYSetInt(dev_handle, TY_COMPONENT_RGB_CAM, TY_INT_ANALOG_GAIN, 1);
        }
    }
    TYISPSetFeature(isp_handle, TY_ISP_FEATURE_BAYER_PATTERN, TY_ISP_BAYER_AUTO);
    float shading[9] = { 0.30890417098999026, 10.63355541229248, -6.433426856994629,
                         0.24413758516311646, 11.739893913269043, -8.148622512817383,
                         0.1255662441253662, 11.88359546661377, -7.865192413330078 };
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_SHADING, (uint8_t*)shading, sizeof(shading)));
    int shading_center[2] = { 640, 480 };
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_SHADING_CENTER, (uint8_t*)shading_center, sizeof(shading_center)));
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_CCM_ENABLE, 0));//we are not using ccm by default
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_CAM_DEV_HANDLE, (uint8_t*)&dev_handle, sizeof(dev_handle)));
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_CAM_DEV_COMPONENT, int32_t(TY_COMPONENT_RGB_CAM)));
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_GAMMA, 1.f));
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_AUTOBRIGHT, 1));//enable auto bright control
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_ENABLE_AUTO_EXPOSURE_GAIN, 0));//disable ae by default
    int default_image_size[2] = { 1280, 960 };// image size
    int current_image_size[2] = { 1280, 960 };// image size for current parameters
    TY_IMAGE_MODE img_mode;
#if 1
    res = TYGetEnum(dev_handle, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, &img_mode);
    if (res == TY_STATUS_OK) {
        __TYParseSizeFromImageMode(img_mode, current_image_size);
    }
    TY_ENUM_ENTRY mode_entry[10];
    uint32_t num;
    res = TYGetEnumEntryInfo(dev_handle, TY_COMPONENT_RGB_CAM, TY_ENUM_IMAGE_MODE, mode_entry, 10, &num);
    if (res == TY_STATUS_OK) {
        __TYParseSizeFromImageMode(mode_entry[0].value, default_image_size);
    }

#else
    //some device may not support WIDTH & HEIGHT feature. image mode is recommended
    TYGetInt(dev_handle, TY_COMPONENT_RGB_CAM, TY_INT_WIDTH, &image_size[0]);
    TYGetInt(dev_handle, TY_COMPONENT_RGB_CAM, TY_INT_HEIGHT, &image_size[1]);
#endif
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_IMAGE_SIZE, (uint8_t*)&default_image_size, sizeof(default_image_size)));//the original raw image size
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_INPUT_RESAMPLE_SCALE, default_image_size[0] / current_image_size[0]));//resampled input
#if 1
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_ENABLE_AUTO_WHITEBALANCE, 1)); //enable auto white balance
#else
    //manual wb gain control
    const float wb_rgb_gain[3] = { 2.0123140811920168, 1, 1.481866478919983 };
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_WHITEBALANCE_GAIN, (uint8_t*)wb_rgb_gain, sizeof(wb_rgb_gain)));
#endif

    //try to load specific device config from device storage
    TY_COMPONENT_ID comp_all;
    ASSERT_OK(TYGetComponentIDs(dev_handle, &comp_all));
    if (!(comp_all & TY_COMPONENT_STORAGE)){
        return TY_STATUS_OK;
    }
    bool has_isp_block = false;
    ASSERT_OK(TYHasFeature(dev_handle, TY_COMPONENT_STORAGE, TY_BYTEARRAY_ISP_BLOCK, &has_isp_block));
    if (!has_isp_block){
        return TY_STATUS_OK;
    }
    uint32_t sz = 0;
    ASSERT_OK(TYGetByteArraySize(dev_handle, TY_COMPONENT_STORAGE, TY_BYTEARRAY_ISP_BLOCK, &sz));
    if (sz <= 0){
        return TY_STATUS_OK;
    }
    std::vector<uint8_t> buff(sz);
    ASSERT_OK(TYGetByteArray(dev_handle, TY_COMPONENT_STORAGE, TY_BYTEARRAY_ISP_BLOCK, &buff[0], buff.size()));
    res = TYISPLoadConfig(isp_handle, &buff[0], buff.size());
    if (res == TY_STATUS_OK){
        LOGD("Load RGB ISP Config From Device");
    }
    return TY_STATUS_OK;
}


static TY_STATUS ColorIspInitAutoExposure(TY_ISP_HANDLE isp_handle, TY_DEV_HANDLE dev_handle){
    bool is_v21_color_device;
    TY_STATUS res = __TYDetectOldVer21ColorCam(dev_handle, &is_v21_color_device);//old version device has different config
    if (res != TY_STATUS_OK){
        return res;
    }
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_ENABLE_AUTO_EXPOSURE_GAIN, 1));

    // do not enable gain auto control by default
# if 1
    int auto_gain_range[2] = { -1, -1 };
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_AUTO_GAIN_RANGE, (uint8_t*)&auto_gain_range, sizeof(auto_gain_range)));
#else
    if(is_v21_color_device){
        const int old_auto_gain_range[2] = { 33, 255 };
        ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_AUTO_GAIN_RANGE, (uint8_t*)&old_auto_gain_range, sizeof(old_auto_gain_range)));
    }
    else{
#define CHECK_GO_FAILED(a) {if((a)!=TY_STATUS_OK) break;}
        do{
            TY_FEATURE_ID_LIST feature_id = TY_INT_GAIN;
            bool val;
            CHECK_GO_FAILED(TYHasFeature(dev_handle, TY_COMPONENT_RGB_CAM, TY_INT_GAIN, &val));
            if (val) {
                feature_id = TY_INT_GAIN;
            }
            CHECK_GO_FAILED(TYHasFeature(dev_handle, TY_COMPONENT_RGB_CAM, TY_INT_R_GAIN, &val));
            if (val) {
                feature_id = TY_INT_R_GAIN;
            }
            int auto_gain_range[2] = { 15, 255 };
            TY_INT_RANGE range;
            CHECK_GO_FAILED(TYGetIntRange(dev_handle, TY_COMPONENT_RGB_CAM, feature_id, &range));
            auto_gain_range[0] = std::min(range.min + 1, range.max);
            auto_gain_range[1] = std::max(range.max - 1, range.min);
            ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_AUTO_GAIN_RANGE, (uint8_t*)&auto_gain_range, sizeof(auto_gain_range)));
        } while(0);
#undef CHECK_GO_FAILED
    }
#endif

    //constraint exposure time
    int auto_expo_range[2] = { 10, 100 };
    TY_INT_RANGE range;
    res = TYGetIntRange(dev_handle, TY_COMPONENT_RGB_CAM, TY_INT_EXPOSURE_TIME, &range);
    if (res == TY_STATUS_OK) {
        auto_expo_range[0] = std::min(range.min + 1, range.max);
        auto_expo_range[1] = std::max(range.max - 1, range.min);
    }
    ASSERT_OK(TYISPSetFeature(isp_handle, TY_ISP_FEATURE_AUTO_EXPOSURE_RANGE, (uint8_t*)&auto_expo_range, sizeof(auto_expo_range)));
    return TY_STATUS_OK;
}


static TY_STATUS ColorIspShowSupportedFeatures(TY_ISP_HANDLE handle){
    int sz;
    TY_STATUS res = TYISPGetFeatureInfoListSize(handle, &sz);
    if (res != TY_STATUS_OK){
        return res;
    }
    std::vector<TY_ISP_FEATURE_INFO> info;
    info.resize(sz);
    TYISPGetFeatureInfoList(handle, &info[0], info.size());
    for (int idx = 0; idx < sz; idx++){
        printf("feature name : %-50s type : %s \n", info[idx].name, info[idx].value_type);
    }
    return TY_STATUS_OK;
}

#endif

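A minimal sketch of how the helpers above would typically be wired together. It assumes an ISP handle and an opened device handle already exist (creating them, e.g. via `TYISPCreate`/`TYOpenDevice`, is outside this header and only assumed here):

```cpp
// Sketch only: isp_handle and dev_handle are assumed to have been created/opened elsewhere.
static TY_STATUS SetupColorIsp(TY_ISP_HANDLE isp_handle, TY_DEV_HANDLE dev_handle,
                               bool enable_auto_exposure)
{
    // Base ISP configuration for Bayer raw input (black level, shading, white balance, ...).
    TY_STATUS res = ColorIspInitSetting(isp_handle, dev_handle);
    if (res != TY_STATUS_OK) {
        return res;
    }
    // Optionally turn on the software auto exposure/gain loop with clamped ranges.
    if (enable_auto_exposure) {
        res = ColorIspInitAutoExposure(isp_handle, dev_handle);
    }
    // For debugging, ColorIspShowSupportedFeatures(isp_handle) lists the tunable ISP features.
    return res;
}
```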
126
image_capture/third_party/percipio/common/CommandLineFeatureHelper.hpp
vendored
Normal file
@@ -0,0 +1,126 @@
#ifndef CMDLINE_FEATURE_HELPER__H_
#define CMDLINE_FEATURE_HELPER__H_

#include "CommandLineParser.hpp"
#include "TYApi.h"
#include "Utils.hpp"

/// @brief command line sgb feature param id
struct ty_fetaure_options
{
    int component_id;
    int feature_id;

    ty_fetaure_options(int comp_id = 0, int _f_id = 0)
    {
        component_id = comp_id;
        feature_id = _f_id;
    }
};

/// @brief command line feature helper for setting device features from command line args
class CommandLineFeatureHelper
{
public:
    TyCommandlineParser<ty_fetaure_options> cmd_parser; ///< command line parser

    /// @brief add feature param to command line parser
    /// @param param command line param name
    /// @param comp_id component id, 0 for not a feature setting
    /// @param feat_id feature id, 0 for not a feature setting
    /// @param val default value
    /// @param desc description
    /// @param is_flag is a flag only, no value
    void add_feature(const std::string &param, int comp_id, int feat_id, int val, const std::string &desc, bool is_flag = false)
    {
        cmd_parser.addItem(param, desc, is_flag, std::to_string(val), ty_fetaure_options(comp_id, feat_id));
    }

    /// @brief add feature param to command line parser
    /// @param param command line param name
    /// @param comp_id component id, 0 for not a feature setting
    /// @param feat_id feature id, 0 for not a feature setting
    /// @param val default value
    /// @param desc description
    /// @param is_flag is a flag only, no value
    void add_feature(const std::string &param, int comp_id, int feat_id, std::string val, const std::string &desc, bool is_flag = false)
    {
        cmd_parser.addItem(param, desc, is_flag, val, ty_fetaure_options(comp_id, feat_id));
    }

    /// @brief get a command line item by param name
    /// @param name command line param name
    /// @return command line item
    const TyCommandlineItem<ty_fetaure_options> *get_feature(const std::string &name) const
    {
        auto res = cmd_parser.get(name);
        return res;
    }

    /// @brief get command line param description
    /// @return description string
    std::string usage_describe() const
    {
        return cmd_parser.getUsage();
    }

    /// @brief parse command line args
    void parse_argv(int argc, char *argv[])
    {
        cmd_parser.parse(argc, argv);
    }

    /// @brief set command line param to device
    /// @param hDevice device handle
    void set_device_feature(TY_DEV_HANDLE hDevice)
    {
        // loop over all command line argv items and set them to the device
        for (auto &kv : cmd_parser.cmd_items)
        {
            auto &p = kv.second;
            int res = TY_STATUS_OK;
            if (!p.has_set)
            {
                continue;
            }
            int feature_id = p.ctx.feature_id;
            int comp_id = p.ctx.component_id;
            if (comp_id == 0 && feature_id == 0)
            {
                // param is not a feature setting
                continue;
            }
            // set feature by type
            int type = feature_id & 0xf000;
            if (type == TY_FEATURE_INT)
            {
                int val = p.get_int_val();
                LOGD("set feature %s (compId 0x%x featId 0x%x) to %d", p.name.c_str(), comp_id, feature_id, val);
                res = TYSetInt(hDevice, comp_id, feature_id, val);
            }
            else if (type == TY_FEATURE_BOOL)
            {
                bool val = p.get_bool_val();
                LOGD("set feature %s (compId 0x%x featId 0x%x) to %d", p.name.c_str(), comp_id, feature_id, val);
                res = TYSetBool(hDevice, comp_id, feature_id, val);
            }
            else if (type == TY_FEATURE_FLOAT)
            {
                float val = p.get_float_val();
                LOGD("set feature %s (compId 0x%x featId 0x%x) to %f", p.name.c_str(), comp_id, feature_id, val);
                res = TYSetFloat(hDevice, comp_id, feature_id, val);
            }
            else
            {
                LOGE("unknown feature type %d for %s", type, p.name.c_str());
                continue;
            }
            if (res != TY_STATUS_OK)
            {
                LOGE("set feature %s (%s) FAILED with return status code %d", p.name.c_str(), p.describe.c_str(), res);
            }
        }
    }
};

#endif // CMDLINE_FEATURE_HELPER__H_

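A small usage sketch for the helper above. The component/feature IDs (`TY_COMPONENT_RGB_CAM`, `TY_INT_EXPOSURE_TIME`) come from TYApi.h, and the device handle is assumed to have been opened elsewhere:

```cpp
#include <cstdio>
#include "CommandLineFeatureHelper.hpp"

// Sketch only: hDevice is assumed to be an already-opened TY_DEV_HANDLE.
static void ApplyCommandLineFeatures(int argc, char* argv[], TY_DEV_HANDLE hDevice)
{
    CommandLineFeatureHelper helper;
    // Map "-exposure <value>" onto the RGB camera's exposure-time feature (default 1000).
    helper.add_feature("exposure", TY_COMPONENT_RGB_CAM, TY_INT_EXPOSURE_TIME,
                       1000, "RGB exposure time");
    // A plain flag that is not forwarded to the device (component/feature id 0).
    helper.add_feature("verbose", 0, 0, 0, "print extra logs", true);

    helper.parse_argv(argc, argv);
    std::printf("%s", helper.usage_describe().c_str());

    // Push every feature that was actually given on the command line to the device.
    helper.set_device_feature(hDevice);
}
```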
173
image_capture/third_party/percipio/common/CommandLineParser.hpp
vendored
Normal file
@@ -0,0 +1,173 @@
|
||||
#ifndef _TYP_COMMAND_LINE_PARSER_HPP
|
||||
#define _TYP_COMMAND_LINE_PARSER_HPP
|
||||
|
||||
#include <string>
|
||||
#include <vector>
|
||||
#include <map>
|
||||
|
||||
/// @brief command line arg item
|
||||
/// @tparam T context type
|
||||
template <class T>
|
||||
class TyCommandlineItem
|
||||
{
|
||||
public:
|
||||
TyCommandlineItem(const std::string &name = "",
|
||||
const std::string &describe = "",
|
||||
bool is_flag = false,
|
||||
const std::string &default_val = "")
|
||||
{
|
||||
this->name = name;
|
||||
this->describe = describe;
|
||||
this->default_val = default_val;
|
||||
this->is_flag = is_flag;
|
||||
has_set = false;
|
||||
curr_val = default_val;
|
||||
}
|
||||
std::string name, describe; ///< name and describe
|
||||
std::string default_val; ///< default value
|
||||
bool is_flag; ///< flag only, no value
|
||||
T ctx; ///< context
|
||||
|
||||
bool has_set; ///< has set by command line
|
||||
std::string curr_val; ///< current arg value
|
||||
|
||||
|
||||
int get_int_val() const
|
||||
{
|
||||
return std::stoi(curr_val);
|
||||
}
|
||||
|
||||
float get_float_val() const
|
||||
{
|
||||
return std::stof(curr_val);
|
||||
}
|
||||
|
||||
double get_double_val() const
|
||||
{
|
||||
return std::stod(curr_val);
|
||||
}
|
||||
|
||||
std::string get_str_val() const
|
||||
{
|
||||
return curr_val;
|
||||
}
|
||||
|
||||
bool get_bool_val() const
|
||||
{
|
||||
return curr_val == "true" || curr_val == "1";
|
||||
}
|
||||
};
|
||||
|
||||
|
||||
////--------------------
|
||||
|
||||
/// @brief command line parser
|
||||
/// @tparam T context type
|
||||
template <class T>
|
||||
class TyCommandlineParser
|
||||
{
|
||||
|
||||
public:
|
||||
std::map<std::string, TyCommandlineItem<T>> cmd_items; ///< command line items
|
||||
|
||||
/// @brief add command line item
|
||||
/// @param name item name
|
||||
/// @param describe item description
|
||||
/// @param is_flag is flag only
|
||||
/// @param default_val default value
|
||||
/// @param ctx context
|
||||
void addItem(const std::string &name,
|
||||
const std::string &describe,
|
||||
bool is_flag = false,
|
||||
const std::string &default_val = "0",
|
||||
T ctx = T())
|
||||
{
|
||||
TyCommandlineItem<T> item(name, describe, is_flag, default_val);
|
||||
item.ctx = ctx;
|
||||
cmd_items.emplace(name, item);
|
||||
}
|
||||
|
||||
/// @brief clear all items
|
||||
void clear()
|
||||
{
|
||||
cmd_items.clear();
|
||||
}
|
||||
|
||||
/// @brief parse command line
|
||||
/// @param argc arg count
|
||||
/// @param argv arg list
|
||||
/// @return always 0 (unknown options are skipped with a warning)
|
||||
int parse(int argc, char *argv[])
|
||||
{
|
||||
int idx = 1;
|
||||
while (idx < argc)
|
||||
{
|
||||
std::string arg = argv[idx];
|
||||
if (arg[0] != '-')
{
// not an option token; advance so the loop cannot spin forever
idx++;
continue;
}
|
||||
arg = arg.substr(1);
|
||||
auto find_res = cmd_items.find(arg);
|
||||
if (find_res == cmd_items.end()) {
printf("TyCommandlineParser: ignoring unknown param: %s\n", arg.c_str());
|
||||
idx++;
|
||||
continue;
|
||||
}
|
||||
auto& item = find_res->second;
|
||||
item.has_set = true;
|
||||
item.curr_val = item.default_val;
|
||||
if (idx + 1 < argc && !item.is_flag)
|
||||
{
|
||||
item.curr_val = argv[idx + 1];
|
||||
idx++;
|
||||
}
|
||||
idx++;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/// @brief get command line item
|
||||
/// @param name item name
|
||||
/// @return item
|
||||
const TyCommandlineItem<T> *get(const std::string &name) const
|
||||
{
|
||||
auto find_res = cmd_items.find(name);
|
||||
if (find_res != cmd_items.end()) {
|
||||
return &find_res->second;
|
||||
}
|
||||
LOGE("ERROR: not find command argv by name %s ", name.c_str());
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
/// @brief get usage string
|
||||
/// @return usage string
|
||||
std::string getUsage() const
|
||||
{
|
||||
std::string usage = "Usage: \n";
|
||||
size_t max_name_len = 1;
|
||||
for (auto& kv : cmd_items) {
|
||||
max_name_len = std::max(kv.first.size(), max_name_len);
|
||||
}
|
||||
for (auto& kv : cmd_items)
|
||||
{
|
||||
const auto &cmd = kv.second;
|
||||
std::string name = cmd.name;
|
||||
if (name.size() < max_name_len) {
|
||||
name.append(max_name_len - name.size(), ' ');
|
||||
}
|
||||
usage += " -" + name + " ";
|
||||
if (!cmd.is_flag)
|
||||
{
|
||||
usage += "<value> ";
|
||||
}
|
||||
else {
|
||||
usage += " ";
|
||||
}
|
||||
usage += cmd.describe + " \n";
|
||||
}
|
||||
return usage;
|
||||
}
|
||||
};
|
||||
|
||||
#endif // _TYP_COMMAND_LINE_PARSER_HPP
|
||||
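A self-contained usage sketch for the TyCommandlineParser defined above. The option names, the LOGE stand-in and the main body are illustrative, not part of the vendored code:

```cpp
#include <cstdio>
// get() logs through LOGE, which the samples normally take from common/Utils.hpp;
// a trivial stand-in keeps this sketch self-contained.
#ifndef LOGE
#define LOGE(fmt, ...) std::printf("Error: " fmt "\n", ##__VA_ARGS__)
#endif
#include "CommandLineParser.hpp"

int main(int argc, char* argv[])
{
    TyCommandlineParser<int> parser;  // the int context is unused in this sketch
    parser.addItem("ip",   "device IP address", false, "192.168.1.100");
    parser.addItem("list", "list devices and exit", true);

    if (argc < 2) {
        std::printf("%s", parser.getUsage().c_str());
        return 0;
    }
    parser.parse(argc, argv);  // unknown options are skipped with a warning

    const TyCommandlineItem<int>* ip = parser.get("ip");
    if (ip && ip->has_set) {
        std::printf("connecting to %s\n", ip->get_str_val().c_str());
    }
    const TyCommandlineItem<int>* list = parser.get("list");
    if (list && list->has_set) {
        std::printf("listing devices...\n");
    }
    return 0;
}
```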
647
image_capture/third_party/percipio/common/DepthInpainter.cpp
vendored
Normal file
@@ -0,0 +1,647 @@
|
||||
#include "DepthInpainter.hpp"
|
||||
#include <stdint.h>
|
||||
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
|
||||
#include <opencv2/opencv.hpp>
|
||||
#ifndef CV_VERSION_EPOCH
|
||||
#if defined (CV_MAJOR_VERSION) && (CV_VERSION_MAJOR == 4)
|
||||
#include <opencv2/imgproc/types_c.h>
|
||||
#include <opencv2/imgproc/imgproc_c.h>
|
||||
#include <opencv2/photo/legacy/constants_c.h>
|
||||
#include <opencv2/imgcodecs/legacy/constants_c.h>
|
||||
#endif
|
||||
#endif
|
||||
|
||||
using namespace cv;
|
||||
|
||||
#undef CV_MAT_ELEM_PTR_FAST
|
||||
#define CV_MAT_ELEM_PTR_FAST( mat, row, col, pix_size ) \
|
||||
((mat).data.ptr + (size_t)(mat).step*(row) + (pix_size)*(col))
|
||||
|
||||
inline float
|
||||
min4( float a, float b, float c, float d )
|
||||
{
|
||||
a = MIN(a,b);
|
||||
c = MIN(c,d);
|
||||
return MIN(a,c);
|
||||
}
|
||||
|
||||
#define CV_MAT_3COLOR_ELEM(img,type,y,x,c) CV_MAT_ELEM(img,type,y,(x)*3+(c))
|
||||
#define KNOWN 0 //known outside narrow band
|
||||
#define BAND 1 //narrow band (known)
|
||||
#define INSIDE 2 //unknown
|
||||
#define CHANGE 3 //service
|
||||
|
||||
typedef struct CvHeapElem
|
||||
{
|
||||
float T;
|
||||
int i,j;
|
||||
struct CvHeapElem* prev;
|
||||
struct CvHeapElem* next;
|
||||
}
|
||||
CvHeapElem;
|
||||
|
||||
|
||||
class CvPriorityQueueFloat
|
||||
{
|
||||
protected:
|
||||
CvHeapElem *mem,*empty,*head,*tail;
|
||||
int num,in;
|
||||
|
||||
public:
|
||||
bool Init( const CvMat* f )
|
||||
{
|
||||
int i,j;
|
||||
for( i = num = 0; i < f->rows; i++ )
|
||||
{
|
||||
for( j = 0; j < f->cols; j++ )
|
||||
num += CV_MAT_ELEM(*f,uchar,i,j)!=0;
|
||||
}
|
||||
if (num<=0) return false;
|
||||
mem = (CvHeapElem*)cvAlloc((num+2)*sizeof(CvHeapElem));
|
||||
if (mem==NULL) return false;
|
||||
|
||||
head = mem;
|
||||
head->i = head->j = -1;
|
||||
head->prev = NULL;
|
||||
head->next = mem+1;
|
||||
head->T = -FLT_MAX;
|
||||
empty = mem+1;
|
||||
for (i=1; i<=num; i++) {
|
||||
mem[i].prev = mem+i-1;
|
||||
mem[i].next = mem+i+1;
|
||||
mem[i].i = -1;
|
||||
mem[i].T = FLT_MAX;
|
||||
}
|
||||
tail = mem+i;
|
||||
tail->i = tail->j = -1;
|
||||
tail->prev = mem+i-1;
|
||||
tail->next = NULL;
|
||||
tail->T = FLT_MAX;
|
||||
return true;
|
||||
}
|
||||
|
||||
bool Add(const CvMat* f) {
|
||||
int i,j;
|
||||
for (i=0; i<f->rows; i++) {
|
||||
for (j=0; j<f->cols; j++) {
|
||||
if (CV_MAT_ELEM(*f,uchar,i,j)!=0) {
|
||||
if (!Push(i,j,0)) return false;
|
||||
}
|
||||
}
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
bool Push(int i, int j, float T) {
|
||||
CvHeapElem *tmp=empty,*add=empty;
|
||||
if (empty==tail) return false;
|
||||
while (tmp->prev->T>T) tmp = tmp->prev;
|
||||
if (tmp!=empty) {
|
||||
add->prev->next = add->next;
|
||||
add->next->prev = add->prev;
|
||||
empty = add->next;
|
||||
add->prev = tmp->prev;
|
||||
add->next = tmp;
|
||||
add->prev->next = add;
|
||||
add->next->prev = add;
|
||||
} else {
|
||||
empty = empty->next;
|
||||
}
|
||||
add->i = i;
|
||||
add->j = j;
|
||||
add->T = T;
|
||||
in++;
|
||||
// printf("push i %3d j %3d T %12.4e in %4d\n",i,j,T,in);
|
||||
return true;
|
||||
}
|
||||
|
||||
bool Pop(int *i, int *j) {
|
||||
CvHeapElem *tmp=head->next;
|
||||
if (empty==tmp) return false;
|
||||
*i = tmp->i;
|
||||
*j = tmp->j;
|
||||
tmp->prev->next = tmp->next;
|
||||
tmp->next->prev = tmp->prev;
|
||||
tmp->prev = empty->prev;
|
||||
tmp->next = empty;
|
||||
tmp->prev->next = tmp;
|
||||
tmp->next->prev = tmp;
|
||||
empty = tmp;
|
||||
in--;
|
||||
// printf("pop i %3d j %3d T %12.4e in %4d\n",tmp->i,tmp->j,tmp->T,in);
|
||||
return true;
|
||||
}
|
||||
|
||||
bool Pop(int *i, int *j, float *T) {
|
||||
CvHeapElem *tmp=head->next;
|
||||
if (empty==tmp) return false;
|
||||
*i = tmp->i;
|
||||
*j = tmp->j;
|
||||
*T = tmp->T;
|
||||
tmp->prev->next = tmp->next;
|
||||
tmp->next->prev = tmp->prev;
|
||||
tmp->prev = empty->prev;
|
||||
tmp->next = empty;
|
||||
tmp->prev->next = tmp;
|
||||
tmp->next->prev = tmp;
|
||||
empty = tmp;
|
||||
in--;
|
||||
// printf("pop i %3d j %3d T %12.4e in %4d\n",tmp->i,tmp->j,tmp->T,in);
|
||||
return true;
|
||||
}
|
||||
|
||||
CvPriorityQueueFloat(void) {
|
||||
num=in=0;
|
||||
mem=empty=head=tail=NULL;
|
||||
}
|
||||
|
||||
~CvPriorityQueueFloat(void)
|
||||
{
|
||||
cvFree( &mem );
|
||||
}
|
||||
};
|
||||
|
||||
inline float VectorScalMult(CvPoint2D32f v1,CvPoint2D32f v2) {
|
||||
return v1.x*v2.x+v1.y*v2.y;
|
||||
}
|
||||
|
||||
inline float VectorLength(CvPoint2D32f v1) {
|
||||
return v1.x*v1.x+v1.y*v1.y;
|
||||
}
|
||||
|
||||
///////////////////////////////////////////////////////////////////////////////////////////
|
||||
//HEAP::iterator Heap_Iterator;
|
||||
//HEAP Heap;
|
||||
|
||||
static float FastMarching_solve(int i1,int j1,int i2,int j2, const CvMat* f, const CvMat* t)
|
||||
{
|
||||
double sol, a11, a22, m12;
|
||||
a11=CV_MAT_ELEM(*t,float,i1,j1);
|
||||
a22=CV_MAT_ELEM(*t,float,i2,j2);
|
||||
m12=MIN(a11,a22);
|
||||
|
||||
if( CV_MAT_ELEM(*f,uchar,i1,j1) != INSIDE )
|
||||
if( CV_MAT_ELEM(*f,uchar,i2,j2) != INSIDE )
|
||||
if( fabs(a11-a22) >= 1.0 )
|
||||
sol = 1+m12;
|
||||
else
|
||||
sol = (a11+a22+sqrt((double)(2-(a11-a22)*(a11-a22))))*0.5;
|
||||
else
|
||||
sol = 1+a11;
|
||||
else if( CV_MAT_ELEM(*f,uchar,i2,j2) != INSIDE )
|
||||
sol = 1+a22;
|
||||
else
|
||||
sol = 1+m12;
|
||||
|
||||
return (float)sol;
|
||||
}
|
||||
|
||||
/////////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
|
||||
static void
|
||||
icvCalcFMM(const CvMat *f, CvMat *t, CvPriorityQueueFloat *Heap, bool negate) {
|
||||
int i, j, ii = 0, jj = 0, q;
|
||||
float dist;
|
||||
|
||||
while (Heap->Pop(&ii,&jj)) {
|
||||
|
||||
unsigned known=(negate)?CHANGE:KNOWN;
|
||||
CV_MAT_ELEM(*f,uchar,ii,jj) = (uchar)known;
|
||||
|
||||
for (q=0; q<4; q++) {
|
||||
i=0; j=0;
|
||||
if (q==0) {i=ii-1; j=jj;}
|
||||
else if(q==1) {i=ii; j=jj-1;}
|
||||
else if(q==2) {i=ii+1; j=jj;}
|
||||
else {i=ii; j=jj+1;}
|
||||
if ((i<=0)||(j<=0)||(i>f->rows)||(j>f->cols)) continue;
|
||||
|
||||
if (CV_MAT_ELEM(*f,uchar,i,j)==INSIDE) {
|
||||
dist = min4(FastMarching_solve(i-1,j,i,j-1,f,t),
|
||||
FastMarching_solve(i+1,j,i,j-1,f,t),
|
||||
FastMarching_solve(i-1,j,i,j+1,f,t),
|
||||
FastMarching_solve(i+1,j,i,j+1,f,t));
|
||||
CV_MAT_ELEM(*t,float,i,j) = dist;
|
||||
CV_MAT_ELEM(*f,uchar,i,j) = BAND;
|
||||
Heap->Push(i,j,dist);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (negate) {
|
||||
for (i=0; i<f->rows; i++) {
|
||||
for(j=0; j<f->cols; j++) {
|
||||
if (CV_MAT_ELEM(*f,uchar,i,j) == CHANGE) {
|
||||
CV_MAT_ELEM(*f,uchar,i,j) = KNOWN;
|
||||
CV_MAT_ELEM(*t,float,i,j) = -CV_MAT_ELEM(*t,float,i,j);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
static void
|
||||
icvTeleaInpaintFMM(const CvMat *f, CvMat *t, CvMat *out, int range, CvPriorityQueueFloat *Heap ) {
|
||||
int i = 0, j = 0, ii = 0, jj = 0, k, l, q, color = 0;
|
||||
float dist;
|
||||
|
||||
if (CV_MAT_CN(out->type)==1) {
|
||||
|
||||
while (Heap->Pop(&ii,&jj)) {
|
||||
|
||||
CV_MAT_ELEM(*f,uchar,ii,jj) = KNOWN;
|
||||
for(q=0; q<4; q++) {
|
||||
if (q==0) {i=ii-1; j=jj;}
|
||||
else if(q==1) {i=ii; j=jj-1;}
|
||||
else if(q==2) {i=ii+1; j=jj;}
|
||||
else if(q==3) {i=ii; j=jj+1;}
|
||||
if ((i<=1)||(j<=1)||(i>t->rows-1)||(j>t->cols-1)) continue;
|
||||
|
||||
if (CV_MAT_ELEM(*f,uchar,i,j)==INSIDE) {
|
||||
dist = min4(FastMarching_solve(i-1,j,i,j-1,f,t),
|
||||
FastMarching_solve(i+1,j,i,j-1,f,t),
|
||||
FastMarching_solve(i-1,j,i,j+1,f,t),
|
||||
FastMarching_solve(i+1,j,i,j+1,f,t));
|
||||
CV_MAT_ELEM(*t,float,i,j) = dist;
|
||||
|
||||
for (color=0; color<=0; color++) {
|
||||
CvPoint2D32f gradI,gradT,r;
|
||||
float Ia=0,Jx=0,Jy=0,s=1.0e-20f,w,dst,lev,dir,sat;
|
||||
|
||||
if (CV_MAT_ELEM(*f,uchar,i,j+1)!=INSIDE) {
|
||||
if (CV_MAT_ELEM(*f,uchar,i,j-1)!=INSIDE) {
|
||||
gradT.x=(float)((CV_MAT_ELEM(*t,float,i,j+1)-CV_MAT_ELEM(*t,float,i,j-1)))*0.5f;
|
||||
} else {
|
||||
gradT.x=(float)((CV_MAT_ELEM(*t,float,i,j+1)-CV_MAT_ELEM(*t,float,i,j)));
|
||||
}
|
||||
} else {
|
||||
if (CV_MAT_ELEM(*f,uchar,i,j-1)!=INSIDE) {
|
||||
gradT.x=(float)((CV_MAT_ELEM(*t,float,i,j)-CV_MAT_ELEM(*t,float,i,j-1)));
|
||||
} else {
|
||||
gradT.x=0;
|
||||
}
|
||||
}
|
||||
if (CV_MAT_ELEM(*f,uchar,i+1,j)!=INSIDE) {
|
||||
if (CV_MAT_ELEM(*f,uchar,i-1,j)!=INSIDE) {
|
||||
gradT.y=(float)((CV_MAT_ELEM(*t,float,i+1,j)-CV_MAT_ELEM(*t,float,i-1,j)))*0.5f;
|
||||
} else {
|
||||
gradT.y=(float)((CV_MAT_ELEM(*t,float,i+1,j)-CV_MAT_ELEM(*t,float,i,j)));
|
||||
}
|
||||
} else {
|
||||
if (CV_MAT_ELEM(*f,uchar,i-1,j)!=INSIDE) {
|
||||
gradT.y=(float)((CV_MAT_ELEM(*t,float,i,j)-CV_MAT_ELEM(*t,float,i-1,j)));
|
||||
} else {
|
||||
gradT.y=0;
|
||||
}
|
||||
}
|
||||
for (k=i-range; k<=i+range; k++) {
|
||||
int km=k-1+(k==1),kp=k-1-(k==t->rows-2);
|
||||
for (l=j-range; l<=j+range; l++) {
|
||||
int lm=l-1+(l==1),lp=l-1-(l==t->cols-2);
|
||||
if (k>0&&l>0&&k<t->rows-1&&l<t->cols-1) {
|
||||
if ((CV_MAT_ELEM(*f,uchar,k,l)!=INSIDE)&&
|
||||
((l-j)*(l-j)+(k-i)*(k-i)<=range*range)) {
|
||||
r.y = (float)(i-k);
|
||||
r.x = (float)(j-l);
|
||||
|
||||
dst = (float)(1./(VectorLength(r)*sqrt(VectorLength(r))));
|
||||
lev = (float)(1./(1+fabs(CV_MAT_ELEM(*t,float,k,l)-CV_MAT_ELEM(*t,float,i,j))));
|
||||
|
||||
dir=VectorScalMult(r,gradT);
|
||||
if (fabs(dir)<=0.01) dir=0.000001f;
|
||||
w = (float)fabs(dst*lev*dir);
|
||||
|
||||
if (CV_MAT_ELEM(*f,uchar,k,l+1)!=INSIDE) {
|
||||
if (CV_MAT_ELEM(*f,uchar,k,l-1)!=INSIDE) {
|
||||
// gradI.x=(float)((CV_MAT_ELEM(*out,uchar,km,lp+1)-CV_MAT_ELEM(*out,uchar,km,lm-1)))*2.0f;
|
||||
gradI.x=(float)((CV_MAT_ELEM(*out,uint16_t,km,lp+1)-CV_MAT_ELEM(*out,uint16_t,km,lm-1)))*2.0f;
|
||||
} else {
|
||||
// gradI.x=(float)((CV_MAT_ELEM(*out,uchar,km,lp+1)-CV_MAT_ELEM(*out,uchar,km,lm)));
|
||||
gradI.x=(float)((CV_MAT_ELEM(*out,uint16_t,km,lp+1)-CV_MAT_ELEM(*out,uint16_t,km,lm)));
|
||||
}
|
||||
} else {
|
||||
if (CV_MAT_ELEM(*f,uchar,k,l-1)!=INSIDE) {
|
||||
// gradI.x=(float)((CV_MAT_ELEM(*out,uchar,km,lp)-CV_MAT_ELEM(*out,uchar,km,lm-1)));
|
||||
gradI.x=(float)((CV_MAT_ELEM(*out,uint16_t,km,lp)-CV_MAT_ELEM(*out,uint16_t,km,lm-1)));
|
||||
} else {
|
||||
gradI.x=0;
|
||||
}
|
||||
}
|
||||
if (CV_MAT_ELEM(*f,uchar,k+1,l)!=INSIDE) {
|
||||
if (CV_MAT_ELEM(*f,uchar,k-1,l)!=INSIDE) {
|
||||
// gradI.y=(float)((CV_MAT_ELEM(*out,uchar,kp+1,lm)-CV_MAT_ELEM(*out,uchar,km-1,lm)))*2.0f;
|
||||
gradI.y=(float)((CV_MAT_ELEM(*out,uint16_t,kp+1,lm)-CV_MAT_ELEM(*out,uint16_t,km-1,lm)))*2.0f;
|
||||
} else {
|
||||
// gradI.y=(float)((CV_MAT_ELEM(*out,uchar,kp+1,lm)-CV_MAT_ELEM(*out,uchar,km,lm)));
|
||||
gradI.y=(float)((CV_MAT_ELEM(*out,uint16_t,kp+1,lm)-CV_MAT_ELEM(*out,uint16_t,km,lm)));
|
||||
}
|
||||
} else {
|
||||
if (CV_MAT_ELEM(*f,uchar,k-1,l)!=INSIDE) {
|
||||
// gradI.y=(float)((CV_MAT_ELEM(*out,uchar,kp,lm)-CV_MAT_ELEM(*out,uchar,km-1,lm)));
|
||||
gradI.y=(float)((CV_MAT_ELEM(*out,uint16_t,kp,lm)-CV_MAT_ELEM(*out,uint16_t,km-1,lm)));
|
||||
} else {
|
||||
gradI.y=0;
|
||||
}
|
||||
}
|
||||
// Ia += (float)w * (float)(CV_MAT_ELEM(*out,uchar,km,lm));
|
||||
Ia += (float)w * (float)(CV_MAT_ELEM(*out,uint16_t,km,lm));
|
||||
Jx -= (float)w * (float)(gradI.x*r.x);
|
||||
Jy -= (float)w * (float)(gradI.y*r.y);
|
||||
s += w;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
sat = (float)((Ia/s+(Jx+Jy)/(sqrt(Jx*Jx+Jy*Jy)+1.0e-20f)+0.5f));
|
||||
{
|
||||
// CV_MAT_ELEM(*out,uchar,i-1,j-1) = cv::saturate_cast<uchar>(sat);
|
||||
CV_MAT_ELEM(*out,uint16_t,i-1,j-1) = cv::saturate_cast<uint16_t>(sat);
|
||||
}
|
||||
}
|
||||
|
||||
CV_MAT_ELEM(*f,uchar,i,j) = BAND;
|
||||
Heap->Push(i,j,dist);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
static void
|
||||
icvNSInpaintFMM(const CvMat *f, CvMat *t, CvMat *out, int range, CvPriorityQueueFloat *Heap) {
|
||||
int i = 0, j = 0, ii = 0, jj = 0, k, l, q;
|
||||
float dist;
|
||||
|
||||
if (CV_MAT_CN(out->type)==1) {
|
||||
|
||||
while (Heap->Pop(&ii,&jj)) {
|
||||
|
||||
CV_MAT_ELEM(*f,uchar,ii,jj) = KNOWN;
|
||||
for(q=0; q<4; q++) {
|
||||
if (q==0) {i=ii-1; j=jj;}
|
||||
else if(q==1) {i=ii; j=jj-1;}
|
||||
else if(q==2) {i=ii+1; j=jj;}
|
||||
else if(q==3) {i=ii; j=jj+1;}
|
||||
if ((i<=1)||(j<=1)||(i>t->rows-1)||(j>t->cols-1)) continue;
|
||||
|
||||
if (CV_MAT_ELEM(*f,uchar,i,j)==INSIDE) {
|
||||
dist = min4(FastMarching_solve(i-1,j,i,j-1,f,t),
|
||||
FastMarching_solve(i+1,j,i,j-1,f,t),
|
||||
FastMarching_solve(i-1,j,i,j+1,f,t),
|
||||
FastMarching_solve(i+1,j,i,j+1,f,t));
|
||||
CV_MAT_ELEM(*t,float,i,j) = dist;
|
||||
|
||||
{
|
||||
CvPoint2D32f gradI,r;
|
||||
float Ia=0,s=1.0e-20f,w,dst,dir;
|
||||
|
||||
for (k=i-range; k<=i+range; k++) {
|
||||
int km=k-1+(k==1),kp=k-1-(k==t->rows-2);
|
||||
for (l=j-range; l<=j+range; l++) {
|
||||
int lm=l-1+(l==1),lp=l-1-(l==t->cols-2);
|
||||
if (k>0&&l>0&&k<t->rows-1&&l<t->cols-1) {
|
||||
if ((CV_MAT_ELEM(*f,uchar,k,l)!=INSIDE)&&
|
||||
((l-j)*(l-j)+(k-i)*(k-i)<=range*range)) {
|
||||
r.y=(float)(i-k);
|
||||
r.x=(float)(j-l);
|
||||
|
||||
dst = 1/(VectorLength(r)*VectorLength(r)+1);
|
||||
|
||||
if (CV_MAT_ELEM(*f,uchar,k+1,l)!=INSIDE) {
|
||||
if (CV_MAT_ELEM(*f,uchar,k-1,l)!=INSIDE) {
|
||||
// gradI.x=(float)(abs(CV_MAT_ELEM(*out,uchar,kp+1,lm)-CV_MAT_ELEM(*out,uchar,kp,lm))+
|
||||
// abs(CV_MAT_ELEM(*out,uchar,kp,lm)-CV_MAT_ELEM(*out,uchar,km-1,lm)));
|
||||
gradI.x=(float)(abs(CV_MAT_ELEM(*out,uint16_t,kp+1,lm)-CV_MAT_ELEM(*out,uint16_t,kp,lm))+
|
||||
abs(CV_MAT_ELEM(*out,uint16_t,kp,lm)-CV_MAT_ELEM(*out,uint16_t,km-1,lm)));
|
||||
} else {
|
||||
// gradI.x=(float)(abs(CV_MAT_ELEM(*out,uchar,kp+1,lm)-CV_MAT_ELEM(*out,uchar,kp,lm)))*2.0f;
|
||||
gradI.x=(float)(abs(CV_MAT_ELEM(*out,uint16_t,kp+1,lm)-CV_MAT_ELEM(*out,uint16_t,kp,lm)))*2.0f;
|
||||
}
|
||||
} else {
|
||||
if (CV_MAT_ELEM(*f,uchar,k-1,l)!=INSIDE) {
|
||||
// gradI.x=(float)(abs(CV_MAT_ELEM(*out,uchar,kp,lm)-CV_MAT_ELEM(*out,uchar,km-1,lm)))*2.0f;
|
||||
gradI.x=(float)(abs(CV_MAT_ELEM(*out,uint16_t,kp,lm)-CV_MAT_ELEM(*out,uint16_t,km-1,lm)))*2.0f;
|
||||
} else {
|
||||
gradI.x=0;
|
||||
}
|
||||
}
|
||||
if (CV_MAT_ELEM(*f,uchar,k,l+1)!=INSIDE) {
|
||||
if (CV_MAT_ELEM(*f,uchar,k,l-1)!=INSIDE) {
|
||||
// gradI.y=(float)(abs(CV_MAT_ELEM(*out,uchar,km,lp+1)-CV_MAT_ELEM(*out,uchar,km,lm))+
|
||||
// abs(CV_MAT_ELEM(*out,uchar,km,lm)-CV_MAT_ELEM(*out,uchar,km,lm-1)));
|
||||
gradI.y=(float)(abs(CV_MAT_ELEM(*out,uint16_t,km,lp+1)-CV_MAT_ELEM(*out,uint16_t,km,lm))+
|
||||
abs(CV_MAT_ELEM(*out,uint16_t,km,lm)-CV_MAT_ELEM(*out,uint16_t,km,lm-1)));
|
||||
} else {
|
||||
// gradI.y=(float)(abs(CV_MAT_ELEM(*out,uchar,km,lp+1)-CV_MAT_ELEM(*out,uchar,km,lm)))*2.0f;
|
||||
gradI.y=(float)(abs(CV_MAT_ELEM(*out,uint16_t,km,lp+1)-CV_MAT_ELEM(*out,uint16_t,km,lm)))*2.0f;
|
||||
}
|
||||
} else {
|
||||
if (CV_MAT_ELEM(*f,uchar,k,l-1)!=INSIDE) {
|
||||
// gradI.y=(float)(abs(CV_MAT_ELEM(*out,uchar,km,lm)-CV_MAT_ELEM(*out,uchar,km,lm-1)))*2.0f;
|
||||
gradI.y=(float)(abs(CV_MAT_ELEM(*out,uint16_t,km,lm)-CV_MAT_ELEM(*out,uint16_t,km,lm-1)))*2.0f;
|
||||
} else {
|
||||
gradI.y=0;
|
||||
}
|
||||
}
|
||||
|
||||
gradI.x=-gradI.x;
|
||||
dir=VectorScalMult(r,gradI);
|
||||
|
||||
if (fabs(dir)<=0.01) {
|
||||
dir=0.000001f;
|
||||
} else {
|
||||
dir = (float)fabs(VectorScalMult(r,gradI)/sqrt(VectorLength(r)*VectorLength(gradI)));
|
||||
}
|
||||
w = dst*dir;
|
||||
// Ia += (float)w * (float)(CV_MAT_ELEM(*out,uchar,km,lm));
|
||||
Ia += (float)w * (float)(CV_MAT_ELEM(*out,uint16_t,km,lm));
|
||||
s += w;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
// CV_MAT_ELEM(*out,uchar,i-1,j-1) = cv::saturate_cast<uchar>((double)Ia/s);
|
||||
CV_MAT_ELEM(*out,uint16_t,i-1,j-1) = cv::saturate_cast<uint16_t>((double)Ia/s);
|
||||
}
|
||||
|
||||
CV_MAT_ELEM(*f,uchar,i,j) = BAND;
|
||||
Heap->Push(i,j,dist);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
#define SET_BORDER1_C1(image,type,value) {\
|
||||
int i,j;\
|
||||
for(j=0; j<image->cols; j++) {\
|
||||
CV_MAT_ELEM(*image,type,0,j) = value;\
|
||||
}\
|
||||
for (i=1; i<image->rows-1; i++) {\
|
||||
CV_MAT_ELEM(*image,type,i,0) = CV_MAT_ELEM(*image,type,i,image->cols-1) = value;\
|
||||
}\
|
||||
for(j=0; j<image->cols; j++) {\
|
||||
CV_MAT_ELEM(*image,type,image->rows-1,j) = value;\
|
||||
}\
|
||||
}
|
||||
|
||||
#define COPY_MASK_BORDER1_C1(src,dst,type) {\
|
||||
int i,j;\
|
||||
for (i=0; i<src->rows; i++) {\
|
||||
for(j=0; j<src->cols; j++) {\
|
||||
if (CV_MAT_ELEM(*src,type,i,j)!=0)\
|
||||
CV_MAT_ELEM(*dst,type,i+1,j+1) = INSIDE;\
|
||||
}\
|
||||
}\
|
||||
}
|
||||
|
||||
|
||||
void
|
||||
_cvInpaint( const CvArr* _input_img, const CvArr* _inpaint_mask, CvArr* _output_img,
|
||||
double inpaintRange, int flags )
|
||||
{
|
||||
cv::Ptr<CvMat> mask, band, f, t, out;
|
||||
cv::Ptr<CvPriorityQueueFloat> Heap, Out;
|
||||
IplConvKernel *el_cross, *el_range;
|
||||
|
||||
CvMat input_hdr, mask_hdr, output_hdr;
|
||||
CvMat* input_img, *inpaint_mask, *output_img;
|
||||
int range=cvRound(inpaintRange);
|
||||
int erows, ecols;
|
||||
|
||||
input_img = cvGetMat( _input_img, &input_hdr );
|
||||
inpaint_mask = cvGetMat( _inpaint_mask, &mask_hdr );
|
||||
output_img = cvGetMat( _output_img, &output_hdr );
|
||||
|
||||
if( !CV_ARE_SIZES_EQ(input_img,output_img) || !CV_ARE_SIZES_EQ(input_img,inpaint_mask))
|
||||
CV_Error( CV_StsUnmatchedSizes, "All the input and output images must have the same size" );
|
||||
|
||||
if( (CV_MAT_TYPE(input_img->type) != CV_16UC1) ||
|
||||
!CV_ARE_TYPES_EQ(input_img,output_img) )
|
||||
CV_Error( CV_StsUnsupportedFormat,
|
||||
"Only 8-bit 1-channel and 3-channel input/output images are supported" );
|
||||
|
||||
if( CV_MAT_TYPE(inpaint_mask->type) != CV_8UC1 )
|
||||
CV_Error( CV_StsUnsupportedFormat, "The mask must be 8-bit 1-channel image" );
|
||||
|
||||
range = MAX(range,1);
|
||||
range = MIN(range,100);
|
||||
|
||||
ecols = input_img->cols + 2;
|
||||
erows = input_img->rows + 2;
|
||||
|
||||
f = cvCreateMat(erows, ecols, CV_8UC1);
|
||||
t = cvCreateMat(erows, ecols, CV_32FC1);
|
||||
band = cvCreateMat(erows, ecols, CV_8UC1);
|
||||
mask = cvCreateMat(erows, ecols, CV_8UC1);
|
||||
el_cross = cvCreateStructuringElementEx(3,3,1,1,CV_SHAPE_CROSS,NULL);
|
||||
|
||||
cvCopy( input_img, output_img );
|
||||
cvSet(mask,cvScalar(KNOWN,0,0,0));
|
||||
COPY_MASK_BORDER1_C1(inpaint_mask,mask,uchar);
|
||||
SET_BORDER1_C1(mask,uchar,0);
|
||||
cvSet(f,cvScalar(KNOWN,0,0,0));
|
||||
cvSet(t,cvScalar(1.0e6f,0,0,0));
|
||||
cvDilate(mask,band,el_cross,1); // image with narrow band
|
||||
cvReleaseStructuringElement(&el_cross);
|
||||
Heap=new CvPriorityQueueFloat;
|
||||
if (!Heap->Init(band))
|
||||
return;
|
||||
cvSub(band,mask,band,NULL);
|
||||
SET_BORDER1_C1(band,uchar,0);
|
||||
if (!Heap->Add(band))
|
||||
return;
|
||||
cvSet(f,cvScalar(BAND,0,0,0),band);
|
||||
cvSet(f,cvScalar(INSIDE,0,0,0),mask);
|
||||
cvSet(t,cvScalar(0,0,0,0),band);
|
||||
|
||||
if( flags == CV_INPAINT_TELEA )
|
||||
{
|
||||
out = cvCreateMat(erows, ecols, CV_8UC1);
|
||||
el_range = cvCreateStructuringElementEx(2*range+1,2*range+1,
|
||||
range,range,CV_SHAPE_RECT,NULL);
|
||||
cvDilate(mask,out,el_range,1);
|
||||
cvReleaseStructuringElement(&el_range);
|
||||
cvSub(out,mask,out,NULL);
|
||||
Out=new CvPriorityQueueFloat;
|
||||
if (!Out->Init(out))
|
||||
return;
|
||||
if (!Out->Add(band))
|
||||
return;
|
||||
cvSub(out,band,out,NULL);
|
||||
SET_BORDER1_C1(out,uchar,0);
|
||||
icvCalcFMM(out,t,Out,true);
|
||||
icvTeleaInpaintFMM(mask,t,output_img,range,Heap);
|
||||
}
|
||||
else if (flags == CV_INPAINT_NS) {
|
||||
icvNSInpaintFMM(mask,t,output_img,range,Heap);
|
||||
} else {
|
||||
CV_Error( CV_StsBadArg, "The flags argument must be one of CV_INPAINT_TELEA or CV_INPAINT_NS" );
|
||||
}
|
||||
}
|
||||
|
||||
CvMat ToCvMat(const cv::Mat& m)
|
||||
{
|
||||
CV_DbgAssert(m.dims <= 2);
|
||||
CvMat dst = cvMat(m.rows, m.dims == 1 ? 1 : m.cols, m.type(), m.data);
|
||||
dst.step = (int)m.step[0];
|
||||
dst.type = (dst.type & ~cv::Mat::CONTINUOUS_FLAG) | (m.flags & cv::Mat::CONTINUOUS_FLAG);
|
||||
return dst;
|
||||
}
|
||||
|
||||
void _inpaint( InputArray _src, InputArray _mask, OutputArray _dst,
|
||||
double inpaintRange, int flags )
|
||||
{
|
||||
Mat src = _src.getMat(), mask = _mask.getMat();
|
||||
_dst.create( src.size(), src.type() );
|
||||
CvMat c_src = ToCvMat(src), c_mask = ToCvMat(mask), c_dst = ToCvMat(_dst.getMat());
|
||||
_cvInpaint( &c_src, &c_mask, &c_dst, inpaintRange, flags );
|
||||
}
|
||||
|
||||
//////////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
cv::Mat DepthInpainter::genValidMask(const cv::Mat& depth)
|
||||
{
|
||||
cv::Mat orgMask = (depth == 0);
|
||||
// cv::Mat mask = orgMask.clone();
|
||||
cv::Mat mask = orgMask;
|
||||
|
||||
cv::Mat kernel = cv::Mat::zeros(_kernelSize, _kernelSize, CV_8U);
|
||||
cv::circle(kernel, cv::Point(kernel.cols/2, kernel.rows/2), kernel.rows/2, cv::Scalar(255), -1);
|
||||
cv::erode(orgMask, mask, kernel);
|
||||
cv::dilate(mask, mask, kernel);
|
||||
|
||||
gSpeckleFilter.Compute(mask, 0, _maxInternalHoleToBeFilled, 1);
|
||||
|
||||
// revert mask
|
||||
mask = mask == 0;
|
||||
|
||||
return mask;
|
||||
}
|
||||
|
||||
void DepthInpainter::inpaint(const cv::Mat& depth, cv::Mat& out, const cv::Mat& mask)
|
||||
{
|
||||
cv::Mat newDepth;
|
||||
cv::Mat _mask = mask.empty() ? (depth == 0) : mask;
|
||||
if(depth.type() == CV_8U || depth.type() == CV_8UC3){
|
||||
cv::inpaint(depth, _mask, newDepth, _inpaintRadius, cv::INPAINT_TELEA);
|
||||
} else if(depth.type() == CV_16U){
|
||||
_inpaint(depth, _mask, newDepth, _inpaintRadius, cv::INPAINT_TELEA);
|
||||
}
|
||||
|
||||
if(mask.empty() && !_fillAll){
|
||||
// gen masked image
|
||||
cv::Mat mask = genValidMask(depth);
|
||||
out = cv::Mat::zeros(depth.size(), CV_16U);
|
||||
newDepth.copyTo(out, mask);
|
||||
} else {
|
||||
out = newDepth;
|
||||
}
|
||||
}
|
||||
#endif
|
||||
36
image_capture/third_party/percipio/common/DepthInpainter.hpp
vendored
Normal file
@@ -0,0 +1,36 @@
|
||||
#ifndef XYZ_INPAINTER_HPP_
|
||||
#define XYZ_INPAINTER_HPP_
|
||||
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include "ImageSpeckleFilter.hpp"
|
||||
|
||||
// NOTE: this legacy inpainting design is no longer supported by newer OpenCV versions; prefer the cv::inpaint API where possible.
|
||||
|
||||
|
||||
class DepthInpainter
|
||||
{
|
||||
public:
|
||||
int _kernelSize;
|
||||
int _maxInternalHoleToBeFilled;
|
||||
double _inpaintRadius;
|
||||
bool _fillAll;
|
||||
|
||||
|
||||
DepthInpainter()
|
||||
: _kernelSize(5)
|
||||
, _maxInternalHoleToBeFilled(50)
|
||||
, _inpaintRadius(1)
|
||||
, _fillAll(true)
|
||||
{
|
||||
}
|
||||
|
||||
void inpaint(const cv::Mat& inputDepth, cv::Mat& out, const cv::Mat& mask);
|
||||
|
||||
private:
|
||||
cv::Mat genValidMask(const cv::Mat& depth);
|
||||
};
|
||||
|
||||
#endif
|
||||
#endif
|
||||
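A usage sketch for the DepthInpainter declared above, assuming the build defines OPENCV_DEPENDENCIES and links ImageSpeckleFilter.cpp (genValidMask() uses the global gSpeckleFilter). The file names and parameter values are illustrative:

```cpp
#include <opencv2/opencv.hpp>
#include "DepthInpainter.hpp"

int main()
{
    // 16-bit depth map; IMREAD_UNCHANGED preserves CV_16U
    cv::Mat depth = cv::imread("depth_16u.png", cv::IMREAD_UNCHANGED);
    if (depth.empty() || depth.type() != CV_16U) {
        return -1;
    }

    DepthInpainter inpainter;
    inpainter._kernelSize = 5;                  // erosion/dilation kernel used by genValidMask()
    inpainter._maxInternalHoleToBeFilled = 50;  // speckle size threshold for holes worth filling
    inpainter._inpaintRadius = 3;               // Telea inpainting neighbourhood radius
    inpainter._fillAll = false;                 // keep only holes accepted by genValidMask()

    cv::Mat filled;
    inpainter.inpaint(depth, filled, cv::Mat()); // empty mask => treat depth == 0 as holes
    cv::imwrite("depth_filled.png", filled);
    return 0;
}
```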
249
image_capture/third_party/percipio/common/DepthRender.hpp
vendored
Normal file
@@ -0,0 +1,249 @@
|
||||
#ifndef PERCIPIO_SAMPLE_COMMON_DEPTH_RENDER_HPP_
|
||||
#define PERCIPIO_SAMPLE_COMMON_DEPTH_RENDER_HPP_
|
||||
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
#include <opencv2/opencv.hpp>
|
||||
#ifndef CV_VERSION_EPOCH
|
||||
#if defined (CV_MAJOR_VERSION) && (CV_VERSION_MAJOR == 4)
|
||||
#include <opencv2/imgproc/types_c.h>
|
||||
#include <opencv2/imgcodecs/legacy/constants_c.h>
|
||||
#endif
|
||||
#endif
|
||||
#include <map>
|
||||
#include <vector>
|
||||
|
||||
|
||||
class DepthRender {
|
||||
public:
|
||||
enum OutputColorType {
|
||||
COLORTYPE_RAINBOW = 0,
|
||||
COLORTYPE_BLUERED = 1,
|
||||
COLORTYPE_GRAY = 2
|
||||
};
|
||||
|
||||
enum ColorRangeMode {
|
||||
COLOR_RANGE_ABS = 0,
|
||||
COLOR_RANGE_DYNAMIC = 1
|
||||
};
|
||||
|
||||
DepthRender() : needResetColorTable(true)
|
||||
, color_type(COLORTYPE_BLUERED)
|
||||
, range_mode(COLOR_RANGE_DYNAMIC)
|
||||
, min_distance(0)
|
||||
, max_distance(0)
|
||||
, invalid_label(0)
|
||||
{}
|
||||
|
||||
void SetColorType( OutputColorType ct = COLORTYPE_BLUERED ){
|
||||
if(ct != color_type){
|
||||
needResetColorTable = true;
|
||||
color_type = ct;
|
||||
}
|
||||
}
|
||||
|
||||
void SetRangeMode( ColorRangeMode rm = COLOR_RANGE_DYNAMIC ){
|
||||
if(range_mode != rm){
|
||||
needResetColorTable = true;
|
||||
range_mode = rm;
|
||||
}
|
||||
}
|
||||
|
||||
/// for abs mode
|
||||
void SetColorRange(int minDis, int maxDis){
|
||||
min_distance = minDis;
|
||||
max_distance = maxDis;
|
||||
}
|
||||
|
||||
/// input 16UC1 output 8UC3
|
||||
void Compute(const cv::Mat &src, cv::Mat& dst ){
|
||||
dst = Compute(src);
|
||||
}
|
||||
cv::Mat Compute(const cv::Mat &src){
|
||||
cv::Mat src16U;
|
||||
if(src.type() != CV_16U){
|
||||
src.convertTo(src16U, CV_16U);
|
||||
}else{
|
||||
src16U = src;
|
||||
}
|
||||
|
||||
if(needResetColorTable){
|
||||
BuildColorTable();
|
||||
needResetColorTable = false;
|
||||
}
|
||||
|
||||
cv::Mat dst;
|
||||
filtered_mask = (src16U == invalid_label);
|
||||
clr_disp = src16U.clone();
|
||||
if(COLOR_RANGE_ABS == range_mode) {
|
||||
TruncValue(clr_disp, filtered_mask, min_distance, max_distance);
|
||||
clr_disp -= min_distance;
|
||||
clr_disp = clr_disp * 255 / (max_distance - min_distance);
|
||||
clr_disp.convertTo(clr_disp, CV_8UC1);
|
||||
} else {
|
||||
unsigned short vmax, vmin;
|
||||
HistAdjustRange(clr_disp, invalid_label, min_distance, vmin, vmax);
|
||||
clr_disp = (clr_disp - vmin) * 255 / (vmax - vmin);
|
||||
//clr_disp = 255 - clr_disp;
|
||||
clr_disp.convertTo(clr_disp, CV_8UC1);
|
||||
}
|
||||
|
||||
switch (color_type) {
|
||||
case COLORTYPE_GRAY:
|
||||
clr_disp = 255 - clr_disp;
|
||||
cv::cvtColor(clr_disp, dst, cv::COLOR_GRAY2BGR);
|
||||
break;
|
||||
case COLORTYPE_BLUERED:
|
||||
//temp = 255 - clr_disp;
|
||||
CalcColorMap(clr_disp, dst);
|
||||
//cv::applyColorMap(temp, color_img, cv::COLORMAP_COOL);
|
||||
break;
|
||||
case COLORTYPE_RAINBOW:
|
||||
//cv::cvtColor(color_img, color_img, CV_GRAY2BGR);
|
||||
cv::applyColorMap(clr_disp, dst, cv::COLORMAP_RAINBOW);
|
||||
break;
|
||||
}
|
||||
ClearInvalidArea(dst, filtered_mask);
|
||||
|
||||
return dst;
|
||||
}
|
||||
|
||||
private:
|
||||
void CalcColorMap(const cv::Mat &src, cv::Mat &dst){
|
||||
std::vector<cv::Scalar> &table = _color_lookup_table;
|
||||
assert(table.size() == 256);
|
||||
assert(!src.empty());
|
||||
assert(src.type() == CV_8UC1);
|
||||
dst.create(src.size(), CV_8UC3);
|
||||
const unsigned char* sptr = src.ptr<unsigned char>();
|
||||
unsigned char* dptr = dst.ptr<unsigned char>();
|
||||
for (int i = src.size().area(); i != 0; i--) {
|
||||
cv::Scalar &v = table[*sptr];
|
||||
dptr[0] = (unsigned char)v.val[0];
|
||||
dptr[1] = (unsigned char)v.val[1];
|
||||
dptr[2] = (unsigned char)v.val[2];
|
||||
dptr += 3;
|
||||
sptr += 1;
|
||||
}
|
||||
}
|
||||
void BuildColorTable(){
|
||||
_color_lookup_table.resize(256);
|
||||
cv::Scalar from(50, 0, 0xff), to(50, 200, 255);
|
||||
for (int i = 0; i < 128; i++) {
|
||||
float a = (float)i / 128;
|
||||
cv::Scalar &v = _color_lookup_table[i];
|
||||
for (int j = 0; j < 3; j++) {
|
||||
v.val[j] = from.val[j] * (1 - a) + to.val[j] * a;
|
||||
}
|
||||
}
|
||||
from = to;
|
||||
to = cv::Scalar(255, 104, 0);
|
||||
for (int i = 128; i < 256; i++) {
|
||||
float a = (float)(i - 128) / 128;
|
||||
cv::Scalar &v = _color_lookup_table[i];
|
||||
for (int j = 0; j < 3; j++) {
|
||||
v.val[j] = from.val[j] * (1 - a) + to.val[j] * a;
|
||||
}
|
||||
}
|
||||
}
|
||||
//keep value in range
|
||||
// clr_disp is CV_16UC1, so iterate as unsigned short; the original asserted
// CV_16SC1, which never matches the data this is actually called with
void TruncValue(cv::Mat &img, cv::Mat &mask, ushort min_val, ushort max_val){
assert(max_val >= min_val);
assert(img.type() == CV_16UC1);
assert(mask.type() == CV_8UC1);
ushort* ptr = img.ptr<ushort>();
|
||||
unsigned char* mask_ptr = mask.ptr<unsigned char>();
|
||||
for (int i = img.size().area(); i != 0; i--) {
|
||||
if (*ptr > max_val) {
|
||||
*ptr = max_val;
|
||||
*mask_ptr = 0xff;
|
||||
} else if (*ptr < min_val) {
|
||||
*ptr = min_val;
|
||||
*mask_ptr = 0xff;
|
||||
}
|
||||
ptr++;
|
||||
mask_ptr++;
|
||||
}
|
||||
}
|
||||
void ClearInvalidArea(cv::Mat &clr_disp, cv::Mat &filtered_mask){
|
||||
assert(clr_disp.type() == CV_8UC3);
|
||||
assert(filtered_mask.type() == CV_8UC1);
|
||||
assert(clr_disp.size().area() == filtered_mask.size().area());
|
||||
unsigned char* filter_ptr = filtered_mask.ptr<unsigned char>();
|
||||
unsigned char* ptr = clr_disp.ptr<unsigned char>();
|
||||
int len = clr_disp.size().area();
|
||||
for (int i = 0; i < len; i++) {
|
||||
if (*filter_ptr != 0) {
|
||||
ptr[0] = ptr[1] = ptr[2] = 0;
|
||||
}
|
||||
filter_ptr++;
|
||||
ptr += 3;
|
||||
}
|
||||
}
|
||||
void HistAdjustRange(const cv::Mat &dist, ushort invalid, int min_display_distance_range
|
||||
, ushort &min_val, ushort &max_val) {
|
||||
std::map<ushort, int> hist;
|
||||
int sz = dist.size().area();
|
||||
const ushort* ptr = dist.ptr < ushort>();
|
||||
int total_num = 0;
|
||||
for (int idx = sz; idx != 0; idx--, ptr++) {
|
||||
if (invalid == *ptr) {
|
||||
continue;
|
||||
}
|
||||
total_num++;
|
||||
if (hist.find(*ptr) != hist.end()) {
|
||||
hist[*ptr]++;
|
||||
} else {
|
||||
hist.insert(std::make_pair(*ptr, 1));
|
||||
}
|
||||
}
|
||||
if (hist.empty()) {
|
||||
min_val = 0;
|
||||
max_val = 2000;
|
||||
return;
|
||||
}
|
||||
const int delta = total_num * 0.01;
|
||||
int sum = 0;
|
||||
min_val = hist.begin()->first;
|
||||
for (std::map<ushort, int>::iterator it = hist.begin(); it != hist.end();it++){
|
||||
sum += it->second;
|
||||
if (sum > delta) {
|
||||
min_val = it->first;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
sum = 0;
|
||||
max_val = hist.rbegin()->first;
|
||||
for (std::map<ushort, int>::reverse_iterator s = hist.rbegin()
|
||||
; s != hist.rend(); s++) {
|
||||
sum += s->second;
|
||||
if (sum > delta) {
|
||||
max_val = s->first;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
const int min_display_dist = min_display_distance_range;
|
||||
if (max_val - min_val < min_display_dist) {
|
||||
int m = (max_val + min_val) / 2;
// clamp in int first: min_val is unsigned, so checking it against 0 afterwards could never fire
int lo = m - min_display_dist / 2;
if (lo < 0) {
lo = 0;
}
min_val = (ushort)lo;
max_val = (ushort)(m + min_display_dist / 2);
|
||||
}
|
||||
}
|
||||
|
||||
bool needResetColorTable;
|
||||
OutputColorType color_type;
|
||||
ColorRangeMode range_mode;
|
||||
int min_distance;
|
||||
int max_distance;
|
||||
uint16_t invalid_label;
|
||||
cv::Mat clr_disp ;
|
||||
cv::Mat filtered_mask;
|
||||
std::vector<cv::Scalar> _color_lookup_table;
|
||||
};
|
||||
|
||||
#endif
|
||||
#endif
|
||||
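A usage sketch for the DepthRender above (OPENCV_DEPENDENCIES assumed; the input file name is illustrative). The default COLOR_RANGE_DYNAMIC mode stretches roughly the 1st to 99th percentile of valid depths via HistAdjustRange() before applying the colour map:

```cpp
#include <opencv2/opencv.hpp>
#include "DepthRender.hpp"

int main()
{
    cv::Mat depth = cv::imread("depth_16u.png", cv::IMREAD_UNCHANGED);  // CV_16U depth map
    if (depth.empty()) {
        return -1;
    }

    DepthRender render;
    render.SetColorType(DepthRender::COLORTYPE_RAINBOW);  // or COLORTYPE_GRAY / COLORTYPE_BLUERED

    cv::Mat colored = render.Compute(depth);  // CV_16UC1 in, CV_8UC3 out; invalid (0) pixels stay black
    cv::imshow("rendered depth", colored);
    cv::waitKey(0);
    return 0;
}
```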
120
image_capture/third_party/percipio/common/ImageSpeckleFilter.cpp
vendored
Normal file
@@ -0,0 +1,120 @@
|
||||
|
||||
#include "ImageSpeckleFilter.hpp"
|
||||
#include <stdio.h>
|
||||
#include <stdexcept>
#include <cstring> // memset
|
||||
|
||||
#ifdef WIN32
|
||||
#include <stdint.h>
|
||||
#endif
|
||||
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
struct Point2s {
|
||||
Point2s(short _x, short _y) {
|
||||
x = _x;
|
||||
y = _y;
|
||||
}
|
||||
short x, y;
|
||||
};
|
||||
|
||||
template <typename T>
|
||||
void filterSpecklesImpl(cv::Mat& img, int newVal, int maxSpeckleSize, int maxDiff, std::vector<char> &_buf) {
|
||||
int width = img.cols, height = img.rows;
|
||||
int npixels = width * height;//number of pixels
|
||||
size_t bufSize = npixels * (int)(sizeof(Point2s) + sizeof(int) + sizeof(uint8_t));//all pixel buffer
|
||||
if (_buf.size() < bufSize) {
|
||||
_buf.resize((int)bufSize);
|
||||
}
|
||||
|
||||
uint8_t* buf = (uint8_t*)(&_buf[0]);
|
||||
int i, j, dstep = img.cols;//(int)(img.step / sizeof(T));
|
||||
int* labels = (int*)buf;
|
||||
buf += npixels * sizeof(labels[0]);
|
||||
Point2s* wbuf = (Point2s*)buf;
|
||||
buf += npixels * sizeof(wbuf[0]);
|
||||
uint8_t* rtype = (uint8_t*)buf;
|
||||
int curlabel = 0;
|
||||
|
||||
// clear out label assignments
|
||||
memset(labels, 0, npixels * sizeof(labels[0]));
|
||||
|
||||
for (i = 0; i < height; i++) {
|
||||
T* ds = img.ptr<T>(i);
|
||||
int* ls = labels + width * i;//label ptr for a row
|
||||
|
||||
for (j = 0; j < width; j++) {
|
||||
if (ds[j] != newVal) { // not a bad disparity
|
||||
if (ls[j]) { // has a label, check for bad label
|
||||
if (rtype[ls[j]]) // small region, zero out disparity
|
||||
ds[j] = (T)newVal;
|
||||
}
|
||||
// no label, assign and propagate
|
||||
else {
|
||||
Point2s* ws = wbuf; // initialize wavefront
|
||||
Point2s p((short)j, (short)i); // current pixel
|
||||
curlabel++; // next label
|
||||
int count = 0; // current region size
|
||||
ls[j] = curlabel;
|
||||
|
||||
// wavefront propagation
|
||||
while (ws >= wbuf) { // wavefront not empty
|
||||
count++;
|
||||
// put neighbors onto wavefront
|
||||
T* dpp = &img.ptr<T>(p.y)[p.x];
|
||||
T dp = *dpp;
|
||||
int* lpp = labels + width * p.y + p.x;
|
||||
|
||||
if (p.x < width - 1 && !lpp[+1] && dpp[+1] != newVal && std::abs(dp - dpp[+1]) <= maxDiff) {
|
||||
lpp[+1] = curlabel;
|
||||
*ws++ = Point2s(p.x + 1, p.y);
|
||||
}
|
||||
|
||||
if (p.x > 0 && !lpp[-1] && dpp[-1] != newVal && std::abs(dp - dpp[-1]) <= maxDiff) {
|
||||
lpp[-1] = curlabel;
|
||||
*ws++ = Point2s(p.x - 1, p.y);
|
||||
}
|
||||
|
||||
if (p.y < height - 1 && !lpp[+width] && dpp[+dstep] != newVal && std::abs(dp - dpp[+dstep]) <= maxDiff) {
|
||||
lpp[+width] = curlabel;
|
||||
*ws++ = Point2s(p.x, p.y + 1);
|
||||
}
|
||||
|
||||
if (p.y > 0 && !lpp[-width] && dpp[-dstep] != newVal && std::abs(dp - dpp[-dstep]) <= maxDiff) {
|
||||
lpp[-width] = curlabel;
|
||||
*ws++ = Point2s(p.x, p.y - 1);
|
||||
}
|
||||
|
||||
// pop most recent and propagate
|
||||
// NB: could try least recent, maybe better convergence
|
||||
p = *--ws;
|
||||
}
|
||||
|
||||
// assign label type
|
||||
if (count <= maxSpeckleSize) { // speckle region
|
||||
rtype[ls[j]] = 1; // small region label
|
||||
ds[j] = (T)newVal;
|
||||
} else
|
||||
rtype[ls[j]] = 0; // large region label
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
ImageSpeckleFilter gSpeckleFilter;
|
||||
|
||||
void ImageSpeckleFilter::Compute(cv::Mat &image, int newVal, int maxSpeckleSize, int maxDiff)
|
||||
{
|
||||
if(image.type() == CV_8U){
|
||||
filterSpecklesImpl<uint8_t>(image, newVal, maxSpeckleSize, maxDiff, _labelBuf);
|
||||
} else if(image.type() == CV_16U){
|
||||
filterSpecklesImpl<uint16_t>(image, newVal, maxSpeckleSize, maxDiff, _labelBuf);
|
||||
} else {
|
||||
char sz[10];
|
||||
sprintf(sz, "%d", image.type());
|
||||
throw std::runtime_error(std::string("ImageSpeckleFilter only support 8u and 16u, not ") + sz);
|
||||
}
|
||||
}
|
||||
|
||||
#endif
|
||||
22
image_capture/third_party/percipio/common/ImageSpeckleFilter.hpp
vendored
Normal file
@@ -0,0 +1,22 @@
|
||||
#ifndef XYZ_IMAGE_SPECKLE_FILTER_HPP_
|
||||
#define XYZ_IMAGE_SPECKLE_FILTER_HPP_
|
||||
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
#include <vector>
|
||||
#include <opencv2/opencv.hpp>
|
||||
|
||||
|
||||
class ImageSpeckleFilter
|
||||
{
|
||||
public:
|
||||
void Compute(cv::Mat &image, int newVal = 0, int maxSpeckleSize = 50, int maxDiff = 6);
|
||||
|
||||
private:
|
||||
std::vector<char> _labelBuf;
|
||||
};
|
||||
|
||||
extern ImageSpeckleFilter gSpeckleFilter;
|
||||
|
||||
#endif
|
||||
|
||||
#endif
|
||||
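A usage sketch for the global gSpeckleFilter declared above (OPENCV_DEPENDENCIES assumed); the parameter values are illustrative:

```cpp
#include <opencv2/opencv.hpp>
#include "ImageSpeckleFilter.hpp"

// Removes small connected regions from a 16-bit depth map in place: regions of
// at most maxSpeckleSize pixels whose neighbouring values differ by no more than
// maxDiff are reset to newVal (0 = "no data").
void removeSpeckles(cv::Mat& depth16u)
{
    gSpeckleFilter.Compute(depth16u, /*newVal=*/0, /*maxSpeckleSize=*/100, /*maxDiff=*/6);
}
```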
95
image_capture/third_party/percipio/common/MatViewer.cpp
vendored
Normal file
@@ -0,0 +1,95 @@
|
||||
#include <stdint.h>
|
||||
#include <stdio.h>
|
||||
#include "MatViewer.hpp"
|
||||
|
||||
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
int GraphicItem::globalID = 0;
|
||||
|
||||
|
||||
void OpencvViewer::_onMouseCallback(int event, int x, int y, int /*flags*/, void* ustc)
|
||||
{
|
||||
OpencvViewer* p = (OpencvViewer*)ustc;
|
||||
|
||||
// NOTE: This callback will be called very frequently while mouse moving,
|
||||
// keep it simple
|
||||
|
||||
bool repaint = false;
|
||||
p->onMouseCallback(p->_orgImg, event, cv::Point(x,y), repaint);
|
||||
if(repaint){
|
||||
p->showImage();
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
void OpencvViewer::showImage()
|
||||
{
|
||||
_showImg = _orgImg.clone();
|
||||
for(std::map<int, GraphicItem*>::iterator it = _items.begin()
|
||||
; it != _items.end(); it++){
|
||||
it->second->draw(_showImg);
|
||||
}
|
||||
cv::imshow(_win.c_str(), _showImg);
|
||||
cv::setMouseCallback(_win, _onMouseCallback, this);
|
||||
}
|
||||
|
||||
///////////////////////////// DepthViewer ///////////////////////////////////////
|
||||
|
||||
|
||||
DepthViewer::DepthViewer(const std::string& win)
|
||||
: OpencvViewer(win)
|
||||
, _centerDepthItem(std::string(), cv::Point(0,20), 0.5, cv::Scalar(0,255,0), 2)
|
||||
, _pickedDepthItem(std::string(), cv::Point(0,40), 0.5, cv::Scalar(0,255,0), 2)
|
||||
{
|
||||
OpencvViewer::addGraphicItem(&_centerDepthItem);
|
||||
OpencvViewer::addGraphicItem(&_pickedDepthItem);
|
||||
depth_scale_unit = 1.f;
|
||||
}
|
||||
|
||||
|
||||
void DepthViewer::show(const cv::Mat& img)
|
||||
{
|
||||
if(img.type() != CV_16U || img.total() == 0){
|
||||
return;
|
||||
}
|
||||
|
||||
char str[128];
|
||||
float val = img.at<uint16_t>(img.rows / 2, img.cols / 2)*depth_scale_unit;
|
||||
sprintf(str, "Depth at center: %.1f", val);
|
||||
_centerDepthItem.set(str);
|
||||
|
||||
val = img.at<uint16_t>(_fixLoc.y, _fixLoc.x)*depth_scale_unit;
|
||||
sprintf(str, "Depth at (%d,%d): %.1f", _fixLoc.x, _fixLoc.y , val);
|
||||
_pickedDepthItem.set(str);
|
||||
|
||||
_depth = img.clone();
|
||||
_renderedDepth = _render.Compute(img);
|
||||
OpencvViewer::show(_renderedDepth);
|
||||
}
|
||||
|
||||
|
||||
void DepthViewer::onMouseCallback(cv::Mat& img, int event, const cv::Point pnt
|
||||
, bool& repaint)
|
||||
{
|
||||
repaint = false;
|
||||
switch(event){
|
||||
case cv::EVENT_LBUTTONDOWN: {
|
||||
_fixLoc = pnt;
|
||||
char str[64];
|
||||
float val = _depth.at<uint16_t>(pnt.y, pnt.x)*depth_scale_unit;
|
||||
sprintf(str, "Depth at (%d,%d): %.1f", pnt.x, pnt.y, val);
|
||||
printf(">>>>>>>>>>>>>>>> depth(%.1f)\n", val);
|
||||
_pickedDepthItem.set(str);
|
||||
repaint = true;
|
||||
break;
|
||||
}
|
||||
case cv::EVENT_MOUSEMOVE:
|
||||
// uint16_t val = _img.at<uint16_t>(pnt.y, pnt.x);
|
||||
// char str[32];
|
||||
// sprintf(str, "Depth at mouse: %d", val);
|
||||
// drawText(img, str, cv::Point(0,60), 0.5, cv::Scalar(0,255,0), 2);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
#endif
|
||||
144
image_capture/third_party/percipio/common/MatViewer.hpp
vendored
Normal file
@@ -0,0 +1,144 @@
|
||||
#ifndef XYZ_MAT_VIEWER_HPP_
|
||||
#define XYZ_MAT_VIEWER_HPP_
|
||||
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include <string>
|
||||
#include "DepthRender.hpp"
|
||||
|
||||
|
||||
class GraphicItem
|
||||
{
|
||||
public:
|
||||
GraphicItem(
|
||||
const cv::Scalar& color = cv::Scalar(255,255,255)
|
||||
)
|
||||
: _id(++globalID), _color(color) {}
|
||||
virtual ~GraphicItem() {}
|
||||
|
||||
int id() const { return _id; }
|
||||
|
||||
cv::Scalar color() const { return _color; }
|
||||
void setColor(const cv::Scalar& color) { _color = color; }
|
||||
|
||||
virtual void draw(cv::Mat& img) = 0;
|
||||
|
||||
protected:
|
||||
int _id;
|
||||
cv::Scalar _color;
|
||||
|
||||
private:
|
||||
static int globalID;
|
||||
|
||||
};
|
||||
|
||||
class GraphicRectangleItem : public GraphicItem
|
||||
{
|
||||
public:
|
||||
cv::Rect _rect;
|
||||
|
||||
GraphicRectangleItem(
|
||||
const cv::Scalar& color = cv::Scalar(255,255,255),
|
||||
const cv::Rect& rect = cv::Rect()
|
||||
)
|
||||
: GraphicItem(color), _rect(rect) {}
|
||||
virtual ~GraphicRectangleItem() {}
|
||||
void set(const cv::Rect& rect) { _rect = rect; }
|
||||
virtual void draw(cv::Mat& img){ cv::rectangle(img, _rect, color()); }
|
||||
};
|
||||
|
||||
class GraphicStringItem : public GraphicItem
|
||||
{
|
||||
public:
|
||||
std::string _str;
|
||||
cv::Point _loc;
|
||||
double _scale;
|
||||
int _thick;
|
||||
|
||||
GraphicStringItem(
|
||||
const std::string& str = std::string(),
|
||||
const cv::Point loc = cv::Point(),
|
||||
double scale = 0,
|
||||
const cv::Scalar& color = cv::Scalar(),
|
||||
int thick = 0
|
||||
)
|
||||
: GraphicItem(color), _str(str), _loc(loc), _scale(scale), _thick(thick) {}
|
||||
virtual ~GraphicStringItem() {}
|
||||
void set(const std::string& str) { _str = str; }
|
||||
virtual void draw(cv::Mat& img){
|
||||
cv::putText(img, _str, _loc, cv::FONT_HERSHEY_SIMPLEX, _scale, _color, _thick);
|
||||
}
|
||||
};
|
||||
|
||||
|
||||
class OpencvViewer
|
||||
{
|
||||
public:
|
||||
OpencvViewer(const std::string& win)
|
||||
: _win(win)
|
||||
{
|
||||
_has_win = 0;
|
||||
//cv::namedWindow(_win);
|
||||
//cv::setMouseCallback(_win, _onMouseCallback, this);
|
||||
}
|
||||
~OpencvViewer()
|
||||
{
|
||||
if (_has_win)
|
||||
{
|
||||
//cv::setMouseCallback(_win, NULL, NULL);
|
||||
cv::destroyWindow(_win);
|
||||
}
|
||||
}
|
||||
|
||||
const std::string& name() const {return _win;}
|
||||
|
||||
virtual void show(const cv::Mat& img)
|
||||
{
|
||||
_has_win = 1;
|
||||
_orgImg = img.clone();
|
||||
showImage();
|
||||
}
|
||||
virtual void onMouseCallback(cv::Mat& /*img*/, int /*event*/, const cv::Point /*pnt*/
|
||||
, bool& repaint) {repaint = false;}
|
||||
|
||||
void addGraphicItem(GraphicItem* item) {
|
||||
_items.insert(std::make_pair(item->id(), item));}
|
||||
void delGraphicItem(GraphicItem* item) { _items.erase(item->id()); }
|
||||
|
||||
private:
|
||||
static void _onMouseCallback(int event, int x, int y, int flags, void* ustc);
|
||||
|
||||
void showImage();
|
||||
|
||||
cv::Mat _orgImg;
|
||||
cv::Mat _showImg;
|
||||
int _has_win;
|
||||
std::string _win;
|
||||
std::map<int, GraphicItem*> _items;
|
||||
};
|
||||
|
||||
//////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
class DepthViewer : public OpencvViewer
|
||||
{
|
||||
public:
|
||||
DepthViewer(const std::string& win);
|
||||
virtual void show(const cv::Mat& depthImage);
|
||||
virtual void onMouseCallback(cv::Mat& img, int event, const cv::Point pnt
|
||||
, bool& repaint);
|
||||
|
||||
|
||||
float depth_scale_unit;
|
||||
private:
|
||||
cv::Mat _depth;
|
||||
cv::Mat _renderedDepth;
|
||||
DepthRender _render;
|
||||
GraphicStringItem _centerDepthItem;
|
||||
GraphicStringItem _pickedDepthItem;
|
||||
cv::Point _fixLoc;
|
||||
};
|
||||
|
||||
|
||||
#endif
|
||||
#endif
|
||||
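A usage sketch for the DepthViewer declared above (OPENCV_DEPENDENCIES assumed): each 16-bit frame is rendered through DepthRender, and a left click prints the depth under the cursor. The frame source and scale value are illustrative:

```cpp
#include <opencv2/opencv.hpp>
#include "MatViewer.hpp"

void showDepthFrame(const cv::Mat& depth16u, float scaleUnit)
{
    static DepthViewer viewer("depth");   // one window, reused across frames
    viewer.depth_scale_unit = scaleUnit;  // raw-value-to-unit factor reported by the camera
    viewer.show(depth16u);                // silently ignores frames that are not CV_16U
    cv::waitKey(1);                       // let HighGUI deliver mouse events
}
```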
198
image_capture/third_party/percipio/common/ParametersParse.cpp
vendored
Normal file
@@ -0,0 +1,198 @@
|
||||
|
||||
#include "ParametersParse.h"
|
||||
#include "json11.hpp"
|
||||
|
||||
using namespace json11;
|
||||
|
||||
TY_STATUS write_int_feature(const TY_DEV_HANDLE hDevice, TY_COMPONENT_ID comp, TY_FEATURE_ID feat, const Json& value)
|
||||
{
|
||||
if(value.is_number())
|
||||
return TYSetInt(hDevice, comp, feat, static_cast<int>(value.number_value()));
|
||||
else
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
TY_STATUS write_float_feature(const TY_DEV_HANDLE hDevice, TY_COMPONENT_ID comp, TY_FEATURE_ID feat, const Json& value)
|
||||
{
|
||||
if(value.is_number())
|
||||
return TYSetFloat(hDevice, comp, feat, static_cast<float>(value.number_value()));
|
||||
else
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
TY_STATUS write_enum_feature(const TY_DEV_HANDLE hDevice, TY_COMPONENT_ID comp, TY_FEATURE_ID feat, const Json& value)
|
||||
{
|
||||
if(value.is_number())
|
||||
return TYSetEnum(hDevice, comp, feat, static_cast<uint32_t>(value.number_value()));
|
||||
else
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
TY_STATUS write_bool_feature(const TY_DEV_HANDLE hDevice, TY_COMPONENT_ID comp, TY_FEATURE_ID feat, const Json& value)
|
||||
{
|
||||
if(value.is_bool())
|
||||
return TYSetBool(hDevice, comp, feat, value.bool_value());
|
||||
else
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
bool json_parse_array(const Json& value, std::vector<char>& buff)
|
||||
{
|
||||
buff.clear();
|
||||
if(value.is_array()) {
|
||||
size_t size = value.array_items().size();
|
||||
buff.resize(size);
|
||||
for(size_t i = 0; i < size; i++)
|
||||
buff[i] = static_cast<char>(value[i].number_value());
|
||||
return true;
|
||||
} else {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
TY_STATUS write_string_feature(const TY_DEV_HANDLE hDevice, TY_COMPONENT_ID comp, TY_FEATURE_ID feat, const Json& value)
|
||||
{
|
||||
std::vector<char> buff(0);
|
||||
if(json_parse_array(value, buff)) {
|
||||
buff.push_back(0);
|
||||
return TYSetString(hDevice, comp, feat, &buff[0]);
|
||||
} else {
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
}
|
||||
|
||||
TY_STATUS write_bytearray_feature(const TY_DEV_HANDLE hDevice, TY_COMPONENT_ID comp, TY_FEATURE_ID feat, const Json& value)
|
||||
{
|
||||
std::vector<char> buff(0);
|
||||
if(json_parse_array(value, buff)) {
|
||||
return TYSetByteArray(hDevice, comp, feat, (uint8_t*)(&buff[0]), buff.size());
|
||||
} else {
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
}
|
||||
|
||||
TY_STATUS write_struct_feature(const TY_DEV_HANDLE hDevice, TY_COMPONENT_ID comp, TY_FEATURE_ID feat, const Json& value)
|
||||
{
|
||||
std::vector<char> buff(0);
|
||||
if(json_parse_array(value, buff)) {
|
||||
return TYSetStruct(hDevice, comp, feat, (void*)(&buff[0]), buff.size());
|
||||
} else {
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
TY_STATUS device_write_feature(const TY_DEV_HANDLE hDevice, TY_COMPONENT_ID comp, TY_FEATURE_ID feat, const Json& value)
|
||||
{
|
||||
TY_STATUS status = TY_STATUS_OK;
|
||||
TY_FEATURE_TYPE type = TYFeatureType(feat);
|
||||
switch (type)
|
||||
{
|
||||
case TY_FEATURE_INT:
|
||||
status = write_int_feature(hDevice, comp, feat, value);
|
||||
break;
|
||||
case TY_FEATURE_FLOAT:
|
||||
status = write_float_feature(hDevice, comp, feat, value);
|
||||
break;
|
||||
case TY_FEATURE_ENUM:
|
||||
status = write_enum_feature(hDevice, comp, feat, value);
|
||||
break;
|
||||
case TY_FEATURE_BOOL:
|
||||
status = write_bool_feature(hDevice, comp, feat, value);
|
||||
break;
|
||||
case TY_FEATURE_STRING:
|
||||
status = write_string_feature(hDevice, comp, feat, value);
|
||||
break;
|
||||
case TY_FEATURE_BYTEARRAY:
|
||||
status = write_bytearray_feature(hDevice, comp, feat, value);
|
||||
break;
|
||||
case TY_FEATURE_STRUCT:
|
||||
status = write_struct_feature(hDevice, comp, feat, value);
|
||||
break;
|
||||
default:
|
||||
status = TY_STATUS_INVALID_FEATURE;
|
||||
break;
|
||||
}
|
||||
return status;
|
||||
}
|
||||
|
||||
struct DevParam
|
||||
{
|
||||
TY_COMPONENT_ID compID;
|
||||
TY_FEATURE_ID featID;
|
||||
Json feat_value;
|
||||
};
|
||||
|
||||
bool isValidJsonString(const char* code)
|
||||
{
|
||||
std::string err;
|
||||
const auto json = Json::parse(code, err);
|
||||
if(json.is_null()) return false;
|
||||
return true;
|
||||
}
|
||||
|
||||
bool json_parse(const TY_DEV_HANDLE hDevice, const char* jscode)
|
||||
{
|
||||
std::string err;
|
||||
const auto json = Json::parse(jscode, err);
|
||||
|
||||
Json components = json["component"];
|
||||
if(components.is_array()) {
|
||||
std::vector<DevParam> param_list(0);
|
||||
for (auto &k : components.array_items()) {
|
||||
const Json& comp_id = k["id"];
|
||||
const Json& comp_desc = k["desc"];
|
||||
const Json& features = k["feature"];
|
||||
|
||||
if(!comp_id.is_string()) continue;
|
||||
if(!comp_desc.is_string()) continue;
|
||||
if(!features.is_array()) continue;
|
||||
|
||||
const char* comp_desc_str = comp_desc.string_value().c_str();
|
||||
const char* comp_id_str = comp_id.string_value().c_str();
|
||||
|
||||
TY_COMPONENT_ID m_comp_id;
|
||||
sscanf(comp_id_str,"%x",&m_comp_id);
|
||||
|
||||
for (auto &f : features.array_items()) {
|
||||
const Json& feat_name = f["name"];
|
||||
const Json& feat_id = f["id"];
|
||||
const Json& feat_value = f["value"];
|
||||
|
||||
if(!feat_id.is_string()) continue;
|
||||
if(!feat_name.is_string()) continue;
|
||||
|
||||
const char* feat_name_str = feat_name.string_value().c_str();
|
||||
const char* feat_id_str = feat_id.string_value().c_str();
|
||||
|
||||
TY_FEATURE_ID m_feat_id;
|
||||
sscanf(feat_id_str,"%x",&m_feat_id);
|
||||
|
||||
param_list.push_back({m_comp_id, m_feat_id, feat_value});
|
||||
}
|
||||
}
|
||||
|
||||
while(1)
|
||||
{
|
||||
size_t cnt = param_list.size();
|
||||
for(auto it = param_list.begin(); it != param_list.end(); )
|
||||
{
|
||||
if(TY_STATUS_OK == device_write_feature(hDevice, it->compID, it->featID, it->feat_value))
|
||||
{
|
||||
it = param_list.erase(it);
|
||||
} else {
|
||||
++it;
|
||||
}
|
||||
}
|
||||
|
||||
if(param_list.size() == 0) {
|
||||
return true;
|
||||
}
|
||||
|
||||
if(param_list.size() == cnt) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
}
|
||||
return false;
|
||||
}
|
||||
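json_parse() above keeps retrying features that fail to apply until a full pass makes no further progress, which tolerates ordering dependencies between parameters. Below is a sketch of the JSON layout it expects, reconstructed from the keys it reads; the component and feature ids are placeholders:

```cpp
#include "ParametersParse.h"

// Schema inferred from json_parse():
//   { "component": [ { "id", "desc", "feature": [ { "name", "id", "value" } ] } ] }
static const char* kExampleConfig = R"({
  "component": [
    {
      "id":   "0x00010000",
      "desc": "depth sensor",
      "feature": [
        { "name": "int/image_width", "id": "0x0100", "value": 1280 },
        { "name": "bool/trigger",    "id": "0x4000", "value": true }
      ]
    }
  ]
})";

bool applyExampleConfig(TY_DEV_HANDLE hDevice)
{
    if (!isValidJsonString(kExampleConfig)) {
        return false;
    }
    return json_parse(hDevice, kExampleConfig);  // retries until everything sticks or no progress is made
}
```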
6
image_capture/third_party/percipio/common/ParametersParse.h
vendored
Normal file
@@ -0,0 +1,6 @@
|
||||
#ifndef _PARAMETERS_PARSE_H_
|
||||
#define _PARAMETERS_PARSE_H_
|
||||
#include "TYApi.h"
|
||||
bool isValidJsonString(const char* code);
|
||||
bool json_parse(const TY_DEV_HANDLE hDevice, const char* jscode);
|
||||
#endif
|
||||
83
image_capture/third_party/percipio/common/TYThread.cpp
vendored
Normal file
@@ -0,0 +1,83 @@
|
||||
#include "TYThread.hpp"
|
||||
|
||||
#ifdef _WIN32
|
||||
|
||||
#include <windows.h>
|
||||
class TYThreadImpl
|
||||
{
|
||||
public:
|
||||
TYThreadImpl() : _thread(NULL) {}
|
||||
int create(TYThread::Callback_t cb, void* arg) {
|
||||
DWORD dwThreadId = 0;
|
||||
_thread = CreateThread(
|
||||
NULL, // default security attributes
|
||||
0, // use default stack size
|
||||
(LPTHREAD_START_ROUTINE)cb, // thread function name
|
||||
arg, // argument to thread function
|
||||
0, // use default creation flags
|
||||
&dwThreadId); // returns the thread identifier
|
||||
return 0;
|
||||
}
|
||||
int destroy() {
|
||||
// TerminateThread(_thread, 0);
|
||||
switch (WaitForSingleObject(_thread, INFINITE))
|
||||
{
|
||||
case WAIT_OBJECT_0:
|
||||
if (CloseHandle(_thread)) {
|
||||
_thread = 0;
|
||||
return 0;
|
||||
}
|
||||
else {
|
||||
return -1;
|
||||
}
|
||||
default:
|
||||
return -2;
|
||||
}
|
||||
}
|
||||
private:
|
||||
HANDLE _thread;
|
||||
};
|
||||
|
||||
#else // _WIN32
|
||||
|
||||
#include <pthread.h>
|
||||
class TYThreadImpl
|
||||
{
|
||||
public:
|
||||
TYThreadImpl() {}
|
||||
int create(TYThread::Callback_t cb, void* arg) {
|
||||
int ret = pthread_create(&_thread, NULL, cb, arg);
|
||||
return ret;
|
||||
}
|
||||
int destroy() {
|
||||
pthread_join(_thread, NULL);
|
||||
return 0;
|
||||
}
|
||||
private:
|
||||
pthread_t _thread;
|
||||
};
|
||||
|
||||
#endif // _WIN32
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
TYThread::TYThread()
|
||||
{
|
||||
impl = new TYThreadImpl();
|
||||
}
|
||||
|
||||
TYThread::~TYThread()
|
||||
{
|
||||
delete impl;
|
||||
impl = NULL;
|
||||
}
|
||||
|
||||
int TYThread::create(Callback_t cb, void* arg)
|
||||
{
|
||||
return impl->create(cb, arg);
|
||||
}
|
||||
|
||||
int TYThread::destroy()
|
||||
{
|
||||
return impl->destroy();
|
||||
}
|
||||
25
image_capture/third_party/percipio/common/TYThread.hpp
vendored
Normal file
@@ -0,0 +1,25 @@
|
||||
#ifndef XYZ_TYThread_HPP_
|
||||
#define XYZ_TYThread_HPP_
|
||||
|
||||
|
||||
class TYThreadImpl;
|
||||
|
||||
class TYThread
|
||||
{
|
||||
public:
|
||||
typedef void* (*Callback_t)(void*);
|
||||
|
||||
TYThread();
|
||||
~TYThread();
|
||||
|
||||
int create(Callback_t cb, void* arg);
|
||||
int destroy();
|
||||
|
||||
private:
|
||||
TYThreadImpl* impl;
|
||||
};
|
||||
|
||||
|
||||
|
||||
|
||||
#endif
|
||||
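A usage sketch for the TYThread wrapper declared above; destroy() joins the thread (WaitForSingleObject on Windows, pthread_join elsewhere):

```cpp
#include <cstdio>
#include "TYThread.hpp"

// Callback must match TYThread::Callback_t, i.e. void* (*)(void*).
static void* workerFunc(void* arg)
{
    int* counter = static_cast<int*>(arg);
    for (int i = 0; i < 5; i++) {
        (*counter)++;
    }
    return nullptr;
}

int main()
{
    int counter = 0;
    TYThread worker;
    worker.create(workerFunc, &counter);  // start the worker
    worker.destroy();                     // join before reading the shared counter
    std::printf("counter = %d\n", counter);
    return 0;
}
```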
496
image_capture/third_party/percipio/common/Utils.hpp
vendored
Normal file
@@ -0,0 +1,496 @@
|
||||
#ifndef SAMPLE_COMMON_UTILS_HPP_
|
||||
#define SAMPLE_COMMON_UTILS_HPP_
|
||||
|
||||
/**
|
||||
* This file deliberately avoids OpenCV dependencies so it can also be used by sample_raw.
|
||||
*/
|
||||
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
#include <string>
|
||||
#include <vector>
|
||||
#include <iostream>
|
||||
#include <fstream>
|
||||
#include <sstream>
|
||||
#include <inttypes.h>
|
||||
#include "TYApi.h"
|
||||
#include "TYThread.hpp"
|
||||
#include "crc32.h"
|
||||
#include "ParametersParse.h"
|
||||
#include "huffman.h"
|
||||
|
||||
#ifndef ASSERT
|
||||
#define ASSERT(x) do{ \
|
||||
if(!(x)) { \
|
||||
LOGE("Assert failed at %s:%d", __FILE__, __LINE__); \
|
||||
LOGE(" : " #x ); \
|
||||
abort(); \
|
||||
} \
|
||||
}while(0)
|
||||
#endif
|
||||
|
||||
#ifndef ASSERT_OK
|
||||
#define ASSERT_OK(x) do{ \
|
||||
int err = (x); \
|
||||
if(err != TY_STATUS_OK) { \
|
||||
LOGE("Assert failed: error %d(%s) at %s:%d", err, TYErrorString(err), __FILE__, __LINE__); \
|
||||
LOGE(" : " #x ); \
|
||||
abort(); \
|
||||
} \
|
||||
}while(0)
|
||||
#endif
|
||||
|
||||
#ifndef CHECK_RET
|
||||
#define CHECK_RET(x) do{ \
|
||||
int err = (x); \
|
||||
if(err != TY_STATUS_OK) { \
|
||||
LOGD(#x " failed: error %d(%s)", err, TYErrorString(err)); \
|
||||
LOGD("at %s:%d", __FILE__, __LINE__); \
|
||||
} \
|
||||
}while(0)
|
||||
#endif
|
||||
|
||||
#ifdef _WIN32
|
||||
# include <windows.h>
|
||||
# include <time.h>
|
||||
static inline char* getLocalTime()
|
||||
{
|
||||
static char local[26] = {0};
|
||||
SYSTEMTIME wtm;
|
||||
struct tm tm;
|
||||
GetLocalTime(&wtm);
|
||||
tm.tm_year = wtm.wYear - 1900;
|
||||
tm.tm_mon = wtm.wMonth - 1;
|
||||
tm.tm_mday = wtm.wDay;
|
||||
tm.tm_hour = wtm.wHour;
|
||||
tm.tm_min = wtm.wMinute;
|
||||
tm.tm_sec = wtm.wSecond;
|
||||
tm.tm_isdst = -1;
|
||||
|
||||
strftime(local, 26, "%Y-%m-%d %H:%M:%S", &tm);
|
||||
|
||||
return local;
|
||||
}
|
||||
|
||||
static inline uint64_t getSystemTime()
|
||||
{
|
||||
SYSTEMTIME wtm;
|
||||
struct tm tm;
|
||||
GetLocalTime(&wtm);
|
||||
tm.tm_year = wtm.wYear - 1900;
|
||||
tm.tm_mon = wtm.wMonth - 1;
|
||||
tm.tm_mday = wtm.wDay;
|
||||
tm.tm_hour = wtm.wHour;
|
||||
tm.tm_min = wtm.wMinute;
|
||||
tm.tm_sec = wtm.wSecond;
|
||||
tm.tm_isdst = -1;
|
||||
return mktime(&tm) * 1000 + wtm.wMilliseconds;
|
||||
}
|
||||
static inline void MSleep(uint32_t ms)
|
||||
{
|
||||
Sleep(ms);
|
||||
}
|
||||
#else
|
||||
# include <sys/time.h>
|
||||
# include <unistd.h>
|
||||
static inline char* getLocalTime()
|
||||
{
|
||||
static char local[26] = {0};
|
||||
time_t time;
|
||||
|
||||
struct timeval tv;
|
||||
gettimeofday(&tv, NULL);
|
||||
|
||||
time = tv.tv_sec;
|
||||
struct tm* p_time = localtime(&time);
|
||||
strftime(local, 26, "%Y-%m-%d %H:%M:%S", p_time);
|
||||
|
||||
return local;
|
||||
}
|
||||
|
||||
static inline uint64_t getSystemTime()
|
||||
{
|
||||
struct timeval tv;
|
||||
gettimeofday(&tv, NULL);
|
||||
return tv.tv_sec*1000 + tv.tv_usec/1000;
|
||||
}
|
||||
static inline void MSleep(uint32_t ms)
|
||||
{
|
||||
usleep(ms * 1000);
|
||||
}
|
||||
#endif
|
||||
|
||||
|
||||
#define LOGD(fmt,...) printf("%" PRIu64 " (%s) " fmt "\n", getSystemTime(), getLocalTime(), ##__VA_ARGS__)
|
||||
#define LOGI(fmt,...) printf("%" PRIu64 " (%s) " fmt "\n", getSystemTime(), getLocalTime(), ##__VA_ARGS__)
|
||||
#define LOGW(fmt,...) printf("%" PRIu64 " (%s) " fmt "\n", getSystemTime(), getLocalTime(), ##__VA_ARGS__)
|
||||
#define LOGE(fmt,...) printf("%" PRIu64 " (%s) Error: " fmt "\n", getSystemTime(), getLocalTime(), ##__VA_ARGS__)
|
||||
#define xLOGD(fmt,...)
|
||||
#define xLOGI(fmt,...)
|
||||
#define xLOGW(fmt,...)
|
||||
#define xLOGE(fmt,...)
|
||||
|
||||
|
||||
#ifdef _WIN32
|
||||
# include <windows.h>
|
||||
# define MSLEEP(x) Sleep(x)
|
||||
// windows defined macro max/min
|
||||
# ifdef max
|
||||
# undef max
|
||||
# endif
|
||||
# ifdef min
|
||||
# undef min
|
||||
# endif
|
||||
#else
|
||||
# include <unistd.h>
|
||||
# include <sys/time.h>
|
||||
# define MSLEEP(x) usleep((x)*1000)
|
||||
#endif
|
||||
|
||||
static inline const char* colorFormatName(TY_PIXEL_FORMAT fmt)
|
||||
{
|
||||
#define FORMAT_CASE(a) case (a): return #a
|
||||
switch(fmt){
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_UNDEFINED);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_MONO);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_RGB);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_YVYU);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_YUYV);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_DEPTH16);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_BAYER8GB);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_BAYER8BG);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_BAYER8GR);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_BAYER8RG);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_CSI_MONO10);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_CSI_BAYER10GBRG);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_CSI_BAYER10BGGR);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_CSI_BAYER10GRBG);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_CSI_BAYER10RGGB);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_CSI_MONO12);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_CSI_BAYER12GBRG);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_CSI_BAYER12BGGR);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_CSI_BAYER12GRBG);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_CSI_BAYER12RGGB);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_BGR);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_JPEG);
|
||||
FORMAT_CASE(TY_PIXEL_FORMAT_MJPG);
|
||||
default: return "UNKNOWN FORMAT";
|
||||
}
|
||||
#undef FORMAT_CASE
|
||||
}
|
||||
|
||||
|
||||
static inline const TY_IMAGE_DATA* TYImageInFrame(const TY_FRAME_DATA& frame
|
||||
, const TY_COMPONENT_ID comp)
|
||||
{
|
||||
for(int i = 0; i < frame.validCount; i++){
|
||||
if(frame.image[i].componentID == comp){
|
||||
return &frame.image[i];
|
||||
}
|
||||
}
|
||||
return NULL;
|
||||
}
|
||||
static void *updateThreadFunc(void *userdata)
|
||||
{
|
||||
TY_INTERFACE_HANDLE iface = (TY_INTERFACE_HANDLE)userdata;
|
||||
TYUpdateDeviceList(iface);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static TY_STATUS updateDevicesParallel(std::vector<TY_INTERFACE_HANDLE> &ifaces,
|
||||
uint64_t timeout=2000)
|
||||
{
|
||||
if(ifaces.size() != 0) {
|
||||
TYThread *updateThreads = new TYThread[ifaces.size()];
|
||||
for(int i = 0; i < ifaces.size(); i++) {
|
||||
updateThreads[i].create(updateThreadFunc, ifaces[i]);
|
||||
}
|
||||
for(int i = 0; i < ifaces.size(); i++) {
|
||||
updateThreads[i].destroy();
|
||||
}
|
||||
delete [] updateThreads;
|
||||
updateThreads = NULL;
|
||||
}
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
static inline TY_STATUS selectDevice(TY_INTERFACE_TYPE iface
|
||||
, const std::string& ID, const std::string& IP
|
||||
, uint32_t deviceNum, std::vector<TY_DEVICE_BASE_INFO>& out)
|
||||
{
|
||||
LOGD("Update interface list");
|
||||
ASSERT_OK( TYUpdateInterfaceList() );
|
||||
|
||||
uint32_t n = 0;
|
||||
ASSERT_OK( TYGetInterfaceNumber(&n) );
|
||||
LOGD("Got %u interface list", n);
|
||||
if(n == 0){
|
||||
LOGE("interface number incorrect");
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
std::vector<TY_INTERFACE_INFO> ifaces(n);
|
||||
ASSERT_OK( TYGetInterfaceList(&ifaces[0], n, &n) );
|
||||
ASSERT( n == ifaces.size() );
|
||||
for(uint32_t i = 0; i < n; i++){
|
||||
LOGI("Found interface %u:", i);
|
||||
LOGI(" name: %s", ifaces[i].name);
|
||||
LOGI(" id: %s", ifaces[i].id);
|
||||
LOGI(" type: 0x%x", ifaces[i].type);
|
||||
if(TYIsNetworkInterface(ifaces[i].type)){
|
||||
LOGI(" MAC: %s", ifaces[i].netInfo.mac);
|
||||
LOGI(" ip: %s", ifaces[i].netInfo.ip);
|
||||
LOGI(" netmask: %s", ifaces[i].netInfo.netmask);
|
||||
LOGI(" gateway: %s", ifaces[i].netInfo.gateway);
|
||||
LOGI(" broadcast: %s", ifaces[i].netInfo.broadcast);
|
||||
}
|
||||
}
|
||||
|
||||
out.clear();
|
||||
std::vector<TY_INTERFACE_TYPE> ifaceTypeList;
|
||||
std::vector<TY_INTERFACE_HANDLE> hIfaces;
|
||||
ifaceTypeList.push_back(TY_INTERFACE_USB);
|
||||
ifaceTypeList.push_back(TY_INTERFACE_ETHERNET);
|
||||
ifaceTypeList.push_back(TY_INTERFACE_IEEE80211);
|
||||
for(size_t t = 0; t < ifaceTypeList.size(); t++){
|
||||
for(uint32_t i = 0; i < ifaces.size(); i++){
|
||||
if(ifaces[i].type == ifaceTypeList[t] && (ifaces[i].type & iface) && deviceNum > out.size()){
|
||||
TY_INTERFACE_HANDLE hIface;
|
||||
ASSERT_OK( TYOpenInterface(ifaces[i].id, &hIface) );
|
||||
hIfaces.push_back(hIface);
|
||||
}
|
||||
}
|
||||
}
|
||||
updateDevicesParallel(hIfaces);
|
||||
for (uint32_t i = 0; i < hIfaces.size(); i++) {
|
||||
TY_INTERFACE_HANDLE hIface = hIfaces[i];
|
||||
uint32_t n = 0;
|
||||
TYGetDeviceNumber(hIface, &n);
|
||||
if(n > 0){
|
||||
std::vector<TY_DEVICE_BASE_INFO> devs(n);
|
||||
TYGetDeviceList(hIface, &devs[0], n, &n);
|
||||
for(uint32_t j = 0; j < n; j++){
|
||||
if(deviceNum > out.size() && ((ID.empty() && IP.empty())
|
||||
|| (!ID.empty() && devs[j].id == ID)
|
||||
|| (!IP.empty() && IP == devs[j].netInfo.ip)))
|
||||
{
|
||||
if (devs[j].iface.type == TY_INTERFACE_ETHERNET || devs[j].iface.type == TY_INTERFACE_IEEE80211) {
|
||||
LOGI("*** Select %s on %s, ip %s", devs[j].id, devs[j].iface.id, devs[j].netInfo.ip);
|
||||
} else {
|
||||
LOGI("*** Select %s on %s", devs[j].id, devs[j].iface.id);
|
||||
}
|
||||
out.push_back(devs[j]);
|
||||
}
|
||||
}
|
||||
}
|
||||
TYCloseInterface(hIface);
|
||||
}
|
||||
|
||||
if(out.size() == 0){
|
||||
LOGE("not found any device");
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
return TY_STATUS_OK;
|
||||
}
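// Illustrative sketch (not part of the original sample code): select the first device on
// any of the interface types handled above and log it, reusing only helpers and constants
// already referenced in this header. Error handling follows the LOGE/return convention
// used by selectDevice itself.
static inline TY_STATUS logFirstDevice()
{
    std::vector<TY_DEVICE_BASE_INFO> selected;
    // Empty ID/IP filters match every device; deviceNum = 1 stops after the first hit.
    TY_STATUS rc = selectDevice((TY_INTERFACE_TYPE)(TY_INTERFACE_USB | TY_INTERFACE_ETHERNET | TY_INTERFACE_IEEE80211),
                                "", "", 1, selected);
    if (rc != TY_STATUS_OK || selected.empty()) {
        LOGE("no device available");
        return TY_STATUS_ERROR;
    }
    LOGI("first device: %s on interface %s", selected[0].id, selected[0].iface.id);
    return TY_STATUS_OK;
}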
|
||||
|
||||
static inline TY_STATUS get_feature_enum_list(TY_DEV_HANDLE handle,
|
||||
TY_COMPONENT_ID compID,
|
||||
TY_FEATURE_ID featID,
|
||||
std::vector<TY_ENUM_ENTRY> &feature_info){
|
||||
uint32_t n = 0;
|
||||
ASSERT_OK(TYGetEnumEntryCount(handle, compID, featID, &n));
|
||||
LOGD("=== %14s: entry count %d", "", n);
|
||||
feature_info.clear();
|
||||
if (n == 0){
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
feature_info.resize(n);
|
||||
ASSERT_OK(TYGetEnumEntryInfo(handle, compID, featID, &feature_info[0], n, &n));
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
static inline TY_STATUS get_image_mode(TY_DEV_HANDLE handle
|
||||
, TY_COMPONENT_ID compID
|
||||
, TY_IMAGE_MODE &image_mode, int idx)
|
||||
{
|
||||
std::vector<TY_ENUM_ENTRY> image_mode_list;
|
||||
ASSERT_OK(get_feature_enum_list(handle, compID, TY_ENUM_IMAGE_MODE, image_mode_list));
|
||||
if (image_mode_list.size() == 0 || idx < 0
|
||||
|| idx > image_mode_list.size() -1){
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
image_mode = image_mode_list[idx].value;
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
static inline TY_STATUS get_default_image_mode(TY_DEV_HANDLE handle
|
||||
, TY_COMPONENT_ID compID
|
||||
, TY_IMAGE_MODE &image_mode)
|
||||
{
|
||||
return get_image_mode(handle, compID, image_mode, 0);
|
||||
}
|
||||
|
||||
enum EncodingType : uint32_t
|
||||
{
|
||||
HUFFMAN = 0,
|
||||
};
|
||||
//10MB
|
||||
#define MAX_STORAGE_SIZE (10*1024*1024)
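// Layout of the custom storage block, summarized from the helpers below: bytes 0-3 hold a
// CRC32 of the payload; for Huffman-compressed parameters, bytes 4-7 hold the EncodingType,
// bytes 8-11 the compressed length, and the payload starts at byte 12. A block may instead
// contain plain JSON text starting at byte 4 (with the CRC computed over that text);
// load_parameters_from_storage accepts both forms.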
|
||||
|
||||
static inline TY_STATUS clear_storage(const TY_DEV_HANDLE handle)
|
||||
{
|
||||
uint32_t block_size;
|
||||
ASSERT_OK( TYGetByteArraySize(handle, TY_COMPONENT_STORAGE, TY_BYTEARRAY_CUSTOM_BLOCK, &block_size) );
|
||||
|
||||
uint8_t* blocks = new uint8_t[MAX_STORAGE_SIZE] ();
|
||||
ASSERT_OK( TYSetByteArray(handle, TY_COMPONENT_STORAGE, TY_BYTEARRAY_CUSTOM_BLOCK, blocks, block_size) );
|
||||
|
||||
delete []blocks;
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
static inline TY_STATUS load_parameters_from_storage(const TY_DEV_HANDLE handle, std::string& js)
|
||||
{
|
||||
uint32_t block_size;
|
||||
uint8_t* blocks = new uint8_t[MAX_STORAGE_SIZE] ();
|
||||
ASSERT_OK( TYGetByteArraySize(handle, TY_COMPONENT_STORAGE, TY_BYTEARRAY_CUSTOM_BLOCK, &block_size) );
|
||||
ASSERT_OK( TYGetByteArray(handle, TY_COMPONENT_STORAGE, TY_BYTEARRAY_CUSTOM_BLOCK, blocks, block_size) );
|
||||
|
||||
uint32_t crc_data = *(uint32_t*)blocks;
|
||||
if(0 == crc_data || 0xffffffff == crc_data) {
|
||||
LOGE("The CRC check code is empty.");
|
||||
delete []blocks;
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
uint32_t crc;
|
||||
uint8_t* js_code = blocks + 4;
|
||||
crc = crc32_bitwise(js_code, strlen((const char*)js_code));
|
||||
if((crc != crc_data) || !isValidJsonString((const char*)js_code)) {
|
||||
EncodingType type = *(EncodingType*)(blocks + 4);
|
||||
ASSERT(type == HUFFMAN);
|
||||
uint32_t huffman_size = *(uint32_t*)(blocks + 8);
|
||||
uint8_t* huffman_ptr = (uint8_t*)(blocks + 12);
|
||||
if(huffman_size > (MAX_STORAGE_SIZE - 12)) {
|
||||
LOGE("Data length error.");
|
||||
delete []blocks;
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
crc = crc32_bitwise(huffman_ptr, huffman_size);
|
||||
if(crc_data != crc) {
|
||||
LOGE("The data in the storage area has a CRC check error.");
|
||||
delete []blocks;
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
std::string huffman_string(huffman_ptr, huffman_ptr + huffman_size);
|
||||
if(!TextHuffmanDecompression(huffman_string, js)) {
|
||||
LOGE("Huffman decoding error");
|
||||
delete []blocks;
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
} else {
|
||||
js = std::string((const char*)js_code);
|
||||
}
|
||||
|
||||
if(!json_parse(handle, (const char* )js.c_str())) {
|
||||
LOGW("parameters load fail!");
|
||||
delete []blocks;
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
delete []blocks;
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
static inline TY_STATUS write_parameters_to_storage(const TY_DEV_HANDLE handle, const std::string& json_file)
|
||||
{
|
||||
std::ifstream ifs(json_file);
|
||||
if (!ifs.is_open()) {
|
||||
LOGE("Unable to open file");
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
std::stringstream buffer;
|
||||
buffer << ifs.rdbuf();
|
||||
ifs.close();
|
||||
|
||||
std::string huffman_string;
|
||||
if(!TextHuffmanCompression(buffer.str(), huffman_string)) {
|
||||
LOGE("Huffman compression error");
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
const char* str = huffman_string.data();
|
||||
uint32_t crc = crc32_bitwise(str, huffman_string.length());
|
||||
|
||||
uint32_t block_size;
|
||||
ASSERT_OK( TYGetByteArraySize(handle, TY_COMPONENT_STORAGE, TY_BYTEARRAY_CUSTOM_BLOCK, &block_size) );
|
||||
if(block_size < huffman_string.length() + 12) {
|
||||
LOGE("The configuration file is too large, the maximum size should not exceed 4000 bytes");
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
uint8_t* blocks = new uint8_t[block_size] ();
|
||||
*(uint32_t*)blocks = crc;
|
||||
*(uint32_t*)(blocks + 4) = HUFFMAN;
|
||||
*(uint32_t*)(blocks + 8) = huffman_string.length();
|
||||
memcpy((char*)blocks + 12, str, huffman_string.length());
|
||||
ASSERT_OK( TYSetByteArray(handle, TY_COMPONENT_STORAGE, TY_BYTEARRAY_CUSTOM_BLOCK, blocks, block_size) );
|
||||
|
||||
delete []blocks;
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
static inline void parse_firmware_errcode(TY_FW_ERRORCODE err_code) {
|
||||
if (TY_FW_ERRORCODE_CAM0_NOT_DETECTED & err_code) {
|
||||
LOGE("Left sensor Not Detected");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_CAM1_NOT_DETECTED & err_code) {
|
||||
LOGE("Right sensor Not Detected");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_CAM2_NOT_DETECTED & err_code) {
|
||||
LOGE("Color sensor Not Detected");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_POE_NOT_INIT & err_code) {
|
||||
LOGE("POE init error");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_RECMAP_NOT_CORRECT & err_code) {
|
||||
LOGE("RecMap error");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_LOOKUPTABLE_NOT_CORRECT & err_code) {
|
||||
LOGE("Disparity error");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_DRV8899_NOT_INIT & err_code) {
|
||||
LOGE("Motor init error");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_FOC_START_ERR & err_code) {
|
||||
LOGE("Motor start failed");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_CONFIG_NOT_FOUND & err_code) {
|
||||
LOGE("Config file not exist");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_CONFIG_NOT_CORRECT & err_code) {
|
||||
LOGE("Broken Config file");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_XML_NOT_FOUND & err_code) {
|
||||
LOGE("XML file not exist");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_XML_NOT_CORRECT & err_code) {
|
||||
LOGE("XML Parse err");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_XML_OVERRIDE_FAILED & err_code) {
|
||||
LOGE("Illegal XML file overrided, Only Used in Debug Mode!");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_CAM_INIT_FAILED & err_code) {
|
||||
LOGE("Init default cam feature failed!");
|
||||
}
|
||||
if (TY_FW_ERRORCODE_LASER_INIT_FAILED & err_code) {
|
||||
LOGE("Init default laser feature failed!");
|
||||
}
|
||||
|
||||
}
|
||||
#endif
|
||||
539
image_capture/third_party/percipio/common/common.hpp
vendored
Normal file
@@ -0,0 +1,539 @@
|
||||
#ifndef SAMPLE_COMMON_COMMON_HPP_
|
||||
#define SAMPLE_COMMON_COMMON_HPP_
|
||||
|
||||
#include "Utils.hpp"
|
||||
|
||||
#include <fstream>
|
||||
#include <iterator>
|
||||
|
||||
#include <memory>
|
||||
#include <iostream>
|
||||
#include <typeinfo>
|
||||
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
#include <opencv2/opencv.hpp>
|
||||
#include "DepthRender.hpp"
|
||||
#include "MatViewer.hpp"
|
||||
#include "DepthInpainter.hpp"
|
||||
#endif
|
||||
|
||||
#include "TYThread.hpp"
|
||||
#include "TyIsp.h"
|
||||
#include "BayerISP.hpp"
|
||||
#include "CommandLineParser.hpp"
|
||||
#include "CommandLineFeatureHelper.hpp"
|
||||
|
||||
static inline int decodeCsiRaw10(unsigned char* src, unsigned short* dst, int width, int height)
|
||||
{
|
||||
if(width & 0x3) {
|
||||
return -1;
|
||||
}
|
||||
int raw10_line_size = 5 * width / 4;
|
||||
for(size_t i = 0, j = 0; i < raw10_line_size * height; i+=5, j+=4)
|
||||
{
|
||||
//[A2 - A9] | [B2 - B9] | [C2 - C9] | [D2 - D9] | [A0A1-B0B1-C0C1-D0D1]
|
||||
dst[j + 0] = ((uint16_t)src[i + 0] << 2) | ((src[i + 4] & 0x3) >> 0);
|
||||
dst[j + 1] = ((uint16_t)src[i + 1] << 2) | ((src[i + 4] & 0xc) >> 2);
|
||||
dst[j + 2] = ((uint16_t)src[i + 2] << 2) | ((src[i + 4] & 0x30) >> 4);
|
||||
dst[j + 3] = ((uint16_t)src[i + 3] << 2) | ((src[i + 4] & 0xc0) >> 6);
|
||||
}
|
||||
return 0;
|
||||
}
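// Minimal self-check sketch (illustrative addition, not part of the original sample):
// pack four known 10-bit samples into the 5-bytes-per-4-pixels CSI RAW10 layout described
// in the comment above and confirm decodeCsiRaw10 recovers them.
static inline bool selfTestCsiRaw10()
{
    const unsigned short expect[4] = {0x3FF, 0x001, 0x200, 0x155};
    unsigned char packed[5];
    packed[4] = 0;
    for (int k = 0; k < 4; k++) {
        packed[k]  = (unsigned char)(expect[k] >> 2);              // high 8 bits of each sample
        packed[4] |= (unsigned char)((expect[k] & 0x3) << (2 * k)); // low 2 bits share the 5th byte
    }
    unsigned short out[4] = {0};
    decodeCsiRaw10(packed, out, 4, 1);   // width must be a multiple of 4
    for (int k = 0; k < 4; k++) {
        if (out[k] != expect[k]) return false;
    }
    return true;
}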
|
||||
|
||||
static inline int decodeCsiRaw12(unsigned char* src, unsigned short* dst, int width, int height)
|
||||
{
|
||||
if(width & 0x1) {
|
||||
return -1;
|
||||
}
|
||||
int raw12_line_size = 3 * width / 2;
|
||||
for(size_t i = 0, j = 0; i < raw12_line_size * height; i+=3, j+=2)
|
||||
{
|
||||
//[A4 - A11] | [B4 - B11] | [A0A1A2A3-B0B1B2B3]
|
||||
dst[j + 0] = ((uint16_t)src[i + 0] << 4) | ((src[i + 2] & 0x0f) >> 0);
|
||||
dst[j + 1] = ((uint16_t)src[i + 1] << 4) | ((src[i + 2] & 0xf0) >> 4);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline int decodeCsiRaw14(unsigned char* src, unsigned short* dst, int width, int height)
|
||||
{
|
||||
if(width & 0x3) {
|
||||
return -1;
|
||||
}
|
||||
int raw14_line_size = 7 * width / 4;
|
||||
for(size_t i = 0, j = 0; i < raw14_line_size * height; i+=7, j+=4)
|
||||
{
|
||||
//[A6 - A13] | [B6 - B13] | [C6 - C13] | [D6 - D13] | [A0A1A2A3A4A5-B0B1] | [B2B3B4B5-C0C1C2C3] | [C4C5-D0D1D2D3D4D5]
|
||||
dst[j + 0] = ((uint16_t)src[i + 0] << 6) | ((src[i + 4] & 0x3f) >> 0);
|
||||
dst[j + 1] = ((uint16_t)src[i + 1] << 6) | ((src[i + 4] & 0xc0) >> 6) | ((src[i + 5] & 0x0f) << 2);
|
||||
dst[j + 2] = ((uint16_t)src[i + 2] << 6) | ((src[i + 5] & 0xf0) >> 4) | ((src[i + 6] & 0x03) << 4);
|
||||
dst[j + 3] = ((uint16_t)src[i + 3] << 6) | ((src[i + 6] & 0xfc) >> 2);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
static inline int parseCsiRaw10(unsigned char* src, cv::Mat &dst, int width, int height)
|
||||
{
|
||||
cv::Mat m(height, width, CV_16U);
|
||||
decodeCsiRaw10(src, (ushort*)m.data, width, height);
|
||||
//convert valid 10bit from lsb to msb, d = s * 64
|
||||
dst = m * 64;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline int parseCsiRaw12(unsigned char* src, cv::Mat &dst, int width, int height)
|
||||
{
|
||||
cv::Mat m(height, width, CV_16U);
|
||||
decodeCsiRaw12(src, (ushort*)m.data, width, height);
|
||||
//convert valid 12bit from lsb to msb, d = s * 16
|
||||
dst = m * 16;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline int parseIrFrame(const TY_IMAGE_DATA* img, cv::Mat* pIR)
|
||||
{
|
||||
if (img->pixelFormat == TY_PIXEL_FORMAT_MONO16 || img->pixelFormat==TY_PIXEL_FORMAT_TOF_IR_MONO16){
|
||||
*pIR = cv::Mat(img->height, img->width, CV_16U, img->buffer).clone();
|
||||
} else if(img->pixelFormat == TY_PIXEL_FORMAT_CSI_MONO10) {
|
||||
*pIR = cv::Mat(img->height, img->width, CV_16U);
|
||||
parseCsiRaw10((uchar*)img->buffer, (*pIR), img->width, img->height);
|
||||
} else if(img->pixelFormat == TY_PIXEL_FORMAT_MONO) {
|
||||
*pIR = cv::Mat(img->height, img->width, CV_8U, img->buffer).clone();
|
||||
} else if(img->pixelFormat == TY_PIXEL_FORMAT_CSI_MONO12) {
|
||||
*pIR = cv::Mat(img->height, img->width, CV_8U, img->buffer).clone();
|
||||
parseCsiRaw12((uchar*)img->buffer, (*pIR), img->width, img->height);
|
||||
}
|
||||
else {
|
||||
return -1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline int parseBayer8Frame(const TY_IMAGE_DATA* img, cv::Mat* pColor, TY_ISP_HANDLE color_isp_handle = NULL)
|
||||
{
|
||||
int code = cv::COLOR_BayerGB2BGR;
|
||||
switch (img->pixelFormat)
|
||||
{
|
||||
case TY_PIXEL_FORMAT_BAYER8GBRG:
|
||||
code = cv::COLOR_BayerGR2BGR;
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_BAYER8BGGR:
|
||||
code = cv::COLOR_BayerRG2BGR;
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_BAYER8GRBG:
|
||||
code = cv::COLOR_BayerGB2BGR;
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_BAYER8RGGB:
|
||||
code = cv::COLOR_BayerBG2BGR;
|
||||
break;
|
||||
default:
|
||||
LOGE("Invalid bayer8 fmt!");
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (!color_isp_handle){
|
||||
cv::Mat raw(img->height, img->width, CV_8U, img->buffer);
|
||||
cv::cvtColor(raw, *pColor, code);
|
||||
}
|
||||
else{
|
||||
cv::Mat raw(img->height, img->width, CV_8U, img->buffer);
|
||||
pColor->create(img->height, img->width, CV_8UC3);
|
||||
int sz = img->height* img->width * 3;
|
||||
TY_IMAGE_DATA out_buff = TYInitImageData(sz, pColor->data, img->width, img->height);
|
||||
out_buff.pixelFormat = TY_PIXEL_FORMAT_BGR;
|
||||
int res = TYISPProcessImage(color_isp_handle, img, &out_buff);
|
||||
if (res != TY_STATUS_OK){
|
||||
//fall back to using opencv api
|
||||
cv::Mat raw(img->height, img->width, CV_8U, img->buffer);
|
||||
cv::cvtColor(raw, *pColor, code);
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline int parseBayer10Frame(const TY_IMAGE_DATA* img, cv::Mat* pColor)
|
||||
{
|
||||
int code = cv::COLOR_BayerGB2BGR;
|
||||
switch (img->pixelFormat)
|
||||
{
|
||||
case TY_PIXEL_FORMAT_CSI_BAYER10GBRG:
|
||||
code = cv::COLOR_BayerGR2BGR;
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_CSI_BAYER10BGGR:
|
||||
code = cv::COLOR_BayerRG2BGR;
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_CSI_BAYER10GRBG:
|
||||
code = cv::COLOR_BayerGB2BGR;
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_CSI_BAYER10RGGB:
|
||||
code = cv::COLOR_BayerBG2BGR;
|
||||
break;
|
||||
default:
|
||||
LOGE("Invalid bayer10 fmt!");
|
||||
return -1;
|
||||
}
|
||||
cv::Mat raw16(img->height, img->width, CV_16U);
|
||||
parseCsiRaw10((uchar*)img->buffer, raw16, img->width, img->height);
|
||||
cv::cvtColor(raw16, *pColor, code);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline int parseBayer12Frame(const TY_IMAGE_DATA* img, cv::Mat* pColor)
|
||||
{
|
||||
int code = cv::COLOR_BayerGB2BGR;
|
||||
switch (img->pixelFormat)
|
||||
{
|
||||
case TY_PIXEL_FORMAT_CSI_BAYER12GBRG:
|
||||
code = cv::COLOR_BayerGR2BGR;
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_CSI_BAYER12BGGR:
|
||||
code = cv::COLOR_BayerRG2BGR;
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_CSI_BAYER12GRBG:
|
||||
code = cv::COLOR_BayerGB2BGR;
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_CSI_BAYER12RGGB:
|
||||
code = cv::COLOR_BayerBG2BGR;
|
||||
break;
|
||||
default:
|
||||
LOGE("Invalid bayer12 fmt!");
|
||||
return -1;
|
||||
}
|
||||
cv::Mat raw16(img->height, img->width, CV_16U);
|
||||
parseCsiRaw12((uchar*)img->buffer, raw16, img->width, img->height);
|
||||
cv::cvtColor(raw16, *pColor, code);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline int parseColorFrame(const TY_IMAGE_DATA* img, cv::Mat* pColor, TY_ISP_HANDLE color_isp_handle = NULL)
|
||||
{
|
||||
int ret = 0;
|
||||
if (img->pixelFormat == TY_PIXEL_FORMAT_JPEG){
|
||||
std::vector<uchar> _v((uchar*)img->buffer, (uchar*)img->buffer + img->size);
|
||||
*pColor = cv::imdecode(_v, cv::IMREAD_COLOR);
|
||||
ASSERT(img->width == pColor->cols && img->height == pColor->rows);
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_YVYU){
|
||||
cv::Mat yuv(img->height, img->width, CV_8UC2, img->buffer);
|
||||
cv::cvtColor(yuv, *pColor, cv::COLOR_YUV2BGR_YVYU);
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_YUYV){
|
||||
cv::Mat yuv(img->height, img->width, CV_8UC2, img->buffer);
|
||||
cv::cvtColor(yuv, *pColor, cv::COLOR_YUV2BGR_YUYV);
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_RGB){
|
||||
cv::Mat rgb(img->height, img->width, CV_8UC3, img->buffer);
|
||||
cv::cvtColor(rgb, *pColor, cv::COLOR_RGB2BGR);
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_BGR){
|
||||
*pColor = cv::Mat(img->height, img->width, CV_8UC3, img->buffer).clone();
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_BAYER8GBRG ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_BAYER8BGGR ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_BAYER8GRBG ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_BAYER8RGGB)
|
||||
{
|
||||
ret = parseBayer8Frame(img, pColor, color_isp_handle);
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER10GBRG ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER10BGGR ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER10GRBG ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER10RGGB)
|
||||
{
|
||||
ret = parseBayer10Frame(img, pColor);
|
||||
}
|
||||
else if(img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER12GBRG ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER12BGGR ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER12GRBG ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER12RGGB)
|
||||
{
|
||||
ret = parseBayer12Frame(img, pColor);
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_MONO){
|
||||
cv::Mat gray(img->height, img->width, CV_8U, img->buffer);
|
||||
cv::cvtColor(gray, *pColor, cv::COLOR_GRAY2BGR);
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_CSI_MONO10){
|
||||
cv::Mat gray16(img->height, img->width, CV_16U);
|
||||
parseCsiRaw10((uchar*)img->buffer, gray16, img->width, img->height);
|
||||
*pColor = gray16.clone();
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static inline int parseImage(const TY_IMAGE_DATA* img, cv::Mat* image, TY_ISP_HANDLE color_isp_handle = NULL)
|
||||
{
|
||||
int ret = 0;
|
||||
if (img->pixelFormat == TY_PIXEL_FORMAT_JPEG){
|
||||
std::vector<uchar> _v((uchar*)img->buffer, (uchar*)img->buffer + img->size);
|
||||
*image = cv::imdecode(_v, cv::IMREAD_COLOR);
|
||||
ASSERT(img->width == image->cols && img->height == image->rows);
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_YVYU){
|
||||
cv::Mat yuv(img->height, img->width, CV_8UC2, img->buffer);
|
||||
cv::cvtColor(yuv, *image, cv::COLOR_YUV2BGR_YVYU);
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_YUYV){
|
||||
cv::Mat yuv(img->height, img->width, CV_8UC2, img->buffer);
|
||||
cv::cvtColor(yuv, *image, cv::COLOR_YUV2BGR_YUYV);
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_RGB){
|
||||
cv::Mat rgb(img->height, img->width, CV_8UC3, img->buffer);
|
||||
cv::cvtColor(rgb, *image, cv::COLOR_RGB2BGR);
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_BGR){
|
||||
*image = cv::Mat(img->height, img->width, CV_8UC3, img->buffer).clone();
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_BAYER8GBRG ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_BAYER8BGGR ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_BAYER8GRBG ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_BAYER8RGGB)
|
||||
{
|
||||
ret = parseBayer8Frame(img, image, color_isp_handle);
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER10GBRG ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER10BGGR ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER10GRBG ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER10RGGB)
|
||||
{
|
||||
ret = parseBayer10Frame(img, image);
|
||||
}
|
||||
else if(img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER12GBRG ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER12BGGR ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER12GRBG ||
|
||||
img->pixelFormat == TY_PIXEL_FORMAT_CSI_BAYER12RGGB)
|
||||
{
|
||||
ret = parseBayer12Frame(img, image);
|
||||
}
|
||||
else if(img->pixelFormat == TY_PIXEL_FORMAT_MONO) {
|
||||
*image = cv::Mat(img->height, img->width, CV_8U, img->buffer).clone();
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_CSI_MONO10){
|
||||
cv::Mat gray16(img->height, img->width, CV_16U);
|
||||
ret = parseCsiRaw10((uchar*)img->buffer, gray16, img->width, img->height);
|
||||
*image = gray16.clone();
|
||||
}
|
||||
else if(img->pixelFormat == TY_PIXEL_FORMAT_CSI_MONO12) {
|
||||
cv::Mat gray16(img->height, img->width, CV_16U);
|
||||
ret = parseCsiRaw12((uchar*)img->buffer, gray16, img->width, img->height);
|
||||
*image = gray16.clone();
|
||||
}
|
||||
else if (img->pixelFormat == TY_PIXEL_FORMAT_MONO16 || img->pixelFormat==TY_PIXEL_FORMAT_TOF_IR_MONO16){
|
||||
*image = cv::Mat(img->height, img->width, CV_16U, img->buffer).clone();
|
||||
}
|
||||
else {
|
||||
return -1;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static inline int parseFrame(const TY_FRAME_DATA& frame, cv::Mat* pDepth
|
||||
, cv::Mat* pLeftIR, cv::Mat* pRightIR
|
||||
, cv::Mat* pColor, TY_ISP_HANDLE color_isp_handle = NULL)
|
||||
{
|
||||
for (int i = 0; i < frame.validCount; i++){
|
||||
if (frame.image[i].status != TY_STATUS_OK) continue;
|
||||
|
||||
// get depth image
|
||||
if (pDepth && frame.image[i].componentID == TY_COMPONENT_DEPTH_CAM){
|
||||
if (frame.image[i].pixelFormat == TY_PIXEL_FORMAT_XYZ48) {
|
||||
*pDepth = cv::Mat(frame.image[i].height, frame.image[i].width
|
||||
, CV_16SC3, frame.image[i].buffer).clone();
|
||||
}
|
||||
else {
|
||||
*pDepth = cv::Mat(frame.image[i].height, frame.image[i].width
|
||||
, CV_16U, frame.image[i].buffer).clone();
|
||||
}
|
||||
}
|
||||
// get left ir image
|
||||
if (pLeftIR && frame.image[i].componentID == TY_COMPONENT_IR_CAM_LEFT){
|
||||
parseIrFrame(&frame.image[i], pLeftIR);
|
||||
}
|
||||
// get right ir image
|
||||
if (pRightIR && frame.image[i].componentID == TY_COMPONENT_IR_CAM_RIGHT){
|
||||
parseIrFrame(&frame.image[i], pRightIR);
|
||||
}
|
||||
// get BGR
|
||||
if (pColor && frame.image[i].componentID == TY_COMPONENT_RGB_CAM){
|
||||
parseColorFrame(&frame.image[i], pColor, color_isp_handle);
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
enum{
|
||||
PC_FILE_FORMAT_XYZ = 0,
|
||||
};
|
||||
|
||||
static void writePC_XYZ(const cv::Point3f* pnts, const cv::Vec3b *color, size_t n, FILE* fp)
|
||||
{
|
||||
if (color){
|
||||
for (size_t i = 0; i < n; i++){
|
||||
if (!std::isnan(pnts[i].x)){
|
||||
fprintf(fp, "%f %f %f %d %d %d\n", pnts[i].x, pnts[i].y, pnts[i].z, color[i][0], color[i][1], color[i][2]);
|
||||
}
|
||||
}
|
||||
}
|
||||
else{
|
||||
for (size_t i = 0; i < n; i++){
|
||||
if (!std::isnan(pnts[i].x)){
|
||||
fprintf(fp, "%f %f %f 0 0 0\n", pnts[i].x, pnts[i].y, pnts[i].z);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static void writePointCloud(const cv::Point3f* pnts, const cv::Vec3b *color, size_t n, const char* file, int format)
|
||||
{
|
||||
FILE* fp = fopen(file, "w");
|
||||
if (!fp){
|
||||
return;
|
||||
}
|
||||
|
||||
switch (format){
|
||||
case PC_FILE_FORMAT_XYZ:
|
||||
writePC_XYZ(pnts, color, n, fp);
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
fclose(fp);
|
||||
}
|
||||
#else
|
||||
|
||||
|
||||
#endif
|
||||
|
||||
class CallbackWrapper
|
||||
{
|
||||
public:
|
||||
typedef void(*TY_FRAME_CALLBACK) (TY_FRAME_DATA*, void* userdata);
|
||||
|
||||
CallbackWrapper(){
|
||||
_hDevice = NULL;
|
||||
_cb = NULL;
|
||||
_userdata = NULL;
|
||||
_exit = true;
|
||||
}
|
||||
|
||||
TY_STATUS TYRegisterCallback(TY_DEV_HANDLE hDevice, TY_FRAME_CALLBACK v, void* userdata)
|
||||
{
|
||||
_hDevice = hDevice;
|
||||
_cb = v;
|
||||
_userdata = userdata;
|
||||
_exit = false;
|
||||
_cbThread.create(&workerThread, this);
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
void TYUnregisterCallback()
|
||||
{
|
||||
if (!_exit) {
|
||||
_exit = true;
|
||||
_cbThread.destroy();
|
||||
}
|
||||
}
|
||||
|
||||
private:
|
||||
static void* workerThread(void* userdata)
|
||||
{
|
||||
CallbackWrapper* pWrapper = (CallbackWrapper*)userdata;
|
||||
TY_FRAME_DATA frame;
|
||||
|
||||
while (!pWrapper->_exit)
|
||||
{
|
||||
int err = TYFetchFrame(pWrapper->_hDevice, &frame, 100);
|
||||
if (!err) {
|
||||
pWrapper->_cb(&frame, pWrapper->_userdata);
|
||||
}
|
||||
}
|
||||
LOGI("frameCallback exit!");
|
||||
return NULL;
|
||||
}
|
||||
|
||||
TY_DEV_HANDLE _hDevice;
|
||||
TY_FRAME_CALLBACK _cb;
|
||||
void* _userdata;
|
||||
|
||||
bool _exit;
|
||||
TYThread _cbThread;
|
||||
};
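// Usage sketch (illustrative only; the function names below are hypothetical): pump frames
// through the wrapper above. The handler matches CallbackWrapper::TY_FRAME_CALLBACK, and
// parseFrame is only available when OPENCV_DEPENDENCIES is defined.
#ifdef OPENCV_DEPENDENCIES
static void exampleFrameHandler(TY_FRAME_DATA* frame, void* userdata)
{
    (void)userdata;
    cv::Mat depth, color;
    parseFrame(*frame, &depth, NULL, NULL, &color);
    if (!depth.empty()) LOGD("depth %dx%d", depth.cols, depth.rows);
}

static inline void exampleStartCallback(TY_DEV_HANDLE hDevice, CallbackWrapper& wrapper)
{
    // The wrapper spawns a worker thread that calls TYFetchFrame and forwards frames here.
    wrapper.TYRegisterCallback(hDevice, exampleFrameHandler, NULL);
}
#endif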
|
||||
|
||||
|
||||
|
||||
#ifdef _WIN32
|
||||
static int get_fps() {
|
||||
static int fps_counter = 0;
|
||||
static clock_t fps_tm = 0;
|
||||
const int kMaxCounter = 250;
|
||||
fps_counter++;
|
||||
if (fps_counter < kMaxCounter) {
|
||||
return -1;
|
||||
}
|
||||
int elapse = (clock() - fps_tm);
|
||||
int v = (int)(((float)fps_counter) / elapse * CLOCKS_PER_SEC);
|
||||
fps_tm = clock();
|
||||
|
||||
fps_counter = 0;
|
||||
return v;
|
||||
}
|
||||
#else
|
||||
static int get_fps() {
|
||||
static int fps_counter = 0;
|
||||
static clock_t fps_tm = 0;
|
||||
const int kMaxCounter = 200;
|
||||
struct timeval start;
|
||||
fps_counter++;
|
||||
if (fps_counter < kMaxCounter) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
gettimeofday(&start, NULL);
|
||||
int elapse = start.tv_sec * 1000 + start.tv_usec / 1000 - fps_tm;
|
||||
int v = (int)(((float)fps_counter) / elapse * 1000);
|
||||
gettimeofday(&start, NULL);
|
||||
fps_tm = start.tv_sec * 1000 + start.tv_usec / 1000;
|
||||
|
||||
fps_counter = 0;
|
||||
return v;
|
||||
}
|
||||
#endif
|
||||
|
||||
static std::vector<uint8_t> TYReadBinaryFile(const char* filename)
|
||||
{
|
||||
// open the file:
|
||||
std::ifstream file(filename, std::ios::binary);
|
||||
if (!file.is_open()){
|
||||
return std::vector<uint8_t>();
|
||||
}
|
||||
// Stop eating new lines in binary mode!!!
|
||||
file.unsetf(std::ios::skipws);
|
||||
|
||||
// get its size:
|
||||
std::streampos fileSize;
|
||||
|
||||
file.seekg(0, std::ios::end);
|
||||
fileSize = file.tellg();
|
||||
file.seekg(0, std::ios::beg);
|
||||
|
||||
// reserve capacity
|
||||
std::vector<uint8_t> vec;
|
||||
vec.reserve(fileSize);
|
||||
|
||||
// read the data:
|
||||
vec.insert(vec.begin(),
|
||||
std::istream_iterator<uint8_t>(file),
|
||||
std::istream_iterator<uint8_t>());
|
||||
|
||||
return vec;
|
||||
}
|
||||
|
||||
#endif
|
||||
1245
image_capture/third_party/percipio/common/crc32.cpp
vendored
Normal file
File diff suppressed because it is too large
69
image_capture/third_party/percipio/common/crc32.h
vendored
Normal file
@@ -0,0 +1,69 @@
|
||||
// //////////////////////////////////////////////////////////
|
||||
// Crc32.h
|
||||
// Copyright (c) 2011-2019 Stephan Brumme. All rights reserved.
|
||||
// Slicing-by-16 contributed by Bulat Ziganshin
|
||||
// Tableless bytewise CRC contributed by Hagai Gold
|
||||
// see http://create.stephan-brumme.com/disclaimer.html
|
||||
//
|
||||
|
||||
// if running on an embedded system, you might consider shrinking the
|
||||
// big Crc32Lookup table by undefining these lines:
|
||||
#define CRC32_USE_LOOKUP_TABLE_BYTE
|
||||
#define CRC32_USE_LOOKUP_TABLE_SLICING_BY_4
|
||||
#define CRC32_USE_LOOKUP_TABLE_SLICING_BY_8
|
||||
#define CRC32_USE_LOOKUP_TABLE_SLICING_BY_16
|
||||
// - crc32_bitwise doesn't need it at all
|
||||
// - crc32_halfbyte has its own small lookup table
|
||||
// - crc32_1byte_tableless and crc32_1byte_tableless2 don't need it at all
|
||||
// - crc32_1byte needs only Crc32Lookup[0]
|
||||
// - crc32_4bytes needs only Crc32Lookup[0..3]
|
||||
// - crc32_8bytes needs only Crc32Lookup[0..7]
|
||||
// - crc32_4x8bytes needs only Crc32Lookup[0..7]
|
||||
// - crc32_16bytes needs all of Crc32Lookup
|
||||
// using the aforementioned #defines the table is automatically fitted to your needs
|
||||
|
||||
// uint8_t, uint32_t, int32_t
|
||||
#include <stdint.h>
|
||||
// size_t
|
||||
#include <cstddef>
|
||||
|
||||
// crc32_fast selects the fastest algorithm depending on flags (CRC32_USE_LOOKUP_...)
|
||||
/// compute CRC32 using the fastest algorithm for large datasets on modern CPUs
|
||||
uint32_t crc32_fast (const void* data, size_t length, uint32_t previousCrc32 = 0);
|
||||
|
||||
/// merge two CRC32 such that result = crc32(dataB, lengthB, crc32(dataA, lengthA))
|
||||
uint32_t crc32_combine (uint32_t crcA, uint32_t crcB, size_t lengthB);
|
||||
|
||||
/// compute CRC32 (bitwise algorithm)
|
||||
uint32_t crc32_bitwise (const void* data, size_t length, uint32_t previousCrc32 = 0);
|
||||
/// compute CRC32 (half-byte algorithm)
|
||||
uint32_t crc32_halfbyte(const void* data, size_t length, uint32_t previousCrc32 = 0);
|
||||
|
||||
#ifdef CRC32_USE_LOOKUP_TABLE_BYTE
|
||||
/// compute CRC32 (standard algorithm)
|
||||
uint32_t crc32_1byte (const void* data, size_t length, uint32_t previousCrc32 = 0);
|
||||
#endif
|
||||
|
||||
/// compute CRC32 (byte algorithm) without lookup tables
|
||||
uint32_t crc32_1byte_tableless (const void* data, size_t length, uint32_t previousCrc32 = 0);
|
||||
/// compute CRC32 (byte algorithm) without lookup tables
|
||||
uint32_t crc32_1byte_tableless2(const void* data, size_t length, uint32_t previousCrc32 = 0);
|
||||
|
||||
#ifdef CRC32_USE_LOOKUP_TABLE_SLICING_BY_4
|
||||
/// compute CRC32 (Slicing-by-4 algorithm)
|
||||
uint32_t crc32_4bytes (const void* data, size_t length, uint32_t previousCrc32 = 0);
|
||||
#endif
|
||||
|
||||
#ifdef CRC32_USE_LOOKUP_TABLE_SLICING_BY_8
|
||||
/// compute CRC32 (Slicing-by-8 algorithm)
|
||||
uint32_t crc32_8bytes (const void* data, size_t length, uint32_t previousCrc32 = 0);
|
||||
/// compute CRC32 (Slicing-by-8 algorithm), unroll inner loop 4 times
|
||||
uint32_t crc32_4x8bytes(const void* data, size_t length, uint32_t previousCrc32 = 0);
|
||||
#endif
|
||||
|
||||
#ifdef CRC32_USE_LOOKUP_TABLE_SLICING_BY_16
|
||||
/// compute CRC32 (Slicing-by-16 algorithm)
|
||||
uint32_t crc32_16bytes (const void* data, size_t length, uint32_t previousCrc32 = 0);
|
||||
/// compute CRC32 (Slicing-by-16 algorithm, prefetch upcoming data blocks)
|
||||
uint32_t crc32_16bytes_prefetch(const void* data, size_t length, uint32_t previousCrc32 = 0, size_t prefetchAhead = 256);
|
||||
#endif
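The variants declared above differ only in speed and table size; they compute the same CRC-32 value. A small sanity sketch (illustrative, not part of the vendored header):

```cpp
#include <cassert>
#include <cstring>
#include "crc32.h"

int main()
{
    const char msg[] = "123456789";            // commonly used CRC-32 check string
    uint32_t a = crc32_bitwise(msg, std::strlen(msg));
    uint32_t b = crc32_fast(msg, std::strlen(msg));
    assert(a == b);                            // all variants agree on the same input
    // For the standard reflected polynomial this value is the usual 0xCBF43926 check value,
    // which makes the assertion double as a quick self-test.
    return 0;
}
```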
|
||||
464
image_capture/third_party/percipio/common/huffman.cpp
vendored
Normal file
@@ -0,0 +1,464 @@
|
||||
#include <iostream>
|
||||
#include <cstdio>
|
||||
#include <string>
|
||||
#include <algorithm>
|
||||
#include <cstdlib>
|
||||
#include <cstring>
|
||||
#include <iostream>
|
||||
#include <sstream>
|
||||
#include <iomanip>
|
||||
|
||||
#include <fstream>
|
||||
#include <iostream>
|
||||
|
||||
#ifndef WIN32
|
||||
#include <dirent.h>
|
||||
#endif
|
||||
|
||||
#include "huffman.h"
|
||||
|
||||
struct ersel{ //this structure will be used to create the translation tree
|
||||
ersel *left,*right;
|
||||
long int number;
|
||||
unsigned char character;
|
||||
std::string bit;
|
||||
};
|
||||
|
||||
struct translation{
|
||||
translation *zero,*one;
|
||||
unsigned char character;
|
||||
};
|
||||
|
||||
bool erselcompare0(ersel a,ersel b){
|
||||
return a.number<b.number;
|
||||
}
|
||||
|
||||
const static unsigned char check=0b10000000;
|
||||
|
||||
//below function is used for writing the uChar to compressed file
|
||||
//It does not write it directly as one byte instead it mixes uChar and current byte, writes 8 bits of it
|
||||
//and puts the rest to curent byte for later use
|
||||
void write_from_uChar(unsigned char uChar, unsigned char &current_byte, int current_bit_count, std::stringstream& ss){
    current_byte <<= 8 - current_bit_count;
    current_byte |= (uChar >> current_bit_count);
    ss.write(reinterpret_cast<const char*>(&current_byte), sizeof(current_byte));
    current_byte = uChar;
}
|
||||
|
||||
//below function is writing number of files we re going to translate inside current folder to compressed file's 2 bytes
|
||||
//It is done like this to make sure that it can work on little, big or middle-endian systems
|
||||
void write_file_count(int file_count, unsigned char &current_byte, int current_bit_count, std::stringstream& ss){
    unsigned char temp = file_count % 256;
    write_from_uChar(temp, current_byte, current_bit_count, ss);
    temp = file_count / 256;
    write_from_uChar(temp, current_byte, current_bit_count, ss);
}
|
||||
|
||||
//This function is writing byte count of current input file to compressed file using 8 bytes
|
||||
//It is done like this to make sure that it can work on little, big or middle-endian systems
|
||||
void write_file_size(long int size, unsigned char &current_byte, int current_bit_count, std::stringstream& ss){
    for(int i = 0; i < 8; i++){
        write_from_uChar(size % 256, current_byte, current_bit_count, ss);
        size /= 256;
    }
}
|
||||
|
||||
// Below function translates and writes bytes from current input file to the compressed file.
|
||||
void write_the_file_content(const std::string& text, std::string *str_arr, unsigned char &current_byte, int &current_bit_count, std::stringstream& ss){
    unsigned char x;
    char *str_pointer;
    long size = text.length();
    x = text.at(0);
    for(long int i = 0; i < size; i++){
        str_pointer = &str_arr[x][0];
        while(*str_pointer){
            if(current_bit_count == 8){
                ss.write(reinterpret_cast<const char*>(&current_byte), sizeof(current_byte));
                current_bit_count = 0;
            }
            switch(*str_pointer){
                case '1': current_byte <<= 1; current_byte |= 1; current_bit_count++; break;
                case '0': current_byte <<= 1; current_bit_count++; break;
                default: std::cout << "An error has occurred" << std::endl << "Process has been aborted";
                    exit(2);
            }
            str_pointer++;
        }
        if(i != size - 1) {
            x = (unsigned char)text.at(i + 1);
        }
    }
}
|
||||
|
||||
//checks if next input is either a file or a folder
|
||||
//returns 1 if it is a file
|
||||
//returns 0 if it is a folder
|
||||
bool this_is_a_file(unsigned char &current_byte, int &current_bit_count, std::stringstream& ss){
    bool val;
    if(current_bit_count == 0){
        ss.read((char*)&current_byte, 1);
        current_bit_count = 8;
    }
    val = current_byte & check;
    current_byte <<= 1;
    current_bit_count--;
    return val;
}
|
||||
|
||||
// process_8_bits_NUMBER reads 8 successive bits from compressed file
|
||||
//(does not have to be in the same byte)
|
||||
// and returns it in unsigned char form
|
||||
unsigned char process_8_bits_NUMBER(unsigned char &current_byte, int current_bit_count, std::stringstream& ss){
    unsigned char val, temp_byte;
    ss.read((char*)&temp_byte, 1);
    val = current_byte | (temp_byte >> current_bit_count);
    current_byte = temp_byte << (8 - current_bit_count);
    return val;
}
|
||||
|
||||
// returns file's size
|
||||
long int read_file_size(unsigned char &current_byte, int current_bit_count, std::stringstream& ss){
    long int size = 0;
    {
        long int multiplier = 1;
        for(int i = 0; i < 8; i++){
            size += process_8_bits_NUMBER(current_byte, current_bit_count, ss) * multiplier;
            multiplier *= 256;
        }
    }
    return size;
    // Size was written to the compressed file from the least significant byte
    // to the most significant byte so that the system's endianness
    // does not affect the process, which is why size information is read back this way
}
|
||||
|
||||
|
||||
// This function translates compressed file from info that is now stored in the translation tree
|
||||
// then writes it to a newly created file
|
||||
void translate_file(long int size, unsigned char &current_byte, int &current_bit_count, translation *root, std::stringstream& ss, std::string& text){
    translation *node;
    for(long int i = 0; i < size; i++){
        node = root;
        while(node->zero || node->one){
            if(current_bit_count == 0){
                ss.read((char*)&current_byte, 1);
                current_bit_count = 8;
            }
            if(current_byte & check){
                node = node->one;
            }
            else{
                node = node->zero;
            }
            current_byte <<= 1;
            current_bit_count--;
        }
        text.at(i) = node->character;
    }
}
|
||||
|
||||
|
||||
// process_n_bits_TO_STRING function reads n successive bits from the compressed file
|
||||
// and stores it in a leaf of the translation tree,
|
||||
// after creating that leaf and sometimes after creating nodes that are binding that leaf to the tree.
|
||||
void process_n_bits_TO_STRING(unsigned char &current_byte, int n, int &current_bit_count, std::stringstream& ss, translation *node, unsigned char uChar){
    for(int i = 0; i < n; i++){
        if(current_bit_count == 0){
            ss.read((char*)&current_byte, 1);
            current_bit_count = 8;
        }

        switch(current_byte & check){
            case 0:
                if(!(node->zero)){
                    node->zero = (translation*)malloc(sizeof(translation));
                    node->zero->zero = NULL;
                    node->zero->one = NULL;
                }
                node = node->zero;
                break;
            case 128:
                if(!(node->one)){
                    node->one = (translation*)malloc(sizeof(translation));
                    node->one->zero = NULL;
                    node->one->one = NULL;
                }
                node = node->one;
                break;
        }
        current_byte <<= 1;
        current_bit_count--;
    }
    node->character = uChar;
}
|
||||
|
||||
// burn_tree function is used for deallocating translation tree
|
||||
void burn_tree(translation *node){
|
||||
if(node->zero)burn_tree(node->zero);
|
||||
if(node->one)burn_tree(node->one);
|
||||
free(node);
|
||||
}
|
||||
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
|
||||
bool TextHuffmanCompression(const std::string& text, std::string& result)
|
||||
{
|
||||
unsigned char x; //these are temp variables to take input from the file
|
||||
long int total_size=0,size;
|
||||
|
||||
std::stringstream ss;
|
||||
|
||||
long int number[256];
|
||||
long int total_bits=0;
|
||||
unsigned char letter_count=0;
|
||||
for(long int *i=number;i<number+256;i++){
|
||||
*i=0;
|
||||
}
|
||||
|
||||
total_bits+=16+9;
|
||||
|
||||
size = text.length();
|
||||
total_size += size;
|
||||
total_bits+=64;
|
||||
|
||||
for(long int j=0;j<size;j++){ //counting usage frequency of unique bytes inside the file
    x = (unsigned char)text.at(j); // fetch before counting so every byte, including the last one, is tallied exactly once
    number[x]++;
}
|
||||
|
||||
for(long int *i=number;i<number+256;i++){
|
||||
if(*i){
|
||||
letter_count++;
|
||||
}
|
||||
}
|
||||
//---------------------------------------------
|
||||
|
||||
|
||||
// creating the base of translation array(and then sorting them by ascending frequencies
|
||||
// this array of type 'ersel' will not be used after calculating transformed versions of every unique byte
|
||||
// instead its info will be written in a new string array called str_arr
|
||||
ersel* array = new ersel[letter_count*2-1];
|
||||
ersel *e=array;
|
||||
for(long int *i=number;i<number+256;i++){
|
||||
if(*i){
|
||||
e->right=NULL;
|
||||
e->left=NULL;
|
||||
e->number=*i;
|
||||
e->character=i-number;
|
||||
e++;
|
||||
}
|
||||
}
|
||||
std::sort(array,array+letter_count,erselcompare0);
|
||||
//---------------------------------------------
|
||||
|
||||
// min1 and min2 represents nodes that has minimum weights
|
||||
// isleaf is the pointer that traverses through leafs and
|
||||
// notleaf is the pointer that traverses through nodes that are not leafs
|
||||
ersel *min1=array,*min2=array+1,*current=array+letter_count,*notleaf=array+letter_count,*isleaf=array+2;
|
||||
for(int i=0;i<letter_count-1;i++){
|
||||
current->number=min1->number+min2->number;
|
||||
current->left=min1;
|
||||
current->right=min2;
|
||||
min1->bit="1";
|
||||
min2->bit="0";
|
||||
current++;
|
||||
|
||||
if(isleaf>=array+letter_count){
|
||||
min1=notleaf;
|
||||
notleaf++;
|
||||
}
|
||||
else{
|
||||
if(isleaf->number<notleaf->number){
|
||||
min1=isleaf;
|
||||
isleaf++;
|
||||
}
|
||||
else{
|
||||
min1=notleaf;
|
||||
notleaf++;
|
||||
}
|
||||
}
|
||||
|
||||
if(isleaf>=array+letter_count){
|
||||
min2=notleaf;
|
||||
notleaf++;
|
||||
}
|
||||
else if(notleaf>=current){
|
||||
min2=isleaf;
|
||||
isleaf++;
|
||||
}
|
||||
else{
|
||||
if(isleaf->number<notleaf->number){
|
||||
min2=isleaf;
|
||||
isleaf++;
|
||||
}
|
||||
else{
|
||||
min2=notleaf;
|
||||
notleaf++;
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
for(e=array+letter_count*2-2;e>array-1;e--){
|
||||
if(e->left){
|
||||
e->left->bit=e->bit+e->left->bit;
|
||||
}
|
||||
if(e->right){
|
||||
e->right->bit=e->bit+e->right->bit;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// In this block we are adding the bytes from root to leafs
|
||||
// and after this is done every leaf will have a transformation string that corresponds to it
|
||||
// Note: It is actually a very neat process. Using 4th and 5th code blocks, we are making sure that
|
||||
// the most used character is using least number of bits.
|
||||
// Specific number of bits we re going to use for that character is determined by weight distribution
|
||||
//---------------------------------------------
|
||||
|
||||
int current_bit_count=0;
|
||||
unsigned char current_byte = 0; // start from a defined value before bit packing begins
|
||||
ss.write(reinterpret_cast<const char*>(&letter_count), sizeof(letter_count));
|
||||
total_bits+=8;
|
||||
//----------------------------------------
|
||||
|
||||
char *str_pointer;
|
||||
unsigned char len,current_character;
|
||||
std::string str_arr[256];
|
||||
for(e=array;e<array+letter_count;e++){
|
||||
str_arr[(e->character)]=e->bit; //we are putting the transformation string to str_arr array to make the compression process more time efficient
|
||||
len=e->bit.length();
|
||||
current_character=e->character;
|
||||
|
||||
write_from_uChar(current_character,current_byte,current_bit_count,ss);
|
||||
write_from_uChar(len,current_byte,current_bit_count,ss);
|
||||
|
||||
total_bits+=len+16;
|
||||
// above lines will write the byte and the number of bits
|
||||
// we re going to need to represent this specific byte's transformated version
|
||||
// after here we are going to write the transformed version of the number bit by bit.
|
||||
|
||||
str_pointer=&e->bit[0];
|
||||
while(*str_pointer){
|
||||
if(current_bit_count==8){
|
||||
ss.write(reinterpret_cast<const char*>(&current_byte), sizeof(current_byte));
|
||||
current_bit_count=0;
|
||||
}
|
||||
switch(*str_pointer){
|
||||
case '1':current_byte<<=1;current_byte|=1;current_bit_count++;break;
|
||||
case '0':current_byte<<=1;current_bit_count++;break;
|
||||
default:std::cout<<"An error has occurred"<<std::endl<<"Compression process aborted"<<std::endl;
|
||||
return false;
|
||||
}
|
||||
str_pointer++;
|
||||
}
|
||||
|
||||
total_bits+=len*(e->number);
|
||||
}
|
||||
if(total_bits%8){
|
||||
total_bits=(total_bits/8+1)*8;
|
||||
// from this point on total bits doesnt represent total bits
|
||||
// instead it represents 8*number_of_bytes we are gonna use on our compressed file
|
||||
}
|
||||
|
||||
delete[]array;
|
||||
// Above loop writes the translation script into compressed file and the str_arr array
|
||||
//----------------------------------------
|
||||
|
||||
|
||||
std::cout<<"The size of the sum of ORIGINAL files is: "<<total_size<<" bytes"<<std::endl;
|
||||
std::cout<<"The size of the COMPRESSED file will be: "<<total_bits/8<<" bytes"<<std::endl;
|
||||
std::cout<<"Compressed file's size will be [%"<<100*((float)total_bits/8/total_size)<<"] of the original file"<<std::endl;
|
||||
if(total_bits/8>total_size){
|
||||
std::cout<<std::endl<<"COMPRESSED FILE'S SIZE WILL BE HIGHER THAN THE SUM OF ORIGINALS"<<std::endl<<std::endl;
|
||||
}
|
||||
|
||||
//-------------writes fourth---------------
|
||||
write_file_count(1,current_byte,current_bit_count,ss);
|
||||
//---------------------------------------
|
||||
|
||||
|
||||
//-------------writes fifth--------------
|
||||
if(current_bit_count==8){
|
||||
ss.write(reinterpret_cast<const char*>(&current_byte), sizeof(current_byte));
|
||||
current_bit_count=0;
|
||||
}
|
||||
current_byte<<=1;
|
||||
current_byte|=1;
|
||||
current_bit_count++;
|
||||
write_file_size(size,current_byte,current_bit_count,ss); //writes sixth
|
||||
write_the_file_content(text,str_arr,current_byte,current_bit_count,ss); //writes eighth
|
||||
|
||||
if(current_bit_count==8){ // here we are writing the last byte of the file
|
||||
ss.write(reinterpret_cast<const char*>(&current_byte), sizeof(current_byte));
|
||||
}
|
||||
else{
|
||||
current_byte<<=8-current_bit_count;
|
||||
ss.write(reinterpret_cast<const char*>(&current_byte), sizeof(current_byte));
|
||||
}
|
||||
|
||||
result = ss.str();
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
bool TextHuffmanDecompression(const std::string& huffman, std::string& text)
|
||||
{
|
||||
unsigned char letter_count=0;
|
||||
std::stringstream ss(huffman);
|
||||
|
||||
//---------reads .first-----------
|
||||
ss.read((char*)&letter_count, 1);
|
||||
|
||||
int m_letter_count;
|
||||
if(letter_count==0)
|
||||
m_letter_count=256;
|
||||
else
|
||||
m_letter_count = letter_count;
|
||||
//-------------------------------
|
||||
|
||||
//----------------reads .second---------------------
|
||||
// and stores transformation info into binary translation tree for later use
|
||||
unsigned char current_byte=0,current_character;
|
||||
int current_bit_count=0,len;
|
||||
translation *root=(translation*)malloc(sizeof(translation));
|
||||
root->zero=NULL;
|
||||
root->one=NULL;
|
||||
|
||||
for(int i=0;i<m_letter_count;i++){
|
||||
current_character=process_8_bits_NUMBER(current_byte,current_bit_count,ss);
|
||||
len=process_8_bits_NUMBER(current_byte,current_bit_count,ss);
|
||||
|
||||
if(len==0)len=256;
|
||||
process_n_bits_TO_STRING(current_byte,len,current_bit_count,ss,root,current_character);
|
||||
}
|
||||
//--------------------------------------------------
|
||||
|
||||
|
||||
|
||||
// ---------reads .third----------
|
||||
//reads how many folders/files the program is going to create inside the main folder
|
||||
int file_count;
|
||||
file_count=process_8_bits_NUMBER(current_byte,current_bit_count,ss);
|
||||
file_count+=256*process_8_bits_NUMBER(current_byte,current_bit_count,ss);
|
||||
if(file_count != 1) {
|
||||
//
|
||||
return false;
|
||||
}
|
||||
|
||||
// File count was written to the compressed file from the least significant byte
// to the most significant byte so that the system's endianness
// does not affect the process, which is why this information is read back this way
|
||||
if(this_is_a_file(current_byte,current_bit_count,ss)){ // reads .fifth and goes inside if this is a file
|
||||
long int size=read_file_size(current_byte,current_bit_count,ss); // reads .sixth
|
||||
text.resize(size);
|
||||
translate_file(size,current_byte,current_bit_count,root,ss, text); //translates .eighth
|
||||
|
||||
burn_tree(root);
|
||||
return true;
|
||||
}
|
||||
|
||||
burn_tree(root);
|
||||
return false;
|
||||
}
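Since the two entry points above are symmetric, a quick round-trip check is the easiest way to exercise them. A minimal sketch (illustrative; the sample JSON string is arbitrary):

```cpp
#include <cassert>
#include <string>
#include "huffman.h"

int main()
{
    const std::string original = "{\"laser_power\": 80, \"trigger_mode\": 1}";
    std::string packed, restored;
    assert(TextHuffmanCompression(original, packed));    // text -> Huffman bit stream
    assert(TextHuffmanDecompression(packed, restored));  // bit stream -> text
    assert(restored == original);                        // the round trip must be lossless
    return 0;
}
```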
|
||||
5
image_capture/third_party/percipio/common/huffman.h
vendored
Normal file
@@ -0,0 +1,5 @@
|
||||
#pragma once
|
||||
#include <string>
|
||||
|
||||
bool TextHuffmanCompression(const std::string& text, std::string& result);
|
||||
bool TextHuffmanDecompression(const std::string& huffman, std::string& text);
|
||||
790
image_capture/third_party/percipio/common/json11.cpp
vendored
Normal file
@@ -0,0 +1,790 @@
|
||||
/* Copyright (c) 2013 Dropbox, Inc.
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
#include "json11.hpp"
|
||||
#include <cassert>
|
||||
#include <cmath>
|
||||
#include <cstdlib>
|
||||
#include <cstdio>
|
||||
#include <limits>
|
||||
|
||||
namespace json11 {
|
||||
|
||||
static const int max_depth = 200;
|
||||
|
||||
using std::string;
|
||||
using std::vector;
|
||||
using std::map;
|
||||
using std::make_shared;
|
||||
using std::initializer_list;
|
||||
using std::move;
|
||||
|
||||
/* Helper for representing null - just a do-nothing struct, plus comparison
|
||||
* operators so the helpers in JsonValue work. We can't use nullptr_t because
|
||||
* it may not be orderable.
|
||||
*/
|
||||
struct NullStruct {
|
||||
bool operator==(NullStruct) const { return true; }
|
||||
bool operator<(NullStruct) const { return false; }
|
||||
};
|
||||
|
||||
/* * * * * * * * * * * * * * * * * * * *
|
||||
* Serialization
|
||||
*/
|
||||
|
||||
static void dump(NullStruct, string &out) {
|
||||
out += "null";
|
||||
}
|
||||
|
||||
static void dump(double value, string &out) {
|
||||
if (std::isfinite(value)) {
|
||||
char buf[32];
|
||||
snprintf(buf, sizeof buf, "%.17g", value);
|
||||
out += buf;
|
||||
} else {
|
||||
out += "null";
|
||||
}
|
||||
}
|
||||
|
||||
static void dump(int value, string &out) {
|
||||
char buf[32];
|
||||
snprintf(buf, sizeof buf, "%d", value);
|
||||
out += buf;
|
||||
}
|
||||
|
||||
static void dump(bool value, string &out) {
|
||||
out += value ? "true" : "false";
|
||||
}
|
||||
|
||||
static void dump(const string &value, string &out) {
|
||||
out += '"';
|
||||
for (size_t i = 0; i < value.length(); i++) {
|
||||
const char ch = value[i];
|
||||
if (ch == '\\') {
|
||||
out += "\\\\";
|
||||
} else if (ch == '"') {
|
||||
out += "\\\"";
|
||||
} else if (ch == '\b') {
|
||||
out += "\\b";
|
||||
} else if (ch == '\f') {
|
||||
out += "\\f";
|
||||
} else if (ch == '\n') {
|
||||
out += "\\n";
|
||||
} else if (ch == '\r') {
|
||||
out += "\\r";
|
||||
} else if (ch == '\t') {
|
||||
out += "\\t";
|
||||
} else if (static_cast<uint8_t>(ch) <= 0x1f) {
|
||||
char buf[8];
|
||||
snprintf(buf, sizeof buf, "\\u%04x", ch);
|
||||
out += buf;
|
||||
} else if (static_cast<uint8_t>(ch) == 0xe2 && static_cast<uint8_t>(value[i+1]) == 0x80
|
||||
&& static_cast<uint8_t>(value[i+2]) == 0xa8) {
|
||||
out += "\\u2028";
|
||||
i += 2;
|
||||
} else if (static_cast<uint8_t>(ch) == 0xe2 && static_cast<uint8_t>(value[i+1]) == 0x80
|
||||
&& static_cast<uint8_t>(value[i+2]) == 0xa9) {
|
||||
out += "\\u2029";
|
||||
i += 2;
|
||||
} else {
|
||||
out += ch;
|
||||
}
|
||||
}
|
||||
out += '"';
|
||||
}
|
||||
|
||||
static void dump(const Json::array &values, string &out) {
|
||||
bool first = true;
|
||||
out += "[";
|
||||
for (const auto &value : values) {
|
||||
if (!first)
|
||||
out += ", ";
|
||||
value.dump(out);
|
||||
first = false;
|
||||
}
|
||||
out += "]";
|
||||
}
|
||||
|
||||
static void dump(const Json::object &values, string &out) {
|
||||
bool first = true;
|
||||
out += "{";
|
||||
for (const auto &kv : values) {
|
||||
if (!first)
|
||||
out += ", ";
|
||||
dump(kv.first, out);
|
||||
out += ": ";
|
||||
kv.second.dump(out);
|
||||
first = false;
|
||||
}
|
||||
out += "}";
|
||||
}
|
||||
|
||||
void Json::dump(string &out) const {
|
||||
m_ptr->dump(out);
|
||||
}
|
||||
|
||||
/* * * * * * * * * * * * * * * * * * * *
|
||||
* Value wrappers
|
||||
*/
|
||||
|
||||
template <Json::Type tag, typename T>
|
||||
class Value : public JsonValue {
|
||||
protected:
|
||||
|
||||
// Constructors
|
||||
explicit Value(const T &value) : m_value(value) {}
|
||||
explicit Value(T &&value) : m_value(move(value)) {}
|
||||
|
||||
// Get type tag
|
||||
Json::Type type() const override {
|
||||
return tag;
|
||||
}
|
||||
|
||||
// Comparisons
|
||||
bool equals(const JsonValue * other) const override {
|
||||
return m_value == static_cast<const Value<tag, T> *>(other)->m_value;
|
||||
}
|
||||
bool less(const JsonValue * other) const override {
|
||||
return m_value < static_cast<const Value<tag, T> *>(other)->m_value;
|
||||
}
|
||||
|
||||
const T m_value;
|
||||
void dump(string &out) const override { json11::dump(m_value, out); }
|
||||
};
|
||||
|
||||
class JsonDouble final : public Value<Json::NUMBER, double> {
|
||||
double number_value() const override { return m_value; }
|
||||
int int_value() const override { return static_cast<int>(m_value); }
|
||||
bool equals(const JsonValue * other) const override { return m_value == other->number_value(); }
|
||||
bool less(const JsonValue * other) const override { return m_value < other->number_value(); }
|
||||
public:
|
||||
explicit JsonDouble(double value) : Value(value) {}
|
||||
};
|
||||
|
||||
class JsonInt final : public Value<Json::NUMBER, int> {
|
||||
double number_value() const override { return m_value; }
|
||||
int int_value() const override { return m_value; }
|
||||
bool equals(const JsonValue * other) const override { return m_value == other->number_value(); }
|
||||
bool less(const JsonValue * other) const override { return m_value < other->number_value(); }
|
||||
public:
|
||||
explicit JsonInt(int value) : Value(value) {}
|
||||
};
|
||||
|
||||
class JsonBoolean final : public Value<Json::BOOL, bool> {
|
||||
bool bool_value() const override { return m_value; }
|
||||
public:
|
||||
explicit JsonBoolean(bool value) : Value(value) {}
|
||||
};
|
||||
|
||||
class JsonString final : public Value<Json::STRING, string> {
|
||||
const string &string_value() const override { return m_value; }
|
||||
public:
|
||||
explicit JsonString(const string &value) : Value(value) {}
|
||||
explicit JsonString(string &&value) : Value(move(value)) {}
|
||||
};
|
||||
|
||||
class JsonArray final : public Value<Json::ARRAY, Json::array> {
|
||||
const Json::array &array_items() const override { return m_value; }
|
||||
const Json & operator[](size_t i) const override;
|
||||
public:
|
||||
explicit JsonArray(const Json::array &value) : Value(value) {}
|
||||
explicit JsonArray(Json::array &&value) : Value(move(value)) {}
|
||||
};
|
||||
|
||||
class JsonObject final : public Value<Json::OBJECT, Json::object> {
|
||||
const Json::object &object_items() const override { return m_value; }
|
||||
const Json & operator[](const string &key) const override;
|
||||
public:
|
||||
explicit JsonObject(const Json::object &value) : Value(value) {}
|
||||
explicit JsonObject(Json::object &&value) : Value(move(value)) {}
|
||||
};
|
||||
|
||||
class JsonNull final : public Value<Json::NUL, NullStruct> {
|
||||
public:
|
||||
JsonNull() : Value({}) {}
|
||||
};
|
||||
|
||||
/* * * * * * * * * * * * * * * * * * * *
|
||||
* Static globals - static-init-safe
|
||||
*/
|
||||
struct Statics {
|
||||
const std::shared_ptr<JsonValue> null = make_shared<JsonNull>();
|
||||
const std::shared_ptr<JsonValue> t = make_shared<JsonBoolean>(true);
|
||||
const std::shared_ptr<JsonValue> f = make_shared<JsonBoolean>(false);
|
||||
const string empty_string;
|
||||
const vector<Json> empty_vector;
|
||||
const map<string, Json> empty_map;
|
||||
Statics() {}
|
||||
};
|
||||
|
||||
static const Statics & statics() {
|
||||
static const Statics s {};
|
||||
return s;
|
||||
}
|
||||
|
||||
static const Json & static_null() {
|
||||
// This has to be separate, not in Statics, because Json() accesses statics().null.
|
||||
static const Json json_null;
|
||||
return json_null;
|
||||
}
|
||||
|
||||
/* * * * * * * * * * * * * * * * * * * *
|
||||
* Constructors
|
||||
*/
|
||||
|
||||
Json::Json() noexcept : m_ptr(statics().null) {}
|
||||
Json::Json(std::nullptr_t) noexcept : m_ptr(statics().null) {}
|
||||
Json::Json(double value) : m_ptr(make_shared<JsonDouble>(value)) {}
|
||||
Json::Json(int value) : m_ptr(make_shared<JsonInt>(value)) {}
|
||||
Json::Json(bool value) : m_ptr(value ? statics().t : statics().f) {}
|
||||
Json::Json(const string &value) : m_ptr(make_shared<JsonString>(value)) {}
|
||||
Json::Json(string &&value) : m_ptr(make_shared<JsonString>(move(value))) {}
|
||||
Json::Json(const char * value) : m_ptr(make_shared<JsonString>(value)) {}
|
||||
Json::Json(const Json::array &values) : m_ptr(make_shared<JsonArray>(values)) {}
|
||||
Json::Json(Json::array &&values) : m_ptr(make_shared<JsonArray>(move(values))) {}
|
||||
Json::Json(const Json::object &values) : m_ptr(make_shared<JsonObject>(values)) {}
|
||||
Json::Json(Json::object &&values) : m_ptr(make_shared<JsonObject>(move(values))) {}
|
||||
|
||||
/* * * * * * * * * * * * * * * * * * * *
|
||||
* Accessors
|
||||
*/
|
||||
|
||||
Json::Type Json::type() const { return m_ptr->type(); }
|
||||
double Json::number_value() const { return m_ptr->number_value(); }
|
||||
int Json::int_value() const { return m_ptr->int_value(); }
|
||||
bool Json::bool_value() const { return m_ptr->bool_value(); }
|
||||
const string & Json::string_value() const { return m_ptr->string_value(); }
|
||||
const vector<Json> & Json::array_items() const { return m_ptr->array_items(); }
|
||||
const map<string, Json> & Json::object_items() const { return m_ptr->object_items(); }
|
||||
const Json & Json::operator[] (size_t i) const { return (*m_ptr)[i]; }
|
||||
const Json & Json::operator[] (const string &key) const { return (*m_ptr)[key]; }
|
||||
|
||||
double JsonValue::number_value() const { return 0; }
|
||||
int JsonValue::int_value() const { return 0; }
|
||||
bool JsonValue::bool_value() const { return false; }
|
||||
const string & JsonValue::string_value() const { return statics().empty_string; }
|
||||
const vector<Json> & JsonValue::array_items() const { return statics().empty_vector; }
|
||||
const map<string, Json> & JsonValue::object_items() const { return statics().empty_map; }
|
||||
const Json & JsonValue::operator[] (size_t) const { return static_null(); }
|
||||
const Json & JsonValue::operator[] (const string &) const { return static_null(); }
|
||||
|
||||
const Json & JsonObject::operator[] (const string &key) const {
|
||||
auto iter = m_value.find(key);
|
||||
return (iter == m_value.end()) ? static_null() : iter->second;
|
||||
}
|
||||
const Json & JsonArray::operator[] (size_t i) const {
|
||||
if (i >= m_value.size()) return static_null();
|
||||
else return m_value[i];
|
||||
}
|
||||
|
||||
/* * * * * * * * * * * * * * * * * * * *
|
||||
* Comparison
|
||||
*/
|
||||
|
||||
bool Json::operator== (const Json &other) const {
|
||||
if (m_ptr == other.m_ptr)
|
||||
return true;
|
||||
if (m_ptr->type() != other.m_ptr->type())
|
||||
return false;
|
||||
|
||||
return m_ptr->equals(other.m_ptr.get());
|
||||
}
|
||||
|
||||
bool Json::operator< (const Json &other) const {
|
||||
if (m_ptr == other.m_ptr)
|
||||
return false;
|
||||
if (m_ptr->type() != other.m_ptr->type())
|
||||
return m_ptr->type() < other.m_ptr->type();
|
||||
|
||||
return m_ptr->less(other.m_ptr.get());
|
||||
}
|
||||
|
||||
/* * * * * * * * * * * * * * * * * * * *
|
||||
* Parsing
|
||||
*/
|
||||
|
||||
/* esc(c)
|
||||
*
|
||||
* Format char c suitable for printing in an error message.
|
||||
*/
|
||||
static inline string esc(char c) {
|
||||
char buf[12];
|
||||
if (static_cast<uint8_t>(c) >= 0x20 && static_cast<uint8_t>(c) <= 0x7f) {
|
||||
snprintf(buf, sizeof buf, "'%c' (%d)", c, c);
|
||||
} else {
|
||||
snprintf(buf, sizeof buf, "(%d)", c);
|
||||
}
|
||||
return string(buf);
|
||||
}
|
||||
|
||||
static inline bool in_range(long x, long lower, long upper) {
|
||||
return (x >= lower && x <= upper);
|
||||
}
|
||||
|
||||
namespace {
|
||||
/* JsonParser
|
||||
*
|
||||
* Object that tracks all state of an in-progress parse.
|
||||
*/
|
||||
struct JsonParser final {
|
||||
|
||||
/* State
|
||||
*/
|
||||
const string &str;
|
||||
size_t i;
|
||||
string &err;
|
||||
bool failed;
|
||||
const JsonParse strategy;
|
||||
|
||||
/* fail(msg, err_ret = Json())
|
||||
*
|
||||
* Mark this parse as failed.
|
||||
*/
|
||||
Json fail(string &&msg) {
|
||||
return fail(move(msg), Json());
|
||||
}
|
||||
|
||||
template <typename T>
|
||||
T fail(string &&msg, const T err_ret) {
|
||||
if (!failed)
|
||||
err = std::move(msg);
|
||||
failed = true;
|
||||
return err_ret;
|
||||
}
|
||||
|
||||
/* consume_whitespace()
|
||||
*
|
||||
* Advance until the current character is non-whitespace.
|
||||
*/
|
||||
void consume_whitespace() {
|
||||
while (str[i] == ' ' || str[i] == '\r' || str[i] == '\n' || str[i] == '\t')
|
||||
i++;
|
||||
}
|
||||
|
||||
/* consume_comment()
|
||||
*
|
||||
* Advance comments (c-style inline and multiline).
|
||||
*/
|
||||
bool consume_comment() {
|
||||
bool comment_found = false;
|
||||
if (str[i] == '/') {
|
||||
i++;
|
||||
if (i == str.size())
|
||||
return fail("unexpected end of input after start of comment", false);
|
||||
if (str[i] == '/') { // inline comment
|
||||
i++;
|
||||
// advance until next line, or end of input
|
||||
while (i < str.size() && str[i] != '\n') {
|
||||
i++;
|
||||
}
|
||||
comment_found = true;
|
||||
}
|
||||
else if (str[i] == '*') { // multiline comment
|
||||
i++;
|
||||
if (i > str.size()-2)
|
||||
return fail("unexpected end of input inside multi-line comment", false);
|
||||
// advance until closing tokens
|
||||
while (!(str[i] == '*' && str[i+1] == '/')) {
|
||||
i++;
|
||||
if (i > str.size()-2)
|
||||
return fail(
|
||||
"unexpected end of input inside multi-line comment", false);
|
||||
}
|
||||
i += 2;
|
||||
comment_found = true;
|
||||
}
|
||||
else
|
||||
return fail("malformed comment", false);
|
||||
}
|
||||
return comment_found;
|
||||
}
|
||||
|
||||
/* consume_garbage()
|
||||
*
|
||||
* Advance until the current character is non-whitespace and non-comment.
|
||||
*/
|
||||
void consume_garbage() {
|
||||
consume_whitespace();
|
||||
if(strategy == JsonParse::COMMENTS) {
|
||||
bool comment_found = false;
|
||||
do {
|
||||
comment_found = consume_comment();
|
||||
if (failed) return;
|
||||
consume_whitespace();
|
||||
}
|
||||
while(comment_found);
|
||||
}
|
||||
}
|
||||
|
||||
/* get_next_token()
|
||||
*
|
||||
* Return the next non-whitespace character. If the end of the input is reached,
|
||||
* flag an error and return 0.
|
||||
*/
|
||||
char get_next_token() {
|
||||
consume_garbage();
|
||||
if (failed) return static_cast<char>(0);
|
||||
if (i == str.size())
|
||||
return fail("unexpected end of input", static_cast<char>(0));
|
||||
|
||||
return str[i++];
|
||||
}
|
||||
|
||||
/* encode_utf8(pt, out)
|
||||
*
|
||||
* Encode pt as UTF-8 and add it to out.
|
||||
*/
|
||||
void encode_utf8(long pt, string & out) {
|
||||
if (pt < 0)
|
||||
return;
|
||||
|
||||
if (pt < 0x80) {
|
||||
out += static_cast<char>(pt);
|
||||
} else if (pt < 0x800) {
|
||||
out += static_cast<char>((pt >> 6) | 0xC0);
|
||||
out += static_cast<char>((pt & 0x3F) | 0x80);
|
||||
} else if (pt < 0x10000) {
|
||||
out += static_cast<char>((pt >> 12) | 0xE0);
|
||||
out += static_cast<char>(((pt >> 6) & 0x3F) | 0x80);
|
||||
out += static_cast<char>((pt & 0x3F) | 0x80);
|
||||
} else {
|
||||
out += static_cast<char>((pt >> 18) | 0xF0);
|
||||
out += static_cast<char>(((pt >> 12) & 0x3F) | 0x80);
|
||||
out += static_cast<char>(((pt >> 6) & 0x3F) | 0x80);
|
||||
out += static_cast<char>((pt & 0x3F) | 0x80);
|
||||
}
|
||||
}
|
||||
|
||||
/* parse_string()
|
||||
*
|
||||
* Parse a string, starting at the current position.
|
||||
*/
|
||||
string parse_string() {
|
||||
string out;
|
||||
long last_escaped_codepoint = -1;
|
||||
while (true) {
|
||||
if (i == str.size())
|
||||
return fail("unexpected end of input in string", "");
|
||||
|
||||
char ch = str[i++];
|
||||
|
||||
if (ch == '"') {
|
||||
encode_utf8(last_escaped_codepoint, out);
|
||||
return out;
|
||||
}
|
||||
|
||||
if (in_range(ch, 0, 0x1f))
|
||||
return fail("unescaped " + esc(ch) + " in string", "");
|
||||
|
||||
// The usual case: non-escaped characters
|
||||
if (ch != '\\') {
|
||||
encode_utf8(last_escaped_codepoint, out);
|
||||
last_escaped_codepoint = -1;
|
||||
out += ch;
|
||||
continue;
|
||||
}
|
||||
|
||||
// Handle escapes
|
||||
if (i == str.size())
|
||||
return fail("unexpected end of input in string", "");
|
||||
|
||||
ch = str[i++];
|
||||
|
||||
if (ch == 'u') {
|
||||
// Extract 4-byte escape sequence
|
||||
string esc = str.substr(i, 4);
|
||||
// Explicitly check length of the substring. The following loop
|
||||
// relies on std::string returning the terminating NUL when
|
||||
// accessing str[length]. Checking here reduces brittleness.
|
||||
if (esc.length() < 4) {
|
||||
return fail("bad \\u escape: " + esc, "");
|
||||
}
|
||||
for (size_t j = 0; j < 4; j++) {
|
||||
if (!in_range(esc[j], 'a', 'f') && !in_range(esc[j], 'A', 'F')
|
||||
&& !in_range(esc[j], '0', '9'))
|
||||
return fail("bad \\u escape: " + esc, "");
|
||||
}
|
||||
|
||||
long codepoint = strtol(esc.data(), nullptr, 16);
|
||||
|
||||
// JSON specifies that characters outside the BMP shall be encoded as a pair
|
||||
// of 4-hex-digit \u escapes encoding their surrogate pair components. Check
|
||||
// whether we're in the middle of such a beast: the previous codepoint was an
|
||||
// escaped lead (high) surrogate, and this is a trail (low) surrogate.
|
||||
if (in_range(last_escaped_codepoint, 0xD800, 0xDBFF)
|
||||
&& in_range(codepoint, 0xDC00, 0xDFFF)) {
|
||||
// Reassemble the two surrogate pairs into one astral-plane character, per
|
||||
// the UTF-16 algorithm.
|
||||
encode_utf8((((last_escaped_codepoint - 0xD800) << 10)
|
||||
| (codepoint - 0xDC00)) + 0x10000, out);
|
||||
last_escaped_codepoint = -1;
|
||||
} else {
|
||||
encode_utf8(last_escaped_codepoint, out);
|
||||
last_escaped_codepoint = codepoint;
|
||||
}
|
||||
|
||||
i += 4;
|
||||
continue;
|
||||
}
|
||||
|
||||
encode_utf8(last_escaped_codepoint, out);
|
||||
last_escaped_codepoint = -1;
|
||||
|
||||
if (ch == 'b') {
|
||||
out += '\b';
|
||||
} else if (ch == 'f') {
|
||||
out += '\f';
|
||||
} else if (ch == 'n') {
|
||||
out += '\n';
|
||||
} else if (ch == 'r') {
|
||||
out += '\r';
|
||||
} else if (ch == 't') {
|
||||
out += '\t';
|
||||
} else if (ch == '"' || ch == '\\' || ch == '/') {
|
||||
out += ch;
|
||||
} else {
|
||||
return fail("invalid escape character " + esc(ch), "");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/* parse_number()
|
||||
*
|
||||
* Parse a double.
|
||||
*/
|
||||
Json parse_number() {
|
||||
size_t start_pos = i;
|
||||
|
||||
if (str[i] == '-')
|
||||
i++;
|
||||
|
||||
// Integer part
|
||||
if (str[i] == '0') {
|
||||
i++;
|
||||
if (in_range(str[i], '0', '9'))
|
||||
return fail("leading 0s not permitted in numbers");
|
||||
} else if (in_range(str[i], '1', '9')) {
|
||||
i++;
|
||||
while (in_range(str[i], '0', '9'))
|
||||
i++;
|
||||
} else {
|
||||
return fail("invalid " + esc(str[i]) + " in number");
|
||||
}
|
||||
|
||||
if (str[i] != '.' && str[i] != 'e' && str[i] != 'E'
|
||||
&& (i - start_pos) <= static_cast<size_t>(std::numeric_limits<int>::digits10)) {
|
||||
return std::atoi(str.c_str() + start_pos);
|
||||
}
|
||||
|
||||
// Decimal part
|
||||
if (str[i] == '.') {
|
||||
i++;
|
||||
if (!in_range(str[i], '0', '9'))
|
||||
return fail("at least one digit required in fractional part");
|
||||
|
||||
while (in_range(str[i], '0', '9'))
|
||||
i++;
|
||||
}
|
||||
|
||||
// Exponent part
|
||||
if (str[i] == 'e' || str[i] == 'E') {
|
||||
i++;
|
||||
|
||||
if (str[i] == '+' || str[i] == '-')
|
||||
i++;
|
||||
|
||||
if (!in_range(str[i], '0', '9'))
|
||||
return fail("at least one digit required in exponent");
|
||||
|
||||
while (in_range(str[i], '0', '9'))
|
||||
i++;
|
||||
}
|
||||
|
||||
return std::strtod(str.c_str() + start_pos, nullptr);
|
||||
}
|
||||
|
||||
/* expect(str, res)
|
||||
*
|
||||
* Expect that 'str' starts at the character that was just read. If it does, advance
|
||||
* the input and return res. If not, flag an error.
|
||||
*/
|
||||
Json expect(const string &expected, Json res) {
|
||||
assert(i != 0);
|
||||
i--;
|
||||
if (str.compare(i, expected.length(), expected) == 0) {
|
||||
i += expected.length();
|
||||
return res;
|
||||
} else {
|
||||
return fail("parse error: expected " + expected + ", got " + str.substr(i, expected.length()));
|
||||
}
|
||||
}
|
||||
|
||||
/* parse_json()
|
||||
*
|
||||
* Parse a JSON object.
|
||||
*/
|
||||
Json parse_json(int depth) {
|
||||
if (depth > max_depth) {
|
||||
return fail("exceeded maximum nesting depth");
|
||||
}
|
||||
|
||||
char ch = get_next_token();
|
||||
if (failed)
|
||||
return Json();
|
||||
|
||||
if (ch == '-' || (ch >= '0' && ch <= '9')) {
|
||||
i--;
|
||||
return parse_number();
|
||||
}
|
||||
|
||||
if (ch == 't')
|
||||
return expect("true", true);
|
||||
|
||||
if (ch == 'f')
|
||||
return expect("false", false);
|
||||
|
||||
if (ch == 'n')
|
||||
return expect("null", Json());
|
||||
|
||||
if (ch == '"')
|
||||
return parse_string();
|
||||
|
||||
if (ch == '{') {
|
||||
map<string, Json> data;
|
||||
ch = get_next_token();
|
||||
if (ch == '}')
|
||||
return data;
|
||||
|
||||
while (1) {
|
||||
if (ch != '"')
|
||||
return fail("expected '\"' in object, got " + esc(ch));
|
||||
|
||||
string key = parse_string();
|
||||
if (failed)
|
||||
return Json();
|
||||
|
||||
ch = get_next_token();
|
||||
if (ch != ':')
|
||||
return fail("expected ':' in object, got " + esc(ch));
|
||||
|
||||
data[std::move(key)] = parse_json(depth + 1);
|
||||
if (failed)
|
||||
return Json();
|
||||
|
||||
ch = get_next_token();
|
||||
if (ch == '}')
|
||||
break;
|
||||
if (ch != ',')
|
||||
return fail("expected ',' in object, got " + esc(ch));
|
||||
|
||||
ch = get_next_token();
|
||||
}
|
||||
return data;
|
||||
}
|
||||
|
||||
if (ch == '[') {
|
||||
vector<Json> data;
|
||||
ch = get_next_token();
|
||||
if (ch == ']')
|
||||
return data;
|
||||
|
||||
while (1) {
|
||||
i--;
|
||||
data.push_back(parse_json(depth + 1));
|
||||
if (failed)
|
||||
return Json();
|
||||
|
||||
ch = get_next_token();
|
||||
if (ch == ']')
|
||||
break;
|
||||
if (ch != ',')
|
||||
return fail("expected ',' in list, got " + esc(ch));
|
||||
|
||||
ch = get_next_token();
|
||||
(void)ch;
|
||||
}
|
||||
return data;
|
||||
}
|
||||
|
||||
return fail("expected value, got " + esc(ch));
|
||||
}
|
||||
};
|
||||
}//namespace {
|
||||
|
||||
Json Json::parse(const string &in, string &err, JsonParse strategy) {
|
||||
JsonParser parser { in, 0, err, false, strategy };
|
||||
Json result = parser.parse_json(0);
|
||||
|
||||
// Check for any trailing garbage
|
||||
parser.consume_garbage();
|
||||
if (parser.failed)
|
||||
return Json();
|
||||
if (parser.i != in.size())
|
||||
return parser.fail("unexpected trailing " + esc(in[parser.i]));
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
// Documented in json11.hpp
|
||||
vector<Json> Json::parse_multi(const string &in,
|
||||
std::string::size_type &parser_stop_pos,
|
||||
string &err,
|
||||
JsonParse strategy) {
|
||||
JsonParser parser { in, 0, err, false, strategy };
|
||||
parser_stop_pos = 0;
|
||||
vector<Json> json_vec;
|
||||
while (parser.i != in.size() && !parser.failed) {
|
||||
json_vec.push_back(parser.parse_json(0));
|
||||
if (parser.failed)
|
||||
break;
|
||||
|
||||
// Check for another object
|
||||
parser.consume_garbage();
|
||||
if (parser.failed)
|
||||
break;
|
||||
parser_stop_pos = parser.i;
|
||||
}
|
||||
return json_vec;
|
||||
}
|
||||
|
||||
/* * * * * * * * * * * * * * * * * * * *
|
||||
* Shape-checking
|
||||
*/
|
||||
|
||||
bool Json::has_shape(const shape & types, string & err) const {
|
||||
if (!is_object()) {
|
||||
err = "expected JSON object, got " + dump();
|
||||
return false;
|
||||
}
|
||||
|
||||
const auto& obj_items = object_items();
|
||||
for (auto & item : types) {
|
||||
const auto it = obj_items.find(item.first);
|
||||
if (it == obj_items.cend() || it->second.type() != item.second) {
|
||||
err = "bad type for " + item.first + " in " + dump();
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
} // namespace json11
|
||||
232
image_capture/third_party/percipio/common/json11.hpp
vendored
Normal file
@@ -0,0 +1,232 @@
|
||||
/* json11
|
||||
*
|
||||
* json11 is a tiny JSON library for C++11, providing JSON parsing and serialization.
|
||||
*
|
||||
* The core object provided by the library is json11::Json. A Json object represents any JSON
|
||||
* value: null, bool, number (int or double), string (std::string), array (std::vector), or
|
||||
* object (std::map).
|
||||
*
|
||||
* Json objects act like values: they can be assigned, copied, moved, compared for equality or
|
||||
* order, etc. There are also helper methods Json::dump, to serialize a Json to a string, and
|
||||
* Json::parse (static) to parse a std::string as a Json object.
|
||||
*
|
||||
* Internally, the various types of Json object are represented by the JsonValue class
|
||||
* hierarchy.
|
||||
*
|
||||
* A note on numbers - JSON specifies the syntax of number formatting but not its semantics,
|
||||
* so some JSON implementations distinguish between integers and floating-point numbers, while
|
||||
* some don't. In json11, we choose the latter. Because some JSON implementations (namely
|
||||
* Javascript itself) treat all numbers as the same type, distinguishing the two leads
|
||||
* to JSON that will be *silently* changed by a round-trip through those implementations.
|
||||
* Dangerous! To avoid that risk, json11 stores all numbers as double internally, but also
|
||||
* provides integer helpers.
|
||||
*
|
||||
* Fortunately, double-precision IEEE754 ('double') can precisely store any integer in the
|
||||
* range +/-2^53, which includes every 'int' on most systems. (Timestamps often use int64
|
||||
* or long long to avoid the Y2038K problem; a double storing microseconds since some epoch
|
||||
* will be exact for +/- 275 years.)
|
||||
*/
|
||||
|
||||
/* Copyright (c) 2013 Dropbox, Inc.
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
#pragma once
|
||||
|
||||
#include <string>
|
||||
#include <vector>
|
||||
#include <map>
|
||||
#include <memory>
|
||||
#include <initializer_list>
|
||||
|
||||
#ifdef _MSC_VER
|
||||
#if _MSC_VER <= 1800 // VS 2013
|
||||
#ifndef noexcept
|
||||
#define noexcept throw()
|
||||
#endif
|
||||
|
||||
#ifndef snprintf
|
||||
#define snprintf _snprintf_s
|
||||
#endif
|
||||
#endif
|
||||
#endif
|
||||
|
||||
namespace json11 {
|
||||
|
||||
enum JsonParse {
|
||||
STANDARD, COMMENTS
|
||||
};
|
||||
|
||||
class JsonValue;
|
||||
|
||||
class Json final {
|
||||
public:
|
||||
// Types
|
||||
enum Type {
|
||||
NUL, NUMBER, BOOL, STRING, ARRAY, OBJECT
|
||||
};
|
||||
|
||||
// Array and object typedefs
|
||||
typedef std::vector<Json> array;
|
||||
typedef std::map<std::string, Json> object;
|
||||
|
||||
// Constructors for the various types of JSON value.
|
||||
Json() noexcept; // NUL
|
||||
Json(std::nullptr_t) noexcept; // NUL
|
||||
Json(double value); // NUMBER
|
||||
Json(int value); // NUMBER
|
||||
Json(bool value); // BOOL
|
||||
Json(const std::string &value); // STRING
|
||||
Json(std::string &&value); // STRING
|
||||
Json(const char * value); // STRING
|
||||
Json(const array &values); // ARRAY
|
||||
Json(array &&values); // ARRAY
|
||||
Json(const object &values); // OBJECT
|
||||
Json(object &&values); // OBJECT
|
||||
|
||||
// Implicit constructor: anything with a to_json() function.
|
||||
template <class T, class = decltype(&T::to_json)>
|
||||
Json(const T & t) : Json(t.to_json()) {}
|
||||
|
||||
// Implicit constructor: map-like objects (std::map, std::unordered_map, etc)
|
||||
template <class M, typename std::enable_if<
|
||||
std::is_constructible<std::string, decltype(std::declval<M>().begin()->first)>::value
|
||||
&& std::is_constructible<Json, decltype(std::declval<M>().begin()->second)>::value,
|
||||
int>::type = 0>
|
||||
Json(const M & m) : Json(object(m.begin(), m.end())) {}
|
||||
|
||||
// Implicit constructor: vector-like objects (std::list, std::vector, std::set, etc)
|
||||
template <class V, typename std::enable_if<
|
||||
std::is_constructible<Json, decltype(*std::declval<V>().begin())>::value,
|
||||
int>::type = 0>
|
||||
Json(const V & v) : Json(array(v.begin(), v.end())) {}
|
||||
|
||||
// This prevents Json(some_pointer) from accidentally producing a bool. Use
|
||||
// Json(bool(some_pointer)) if that behavior is desired.
|
||||
Json(void *) = delete;
|
||||
|
||||
// Accessors
|
||||
Type type() const;
|
||||
|
||||
bool is_null() const { return type() == NUL; }
|
||||
bool is_number() const { return type() == NUMBER; }
|
||||
bool is_bool() const { return type() == BOOL; }
|
||||
bool is_string() const { return type() == STRING; }
|
||||
bool is_array() const { return type() == ARRAY; }
|
||||
bool is_object() const { return type() == OBJECT; }
|
||||
|
||||
// Return the enclosed value if this is a number, 0 otherwise. Note that json11 does not
|
||||
// distinguish between integer and non-integer numbers - number_value() and int_value()
|
||||
// can both be applied to a NUMBER-typed object.
|
||||
double number_value() const;
|
||||
int int_value() const;
|
||||
|
||||
// Return the enclosed value if this is a boolean, false otherwise.
|
||||
bool bool_value() const;
|
||||
// Return the enclosed string if this is a string, "" otherwise.
|
||||
const std::string &string_value() const;
|
||||
// Return the enclosed std::vector if this is an array, or an empty vector otherwise.
|
||||
const array &array_items() const;
|
||||
// Return the enclosed std::map if this is an object, or an empty map otherwise.
|
||||
const object &object_items() const;
|
||||
|
||||
// Return a reference to arr[i] if this is an array, Json() otherwise.
|
||||
const Json & operator[](size_t i) const;
|
||||
// Return a reference to obj[key] if this is an object, Json() otherwise.
|
||||
const Json & operator[](const std::string &key) const;
|
||||
|
||||
// Serialize.
|
||||
void dump(std::string &out) const;
|
||||
std::string dump() const {
|
||||
std::string out;
|
||||
dump(out);
|
||||
return out;
|
||||
}
|
||||
|
||||
// Parse. If parse fails, return Json() and assign an error message to err.
|
||||
static Json parse(const std::string & in,
|
||||
std::string & err,
|
||||
JsonParse strategy = JsonParse::STANDARD);
|
||||
static Json parse(const char * in,
|
||||
std::string & err,
|
||||
JsonParse strategy = JsonParse::STANDARD) {
|
||||
if (in) {
|
||||
return parse(std::string(in), err, strategy);
|
||||
} else {
|
||||
err = "null input";
|
||||
return nullptr;
|
||||
}
|
||||
}
|
||||
// Parse multiple objects, concatenated or separated by whitespace
|
||||
static std::vector<Json> parse_multi(
|
||||
const std::string & in,
|
||||
std::string::size_type & parser_stop_pos,
|
||||
std::string & err,
|
||||
JsonParse strategy = JsonParse::STANDARD);
|
||||
|
||||
static inline std::vector<Json> parse_multi(
|
||||
const std::string & in,
|
||||
std::string & err,
|
||||
JsonParse strategy = JsonParse::STANDARD) {
|
||||
std::string::size_type parser_stop_pos;
|
||||
return parse_multi(in, parser_stop_pos, err, strategy);
|
||||
}
|
||||
|
||||
bool operator== (const Json &rhs) const;
|
||||
bool operator< (const Json &rhs) const;
|
||||
bool operator!= (const Json &rhs) const { return !(*this == rhs); }
|
||||
bool operator<= (const Json &rhs) const { return !(rhs < *this); }
|
||||
bool operator> (const Json &rhs) const { return (rhs < *this); }
|
||||
bool operator>= (const Json &rhs) const { return !(*this < rhs); }
|
||||
|
||||
/* has_shape(types, err)
|
||||
*
|
||||
* Return true if this is a JSON object and, for each item in types, has a field of
|
||||
* the given type. If not, return false and set err to a descriptive message.
|
||||
*/
|
||||
typedef std::initializer_list<std::pair<std::string, Type>> shape;
|
||||
bool has_shape(const shape & types, std::string & err) const;
|
||||
|
||||
private:
|
||||
std::shared_ptr<JsonValue> m_ptr;
|
||||
};
|
||||
|
||||
// Internal class hierarchy - JsonValue objects are not exposed to users of this API.
|
||||
class JsonValue {
|
||||
protected:
|
||||
friend class Json;
|
||||
friend class JsonInt;
|
||||
friend class JsonDouble;
|
||||
virtual Json::Type type() const = 0;
|
||||
virtual bool equals(const JsonValue * other) const = 0;
|
||||
virtual bool less(const JsonValue * other) const = 0;
|
||||
virtual void dump(std::string &out) const = 0;
|
||||
virtual double number_value() const;
|
||||
virtual int int_value() const;
|
||||
virtual bool bool_value() const;
|
||||
virtual const std::string &string_value() const;
|
||||
virtual const Json::array &array_items() const;
|
||||
virtual const Json &operator[](size_t i) const;
|
||||
virtual const Json::object &object_items() const;
|
||||
virtual const Json &operator[](const std::string &key) const;
|
||||
virtual ~JsonValue() {}
|
||||
};
|
||||
|
||||
} // namespace json11
|
||||
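For orientation, a small usage sketch of the `json11::Json` interface declared above (parse, typed accessors, construction, `dump`). The JSON sample and field names are illustrative and are not taken from the project.

```cpp
#include <iostream>
#include <string>

#include "json11.hpp"

int main() {
    std::string err;
    // Parse a small configuration fragment (sample data only).
    const json11::Json cfg = json11::Json::parse(
        R"({"camera_id": "cam0", "exposure": 12.5, "save_depth": true})", err);
    if (!err.empty()) {
        std::cerr << "parse error: " << err << "\n";
        return 1;
    }

    // Accessors return defaults (0, "", false) when the type does not match.
    std::cout << cfg["camera_id"].string_value() << " "
              << cfg["exposure"].number_value() << "\n";

    // Building and serializing a value; dump() produces compact JSON.
    const json11::Json reply = json11::Json::object{
        {"status", "ok"},
        {"frames", json11::Json::array{1, 2, 3}},
    };
    std::cout << reply.dump() << "\n";
    return 0;
}
```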
2951
image_capture/third_party/percipio/include/TYApi.h
vendored
Normal file
File diff suppressed because it is too large
560
image_capture/third_party/percipio/include/TYCoordinateMapper.h
vendored
Normal file
@@ -0,0 +1,560 @@
|
||||
/**@file TYCoordinateMapper.h
|
||||
* @brief Coordinate Conversion API
|
||||
 * @note For performance reasons, parameter checking is left to the caller.
|
||||
* @copyright Copyright(C)2016-2018 Percipio All Rights Reserved
|
||||
**/
|
||||
#ifndef TY_COORDINATE_MAPPER_H_
|
||||
#define TY_COORDINATE_MAPPER_H_
|
||||
|
||||
#include <stdlib.h>
|
||||
#include "TYApi.h"
|
||||
|
||||
typedef struct TY_PIXEL_DESC
|
||||
{
|
||||
int16_t x; // x coordinate in pixels
|
||||
int16_t y; // y coordinate in pixels
|
||||
uint16_t depth; // depth value
|
||||
uint16_t rsvd;
|
||||
}TY_PIXEL_DESC;
|
||||
|
||||
typedef struct TY_PIXEL_COLOR_DESC
|
||||
{
|
||||
int16_t x; // x coordinate in pixels
|
||||
int16_t y; // y coordinate in pixels
|
||||
uint8_t bgr_ch1; // color info <channel 1>
|
||||
uint8_t bgr_ch2; // color info <channel 2>
|
||||
uint8_t bgr_ch3; // color info <channel 3>
|
||||
uint8_t rsvd;
|
||||
}TY_PIXEL_COLOR_DESC;
|
||||
|
||||
// ------------------------------
|
||||
// base conversion
|
||||
// ------------------------------
|
||||
|
||||
/// @brief Calculate 4x4 extrinsic matrix's inverse matrix.
|
||||
/// @param [in] orgExtrinsic Input extrinsic matrix.
|
||||
/// @param [out] invExtrinsic Inverse matrix.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
/// @retval TY_STATUS_ERROR Calculation failed.
|
||||
TY_CAPI TYInvertExtrinsic (const TY_CAMERA_EXTRINSIC* orgExtrinsic,
|
||||
TY_CAMERA_EXTRINSIC* invExtrinsic);
|
||||
|
||||
/// @brief Map pixels on depth image to 3D points.
|
||||
/// @param [in] src_calib Depth image's calibration data.
|
||||
/// @param [in] depthW Width of depth image.
|
||||
/// @param [in] depthH Height of depth image.
|
||||
/// @param [in] depthPixels Pixels on depth image.
|
||||
/// @param [in] count Number of depth pixels.
|
||||
/// @param [out] point3d Output point3D.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
TY_CAPI TYMapDepthToPoint3d (const TY_CAMERA_CALIB_INFO* src_calib,
|
||||
uint32_t depthW, uint32_t depthH,
|
||||
const TY_PIXEL_DESC* depthPixels, uint32_t count,
|
||||
TY_VECT_3F* point3d,
|
||||
float f_scale_unit = 1.0f);
|
||||
|
||||
/// @brief Map 3D points to pixels on depth image. Reverse operation of TYMapDepthToPoint3d.
|
||||
/// @param [in] dst_calib Target depth image's calibration data.
|
||||
/// @param [in] point3d Input 3D points.
|
||||
/// @param [in] count Number of points.
|
||||
/// @param [in] depthW Width of target depth image.
|
||||
/// @param [in] depthH Height of target depth image.
|
||||
/// @param [out] depth Output depth pixels.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
TY_CAPI TYMapPoint3dToDepth (const TY_CAMERA_CALIB_INFO* dst_calib,
|
||||
const TY_VECT_3F* point3d, uint32_t count,
|
||||
uint32_t depthW, uint32_t depthH,
|
||||
TY_PIXEL_DESC* depth,
|
||||
float f_scale_unit = 1.0f);
|
||||
|
||||
/// @brief Map depth image to 3D points. 0 depth pixels maps to (NAN, NAN, NAN).
|
||||
/// @param [in] src_calib Depth image's calibration data.
|
||||
/// @param [in] depthW Width of depth image.
|
||||
/// @param [in] depthH Height of depth image.
|
||||
/// @param [in] depth Depth image.
|
||||
/// @param [out] point3d Output point3D image.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
TY_CAPI TYMapDepthImageToPoint3d (const TY_CAMERA_CALIB_INFO* src_calib,
|
||||
int32_t imageW, int32_t imageH,
|
||||
const uint16_t* depth,
|
||||
TY_VECT_3F* point3d,
|
||||
float f_scale_unit = 1.0f);
|
||||
|
||||
/// @brief Fill depth image empty region.
|
||||
/// @param [in] depth Depth image pixels.
|
||||
/// @param [in] depthW Width of current depth image.
|
||||
/// @param [in] depthH Height of current depth image.
|
||||
TY_CAPI TYDepthImageFillEmptyRegion(uint16_t* depth, uint32_t depthW, uint32_t depthH);
|
||||
|
||||
/// @brief Map 3D points to depth image. (NAN, NAN, NAN) will be skipped.
|
||||
/// @param [in] dst_calib Target depth image's calibration data.
|
||||
/// @param [in] point3d Input 3D points.
|
||||
/// @param [in] count Number of points.
|
||||
/// @param [in] depthW Width of target depth image.
|
||||
/// @param [in] depthH Height of target depth image.
|
||||
/// @param [in,out] depth Depth image buffer.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
TY_CAPI TYMapPoint3dToDepthImage (const TY_CAMERA_CALIB_INFO* dst_calib,
|
||||
const TY_VECT_3F* point3d, uint32_t count,
|
||||
uint32_t depthW, uint32_t depthH, uint16_t* depth,
|
||||
float f_target_scale = 1.0f);
|
||||
|
||||
/// @brief Map 3D points to another coordinate.
|
||||
/// @param [in] extrinsic Extrinsic matrix.
|
||||
/// @param [in] point3dFrom Source 3D points.
|
||||
/// @param [in] count Number of source 3D points.
|
||||
/// @param [out] point3dTo Target 3D points.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
TY_CAPI TYMapPoint3dToPoint3d (const TY_CAMERA_EXTRINSIC* extrinsic,
|
||||
const TY_VECT_3F* point3dFrom, int32_t count,
|
||||
TY_VECT_3F* point3dTo);
|
||||
|
||||
// ------------------------------
|
||||
// inlines
|
||||
// ------------------------------
|
||||
|
||||
/// @brief Map depth pixels to color coordinate pixels.
|
||||
/// @param [in] depth_calib Depth image's calibration data.
|
||||
/// @param [in] depthW Width of current depth image.
|
||||
/// @param [in] depthH Height of current depth image.
|
||||
/// @param [in] depth Depth image pixels.
|
||||
/// @param [in] count Number of depth image pixels.
|
||||
/// @param [in] color_calib Color image's calibration data.
|
||||
/// @param [in] mappedW Width of target depth image.
|
||||
/// @param [in] mappedH Height of target depth image.
|
||||
/// @param [out] mappedDepth Output pixels.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
static inline TY_STATUS TYMapDepthToColorCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH,
|
||||
const TY_PIXEL_DESC* depth, uint32_t count,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t mappedW, uint32_t mappedH,
|
||||
TY_PIXEL_DESC* mappedDepth,
|
||||
float f_scale_unit = 1.0f);
|
||||
|
||||
/// @brief Map original depth image to color coordinate depth image.
|
||||
/// @param [in] depth_calib Depth image's calibration data.
|
||||
/// @param [in] depthW Width of current depth image.
|
||||
/// @param [in] depthH Height of current depth image.
|
||||
/// @param [in] depth Depth image.
|
||||
/// @param [in] color_calib Color image's calibration data.
|
||||
/// @param [in] mappedW Width of target depth image.
|
||||
/// @param [in] mappedH Height of target depth image.
|
||||
/// @param [out] mappedDepth Output pixels.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
static inline TY_STATUS TYMapDepthImageToColorCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t mappedW, uint32_t mappedH, uint16_t* mappedDepth,
|
||||
float f_scale_unit = 1.0f);
|
||||
|
||||
/// @brief Create depth image to color coordinate lookup table.
|
||||
/// @param [in] depth_calib Depth image's calibration data.
|
||||
/// @param [in] depthW Width of current depth image.
|
||||
/// @param [in] depthH Height of current depth image.
|
||||
/// @param [in] depth Depth image.
|
||||
/// @param [in] color_calib Color image's calibration data.
|
||||
/// @param [in] mappedW Width of target depth image.
|
||||
/// @param [in] mappedH Height of target depth image.
|
||||
/// @param [out] lut Output lookup table.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
static inline TY_STATUS TYCreateDepthToColorCoordinateLookupTable(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t mappedW, uint32_t mappedH,
|
||||
TY_PIXEL_DESC* lut,
|
||||
float f_scale_unit = 1.0f);
|
||||
|
||||
/// @brief Map original RGB pixels to depth coordinate.
|
||||
/// @param [in] depth_calib Depth image's calibration data.
|
||||
/// @param [in] depthW Width of current depth image.
|
||||
/// @param [in] depthH Height of current depth image.
|
||||
/// @param [in] depth Current depth image.
|
||||
/// @param [in] color_calib Color image's calibration data.
|
||||
/// @param [in] rgbW Width of RGB image.
|
||||
/// @param [in] rgbH Height of RGB image.
|
||||
/// @param [in] src Input RGB pixels info.
|
||||
/// @param [in] cnt Number of input src RGB pixels.
/// @param [in] min_distance The minimum distance (mm), typically set to the camera's minimum measured distance.
/// @param [in] max_distance The maximum distance (mm), typically set to the camera's maximum measured distance.
|
||||
/// @param [out] dst Output RGB pixels info.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
static inline TY_STATUS TYMapRGBPixelsToDepthCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t rgbW, uint32_t rgbH,
|
||||
TY_PIXEL_COLOR_DESC* src, uint32_t cnt,
|
||||
uint32_t min_distance,
|
||||
uint32_t max_distance,
|
||||
TY_PIXEL_COLOR_DESC* dst,
|
||||
float f_scale_unit = 1.0f);
|
||||
|
||||
/// @brief Map original RGB image to depth coordinate RGB image.
|
||||
/// @param [in] depth_calib Depth image's calibration data.
|
||||
/// @param [in] depthW Width of current depth image.
|
||||
/// @param [in] depthH Height of current depth image.
|
||||
/// @param [in] depth Current depth image.
|
||||
/// @param [in] color_calib Color image's calibration data.
|
||||
/// @param [in] rgbW Width of RGB image.
|
||||
/// @param [in] rgbH Height of RGB image.
|
||||
/// @param [in] inRgb Current RGB image.
|
||||
/// @param [out] mappedRgb Output RGB image.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
static inline TY_STATUS TYMapRGBImageToDepthCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t rgbW, uint32_t rgbH, const uint8_t* inRgb,
|
||||
uint8_t* mappedRgb,
|
||||
float f_scale_unit = 1.0f);
|
||||
|
||||
/// @brief Map original RGB48 image to depth coordinate RGB image.
|
||||
/// @param [in] depth_calib Depth image's calibration data.
|
||||
/// @param [in] depthW Width of current depth image.
|
||||
/// @param [in] depthH Height of current depth image.
|
||||
/// @param [in] depth Current depth image.
|
||||
/// @param [in] color_calib Color image's calibration data.
|
||||
/// @param [in] rgbW Width of RGB48 image.
|
||||
/// @param [in] rgbH Height of RGB48 image.
|
||||
/// @param [in] inRgb Current RGB48 image.
|
||||
/// @param [out] mappedRgb Output RGB48 image.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
static inline TY_STATUS TYMapRGB48ImageToDepthCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t rgbW, uint32_t rgbH, const uint16_t* inRgb,
|
||||
uint16_t* mappedRgb,
|
||||
float f_scale_unit = 1.0f);
|
||||
|
||||
/// @brief Map original MONO16 image to depth coordinate MONO16 image.
|
||||
/// @param [in] depth_calib Depth image's calibration data.
|
||||
/// @param [in] depthW Width of current depth image.
|
||||
/// @param [in] depthH Height of current depth image.
|
||||
/// @param [in] depth Current depth image.
|
||||
/// @param [in] color_calib Color image's calibration data.
|
||||
/// @param [in] rgbW Width of MONO16 image.
|
||||
/// @param [in] rgbH Height of MONO16 image.
|
||||
/// @param [in] gray Current MONO16 image.
|
||||
/// @param [out] mappedGray Output MONO16 image.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
static inline TY_STATUS TYMapMono16ImageToDepthCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t rgbW, uint32_t rgbH, const uint16_t* gray,
|
||||
uint16_t* mappedGray,
|
||||
float f_scale_unit = 1.0f);
|
||||
|
||||
|
||||
/// @brief Map original MONO8 image to depth coordinate MONO8 image.
|
||||
/// @param [in] depth_calib Depth image's calibration data.
|
||||
/// @param [in] depthW Width of current depth image.
|
||||
/// @param [in] depthH Height of current depth image.
|
||||
/// @param [in] depth Current depth image.
|
||||
/// @param [in] color_calib Color image's calibration data.
|
||||
/// @param [in] monoW Width of MONO8 image.
|
||||
/// @param [in] monoH Height of MONO8 image.
|
||||
/// @param [in] inMono Current MONO8 image.
|
||||
/// @param [out] mappedMono Output MONO8 image.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
static inline TY_STATUS TYMapMono8ImageToDepthCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t monoW, uint32_t monoH, const uint8_t* inMono,
|
||||
uint8_t* mappedMono,
|
||||
float f_scale_unit = 1.0f);
|
||||
|
||||
|
||||
#define TYMAP_CHECKRET(f, bufToFree) \
|
||||
do{ \
|
||||
TY_STATUS err = (f); \
|
||||
if(err){ \
|
||||
if(bufToFree) \
|
||||
free(bufToFree); \
|
||||
return err; \
|
||||
} \
|
||||
} while(0)
|
||||
|
||||
|
||||
static inline TY_STATUS TYMapDepthToColorCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH,
|
||||
const TY_PIXEL_DESC* depth, uint32_t count,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t mappedW, uint32_t mappedH,
|
||||
TY_PIXEL_DESC* mappedDepth,
|
||||
float f_scale_unit)
|
||||
{
|
||||
TY_VECT_3F* p3d = (TY_VECT_3F*)malloc(sizeof(TY_VECT_3F) * count);
|
||||
TYMAP_CHECKRET(TYMapDepthToPoint3d(depth_calib, depthW, depthH, depth, count, p3d, f_scale_unit), p3d );
|
||||
TY_CAMERA_EXTRINSIC extri_inv;
|
||||
TYMAP_CHECKRET(TYInvertExtrinsic(&color_calib->extrinsic, &extri_inv), p3d);
|
||||
TYMAP_CHECKRET(TYMapPoint3dToPoint3d(&extri_inv, p3d, count, p3d), p3d );
|
||||
TYMAP_CHECKRET(TYMapPoint3dToDepth(color_calib, p3d, count, mappedW, mappedH, mappedDepth, f_scale_unit), p3d );
|
||||
free(p3d);
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
|
||||
static inline TY_STATUS TYMapDepthImageToColorCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t mappedW, uint32_t mappedH, uint16_t* mappedDepth, float f_scale_unit)
|
||||
{
|
||||
TY_VECT_3F* p3d = (TY_VECT_3F*)malloc(sizeof(TY_VECT_3F) * depthW * depthH);
|
||||
TYMAP_CHECKRET(TYMapDepthImageToPoint3d(depth_calib, depthW, depthH, depth, p3d, f_scale_unit), p3d);
|
||||
TY_CAMERA_EXTRINSIC extri_inv;
|
||||
TYMAP_CHECKRET(TYInvertExtrinsic(&color_calib->extrinsic, &extri_inv), p3d);
|
||||
TYMAP_CHECKRET(TYMapPoint3dToPoint3d(&extri_inv, p3d, depthW * depthH, p3d), p3d);
|
||||
TYMAP_CHECKRET(TYMapPoint3dToDepthImage(
|
||||
color_calib, p3d, depthW * depthH, mappedW, mappedH, mappedDepth, f_scale_unit), p3d);
|
||||
free(p3d);
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
static inline TY_STATUS TYMapRGBPixelsToDepthCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t rgbW, uint32_t rgbH,
|
||||
TY_PIXEL_COLOR_DESC* src, uint32_t cnt,
|
||||
uint32_t min_distance,
|
||||
uint32_t max_distance,
|
||||
TY_PIXEL_COLOR_DESC* dst,
|
||||
float f_scale_unit)
|
||||
{
|
||||
uint32_t m_distance_range = max_distance - min_distance;
|
||||
TY_CAMERA_EXTRINSIC extri = color_calib->extrinsic;
|
||||
|
||||
TY_PIXEL_DESC* pixels_array = (TY_PIXEL_DESC*)malloc(sizeof(TY_PIXEL_DESC) * m_distance_range);
|
||||
TY_PIXEL_DESC* pixels_mapped_array = (TY_PIXEL_DESC*)malloc(sizeof(TY_PIXEL_DESC) * m_distance_range);
|
||||
TY_VECT_3F* p3d_array = (TY_VECT_3F*)malloc(sizeof(TY_VECT_3F) * m_distance_range);
|
||||
for (uint32_t i = 0; i < cnt; i++) {
|
||||
for (uint32_t m = 0; m < m_distance_range; m++) {
|
||||
pixels_array[m].x = src[i].x;
|
||||
pixels_array[m].y = src[i].y;
|
||||
pixels_array[m].depth = m + min_distance;
|
||||
}
|
||||
|
||||
TYMapDepthToPoint3d(color_calib, rgbW, rgbH, pixels_array, m_distance_range, &p3d_array[0], f_scale_unit);
|
||||
TYMapPoint3dToPoint3d(&extri, &p3d_array[0], m_distance_range, &p3d_array[0]);
|
||||
|
||||
TYMapPoint3dToDepth(depth_calib, p3d_array, m_distance_range, depthW, depthH, pixels_mapped_array, f_scale_unit);
|
||||
|
||||
uint16_t m_min_delt = 0xffff;
|
||||
dst[i].x = -1;
|
||||
dst[i].y = -1;
|
||||
for (uint32_t m = 0; m < m_distance_range; m++) {
|
||||
int16_t pixel_x = pixels_mapped_array[m].x;
|
||||
int16_t pixel_y = pixels_mapped_array[m].y;
|
||||
uint16_t delt = abs(pixels_mapped_array[m].depth - depth[pixel_y*depthW + pixel_x]);
|
||||
if (delt < m_min_delt) {
|
||||
m_min_delt = delt;
|
||||
if (m_min_delt < 10) {
|
||||
dst[i].x = pixel_x;
|
||||
dst[i].y = pixel_y;
|
||||
dst[i].bgr_ch1 = src[i].bgr_ch1;
|
||||
dst[i].bgr_ch2 = src[i].bgr_ch2;
|
||||
dst[i].bgr_ch3 = src[i].bgr_ch3;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
free(pixels_array);
|
||||
free(pixels_mapped_array);
|
||||
free(p3d_array);
|
||||
|
||||
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
static inline TY_STATUS TYCreateDepthToColorCoordinateLookupTable(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t mappedW, uint32_t mappedH,
|
||||
TY_PIXEL_DESC* lut,
|
||||
float f_scale_unit)
|
||||
{
|
||||
TY_VECT_3F* p3d = (TY_VECT_3F*)malloc(sizeof(TY_VECT_3F) * depthW * depthH);
|
||||
TYMAP_CHECKRET(TYMapDepthImageToPoint3d(depth_calib, depthW, depthH, depth, p3d, f_scale_unit), p3d);
|
||||
TY_CAMERA_EXTRINSIC extri_inv;
|
||||
TYMAP_CHECKRET(TYInvertExtrinsic(&color_calib->extrinsic, &extri_inv), p3d);
|
||||
TYMAP_CHECKRET(TYMapPoint3dToPoint3d(&extri_inv, p3d, depthW * depthH, p3d), p3d);
|
||||
TYMAP_CHECKRET(TYMapPoint3dToDepth(color_calib, p3d, depthW * depthH, mappedW, mappedH, lut, f_scale_unit), p3d );
|
||||
free(p3d);
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
inline void TYPixelsOverlapRemove(TY_PIXEL_DESC* lut, uint32_t count, uint32_t imageW, uint32_t imageH)
|
||||
{
|
||||
uint16_t* mappedDepth = (uint16_t*)calloc(imageW*imageH, sizeof(uint16_t));
|
||||
for(size_t i = 0; i < count; i++) {
|
||||
if(lut[i].x < 0 || lut[i].y < 0 || lut[i].x >= imageW || lut[i].y >= imageH) continue;
|
||||
uint32_t offset = lut[i].y * imageW + lut[i].x;
|
||||
if(lut[i].depth && (mappedDepth[offset] == 0 || mappedDepth[offset] >= lut[i].depth))
|
||||
mappedDepth[offset] = lut[i].depth;
|
||||
}
|
||||
TYDepthImageFillEmptyRegion(mappedDepth, imageW, imageH);
|
||||
for(size_t i = 0; i < count; i++) {
|
||||
if(lut[i].x < 0 || lut[i].y < 0 || lut[i].x >= imageW || lut[i].y >= imageH) {
|
||||
continue;
|
||||
} else {
|
||||
uint32_t offset = lut[i].y * imageW + lut[i].x;
|
||||
int32_t delt = lut[i].depth - mappedDepth[offset];
|
||||
if(lut[i].depth && delt > 10) {
|
||||
lut[i].x = -1;
|
||||
lut[i].y = -1;
|
||||
lut[i].depth = 0;
|
||||
}
|
||||
}
|
||||
}
|
||||
free(mappedDepth);
|
||||
}
|
||||
|
||||
static inline TY_STATUS TYMapRGBImageToDepthCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t rgbW, uint32_t rgbH, const uint8_t* inRgb,
|
||||
uint8_t* mappedRgb, float f_scale_unit)
|
||||
{
|
||||
TY_PIXEL_DESC* lut = (TY_PIXEL_DESC*)malloc(sizeof(TY_PIXEL_DESC) * depthW * depthH);
|
||||
TYMAP_CHECKRET(TYCreateDepthToColorCoordinateLookupTable(
|
||||
depth_calib, depthW, depthH, depth,
|
||||
color_calib, depthW, depthH, lut, f_scale_unit), lut);
|
||||
TYPixelsOverlapRemove(lut, depthW * depthH, depthW, depthH);
|
||||
|
||||
for(uint32_t depthr = 0; depthr < depthH; depthr++)
|
||||
for(uint32_t depthc = 0; depthc < depthW; depthc++)
|
||||
{
|
||||
TY_PIXEL_DESC* plut = &lut[depthr * depthW + depthc];
|
||||
uint8_t* outPtr = &mappedRgb[depthW * depthr * 3 + depthc * 3];
|
||||
if(plut->x < 0 || plut->x >= (int)depthW || plut->y < 0 || plut->y >= (int)depthH){
|
||||
outPtr[0] = outPtr[1] = outPtr[2] = 0;
|
||||
} else {
|
||||
uint16_t scale_x = (uint16_t)(1.f * plut->x * rgbW / depthW + 0.5);
|
||||
uint16_t scale_y = (uint16_t)(1.f * plut->y * rgbH / depthH + 0.5);
|
||||
if(scale_x >= rgbW) scale_x = rgbW - 1;
|
||||
if(scale_y >= rgbH) scale_y = rgbH - 1;
|
||||
const uint8_t* inPtr = &inRgb[rgbW * scale_y * 3 + scale_x * 3];
|
||||
outPtr[0] = inPtr[0];
|
||||
outPtr[1] = inPtr[1];
|
||||
outPtr[2] = inPtr[2];
|
||||
}
|
||||
}
|
||||
free(lut);
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
static inline TY_STATUS TYMapRGB48ImageToDepthCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t rgbW, uint32_t rgbH, const uint16_t* inRgb,
|
||||
uint16_t* mappedRgb, float f_scale_unit)
|
||||
{
|
||||
TY_PIXEL_DESC* lut = (TY_PIXEL_DESC*)malloc(sizeof(TY_PIXEL_DESC) * depthW * depthH);
|
||||
TYMAP_CHECKRET(TYCreateDepthToColorCoordinateLookupTable(
|
||||
depth_calib, depthW, depthH, depth,
|
||||
color_calib, depthW, depthH, lut, f_scale_unit), lut);
|
||||
TYPixelsOverlapRemove(lut, depthW * depthH, depthW, depthH);
|
||||
|
||||
for(uint32_t depthr = 0; depthr < depthH; depthr++)
|
||||
for(uint32_t depthc = 0; depthc < depthW; depthc++)
|
||||
{
|
||||
TY_PIXEL_DESC* plut = &lut[depthr * depthW + depthc];
|
||||
uint16_t* outPtr = &mappedRgb[depthW * depthr * 3 + depthc * 3];
|
||||
if(plut->x < 0 || plut->x >= (int)depthW || plut->y < 0 || plut->y >= (int)depthH){
|
||||
outPtr[0] = outPtr[1] = outPtr[2] = 0;
|
||||
} else {
|
||||
uint16_t scale_x = (uint16_t)(1.f * plut->x * rgbW / depthW + 0.5);
|
||||
uint16_t scale_y = (uint16_t)(1.f * plut->y * rgbH / depthH + 0.5);
|
||||
if(scale_x >= rgbW) scale_x = rgbW - 1;
|
||||
if(scale_y >= rgbH) scale_y = rgbH - 1;
|
||||
const uint16_t* inPtr = &inRgb[rgbW * scale_y * 3 + scale_x * 3];
|
||||
outPtr[0] = inPtr[0];
|
||||
outPtr[1] = inPtr[1];
|
||||
outPtr[2] = inPtr[2];
|
||||
}
|
||||
}
|
||||
free(lut);
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
static inline TY_STATUS TYMapMono16ImageToDepthCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t rgbW, uint32_t rgbH, const uint16_t* gray,
|
||||
uint16_t* mappedGray, float f_scale_unit)
|
||||
{
|
||||
TY_PIXEL_DESC* lut = (TY_PIXEL_DESC*)malloc(sizeof(TY_PIXEL_DESC) * depthW * depthH);
|
||||
TYMAP_CHECKRET(TYCreateDepthToColorCoordinateLookupTable(
|
||||
depth_calib, depthW, depthH, depth,
|
||||
color_calib, depthW, depthH, lut, f_scale_unit), lut);
|
||||
TYPixelsOverlapRemove(lut, depthW * depthH, depthW, depthH);
|
||||
|
||||
for(uint32_t depthr = 0; depthr < depthH; depthr++)
|
||||
for(uint32_t depthc = 0; depthc < depthW; depthc++)
|
||||
{
|
||||
TY_PIXEL_DESC* plut = &lut[depthr * depthW + depthc];
|
||||
uint16_t* outPtr = &mappedGray[depthW * depthr + depthc];
|
||||
if(plut->x < 0 || plut->x >= (int)depthW || plut->y < 0 || plut->y >= (int)depthH){
|
||||
outPtr[0] = 0;
|
||||
} else {
|
||||
uint16_t scale_x = (uint16_t)(1.f * plut->x * rgbW / depthW + 0.5);
|
||||
uint16_t scale_y = (uint16_t)(1.f * plut->y * rgbH / depthH + 0.5);
|
||||
if(scale_x >= rgbW) scale_x = rgbW - 1;
|
||||
if(scale_y >= rgbH) scale_y = rgbH - 1;
|
||||
const uint16_t* inPtr = &gray[rgbW * scale_y + scale_x];
|
||||
outPtr[0] = inPtr[0];
|
||||
}
|
||||
}
|
||||
free(lut);
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
static inline TY_STATUS TYMapMono8ImageToDepthCoordinate(
|
||||
const TY_CAMERA_CALIB_INFO* depth_calib,
|
||||
uint32_t depthW, uint32_t depthH, const uint16_t* depth,
|
||||
const TY_CAMERA_CALIB_INFO* color_calib,
|
||||
uint32_t monoW, uint32_t monoH, const uint8_t* inMono,
|
||||
uint8_t* mappedMono, float f_scale_unit)
|
||||
{
|
||||
TY_PIXEL_DESC* lut = (TY_PIXEL_DESC*)malloc(sizeof(TY_PIXEL_DESC) * depthW * depthH);
|
||||
TYMAP_CHECKRET(TYCreateDepthToColorCoordinateLookupTable(
|
||||
depth_calib, depthW, depthH, depth,
|
||||
color_calib, depthW, depthH, lut, f_scale_unit), lut);
|
||||
TYPixelsOverlapRemove(lut, depthW * depthH, depthW, depthH);
|
||||
|
||||
for(uint32_t depthr = 0; depthr < depthH; depthr++)
|
||||
for(uint32_t depthc = 0; depthc < depthW; depthc++)
|
||||
{
|
||||
TY_PIXEL_DESC* plut = &lut[depthr * depthW + depthc];
|
||||
uint8_t* outPtr = &mappedMono[depthW * depthr + depthc];
|
||||
if(plut->x < 0 || plut->x >= (int)depthW || plut->y < 0 || plut->y >= (int)depthH){
|
||||
outPtr[0] = 0;
|
||||
} else {
|
||||
uint16_t scale_x = (uint16_t)(1.f * plut->x * monoW / depthW + 0.5);
|
||||
uint16_t scale_y = (uint16_t)(1.f * plut->y * monoH / depthH + 0.5);
|
||||
if(scale_x >= monoW) scale_x = monoW - 1;
|
||||
if(scale_y >= monoH) scale_y = monoH - 1;
|
||||
const uint8_t* inPtr = &inMono[monoW * scale_y + scale_x];
|
||||
outPtr[0] = inPtr[0];
|
||||
}
|
||||
}
|
||||
free(lut);
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
|
||||
#endif
|
||||
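These mapping helpers all follow the same pattern: build a depth-to-color lookup table, drop occluded pixels with `TYPixelsOverlapRemove`, then resample the source image into depth coordinates. As a rough illustration only (not part of the SDK), here is a minimal sketch of calling `TYMapRGBImageToDepthCoordinate`, assuming the calibration structs and frame buffers come from a previously fetched frame and that these inlines live in `TYCoordinateMapper.h`, as the include in TYImageProc.h below suggests:

```cpp
// Minimal sketch (assumption, not an SDK sample): map a color frame into depth coordinates.
#include <vector>
#include "TYCoordinateMapper.h"

TY_STATUS mapColorToDepth(const TY_CAMERA_CALIB_INFO& depth_calib,
                          const TY_CAMERA_CALIB_INFO& color_calib,
                          uint32_t depthW, uint32_t depthH, const uint16_t* depthData,
                          uint32_t rgbW, uint32_t rgbH, const uint8_t* rgbData,
                          std::vector<uint8_t>& mappedRgb)
{
    mappedRgb.resize(depthW * depthH * 3);           // RGB888 output at depth resolution
    return TYMapRGBImageToDepthCoordinate(
        &depth_calib, depthW, depthH, depthData,     // depth image + its calibration
        &color_calib, rgbW, rgbH, rgbData,           // color image + its calibration
        mappedRgb.data(), 1.0f);                     // output buffer, depth scale unit (assumed 1.0)
}
```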
1224
image_capture/third_party/percipio/include/TYDefs.h
vendored
Normal file
File diff suppressed because it is too large
82
image_capture/third_party/percipio/include/TYImageProc.h
vendored
Normal file
@@ -0,0 +1,82 @@
|
||||
/**@file TYImageProc.h
|
||||
* @brief Image post-process API
|
||||
* @copyright Copyright(C)2016-2018 Percipio All Rights Reserved
|
||||
**/
|
||||
|
||||
#ifndef TY_IMAGE_PROC_H_
|
||||
#define TY_IMAGE_PROC_H_
|
||||
|
||||
|
||||
#include "TYApi.h"
|
||||
#include "TYCoordinateMapper.h"
|
||||
#include "TyIsp.h"
|
||||
|
||||
/// @brief Image processing acceleration switch
|
||||
/// @param [in] en Enable image process acceleration switch
|
||||
TY_CAPI TYImageProcesAcceEnable(bool en);
|
||||
|
||||
/// @brief Do image undistortion; only supports TY_PIXEL_FORMAT_MONO, TY_PIXEL_FORMAT_RGB, TY_PIXEL_FORMAT_BGR.
|
||||
/// @param [in] srcCalibInfo Image calibration data.
|
||||
/// @param [in] srcImage Source image.
|
||||
/// @param [in] cameraNewIntrinsic Expected new image intrinsic, will use srcCalibInfo for new image intrinsic if set to NULL.
|
||||
/// @param [out] dstImage Output image.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
/// @retval TY_STATUS_NULL_POINTER Any srcCalibInfo, srcImage, dstImage, srcImage->buffer, dstImage->buffer is NULL.
|
||||
/// @retval TY_STATUS_INVALID_PARAMETER Invalid srcImage->width, srcImage->height, dstImage->width, dstImage->height or unsupported pixel format.
|
||||
TY_CAPI TYUndistortImage (const TY_CAMERA_CALIB_INFO *srcCalibInfo
|
||||
, const TY_IMAGE_DATA *srcImage
|
||||
, const TY_CAMERA_INTRINSIC *cameraNewIntrinsic
|
||||
, TY_IMAGE_DATA *dstImage
|
||||
);
|
||||
|
||||
|
||||
// -----------------------------------------------------------
|
||||
struct DepthSpeckleFilterParameters {
|
||||
int max_speckle_size; // blob size smaller than this will be removed
|
||||
int max_speckle_diff; // Maximum difference between neighbor disparity pixels
|
||||
};
|
||||
|
||||
///<default parameter value definition
|
||||
#define DepthSpeckleFilterParameters_Initializer {150, 64}
|
||||
|
||||
/// @brief Remove speckles on depth image.
|
||||
/// @param [in,out] depthImage Depth image to be processed.
|
||||
/// @param [in] param Algorithm parameters.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
/// @retval TY_STATUS_NULL_POINTER Any depth, param or depth->buffer is NULL.
|
||||
/// @retval TY_STATUS_INVALID_PARAMETER param->max_speckle_size <= 0 or param->max_speckle_diff <= 0
|
||||
TY_CAPI TYDepthSpeckleFilter (TY_IMAGE_DATA* depthImage
|
||||
, const DepthSpeckleFilterParameters* param
|
||||
);
|
||||
|
||||
|
||||
// -----------------------------------------------------------
|
||||
struct DepthEnhenceParameters{
|
||||
float sigma_s; ///< filter param on space
|
||||
float sigma_r; ///< filter param on range
|
||||
int outlier_win_sz; ///< outlier filter window size
|
||||
float outlier_rate;
|
||||
};
|
||||
|
||||
///<default parameter value definition
|
||||
#define DepthEnhenceParameters_Initializer {10, 20, 10, 0.1f}
|
||||
|
||||
/// @brief Depth image enhancement filter.
|
||||
/// @param [in] depthImages Pointer to depth image array.
|
||||
/// @param [in] imageNum Depth image array size.
|
||||
/// @param [in,out] guide Guide image.
|
||||
/// @param [out] output Output depth image.
|
||||
/// @param [in] param Algorithm parameters.
|
||||
/// @retval TY_STATUS_OK Succeed.
|
||||
/// @retval TY_STATUS_NULL_POINTER Any depthImage, param, output or output->buffer is NULL.
|
||||
/// @retval TY_STATUS_INVALID_PARAMETER imageNum >= 11 or imageNum <= 0, or any image invalid
|
||||
/// @retval TY_STATUS_OUT_OF_MEMORY Output image not suitable.
|
||||
TY_CAPI TYDepthEnhenceFilter (const TY_IMAGE_DATA* depthImages
|
||||
, int imageNum
|
||||
, TY_IMAGE_DATA *guide
|
||||
, TY_IMAGE_DATA *output
|
||||
, const DepthEnhenceParameters* param
|
||||
);
|
||||
|
||||
|
||||
#endif
|
||||
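As a hedged usage sketch (not taken from the SDK samples), the speckle filter declared above is typically applied in place to a DEPTH16 frame using the default parameter initializer; the `depthImage` descriptor below is assumed to wrap an already-fetched depth buffer:

```cpp
// Minimal sketch (assumption): speckle-filter a DEPTH16 image in place.
#include "TYImageProc.h"

TY_STATUS filterDepthSpeckles(TY_IMAGE_DATA& depthImage)   // descriptor of an existing depth frame
{
    DepthSpeckleFilterParameters param = DepthSpeckleFilterParameters_Initializer; // {150, 64}
    return TYDepthSpeckleFilter(&depthImage, &param);      // in place; see the retval docs above
}
```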
3
image_capture/third_party/percipio/include/TYVer.h
vendored
Normal file
@@ -0,0 +1,3 @@
|
||||
#define TY_LIB_VERSION_MAJOR 3
|
||||
#define TY_LIB_VERSION_MINOR 6
|
||||
#define TY_LIB_VERSION_PATCH 75
|
||||
109
image_capture/third_party/percipio/include/TyIsp.h
vendored
Normal file
@@ -0,0 +1,109 @@
|
||||
/**@file TyIsp.h
|
||||
* @brief This file includes interface declarations for raw color image (bayer format)
|
||||
* process functions
|
||||
*
|
||||
* Copyright(C)2016-2019 Percipio All Rights Reserved
|
||||
*
|
||||
*/
|
||||
|
||||
#ifndef TY_COLOR_ISP_H_
|
||||
#define TY_COLOR_ISP_H_
|
||||
#include "TYApi.h"
|
||||
|
||||
#define TYISP_CAPI TY_CAPI
|
||||
|
||||
typedef void* TY_ISP_HANDLE;
|
||||
|
||||
typedef enum{
|
||||
TY_ISP_FEATURE_CAM_MODEL = 0x000000,
|
||||
TY_ISP_FEATURE_CAM_DEV_HANDLE = 0x000001, ///<device handle for device control
|
||||
TY_ISP_FEATURE_CAM_DEV_COMPONENT = 0x000002, ///<the component to control
|
||||
TY_ISP_FEATURE_IMAGE_SIZE = 0x000100, ///<image size width&height
|
||||
TY_ISP_FEATURE_WHITEBALANCE_GAIN = 0x000200,
|
||||
TY_ISP_FEATURE_ENABLE_AUTO_WHITEBALANCE = 0x000300,
|
||||
TY_ISP_FEATURE_SHADING = 0x000400,
|
||||
TY_ISP_FEATURE_SHADING_CENTER = 0x000500,
|
||||
TY_ISP_FEATURE_BLACK_LEVEL = 0x000600, ///<global black level
|
||||
TY_ISP_FEATURE_BLACK_LEVEL_COLUMN = 0x000610, ///<to set different black level for each image column
|
||||
TY_ISP_FEATURE_BLACK_LEVEL_GAIN = 0x000700, ///<global pixel gain
|
||||
TY_ISP_FEATURE_BLACK_LEVEL_GAIN_COLUMN = 0x000710, ///<to set different gain for each image column
|
||||
TY_ISP_FEATURE_BAYER_PATTERN = 0x000800,
|
||||
TY_ISP_FEATURE_DEMOSAIC_METHOD = 0x000900,
|
||||
TY_ISP_FEATURE_GAMMA = 0x000A00,
|
||||
TY_ISP_FEATURE_DEFECT_PIXEL_LIST = 0x000B00,
|
||||
TY_ISP_FEATURE_CCM = 0x000C00,
|
||||
TY_ISP_FEATURE_CCM_ENABLE = 0x000C10, ///<ENABLE CCM
|
||||
TY_ISP_FEATURE_BRIGHT = 0x000D00,
|
||||
TY_ISP_FEATURE_CONTRAST = 0x000E00,
|
||||
TY_ISP_FEATURE_AUTOBRIGHT = 0x000F00,
|
||||
TY_ISP_FEATURE_INPUT_RESAMPLE_SCALE = 0x001000, ///<set this if the bayer image is resampled before the soft ISP process.
|
||||
TY_ISP_FEATURE_ENABLE_AUTO_EXPOSURE_GAIN = 0x001100,
|
||||
TY_ISP_FEATURE_AUTO_EXPOSURE_RANGE = 0x001200, ///<exposure range ,default no limit
|
||||
TY_ISP_FEATURE_AUTO_GAIN_RANGE = 0x001300, ///<gain range ,default no limit
|
||||
TY_ISP_FEATURE_AUTO_EXPOSURE_UPDATE_INTERVAL = 0x001400, ///<update device exposure interval , default 5 frame
|
||||
TY_ISP_FEATURE_DEBUG_LOG = 0xff000000, ///<display detail log information
|
||||
|
||||
} TY_ISP_FEATURE_ID;
|
||||
|
||||
typedef enum{
|
||||
TY_ISP_BAYER_GB = 0,
|
||||
TY_ISP_BAYER_BG = 1,
|
||||
TY_ISP_BAYER_RG = 2,
|
||||
TY_ISP_BAYER_GR = 3,
|
||||
TY_ISP_BAYER_AUTO = 0xff,
|
||||
}TY_ISP_BAYER_PATTERN;
|
||||
|
||||
typedef enum{
|
||||
TY_DEMOSAIC_METHOD_SIMPLE = 0,
|
||||
TY_DEMOSAIC_METHOD_BILINEAR = 1,
|
||||
TY_DEMOSAIC_METHOD_HQLINEAR = 2,
|
||||
TY_DEMOSAIC_METHOD_EDGESENSE = 3,
|
||||
} TY_DEMOSAIC_METHOD;
|
||||
|
||||
typedef struct{
|
||||
TY_ISP_FEATURE_ID id;
|
||||
int32_t size;
|
||||
const char * name;
|
||||
const char * value_type;
|
||||
TY_ACCESS_MODE mode;
|
||||
} TY_ISP_FEATURE_INFO;
|
||||
|
||||
TYISP_CAPI TYISPCreate(TY_ISP_HANDLE *handle);
|
||||
TYISP_CAPI TYISPRelease(TY_ISP_HANDLE *handle);
|
||||
TYISP_CAPI TYISPLoadConfig(TY_ISP_HANDLE handle,const uint8_t *config, uint32_t config_size);
|
||||
///@brief Called by the main thread to update & control device status for the ISP
|
||||
TYISP_CAPI TYISPUpdateDevice(TY_ISP_HANDLE handle);
|
||||
|
||||
TYISP_CAPI TYISPSetFeature(TY_ISP_HANDLE handle, TY_ISP_FEATURE_ID feature_id, const uint8_t *data, int32_t size);
|
||||
TYISP_CAPI TYISPGetFeature(TY_ISP_HANDLE handle, TY_ISP_FEATURE_ID feature_id, uint8_t *data_buff, int32_t buff_size);
|
||||
TYISP_CAPI TYISPGetFeatureSize(TY_ISP_HANDLE handle, TY_ISP_FEATURE_ID feature_id, int32_t *size);
|
||||
|
||||
TYISP_CAPI TYISPHasFeature(TY_ISP_HANDLE handle, TY_ISP_FEATURE_ID feature_id);
|
||||
TYISP_CAPI TYISPGetFeatureInfoList(TY_ISP_HANDLE handle, TY_ISP_FEATURE_INFO *info_buffer, int buffer_size);
|
||||
TYISP_CAPI TYISPGetFeatureInfoListSize(TY_ISP_HANDLE handle, int32_t *buffer_size);
|
||||
///@brief Convert a bayer raw image to an RGB image; the output buffer is allocated by the invoker
|
||||
TYISP_CAPI TYISPProcessImage(TY_ISP_HANDLE handle,const TY_IMAGE_DATA *image_bayer, TY_IMAGE_DATA *image_out);
|
||||
|
||||
#ifdef __cplusplus
|
||||
static inline TY_STATUS TYISPSetFeature(TY_ISP_HANDLE handle, TY_ISP_FEATURE_ID feature_id, int value){
|
||||
return TYISPSetFeature(handle, feature_id, (uint8_t*)&(value), sizeof(int));
|
||||
}
|
||||
|
||||
|
||||
static inline TY_STATUS TYISPGetFeature(TY_ISP_HANDLE handle, TY_ISP_FEATURE_ID feature_id, int *value){
|
||||
return TYISPGetFeature(handle, feature_id, (uint8_t*)value, sizeof(int));
|
||||
}
|
||||
|
||||
static inline TY_STATUS TYISPSetFeature(TY_ISP_HANDLE handle, TY_ISP_FEATURE_ID feature_id, float value){
|
||||
return TYISPSetFeature(handle, feature_id, (uint8_t*)&(value), sizeof(float));
|
||||
}
|
||||
|
||||
|
||||
static inline TY_STATUS TYISPGetFeature(TY_ISP_HANDLE handle, TY_ISP_FEATURE_ID feature_id, float *value){
|
||||
return TYISPGetFeature(handle, feature_id, (uint8_t*)value, sizeof(float));
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
#endif
|
||||
|
||||
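The declarations above imply a create → configure → process → release life cycle for the software ISP. A minimal sketch under that assumption (feature choice and buffers are placeholders; `image_out` must be allocated by the caller, as the `TYISPProcessImage` comment states):

```cpp
// Minimal sketch (assumption): software-ISP life cycle for one raw bayer frame.
#include "TyIsp.h"

TY_STATUS demosaicOnce(const TY_IMAGE_DATA& image_bayer, TY_IMAGE_DATA& image_out)
{
    TY_ISP_HANDLE isp = nullptr;
    TY_STATUS rc = TYISPCreate(&isp);                      // create ISP instance
    if (rc != TY_STATUS_OK) return rc;
    TYISPSetFeature(isp, TY_ISP_FEATURE_ENABLE_AUTO_WHITEBALANCE, 1); // C++ int overload above
    rc = TYISPProcessImage(isp, &image_bayer, &image_out); // image_out buffer allocated by caller
    TYISPRelease(&isp);                                    // release when done
    return rc;
}
```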
635
image_capture/third_party/percipio/sample_v2/cpp/Device.cpp
vendored
Normal file
@@ -0,0 +1,635 @@
|
||||
#include "Device.hpp"
|
||||
|
||||
struct to_string
|
||||
{
|
||||
std::ostringstream ss;
|
||||
template<class T> to_string & operator << (const T & val) { ss << val; return *this; }
|
||||
operator std::string() const { return ss.str(); }
|
||||
};
|
||||
|
||||
static std::string TY_ERROR(TY_STATUS status)
|
||||
{
|
||||
return to_string() << status << "(" << TYErrorString(status) << ").";
|
||||
}
|
||||
|
||||
static inline TY_STATUS searchDevice(std::vector<TY_DEVICE_BASE_INFO>& out, const char *inf_id = nullptr, TY_INTERFACE_TYPE type = TY_INTERFACE_ALL)
|
||||
{
|
||||
out.clear();
|
||||
ASSERT_OK( TYUpdateInterfaceList() );
|
||||
|
||||
uint32_t n = 0;
|
||||
ASSERT_OK( TYGetInterfaceNumber(&n) );
|
||||
if(n == 0) return TY_STATUS_ERROR;
|
||||
|
||||
std::vector<TY_INTERFACE_INFO> ifaces(n);
|
||||
ASSERT_OK( TYGetInterfaceList(&ifaces[0], n, &n) );
|
||||
|
||||
bool found = false;
|
||||
std::vector<TY_INTERFACE_HANDLE> hIfaces;
|
||||
for(uint32_t i = 0; i < ifaces.size(); i++){
|
||||
TY_INTERFACE_HANDLE hIface;
|
||||
if(type & ifaces[i].type) {
|
||||
//Interface id not set
|
||||
if (nullptr == inf_id ||
|
||||
//Interface id set and matched
|
||||
strcmp(inf_id, ifaces[i].id) == 0) {
|
||||
ASSERT_OK( TYOpenInterface(ifaces[i].id, &hIface) );
|
||||
hIfaces.push_back(hIface);
|
||||
found = true;
|
||||
//Interface id set and found, just break
|
||||
if(nullptr != inf_id) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
if(!found) return TY_STATUS_ERROR;
|
||||
updateDevicesParallel(hIfaces);
|
||||
|
||||
for (uint32_t i = 0; i < hIfaces.size(); i++) {
|
||||
TY_INTERFACE_HANDLE hIface = hIfaces[i];
|
||||
uint32_t n = 0;
|
||||
TYGetDeviceNumber(hIface, &n);
|
||||
if(n > 0){
|
||||
std::vector<TY_DEVICE_BASE_INFO> devs(n);
|
||||
TYGetDeviceList(hIface, &devs[0], n, &n);
|
||||
for(uint32_t j = 0; j < n; j++) {
|
||||
out.push_back(devs[j]);
|
||||
}
|
||||
}
|
||||
TYCloseInterface(hIface);
|
||||
}
|
||||
|
||||
if(out.size() == 0){
|
||||
std::cout << "not found any device" << std::endl;
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
namespace percipio_layer {
|
||||
|
||||
TYDeviceInfo::TYDeviceInfo(const TY_DEVICE_BASE_INFO& info)
|
||||
{
|
||||
_info = info;
|
||||
}
|
||||
|
||||
TYDeviceInfo::~TYDeviceInfo()
|
||||
{
|
||||
|
||||
}
|
||||
|
||||
const char* TYDeviceInfo::mac()
|
||||
{
|
||||
if(!TYIsNetworkInterface(_info.iface.type)) {
|
||||
return nullptr;
|
||||
}
|
||||
return _info.netInfo.mac;
|
||||
}
|
||||
|
||||
const char* TYDeviceInfo::ip()
|
||||
{
|
||||
if(!TYIsNetworkInterface(_info.iface.type))
|
||||
return nullptr;
|
||||
return _info.netInfo.ip;
|
||||
}
|
||||
|
||||
const char* TYDeviceInfo::netmask()
|
||||
{
|
||||
if(!TYIsNetworkInterface(_info.iface.type))
|
||||
return nullptr;
|
||||
return _info.netInfo.netmask;
|
||||
}
|
||||
|
||||
const char* TYDeviceInfo::gateway()
|
||||
{
|
||||
if(!TYIsNetworkInterface(_info.iface.type))
|
||||
return nullptr;
|
||||
return _info.netInfo.gateway;
|
||||
}
|
||||
|
||||
const char* TYDeviceInfo::broadcast()
|
||||
{
|
||||
if(!TYIsNetworkInterface(_info.iface.type))
|
||||
return nullptr;
|
||||
return _info.netInfo.broadcast;
|
||||
}
|
||||
|
||||
static void eventCallback(TY_EVENT_INFO *event_info, void *userdata) {
|
||||
TYDevice* handle = (TYDevice*)userdata;
|
||||
handle->_event_callback(event_info);
|
||||
}
|
||||
|
||||
TYCamInterface::TYCamInterface()
|
||||
{
|
||||
TYContext::getInstance();
|
||||
Reset();
|
||||
}
|
||||
|
||||
TYCamInterface::~TYCamInterface()
|
||||
{
|
||||
|
||||
}
|
||||
|
||||
TY_STATUS TYCamInterface::Reset()
|
||||
{
|
||||
TY_STATUS status;
|
||||
status = TYUpdateInterfaceList();
|
||||
if(status != TY_STATUS_OK) return status;
|
||||
|
||||
uint32_t n = 0;
|
||||
status = TYGetInterfaceNumber(&n);
|
||||
if(status != TY_STATUS_OK) return status;
|
||||
|
||||
if(n == 0) return TY_STATUS_OK;
|
||||
|
||||
ifaces.resize(n);
|
||||
status = TYGetInterfaceList(&ifaces[0], n, &n);
|
||||
return status;
|
||||
}
|
||||
|
||||
void TYCamInterface::List(std::vector<std::string>& interfaces)
|
||||
{
|
||||
for(auto& iter : ifaces) {
|
||||
std::cout << iter.id << std::endl;
|
||||
interfaces.push_back(iter.id);
|
||||
}
|
||||
}
|
||||
|
||||
FastCamera::FastCamera()
|
||||
{
|
||||
|
||||
}
|
||||
|
||||
FastCamera::FastCamera(const char* sn)
|
||||
{
|
||||
const char *inf = nullptr;
|
||||
if (!mIfaceId.empty()) {
|
||||
inf = mIfaceId.c_str();
|
||||
}
|
||||
auto devList = TYContext::getInstance().queryDeviceList(inf);
|
||||
if(devList->empty()) {
|
||||
return;
|
||||
}
|
||||
|
||||
device = (sn && strlen(sn) != 0) ? devList->getDeviceBySN(sn) : devList->getDevice(0);
|
||||
if(!device) {
|
||||
return;
|
||||
}
|
||||
|
||||
TYGetComponentIDs(device->_handle, &components);
|
||||
}
|
||||
|
||||
TY_STATUS FastCamera::open(const char* sn)
|
||||
{
|
||||
const char *inf = nullptr;
|
||||
if (!mIfaceId.empty()) {
|
||||
inf = mIfaceId.c_str();
|
||||
}
|
||||
|
||||
auto devList = TYContext::getInstance().queryDeviceList(inf);
|
||||
if(devList->empty()) {
|
||||
std::cout << "deivce list is empty!" << std::endl;
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
device = (sn && strlen(sn) != 0) ? devList->getDeviceBySN(sn) : devList->getDevice(0);
|
||||
if(!device) {
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
return TYGetComponentIDs(device->_handle, &components);
|
||||
}
|
||||
|
||||
TY_STATUS FastCamera::openByIP(const char* ip)
|
||||
{
|
||||
const char *inf = nullptr;
|
||||
if (!mIfaceId.empty()) {
|
||||
inf = mIfaceId.c_str();
|
||||
}
|
||||
|
||||
std::unique_lock<std::mutex> lock(_dev_lock);
|
||||
auto devList = TYContext::getInstance().queryNetDeviceList(inf);
|
||||
if(devList->empty()) {
|
||||
std::cout << "net deivce list is empty!" << std::endl;
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
device = (ip && strlen(ip) != 0) ? devList->getDeviceByIP(ip) : devList->getDevice(0);
|
||||
if(!device) {
|
||||
std::cout << "open device failed!" << std::endl;
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
return TYGetComponentIDs(device->_handle, &components);
|
||||
}
|
||||
|
||||
TY_STATUS FastCamera::setIfaceId(const char* inf)
|
||||
{
|
||||
mIfaceId = inf;
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
FastCamera::~FastCamera()
|
||||
{
|
||||
if(isRuning) {
|
||||
doStop();
|
||||
}
|
||||
}
|
||||
|
||||
void FastCamera::close()
|
||||
{
|
||||
std::unique_lock<std::mutex> lock(_dev_lock);
|
||||
if(isRuning) {
|
||||
doStop();
|
||||
}
|
||||
|
||||
if(device) device.reset();
|
||||
}
|
||||
|
||||
std::shared_ptr<TYFrame> FastCamera::fetchFrames(uint32_t timeout_ms)
|
||||
{
|
||||
TY_FRAME_DATA tyframe;
|
||||
TY_STATUS status = TYFetchFrame(handle(), &tyframe, timeout_ms);
|
||||
if(status != TY_STATUS_OK) {
|
||||
std::cout << "Frame fetch failed with err code: " << status << "(" << TYErrorString(status) << ")."<< std::endl;
|
||||
return std::shared_ptr<TYFrame>();
|
||||
}
|
||||
|
||||
std::shared_ptr<TYFrame> frame = std::shared_ptr<TYFrame>(new TYFrame(tyframe));
|
||||
CHECK_RET(TYEnqueueBuffer(handle(), tyframe.userBuffer, tyframe.bufferSize));
|
||||
return frame;
|
||||
}
|
||||
|
||||
static TY_COMPONENT_ID StreamIdx2CompID(FastCamera::stream_idx idx)
|
||||
{
|
||||
TY_COMPONENT_ID comp = 0;
|
||||
switch (idx)
|
||||
{
|
||||
case FastCamera::stream_depth:
|
||||
comp = TY_COMPONENT_DEPTH_CAM;
|
||||
break;
|
||||
case FastCamera::stream_color:
|
||||
comp = TY_COMPONENT_RGB_CAM;
|
||||
break;
|
||||
case FastCamera::stream_ir_left:
|
||||
comp = TY_COMPONENT_IR_CAM_LEFT;
|
||||
break;
|
||||
case FastCamera::stream_ir_right:
|
||||
comp = TY_COMPONENT_IR_CAM_RIGHT;
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
return comp;
|
||||
}
|
||||
bool FastCamera::has_stream(stream_idx idx)
|
||||
{
|
||||
return components & StreamIdx2CompID(idx);
|
||||
}
|
||||
|
||||
TY_STATUS FastCamera::stream_enable(stream_idx idx)
|
||||
{
|
||||
std::unique_lock<std::mutex> lock(_dev_lock);
|
||||
return TYEnableComponents(handle(), StreamIdx2CompID(idx));
|
||||
}
|
||||
|
||||
TY_STATUS FastCamera::stream_disable(stream_idx idx)
|
||||
{
|
||||
std::unique_lock<std::mutex> lock(_dev_lock);
|
||||
return TYDisableComponents(handle(), StreamIdx2CompID(idx));
|
||||
}
|
||||
|
||||
TY_STATUS FastCamera::start()
|
||||
{
|
||||
std::unique_lock<std::mutex> lock(_dev_lock);
|
||||
if(isRuning) {
|
||||
std::cout << "Device is busy!" << std::endl;
|
||||
return TY_STATUS_BUSY;
|
||||
}
|
||||
|
||||
uint32_t stream_buffer_size;
|
||||
TY_STATUS status = TYGetFrameBufferSize(handle(), &stream_buffer_size);
|
||||
if(status != TY_STATUS_OK) {
|
||||
std::cout << "Get frame buffer size failed with error code: " << TY_ERROR(status) << std::endl;
|
||||
return status;
|
||||
}
|
||||
if(stream_buffer_size == 0) {
|
||||
std::cout << "Frame buffer size is 0, is the data flow component not enabled?" << std::endl;
|
||||
return TY_STATUS_DEVICE_ERROR;
|
||||
}
|
||||
|
||||
for(int i = 0; i < BUF_CNT; i++) {
|
||||
stream_buffer[i].resize(stream_buffer_size);
|
||||
TYEnqueueBuffer(handle(), &stream_buffer[i][0], stream_buffer_size);
|
||||
}
|
||||
|
||||
status = TYStartCapture(handle());
|
||||
if(TY_STATUS_OK != status) {
|
||||
std::cout << "Start capture failed with error code: " << TY_ERROR(status) << std::endl;
|
||||
return status;
|
||||
}
|
||||
|
||||
isRuning = true;
|
||||
return TY_STATUS_OK;
|
||||
}
|
||||
|
||||
TY_STATUS FastCamera::stop()
|
||||
{
|
||||
std::unique_lock<std::mutex> lock(_dev_lock);
|
||||
return doStop();
|
||||
}
|
||||
|
||||
TY_STATUS FastCamera::doStop()
|
||||
{
|
||||
if(!isRuning)
|
||||
return TY_STATUS_IDLE;
|
||||
|
||||
isRuning = false;
|
||||
|
||||
TY_STATUS status = TYStopCapture(handle());
|
||||
if(TY_STATUS_OK != status) {
|
||||
std::cout << "Stop capture failed with error code: " << TY_ERROR(status) << std::endl;
|
||||
}
|
||||
//Stop will stop receiving; TYClearBufferQueue is needed anyway
|
||||
//Ignore TYClearBufferQueue ret val
|
||||
TYClearBufferQueue(handle());
|
||||
for(int i = 0; i < BUF_CNT; i++) {
|
||||
stream_buffer[i].clear();
|
||||
}
|
||||
|
||||
return status;
|
||||
}
|
||||
|
||||
std::shared_ptr<TYFrame> FastCamera::tryGetFrames(uint32_t timeout_ms)
|
||||
{
|
||||
std::unique_lock<std::mutex> lock(_dev_lock);
|
||||
return fetchFrames(timeout_ms);
|
||||
}
|
||||
|
||||
TYDevice::TYDevice(const TY_DEV_HANDLE handle, const TY_DEVICE_BASE_INFO& info)
|
||||
{
|
||||
_handle = handle;
|
||||
_dev_info = info;
|
||||
_event_callback = std::bind(&TYDevice::onDeviceEventCallback, this, std::placeholders::_1);
|
||||
TYRegisterEventCallback(_handle, eventCallback, this);
|
||||
}
|
||||
|
||||
TYDevice::~TYDevice()
|
||||
{
|
||||
CHECK_RET(TYCloseDevice(_handle));
|
||||
}
|
||||
|
||||
void TYDevice::registerEventCallback(const TY_EVENT eventID, void* data, EventCallback cb)
|
||||
{
|
||||
_eventCallbackMap[eventID] = {data, cb};
|
||||
}
|
||||
|
||||
void TYDevice::onDeviceEventCallback(const TY_EVENT_INFO *event_info)
|
||||
{
|
||||
if(_eventCallbackMap[event_info->eventId].second != nullptr) {
|
||||
_eventCallbackMap[event_info->eventId].second(_eventCallbackMap[event_info->eventId].first);
|
||||
}
|
||||
}
|
||||
|
||||
std::shared_ptr<TYDeviceInfo> TYDevice::getDeviceInfo()
|
||||
{
|
||||
return std::shared_ptr<TYDeviceInfo>(new TYDeviceInfo(_dev_info));
|
||||
}
|
||||
|
||||
std::set<TY_INTERFACE_HANDLE> DeviceList::gifaces;
|
||||
DeviceList::DeviceList(std::vector<TY_DEVICE_BASE_INFO>& devices)
|
||||
{
|
||||
devs = devices;
|
||||
}
|
||||
|
||||
DeviceList::~DeviceList()
|
||||
{
|
||||
for (TY_INTERFACE_HANDLE iface : gifaces) {
|
||||
TYCloseInterface(iface);
|
||||
}
|
||||
gifaces.clear();
|
||||
}
|
||||
|
||||
std::shared_ptr<TYDeviceInfo> DeviceList::getDeviceInfo(int idx)
|
||||
{
|
||||
if((idx < 0) || (idx >= devCount())) {
|
||||
std::cout << "idx out of range" << std::endl;
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
return std::shared_ptr<TYDeviceInfo>(new TYDeviceInfo(devs[idx]));
|
||||
}
|
||||
|
||||
std::shared_ptr<TYDevice> DeviceList::getDevice(int idx)
|
||||
{
|
||||
if((idx < 0) || (idx >= devCount())) {
|
||||
std::cout << "idx out of range" << std::endl;
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
TY_INTERFACE_HANDLE hIface = NULL;
|
||||
TY_DEV_HANDLE hDevice = NULL;
|
||||
|
||||
TY_STATUS status = TY_STATUS_OK;
|
||||
status = TYOpenInterface(devs[idx].iface.id, &hIface);
|
||||
if(status != TY_STATUS_OK) {
|
||||
std::cout << "Open interface failed with error code: " << TY_ERROR(status) << std::endl;
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
gifaces.insert(hIface);
|
||||
std::string ifaceId = devs[idx].iface.id;
|
||||
std::string open_log = std::string("open device ") + devs[idx].id +
|
||||
"\non interface " + parseInterfaceID(ifaceId);
|
||||
std::cout << open_log << std::endl;
|
||||
status = TYOpenDevice(hIface, devs[idx].id, &hDevice);
|
||||
if(status != TY_STATUS_OK) {
|
||||
std::cout << "Open device < " << devs[idx].id << "> failed with error code: " << TY_ERROR(status) << std::endl;
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
TY_DEVICE_BASE_INFO info;
|
||||
status = TYGetDeviceInfo(hDevice, &info);
|
||||
if(status != TY_STATUS_OK) {
|
||||
std::cout << "Get device info failed with error code: " << TY_ERROR(status) << std::endl;
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
return std::shared_ptr<TYDevice>(new TYDevice(hDevice, info));
|
||||
}
|
||||
|
||||
std::shared_ptr<TYDevice> DeviceList::getDeviceBySN(const char* sn)
|
||||
{
|
||||
TY_STATUS status = TY_STATUS_OK;
|
||||
TY_INTERFACE_HANDLE hIface = NULL;
|
||||
TY_DEV_HANDLE hDevice = NULL;
|
||||
|
||||
if(!sn) {
|
||||
std::cout << "Invalid parameters" << std::endl;
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
for(size_t i = 0; i < devs.size(); i++) {
|
||||
if(strcmp(devs[i].id, sn) == 0) {
|
||||
status = TYOpenInterface(devs[i].iface.id, &hIface);
|
||||
if(status != TY_STATUS_OK) continue;
|
||||
|
||||
gifaces.insert(hIface);
|
||||
std::string ifaceId = devs[i].iface.id;
|
||||
std::string open_log = std::string("open device ") + devs[i].id +
|
||||
"\non interface " + parseInterfaceID(ifaceId);
|
||||
std::cout << open_log << std::endl;
|
||||
status = TYOpenDevice(hIface, devs[i].id, &hDevice);
|
||||
if(status != TY_STATUS_OK) continue;
|
||||
|
||||
TY_DEVICE_BASE_INFO info;
|
||||
status = TYGetDeviceInfo(hDevice, &info);
|
||||
if(status != TY_STATUS_OK) {
|
||||
TYCloseDevice(hDevice);
|
||||
continue;
|
||||
}
|
||||
return std::shared_ptr<TYDevice>(new TYDevice(hDevice, info));
|
||||
}
|
||||
}
|
||||
|
||||
std::cout << "Device <sn:" << sn << "> not found!" << std::endl;
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
std::shared_ptr<TYDevice> DeviceList::getDeviceByIP(const char* ip)
|
||||
{
|
||||
TY_STATUS status = TY_STATUS_OK;
|
||||
TY_INTERFACE_HANDLE hIface = NULL;
|
||||
TY_DEV_HANDLE hDevice = NULL;
|
||||
|
||||
if(!ip) {
|
||||
std::cout << "Invalid parameters" << std::endl;
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
for(size_t i = 0; i < devs.size(); i++) {
|
||||
if(TYIsNetworkInterface(devs[i].iface.type)) {
|
||||
status = TYOpenInterface(devs[i].iface.id, &hIface);
|
||||
if(status != TY_STATUS_OK) continue;
|
||||
std::string open_log = "open device ";
|
||||
if(ip && strlen(ip)) {
|
||||
open_log += ip;
|
||||
status = TYOpenDeviceWithIP(hIface, ip, &hDevice);
|
||||
} else {
|
||||
open_log += devs[i].id;
|
||||
status = TYOpenDevice(hIface, devs[i].id, &hDevice);
|
||||
}
|
||||
std::string ifaceId = devs[i].iface.id;
|
||||
open_log += "\non interface " + parseInterfaceID(ifaceId);
|
||||
std::cout << open_log << std::endl;
|
||||
|
||||
if(status != TY_STATUS_OK) continue;
|
||||
|
||||
TY_DEVICE_BASE_INFO info;
|
||||
status = TYGetDeviceInfo(hDevice, &info);
|
||||
if(status != TY_STATUS_OK) {
|
||||
TYCloseDevice(hDevice);
|
||||
continue;
|
||||
}
|
||||
|
||||
return std::shared_ptr<TYDevice>(new TYDevice(hDevice, info));
|
||||
}
|
||||
}
|
||||
|
||||
std::cout << "Device <ip:" << ip << "> not found!" << std::endl;
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
std::shared_ptr<DeviceList> TYContext::queryDeviceList(const char *iface)
|
||||
{
|
||||
std::vector<TY_DEVICE_BASE_INFO> devs;
|
||||
searchDevice(devs, iface);
|
||||
return std::shared_ptr<DeviceList>(new DeviceList(devs));
|
||||
}
|
||||
|
||||
std::shared_ptr<DeviceList> TYContext::queryNetDeviceList(const char *iface)
|
||||
{
|
||||
std::vector<TY_DEVICE_BASE_INFO> devs;
|
||||
searchDevice(devs, iface, TY_INTERFACE_ETHERNET | TY_INTERFACE_IEEE80211);
|
||||
return std::shared_ptr<DeviceList>(new DeviceList(devs));
|
||||
}
|
||||
|
||||
bool TYContext::ForceNetDeviceIP(const ForceIPStyle style, const std::string& mac, const std::string& ip, const std::string& mask, const std::string& gateway)
|
||||
{
|
||||
ASSERT_OK( TYUpdateInterfaceList() );
|
||||
|
||||
uint32_t n = 0;
|
||||
ASSERT_OK( TYGetInterfaceNumber(&n) );
|
||||
if(n == 0) return false;
|
||||
|
||||
std::vector<TY_INTERFACE_INFO> ifaces(n);
|
||||
ASSERT_OK( TYGetInterfaceList(&ifaces[0], n, &n) );
|
||||
ASSERT( n == ifaces.size() );
|
||||
|
||||
bool open_needed = false;
|
||||
const char * ip_save = ip.c_str();
|
||||
const char * netmask_save = mask.c_str();
|
||||
const char * gateway_save = gateway.c_str();
|
||||
switch(style)
|
||||
{
|
||||
case ForceIPStyleDynamic:
|
||||
if(strcmp(ip_save, "0.0.0.0") != 0) {
|
||||
open_needed = true;
|
||||
}
|
||||
ip_save = "0.0.0.0";
|
||||
netmask_save = "0.0.0.0";
|
||||
gateway_save = "0.0.0.0";
|
||||
break;
|
||||
case ForceIPStyleStatic:
|
||||
open_needed = true;
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
bool result = false;
|
||||
for(uint32_t i = 0; i < n; i++) {
|
||||
if(TYIsNetworkInterface(ifaces[i].type)) {
|
||||
TY_INTERFACE_HANDLE hIface;
|
||||
ASSERT_OK( TYOpenInterface(ifaces[i].id, &hIface) );
|
||||
if (TYForceDeviceIP(hIface, mac.c_str(), ip.c_str(), mask.c_str(), gateway.c_str()) == TY_STATUS_OK) {
|
||||
LOGD("**** Set Temporary IP/Netmask/Gateway ...Done! ****");
|
||||
if(open_needed) {
|
||||
TYUpdateDeviceList(hIface);
|
||||
TY_DEV_HANDLE hDev;
|
||||
if(TYOpenDeviceWithIP(hIface, ip.c_str(), &hDev) == TY_STATUS_OK){
|
||||
int32_t ip_i[4];
|
||||
uint8_t ip_b[4];
|
||||
int32_t ip32;
|
||||
sscanf(ip_save, "%d.%d.%d.%d", &ip_i[0], &ip_i[1], &ip_i[2], &ip_i[3]);
|
||||
ip_b[0] = ip_i[0];ip_b[1] = ip_i[1];ip_b[2] = ip_i[2];ip_b[3] = ip_i[3];
|
||||
ip32 = TYIPv4ToInt(ip_b);
|
||||
ASSERT_OK( TYSetInt(hDev, TY_COMPONENT_DEVICE, TY_INT_PERSISTENT_IP, ip32) );
|
||||
sscanf(netmask_save, "%d.%d.%d.%d", &ip_i[0], &ip_i[1], &ip_i[2], &ip_i[3]);
|
||||
ip_b[0] = ip_i[0];ip_b[1] = ip_i[1];ip_b[2] = ip_i[2];ip_b[3] = ip_i[3];
|
||||
ip32 = TYIPv4ToInt(ip_b);
|
||||
ASSERT_OK( TYSetInt(hDev, TY_COMPONENT_DEVICE, TY_INT_PERSISTENT_SUBMASK, ip32) );
|
||||
sscanf(gateway_save, "%d.%d.%d.%d", &ip_i[0], &ip_i[1], &ip_i[2], &ip_i[3]);
|
||||
ip_b[0] = ip_i[0];ip_b[1] = ip_i[1];ip_b[2] = ip_i[2];ip_b[3] = ip_i[3];
|
||||
ip32 = TYIPv4ToInt(ip_b);
|
||||
ASSERT_OK( TYSetInt(hDev, TY_COMPONENT_DEVICE, TY_INT_PERSISTENT_GATEWAY, ip32) );
|
||||
|
||||
result = true;
|
||||
std::cout << "**** Set Persistent IP/Netmask/Gateway ...Done! ****" <<std::endl;
|
||||
} else {
|
||||
result = false;
|
||||
}
|
||||
} else {
|
||||
result = true;
|
||||
}
|
||||
}
|
||||
ASSERT_OK( TYCloseInterface(hIface));
|
||||
}
|
||||
}
|
||||
return result;
|
||||
}
|
||||
}
|
||||
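`ForceNetDeviceIP` above combines a temporary `TYForceDeviceIP` with writing the persistent IP registers when a static style is requested. A hedged call sketch (all MAC/IP values are placeholders):

```cpp
// Minimal sketch (assumption): give a network camera a static, persistent IP by MAC address.
#include "Device.hpp"

bool assignStaticIp()
{
    return percipio_layer::TYContext::getInstance().ForceNetDeviceIP(
        percipio_layer::ForceIPStyleStatic,   // also writes persistent IP/netmask/gateway
        "11:22:33:44:55:66",                  // placeholder MAC of the target device
        "192.168.6.100",                      // placeholder IP
        "255.255.255.0",                      // placeholder netmask
        "192.168.6.1");                       // placeholder gateway
}
```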
472
image_capture/third_party/percipio/sample_v2/cpp/Frame.cpp
vendored
Normal file
@@ -0,0 +1,472 @@
|
||||
#include <thread>
|
||||
|
||||
#include "Frame.hpp"
|
||||
#include "TYImageProc.h"
|
||||
|
||||
namespace percipio_layer {
|
||||
|
||||
|
||||
TYImage::TYImage()
|
||||
{
|
||||
memset(&image_data, 0, sizeof(image_data));
|
||||
}
|
||||
|
||||
TYImage::TYImage(const TY_IMAGE_DATA& image) :
|
||||
m_isOwner(false)
|
||||
{
|
||||
memcpy(&image_data, &image, sizeof(TY_IMAGE_DATA));
|
||||
}
|
||||
|
||||
TYImage::TYImage(const TYImage& src)
|
||||
{
|
||||
image_data.timestamp = src.timestamp();
|
||||
image_data.imageIndex = src.imageIndex();
|
||||
image_data.status = src.status();
|
||||
image_data.componentID = src.componentID();
|
||||
image_data.size = src.size();
|
||||
image_data.width = src.width();
|
||||
image_data.height = src.height();
|
||||
image_data.pixelFormat = src.pixelFormat();
|
||||
if(image_data.size) {
|
||||
m_isOwner = true;
|
||||
image_data.buffer = malloc(image_data.size);
|
||||
memcpy(image_data.buffer, src.buffer(), image_data.size);
|
||||
}
|
||||
}
|
||||
|
||||
TYImage::TYImage(int32_t width, int32_t height, TY_COMPONENT_ID compID, TY_PIXEL_FORMAT format, int32_t size)
|
||||
{
|
||||
image_data.size = size;
|
||||
image_data.width = width;
|
||||
image_data.height = height;
|
||||
image_data.componentID = compID;
|
||||
image_data.pixelFormat = format;
|
||||
if(image_data.size) {
|
||||
m_isOwner = true;
|
||||
image_data.buffer = calloc(image_data.size, 1);
|
||||
}
|
||||
}
|
||||
|
||||
bool TYImage::resize(int w, int h)
|
||||
{
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
cv::Mat src, dst;
|
||||
switch(image_data.pixelFormat)
|
||||
{
|
||||
case TY_PIXEL_FORMAT_BGR:
|
||||
case TY_PIXEL_FORMAT_RGB:
|
||||
src = cv::Mat(cv::Size(width(), height()), CV_8UC3, buffer());
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_MONO:
|
||||
src = cv::Mat(cv::Size(width(), height()), CV_8U, buffer());
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_MONO16:
|
||||
src = cv::Mat(cv::Size(width(), height()), CV_16U, buffer());
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_BGR48:
|
||||
src = cv::Mat(cv::Size(width(), height()), CV_16UC3, buffer());
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_RGB48:
|
||||
src = cv::Mat(cv::Size(width(), height()), CV_16UC3, buffer());
|
||||
break;
|
||||
case TY_PIXEL_FORMAT_DEPTH16:
|
||||
src = cv::Mat(cv::Size(width(), height()), CV_16U, buffer());
|
||||
break;
|
||||
default:
|
||||
return false;
|
||||
}
|
||||
|
||||
if(image_data.pixelFormat == TY_PIXEL_FORMAT_DEPTH16)
|
||||
cv::resize(src, dst, cv::Size(w, h), 0, 0, cv::INTER_NEAREST);
|
||||
else
|
||||
cv::resize(src, dst, cv::Size(w, h));
|
||||
image_data.size = dst.total() * dst.elemSize(); // elemSize() already includes the channel count
|
||||
image_data.width = dst.cols;
|
||||
image_data.height = dst.rows;
|
||||
if(m_isOwner) free(image_data.buffer);
|
||||
image_data.buffer = malloc(image_data.size);
|
||||
memcpy(image_data.buffer, dst.data, image_data.size);
|
||||
return true;
|
||||
#else
|
||||
std::cout << "not support!" << std::endl;
|
||||
return false;
|
||||
#endif
|
||||
}
|
||||
|
||||
TYImage::~TYImage()
|
||||
{
|
||||
if(m_isOwner) {
|
||||
free(image_data.buffer);
|
||||
}
|
||||
}
|
||||
|
||||
ImageProcesser::ImageProcesser(const char* win, const TY_CAMERA_CALIB_INFO* calib_data, const TY_ISP_HANDLE isp_handle)
|
||||
{
|
||||
win_name = win;
|
||||
hasWin = false;
|
||||
color_isp_handle = isp_handle;
|
||||
if(calib_data != nullptr) {
|
||||
_calib_data = std::shared_ptr<TY_CAMERA_CALIB_INFO>(new TY_CAMERA_CALIB_INFO(*calib_data));
|
||||
}
|
||||
}
|
||||
|
||||
int ImageProcesser::parse(const std::shared_ptr<TYImage>& image)
|
||||
{
|
||||
if(!image) return -1;
|
||||
TY_PIXEL_FORMAT format = image->pixelFormat();
|
||||
#ifndef OPENCV_DEPENDENCIES
|
||||
std::cout << win() << " image size : " << image->width() << " x " << image->height() << std::endl;
|
||||
#endif
|
||||
switch(format) {
|
||||
/*
|
||||
case TY_PIXEL_FORMAT_BGR:
|
||||
case TY_PIXEL_FORMAT_RGB:
|
||||
case TY_PIXEL_FORMAT_MONO:
|
||||
case TY_PIXEL_FORMAT_MONO16:
|
||||
case TY_PIXEL_FORMAT_BGR48:
|
||||
case TY_PIXEL_FORMAT_RGB48:
|
||||
*/
|
||||
case TY_PIXEL_FORMAT_DEPTH16:
|
||||
{
|
||||
_image = std::shared_ptr<TYImage>(new TYImage(*image));
|
||||
return 0;
|
||||
}
|
||||
case TY_PIXEL_FORMAT_XYZ48:
|
||||
{
|
||||
std::vector<int16_t> depth_data(image->width() * image->height());
|
||||
int16_t* src = static_cast<int16_t*>(image->buffer());
|
||||
for (int pix = 0; pix < image->width()*image->height(); pix++) {
|
||||
depth_data[pix] = *(src + 3*pix + 2);
|
||||
}
|
||||
|
||||
_image = std::shared_ptr<TYImage>(new TYImage(image->width(), image->height(), image->componentID(), TY_PIXEL_FORMAT_DEPTH16, depth_data.size() * sizeof(int16_t)));
|
||||
memcpy(_image->buffer(), depth_data.data(), depth_data.size() * sizeof(int16_t));
|
||||
return 0;
|
||||
}
|
||||
default:
|
||||
{
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
cv::Mat cvImage;
|
||||
int32_t image_size;
|
||||
TY_PIXEL_FORMAT image_fmt;
|
||||
TY_COMPONENT_ID comp_id;
|
||||
comp_id = image->componentID();
|
||||
parseImage(image->image(), &cvImage, color_isp_handle);
|
||||
switch(cvImage.type())
|
||||
{
|
||||
case CV_8U:
|
||||
//MONO8
|
||||
image_size = cvImage.size().area();
|
||||
image_fmt = TY_PIXEL_FORMAT_MONO;
|
||||
break;
|
||||
case CV_16U:
|
||||
//MONO16
|
||||
image_size = cvImage.size().area() * 2;
|
||||
image_fmt = TY_PIXEL_FORMAT_MONO16;
|
||||
break;
|
||||
case CV_16UC3:
|
||||
//BGR48
|
||||
image_size = cvImage.size().area() * 6;
|
||||
image_fmt = TY_PIXEL_FORMAT_BGR48;
|
||||
break;
|
||||
default:
|
||||
//BGR888
|
||||
image_size = cvImage.size().area() * 3;
|
||||
image_fmt = TY_PIXEL_FORMAT_BGR;
|
||||
break;
|
||||
}
|
||||
_image = std::shared_ptr<TYImage>(new TYImage(cvImage.cols, cvImage.rows, comp_id, image_fmt, image_size));
|
||||
memcpy(_image->buffer(), cvImage.data, image_size);
|
||||
return 0;
|
||||
#else
|
||||
|
||||
//Without the OpenCV library, image decoding is not supported yet.
|
||||
return -1;
|
||||
#endif
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
int ImageProcesser::DepthImageRender()
|
||||
{
|
||||
if(!_image) return -1;
|
||||
TY_PIXEL_FORMAT format = _image->pixelFormat();
|
||||
if(format != TY_PIXEL_FORMAT_DEPTH16) return -1;
|
||||
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
static DepthRender render;
|
||||
cv::Mat depth = cv::Mat(_image->height(), _image->width(), CV_16U, _image->buffer());
|
||||
cv::Mat bgr = render.Compute(depth);
|
||||
|
||||
_image = std::shared_ptr<TYImage>(new TYImage(_image->width(), _image->height(), _image->componentID(), TY_PIXEL_FORMAT_BGR, bgr.size().area() * 3));
|
||||
memcpy(_image->buffer(), bgr.data, _image->size());
|
||||
return 0;
|
||||
#else
|
||||
return -1;
|
||||
#endif
|
||||
}
|
||||
|
||||
TY_STATUS ImageProcesser::doUndistortion()
|
||||
{
|
||||
int ret = 0;
|
||||
if(ret == 0) {
|
||||
if(!_calib_data) {
|
||||
std::cout << "Calib data is empty!" << std::endl;
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
|
||||
int32_t image_size = _image->size();
|
||||
TY_PIXEL_FORMAT image_fmt = _image->pixelFormat();
|
||||
TY_COMPONENT_ID comp_id = _image->componentID();
|
||||
|
||||
std::vector<uint8_t> undistort_image(image_size);
|
||||
|
||||
TY_IMAGE_DATA src;
|
||||
src.width = _image->width();
|
||||
src.height = _image->height();
|
||||
src.size = image_size;
|
||||
src.pixelFormat = image_fmt;
|
||||
src.buffer = _image->buffer();
|
||||
|
||||
TY_IMAGE_DATA dst;
|
||||
dst.width = _image->width();
|
||||
dst.height = _image->height();
|
||||
dst.size = image_size;
|
||||
dst.pixelFormat = image_fmt;
|
||||
dst.buffer = undistort_image.data();
|
||||
|
||||
TY_STATUS status = TYUndistortImage(&*_calib_data, &src, NULL, &dst);
|
||||
if(status != TY_STATUS_OK) {
|
||||
std::cout << "Do image undistortion failed!" << std::endl;
|
||||
return status;
|
||||
}
|
||||
|
||||
_image = std::shared_ptr<TYImage>(new TYImage(_image->width(), _image->height(), comp_id, image_fmt, image_size));
|
||||
memcpy(_image->buffer(), undistort_image.data(), image_size);
|
||||
return TY_STATUS_OK;
|
||||
} else {
|
||||
std::cout << "Image decoding failed." << std::endl;
|
||||
return TY_STATUS_ERROR;
|
||||
}
|
||||
}
|
||||
|
||||
int ImageProcesser::show()
|
||||
{
|
||||
if(!_image) return -1;
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
cv::Mat display;
|
||||
switch(_image->pixelFormat())
|
||||
{
|
||||
case TY_PIXEL_FORMAT_MONO:
|
||||
{
|
||||
display = cv::Mat(_image->height(), _image->width(), CV_8U, _image->buffer());
|
||||
break;
|
||||
}
|
||||
case TY_PIXEL_FORMAT_MONO16:
|
||||
{
|
||||
display = cv::Mat(_image->height(), _image->width(), CV_16U, _image->buffer());
|
||||
break;
|
||||
}
|
||||
case TY_PIXEL_FORMAT_BGR:
|
||||
{
|
||||
display = cv::Mat(_image->height(), _image->width(), CV_8UC3, _image->buffer());
|
||||
break;
|
||||
}
|
||||
case TY_PIXEL_FORMAT_BGR48:
|
||||
{
|
||||
display = cv::Mat(_image->height(), _image->width(), CV_16UC3, _image->buffer());
|
||||
break;
|
||||
}
|
||||
case TY_PIXEL_FORMAT_DEPTH16:
|
||||
{
|
||||
DepthImageRender();
|
||||
display = cv::Mat(_image->height(), _image->width(), CV_8UC3, _image->buffer());
|
||||
break;
|
||||
}
|
||||
default:
|
||||
{
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if(!display.empty()) {
|
||||
hasWin = true;
|
||||
cv::imshow(win_name.c_str(), display);
|
||||
int key = cv::waitKey(1);
|
||||
return key;
|
||||
}
|
||||
else
|
||||
std::cout << "Unknown image encoding format." << std::endl;
|
||||
#endif
|
||||
return 0;
|
||||
}
|
||||
|
||||
void ImageProcesser::clear()
|
||||
{
|
||||
#ifdef OPENCV_DEPENDENCIES
|
||||
if (hasWin) {
|
||||
cv::destroyWindow(win_name.c_str());
|
||||
}
|
||||
#endif
|
||||
}
|
||||
|
||||
TYFrame::TYFrame(const TY_FRAME_DATA& frame)
|
||||
{
|
||||
bufferSize = frame.bufferSize;
|
||||
userBuffer.resize(bufferSize);
|
||||
memcpy(userBuffer.data(), frame.userBuffer, bufferSize);
|
||||
|
||||
#define TY_IMAGE_MOVE(src, dst, from, to) do { \
|
||||
(to) = (from); \
|
||||
(to.buffer) = reinterpret_cast<void*>((std::intptr_t(dst)) + (std::intptr_t(from.buffer) - std::intptr_t(src)));\
|
||||
}while(0)
|
||||
|
||||
for (int i = 0; i < frame.validCount; i++) {
|
||||
TY_IMAGE_DATA img;
|
||||
if (frame.image[i].status != TY_STATUS_OK) continue;
|
||||
|
||||
// get depth image
|
||||
if (frame.image[i].componentID == TY_COMPONENT_DEPTH_CAM) {
|
||||
TY_IMAGE_MOVE(frame.userBuffer, userBuffer.data(), frame.image[i], img);
|
||||
_images[TY_COMPONENT_DEPTH_CAM] = std::shared_ptr<TYImage>(new TYImage(img));
|
||||
}
|
||||
// get left ir image
|
||||
if (frame.image[i].componentID == TY_COMPONENT_IR_CAM_LEFT) {
|
||||
TY_IMAGE_MOVE(frame.userBuffer, userBuffer.data(), frame.image[i], img);
|
||||
_images[TY_COMPONENT_IR_CAM_LEFT] = std::shared_ptr<TYImage>(new TYImage(img));
|
||||
}
|
||||
// get right ir image
|
||||
if (frame.image[i].componentID == TY_COMPONENT_IR_CAM_RIGHT) {
|
||||
TY_IMAGE_MOVE(frame.userBuffer, userBuffer.data(), frame.image[i], img);
|
||||
_images[TY_COMPONENT_IR_CAM_RIGHT] = std::shared_ptr<TYImage>(new TYImage(img));
|
||||
}
|
||||
// get color image
|
||||
if (frame.image[i].componentID == TY_COMPONENT_RGB_CAM) {
|
||||
TY_IMAGE_MOVE(frame.userBuffer, userBuffer.data(), frame.image[i], img);
|
||||
_images[TY_COMPONENT_RGB_CAM] = std::shared_ptr<TYImage>(new TYImage(img));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
TYFrame::~TYFrame()
|
||||
{
|
||||
|
||||
}
|
||||
|
||||
|
||||
TYFrameParser::TYFrameParser(uint32_t max_queue_size, const TY_ISP_HANDLE isp_handle)
|
||||
{
|
||||
_max_queue_size = max_queue_size;
|
||||
isRuning = true;
|
||||
|
||||
setImageProcesser(TY_COMPONENT_DEPTH_CAM, std::shared_ptr<ImageProcesser>(new ImageProcesser("depth")));
|
||||
setImageProcesser(TY_COMPONENT_IR_CAM_LEFT, std::shared_ptr<ImageProcesser>(new ImageProcesser("Left-IR")));
|
||||
setImageProcesser(TY_COMPONENT_IR_CAM_RIGHT, std::shared_ptr<ImageProcesser>(new ImageProcesser("Right-IR")));
|
||||
setImageProcesser(TY_COMPONENT_RGB_CAM, std::shared_ptr<ImageProcesser>(new ImageProcesser("color", nullptr, isp_handle)));
|
||||
|
||||
processThread_ = std::thread(&TYFrameParser::display, this);
|
||||
}
|
||||
|
||||
TYFrameParser::~TYFrameParser()
|
||||
{
|
||||
isRuning = false;
|
||||
processThread_.join();
|
||||
}
|
||||
|
||||
int TYFrameParser::setImageProcesser(TY_COMPONENT_ID id, std::shared_ptr<ImageProcesser> proc)
|
||||
{
|
||||
stream[id] = proc;
|
||||
return 0;
|
||||
}
|
||||
|
||||
int TYFrameParser::doProcess(const std::shared_ptr<TYFrame>& img)
|
||||
{
|
||||
auto depth = img->depthImage();
|
||||
auto color = img->colorImage();
|
||||
auto left_ir = img->leftIRImage();
|
||||
auto right_ir = img->rightIRImage();
|
||||
|
||||
if (left_ir) {
|
||||
stream[TY_COMPONENT_IR_CAM_LEFT]->parse(left_ir);
|
||||
}
|
||||
|
||||
if (right_ir) {
|
||||
stream[TY_COMPONENT_IR_CAM_RIGHT]->parse(right_ir);
|
||||
}
|
||||
|
||||
if (color) {
|
||||
stream[TY_COMPONENT_RGB_CAM]->parse(color);
|
||||
}
|
||||
|
||||
if (depth) {
|
||||
stream[TY_COMPONENT_DEPTH_CAM]->parse(depth);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
void TYFrameParser::display()
|
||||
{
|
||||
int ret = 0;
|
||||
while(isRuning) {
|
||||
if(images.size()) {
|
||||
std::unique_lock<std::mutex> lock(_queue_lock);
|
||||
std::shared_ptr<TYFrame> img = images.front();
|
||||
|
||||
if(img) {
|
||||
images.pop();
|
||||
doProcess(img);
|
||||
}
|
||||
}
|
||||
|
||||
for(auto& iter : stream) {
|
||||
ret = iter.second->show();
|
||||
if(ret > 0) {
|
||||
if(func_keyboard_event) func_keyboard_event(ret, user_data);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
inline void TYFrameParser::ImageQueueSizeCheck()
|
||||
{
|
||||
while(images.size() >= _max_queue_size)
|
||||
images.pop();
|
||||
}
|
||||
|
||||
void TYFrameParser::update(const std::shared_ptr<TYFrame>& frame)
|
||||
{
|
||||
std::unique_lock<std::mutex> lock(_queue_lock);
|
||||
if(frame) {
|
||||
ImageQueueSizeCheck();
|
||||
images.push(frame);
|
||||
#ifndef OPENCV_DEPENDENCIES
|
||||
auto depth = frame->depthImage();
|
||||
auto color = frame->colorImage();
|
||||
auto left_ir = frame->leftIRImage();
|
||||
auto right_ir = frame->rightIRImage();
|
||||
|
||||
if (left_ir) {
|
||||
auto image = left_ir;
|
||||
std::cout << "Left" << " image size : " << image->width() << " x " << image->height() << std::endl;
|
||||
}
|
||||
|
||||
if (right_ir) {
|
||||
auto image = right_ir;
|
||||
std::cout << "Right" << " image size : " << image->width() << " x " << image->height() << std::endl;
|
||||
}
|
||||
|
||||
if (color) {
|
||||
auto image = color;
|
||||
std::cout << "Color" << " image size : " << image->width() << " x " << image->height() << std::endl;
|
||||
}
|
||||
|
||||
if (depth) {
|
||||
auto image = depth;
|
||||
std::cout << "Depth" << " image size : " << image->width() << " x " << image->height() << std::endl;
|
||||
}
|
||||
|
||||
#endif
|
||||
}
|
||||
}
|
||||
}//namespace percipio_layer
|
||||
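`TYFrameParser` is driven by pushing fetched frames into its queue via `update()`, while its own thread parses and shows them. A minimal sketch under that assumption (queue size, timeout, and the `running` flag are placeholders; `FastCamera` comes from Device.hpp below):

```cpp
// Minimal sketch (assumption): feed fetched frames into TYFrameParser, which parses
// and displays them on its own thread (see TYFrameParser::display above).
#include "Device.hpp"

void feedParser(percipio_layer::FastCamera& cam, const bool& running)  // 'cam' already opened and started
{
    percipio_layer::TYFrameParser parser(5, nullptr);   // queue size 5, no ISP handle (placeholders)
    while (running) {
        auto frame = cam.tryGetFrames(2000);            // 2 s timeout (placeholder)
        if (frame) parser.update(frame);                // queued, then handled in display()
    }
}
```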
239
image_capture/third_party/percipio/sample_v2/hpp/Device.hpp
vendored
Normal file
@@ -0,0 +1,239 @@
|
||||
#pragma once
|
||||
|
||||
#include <memory>
|
||||
#include <vector>
|
||||
#include <set>
|
||||
#include <functional>
|
||||
#include <mutex>
|
||||
#include <queue>
|
||||
#include <thread>
|
||||
#include <condition_variable>
|
||||
#include <stdint.h>
|
||||
|
||||
#include "Frame.hpp"
|
||||
|
||||
namespace percipio_layer {
|
||||
|
||||
class TYDevice;
|
||||
class DeviceList;
|
||||
class TYContext;
|
||||
class TYFrame;
|
||||
class FastCamera;
|
||||
|
||||
static std::string parseInterfaceID(std::string &ifaceId)
|
||||
{
|
||||
std::string type_s = ifaceId.substr(0, ifaceId.find('-'));
|
||||
if ("usb" == type_s) {
|
||||
//add usb specific parse if needed
|
||||
}
|
||||
if ("eth" == type_s || "wifi" == type_s) {
|
||||
//eth-2c:f0:5d:ac:5d:6265eea8c0
|
||||
//eth-2c:f0:5d:ac:5d:62
|
||||
size_t IdLength = 18 + type_s.length();
|
||||
std::string new_id = ifaceId.substr(0, IdLength);
|
||||
// 65eea8c0
|
||||
std::string ip_s = ifaceId.substr(IdLength, ifaceId.size() - IdLength);
|
||||
//base = 16
|
||||
uint32_t ip = static_cast<uint32_t>(std::stoul(ip_s, nullptr, 16));
|
||||
uint8_t *ip_arr = (uint8_t *)&ip;
|
||||
new_id += " ip:";
|
||||
for(int i = 0; i < 3; i++) {
|
||||
new_id += std::to_string((uint32_t) ip_arr[i]) + ".";
|
||||
}
|
||||
new_id += std::to_string((uint32_t) ip_arr[3]);
|
||||
return new_id;
|
||||
}
|
||||
return ifaceId;
|
||||
}
|
||||
|
||||
class TYDeviceInfo
|
||||
{
|
||||
public:
|
||||
~TYDeviceInfo();
|
||||
TYDeviceInfo(TYDeviceInfo const&) = delete;
|
||||
void operator=(TYDeviceInfo const&) = delete;
|
||||
|
||||
friend class TYDevice;
|
||||
friend class DeviceList;
|
||||
|
||||
const char* id() { return _info.id; }
|
||||
const TY_INTERFACE_INFO& Interface() { return _info.iface; }
|
||||
|
||||
const char* vendorName()
|
||||
{
|
||||
//specific Vendor name for some camera
|
||||
if (strlen(_info.userDefinedName) != 0) {
|
||||
return _info.userDefinedName;
|
||||
} else {
|
||||
return _info.vendorName;
|
||||
}
|
||||
}
|
||||
const char* modelName() { return _info.modelName; }
|
||||
const char* buildHash() { return _info.buildHash; }
|
||||
const char* configVersion() { return _info.configVersion; }
|
||||
|
||||
const TY_VERSION_INFO& hardwareVersion() { return _info.hardwareVersion; }
|
||||
const TY_VERSION_INFO& firmwareVersion() { return _info.firmwareVersion; }
|
||||
|
||||
const char* mac();
|
||||
const char* ip();
|
||||
const char* netmask();
|
||||
const char* gateway();
|
||||
const char* broadcast();
|
||||
private:
|
||||
TYDeviceInfo(const TY_DEVICE_BASE_INFO& info);
|
||||
TY_DEVICE_BASE_INFO _info;
|
||||
};
|
||||
|
||||
typedef std::function<void(void* userdata)> EventCallback;
|
||||
typedef std::pair<void*, EventCallback> event_pair;
|
||||
static void eventCallback(TY_EVENT_INFO *event_info, void *userdata);
|
||||
class TYDevice
|
||||
{
|
||||
public:
|
||||
~TYDevice();
|
||||
void operator=(TYDevice const&) = delete;
|
||||
|
||||
friend class FastCamera;
|
||||
friend class TYStream;
|
||||
friend class DeviceList;
|
||||
friend class TYPropertyManager;
|
||||
friend void eventCallback(TY_EVENT_INFO *event_info, void *userdata);
|
||||
|
||||
std::shared_ptr<TYDeviceInfo> getDeviceInfo();
|
||||
void registerEventCallback (const TY_EVENT eventID, void* data, EventCallback cb);
|
||||
|
||||
private:
|
||||
TYDevice(const TY_DEV_HANDLE handle, const TY_DEVICE_BASE_INFO& info);
|
||||
|
||||
TY_DEV_HANDLE _handle;
|
||||
TY_DEVICE_BASE_INFO _dev_info;
|
||||
|
||||
std::map<TY_EVENT, event_pair> _eventCallbackMap;
|
||||
|
||||
std::function<void(TY_EVENT_INFO*)> _event_callback;
|
||||
void onDeviceEventCallback(const TY_EVENT_INFO *event_info);
|
||||
};
|
||||
|
||||
class DeviceList {
|
||||
public:
|
||||
~DeviceList();
|
||||
DeviceList(DeviceList const&) = delete;
|
||||
void operator=(DeviceList const&) = delete;
|
||||
|
||||
bool empty() { return devs.size() == 0; }
|
||||
int devCount() { return devs.size(); }
|
||||
|
||||
std::shared_ptr<TYDeviceInfo> getDeviceInfo(int idx);
|
||||
std::shared_ptr<TYDevice> getDevice(int idx);
|
||||
std::shared_ptr<TYDevice> getDeviceBySN(const char* sn);
|
||||
std::shared_ptr<TYDevice> getDeviceByIP(const char* ip);
|
||||
|
||||
friend class TYContext;
|
||||
private:
|
||||
std::vector<TY_DEVICE_BASE_INFO> devs;
|
||||
static std::set<TY_INTERFACE_HANDLE> gifaces;
|
||||
DeviceList(std::vector<TY_DEVICE_BASE_INFO>& devices);
|
||||
};
|
||||
|
||||
enum ForceIPStyle {
|
||||
ForceIPStyleDynamic = 0,
|
||||
ForceIPStyleForce = 1,
|
||||
ForceIPStyleStatic = 2
|
||||
};
|
||||
|
||||
class TYContext {
|
||||
public:
|
||||
static TYContext& getInstance() {
|
||||
static TYContext instance;
|
||||
return instance;
|
||||
}
|
||||
|
||||
TYContext(TYContext const&) = delete;
|
||||
void operator=(TYContext const&) = delete;
|
||||
|
||||
std::shared_ptr<DeviceList> queryDeviceList(const char *iface = nullptr);
|
||||
std::shared_ptr<DeviceList> queryNetDeviceList(const char *iface = nullptr);
|
||||
|
||||
bool ForceNetDeviceIP(const ForceIPStyle style, const std::string& mac, const std::string& ip, const std::string& mask, const std::string& gateway);
|
||||
|
||||
private:
|
||||
TYContext() {
|
||||
ASSERT_OK(TYInitLib());
|
||||
TY_VERSION_INFO ver;
|
||||
ASSERT_OK( TYLibVersion(&ver) );
|
||||
std::cout << "=== lib version: " << ver.major << "." << ver.minor << "." << ver.patch << std::endl;
|
||||
}
|
||||
|
||||
~TYContext() {
|
||||
ASSERT_OK(TYDeinitLib());
|
||||
}
|
||||
};
|
||||
|
||||
class TYCamInterface
|
||||
{
|
||||
public:
|
||||
TYCamInterface();
|
||||
~TYCamInterface();
|
||||
|
||||
TY_STATUS Reset();
|
||||
void List(std::vector<std::string>& );
|
||||
private:
|
||||
std::vector<TY_INTERFACE_INFO> ifaces;
|
||||
};
|
||||
|
||||
class FastCamera
|
||||
{
|
||||
public:
|
||||
enum stream_idx
|
||||
{
|
||||
stream_depth = 0x1,
|
||||
stream_color = 0x2,
|
||||
stream_ir_left = 0x4,
|
||||
stream_ir_right = 0x8,
|
||||
stream_ir = stream_ir_left
|
||||
};
|
||||
friend class TYFrame;
|
||||
FastCamera();
|
||||
FastCamera(const char* sn);
|
||||
~FastCamera();
|
||||
|
||||
virtual TY_STATUS open(const char* sn);
|
||||
TY_STATUS setIfaceId(const char* inf);
|
||||
virtual TY_STATUS openByIP(const char* ip);
|
||||
virtual bool has_stream(stream_idx idx);
|
||||
virtual TY_STATUS stream_enable(stream_idx idx);
|
||||
virtual TY_STATUS stream_disable(stream_idx idx);
|
||||
|
||||
virtual TY_STATUS start();
|
||||
virtual TY_STATUS stop();
|
||||
virtual void close();
|
||||
|
||||
std::shared_ptr<TYFrame> tryGetFrames(uint32_t timeout_ms);
|
||||
|
||||
TY_DEV_HANDLE handle() {
|
||||
if (!device) {
|
||||
// std::cerr << "Error: Device handle accessed but device is null!" << std::endl;
|
||||
return 0;
|
||||
}
|
||||
return device->_handle;
|
||||
}
|
||||
|
||||
void RegisterOfflineEventCallback(EventCallback cb, void* data) { device->registerEventCallback(TY_EVENT_DEVICE_OFFLINE, data, cb); }
|
||||
|
||||
private:
|
||||
std::string mIfaceId;
|
||||
std::mutex _dev_lock;
|
||||
|
||||
TY_COMPONENT_ID components = 0;
|
||||
#define BUF_CNT (3)
|
||||
|
||||
bool isRuning = false;
|
||||
std::shared_ptr<TYFrame> fetchFrames(uint32_t timeout_ms);
|
||||
TY_STATUS doStop();
|
||||
|
||||
std::shared_ptr<TYDevice> device;
|
||||
std::vector<uint8_t> stream_buffer[BUF_CNT];
|
||||
};
|
||||
|
||||
}
|
||||
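Taken together, the `FastCamera` interface suggests a simple open → enable → start → fetch → stop loop. A minimal sketch assuming a single depth-capable device is connected (timeout and error handling are placeholders):

```cpp
// Minimal sketch (assumption): open the first device, grab one depth frame, shut down.
#include "Device.hpp"

int captureOneDepthFrame()
{
    percipio_layer::FastCamera cam;
    if (cam.open(nullptr) != TY_STATUS_OK) return -1;                 // nullptr: first device found
    if (cam.has_stream(percipio_layer::FastCamera::stream_depth))
        cam.stream_enable(percipio_layer::FastCamera::stream_depth);
    if (cam.start() != TY_STATUS_OK) { cam.close(); return -1; }
    auto frame = cam.tryGetFrames(2000);                              // 2 s timeout (placeholder)
    if (frame) {
        auto depth = frame->depthImage();                             // accessor used in Frame.cpp
        // ... process the depth image ...
    }
    cam.stop();
    cam.close();
    return 0;
}
```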
Some files were not shown because too many files have changed in this diff.