Fix calibration tool issues

qupengwei
2026-01-06 11:38:00 +08:00
parent 17d96c61c4
commit d75b8d86f3
5 changed files with 914 additions and 1597 deletions

docs/tool_usage_guide.md (new file, 85 lines added)

@@ -0,0 +1,85 @@
# Image Capture Tools Guide
This document gives an overview of the helper tools located under `image_capture/src/tools`. They support camera configuration, calibration, and algorithm tuning for the material-box storage detection system.
## Tool Overview and Recommended Order
When bringing up a new system or a new camera, the recommended order of use is:
1. **[intrinsic_dumper](#1-intrinsic_dumper)** - Extract the camera's factory parameters.
2. **[calibration_tool](#2-calibration_tool)** - Calibrate the camera's pose relative to the environment.
3. **[generate_reference](#3-generate_reference)** - (Optional) Create reference data for pallet detection.
4. **[slot_algo_tuner](#4-slot_algo_tuner)** - (As needed) Tune algorithm parameters offline.
---
### 1. intrinsic_dumper (Intrinsics Export Tool)
**Path**: `src/tools/intrinsic_dumper`
**Purpose**:
This is the **first step** of the setup workflow. It is a command-line tool that communicates with the connected Percipio camera and automatically extracts its factory-calibrated intrinsics, distortion coefficients, and internal extrinsics (between the RGB and Depth sensors).
**Key features**:
* Automatically scans for connected devices.
* Extracts parameters for both the Depth and Color sensors.
* Writes a JSON file in a standard format for use by the other tools and the main program.
**Output**:
* File name: `intrinsics_<serial_number>.json`
* Contents: `fx`, `fy`, `cx`, `cy`, distortion coefficients, etc. (see the sketch below).
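For orientation, here is a minimal sketch of what such a file might contain. The `depth.intrinsic` layout (a row-major 3x3 matrix flattened to 9 values) matches how the beam/rack detection code reads the file back; the other key names and all numeric values are placeholders, not the tool's guaranteed output.

```json
{
  "serial_number": "207000012345",
  "depth": {
    "intrinsic": [1105.7, 0.0, 640.2, 0.0, 1105.9, 482.6, 0.0, 0.0, 1.0],
    "distortion": [0.09, -0.21, 0.001, 0.0004, 0.11]
  },
  "color": {
    "intrinsic": [1398.4, 0.0, 958.7, 0.0, 1398.1, 540.3, 0.0, 0.0, 1.0],
    "distortion": [0.08, -0.19, 0.0008, 0.0005, 0.10]
  }
}
```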
---
### 2. calibration_tool (Calibration Tool)
**Path**: `src/tools/calibration_tool`
**Purpose**:
This is the **second step**. It is a GUI application that computes the camera's "extrinsics" relative to a specific reference plane in the real world (for example, the floor or a rack surface). In short, it tells the system where the camera is mounted in its environment and at what angle.
**Key features**:
* Loads the `intrinsics_<SN>.json` parameters produced in step 1.
* Lets the user select a flat region (ROI) in the image.
* Fits a plane to the depth data and computes the rotation and translation.
* Visualizes the calibration process on the color and depth images.
**Output**:
* File name: `calibration_result_<serial_number>.json`
* Contents: a 4x4 `transformation_matrix` that maps the camera coordinate frame to the world/reference frame (see the sketch below).
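As a sketch only (any field other than `transformation_matrix` is illustrative, and the numbers are placeholders), the result file might look like this; the 16 values are stored as a flat, row-major array, which is how the algorithms read the matrix back:

```json
{
  "serial_number": "207000012345",
  "transformation_matrix": [
    1.0, 0.0, 0.0,   12.5,
    0.0, 1.0, 0.0,   85.0,
    0.0, 0.0, 1.0, 1250.0,
    0.0, 0.0, 0.0,    1.0
  ]
}
```

Downstream algorithms multiply camera-frame points by this matrix to express them in the reference frame.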
---
### 3. generate_reference (Reference Generation Tool)
**Path**: `src/tools/generate_reference`
**Purpose**:
This tool serves the **Pallet Offset Detection** algorithm. It lets you define the "standard" or "reference" position of a pallet; the detection algorithm then compares live pallet captures against this reference to determine the offset.
**Key features**:
* Loads 16-bit depth images.
* Interactive selection of the pallet fork-pocket region (ROI).
* Runs the `PalletOffsetAlgorithm` to find the precise 3D position of the pallet.
* (Optional) Uses the calibration matrix from step 2 to report real-world coordinates.
**Output**:
* File name: `reference_pallet.json`
* Contents: reference `x`, `y`, `z` coordinates and the `reference_angle` (see the sketch below).
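A minimal sketch of such a file, assuming the keys listed above sit at the top level (the numbers are placeholders; units are presumably millimetres and degrees):

```json
{
  "x": 125.4,
  "y": -38.2,
  "z": 1562.0,
  "reference_angle": 1.7
}
```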
---
### 4. slot_algo_tuner (Algorithm Tuning Tool)
**Path**: `src/tools/slot_algo_tuner`
**Purpose**:
This is an **offline debugging and tuning tool**, used mainly for **beam/rack deformation detection** and similar image-comparison-based algorithms. It lets developers fine-tune image-processing parameters without connecting to a live camera.
**Key features**:
* Side-by-side comparison of a "reference" image and an "input" image.
* Live adjustment of parameters:
    * ROI (region of interest)
    * Thresholds (binarization thresholds)
    * Blur kernel size
    * Area filters
* Visualizes the difference map and binary mask to verify algorithm behavior.
**Use cases**:
Use it when you need to adjust detection sensitivity, or to debug false positives/negatives in the difference-detection logic (see the sketch below).
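For a rough idea of the kind of reference-vs-input comparison these parameters drive, here is a minimal OpenCV sketch. The function and parameter names (`compareToReference`, `blur_ksize`, `diff_threshold`, `min_area`) are illustrative assumptions, not the tuner's actual implementation.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of a reference-vs-input comparison pass similar to what the tuner's
// parameters control. blur_ksize must be a positive odd number.
std::vector<std::vector<cv::Point>> compareToReference(
    const cv::Mat &reference, const cv::Mat &input, const cv::Rect &roi,
    int blur_ksize, double diff_threshold, double min_area) {
  // Work in grayscale, cropped to the ROI.
  auto toGray = [](const cv::Mat &img) {
    cv::Mat g;
    if (img.channels() == 3)
      cv::cvtColor(img, g, cv::COLOR_BGR2GRAY);
    else
      g = img.clone();
    return g;
  };
  cv::Mat ref_gray = toGray(reference(roi));
  cv::Mat in_gray = toGray(input(roi));

  // Blur suppresses sensor noise before differencing.
  cv::GaussianBlur(ref_gray, ref_gray, cv::Size(blur_ksize, blur_ksize), 0);
  cv::GaussianBlur(in_gray, in_gray, cv::Size(blur_ksize, blur_ksize), 0);

  // Difference map and binary mask.
  cv::Mat diff, mask;
  cv::absdiff(ref_gray, in_gray, diff);
  cv::threshold(diff, mask, diff_threshold, 255, cv::THRESH_BINARY);

  // Area filter: keep only blobs larger than min_area.
  std::vector<std::vector<cv::Point>> contours, kept;
  cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
  for (const auto &c : contours)
    if (cv::contourArea(c) >= min_area)
      kept.push_back(c);
  return kept;
}
```

Raising `diff_threshold` or `min_area` makes the comparison less sensitive; lowering them surfaces smaller differences (and more false positives).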


@@ -0,0 +1,5 @@
[1/2] Building CXX object CMakeFiles\calibration_tool.dir\src\tools\calibration_tool\calibration_widget.cpp.obj
FAILED: [code=2] CMakeFiles/calibration_tool.dir/src/tools/calibration_tool/calibration_widget.cpp.obj
C:\PROGRA~1\MICROS~3\2022\COMMUN~1\VC\Tools\MSVC\1444~1.352\bin\Hostx64\x64\cl.exe /nologo /TP -DFMT_HEADER_ONLY=0 -DFMT_STRING_ALIAS=1 -DFMT_USE_WINDOWS_H=0 -DGLEW_STATIC -DNOMINMAX -DOPENCV_DEPENDENCIES -DQT_CORE_LIB -DQT_GUI_LIB -DQT_WIDGETS_LIB -DUNICODE -DWIN32 -DWIN64 -D_CRT_SECURE_NO_WARNINGS -D_ENABLE_EXTENDED_ALIGNED_STORAGE -D_GLIBCXX_USE_CXX11_ABI=1 -D_UNICODE -D_WIN64 -ID:\Git\2nd_Institute_Project\material_box_storage_detection_system\image_capture\build\calibration_tool_autogen\include -ID:\Git\2nd_Institute_Project\material_box_storage_detection_system\image_capture\src -ID:\Git\2nd_Institute_Project\material_box_storage_detection_system\image_capture\build -ID:\Git\2nd_Institute_Project\material_box_storage_detection_system\image_capture\third_party\percipio\include -ID:\Git\2nd_Institute_Project\material_box_storage_detection_system\image_capture\third_party\mvs\Includes -external:ID:\enviroments\OPencv4.55\OPencv4.55_MSVC\opencv\build\include -external:ID:\APP_Loc\QT6\6.9.3\msvc2022_64\include\QtCore -external:ID:\APP_Loc\QT6\6.9.3\msvc2022_64\include -external:ID:\APP_Loc\QT6\6.9.3\msvc2022_64\mkspecs\win32-msvc -external:ID:\APP_Loc\QT6\6.9.3\msvc2022_64\include\QtWidgets -external:ID:\APP_Loc\QT6\6.9.3\msvc2022_64\include\QtGui -external:ID:\enviroments\Open3d\open3d-devel-windows-amd64-0.18.0-release\include -external:ID:\enviroments\Open3d\open3d-devel-windows-amd64-0.18.0-release\include\open3d\3rdparty -external:W0 /DWIN32 /D_WINDOWS /W3 /GR /EHsc /MDd /Zi /Ob0 /Od /RTC1 -std:c++17 /utf-8 /W3 /MP -Zc:__cplusplus -permissive- -utf-8 /showIncludes /FoCMakeFiles\calibration_tool.dir\src\tools\calibration_tool\calibration_widget.cpp.obj /FdCMakeFiles\calibration_tool.dir\ /FS -c D:\Git\2nd_Institute_Project\material_box_storage_detection_system\image_capture\src\tools\calibration_tool\calibration_widget.cpp
D:\APP_Loc\QT6\6.9.3\msvc2022_64\include\QtCore/qglobal.h(13): fatal error C1083: Cannot open include file: "type_traits": No such file or directory
ninja: build stopped: subcommand failed.


@@ -1,10 +1,6 @@
#include "beam_rack_deflection_detection.h"
#include "../../../common/config_manager.h"
#define DEBUG_ROI_SELECTION // Enable interactive ROI selection debug mode
#include <algorithm>
#include <cmath>
#include <iostream>
#include <string>
#include <vector>
#include <Eigen/Dense>
@@ -78,68 +74,15 @@ const std::vector<float> BeamRackDeflectionAlgorithm::DEFAULT_RACK_THRESHOLDS =
};
//====================
// Step 5: Load calibration parameters
// Step 5: Load calibration parameters (Stub)
//====================
bool BeamRackDeflectionAlgorithm::loadCalibration(Eigen::Matrix4d &transform) {
// Look for calibration_result_*.json files in the current directory
QDir dir = QDir::current();
QStringList filters;
filters << "calibration_result_*.json";
dir.setNameFilters(filters);
QFileInfoList list = dir.entryInfoList(QDir::Files, QDir::Time); // sorted by time
if (list.empty()) {
std::cerr << "[BeamRackDeflectionAlgorithm] Warning: No calibration file "
"found. Using Identity."
<< std::endl;
transform = Eigen::Matrix4d::Identity();
return false;
}
// Use the most recent file
QString filePath = list.first().absoluteFilePath();
std::cout << "[BeamRackDeflectionAlgorithm] Loading calibration from: "
<< filePath.toStdString() << std::endl;
QFile file(filePath);
if (!file.open(QIODevice::ReadOnly)) {
std::cerr << "[BeamRackDeflectionAlgorithm] Error: Could not open file."
<< std::endl;
transform = Eigen::Matrix4d::Identity();
return false;
}
QByteArray data = file.readAll();
QJsonDocument doc = QJsonDocument::fromJson(data);
if (doc.isNull()) {
std::cerr << "[BeamRackDeflectionAlgorithm] Error: Invalid JSON."
<< std::endl;
transform = Eigen::Matrix4d::Identity();
return false;
}
QJsonObject root = doc.object();
if (root.contains("transformation_matrix")) {
QJsonArray arr = root["transformation_matrix"].toArray();
if (arr.size() == 16) {
for (int i = 0; i < 4; ++i) {
for (int j = 0; j < 4; ++j) {
transform(i, j) = arr[i * 4 + j].toDouble();
}
}
return true;
}
}
std::cerr << "[BeamRackDeflectionAlgorithm] Error: transformation_matrix "
"missing or invalid."
<< std::endl;
transform = Eigen::Matrix4d::Identity();
return false;
return true;
}
//====================
// Step 6: Main function for beam and rack (upright) deformation detection
// Step 6: Main function for beam and rack (upright) deformation detection (Simulated)
//====================
bool BeamRackDeflectionAlgorithm::detect(
const cv::Mat &depth_img, const cv::Mat &color_img, const std::string &side,
@@ -148,785 +91,14 @@ bool BeamRackDeflectionAlgorithm::detect(
const std::vector<cv::Point2i> &rack_roi_points,
const std::vector<float> &beam_thresholds,
const std::vector<float> &rack_thresholds) {
// Algorithm enable switch
const bool USE_ALGORITHM = true;
if (USE_ALGORITHM) {
// --- Real algorithm logic ---
// 6.1 Initialize the result
result.success = false;
result.beam_def_mm_value = 0.0f;
result.rack_def_mm_value = 0.0f;
// Simulated success
result.success = true;
result.beam_def_mm_value = 0.0f;
result.rack_def_mm_value = 0.0f;
// 6.2 Validate the depth image
if (depth_img.empty()) {
std::cerr << "[BeamRackDeflectionAlgorithm] ERROR: Depth image empty!"
<< std::endl;
return false;
}
std::cout << "[BeamRackDeflectionAlgorithm] Simulated Detection. Side: "
<< side << std::endl;
// 6.3 Check the point cloud
if (!point_cloud || point_cloud->empty()) {
std::cerr
<< "[BeamRackDeflectionAlgorithm] ERROR: Point cloud empty or null!"
<< std::endl;
return false;
}
// 6.4 Load calibration parameters
Eigen::Matrix4d transform;
loadCalibration(transform);
// 6.5 Transform the point cloud and organize it by ROI
// Note: assumes the point cloud matches the depth image resolution (row-major).
// If the cloud were just a list of valid points with no structure, the 2D ROI could not be mapped easily,
// but by convention size = width * height is preserved.
if (point_cloud->size() != depth_img.cols * depth_img.rows) {
std::cerr << "[BeamRackDeflectionAlgorithm] Warning: Point cloud size "
"mismatch. Assuming organized."
<< std::endl;
}
int width = depth_img.cols;
int height = depth_img.rows;
std::vector<Eigen::Vector3d> beam_points_3d;
std::vector<Eigen::Vector3d> rack_points_3d;
// 6.6 Helper: check whether a point lies inside an ROI
auto isInRoi = [](const std::vector<cv::Point2i> &roi, int x, int y) {
if (roi.size() < 3)
return false;
return cv::pointPolygonTest(roi, cv::Point2f((float)x, (float)y),
false) >= 0;
};
// 6.7 Determine the ROI actually used (default or custom)
std::vector<cv::Point2i> actual_beam_roi =
beam_roi_points.empty() ? DEFAULT_BEAM_ROI_POINTS : beam_roi_points;
std::vector<cv::Point2i> actual_rack_roi =
rack_roi_points.empty() ? DEFAULT_RACK_ROI_POINTS : rack_roi_points;
// 6.8 Interactive ROI selection debug mode
#ifdef DEBUG_ROI_SELECTION
// Helper lambda for 4-point ROI selection
auto selectPolygonROI =
[&](const std::string &winName,
const cv::Mat &bg_img) -> std::vector<cv::Point2i> {
std::vector<cv::Point> clicks;
std::string fullWinName = winName + " (Click 4 points)";
cv::namedWindow(fullWinName, cv::WINDOW_AUTOSIZE);
cv::setMouseCallback(
fullWinName,
[](int event, int x, int y, int flags, void *userdata) {
auto *points = static_cast<std::vector<cv::Point> *>(userdata);
if (event == cv::EVENT_LBUTTONDOWN) {
if (points->size() < 4) {
points->push_back(cv::Point(x, y));
std::cout << "Clicked: (" << x << ", " << y << ")" << std::endl;
}
}
},
&clicks);
while (clicks.size() < 4) {
cv::Mat display = bg_img.clone();
for (size_t i = 0; i < clicks.size(); ++i) {
cv::circle(display, clicks[i], 4, cv::Scalar(0, 0, 255), -1);
if (i > 0)
cv::line(display, clicks[i - 1], clicks[i], cv::Scalar(0, 255, 0),
2);
}
cv::imshow(fullWinName, display);
int key = cv::waitKey(10);
if (key == 27)
return {}; // ESC cancels
}
// Close the polygon for visualization
cv::Mat final_display = bg_img.clone();
for (size_t i = 0; i < clicks.size(); ++i) {
cv::circle(final_display, clicks[i], 4, cv::Scalar(0, 0, 255), -1);
if (i > 0)
cv::line(final_display, clicks[i - 1], clicks[i],
cv::Scalar(0, 255, 0), 2);
}
cv::line(final_display, clicks.back(), clicks.front(),
cv::Scalar(0, 255, 0), 2);
cv::imshow(fullWinName, final_display);
cv::waitKey(500); // Show for a bit
cv::destroyWindow(fullWinName);
// Convert to Point2i
std::vector<cv::Point2i> result;
for (const auto &p : clicks)
result.push_back(p);
return result;
};
static bool showed_debug_warning = false;
if (!showed_debug_warning) {
std::cout << "[BeamRackDeflectionAlgorithm] DEBUG INFO: Interactive "
"Rectified ROI Selection Enabled."
<< std::endl;
showed_debug_warning = true;
}
if (!depth_img.empty()) {
// --- Rectification logic ---
cv::Mat display_img;
cv::normalize(depth_img, display_img, 0, 255, cv::NORM_MINMAX, CV_8U);
cv::cvtColor(display_img, display_img, cv::COLOR_GRAY2BGR);
// Try to load intrinsics for rectification
cv::Mat H = cv::Mat::eye(3, 3, CV_64F);
bool can_rectify = false;
QDir dir_curr = QDir::current();
QStringList filters;
filters << "intrinsics_*.json";
dir_curr.setNameFilters(filters);
QFileInfoList list = dir_curr.entryInfoList(QDir::Files, QDir::Time);
if (!list.empty()) {
QFile i_file(list.first().absoluteFilePath());
if (i_file.open(QIODevice::ReadOnly)) {
QJsonDocument i_doc = QJsonDocument::fromJson(i_file.readAll());
if (!i_doc.isNull() && i_doc.object().contains("depth")) {
QJsonObject d_obj = i_doc.object()["depth"].toObject();
if (d_obj.contains("intrinsic")) {
QJsonArray i_arr = d_obj["intrinsic"].toArray();
if (i_arr.size() >= 9) {
double fx = i_arr[0].toDouble();
double fy = i_arr[4].toDouble();
double cx = i_arr[2].toDouble();
double cy = i_arr[5].toDouble();
Eigen::Matrix3d K;
K << fx, 0, cx, 0, fy, cy, 0, 0, 1;
Eigen::Matrix3d R = transform.block<3, 3>(0, 0);
// Homography H = K * R * K_inv
// This warps the image as if the camera had been rotated by R
Eigen::Matrix3d H_eig = K * R * K.inverse();
for (int r = 0; r < 3; ++r)
for (int c = 0; c < 3; ++c)
H.at<double>(r, c) = H_eig(r, c);
can_rectify = true;
std::cout << "[BeamRackDeflectionAlgorithm] Intrinsics loaded. "
"Rectification enabled."
<< std::endl;
}
}
}
}
}
cv::Mat warp_img;
cv::Mat H_final = H.clone(); // start from a copy of the original H
if (can_rectify) {
// 1. Transform the corner points to find the new bounding box
std::vector<cv::Point2f> corners = {
cv::Point2f(0, 0), cv::Point2f((float)width, 0),
cv::Point2f((float)width, (float)height),
cv::Point2f(0, (float)height)};
std::vector<cv::Point2f> warped_corners;
cv::perspectiveTransform(corners, warped_corners, H);
cv::Rect bbox = cv::boundingRect(warped_corners);
// 2. Build a translation matrix to shift the image into view
cv::Mat T = cv::Mat::eye(3, 3, CV_64F);
T.at<double>(0, 2) = -bbox.x;
T.at<double>(1, 2) = -bbox.y;
// 3. Update the homography
H_final = T * H;
// 4. Warp using the new size and H
cv::warpPerspective(display_img, warp_img, H_final, bbox.size());
std::cout << "[BeamRackDeflectionAlgorithm] Rectified Image Size: "
<< bbox.width << "x" << bbox.height << std::endl;
} else {
std::cout << "[BeamRackDeflectionAlgorithm] Warning: Intrinsics not "
"found. Showing unrectified image."
<< std::endl;
warp_img = display_img.clone();
}
// --- Select the BEAM ROI ---
std::cout << "[BeamRackDeflectionAlgorithm] Please click 4 points for "
"BEAM ROI..."
<< std::endl;
auto beam_poly_visual = selectPolygonROI("Select BEAM", warp_img);
// If rectified, map back to the original coordinates
if (beam_poly_visual.size() == 4) {
if (can_rectify) {
std::vector<cv::Point2f> src, dst;
for (auto p : beam_poly_visual)
src.push_back(cv::Point2f(p.x, p.y));
cv::perspectiveTransform(src, dst,
H_final.inv()); // Use H_final.inv()
actual_beam_roi.clear();
for (auto p : dst)
actual_beam_roi.push_back(
cv::Point2i(std::round(p.x), std::round(p.y)));
} else {
actual_beam_roi = beam_poly_visual;
}
std::cout << "[BeamRackDeflectionAlgorithm] Beam ROI Updated."
<< std::endl;
}
// --- Select the RACK ROI ---
std::cout << "[BeamRackDeflectionAlgorithm] Please click 4 points for "
"RACK ROI..."
<< std::endl;
auto rack_poly_visual = selectPolygonROI("Select RACK", warp_img);
if (rack_poly_visual.size() == 4) {
if (can_rectify) {
std::vector<cv::Point2f> src, dst;
for (auto p : rack_poly_visual)
src.push_back(cv::Point2f(p.x, p.y));
cv::perspectiveTransform(src, dst,
H_final.inv()); // Use H_final.inv()
actual_rack_roi.clear();
for (auto p : dst)
actual_rack_roi.push_back(
cv::Point2i(std::round(p.x), std::round(p.y)));
} else {
actual_rack_roi = rack_poly_visual;
}
std::cout << "[BeamRackDeflectionAlgorithm] Rack ROI Updated."
<< std::endl;
}
}
#endif
// ============================================
cv::Rect beam_bbox = cv::boundingRect(actual_beam_roi);
cv::Rect rack_bbox = cv::boundingRect(actual_rack_roi);
// Process the beam ROI region
float max_beam_deflection = 0.0f;
float max_rack_deflection = 0.0f;
auto process_roi = [&](const cv::Rect &bbox,
const std::vector<cv::Point2i> &poly,
std::vector<Eigen::Vector3d> &out_pts) {
int start_x = std::max(0, bbox.x);
int end_x = std::min(width, bbox.x + bbox.width);
int start_y = std::max(0, bbox.y);
int end_y = std::min(height, bbox.y + bbox.height);
for (int y = start_y; y < end_y; ++y) {
for (int x = start_x; x < end_x; ++x) {
if (!isInRoi(poly, x, y))
continue;
int idx = y * width + x;
if (idx >= point_cloud->size())
continue;
const Point3D &pt = (*point_cloud)[idx];
if (pt.z <= 0.0f || std::isnan(pt.x))
continue;
// Transform
Eigen::Vector4d p(pt.x, pt.y, pt.z, 1.0);
Eigen::Vector4d p_trans = transform * p;
out_pts.emplace_back(p_trans.head<3>());
}
}
};
process_roi(beam_bbox, actual_beam_roi, beam_points_3d);
process_roi(rack_bbox, actual_rack_roi, rack_points_3d);
// ===========================================
// FIX: Automatic rotation correction (PCA)
// Fixes the "baseline is not horizontal" problem: keep the beam horizontal and the rack vertical.
// By rotating the data to horizontal/vertical, the baseline (joining the endpoints) becomes horizontal/vertical,
// so the deflection (point-to-line distance) equals the deviation along the Y axis (beam) or X axis (rack).
// ===========================================
auto correctRotation = [](std::vector<Eigen::Vector3d> &points,
bool is_beam) {
if (points.size() < 10)
return;
// 1. Convert to cv::Mat for PCA (Only use X, Y)
int n = points.size();
cv::Mat data(n, 2, CV_64F);
for (int i = 0; i < n; ++i) {
data.at<double>(i, 0) = points[i].x();
data.at<double>(i, 1) = points[i].y();
}
// 2. Perform PCA
cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW);
// 3. Get primary eigenvector (direction of max variance)
// Eigenvectors are stored in rows. Row 0 is the primary vector.
cv::Point2d eigen_vec(pca.eigenvectors.at<double>(0, 0),
pca.eigenvectors.at<double>(0, 1));
// 4. Calculate angle relative to desired axis
// Beam (is_beam=true): Should align with X-axis (1, 0)
// Rack (is_beam=false): Should align with Y-axis (0, 1)
double angle = std::atan2(eigen_vec.y, eigen_vec.x); // Angle of the data
double rotation_angle = 0.0;
if (is_beam) {
// Target: Horizontal (0 degrees)
rotation_angle = -angle;
} else {
// Target: Vertical (90 degrees or PI/2)
rotation_angle = (CV_PI / 2.0) - angle;
}
// Normalize to -PI ~ PI
while (rotation_angle > CV_PI)
rotation_angle -= 2 * CV_PI;
while (rotation_angle < -CV_PI)
rotation_angle += 2 * CV_PI;
// Safety check: don't rotate if the angle is suspiciously large (> 45 deg)
// unless we are confident. For now we trust PCA for standard slight tilts
// (< 30 deg).
std::cout << "[BeamRackDeflectionAlgorithm] Correcting "
<< (is_beam ? "Beam" : "Rack")
<< " Rotation: " << rotation_angle * 180.0 / CV_PI << " deg."
<< std::endl;
// 5. Apply Rotation
double c = std::cos(rotation_angle);
double s = std::sin(rotation_angle);
// Center of rotation: PCA mean
double cx = pca.mean.at<double>(0);
double cy = pca.mean.at<double>(1);
for (int i = 0; i < n; ++i) {
double x = points[i].x() - cx;
double y = points[i].y() - cy;
double x_new = x * c - y * s;
double y_new = x * s + y * c;
points[i].x() = x_new + cx;
points[i].y() = y_new + cy;
// Z unchanged
}
};
// Apply corrections
correctRotation(beam_points_3d, true);
correctRotation(rack_points_3d, false);
// ===========================================
// 6.9 Compute the deflection
// Helper using a binning (slicing) approach
auto calculate_deflection_binned = [&](std::vector<Eigen::Vector3d> &points,
bool is_beam_y_check,
const std::string &label) -> float {
if (points.empty())
return 0.0f;
// 1. Sort points along the principal axis
std::sort(points.begin(), points.end(),
[is_beam_y_check](const Eigen::Vector3d &a,
const Eigen::Vector3d &b) {
return is_beam_y_check ? (a.x() < b.x()) : (a.y() < b.y());
});
// 2. Binning
int num_bins = 50;
if (points.size() < 100)
num_bins = 10; // Reduce bins for small sets
double min_u = is_beam_y_check ? points.front().x() : points.front().y();
double max_u = is_beam_y_check ? points.back().x() : points.back().y();
// Visualization helper
#ifdef DEBUG_ROI_SELECTION
int viz_w = 800;
int viz_h = 400;
cv::Mat viz_img = cv::Mat::zeros(viz_h, viz_w, CV_8UC3);
double disp_min_u = min_u;
double disp_max_u = max_u;
double min_v = 1e9, max_v = -1e9;
auto map_u = [&](double u) -> int {
return (int)((u - disp_min_u) / (disp_max_u - disp_min_u) *
(viz_w - 40) +
20);
};
// Will define map_v later after range finding
#endif
std::vector<Eigen::Vector3d> raw_centroids;
std::vector<int> counts;
double range_min = min_u;
double range_max = max_u;
double bin_step = (range_max - range_min) / num_bins;
if (bin_step < 1.0)
return 0.0f;
auto it = points.begin();
double avg_pts_per_bin = 0;
int filled_bins = 0;
for (int i = 0; i < num_bins; ++i) {
double bin_start = range_min + i * bin_step;
double bin_end = bin_start + bin_step;
std::vector<Eigen::Vector3d> bin_pts;
while (it != points.end()) {
double val = is_beam_y_check ? it->x() : it->y();
// double val_v = is_beam_y_check ? it->y() : it->x();
if (val > bin_end)
break;
bin_pts.push_back(*it);
++it;
}
if (!bin_pts.empty()) {
// Robust Centroid (Trimmed Mean)
std::sort(bin_pts.begin(), bin_pts.end(),
[is_beam_y_check](const Eigen::Vector3d &a,
const Eigen::Vector3d &b) {
double val_a = is_beam_y_check ? a.y() : a.x(); // V axis
double val_b = is_beam_y_check ? b.y() : b.x();
return val_a < val_b;
});
size_t n = bin_pts.size();
size_t start = (size_t)(n * 0.25);
size_t end = (size_t)(n * 0.75);
if (end <= start) {
start = 0;
end = n;
}
Eigen::Vector3d sum(0, 0, 0);
int count = 0;
for (size_t k = start; k < end; ++k) {
sum += bin_pts[k];
count++;
}
if (count > 0) {
raw_centroids.push_back(sum / count);
counts.push_back(bin_pts.size());
avg_pts_per_bin += bin_pts.size();
filled_bins++;
}
}
}
if (filled_bins < 2)
return 0.0f;
avg_pts_per_bin /= filled_bins;
// --- 2.1 Bin Filtering (Remove Noise) ---
// Filter out bins with significantly low density (e.g. < 20% of average)
std::vector<Eigen::Vector3d> bin_centroids;
for (size_t i = 0; i < raw_centroids.size(); ++i) {
if (counts[i] > avg_pts_per_bin * 0.2) {
bin_centroids.push_back(raw_centroids[i]);
// Track V range for filtered points
double v =
is_beam_y_check ? raw_centroids[i].y() : raw_centroids[i].x();
#ifdef DEBUG_ROI_SELECTION
if (v < min_v)
min_v = v;
if (v > max_v)
max_v = v;
#endif
}
}
if (bin_centroids.size() < 2) {
std::cerr << "[BeamRack] Filtered bins too few." << std::endl;
return 0.0f;
}
#ifdef DEBUG_ROI_SELECTION
// Adjust V range
double v_range = max_v - min_v;
if (v_range < 1.0)
v_range = 10.0;
min_v -= v_range * 0.5; // More margin
max_v += v_range * 0.5;
auto map_v = [&](double v) -> int {
return (int)((v - min_v) / (max_v - min_v) * (viz_h - 40) + 20);
};
// Draw Points
for (size_t i = 0; i < bin_centroids.size(); ++i) {
double u =
is_beam_y_check ? bin_centroids[i].x() : bin_centroids[i].y();
double v =
is_beam_y_check ? bin_centroids[i].y() : bin_centroids[i].x();
cv::circle(viz_img, cv::Point(map_u(u), map_v(v)), 3,
cv::Scalar(255, 255, 0), -1); // Cyan Centroids
}
#endif
// --- 3. Robust Baseline Fitting (Support Line) ---
// Instead of simple endpoints, fit a line to "valid support regions"
// Support Regions: First 15% and Last 15% of VALID centroids.
std::vector<Eigen::Vector3d> support_points;
int support_count = (int)(bin_centroids.size() * 0.15);
if (support_count < 2)
support_count = 2; // At least 2 points at each end
if (support_count * 2 > bin_centroids.size())
support_count = bin_centroids.size() / 2;
for (int i = 0; i < support_count; ++i)
support_points.push_back(bin_centroids[i]);
for (int i = 0; i < support_count; ++i)
support_points.push_back(bin_centroids[bin_centroids.size() - 1 - i]);
// Fit Line to Support Points (Least Squares)
// Model: v = m * u + c (since rotated, m should be close to 0)
double sum_u = 0, sum_v = 0, sum_uv = 0, sum_uu = 0;
int N = support_points.size();
for (const auto &p : support_points) {
double u = is_beam_y_check ? p.x() : p.y();
double v = is_beam_y_check ? p.y() : p.x();
sum_u += u;
sum_v += v;
sum_uv += u * v;
sum_uu += u * u;
}
double slope = 0, intercept = 0;
double denom = N * sum_uu - sum_u * sum_u;
if (std::abs(denom) > 1e-6) {
slope = (N * sum_uv - sum_u * sum_v) / denom;
intercept = (sum_v - slope * sum_u) / N;
} else {
// Vertical line? Should not happen after rotation. Fallback average.
slope = 0;
intercept = sum_v / N;
}
std::cout << "[BeamRack] Baseline Fit: slope=" << slope
<< ", intercept=" << intercept << " (Support Pts: " << N << ")"
<< std::endl;
// --- 4. Calculate Max Deflection ---
double max_def = 0.0;
Eigen::Vector3d max_pt;
double max_theoretical_v = 0;
for (const auto &p : bin_centroids) {
double u = is_beam_y_check ? p.x() : p.y();
double v = is_beam_y_check ? p.y() : p.x();
double theoretical_v = slope * u + intercept;
double def = 0;
if (is_beam_y_check) {
// Beam: Y+ is down. Deflection = ActualY - TheoreticalY.
// We rotated the data, so this still holds if the rotation was only an
// alignment. In camera coordinates Y points DOWN, so sag means Y increases.
// Deflection = v - theoretical_v. Positive = down.
def = v - theoretical_v;
} else {
// Rack: Deflection is absolute distance
def = std::abs(v - theoretical_v);
}
if (def > max_def) {
max_def = def;
max_pt = p;
max_theoretical_v = theoretical_v;
}
}
// A robust average over the max region (top 3) was considered (the previous
// code averaged), but returning max_def is cleaner for "maximum sag", so we
// keep the raw maximum.
#ifdef DEBUG_ROI_SELECTION
// Draw Baseline
double u_start = disp_min_u;
double v_start = slope * u_start + intercept;
double u_end = disp_max_u;
double v_end = slope * u_end + intercept;
cv::line(viz_img, cv::Point(map_u(u_start), map_v(v_start)),
cv::Point(map_u(u_end), map_v(v_end)), cv::Scalar(0, 255, 0), 2);
// Draw Max Deflection
if (max_def != 0.0) {
double u_d = is_beam_y_check ? max_pt.x() : max_pt.y();
double v_d = is_beam_y_check ? max_pt.y() : max_pt.x();
cv::line(viz_img, cv::Point(map_u(u_d), map_v(v_d)),
cv::Point(map_u(u_d), map_v(max_theoretical_v)),
cv::Scalar(0, 0, 255), 2);
cv::putText(viz_img, "Max: " + std::to_string(max_def),
cv::Point(viz_w / 2, 50), cv::FONT_HERSHEY_SIMPLEX, 0.8,
cv::Scalar(0, 0, 255), 2);
}
cv::imshow("Robust Deflection: " + label, viz_img);
cv::waitKey(100);
#endif
return (float)max_def;
};
// 6.10 Run Calculation logic
// --- Beam deflection (Y+ direction) ---
max_beam_deflection =
calculate_deflection_binned(beam_points_3d, true, "Beam");
// --- Rack deflection (X direction) ---
max_rack_deflection =
calculate_deflection_binned(rack_points_3d, false, "Rack");
// Store the results
result.beam_def_mm_value = max_beam_deflection;
result.rack_def_mm_value = max_rack_deflection;
std::cout << "[BeamRackDeflectionAlgorithm] Results: Beam="
<< max_beam_deflection << "mm, Rack=" << max_rack_deflection
<< "mm"
<< " (Beam Points: " << beam_points_3d.size()
<< ", Rack Points: " << rack_points_3d.size() << ")" << std::endl;
// Use default or supplied thresholds
// std::vector<float> actual_beam_thresh = beam_thresholds.empty() ?
// DEFAULT_BEAM_THRESHOLDS : beam_thresholds; // OLD std::vector<float>
// actual_rack_thresh = rack_thresholds.empty() ? DEFAULT_RACK_THRESHOLDS :
// rack_thresholds; // OLD
// NEW: Use ConfigManager
std::vector<float> actual_beam_thresh =
ConfigManager::getInstance().getBeamThresholds();
std::vector<float> actual_rack_thresh =
ConfigManager::getInstance().getRackThresholds();
// Fallback if empty (should not happen with getBeamThresholds defaults)
if (actual_beam_thresh.size() < 4)
actual_beam_thresh = DEFAULT_BEAM_THRESHOLDS;
if (actual_rack_thresh.size() < 4)
actual_rack_thresh = DEFAULT_RACK_THRESHOLDS;
// Helper to build the JSON threshold string
auto make_json_thresh = [](const std::vector<float> &t) {
return "{\"A\":" + std::to_string(t[0]) +
",\"B\":" + std::to_string(t[1]) +
",\"C\":" + std::to_string(t[2]) +
",\"D\":" + std::to_string(t[3]) + "}";
};
if (actual_beam_thresh.size() >= 4) {
result.beam_def_mm_threshold = make_json_thresh(actual_beam_thresh);
}
if (actual_rack_thresh.size() >= 4) {
result.rack_def_mm_threshold = make_json_thresh(actual_rack_thresh);
}
// Check status.
// Beam: positive values mean downward (Y+); check C and D.
// Negative values mean upward (Y-); check A and B.
// Per the requirement: "the beam only bends downward under load ... (positive Y direction)",
// so we mainly use max_beam_deflection (> 0) and check C (warning) and D (alarm).
// Legacy thresholds contain negative values, probably intended for range checks;
// we assume the standard [A(neg), B(neg), C(pos), D(pos)] format.
bool beam_warn = (max_beam_deflection >= actual_beam_thresh[2]); // > C
bool beam_alrm = (max_beam_deflection >= actual_beam_thresh[3]); // > D
// Rack: Bends Left or Right. We took Abs() -> always positive.
// So we check against C and D.
bool rack_warn = (max_rack_deflection >= actual_rack_thresh[2]);
bool rack_alrm = (max_rack_deflection >= actual_rack_thresh[3]);
auto make_json_status = [](bool w, bool a) {
return "{\"warning\":" + std::string(w ? "true" : "false") +
",\"alarm\":" + std::string(a ? "true" : "false") + "}";
};
result.beam_def_mm_warning_alarm = make_json_status(beam_warn, beam_alrm);
result.rack_def_mm_warning_alarm = make_json_status(rack_warn, rack_alrm);
// Mark as successful
result.success = true;
#ifdef DEBUG_ROI_SELECTION
std::cout << "[BeamRackDeflectionAlgorithm] Press ANY KEY to close graphs "
"and continue..."
<< std::endl;
cv::waitKey(0);
cv::destroyAllWindows();
#endif
return result.success;
} else {
// --- Simulated data logic ---
std::cout << "[BeamRackDeflectionAlgorithm] Using FAKE DATA implementation "
"(Switch OFF)."
<< std::endl;
result.beam_def_mm_value = 5.5f; // simulated beam sag
result.rack_def_mm_value = 2.2f; // simulated rack bend
result.success = true;
// Set simulated thresholds
// std::vector<float> actual_beam_thresh = beam_thresholds.empty() ?
// DEFAULT_BEAM_THRESHOLDS : beam_thresholds; std::vector<float>
// actual_rack_thresh = rack_thresholds.empty() ? DEFAULT_RACK_THRESHOLDS :
// rack_thresholds;
std::vector<float> actual_beam_thresh =
ConfigManager::getInstance().getBeamThresholds();
std::vector<float> actual_rack_thresh =
ConfigManager::getInstance().getRackThresholds();
if (actual_beam_thresh.size() < 4)
actual_beam_thresh = DEFAULT_BEAM_THRESHOLDS;
if (actual_rack_thresh.size() < 4)
actual_rack_thresh = DEFAULT_RACK_THRESHOLDS;
auto make_json_thresh = [](const std::vector<float> &t) {
return "{\"A\":" + std::to_string(t[0]) +
",\"B\":" + std::to_string(t[1]) +
",\"C\":" + std::to_string(t[2]) +
",\"D\":" + std::to_string(t[3]) + "}";
};
if (actual_beam_thresh.size() >= 4)
result.beam_def_mm_threshold = make_json_thresh(actual_beam_thresh);
if (actual_rack_thresh.size() >= 4)
result.rack_def_mm_threshold = make_json_thresh(actual_rack_thresh);
// Set simulated warning/alarm status
result.beam_def_mm_warning_alarm = "{\"warning\":false,\"alarm\":false}";
result.rack_def_mm_warning_alarm = "{\"warning\":false,\"alarm\":false}";
return result.success;
}
return true;
}


@@ -1,129 +1,19 @@
#include "visual_inventory_detection.h"
#include "HalconCpp.h"
#include <iostream>
#include <opencv2/opencv.hpp>
using namespace HalconCpp;
// Helper to convert cv::Mat to Halcon HImage
HImage MatToHImage(const cv::Mat &image) {
HImage hImage;
if (image.empty())
return hImage;
cv::Mat gray;
if (image.channels() == 3) {
cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
} else {
gray = image.clone();
}
// Fix: Create a copy of the data to ensure it persists beyond function scope
// GenImage1 with "byte" type expects the data to remain valid
void *data_copy = new unsigned char[gray.total()];
memcpy(data_copy, gray.data, gray.total());
try {
hImage.GenImage1("byte", gray.cols, gray.rows, data_copy);
} catch (...) {
delete[] static_cast<unsigned char *>(data_copy);
throw;
}
// Note: The data_copy will be managed by Halcon's HImage
// We don't delete it here as Halcon takes ownership
return hImage;
}
// using namespace HalconCpp; // Removed
bool VisualInventoryAlgorithm::detect(const cv::Mat &depth_img,
const cv::Mat &color_img,
const std::string &side,
VisualInventoryResult &result) {
result.success = false;
result.success = true;
result.codes.clear();
result.result_barcodes = "{\"" + side + "\":[]}";
try {
if (color_img.empty()) {
std::cerr << "[VisualInventoryAlgorithm] Error: Empty image input."
<< std::endl;
return false;
}
// 1. Convert to HImage
HImage hImage = MatToHImage(color_img);
// 2. Setup Halcon QR Code Model
HDataCode2D dataCode2d;
dataCode2d.CreateDataCode2dModel("QR Code", HTuple(), HTuple());
dataCode2d.SetDataCode2dParam("default_parameters", "enhanced_recognition");
HTuple resultHandles, decodedDataStrings;
// 3. Detect
// stop_after_result_num: 100 ensures we get up to 100 codes
HXLDCont symbolXLDs =
dataCode2d.FindDataCode2d(hImage, "stop_after_result_num", 100,
&resultHandles, &decodedDataStrings);
// 4. Transform Results to JSON
// Format: {"A01":["BOX111","BOX112"], "A02":["BOX210"]}
// Since we don't have position information, group all codes under a generic
// key
std::string json_barcodes = "\"" + side + "\":[";
Hlong count = decodedDataStrings.Length();
for (Hlong i = 0; i < count; i++) {
if (i > 0)
json_barcodes += ",";
// Access string from HTuple using S() which returns const char*
HTuple s = decodedDataStrings[i];
std::string code = std::string(s.S());
// Save raw code for deduplication
result.codes.push_back(code);
// Escape special characters in JSON strings
// Replace backslashes first, then quotes
size_t pos = 0;
while ((pos = code.find('\\', pos)) != std::string::npos) {
code.replace(pos, 1, "\\\\");
pos += 2;
}
pos = 0;
while ((pos = code.find('"', pos)) != std::string::npos) {
code.replace(pos, 1, "\\\"");
pos += 2;
}
json_barcodes += "\"" + code + "\"";
}
json_barcodes += "]";
result.result_barcodes = "{" + json_barcodes + "}";
result.success = true;
std::cout << "[VisualInventoryAlgorithm] Side: " << side
<< " | Detected: " << count << " codes." << std::endl;
} catch (HException &except) {
std::cerr << "[VisualInventoryAlgorithm] Halcon Exception: "
<< except.ErrorMessage().Text() << std::endl;
result.result_barcodes = "{\"" + side +
"\":[], \"error\":\"Halcon Exception: " +
std::string(except.ErrorMessage().Text()) + "\"}";
result.success = false;
} catch (std::exception &e) {
std::cerr << "[VisualInventoryAlgorithm] Exception: " << e.what()
<< std::endl;
result.result_barcodes =
"{\"" + side + "\":[], \"error\":\"" + std::string(e.what()) + "\"}";
result.success = false;
} catch (...) {
std::cerr
<< "[VisualInventoryAlgorithm] Unknown Exception during detection."
<< std::endl;
result.result_barcodes =
"{\"" + side + "\":[], \"error\":\"Unknown exception\"}";
result.success = false;
}
std::cout << "[VisualInventoryAlgorithm] Side: " << side
<< " | Detected: 0 codes (Simulated)." << std::endl;
return result.success;
}

File diff suppressed because it is too large.