An automatic focusing algorithm based on U-Net for target location in multiple depth-of-field scene

Liang Luyao, Zhao Xiaoyun, Zhao Jinquan

Citation: Liang Luyao, Zhao Xiaoyun, Zhao Jinquan. An automatic focusing algorithm based on U-Net for target location in multiple depth-of-field scene[J]. High Power Laser and Particle Beams, 2022, 34: 129001. doi: 10.11884/HPLPB202234.220086

doi: 10.11884/HPLPB202234.220086
Funding: Teaching reform projects of Chengdu University of Technology (JG2130033, 2022YAL013)
Details
    Author: Liang Luyao, 438205499@qq.com
    Corresponding author: Zhao Xiaoyun, zhaoxiaoyun2012@cdut.cn
  • CLC number: O435.2

  • Abstract: In multi-depth-of-field scenes where the target type is known, the traditional focus evaluation function curve has low sensitivity when the target lies at the center of the image; when the target deviates from the center, the curve tends to exhibit local maxima or fails to identify the in-focus image accurately, which degrades the autofocus system. To address these two cases, a method is proposed that uses a U-Net neural network to determine the target position and then sets the corresponding window and evaluation function: when the target is at the image center, a new focus evaluation function, the SMD-Roberts function, is proposed; when the target is off-center, a corresponding window is set and the SML evaluation function is used to evaluate image quality. Experimental results show that, compared with traditional gray-gradient focus evaluation functions and the traditional windowing method, the focus evaluation function obtained by this method improves sensitivity by at least 0.0241, shortens the computation time by at least 0.0355 s, and removes at least one secondary peak. It effectively solves the problems of inaccurately judging the sharpest position of the target and of double peaks in the focus evaluation function curve in multi-depth-of-field scenes, and clearly improves the unbiasedness, unimodality, and sensitivity of the evaluation function. The method has strong generality and is well suited to autofocus systems.
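    As a reading aid, the sketch below illustrates the selection logic described in the abstract; it is not the authors' code. It implements the SMD, Roberts, and SML sharpness measures and a routine that switches between a full-frame SMD-Roberts evaluation (centered target) and a windowed SML evaluation (off-center target) driven by a binary U-Net mask. The SMD-Roberts combination rule (a simple product here), the centering tolerance, the bounding-box window, and names such as evaluate_frame are assumptions introduced purely for illustration.

```python
# Minimal sketch of the window/criterion selection described in the abstract.
# Assumes a 2-D grayscale frame and a binary U-Net segmentation mask of the target.
import numpy as np

def smd(img: np.ndarray) -> float:
    """Sum of modulus of gray-level differences (SMD) sharpness measure."""
    img = img.astype(np.float64)
    return float(np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum())

def roberts(img: np.ndarray) -> float:
    """Roberts cross-gradient sharpness measure."""
    img = img.astype(np.float64)
    g1 = img[:-1, :-1] - img[1:, 1:]
    g2 = img[:-1, 1:] - img[1:, :-1]
    return float(np.sum(np.abs(g1) + np.abs(g2)))

def smd_roberts(img: np.ndarray) -> float:
    """SMD-Roberts combination; a product is used here as a placeholder,
    the paper's exact combination rule may differ."""
    return smd(img) * roberts(img)

def sml(img: np.ndarray) -> float:
    """Sum of modified Laplacian (SML) sharpness measure."""
    img = img.astype(np.float64)
    lx = np.abs(2.0 * img[1:-1, :] - img[:-2, :] - img[2:, :])
    ly = np.abs(2.0 * img[:, 1:-1] - img[:, :-2] - img[:, 2:])
    return float(lx.sum() + ly.sum())

def evaluate_frame(img: np.ndarray, mask: np.ndarray, center_tol: float = 0.15) -> float:
    """Score one frame of a focusing stack.

    Centered target   -> SMD-Roberts on the whole frame.
    Off-center target -> SML on a window set around the mask's bounding box.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:            # no target segmented: fall back to the whole frame
        return smd_roberts(img)
    h, w = img.shape
    cy, cx = ys.mean(), xs.mean()
    centered = abs(cy - h / 2) < center_tol * h and abs(cx - w / 2) < center_tol * w
    if centered:
        return smd_roberts(img)
    window = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return sml(window)
```

    In an autofocus loop, evaluate_frame would be applied to every frame of the focusing sequence (with the mask predicted by U-Net) and the frame that maximizes the returned score taken as the in-focus image.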
  • Figure 1.  Multi-depth-of-field images of the object in the foreground and in the background

    Figure 2.  Focus evaluation function curves in a multi-depth-of-field scene

    Figure 3.  Approximate distribution of target objects in the image

    Figure 4.  Flow chart of target location using U-Net

    Figure 5.  Flow chart of the focusing window algorithm

    Figure 6.  Flow of locating the target and setting the corresponding window

    Figure 7.  Sample images from the five image groups

    Figure 8.  Prediction map obtained by U-Net and the images to be evaluated obtained by the proposed algorithm for one group of images

    Figure 9.  Focus evaluation function curves of the first group of images

    Figure 10.  Focus evaluation function curves of the second group of images before and after adding windows

    Figure 11.  Focus evaluation function curves of the third group of images before and after adding windows

    Figure 12.  Focus evaluation function curves of the fourth group of images before and after adding windows

    Figure 13.  Focus evaluation function curves of the fifth group of images before and after adding windows

    Figure 14.  Images obtained by the central window method and by the proposed method

    Figure 15.  Focus evaluation functions of the central window method and the window method proposed in this paper

    Table 1.  Evaluation indexes of focus evaluation functions of group one

    function      w/pix  R/pix   S/pix   δSE/pix  α/pcs  T/s     ϛ/pcs
    SMD           24     6.8875  0.0411  7.2582   0      1.6137  1
    Roberts       24     7.3431  0.0410  6.8648   0      1.6456  1
    Sobel         24     1.4623  0.0389  1.2439   0      3.1041  1
    Brenner       23     8.1275  0.0420  5.8426   0      1.2990  1
    SML           22     1.3183  0.0409  0.9761   1      2.1598  1
    SMD-Roberts   23     1404.7  0.0434  121.69   0      2.2995  1

    Table 2.  Evaluation indexes of focus evaluation functions of group two

    function            w/pix  R/pix          S/pix          δSE/pix        α/pcs  T/s            ϛ/pcs
    SMD/SMD-W           29/25  2.1698/2.5157  0.0325/0.0283  1.9085/0.839   3/0    2.0453/2.1611  1/1
    Roberts/Roberts-W   29/25  2.2958/2.5687  0.0324/0.0278  1.7515/0.7671  2/0    2.2270/2.0975  1/1
    Sobel/Sobel-W       29/28  1.5113/1.7246  0.0307/0.0315  1.8000/2.9075  0/0    3.6727/3.7181  1/1
    Brenner/Brenner-W   30/30  2.8219/3.7761  0.0311/0.0314  1.5077/4.0807  2/0    1.6520/1.5994  1/1
    SML/SML-W           28/27  1.1721/1.2726  0.0344/0.0335  1.7768/3.7476  2/2    2.5761/2.6616  1/1

    Table 3.  Evaluation indexes of focus evaluation functions of group three

    function            w/pix  R/pix            S/pix          δSE/pix        α/pcs  T/s            ϛ/pcs
    SMD/SMD-W           6/11   18.6244/10.0582  0.1372/0.0891  3.1678/21.556  1/0    1.6038/1.6348  1/1
    Roberts/Roberts-W   6/11   14.2448/7.6339   0.1328/0.0886  2.2774/15.969  1/0    1.8546/1.6742  1/1
    Sobel/Sobel-W       8/11   3.0674/3.7650    0.1005/0.0871  1.9598/9.0097  1/0    3.2885/3.3039  1/1
    Brenner/Brenner-W   6/10   10.2044/11.6104  0.1269/0.0942  2.0341/10.880  1/0    1.2648/1.2660  1/1
    SML/SML-W           7/11   1.43591/9.3751   0.0734/0.0890  0.3356/19.314  1/0    2.4652/1.9295  0.86/1

    Table 4.  Evaluation indexes of focus evaluation functions of group four

    function            w/pix  R/pix          S/pix          δSE/pix        α/pcs  T/s            ϛ/pcs
    SMD/SMD-W           22/21  12.374/3.5626  0.0448/0.0459  3.8276/5.0336  1/0    1.5648/1.5551  0.90/1
    Roberts/Roberts-W   22/21  13.184/3.4488  0.0449/0.0458  3.6610/4.7100  1/0    1.5518/1.6737  0.90/1
    Sobel/Sobel-W       22/21  1.9829/1.6798  0.0421/0.0440  1.0151/1.7624  2/1    2.9408/2.9797  0.90/1
    Brenner/Brenner-W   22/22  12.778/10.217  0.0448/0.0450  4.4590/6.5856  2/1    1.2274/1.2255  0.90/1
    SML/SML-W           22/21  1.4090/1.2870  0.0409/0.0397  0.7853/0.8094  1/1    2.5329/1.9305  0.90/1

    Table 5.  Evaluation indexes of focus evaluation functions of group five

    function            w/pix  R/pix          S/pix          δSE/pix        α/pcs  T/s            ϛ/pcs
    SMD/SMD-W           29/26  3.5789/2.3022  0.0335/0.0377  0.0334/2.9570  1/0    2.1879/2.1025  0.94/1
    Roberts/Roberts-W   29/26  3.8800/2.3054  0.0334/0.0376  1.9252/2.7358  1/1    2.2622/2.0914  0.94/1
    Sobel/Sobel-W       27/28  1.5170/1.4799  0.0324/0.0347  0.8593/0.8298  1/1    4.4030/3.9267  1/1
    Brenner/Brenner-W   27/28  4.3384/4.4870  0.0352/0.0353  2.1226/1.8485  1/1    1.7418/1.6939  1/1
    SML/SML-W           28/28  1.3850/1.3531  0.0345/0.0353  1.2580/0.8797  2/1    2.7556/2.6546  1/1

    Table 6.  Evaluation indexes of focus evaluation functions of the central window method and the proposed method

    function             w/pix  R/pix           S/pix           δSE/pix        α/pcs  T/s            ϛ/pcs
    SMD-C/SMD-W          10/11  3.5506/10.0582  0.0861/0.0891   0.0861/21.556  3/0    1.6262/1.5907  1/1
    Roberts-C/Roberts-W  10/11  2.8268/7.6339   0.08214/0.0886  1.9627/15.969  3/0    1.6108/1.6742  1/1
    Sobel-C/Sobel-W      8/11   3.0189/3.7650   0.1045/0.0871   2.3171/9.0097  1/0    2.9433/3.3039  1/1
    Brenner-C/Brenner-W  10/10  3.0372/11.610   0.0804/0.0942   1.9952/10.880  4/0    1.2488/1.2660  1/1
    SML-C/SML-W          11/11  6.4727/1.6053   0.0827/0.0746   4.3409/1.8873  0/0    1.8998/2.1836  1/1
Publication history
  • Received:  2022-03-28
  • Revised:  2022-10-07
  • Accepted:  2022-10-09
  • Published online:  2022-11-02
  • Published:  2022-11-02
