Cite this article: SHU Xingzhe. Research on Attention U-Net Virtual Try-On Method based on Parallel Convolution Kernel [J]. Software Engineering, 2022, 25(6): 13-17.
Research on Attention U-Net Virtual Try-On Method based on Parallel Convolution Kernel
SHU Xingzhe
(School of Information, Zhejiang Sci-Tech University, Hangzhou 310018, China)
1036413161@qq.com
Abstract: Virtual try-on suffers from insufficient feature extraction and from the person's limbs being occluded by clothing. Building on the image-feature-preserving virtual try-on method, this paper proposes an Attention U-Net virtual try-on method based on parallel convolution kernels. The method replaces the original 3×3 convolution kernel with parallel convolution kernels for feature extraction and integrates an attention mechanism into the U-Net network to form a new Attention U-Net image synthesizer. With the network's learning parameters tuned iteratively, the model is evaluated on the VITON (Virtual Try-On Network) Dataset in virtual try-on experiments. Experimental results show that, compared with the original method, the proposed method extracts more detailed textures, improves structural similarity by 15.6%, and produces better virtual try-on results.
Keywords: virtual try-on; feature extraction; parallel convolution kernel; attention mechanism; structural similarity
CLC number: TP391.41    Document code: A
Funding: Shaoxing Technology Innovation Plan (Open Competition) Project (2020B41006)
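
The abstract names two architectural changes but gives no implementation details. As an illustration only, the following PyTorch-style sketch shows what the two ideas could look like in code: a block that extracts features with several convolution kernels in parallel instead of a single 3×3 kernel, and an Attention U-Net style gate applied to encoder skip features. The class names, the kernel sizes other than 3×3, the channel counts, and the 1×1 fusion layer are assumptions made for the sketch, not details taken from the paper.

```python
# Illustrative sketch only -- not the paper's code. Kernel sizes other than 3x3,
# channel counts, and the 1x1 fusion layer are assumptions.
import torch
import torch.nn as nn


class ParallelConvBlock(nn.Module):
    """Extract features with several kernel sizes in parallel instead of one 3x3 conv."""

    def __init__(self, in_ch, out_ch, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, k, padding=k // 2),  # keeps spatial size for odd k
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        ])
        # Fuse the concatenated branch outputs back down to out_ch channels.
        self.fuse = nn.Conv2d(out_ch * len(kernel_sizes), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))


class AttentionGate(nn.Module):
    """Attention U-Net style gate: re-weight encoder skip features with a gating signal."""

    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)  # transform skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)    # transform gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)          # per-pixel attention logit

    def forward(self, skip, gate):
        # 'gate' is assumed to already match the skip feature map's spatial size.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # suppress irrelevant regions before decoder concatenation
```

In a U-Net decoder, each skip connection would pass through such a gate before being concatenated with the upsampled decoder features. The 15.6% figure quoted above refers to structural similarity (SSIM); for evaluation, an off-the-shelf SSIM implementation can be used, for example (argument names as in recent scikit-image versions):

```python
from skimage.metrics import structural_similarity as ssim

# try_on_result and ground_truth are HxWx3 uint8 images of the same size
score = ssim(try_on_result, ground_truth, channel_axis=2, data_range=255)
```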

