Published in: IEEE International Conference on Artificial Intelligence and Virtual Reality

A Compensation Method of Two-Stage Image Generation for Human-AI Collaborated In-Situ Fashion Design in Augmented Reality Environment



Abstract

In this paper, we consider a human-AI collaboration task, fashion design, in an augmented reality environment. In particular, we propose a compensation method for a two-stage image generation neural network that generates fashion designs from progressive user inputs. Our work builds on a recently proposed deep learning model, pix2pix, which can successfully translate an image from one domain into another, such as from line drawings to color images. However, the pix2pix model relies on the condition that input images come from the same distribution, which makes it hard to apply to real human-computer interaction tasks, where input varies from individual to individual. To address this problem, we propose a compensation method based on two-stage image generation. In the first stage, we ask users to indicate their design preference through an easy task, such as tuning clothing landmarks, and use this input to generate a compensation input. In the second stage, we concatenate the compensation input with the user's free-hand sketch to generate a perceptually better result. In addition, to deploy the two-stage image generation network in an augmented reality environment, we designed and implemented a mobile application in which users can create fashion designs by referring to real-world human models. With the augmented 2D screen and the instant feedback our system provides, users can design clothing by seamlessly mixing the real and virtual environments. Through an online experiment with 46 participants and an offline use case study, we demonstrate the capability and usability of our system. Finally, we discuss the limitations of our system and future work on human-AI collaborated design.
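The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and the pix2pix-style generators are replaced with simple placeholders so the data flow (landmarks → compensation image, then channel-wise concatenation with the user sketch) is visible.

```python
import numpy as np

def stage1_compensation(landmarks, size=(64, 64)):
    """Stage 1 (sketch): turn user-tuned clothing landmarks into a
    compensation image that lies closer to the training distribution.
    In the paper this is a generator network; here a placeholder
    rasterizes the normalized landmark points onto a blank canvas."""
    canvas = np.zeros(size, dtype=np.float32)
    for (x, y) in landmarks:
        row = int(y * (size[0] - 1))
        col = int(x * (size[1] - 1))
        canvas[row, col] = 1.0
    return canvas

def stage2_generate(user_sketch, compensation):
    """Stage 2 (sketch): concatenate the free-hand sketch with the
    compensation image channel-wise and feed the pair to the second
    generator. The placeholder 'generator' just averages the channels."""
    assert user_sketch.shape == compensation.shape
    stacked = np.stack([user_sketch, compensation], axis=-1)  # H x W x 2
    return stacked.mean(axis=-1)  # stand-in for the generator forward pass

# Usage: three landmarks (shoulders and hem, normalized x/y) plus an empty sketch.
landmarks = [(0.25, 0.1), (0.75, 0.1), (0.5, 0.9)]
comp = stage1_compensation(landmarks)
sketch = np.zeros_like(comp)
out = stage2_generate(sketch, comp)
print(out.shape)  # (64, 64)
```

The key design point carried over from the abstract is that the second stage never sees the raw user input alone: it always receives the compensation image as an extra channel, which conditions the generator on an in-distribution signal.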
