
Method for providing smart learning education based on sensitivity avatar emoticon and smart learning education device for the same


Abstract

The present invention relates to a smart learning education method based on emotional avatar emoticons, and to a smart learning terminal device for implementing it. The smart learning terminal device 100 comprises a camera 110, a touch screen 120, a microphone 130, a speaker 140, a transceiver 150, a controller 160, and a storage unit 170, where the controller 160 carries an emotional avatar emoticon generation module 161, a hybrid feature point detection algorithm module 162, and a smart learning education module 163. The emotional avatar emoticon generation module 161 in turn includes a feature point detection unit 161a, a character generation unit 161b, a template resource image generation unit 161c, a 3D face application unit 161d, and a voice parsing unit 161e.

The feature point detection unit 161a reads the template character information stored in advance in the storage unit 170. Template character information is the standard data for expressing a user character from a 2D frontal face photograph: it is organized as group icon information for each part of the emotional avatar face character (for the eyes, group icons such as under-eye wrinkles, eye whites, and pupil frames; for the mouth, group icons such as lip lines, lip gloss, and lip wrinkles), stored per facial component group so that the animation containing the emotional avatar face character, which forms the face region of the final animated emoticon, can be generated. The unit extracts the face region from the 2D frontal face photograph and automatically locates the positions of its facial components: the eyes, nose, and mouth (and also the ears, jaw, and the like). Using the group information for each part, it then extracts from the template character information the standard data matching the eyes, nose, mouth, ears, and jaw of the 2D frontal photograph actually being characterized, and generates a similarity transformation matrix from reference points selected on the jaw outline together with the pre-stored standard data. In accordance with this similarity transformation matrix, the character generation unit 161b, which generates the animation using the emotional avatar face character implementing the user's face region, normalizes the template character information stored in the storage unit 170 (for example, changing the lips, changing an eye into a wink expression, or changing the lips into a sloping expression) and stores the resulting avatar face character normalization information back in the storage unit 170; the automatic extraction of the face region from the 2D frontal photograph is performed in order to generate the emotional avatar emoticon in animated form.
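The similarity transformation matrix above is a standard alignment construct (rotation, uniform scale, and translation). As a minimal sketch of how such a matrix can be estimated from corresponding points, assuming an OpenCV pipeline and hypothetical landmark coordinates (the patent does not specify the implementation):

```python
# Minimal sketch (not the patent's implementation): estimate the similarity
# transformation matrix that aligns facial landmarks detected in a 2D frontal
# photo to the pre-stored standard (template) points. Coordinates are
# hypothetical placeholders.
import numpy as np
import cv2

# Reference points (eyes, nose, jaw outline) detected in the photograph.
detected_pts = np.array([[112, 140], [208, 138], [160, 200], [160, 260]],
                        dtype=np.float32)
# Corresponding standard-data points from the template character information.
template_pts = np.array([[100, 130], [220, 130], [160, 195], [160, 255]],
                        dtype=np.float32)

# A partial affine transform = similarity: rotation + uniform scale + translation.
M, inliers = cv2.estimateAffinePartial2D(detected_pts, template_pts)
print("similarity transformation matrix:\n", M)

# The same 2x3 matrix can then warp the photo's face region into the
# normalized template coordinate frame:
# normalized = cv2.warpAffine(face_img, M, (width, height))
```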
Using the similarity transformation matrix generated by the feature point detection unit 161a, one group icon is extracted from each part group of the emotional avatar face character and prepared for use in animation effects and shadow effects, with gamma correction and shading applied for operations such as eyebrow replacement; the result is stored in the storage unit 170 as the emotional avatar face character normalization information.

The template resource image generation unit 161c then creates the 2D emotional avatar emoticon by parsing the face region of the 2D frontal photograph, extracted by the character generation unit 161b, against template animation content information, which describes what can be expressed in animation form. For the facial components in the parsed face region, the unit judges whether they are suitable for the self-image animation implemented according to that content information: the fitness is the matching percentage between the parsed face region and the standard face part information (eyes, nose, mouth, ears, and jaw) of the animation, and the region is judged unsuitable if this percentage falls below a preset threshold. If it is suitable, the face components of the animation are changed to the face components constituting the face region of the 2D frontal photograph, completing the animated 2D emotional avatar emoticon. If it is not suitable, the emotional avatar face character normalization information stored in the storage unit 170 is used instead as the facial component template of the self-image animation; when template animation content information corresponding to the animation template information for the facial expression animation is also stored in advance, the facial components for the emotional avatar implementing the animation are changed according to the normalization information of the similarity transformation matrix and stored in the storage unit 170.

Skin handling is split into two partial processes. The first partial process extracts the user's skin color and state and creates the skin corresponding to the selected facial component template for application to the 2D emotional avatar emoticon, depending on the part of the animation it applies to. The second partial process extracts the skin color attributes, applies them to the 2D emotional avatar emoticon, and automatically regenerates the facial skin by reflecting the user's skin properties according to the animation effect. When a facial component change is performed, the shapes of the eyebrows, eyes, lips, and jaw of the face are changed.
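Gamma correction and skin color attribute extraction are ordinary image operations; the following is a minimal sketch under the assumption of a BGR face image and a binary skin-region mask (both hypothetical), not the patent's actual routine:

```python
# Minimal sketch (assumptions: 8-bit BGR face image, binary skin mask).
import numpy as np
import cv2

def extract_skin_color(face_bgr: np.ndarray, skin_mask: np.ndarray):
    """Mean BGR color of the masked skin pixels - the 'skin color attribute'."""
    return cv2.mean(face_bgr, mask=skin_mask)[:3]

def apply_gamma(img: np.ndarray, gamma: float) -> np.ndarray:
    """Gamma correction via a lookup table, usable for shading effects."""
    inv = 1.0 / gamma
    table = ((np.arange(256) / 255.0) ** inv * 255).astype(np.uint8)
    return cv2.LUT(img, table)

# Example: brighten a replaced eyebrow patch slightly before compositing.
# eyebrow_patch = apply_gamma(eyebrow_patch, gamma=1.2)
```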
The shapes of the eyebrows, eyes, lips, and jaw of the user's facial components are changed, and the template facial components generated in the first partial process are applied to the user's face automatically by adjusting the color and shape attributes of the user's facial elements. The size and color attributes of the face region to be applied as a face character icon are extracted from the 2D frontal photograph, and the color and size are changed to match those extracted attributes so that the region can be applied as an animation target icon; once the face region of the 2D emotional avatar emoticon generated by the template resource image generation unit 161c is matched in this way, generation of the 2D emotional avatar emoticon is complete.

The 3D face application unit 161d then performs 2D-based 3D face modeling to automatically generate the 3D emotional avatar emoticon displayed in animation form. It applies 2D-based facial morphing that distorts the top, bottom, left, and right sides of the user's 2D frontal face photograph to produce a rotation effect, a 3D face animation effect. For the 3D face modeling, the face region of the 2D emotional avatar emoticon generated from the 2D frontal photograph is decoded and the decoded image is stored in the storage unit 170. Polygons, the smallest units used to express a three-dimensional shape in 3D graphics, are then generated; a plurality of polygons is created and converted into a polygon set, which is stored in the storage unit 170. Texture mapping attaches the decoded image stored in the storage unit 170 to the generated polygon set, producing 3D-mapped face region data. The 3D emotional avatar emoticon built from this 3D face region data is scaled down to 1/100 (a 0.01-fold reduction), stored in the storage unit 170, and output to the touch screen 120.

As image pre-processing for the 2D-based 3D face modeling, the outline of the image is detected from the image information of the face region of the generated 2D emotional avatar emoticon. To detect the outline, the photographic image undergoes binarization or special processing: binarization lowers the color values to zero and one in order to make the edge information of the image clear, while the special processing converts the color image to gray levels or runs contour detection. During the special processing, user-selected reference points improve the selection of the outline, and the template character information stored in advance in the storage unit 170, corresponding to the reference points and standard data selected on the outline, is used.
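The binarization and contour detection steps described above correspond to common OpenCV operations; a minimal sketch (not the patent's code, with a hypothetical input file) might look like this:

```python
# Minimal sketch: binarize the face region and detect its outline.
import cv2

face = cv2.imread("face_region.png")           # hypothetical input image
gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)  # color image -> gray level

# Binarization: reduce every pixel to 0 or 255 to expose edge information.
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Contour detection on the binarized image yields candidate outlines;
# the largest one is taken as the face outline.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
outline = max(contours, key=cv2.contourArea)
```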
The voice parsing unit 161e extracts from the storage unit 170 the template animation content information for the 2D emotional avatar emoticon generated by the template resource image generation unit 161c and the 3D emotional avatar emoticon generated by the 3D face application unit 161d, and outputs the 2D or 3D emotional avatar emoticon to the touch screen 120 in animation form based on that information. When a voice signal is received through the microphone 130, the signal is converted into a 2D or 3D emotional avatar emoticon that includes the voice expression and voice emotion, and the result is stored in the storage unit 170; this completes the creation of the emotional avatar emoticon.

As a result, the learner's own voice is parsed and immediately inserted into the templated, group-based learning content, and the learner's voice operations on the content (menu navigation and the like) are instantly inserted and associated with the emotional avatar corresponding to the learner's own avatar or a standard avatar. This provides a realistic, indirect experience that is highly effective for experience-centered education of children. In addition, the emotional avatar emoticons, whose eye and nose shapes and colors are generated automatically, improve the learner's immersion in smart learning. Finally, string parsing is performed to generate emotional avatar emoticons that include voice in the form of lip-sync animation, with the advantage that emotional animation can be combined in real time.
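The patent does not detail how the lip-sync animation is derived from the parsed voice signal; one common, minimal approach (an assumption, not the patent's algorithm) is to drive the avatar's mouth-open amount from the short-time amplitude envelope of the recorded audio:

```python
# Minimal sketch (assumption, not the patent's algorithm): derive per-frame
# mouth openness for lip-sync animation from the audio amplitude envelope.
# Assumes a mono 16-bit WAV recording of the learner's voice.
import wave
import numpy as np

def mouth_openness(wav_path: str, fps: int = 24) -> np.ndarray:
    """Return one openness value in [0, 1] per animation frame."""
    with wave.open(wav_path, "rb") as w:
        rate = w.getframerate()
        audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    samples_per_frame = rate // fps
    n_frames = len(audio) // samples_per_frame
    frames = audio[: n_frames * samples_per_frame].reshape(n_frames, -1)
    rms = np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1))
    return rms / (rms.max() + 1e-9)   # normalize to [0, 1]

# Each value then scales the avatar's mouth shape on the matching frame:
# openness = mouth_openness("learner_voice.wav")
```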

Bibliographic Data

  • Publication/Announcement No.: KR101743763B1
  • Patent Type:
  • Publication Date: 2017-06-05
  • Original Document Format: PDF
  • Applicant/Patentee: (주)참빛솔루션
  • Application/Patent No.: KR20150092072
  • Inventor: 김영자
  • Filing Date: 2015-06-29
  • IPC Classification: G06Q50/20; G06Q50/10; G09B5/02
  • Country: KR
  • Date Added: 2022-08-21 13:25:23
