The present invention relates to a smart learning method based on emotional avatar emoticons, and to a smart learning terminal device for implementing the same. The smart learning terminal device 100 includes a camera 110, a touch screen 120, a microphone 130, a speaker 140, a transceiver 150, a controller 160, and a storage unit 170, and the controller 160 includes an emotional avatar emoticon generation module 161, a hybrid feature point detection algorithm module 162, and a smart learning module 163. The emotional avatar emoticon generation module 161 in turn includes a feature point detection unit 161a, a character generation unit 161b, a template resource image generation unit 161c, a 3D face application unit 161d, and a voice parsing unit 161e.

The feature point detection unit 161a reads the template character information previously stored in the storage unit 170. The template character information consists of group icon information for each part of the emotional avatar face character; for the eyes, this includes group icons such as under-eye wrinkles, the white of the eye, and the pupil frame, and for the mouth, group icons such as the lip line, lip gloss, and lip wrinkles. Template character information is the standard data for expressing a user character based on a 2D frontal face photograph: it is stored in advance as group information for each face component, so that the emotional avatar face character included in the final emoticon, which is implemented in animation form, can be generated. The feature point detection unit 161a extracts the face region from the 2D frontal face photograph image and automatically locates the face components of that region, namely the eyes, nose, and mouth (and also the ears, jaw, and the like). From the template character information it extracts the standard data matching the eyes, nose, mouth, ears, and jaw, using the group information of the 2D frontal face photograph image that is actually to be characterized.

The character generation unit 161b generates a similarity transformation matrix from the reference points selected on the outline of the jaw and the standard data stored in advance for the group of each part, as sketched below. To generate an 'emotional avatar emoticon in animation form' using the emotional avatar face character that implements the user's face region, the face region is automatically extracted from the 2D frontal face photograph image, and the template character information previously stored in the storage unit 170 (for example, a change of the lips, a change of the eye into a wink expression, or a change of the lips into a sloping expression) is normalized in accordance with the generated similarity transformation matrix and stored in the storage unit 170 as emotional avatar face character normalization information.
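To make the similarity-transformation step concrete, the following is a minimal sketch, not the patent's actual implementation, of how reference points detected on the user's 2D frontal face photograph could be aligned to the pre-stored standard data. It assumes OpenCV; cv2.estimateAffinePartial2D estimates a similarity transform (rotation, uniform scale, translation), and the point arrays below are hypothetical placeholders.

```python
import numpy as np
import cv2

# Hypothetical reference points detected on the jaw outline and face
# components of the user's 2D frontal face photograph (pixel coordinates).
detected_points = np.array([
    [112.0, 240.0], [160.0, 298.0], [224.0, 318.0],
    [288.0, 298.0], [336.0, 240.0],
], dtype=np.float32)

# Corresponding points of the pre-stored standard (template) face data.
standard_points = np.array([
    [100.0, 230.0], [150.0, 290.0], [210.0, 312.0],
    [270.0, 290.0], [320.0, 230.0],
], dtype=np.float32)

# Estimate a similarity transform that maps the detected points onto the
# standard data; this plays the role of the "similarity transformation
# matrix" used to normalize the template character information.
matrix, inliers = cv2.estimateAffinePartial2D(detected_points, standard_points)
print(matrix)  # 2x3 similarity transformation matrix

# Normalizing a template component then amounts to warping it with the
# matrix, e.g.: normalized = cv2.warpAffine(component_img, matrix, (w, h))
```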
The feature point detection unit 161a then stores the normalized emotional avatar face character normalization information in the storage unit 170. Using the generated similarity transformation matrix, the emotional avatar face character normalization information is extracted from one of the group icons of each part of the emotional avatar face character and generated for use in animation effects and shadow effects, with gamma correction and shading applied for eyebrow replacement.

The template resource image generation unit 161c parses the face region of the 2D frontal face photograph image extracted by the character generation unit 161b against the template animation content information that can be expressed in animation form, thereby performing the process of creating the 2D emotional avatar emoticon, and judges whether each face component parsed in the parsed face region is suitable for the self-image animation implemented according to the template animation content information. The criterion is a fitness percentage, that is, the matching rate between the parsed face region and the standard face part information of the animation implemented according to the template animation content information (matching information for the eyes, nose, mouth, ears, and jaw); if this percentage is less than a preset threshold percentage, the component is judged unsuitable. If it is suitable for the self-image animation, the 2D emotional avatar emoticon in animation form is completed by replacing the corresponding face component of the animation with the face component constituting the face region of the 2D frontal face photograph image. On the other hand, when the determination result according to this criterion is not suitable for the self-image animation, the 'emotional avatar face character normalization information' is used as a face component template in the self-image animation implemented according to the template animation content information previously stored in the storage unit 170. When the template animation content information corresponding to the animation template information for the facial expression animation is also stored in advance in the storage unit 170, the emotional avatar face character normalization information, which is the normalization information of the similarity transformation matrix, is applied: the face component for the emotional avatar implementing the animation is changed and then stored in the storage unit 170.

Then, as a first partial process for application to the 2D emotional avatar emoticon, the user's skin color and state are extracted in order to implement the face component corresponding to the selected face component template, and the skin corresponding to that template is created. As a second partial process, depending on the part to which the animation is applied, the skin color attribute is extracted and the facial skin is removed and automatically regenerated to reflect the user's skin properties according to the animation effect, then applied to the 2D emotional avatar emoticon; a sketch of the fitness test and the skin sampling follows below. When the face component change is performed, the shapes of the eyebrows, eyes, lips, and jaw of the face are changed.
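The suitability judgment above reduces to comparing a matching rate against a preset threshold, and the first partial process reduces to sampling the user's skin color from the face region. The following is a minimal sketch under those assumptions; the threshold value, the distance-based matching metric, and the mask-based sampling are illustrative choices, not taken from the source.

```python
import numpy as np

FITNESS_THRESHOLD = 0.85  # preset threshold percentage (illustrative value)

def fitness_percentage(parsed_landmarks, standard_landmarks):
    """Matching rate between the parsed face components (eyes, nose,
    mouth, ears, jaw) and the standard face part information of the
    template animation, expressed as a value in [0, 1]."""
    parsed = np.asarray(parsed_landmarks, dtype=np.float64)
    standard = np.asarray(standard_landmarks, dtype=np.float64)
    # Mean point-to-point error, normalized by the standard data's extent.
    scale = np.linalg.norm(standard.max(axis=0) - standard.min(axis=0))
    mean_err = np.mean(np.linalg.norm(parsed - standard, axis=1)) / scale
    return max(0.0, 1.0 - mean_err)

def is_suitable_for_self_image_animation(parsed, standard):
    # Below the preset threshold -> judged unsuitable, so the pre-stored
    # face component template is used instead of the user's component.
    return fitness_percentage(parsed, standard) >= FITNESS_THRESHOLD

def extract_skin_color(face_rgb, skin_mask):
    """First partial process (sketch): sample the user's skin color
    attribute as the mean RGB value over a skin mask of the face region."""
    pixels = face_rgb[skin_mask.astype(bool)]
    return pixels.mean(axis=0)  # e.g. array([r, g, b])
```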
The shapes of the eyebrows, eyes, lips, and jaw among the user's face components are changed, and the template face components generated by the first partial process are changed by automatically adjusting the color and shape attributes of the user's face elements so that they are applied naturally to the user's face. The size and color attributes of the face region to be applied as a face character icon are extracted from the 2D frontal face photograph image, and the color and size are changed to match the size and color attributes of the extracted face region so that it can be applied as an animation target icon; this completes the generation of the 2D emotional avatar emoticon by the template resource image generation unit 161c.

The 3D face application unit 161d performs 2D-based 3D face modeling (3D face modeling based on 2D) on the face region of the 2D emotional avatar emoticon generated by the template resource image generation unit 161c, in order to automatically generate 3D emotional avatar emoticons displayed in animation form; this is 2D-based facial morphing that performs a 3D face animation effect, distorting the top, bottom, left, and right sides of the user's 2D frontal face photograph image to produce a rotation effect. In the 3D face modeling based on 2D, the face region of the 2D emotional avatar emoticon generated from the 2D frontal face photograph image is decoded, and the decoded image is stored in the storage unit 170. Thereafter, polygons, the smallest units used to express a three-dimensional shape in 3D graphics, are generated; a plurality of polygons are created and converted into a polygon set, and the polygon set is stored in the storage unit 170. Texture mapping is then performed to attach the decoded image stored in the storage unit 170 onto the generated polygon set, generating 3D-mapped face region data. The 3D emotional avatar emoticon carrying the 3D face region data is scaled down to a level of 1/100 (0.01-fold), stored in the storage unit 170, and output to the touch screen 120.

As image preprocessing for the 3D face modeling based on 2D, the outline of the image is detected from the image information of the face region of the generated 2D emotional avatar emoticon, i.e. the 2D frontal face photograph image. To detect the outline, the photograph image undergoes binarization or a special processing: binarization lowers the color values to 0 and 1 in order to sharpen the edge information of the image, while the special processing converts the color image to gray levels and then runs contour detection. The extracted outline improves the user's selection of reference points on the outline, and the template character information previously stored in the storage unit 170, corresponding to the reference points and standard data selected on the outline, is used. Sketches of the outline preprocessing and of the polygon and texture-mapping step follow below.
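The outline preprocessing described above maps directly onto standard OpenCV calls. The following is a minimal sketch, assuming the input is a BGR photograph; the threshold value is illustrative and the largest-contour heuristic is an assumption, since the source does not say how the face outline is chosen among detected contours.

```python
import cv2

def detect_face_outline(photo_bgr):
    """Preprocessing sketch for the 2D-based 3D face modeling: convert
    to gray levels, binarize (color values lowered to 0 and 1) to
    sharpen edge information, and run contour detection."""
    gray = cv2.cvtColor(photo_bgr, cv2.COLOR_BGR2GRAY)
    # Binarization: every pixel becomes 0 or 1 (scaled here to 0/255 so
    # the intermediate image remains viewable).
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    # Contour detection on the binarized image; the largest contour is
    # taken here as the outline of the face region (an assumption).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    outline = max(contours, key=cv2.contourArea)
    return outline  # reference points can then be selected on this outline
```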
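For the polygon-set and texture-mapping step, the following NumPy-only sketch shows one plausible reading: tile the decoded face-region image with triangles (the smallest 3D primitives), assign each vertex a UV coordinate into the decoded image (texture mapping), and scale the result down 0.01-fold as the text describes. The flat z = 0 plane and the grid triangulation are assumptions; the source does not specify how depth is produced.

```python
import numpy as np

def build_textured_face_mesh(decoded_image_shape, grid=8, scale=1.0 / 100.0):
    """Build a triangle mesh over the decoded face image, with UV
    texture coordinates and the described 1/100 scale reduction."""
    h, w = decoded_image_shape[:2]
    xs = np.linspace(0.0, w, grid + 1)
    ys = np.linspace(0.0, h, grid + 1)
    vertices, uvs, triangles = [], [], []
    for y in ys:
        for x in xs:
            vertices.append((x * scale, y * scale, 0.0))  # scaled, flat plane
            uvs.append((x / w, y / h))  # texture coordinates into the image
    stride = grid + 1
    for j in range(grid):
        for i in range(grid):
            a, b = j * stride + i, j * stride + i + 1
            c, d = (j + 1) * stride + i, (j + 1) * stride + i + 1
            triangles += [(a, b, c), (b, d, c)]  # two polygons per grid cell
    return np.array(vertices), np.array(uvs), np.array(triangles)
```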
The voice parsing unit 161e extracts from the storage unit 170 the template animation content for the 2D emotional avatar emoticon generated by the template resource image generation unit 161c and for the 3D emotional avatar emoticon generated by the 3D face application unit 161d. After extracting the content, it outputs the 2D emotional avatar emoticon or the 3D emotional avatar emoticon to the touch screen 120 in animation form based on the template animation content information; then, upon receipt of a voice signal input to the microphone 130, it converts the emoticon into a 2D or 3D emotional avatar emoticon that includes a voice expression and a voice emotion, and stores it in the storage unit 170, whereby the creation of the emotional avatar emoticon is completed.

Thereby, the learner's own voice is parsed and immediately inserted into the corresponding part of the templated, group-produced learning content, so that the learner's own voice operates the content (menu driven, etc.). By providing a part where the learner's voice is instantly inserted into the content and linked with the lip movement of the emotional avatar corresponding to the learner's own avatar or a standard avatar, it provides a realistic indirect experience that is highly effective in experience-centered education for children. In addition, it improves the immersion of smart learning by using emotional avatar emoticons whose eye, nose, and mouth shapes and colors are generated automatically. Furthermore, it performs string parsing, generates emotional avatar emoticons including voice as a lip-sync animation type (sketched below), and offers the advantage of combining emotional animation in real time.
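As a rough illustration of how string parsing could drive the lip-sync animation, the sketch below maps characters of the parsed string to mouth-shape (viseme) keyframes that an animation engine could combine with the emoticon. Both the viseme table and the constant-rate timing model are illustrative assumptions; the source states only that string parsing is performed and that voice is combined as a lip-sync animation type.

```python
# Illustrative viseme table: maps characters of the parsed string to
# mouth-shape icons from the mouth group (lip line, lip gloss, ...).
VISEMES = {"a": "mouth_open", "o": "mouth_round", "m": "mouth_closed"}
DEFAULT_VISEME = "mouth_rest"
SECONDS_PER_CHAR = 0.08  # assumed constant speaking rate

def lip_sync_keyframes(parsed_text):
    """String parsing -> (time, mouth shape) keyframes that the
    animation engine can combine with the emotional avatar emoticon
    in real time."""
    keyframes = []
    for i, ch in enumerate(parsed_text.lower()):
        shape = VISEMES.get(ch, DEFAULT_VISEME)
        keyframes.append((i * SECONDS_PER_CHAR, shape))
    return keyframes

print(lip_sync_keyframes("Mom"))
# [(0.0, 'mouth_closed'), (0.08, 'mouth_round'), (0.16, 'mouth_closed')]
```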