Mechanics & Industry, Volume 23, 2022
Article Number: 13
Number of pages: 17
DOI: https://doi.org/10.1051/meca/2022011
Published online: 30 June 2022
Regular Article
Research on custom-tailored swimming goggles applied to the internet
School of Art and Design, Xi’an University of Technology, 710048 Shaanxi, China
* e-mail: xiaobo413@126.com
Received: 16 December 2021
Accepted: 24 April 2022
Custom-tailored designs have attracted increasing attention from both consumers and manufacturers due to increasingly intense market competition. We propose and verify a method for custom designing swimming goggles that is suitable for use on the Internet. Twenty-five points representing head features were first identified, and the relationship between these points and the size of the goggles was confirmed. The correct position for photography was then experimentally determined, and a camera-position corrector was designed and manufactured. A three-dimensional (3D) scanning model was divided into 18 planes based on the feature points, and the contour curve of the surface on each plane was extracted. A Hermite interpolation curve was then used to describe the contour curve of the head, and a parametric 3D head model was established. A method of using orthographic photographs with patches to obtain 3D data was summarized to determine the size of the user's head, and a 3D model of the user's head and a 3D model of the goggles were established. Lastly, we developed an algorithm for eliminating errors in the photographs. We also produced an operational flowchart for an application (APP) following these research approaches and then determined the page structure of the APP based on the flowchart, thereby verifying the validity of our proposed method and ultimately establishing an APP for interactively designing swimming goggles. The entire APP operation process was completed with a volunteer as an experimental subject, and a model for custom-tailored goggles was obtained. The model was then processed and produced using 3D printing. The volunteer confirmed that the goggles were comfortable to wear and perfectly positioned on his face, thereby verifying the validity of the method.
Key words: Design by customers / personalized design / design of swimming goggles / dimensional prototyping of photographs
© X. Bai et al., Published by EDP Sciences 2022
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
The technology for manufacturing products is gradually maturing and improving with the rapid development of science and technology, reducing the barriers to product manufacturing and thereby leading to strong competition in the marketplace and increasing homogeneity of products. Production is becoming increasingly intelligent in the fourth industrial revolution, and the demand for individualized products is becoming stronger. The ability to respond quickly to the diversified needs of customers and to dynamic changes in the marketplace has thus become the main means for most companies to improve their core competitiveness [1].
Custom-tailored designs have become increasingly desirable in the last 10 years with the upgrading of Internet technology, the popularization of three-dimensional (3D) printing, and the improvement of printing technology [2]. The Spreadshirt and Designhill websites provide a variety of T-shirt customization services, and the Cafepress, Zazzle, and Etsy websites provide customized design services for a variety of daily necessities and gifts. Many websites for customized furniture and kitchenware, such as Boconcept and Crddesignbuild, provide a variety of customized solutions. Many companies in the home-appliance industry plan for custom-tailored solutions in advance. Samsung has launched a website for customizing refrigerators, where consumers can customize color and size based on the decor of their kitchens [3], and companies in China such as Haier have also entered the era of customizing the design of home appliances [4]. Automobile companies have also prepared and begun to enter the era of custom design. General Motors can build and deliver custom cars within 15–20 days. DaimlerChrysler’s Fast Car and Ford’s Consumer Connect programs aim to link design, manufacturing, and marketing using advanced IT systems, which will make real-time product development transparent and allow the production of profitable customized vehicles [5].
The market share of wearable products has recently been expanding because these products can link seamlessly to other electronic devices to meet the needs of users. Some wearables even have more than one application, and application products can be developed for a variety of individuals and workplaces [6]. Swimming goggles, the object of this study, are typical wearable products. The design of swimming goggles should follow the principles of ergonomics; it must conform to the anatomical structure of the human face, nose, and eyes to ensure that the goggles are comfortable to wear and do not leak. Human-computer interaction is not currently emphasized in the design of swimming goggles, and most of the existing customized design systems in the market and academia focus on customizing product appearance and color without considering differences in head size. An effective method for customizing the design of products is also lacking, one that integrates existing technology, quickly and effectively establishes a model of the user's head, and uses the size data of the model to carry out the customized design of products such as swimming goggles. Goggles may therefore cause discomfort and fatigue if they do not fit well or if customers wear them for a long time, and such inappropriate designs may threaten the safety and health of users. Swimming goggles should fit the user's head well to provide a comfortable and safe user experience. We therefore used advanced measurement methods to obtain samples of human heads and then established a parameterized head model. Photographs provided by the user were used to obtain data for the size of the user's head using a program for extracting sizes from photographs. The size data were then used to modify the parameterized model to establish a 3D model of the user's head. The product size was then obtained based on this 3D model. The parameterized product model was modified for comfort, and the product appearance was modified to make it consistent with the user's image. Lastly, an interactive design application (APP) for swimming goggles was developed and verified by examples based on the theoretical results.
2 Pertinent research
2.1 Measurement of body size
Anthropometry is widely used in product design to make the products more comfortable and safer. People of different races, ages, heights, and weights differ greatly in body size [7]. Some studies have therefore used a group of people of a particular race and age as samples for measuring the human body in one dimension [8,9] and then used statistical methods for analyzing the one-dimensional (1D) size of the human body. The sparse data obtained from anthropometry in 1D space, however, is not enough to ensure the comfort and functionality of the product. The shape and size of the body are not only very diverse due to gender, race, and age, but also have a complicated 3D surface-fitting relationship with the products, which makes wearables uncomfortable or cumbersome for users. The products will also interfere with normal user activity.
Formfitting consumer products usually needs a complete 3D configuration during the design to adapt to the body or to specific body parts, so conducting a comprehensive study of the shape and size of the human body is necessary. Robinette et al. [10,11] used a 3D-scanning method to collect 3D data of body size for people aged 18–65 years from four countries, the United States of America, the Netherlands, Italy, and Canada. In particular, they collected detailed data on 30 head and facial features and established the Civilian American and European Surface Anthropometry Resource (CAESAR) database, which provided a reliable basis for the design of wearable products. Yan Luximo and others of the Hong Kong Polytechnic University [12] collected 1620 3D-scanned samples of different ages and genders in six regions in northern and southern China and created the SizeChina database of head sizes. A principal component analysis of male and female heads was then performed for head height, head length, head breadth, face height, face length, mandible width, nasion depth, and chin depth. Models of ordinary male and female heads were then obtained, which indicated that head and face sizes were significantly larger for males than females. Adapting the size of wearables such as helmets, glasses, and masks to different races at the same time is difficult due to ethnic differences. Roger Ball [13] compared the head and face data from the SizeChina and CAESAR databases and found that heads were rounder for Chinese people than Europeans and Americans, but the foreheads and backs were straighter.
The acquisition of data for human body size has entered a stage where 3D scanning is now the primary tool, and physical measurement has become secondary. Data for the human body will thereby become more accurate. These methods for measuring body size, however, will eventually statistically obtain an average size, which can then be used for product design and other tasks. Few studies, though, have proposed a method for the real-time measurement of human body size for custom-tailored designs.
2.2 Method for obtaining 3D data of the human body
Almost all 3D scanning data of the human body currently come from expensive laser 3D scanners; for example, the Cyberware WB4 [14] and Vitronic Viro 3D Pro [15] scanners are often used [16]. Both scanners, however, are large and difficult to transport, and their use is severely restricted, which causes some difficulties in sample collection. Some researchers thus use the portable Cyberware 3030 and LT3DFaceCamTM EXI [17] scanners to scan the head and face, and some researchers use computerized tomography (CT) to obtain 3D data of the human body [18–20]. Both 3D and CT scanning equipment, however, are expensive, and ordinary consumers are not able to purchase this equipment. These two types of scanning instruments therefore cannot currently be used for custom-tailored designs.
Some researchers have tried to use inexpensive methods to obtain 3D data of the human body; for example, depth cameras and 3D photographic stereoimaging technology are now popular research fields. Kinect is a motion-sensing camera from Microsoft that sells for less than US$ 200. Its sensors are mainly composed of depth and RGB (red, green, blue) cameras, which are used to obtain 3D information of objects. Kinect can obtain the depth and RGB images of an object at the same time. It can also obtain the point-cloud coordinates of the object in space and, by combining the RGB image with the depth image, restore a high-precision 3D model with the original color. Some researchers have optimized and improved the Kinect algorithm to improve scanning accuracy and to scan and reconstruct small objects using Kinect [21]. Other researchers have used Kinect in health care and evaluated its imaging performance, and some believe that Kinect can be used in facial imaging or breast surgery [22]. Kinect has also been widely used to model the human body. Jing Tong used multiple Kinect cameras to build a 3D scanning platform and established an accurate and complete 3D model of the human body [23]. Hou used Kinect to simultaneously obtain depth and color images of the user and used Laplacian mesh deformation to obtain a more realistic 3D model of faces [24]. Wang presented a novel approach for dynamically reconstructing and tracking the motion of the human body using low-cost depth cameras; a novel data-driven 3D model of the human body was introduced to efficiently reconstruct models of the human body with wide variations in shape and pose using only a limited number of training databases with a standard standing pose [25].
Another reconstruction approach is based on the principle of camera imaging systems or video [26], using multiple photographs from different angles to reconstruct a 3D model. This approach usually includes the following steps: extraction of image characteristics, depth restoration, point-cloud registration, and depth integration. Huang et al. [27] used machine-learning algorithms to obtain facial features and then modified the size of glasses based on these features, thereby enabling interactions with users for modifying the shape of the glasses based on the contents of product semantics. Chin and Kim [28] used orthogonal photographs of users to capture facial features, proposed an effective low-polygon algorithm to generate low-polygon 3D models of human faces, and then applied this theory to wearables. We next conducted experimental research on three methods of measurement and summarized their advantages and disadvantages (Tab. 1). Laser scanning is very accurate but cannot be popularized due to its high cost, so it is not a suitable tool for custom-tailored design. Depth cameras are more affordable, but they generally only appear in the market as game hardware and have recently begun to gradually leave the market, so they are also not suitable. Taking photographs has become a part of people's daily lives since the advent of smartphones, and the accuracy of the 3D reconstruction of photographs can be improved by algorithms and the use of the correct capture mode. We therefore applied the 3D prototyping of photographs to obtain data on the heads of users.
Table 1. Comparison of three methods used to obtain human data.
2.3 Design of wearables
Statistical data can be obtained by the 3D measurement of the human body, and some researchers have endeavored to conduct design research on wearables based on these data. Ellena first collected data for 3D models of the heads of several Australian cyclists and then divided their head shapes into four levels [29]. The inner surfaces of helmets were constructed referring to the outer surfaces of the four levels of head shapes. Finally, a helmet suitable for Australian cyclists was designed [30]. Zhu Zhaohua et al. first used 3D scanning and other methods to produce a number of models of ear canals of young Chinese males and females and then divided the shapes of their auricles into 24 groups using a cluster analysis of the 3D models [31]. Earphones were later personalized, customized, and finally verified after setting the design objectives as skid-proof and wearable, which led to good results [32]. Chih-Hsing Chu used a principal component analysis to reduce the complexity of 3D data of faces, determined the correlation between different parameters, and defined the feature points of the face based on the standard of the face database. The key dimensions of the facial features were then used to construct a 3D face model based on kriging nonlinear regression to parametrically design glasses [33]. Roger Ball et al. used a 3D scanner to obtain data of the human body and imported them into Rhino software to achieve the 3D design of wearables [17], and Buonamici et al. designed easy-to-use semi-automatic tools for modeling the medical device and proposed a methodology to produce 3D-printable casts for wrist immobilization [34]. Wonsup Lee used the CAESAR database to analyze the relationship between the sizes of various wearables and the feature points of the human head and established a system for analyzing the relationship between the size of the human body and the wearables. The target product and the relationships between the target population, the number of scales, and the representative face models were analyzed using this system to help the design of wearables [35].
A literature review indicated that personalized designs could be achieved in many ways, but these methods were not completely applicable to the custom-tailored design of wearables applied to the Internet. In the process of personalized custom design, the designer needs to communicate with the user in real time, and the equipment used to obtain the body data should be easy to obtain. In addition, the literature on head measurement and 3D head reconstruction that we have reviewed so far constitutes basic research for mass-customization design; it does not consider that, with the maturity of 3D printing and 5G technology, the personalized, interactive, customized design of products has become possible. We therefore developed a method for obtaining the user's head size based on orthogonal photographs, which helps users take photographs correctly using auxiliary tools designed by us. Patches and auxiliary tools are then used to improve the accuracy of the data obtained from the photographs. Parametric drawing is also used to obtain a 3D model of the user's head, and the design of the wearables is completed based on this model.
3 Methodology
3.1 Description of the methods
The main and specific research components of this study were divided into several categories (Fig. 1). The anatomy of the human head was first studied, and head-feature points that can be quickly identified and calibrated by non-professionals were determined based on the characteristics of the human head and the product to be designed. A 3D scanning model was divided into 18 planes based on the feature points, and the contour curve of the surface on each plane was extracted. A Hermite interpolation curve was then used to describe the contour curve of the head, and parametric models of the head and swimming goggles were established. We experimented with how to use stickers, auxiliary tools, voice reminders, and other methods to help users take photographs correctly so that more accurate head sizes could be obtained. An algorithm for preprocessing photographs was used to preprocess the user's photographs, the head size was obtained using a photograph-reading algorithm, and the stickers and auxiliary tools in the photographs were used as references to correct errors in the extracted head-size data. The calculated head-size data were used to modify the established parametric head model to construct a 3D model of the user's head. The user could then choose to modify the shape and color of the swimming goggles based on their preferences, and the system would establish a parametric model of the goggles based on these choices and the characteristics of the surface of the user's head. We used 3D printing technology based on the model of the goggles for rapid production. We used all this information to experiment with taking photographs, determining the correlation between the size of the swimming goggles and the human body, establishing a 3D parametric model of the head, and acquiring sizes from the photographs. Finally, the interactive interface of the APP was designed based on the theoretical research, and the product was made using 3D printing.
Fig. 1 Components of this study. |
3.2 Determination of head-feature points
The feature points of the human head have been previously defined in detail [12,16], but both of these studies used different research methods, so the selected feature points differ slightly. We defined 25 feature points (Fig. 2) based on the position where the product was worn on the human body, combined with the definitions of the feature points of the body in various studies, with more feature points especially around the eyes. Point 25 was chosen as the center of rotation of the human head and was used for subsequent data compensation.
Fig. 2 Determination of the 25 head-feature points. 1, Vertex; 2, metopion; 3, right superciliare; 4, sellion; 5, right nasal root point; 6, right pupil; 7, right temple; 8, right endocanthus; 9, right ectocanthion; 10, right infraorbitale; 11, right lateral zygomatic; 12, pronasale; 13, right alare; 14, subnasale; 15, labiale superius; 16, right cheillion; 17, stomion; 18, labiale inferius; 19, sublabiale; 20, chin; 21, menton; 22, right otobasion superius; 23, right postaurale; 24, right tragion; 25, midpoint of rear hairline. |
3.3 Correlation between the size parameters of the swimming goggles and head size
The contoured silicone gasket of swimming goggles must be pressed precisely on the red line (around the eyes, see Fig. 3) and must fit perfectly with the curved surface around this line to ensure comfort and a good seal. The design process should therefore ensure that the black curve in Figure 4a lies outside the red line in Figure 3 and that the red and black curves in Figure 4c are similar to the curvature of the face surface at that point. Ten decisive parameters (Tab. 2) were determined based on the characteristics of swimming goggles when designing the goggles. Eight of these 10 parameters are associated with the feature points of the head, which are mainly concentrated around the eyes.
The determination of feature points should follow two principles: they should be clear and effective. We verified the validity of feature-point selection by conducting a virtual wearing experiment on a 3D model of the swimming goggles (Fig. 5a) and 3D scanning models of three volunteers (Figs. 5b–5d). The experiment identified a large distance between the goggles and the volunteer’s head at the upper edge of the goggles, the lower eyelid area of the lower edge, the nasal width of the lower edge, and the outer edges of the goggles on both sides. Combining the literature [35,37] and the information in Table 2, we have therefore determined the feature points associated with the goggles and have determined the effective positions of the feature points. To ensure that users could preview the aesthetic effect of the swimming goggles and their own head model at the later stage, we also used the literature [35,37] to determine other feature points of the head (Fig. 2) to build a full-head 3D model.
Table 2. Relationships between goggle size and head-feature points.
Fig. 3 Human skull [36]. |
Fig. 4 Size parameters of swimming goggles (see Tab. 2). |
Fig. 5 Verification of the validity of feature point selection. |
3.4 Establishment of a 3D parametric model of the head
3.4.1 Extraction of head contour lines
Figure 6 shows a 3D model of the head that has been scanned and repaired. The 3D head model was segmented using 18 parallel planes passing through the feature points in Figure 2 based on the distributional positions of the points. A denser segmentation plane was set above and below the eyes because it was intended for designing swimming goggles.
The segmentation planes were first used to section the scanned head model and to extract the contour lines of the surface in each segmented part. Some of the contour lines, such as those around the eyeballs and ears, were simplified; the extra curve segments were removed and the curves were connected smoothly. Finally, 18 closed curves on 18 parallel planes were obtained.
Fig. 6 Layering of a scanned three-dimensional model. |
3.4.2 Establishment of a 3D coordinate system of the head
A 3D coordinate system of the head needed to be established in order to build a parametric model (Fig. 6). Based on studies of the human head, the Frankfurt plane was used as a horizontal plane, and a vertical plane was established on the plane containing the axis of symmetry of the head to form a coordinate system with the vertical plane passing through point 24 in Figure 2. The origin of the coordinates was the intersection of the above three planes. The X-axis was then the line of intersection between the Frankfurt plane and the vertical plane passing through point 24, the Y-axis was the line of intersection between the Frankfurt plane and the symmetry plane of the head, and the Z-axis was the line of intersection between the vertical plane passing through point 24 and the symmetry plane of the head.
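As an illustration of this construction, the following minimal sketch (Python with NumPy) builds such a head coordinate frame from a few 3D feature points. The function name, the use of the left tragion and the sellion as inputs, and the choice of the tragion midpoint as the origin are assumptions made for the example; the paper itself builds the frame directly from the three planes in the modeling software.

```python
import numpy as np

def head_frame(r_tragion, l_tragion, r_infraorbitale, sellion):
    """Sketch of the head coordinate frame of Section 3.4.2.
    Inputs are 3D points in the scanner's coordinate system; the left
    tragion and the sellion are assumed to be available so that the
    Frankfurt plane and the mid-sagittal (symmetry) plane can be built."""
    r_tragion, l_tragion, r_infraorbitale, sellion = (
        np.asarray(p, float) for p in (r_tragion, l_tragion, r_infraorbitale, sellion))

    # Z axis: normal of the Frankfurt plane (through both tragions and the right infraorbitale)
    z = np.cross(r_tragion - l_tragion, r_infraorbitale - l_tragion)
    z /= np.linalg.norm(z)

    # X axis: left-right direction lying in the Frankfurt plane
    # (intersection of the Frankfurt plane with the vertical plane through the tragion)
    x = r_tragion - l_tragion
    x -= np.dot(x, z) * z
    x /= np.linalg.norm(x)

    # Y axis: front-back direction (intersection of the Frankfurt and symmetry planes)
    y = np.cross(z, x)

    # make +Y point towards the face while keeping a right-handed frame
    if np.dot(y, sellion - r_tragion) < 0:
        y, x = -y, -x

    # origin approximated as the mid-sagittal point at tragion depth on the Frankfurt plane
    origin = 0.5 * (r_tragion + l_tragion)
    return origin, np.vstack([x, y, z])   # rows of the 3x3 matrix are the axes

def to_head_coords(p, origin, axes):
    """Transform a scanned point into head coordinates."""
    return axes @ (np.asarray(p, float) - origin)
```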
3.4.3 Establishment of an interpolation curve
Interpolating the extracted and simplified head contour curve with a curve equation for subsequent analysis was necessary for constructing a parameterized head model. All curves in this study were calculated by the interpolation of 20 points. We mainly used the user’s orthogonal photographs and the calibrated coordinates in the photographs to modify the parametric model. The coordinates of some points in the head profile curve could therefore be obtained directly from the photographs (Fig. 7), but others needed to be automatically obtained based on the interpolation equation. We thus chose the Hermite interpolation curve with free parameters to describe the head contour curve [38]. Figure 7 shows the curve obtained using the point interpolation method.
Through a series of known points Pi and the corresponding tangent vectors Ti, where i = 1, 2, …, n, the curve equation can be expressed as equation (1),
where the basis functions are quintic Bernstein polynomials, δ and θ are two free real parameters, and the curve parameter k satisfies 0 ≤ k ≤ 1.
For arbitrary free parameters δ and θ, the curve defined by equation (1) through the points Pi with tangent vectors Ti satisfies C2 continuity.
Equations (2) and (3) are obtained from equation (1), and equation (4) is obtained from equations (2) and (3).
The free parameter δ for a type A curve is generally a single-precision value between –2 and 2, and θ should be between –2.5 and 2. The free parameter δ for a type B curve is generally between –5 and 3.5, and θ should be between –10 and 2. The free parameter δ for a type C curve is generally between –5 and 3.5, and θ should be between –6 and 3.
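The quintic Hermite basis functions and the exact roles of δ and θ are given in [38] and are not reproduced here, so the following sketch uses SciPy's standard cubic Hermite interpolation as a simplified stand-in: it interpolates one layer's contour points with prescribed tangent vectors, which is the part of the construction needed to follow the rest of the method. The function name and the chord-length parametrization are assumptions for the example.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

def hermite_contour(points, tangents, samples=200):
    """Interpolate one layer's contour through known points P_i with tangent
    vectors T_i. Cubic Hermite segments stand in for the quintic Hermite
    curve with free parameters (delta, theta) of [38]; the quintic form
    additionally lets the designer tune the shape without moving the points."""
    points = np.asarray(points, float)      # (n, d) contour points on a cutting plane
    tangents = np.asarray(tangents, float)  # (n, d) tangent vectors at those points

    # chord-length parametrization of the curve parameter k
    k = np.concatenate([[0.0],
                        np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))])

    # one Hermite spline per coordinate, matching both values and derivatives
    splines = [CubicHermiteSpline(k, points[:, d], tangents[:, d])
               for d in range(points.shape[1])]

    kk = np.linspace(k[0], k[-1], samples)
    return np.stack([s(kk) for s in splines], axis=1)

# example: a quarter circle approximated from 3 points with unit tangents
pts = np.array([[1.0, 0.0], [np.sqrt(0.5), np.sqrt(0.5)], [0.0, 1.0]])
tans = np.array([[0.0, 1.0], [-np.sqrt(0.5), np.sqrt(0.5)], [-1.0, 0.0]])
curve = hermite_contour(pts, tans)
```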
Fig. 7 Curve after interpolation. |
3.4.4 Parametric models of the head and swimming goggles
The VB Script component in Grasshopper parametric programming was used to implement equations (1)–(4) as code embedded in the parametric program. In Grasshopper, the functional relationship between the size of the human head and the interpolation curve of each layer was first established, the interpolation curve of each layer was drawn, and the surface command was then used to generate a parametric surface (Fig. 8), i.e. the head surface after parametric modeling. This kind of parametric model can be deformed based on the coordinate positions of the head-feature points in the head coordinate system.
The curve data were extracted on the L3–L9 plane in Figure 6, the functional relationship was established between the size data in Figure 4 and Table 2, and the parametric design of the swimming goggles was completed. Figure 9 shows the procedure and results of the parametric modeling of the goggles. The model changes with the shape of the head surface, and its appearance can then be changed by modifying some of the parameters of the goggles.
Fig. 8 Parametric model of the head. |
Fig. 9 Parametric model of swimming goggles. |
3.5 Acquisition and processing of image data
3.5.1 Acquisition of the head data
The premise of establishing an accurate head model following our method was to be able to obtain accurate data for key points from photographs. Photographs taken by users, however, often lead to errors in the accuracy of the extracted data due to multiple reasons. These errors can be mainly summarized as objective and subjective errors [39].
Ensuring the accuracy of the user's head position when taking photographs is necessary to ensure the accuracy of the head data. The specific steps were: when taking a frontal photograph, the back of the head should touch the wall, the body should be naturally upright, and the shoulders should be relaxed; when taking a profile photograph, the body should be upright, and the strip of tape on the head should be kept horizontal. Point 21 should also be at the lowest point of the chin in both photographs. Round patches of equal diameter were pasted on the user's head-feature points to determine the location of the feature points (Fig. 10). Basic postural accuracy during shooting was ensured by pasting a clearly visible, narrow, stiff colored tape between the right infraorbitale and the right tragion and keeping it approximately parallel to the ground.
Multiple design schemes were also verified in designing an auxiliary tool to further ensure the correct position of the camera (Fig. 11). This auxiliary tool helps users take photographs correctly and guarantees the subsequent correction of head data. Component 1 was a suction cup (Fig. 11), which was used to attach the entire auxiliary tool to point 20 in Figure 2. Component 2 was a T-shaped correction plate with multicolored patches, which could be used to correct data by reading and calculating the point coordinates of the colored patches. Components 3 and 4 were a nylon rope and a plastic ball, respectively, which were connected to component 2 to ensure that the Frankfurt plane was approximately parallel to the ground.
The user attaches the patches, tape, and auxiliary tool as required, takes frontal and profile photographs with the camera held vertically while following the prescribed posture, and then follows the steps below to obtain the data from the photographs (Fig. 12).
1) Six interpolation points of the head contour curve are generated at the top of the user's hair based on extraction of the pure background color of the user's image; the user then manually adjusts these points based on the condition of their hair to generate an accurate head contour curve. Finally, the actual head contour is outlined (Fig. 12).
2) The user manually enters the diameter of the patches used and selects the color of the patches (not black) in the photograph.
3) The system converts the RGB image into the HSI (hue, saturation, intensity) and Lab color spaces, removes the background through a two-dimensional (2D) segmentation on [S, I], and then segments the remaining pixels through the four-dimensional set [S, I, a, b]. Based on the K-means algorithm [40] with preset cluster centers, the system traverses the pixel information of the photograph and selects the pixel sets whose [S, I, a, b] values lie within a reasonable range of the preset patch color values. These pixel coordinates are then clustered with the K-means algorithm to obtain K color clusters {G1, G2, G3, …, GK} and their cluster-center coordinates, where x1…k and y1…k are the plane coordinates of the pixels in the photograph (a minimal sketch of this patch-detection step is given after this list).
4) Using all the samples (xi, yi) in color-gamut cluster Gi, the cluster-center coordinates of the cluster are calculated as the coordinates of the patch center point (the 2D coordinates of the feature points in the two plane rectangular coordinate systems), as in equation (5).
5) The diameter Di of each patch in the current photograph and the average diameter D̄ are calculated from the two sets of samples (xp, yp) and (xq, yq) among all samples of color-gamut cluster Gi, as in equations (6) and (7).
6) The system automatically draws horizontal and vertical lines through each patch center and intersects them with the outline of the head to produce two sets of contour points, M and N, as in equations (8) and (9).
7) The cluster-center coordinates calculated using the above algorithm form the feature-point coordinate set A in the photograph, the intersection point sets are M and N, the average patch diameter is D̄, and the diameter of the circular patch entered by the user is D0, from which the key-point set T, comprising the feature-point and outline-point sets of the head, is obtained, as in equation (10).
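The following is a minimal sketch of steps 3), 4), 5), and 7): pixels close to the preset patch color are selected, clustered into one group per patch with K-means, and the resulting pixel coordinates are scaled to millimetres using the known patch diameter D0. It substitutes a plain Lab-distance threshold for the [S, I, a, b] segmentation described above, and all function names, thresholds, and the diameter estimate are assumptions made for illustration.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def locate_patches(image_bgr, patch_color_bgr, n_patches, d0_mm, tol=25.0):
    """Rough sketch of steps 3)-5) and 7): find circular patch centers by
    K-means clustering of pixels whose Lab color is close to the preset
    patch color, then scale pixel coordinates to millimetres with the
    known patch diameter d0_mm."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    target = cv2.cvtColor(np.uint8([[patch_color_bgr]]),
                          cv2.COLOR_BGR2LAB)[0, 0].astype(np.float32)

    # pixels whose color lies within a reasonable range of the preset patch color
    dist = np.linalg.norm(lab - target, axis=2)
    ys, xs = np.nonzero(dist < tol)
    coords = np.column_stack([xs, ys]).astype(np.float32)   # (x, y) pixel coordinates

    # cluster the candidate pixels into one cluster per patch (cluster centres, cf. eq. (5))
    km = KMeans(n_clusters=n_patches, n_init=10, random_state=0).fit(coords)
    centers_px = km.cluster_centers_

    # approximate each patch diameter and the average diameter (cf. eqs. (6)-(7))
    diameters = []
    for i in range(n_patches):
        pts = coords[km.labels_ == i]
        span = pts.max(axis=0) - pts.min(axis=0)
        diameters.append(span.mean())            # mean of x- and y-extent in pixels
    d_bar_px = float(np.mean(diameters))

    # millimetres per pixel from the known real patch diameter (cf. eq. (10))
    mm_per_px = d0_mm / d_bar_px
    return centers_px * mm_per_px, mm_per_px
```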
Fig. 10 Acquisition of the head data. |
Fig. 11 Auxiliary tool for taking photographs. |
Fig. 12 Correct method for taking photographs. |
3.5.2 Correction of subjective errors
The user cannot fully present the ideal head angle when taking photographs due to individual reasons, leading to subjective errors: 2D or 3D deflection, and even different deflection angles in the two photographs. Specifically, the user's head is deflected in 3D around the atlanto-axial joint. We abstracted the atlanto-axial joint into the midpoint R of the posterior hairline (Fig. 10), regarded this midpoint of the back hairline as the origin of the head's rotation, and abstracted the user's head as a cube to simplify the deflection and make it easier to calculate the resulting deviation of the image data (Fig. 13).
To solve this problem, we divided the 3D deflection of the data generated by the user's head during the frontal and profile shots into three types of 2D deflections relative to the XOY, YOZ, and XOZ planes (Fig. 14), which correspond to three data errors: rotation, translational zoom, and zoom. We then designed a correction tool (Fig. 11) to obtain the deflection data of the user's head for better compensation. Image-recognition technology was used to detect and calculate the three basic 2D deflection angles of the frontal and profile shots. We correspondingly established rotational, translational, and zoom compensation matrices to compensate the key-point data and obtain more accurate data.
We thereby established an algorithm to complete the acquisition of data and correction of the photograph.
In the front shot:
The coordinates of the deflection center point R are obtained by searching the feature-point coordinate set obtained from the profile view.
To compensate for the rotation errors, the four corner coordinates of the black color cluster of the correction tool in Figure 11 are calculated in the frontal or profile shots using equation (11) to obtain the data representing the user's head in the frontal view. The trigonometric function value of the angle α1 between the midpoint of the short side of the red rectangle representing the user's head data and the Z-axis in the frontal photograph can then be obtained (Fig. 14a). This value is calculated from the coordinates of the upper-left, upper-right, lower-left, and lower-right corners of the gray cluster, (xul, zul), (xur, zur), (xll, zll), and (xlr, zlr), in equation (11), and the compensation is then calculated by establishing a rotation matrix, as in equation (12).
To compensate for the translational and zoom errors, the deviation of the X values of the center coordinates of the yellow and blue color clusters of the correction tool in the frontal view is calculated based on the principle in Figure 15, combined with the known actual straight-line distance l between the center points of the two color gamuts on the correction tool. Equation (13) is then used to calculate the trigonometric function value of the angle γ1 between the symmetry plane of the user's head and the YOZ plane of the imaging plane, giving the geometry in Figure 16. In Figure 16, AB is the required theoretical photographic plane, A1B1 is the user imaging plane at an angle γ1 to the required theoretical plane, A2B2 is the actual imaging plane, and R is the abstracted center of rotation of the head, i.e. the midpoint of the back hairline. The translational distance of the user's head data in the frontal shot is then calculated using equations (14) and (15), yielding the translation distance s1 (line segment CC1 in Fig. 16) and the zoom ratio n1 of the user's head data in the frontal view. Finally, the key-point coordinate data are compensated using equation (16).
To compensate for the zoom errors, the difference between the Z values of the yellow and blue color-cluster center coordinates of the auxiliary tool in the frontal shot, combined with the known size of the correction tool, is calculated based on the principle in Figure 15. Equation (17) is then used to calculate the 2D deflection angle of the user's head in the YOZ plane, giving the trigonometric function value of the angle β1 between the midpoint of the short side of the gray rectangle and the Z-axis (Fig. 14b). The scaling factor t1 is then obtained using equation (18), the atlanto-axial joint abstract point R is taken as the zoom center point, and equation (19) is used to scale the X and Y coordinate values of the key points by this factor.
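Because equations (11)–(19) are not reproduced in the text, the following sketch only illustrates the matrix mechanics of the front-shot compensation: the 2D key points are rotated about the abstracted rotation center R, translated, and scaled by quantities α1, s1, n1, and t1 assumed to have been obtained from the correction tool as described above. The order of operations and the exact forms are assumptions.

```python
import numpy as np

def compensate_front_shot(points_xz, R_xz, alpha1_rad, s1, n1, t1):
    """Minimal sketch of the front-shot compensation: rotate the 2D key
    points about the abstracted rotation centre R by -alpha1 (rotation
    error), shift and scale them by the translational distance s1 and
    ratio n1, and apply the zoom factor t1 about R."""
    p = np.array(points_xz, dtype=float)   # (n, 2) key points in the (x, z) plane
    R = np.asarray(R_xz, dtype=float)

    # rotation compensation about R (an eq. (12)-style rotation matrix)
    c, s = np.cos(-alpha1_rad), np.sin(-alpha1_rad)
    rot = np.array([[c, -s], [s, c]])
    p = (p - R) @ rot.T + R

    # translational + zoom compensation (eq. (16)-style): shift along x, scale by n1 about R
    p[:, 0] -= s1
    p = (p - R) * n1 + R

    # zoom compensation about R (eq. (19)-style): scale by t1
    p = (p - R) * t1 + R
    return p
```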
The method of data compensation for the profile shot was similar to the method of data processing for the front shot, except for:
To compensate for the rotation error, an equation similar to equation (11) is first used with the black color cluster to obtain the trigonometric function value of the angle β2 between the midpoint of the short side of the gray frame and the Z-axis, and the rotation matrix of equation (12) is then used to compensate the key-point data.
Compensating for the translational zoom error is basically the same as the method of data compensation used for the frontal shot. The trigonometric function value of the angle γ2 between the symmetry plane of the user's head and the XOZ plane is calculated from the difference between the Y values of the center points of the red and green color clusters. Equations similar to equations (14) and (15) are then used to calculate the translational distance s2 and the scaling ratio n2 of the user's head data in the profile photograph, and the key-point coordinate data are compensated using an equation similar to equation (16).
To compensate for the zoom error, an equation similar to equation (17) is first used to calculate the trigonometric function value of the angle α2 between the symmetry plane of the user's head and the YOZ plane from the difference between the Z values of the center points of the red and green color clusters, and the zoom factor t2 is obtained using an equation similar to equation (18). Using the atlanto-axial joint abstract point as the zoom center point, the Z coordinates of the key points are then scaled by this factor using an equation similar to equation (19).
Based on the above compensation-calculation steps, the 2D key-point coordinate set T1 = {(x1, z1), (x2, z2), (x3, z3), ⋯ (xt, zt)} of the frontal shot can be obtained, and the 2D key-point coordinate set T2 of the profile shot can be obtained in the same way. The z values in T1 and T2, however, may contain small errors due to shooting errors and differences in calculation accuracy. Their average is therefore used to reduce the errors, and equation (20) is used to obtain the 3D space coordinate set of each key point.
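A minimal sketch of an equation (20)-style merge is given below: corresponding key points from the frontal set (x, z) and the profile set (y, z) are combined into 3D coordinates, and the coordinate measured in both photographs is averaged. Identical point ordering in the two sets is assumed.

```python
import numpy as np

def merge_front_profile(T1_xz, T2_yz):
    """Combine corresponding key points from the frontal shot (x, z) and the
    profile shot (y, z) into 3D coordinates, averaging the coordinate that
    is measured in both photographs."""
    T1 = np.asarray(T1_xz, float)
    T2 = np.asarray(T2_yz, float)
    x = T1[:, 0]
    y = T2[:, 0]
    z = 0.5 * (T1[:, 1] + T2[:, 1])   # average the shared coordinate
    return np.column_stack([x, y, z])
```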
Using correction tools to ensure that the user’s posture is basically correct when taking photographs is obviously needed, and real-time detection using the APP should be performed.
Photographs can be taken under the condition that the deflection in all directions is controlled to within 10°. The APP needs to remind the user to prevent the deviation when the angular deviation is >10°, because the errors will otherwise be larger and the subsequent image preprocessing and processing will be inconvenient. The description above details how the correction tool is used to calculate and compensate the key-point data of the frontal and profile shots of the user's head.
Fig. 13 Three-dimensional deviation and its two-dimensional projection. |
Fig. 14 Breakdown of two-dimensional deflection. |
Fig. 15 Schematic of the deflection of the center of the correction tool color cluster. |
Fig. 16 Schematic of the translational deviation and calculation of the photographic data. |
3.5.3 Parameterization of the user’s head and the establishment of a model of swimming goggles
The type and location of the key points are determined by sorting the values of the x, y, and z coordinates in the 3D space coordinate set Td. The calculated data are then transmitted to the designed parametric-model program. In the process of parametric modeling, the feature points of each layer in Figures 2 and 6 are known points, and their three-dimensional coordinates can be calculated using the above method based on the distribution of the feature points of the head. The interpolation curve of each layer has at least four interpolation points as known points. The value of an unknown point on the interpolation curve can be obtained by multiplying the coordinate value of a known feature point by an adjustable coefficient, and the "number slider" control in Grasshopper can then be used to adjust the coefficient to ensure the smoothness of the interpolation curve of each layer. Figure 17a shows the resulting model of the user's head, and Figure 17b shows the parametric swimming-goggle model created to fit it.
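The following sketch illustrates this layer-curve construction outside Grasshopper: unknown interpolation points are derived from known feature points through adjustable coefficients (the role of the "number slider" controls), and a smooth closed curve is fitted through the points. A periodic B-spline from SciPy stands in for the Hermite interpolation curve, and the angular sorting of the points is an assumption made for the example.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def layer_curve(known_pts, coefficients, samples=100):
    """Sketch of one layer of the parametric head model: unknown
    interpolation points are obtained by scaling known feature points with
    adjustable coefficients, and a smooth closed curve is fitted through
    all points (a periodic B-spline stands in for the Hermite curve)."""
    known_pts = np.asarray(known_pts, float)                 # (n, 2) known feature points
    extra = known_pts * np.asarray(coefficients, float)[:, None]  # derived interpolation points
    pts = np.vstack([known_pts, extra])

    # order points by angle about the layer centroid so the contour closes cleanly
    c = pts.mean(axis=0)
    order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
    pts = pts[order]

    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=0, per=1)     # periodic (closed) spline
    u = np.linspace(0, 1, samples)
    x, y = splev(u, tck)
    return np.column_stack([x, y])
```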
Figure 18 verifies the accuracy of the correction algorithm and the parametric modeling method established in this paper. Figure 18a is a coordinate-alignment diagram of the laser 3D scanning model and the parameterized 3D model obtained without using the correction algorithm. Figure 18c is a coordinate-alignment diagram of the laser 3D scanning model and the parameterized 3D model obtained using the correction algorithm. Figures 18b and 18d show the results of the similarity experiments for Figures 18a and 18c, respectively. In Figure 18b, the average positive and negative deviations of the two curved surfaces are +3.5 and –5.713 mm, respectively, the maximum positive and negative deviations are +14.076 and –14.1 mm, respectively, and the standard deviation is 5.496 mm. In Figure 18d, the average positive and negative deviations of the two curved surfaces are +0.890 and –4.38 mm, respectively, the maximum positive and negative deviations are +8.108 and –14.1 mm, respectively, and the standard deviation is 4.011 mm. A comparison of the two groups of experiments indicated that the parameterized model obtained using the correction algorithm was more similar to the scanning model, so the correction algorithm for subjective errors proposed in this paper was effective. Figure 18d also verifies that our head model is very similar to the scanning model, and most of the surface deviation can be controlled between –1.505 and 1.505 mm, which meets the requirements for designing headwear products.
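As an indication of how such deviation statistics can be computed, the sketch below approximates the comparison with a nearest-vertex signed distance between the parameterized model and the scan, using the scan's vertex normals for the sign. This is a simplification of a true point-to-surface inspection and is not necessarily the computation used to produce Figure 18.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_stats(scan_pts, scan_normals, model_pts):
    """Approximate surface-deviation statistics between a parameterized
    model and a 3D scan: for each model vertex, find the nearest scan
    vertex, sign the distance with the scan normal, and report mean and
    maximum positive/negative deviations and the standard deviation."""
    scan_pts = np.asarray(scan_pts, float)
    model_pts = np.asarray(model_pts, float)
    normals = np.asarray(scan_normals, float)

    tree = cKDTree(scan_pts)
    dist, idx = tree.query(model_pts)

    # positive if the model vertex lies on the outward-normal side of the scan
    vec = model_pts - scan_pts[idx]
    signed = np.sign(np.einsum('ij,ij->i', vec, normals[idx])) * dist

    pos, neg = signed[signed > 0], signed[signed < 0]
    return {
        "mean_pos": pos.mean() if pos.size else 0.0,
        "mean_neg": neg.mean() if neg.size else 0.0,
        "max_pos": pos.max() if pos.size else 0.0,
        "max_neg": neg.min() if neg.size else 0.0,
        "std": signed.std(),
    }
```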
Fig. 17 Parameterization of the user's head and a diagram of the swimming-goggle model. |
Fig. 18 Verification of the accuracy of parametric modeling of the head of the user. |
4 Implementation of the APP
The rapid popularization and development of smartphones and their photographic technology have led to highly accurate photographs, which can meet the needs of our method. We therefore designed and developed a mobile APP for the customized design and production of swimming goggles by combining the above methods with tools for developing mobile software, which mainly involved designing the functional framework of the APP and designing and developing the interactive interface. Finally, we used the APP to provide an example of the application of goggle design to verify the effectiveness of the proposed method and the comfort of the result for users.
4.1 Interactive interface
4.1.1 Framework design of the APP
We designed a page frame diagram of the mobile APP based on the research ideas in Figure 1 and on previous research (Fig. 19). The 1st level of user pages of the mobile APP included a page for designing swimming goggles, an order page, and a personal-center page. The page for designing the goggles mainly contained functions for the acquisition of user head data and the interactive design of swimming goggles, the order page contained functions for previewing the user’s choices and their validation, and the personal-center page was mainly for user communication.
The entire APP operation begins with the user completing the personal registration or logging in. The user then enters the design page. If a head model has not yet been obtained, the user must enter the prompt page, which explains how to correctly take a photograph and use the auxiliary photographic tool. The user enters the camera interface and completes the preparation based on the requirements (adhering the patches and fixing the auxiliary camera equipment). The user then follows the APP prompts to take a photograph and upload it to the system. The system uses our algorithm to obtain the user's head size and other parameters, and the user's head model is automatically generated for preview. If not satisfied, the user takes another photograph. If satisfied, the user starts to design the goggles: the user chooses the shape, texture of the material, and color and then communicates with the designers. Finally, the user previews the final product design, and when satisfied, an order is generated and sent to the manufacturer for production along with the user's head size for a customized design.
Fig. 19 The page frame diagram of the mobile APP. |
4.1.2 Design of the interactive interface
The interactive interface of the APP was designed based on theoretical research. The fonts and colors of the entire interactive interface have been specially selected, and the style of the entire interface is simple and clear. The operator can thus easily familiarize themselves with all the contents, complete all the steps following the instructions of the APP, and finally receive a personalized custom design (Fig. 20).
Fig. 20 Personal design APP for swimming goggles. |
4.2 Example verification
4.2.1 Use flow
After entering the APP, the user first calibrates the camera following the prompts, and the compensation coefficients for the objective errors are calculated (Fig. 21). When the photographing interface is entered, the system first uses the built-in algorithm of the APP to determine whether the Frankfurt plane of the head is parallel to the vertical plane of the camera based on the degree of overlap between the stripe patch on the face and the black straight line shown in the APP. Whether the head position of the user remains within a slight, correctable error when taking photographs is then detected by the change in the size of the T-shaped bar of the photographic auxiliary tool. If not, the system prompts the user to adjust the position and take another photograph. The user uploads the new photograph to the system, and the system automatically generates a rough head contour, including hair. The user then drags the six curve interpolation points to obtain an accurate forehead contour. The user's head data are then extracted and corrected by the program for extracting photographic data. Finally, the corrected key parameters are entered into the standard parameterized head model to obtain the user's head model and a model of the swimming goggles linked to its data.
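The camera-calibration step that compensates the objective errors is not detailed in the text; the sketch below shows a standard Zhang-style checkerboard calibration with OpenCV, which matches the checkerboard photographs mentioned later in the paper and in reference [39]. The checkerboard pattern size and square size are assumptions.

```python
import cv2
import numpy as np

def calibrate_from_checkerboard(images, pattern=(9, 6), square_mm=20.0):
    """Estimate the camera matrix and distortion coefficients from several
    checkerboard photographs (Zhang-style calibration with OpenCV)."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

    obj_points, img_points, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)

    rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, size, None, None)
    return camera_matrix, dist_coeffs, rms

# a user photograph can then be undistorted before extracting patch coordinates:
# undistorted = cv2.undistort(photo, camera_matrix, dist_coeffs)
```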
The user can enter and operate the design interface to personalize the color and shape of the goggles using the interactive operations in the APP (Fig. 22a). The goggles can then be produced (Fig. 22b).
Fig. 21 Photographing the user. |
Fig. 22 Appearance of the designed swimming goggles. |
4.2.2 User validation and feedback
To verify the comfort of the swimming goggles, we put the parametrically designed 3D model of the swimming goggles on the laser 3D scanned model of the user's head, aligned the coordinate systems of the two models (Fig. 23a), and then conducted a fitting experiment (Fig. 23b). The maximum positive and negative deviations of the two curved surfaces were +5.546 and –3.12 mm, respectively, the average positive and negative deviations were +0.325 and –1.057 mm, respectively, and the standard deviation was 0.898 mm. The distance between the surface of the swimming goggles and the surface of the 3D scanned head was kept within the range of –1.115 to 1.115 mm. The surface of the swimming goggles thus fit the volunteer's head well, realizing the goal of the customized design; however, given the elasticity of human skin, the user must actually wear the goggles to further evaluate their comfort. After the user completes all steps, the order is therefore sent to the platform, which can send the 3D model of the goggles to the processing workshop for rapid manufacture. 3D printing is used to complete the processing, and photographs are taken from different angles (Fig. 24). The volunteer who participated in our wearing experiment is a swimming enthusiast with long-term experience in wearing swimming goggles. The 3D-printed swimming goggles were designed following the APP process described in this paper, and their size matches the parameterized model of the volunteer's head. Figure 24 shows that the goggles match the user's face very well. The product was then verified by the volunteer after wearing the goggles, confirming that they are comfortable to wear and that the product meets the user's individual needs.
Fig. 23 Virtual wear verification. |
Fig. 24 Validation of the designed swimming goggles. |
5 Discussion and conclusion
Wearables have become popular in the technology industry and will be the next competitive field. Many suppliers, including Google, Apple, Sony, and Samsung, have invested funds in the research and development of wearable technology and relevant equipment. Research on wearables for the head can support design in many fields, such as military and civilian helmets, glasses, respiratory masks in hospitals, and face masks. Despite this popularity, the main problems with wearables that need to be resolved include assessing the utility of the designed wearables, because the curved surfaces of the heads of different races and genders are very different, and minimizing the cost of obtaining the user's head data. The solutions to these problems are all considered in this study.
Our study differs from previous studies, which only considered mass-customization design. We considered the comfort of each individual user wearing the goggles and used image-processing and parameterization technology to realize personalized designs for users in remote and non-contact situations. Given the non-contact, fast, and accurate characteristics of this research, our proposed method may also provide ideas for the design of medical goggles, thereby making a specific contribution to ensuring the safety and comfort of medical staff.
We used 25 head-feature points to establish a 3D coordinate system for the user's head depending on the needs of the product to be designed. A 3D model created by scanning was then divided into 17 layers based on the distribution of these feature points. Contour lines of each layer of the 3D scanning model were extracted, and a Hermite interpolation curve was then used to describe the head contour curve. Finally, parametric modeling was used to complete the establishment of a parametric model of the head. We chose the simplest method of taking photographs with mobile telephones to obtain the user's head size, demonstrating the universality of the method. We designed photographic aids for improving the accuracy of taking photographs, and circular and strip stickers were used to assist in taking the photographs. Photographs of a checkerboard also need to be taken and uploaded to the system so that they can be used to calibrate the camera parameters of the user. A program was used to correct the photographic data after the user took the photographs. The head data in the user's photographs are analyzed and obtained by the mobile APP, and the user's 3D head model is generated from the parameterized 3D head model after the data are corrected using the auxiliary tool and patches. The mobile APP was designed after completing the theoretical research and a user-oriented framework. A volunteer was chosen to complete the entire personalized customization. The final product was made using 3D printing, with good results: the goggles were comfortable to wear and fit closely to the curved surface of the head.
Funding
This research was funded by the National Social Science Fund of China, grant number 18BG132.
Conflicts of interest/Competing interests
The authors provide their consent to publish this article.
Availability of data and material
All data generated or analysed during this study are included in this published article and its supplementary information files.
Code availability
No code was generated or used during the study.
Ethics approval
No animals were harmed during these experiments. Ethical review and approval were waived for this study because the main object of this study is wearable products.
Consent to participate
All authors consent to participate.
Consent for publication
The authors provide their consent to publish this article.
References
- R. Dou, D. Lin, A method for product personalized design based on prospect theory improved with interval reference, Comput. Ind. Eng. 125, 708–719 (2018) [CrossRef] [Google Scholar]
- Y. Xu, G. Chen, J. Zheng, An integrated solution—KAGFM for mass customization in customer-oriented product design under cloud manufacturing environment, Int. J. Adv. Manuf. Technol. 84, 85–101 (2016) [CrossRef] [Google Scholar]
- https://www.samsung.com/global/bespoke [Google Scholar]
- Y.S. Fan, G.Q. Huang, Networked manufacturing and mass customization in the ecommerce era: the Chinese perspective, Int. J. Comput. Integr. Manufactur. 20, 107–114 (2007) [CrossRef] [Google Scholar]
- A. Yassine, K.-C. Kim, Investigating the role of IT in customized product design, Product. Plan. Control 15, 422–434 (2004) [CrossRef] [Google Scholar]
- K.-Y. Lin, C.-F. Chien, UNISON framework of data-driven innovation for extracting user experience of product design of wearable devices, Comput. Ind. Eng. 99, 487–502 (2016) [CrossRef] [Google Scholar]
- H.-J. Lee, S.-J. Park, Comparison of Korean and Japanese head and face anthropometric characteristics, Human Biol. 80, 313–330 (2008) [Google Scholar]
- Y.-C. Lin, M.-J. Wang, The comparisons of anthropometric characteristics among four peoples in East Asia, Appl. Ergon. 35, 173–178 (2004) [CrossRef] [Google Scholar]
- M. Kouchi, Secular changes in the Japanese head form viewed from somatometric data, Anthropolog. Sci. 112, 41–52 (2004) [CrossRef] [Google Scholar]
- K.M. Robinette, H. Daanen, The Caesar project: a 3-D surface anthropometry survey[C], in Second International Conference on 3-D Digital Imaging and Modeling, IEEE, Canada (1999), p. 10 [Google Scholar]
- K.M. Robinette, 3-D landmark detection and identification in the CAESAR Project[C], in Third International Conference on 3-D Digital Imaging and Modeling, IEEE, Canada (2001), p. 7 [Google Scholar]
- Y. Luximo, SizeChina: a 3D anthropometric survey of the Chinese Head[D], Canada (2011) [Google Scholar]
- R. Ball, C. Shu, A comparison between Chinese and Caucasian head shapes, Appl. Ergon. 41, 832–839 (2010) [CrossRef] [Google Scholar]
- www.cyberware.com [Google Scholar]
- www.vitronic.de [Google Scholar]
- K.M. Robinette, H.A.M. Daanen, Precision of the CAESAR scan-extracted measurements, Appl. Ergon. 3, 259–265 (2006) [CrossRef] [Google Scholar]
- R. Ball, H. Wang, Scan and print: a digital design method for wearable products, Ergon. Des. 12, 26–34 (2019) [Google Scholar]
- J. Niu, Z. Li, Multi-resolution description of three-dimensional anthropometric data for design simplification, Appl. Ergon. 40, 807–810 (2009) [CrossRef] [Google Scholar]
- X. Chen, M. Shi, H. Zhou, X. Wang, G. Zhou, The ‘‘standard head’’ for sizing military helmet based on computerized tomography and the headform sizing algorithm, Acta Armament 23, 476–480 (2002) [Google Scholar]
- J. Niu, Z. Li, G. Salvendy, Multi-resolution description of three-dimensional anthropometric data for design simplification, Appl. Ergon. 40, 807–810 (2009) [CrossRef] [Google Scholar]
- E.M. Lachat, Assessment and calibration of a RGB-D camera (Kinect v2 Sensor) towards a potential use for close-range 3D modeling, Remote Sens. 7, 13070–13097 (2015) [CrossRef] [Google Scholar]
- S.T.L. Pöhlmann, Evaluation of kinect 3D sensor for healthcare imaging, J. Med. Biolog. Eng. 36, 857–870 (2016) [CrossRef] [PubMed] [Google Scholar]
- J. Tong, Z. Jin, Scanning 3D full human bodies using kinects, IEEE Trans. Visualiz. Comput. Graph. 18, 643–650 (2012) [CrossRef] [PubMed] [Google Scholar]
- S.M. Hou, C.F. Du, Laplace’s grid deformation 3D face modeling based on kinect, J. Graph. 39, 970–975 (2018) [Google Scholar]
- K. Wang, G. Zhang, J. Yang, Dynamic human body reconstruction and motion tracking with low-cost depth cameras, Visual Comput. 37, 603–618 (2021) [CrossRef] [Google Scholar]
- A. Agudo, F. Moreno-Noguer, Real-time 3D reconstruction of non-rigid shapes with a single moving camera, Comput. Vis. Image Understand. 153, 37–54 (2016) [CrossRef] [Google Scholar]
- S.-H. Huang, Y.-I. Yang, C.-H. Chu, Human-centric design personalization of 3D glasses frame in markerless augmented reality [Google Scholar]
- S. Chin, K.-Y. Kim, Facial configuration and BMI based personalized face and upper body modeling for customer-oriented wearable product design, Comput. Ind. 61, 559–575 (2010) [CrossRef] [Google Scholar]
- T. Ellena, S. Skals, 3D Anthropometric investigation of head and face characteristics of Australian Cyclists, Proc. Eng. 112, 98–103 (2015) [CrossRef] [Google Scholar]
- T. Ellena, H. Mustafa, A design framework for the mass customisation of custom-fit bicycle helmet models, Int. J. Ind. Ergon. 64, 122–133 (2018) [CrossRef] [Google Scholar]
- Z. Zhu, X. Ji, A morphometric study of auricular Concha in the population of young Chinese adults, Int. J. Morphol. 35, 1451–1458 (2017) [CrossRef] [Google Scholar]
- X. Ji, Z. Zhu, Anthropometry and classification of auricular concha for the ergonomic design of earphones, Human Factors Ergon. Manufact. Serv. Ind. 28, 90–99 (2018) [CrossRef] [Google Scholar]
- C.-H. Chu, I.-J. Wang, 3D parametric human face modeling for personalized product design: eyeglasses frame design case, Adv. Eng. Inf. 32, 202–223 (2017) [CrossRef] [Google Scholar]
- F. Buonamici, R. Furferi, L. Governi et al., A practical methodology for computer-aided design of custom 3D printable casts for wrist fractures, Visual Comput. 36, 375–390 (2020) [CrossRef] [Google Scholar]
- W. Leea, B. Lee, A 3D anthropometric sizing analysis system based on North American CAESAR 3D scan data for design of head wearable products, Comput. Ind. Eng. 117, 121–130 (2018) [CrossRef] [Google Scholar]
- F.C. Menezes Franco, Brachycephalic, dolichocephalic and mesocephalic: is it appropriate to describe the face using skull patterns? Dental Press J. Orthodont. 18, 159–163 (2013) [CrossRef] [PubMed] [Google Scholar]
- L. Yan, L.R. Ba, L. Justice, The 3D Chinese head and face modeling, Comput.-Aided Des. 44, 40–47 (2012) [CrossRef] [Google Scholar]
- J. Li, A class of quintic Hermite interpolation curve and the free parameters selection, J. Adv. Mech. Des. Syst. Manufactur. 13, 1–8 (2019) [Google Scholar]
- J.H. Park, S.H. Park, Improvement on Zhang’s Camera Calibration, Appl. Mech. Mater. 479-480s, 170–173 (2013) [CrossRef] [Google Scholar]
- K. Tian, J. Li, J. Zeng, A. Evans, L. Zhang, Segmentation of tomato leaf images based on adaptive clustering number of K-means algorithm, Comput. Electr. Agric. 165, 104962 (2019) [CrossRef] [Google Scholar]
Cite this article as: X. Bai, K. Wu, S. Qin, Y. Wang, Q. Yang, Research on custom-tailored swimming goggles applied to the internet, Mechanics & Industry 23, 13 (2022)