Mechanics & Industry
Volume 20, Number 8, 2019
Selected scientific topics in recent applied engineering – 20 Years of the ‘French Association of Mechanics – AFM’
Article Number: 804
Number of pages: 16
DOI: https://doi.org/10.1051/meca/2020009
Published online: 25 February 2020
Regular Article
Advanced model order reduction and artificial intelligence techniques empowering advanced structural mechanics simulations: application to crash test analyses
^{1} Gestamp Autotech Engineering France, 1719, Rue Jeanne Braconnier, 92360 Meudon, France
^{2} ESI Group Chair @ PIMM Laboratory, Arts et Métiers ParisTech, 151 Boulevard de l’Hôpital, 75013 Paris, France
^{3} ESI Group, Batiment Seville, 3 bis Saarinen, 50468 Rungis, France
^{*} email: francisco.chinesta@ensam.eu
Received: 25 May 2019
Accepted: 2 July 2019
This paper proposes a general framework for parametrically expressing quantities of interest related to the solution of complex structural mechanics models, in particular those involved in crash analyses, where strongly coupled nonlinear and dynamic behaviors coexist with space-time localized mechanisms. Advanced nonlinear regressions able to proceed in the low-data limit while accommodating heterogeneous parameters are proposed, and their performance is evaluated on crash simulations. Once these parametric expressions are determined, they can be used to generate large numbers of realizations of the quantity of interest for different parameter choices, supporting data analytics. Such parametric representations also allow the use of advanced optimization techniques, the evaluation of sensitivities and the propagation of uncertainty, all of them under the stringent real-time constraint.
© V. Limousin et al., published by EDP Sciences 2020
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
Plenty of effort has been dedicated throughout history to design and optimization processes. The preliminary design of a new component can be a tough task, especially when constraints coming from different fields have to be satisfied. Furthermore, evaluating the improvement of a given attribute under a change in an input parameter may become tedious when each evaluation of the direct problem involves either numerous experimental tests or several high-fidelity simulations.
Thus, design is usually stated as an optimization problem, where a cost function is minimized using an appropriate technique, for instance a steepest descent method. The main drawback of such a procedure lies in the need to solve the problem many times, once for each tentative choice of the design parameters.
To enhance design performance, Model Order Reduction – MOR – techniques enable faster simulations [1]. On the other hand, the construction and use of metamodels (surrogate models) also facilitates the design process, because the solution for a given choice of the parameters can be evaluated online in almost real time. Thus, as soon as the parametric solution of a given problem is available, simulation, optimization, inverse analysis, uncertainty propagation and even control can be performed efficiently under the stringent real-time constraint [2].
Among the different MOR techniques, the Proper Generalized Decomposition – PGD – allows the offline construction of a parametric solution, as reported in [3]. However, standard PGD constructors induce overly intrusive computational procedures [4]. To mitigate that issue, we proposed in our recent works some strategies enabling non-intrusive solution procedures, able to compute the parametric solution from a few runs of the associated high-fidelity solver. The first proposal applied a hierarchical approximation, taking the associated Gauss–Lobatto–Chebyshev points as sampling points, leading to the so-called SSL-PGD (Sparse Subspace Learning Proper Generalized Decomposition) [5].
The main drawback of SSL-PGD-based procedures is the growth of the number of sampling points with the design-space dimensionality. To mitigate that issue, we recently proposed an alternative procedure using sparser sampling, and proved that reasonable results can be obtained with a number of sampling points (runs of the high-fidelity model) scaling with the number of parameters involved in the considered model. This technique was called sPGD (sparse Proper Generalized Decomposition) [6] and is nowadays being tested in a panoply of applications at ESI Group.
However, rather than in parametric fields, the latter associated with the solution of parametric models (expressed from partial differential equations), design often requires the evaluation of one or several quantities of interest – QoI. In what follows, and for the sake of simplicity, we will assume a single QoI, explicitly or implicitly dependent on the parametric field itself. In that case a procedure is needed for extracting the parametric form of the QoI. The main specificities of such a procedure are: (i) dealing with high-dimensional spaces, with as many dimensions as parameters involved in the model; and (ii) handling many parameters of different natures (some parameters could be discrete, or even qualitative).
In the present work we deal with crash simulations, and more particularly with the parametric deformation of a B-Pillar provided by Gestamp. High-fidelity simulations were performed by varying different structural parameters, from which a QoI (the maximum intrusion, which quantifies the occupants' safety) was extracted for the different parameter choices. In order to proceed with design optimization, a parametric expression of the QoI is very valuable; however, to reduce the design cost and complexity, it is essential to reduce as much as possible the number of sampling points, that is, the required runs of the high-fidelity model.
In that sense, in this paper we propose two different methodologies: the first is based on a PGD-based sparse nonlinear regression making use of the separated representations at the heart of PGD methodologies, and the second efficiently circumvents the issue related to potential parameter heterogeneity. The latter technique was called Code2Vect because it proceeds by mapping points of a representative domain into a vector space equipped with a suitable metric, enabling the safe construction of parametric approximations.
When dealing with time-dependent QoI, the PGD-based regression was generalized in a multi-time-domain framework, allowing compact local regressions evolving (with continuity) in time, the so-called multi-local sparse nonlinear PGD-based regression.
As soon as the parametric QoI is available, sensitivity analyses become straightforward, as do uncertainty quantification and propagation when the parameters are assumed to be statistically distributed, by using standard Monte Carlo approaches or by calculating the different statistical moments of the QoI. In all cases, and again thanks to the compact parametric expression of the QoI, such analyses can be performed extremely accurately and under the stringent real-time constraint, facilitating the design process. Finally, because extra data can be generated from the parametric QoI, any available data-analytics procedure can be used for visualizing, classifying or modeling. In the present work MINESET^{TM} by ESI will be used for those purposes.
After this introduction, the paper outline is as follows: Section 2 defines the problem to be addressed; Section 3 summarizes the main numerical technologies; finally, Section 4 presents and discusses the numerical results, before some general conclusions and prospects are drawn in Section 5.
2 Problem statement
2.1 Context and GLab simulation model presentation
With the constant evolution of regulations regarding the environment and pollutant emissions, carmakers target optimal vehicle lightweighting while fulfilling cost and safety requirements. Given that physical validation tests are long and expensive, carmakers consider numerical models as a parallel option. Based on Finite Element theories, these models (FEM) enable a better understanding of physical phenomena and of the cause-and-effect relationships between design parameters, manufacturing processes and crash outputs. Their constantly improving accuracy and efficiency make them reliable enough to achieve car manufacturers' objectives.
In this context, Gestamp, an international group dedicated to the design, development and manufacturing of automotive components, has developed the GLab family (see Fig. 1). It is an R&D program focusing on the development of numerical vehicle prototypes. The objective is to represent different automotive segments (B, C/D, SUV) with several types of powertrain (ICE, PHEV, EV) to validate new concepts and technologies.
Each GLab model is an advanced numerical model dealing with all types of nonlinearities: geometry, material, buckling instabilities and multiple contacts. For example, material cards consider high deformation, plasticity and failure with rate-dependent properties. To handle these nonlinearities and ensure stability, explicit schemes with a very small time step (close to 3e–4 s) are used to solve the numerical equations. The crash duration is usually below 100 ms. Therefore, for the G3 model (with more than 6.7 million elements and a 4 mm mesh size), the calculation time is between 10 and 20 hours for a single crash, depending on boundary conditions and regulations.
To achieve mass and performance targets, single- or multi-objective optimizations are used. This is an appropriate way to find and define the best solutions to attain the desired targets. The primary objective is to find the optimal concepts, for which one of the essential points is to understand the influence of parameter variations. As vehicle structures become more and more complex (new materials, technologies, assemblies, etc.), the number of parameters that can be optimized increases. Moreover, the level of detail of FEM models, and thus the number of elements, is continuously increasing. Therefore, optimization loops become time consuming and their analysis quickly gets tedious.
Thereby, testing all or part of the parameter combinations can be time consuming in a full-car simulation. Classic optimizations using the Response Surface Method are efficient but do not allow reducing the number of calculations.
The PGD technique has been used to address this issue, reducing the number of simulations and increasing predictability. The technique has been applied to the optimization and sensitivity analysis of a Body-in-White component (the B-Pillar). The selected crash is a Euro NCAP AE-MDB side crash [7], with a 1400 kg barrier impacting a static vehicle at 60 km/h (see Fig. 2). The accuracy and prediction of the results will be compared, as well as the time spent and the ease of analysis.
Fig. 1 GLab family. 
Fig. 2 Euro NCAP AE-MDB side crash regulation applied on the G3 model. 
2.2 BIW, BPillar scope and parametrization
The Body in White (BIW) is the set of sheet metal components that forms the structure of a vehicle. It is the main passive safety element ensuring passenger protection during a crash. In case of a side crash, the lateral components are the most stressed, particularly the B-Pillar (see Fig. 3). This set of parts, located between the front and rear doors, represents a strategic component for side and rollover crash performance.
The B-Pillar is generally made of several parts (see Fig. 4):

– Outer: the main structural element of the B-Pillar. The use of full press hardening makes it possible to achieve a very high ultimate tensile grade (up to 1500 MPa). To control bending and improve energy absorption, it is possible to create a ductile area (UTS between 400 and 800 MPa).

– Inner: the part used to close the B-Pillar and to receive the surrounding parts. Its contribution to crash performance is low.

– Reinforcement: an additional part to locally reinforce the pillar. To avoid using two different tools for the outer and the reinforcement, the component can be reinforced by applying a local patch on the blank before press hardening. Thus, the outer and the patch are stamped together in a single tool.
To improve side crash performance, modifications at different levels can be made, such as changing materials, thicknesses and geometry. The key to lightweighting is to apply the right material with the right design at the right place. For this B-Pillar optimization study, five parameters have been selected:
1. Outer thickness t_{1}: once the outer material is fixed, its thickness is supposed to be the most influential parameter. Increasing this thickness improves performance but adds weight. The parameter varies between 1.1 mm and 1.7 mm, with a discrete distribution every 0.05 mm.

2. Ductile zone material grade y: by locally controlling the temperature of the tools, partial hardening of the part can be achieved, allowing tailored material properties in a hot-stamped monolithic component. A more ductile zone will absorb more energy but will allow more deformation. For this study, the ultimate tensile stress of this zone varies continuously between 350 MPa and 600 MPa.

3. Ductile zone size z: like the material grade of the ductile zone, its size can also be changed. Two different sizes of soft zone have been included in the design space: 60 mm and 90 mm width.

4.–5. Inner part and patch thicknesses t_{2} and t_{3}: decreasing part thicknesses is the main way to save mass. The inner thickness varies between 0.9 mm and 1.3 mm, while the patch thickness evolves between 1.0 mm and 1.6 mm, every 0.05 mm.
The objective of this optimization is to find the best combination of parameters that minimizes the mass of the B-Pillar while achieving the safety targets.
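As a side remark, the size of the full factorial design implied by these ranges can be checked with a few lines of Python. The discretization of the continuous grade y into six levels below is our own illustrative assumption, not a choice made in the study:

```python
import itertools

import numpy as np

# Design-space levels from Section 2.2; the 6 levels for the continuous
# ductile-zone grade y are an illustrative assumption.
t1 = np.arange(1.10, 1.70 + 1e-9, 0.05)  # outer thickness [mm], 13 levels
y = np.linspace(350.0, 600.0, 6)         # ductile-zone UTS [MPa], assumed sampling
z = [60.0, 90.0]                         # ductile-zone width [mm], 2 levels
t2 = np.arange(0.90, 1.30 + 1e-9, 0.05)  # inner thickness [mm], 9 levels
t3 = np.arange(1.00, 1.60 + 1e-9, 0.05)  # patch thickness [mm], 13 levels

grid = list(itertools.product(t1, y, z, t2, t3))
print(len(grid))            # 18252 candidate designs
print(len(grid) * 15 / 24)  # days of compute at ~15 h per high-fidelity run
```

At 10 to 20 hours per run (Sect. 2.1), exhausting even this coarse grid is clearly out of reach, which motivates the sparse-sampling strategies of Section 3.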
Fig. 3 GLab G3 BIW with highlighted B-Pillar. 
Fig. 4 B-Pillar parts and parameters with color legend. 
2.3 Quantity of Interest and expected results (crash analysis)
A considerable amount of kinetic energy is brought by the barrier, which the BIW must absorb as much as possible to protect the passengers. If the absorption is insufficient, the dummies may suffer internal and external injuries. Internal injuries are linked to excessive velocity or deceleration changes. External injuries appear when intrusions into the cabin cell are too large. A delicate compromise must be found between energy absorption (to reduce deceleration) and cabin cell resistance (to reduce intrusion).
To protect the passengers efficiently, it is important to avoid deformation in the upper area, close to the head. To do so, the B-Pillar concept with a soft zone has an upper part in an ultra-high-strength steel to limit intrusion. Its lower area displays a soft zone with a lower material grade to localize deformation and absorb a maximum of energy.
In this study, the post-processing is done on the B-Pillar with two main Quantities of Interest (QoI): intrusions and velocities. These QoI have been measured at four specific points, representing the main zones of the dummy to protect: Head, Thorax, Abdomen and Pelvis. The positions of the measurement points are displayed in Figure 5.
The first QoI is the intrusion, measured as the Y-displacement (see Fig. 5) in the car reference frame. As the vehicle moves during the crash, this local coordinate system allows analyzing the B-Pillar deformations as seen by the passengers. The maximum values of the four intrusions are assessed against specific constraints (see Fig. 6). For example, one constraint is that the maximum intrusion at the Abdomen (N3) must be lower than 205 mm.
The second QoI is the velocity, measured as the Y-velocity in the global coordinate system. The three lower measurement points are considered: thorax, abdomen and pelvis. A filter has been applied to smooth the curves and reduce the numerical noise. Contrary to the intrusion, for which only the maximum value is considered, the temporal evolution of the velocities is studied (see Fig. 7). Alongside the maximum value, the average velocity is assessed over several time zones.
A high accuracy level of the QoI is needed for a parametric model to be usable. For the intrusion, the maximum tolerated error for the PGD model is 5 mm. Concerning velocities, the maximum tolerated average error rate is 2.5%.
Fig. 5 B-Pillar measurement points. 
Fig. 6 Car deformed view with highlighted Bpillar section and location of points: Head (N1), Thorax (N2), Abdomen (N3) and Pelvis (N4). 
Fig. 7 Measured velocities at three measurement points. 
3 Methods
When considering a QoI depending on a set of M parameters μ_{1}, …, μ_{M} defining the vector μ = (μ_{1}, …, μ_{M}), i.e. $\mathcal{O}(\mu )$, its explicit form is a valuable tool in design optimization. In general, however, the QoI depends on the structural model solution.
The parametrized model solution is expressed in a separated form by making use of the SSL-PGD or sPGD solvers; it reads: $$u(x,t,{\mu}_{1},\dots ,{\mu}_{M})={\displaystyle \sum _{i=1}^{N}{X}_{i}}(x){T}_{i}(t){M}_{i}^{1}({\mu}_{1})\cdots {M}_{i}^{M}({\mu}_{M}),$$(1)
where the different functions are computed as described in [5,6].
Sometimes the quantity of interest can be directly and explicitly extracted from equation (1), inheriting its separated form. Imagine for a while that one is interested in the final value of the field at a certain point P and time Θ. In that case the parametric QoI reads: $$\begin{array}{ll}\mathcal{O}(\mu )\hfill & =u(P,\Theta ,{\mu}_{1},\dots ,{\mu}_{M})\hfill \\ \hfill & ={\displaystyle \sum _{i=1}^{N}{X}_{i}}(P){T}_{i}(\Theta ){M}_{i}^{1}({\mu}_{1})\cdots {M}_{i}^{M}({\mu}_{M})\hfill \\ \hfill & ={\displaystyle \sum _{i=1}^{N}{\alpha}_{i}}{M}_{i}^{1}({\mu}_{1})\cdots {M}_{i}^{M}({\mu}_{M}).\hfill \end{array}$$(2)
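A separated expression such as equation (2) is cheap to evaluate once the one-dimensional modes are known. The sketch below only illustrates the evaluation pattern; the weights α_i and mode functions are toy assumptions, not the paper's model:

```python
import numpy as np


def evaluate_separated_qoi(alpha, modes, mu):
    """Evaluate O(mu) = sum_i alpha_i * prod_k M_i^k(mu_k), as in eq. (2).

    alpha : (N,) weights, absorbing X_i(P) T_i(Theta)
    modes : list of N lists; modes[i][k] is the callable M_i^k
    mu    : (M,) parameter values
    """
    total = 0.0
    for a, mode_i in zip(alpha, modes):
        term = a
        for k, mode_fn in enumerate(mode_i):
            term *= mode_fn(mu[k])  # one 1D function evaluation per dimension
        total += term
    return total


# Toy two-mode, two-parameter example (illustrative functions only)
alpha = [1.0, 0.5]
modes = [[np.cos, np.sin],
         [lambda m: m**2, lambda m: 1.0 + m]]
print(evaluate_separated_qoi(alpha, modes, [0.0, 1.0]))  # cos(0)*sin(1) + 0 = sin(1)
```

The cost is N products of M scalar function evaluations, independent of any mesh, which is what makes real-time particularization possible.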
However, in most cases, when dealing with complex quantities of interest such an extraction becomes difficult to perform. The simplest alternative consists of evaluating the QoI for different choices of the parameters, i.e. $\mathcal{O}({\mu}^{j})=\mathcal{O}({\mu}_{1}^{j},{\mu}_{2}^{j},\dots ,{\mu}_{M}^{j})\equiv {\mathcal{O}}^{j},\text{\hspace{0.17em}}j=1,\dots ,S$, and then inferring from those values the parametric form of the QoI, $\mathcal{O}(\mu )$, by enforcing the best fit of the data ${\mathcal{O}}^{j},\text{\hspace{0.17em}}j=1,\dots ,S$. In what follows we assume we are operating in the latter scenario.
The main difficulties related to the construction of the parametric expression $\mathcal{O}(\mu )$ by assimilating the available data ${\mathcal{O}}^{j}$ are multiple:

– The choice of the parameters: how to be sure that the hidden parameters affecting the output were considered?

– In some cases the list of parameters selected a priori is too large, and many of them do not have a significant influence on the output.

– In many cases some parameters exhibit correlations, which means that the list of explicative uncorrelated parameters is smaller than the one initially considered. Here, nonlinear dimensionality reduction techniques (manifold learning in particular) can help to extract the dimensionality of the slow manifolds.

– In some cases the parameters do not act individually, but in a combined manner. Imagine for a while the Euler theory of buckling. For a beam of length L with a rectangular cross section b × h (b being the width and h its height), the buckling critical force depends on bh^{3}∕L^{2}. Thus, a polynomial regression expressing the buckling critical load from the geometrical parameters b, h and L needs the appropriate richness (third order in h) and a sufficient number of terms to represent L^{−2} from a polynomial expansion.
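The buckling example can be made concrete with a small numerical experiment: a plain linear regression on (b, h, L) fits the Euler critical load poorly, whereas a regression on the single combined feature bh³/L² is exact. The Young's modulus and parameter ranges below are illustrative assumptions:

```python
import numpy as np


def r2(features, target):
    """R^2 of an ordinary least-squares fit target ~ [1, features]."""
    X = np.column_stack([np.ones(len(target))] + list(features))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    tss = target - target.mean()
    return 1.0 - (resid @ resid) / (tss @ tss)


rng = np.random.default_rng(0)
E = 210e3                                  # assumed Young's modulus [MPa]
b = rng.uniform(10, 50, 200)               # width  [mm]
h = rng.uniform(5, 20, 200)                # height [mm]
L = rng.uniform(500, 2000, 200)            # length [mm]
P = np.pi**2 * E * b * h**3 / (12 * L**2)  # Euler critical load

print(r2([b, h, L], P))          # raw parameters: mediocre linear fit
print(r2([b * h**3 / L**2], P))  # combined feature: essentially exact (R^2 ~ 1)
```

The load is linear in the combined feature, so its R² reaches machine precision, while no linear combination of the raw parameters can reproduce the h³ and L⁻² dependencies.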
Most of these points are the object of intense research, and today no definitive answer exists for most of them in the more general settings.
In any case, as soon as the different parameters composing the entries of the vector μ are assumed able to express the output, many techniques can be applied: regressions based on decision trees or their random forest counterpart; neural networks at the heart of deep learning, whose main drawback is the amount of data needed in the training stage; dynamic mode decomposition [8]; sparse identification [9]; or the usual linear and nonlinear regressions, to cite a few.
A first choice consists of using classical regression strategies. In that case one could consider a polynomial dependence of the QoI, $\mathcal{O}$, on the parameters μ_{k}, k = 1, …, M. The simplest choice, linear regression, reads $$\mathcal{O}(\mu )={\beta}_{0}+{\beta}_{1}{\mu}_{1}+\cdots +{\beta}_{M}{\mu}_{M},$$(3)
where the M + 1 coefficients β_{k} can be computed from the available data. If 1 + M data ${\mathcal{O}}^{j}$, j = 1, …, 1 + M, are available, we can write the matrix system $$\left(\begin{array}{c}{\mathcal{O}}^{1}\\ {\mathcal{O}}^{2}\\ \vdots \\ {\mathcal{O}}^{M+1}\end{array}\right)=\left(\begin{array}{ccccc}1& {\mu}_{1}^{1}& {\mu}_{2}^{1}& \cdots & {\mu}_{M}^{1}\\ 1& {\mu}_{1}^{2}& {\mu}_{2}^{2}& \cdots & {\mu}_{M}^{2}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1& {\mu}_{1}^{M+1}& {\mu}_{2}^{M+1}& \cdots & {\mu}_{M}^{M+1}\end{array}\right)\left(\begin{array}{c}{\beta}_{0}\\ {\beta}_{1}\\ \vdots \\ {\beta}_{M}\end{array}\right),$$(4)
which allows calculating the coefficients β_{k} and, from them, evaluating the linear regression (3).
When the number of available data is smaller or larger than M + 1, the previous system becomes under- or over-determined, respectively. Different techniques exist for solving it: the pseudo-inverse, L2 or L1 optimization – the latter intimately related to compressed sensing – or the usual Matlab^{TM} or Scilab^{TM} backslash.
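A minimal sketch of this fit, in which `numpy.linalg.lstsq` plays the role of the pseudo-inverse and covers both the under- and over-determined cases, is shown below on synthetic data (the coefficient values are illustrative):

```python
import numpy as np


def fit_linear_regression(mu_samples, qoi_values):
    """Least-squares fit of O(mu) = beta_0 + sum_k beta_k mu_k (eqs. (3)-(4)).

    mu_samples : (S, M) array of parameter samples mu^j
    qoi_values : (S,)  array of observed QoI values O^j
    lstsq returns the minimum-norm / least-squares solution in both the
    under- and over-determined cases.
    """
    S, _ = mu_samples.shape
    A = np.column_stack([np.ones(S), mu_samples])  # design matrix of eq. (4)
    beta, *_ = np.linalg.lstsq(A, qoi_values, rcond=None)
    return beta


# Synthetic check: recover known coefficients from S > M + 1 samples
rng = np.random.default_rng(1)
beta_true = np.array([2.0, -1.0, 0.5, 3.0])
mu = rng.uniform(0.0, 1.0, (10, 3))
O = beta_true[0] + mu @ beta_true[1:]
print(fit_linear_regression(mu, O))  # ~ [2.0, -1.0, 0.5, 3.0]
```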
Linear regression is simple to use because it requires a reasonable amount of data, of the same order as the number of parameters involved in the approximation; however, in some cases the approximation becomes too poor to represent rich nonlinear behaviors.
Higher-degree approximations (nonlinear regressions) are possible without major difficulties when the number of involved parameters remains small enough. For instance, the quadratic approximation reads $$\mathcal{O}(\mu )={\beta}_{0}+{\displaystyle \sum _{i=1}^{M}{\beta}_{i}}{\mu}_{i}+{\displaystyle \sum _{i=1}^{M}{\displaystyle \sum _{j\ge i}^{M}{\beta}_{ij}}}{\mu}_{i}{\mu}_{j},$$(5)
where it can be noticed that the number of coefficients (and consequently the required data) roughly scales with M^{D}, where D is the approximation degree.
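This scaling can be verified by enumerating the monomials of degree at most D in M parameters; the exact count is the binomial coefficient C(M + D, D), which indeed grows roughly like M^D for fixed D:

```python
from itertools import combinations_with_replacement
from math import comb


def n_coefficients(M, D):
    """Number of polynomial coefficients (monomials of degree <= D in M parameters)."""
    count = 0
    for d in range(D + 1):
        # one monomial per multiset of d parameter indices
        count += sum(1 for _ in combinations_with_replacement(range(M), d))
    return count


# Matches the closed form C(M + D, D); grows roughly like M^D for fixed D
for M in (5, 10, 20):
    print(M, n_coefficients(M, 2), comb(M + 2, 2))
```

For the five parameters of the B-Pillar study, a quadratic model already needs 21 coefficients, comparable to the 22 high-fidelity runs of the DoE in Section 4.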
Thus, if M remains reasonably small, one could consider increasing the approximation degree D; however, the multi-parametric case seems to favor linear regressions. The crucial question is: can the multi-parametric case be made compatible with high-degree approximations while keeping the sampling as reduced as possible?
A response to that question is provided in the next section (Sect. 3.1), which proposes a multi-local sparse nonlinear PGD-based regression. Even if such a proposal enables rich approximations in multi-parametric settings, one difficulty persists: the use of the L2 norm determines its own choice among the solutions of the underdetermined approximation problem. A work in progress consists in extending that procedure by using the L1 norm, enabling sparser approximations.
When the parameters are of very different natures, the definition of metrics in the parametric space becomes a tricky issue. Separated representations circumvent that issue when using an alternated-directions constructor that avoids parameter mixing. In Section 3.2 we propose an alternative procedure able to circumvent that issue in a very general setting.
Finally, as soon as the parametric expression of the QoI is available, it allows generating as many parametric particularizations as desired, making it possible to efficiently perform data analytics, sensitivity analyses and uncertainty propagation, as addressed in Section 3.3.
3.1 Sparse PGD: a nonlinear regression operating in the low-data limit
The sparse PGD regression consists in defining a sparse approximation in high-dimensional settings [6], revisited for the sake of completeness in what follows. For the ease of exposition and without loss of generality, let us begin by assuming that the QoI lives in ${\mathbb{R}}^{2}$, $\mathcal{O}({\mu}_{1},{\mu}_{2})$, $\mu =({\mu}_{1},{\mu}_{2})\in \Omega \subset {\mathbb{R}}^{2}$, and that it is to be recovered from sparse data ${\mathcal{O}}^{j}$. For that purpose we consider the Galerkin projection for calculating the approximation $\tilde{\mathcal{O}}(\mu )$ of $\mathcal{O}(\mu )$: $${\int}_{\Omega}w(\mu )\left(\tilde{\mathcal{O}}(\mu )-\mathcal{O}\right)d\mu =0,$$(6)
where $w(\mu )\in {\mathcal{C}}^{0}(\Omega )$ is an arbitrary test function and $$\mathcal{O}={\sum}_{j=1}^{S}{\mathcal{O}}^{j}\delta (\mu -{\mu}^{j}).$$(7)
Following the Proper Generalized Decomposition (PGD) rationale, the next step is to express the approximated function $\tilde{\mathcal{O}}$ in the separated form $$\tilde{\mathcal{O}}(\mu )\approx {\sum}_{i=1}^{N}{M}_{i}^{1}({\mu}_{1}){M}_{i}^{2}({\mu}_{2}),$$(8)
constructed by using the standard rankone update [4].
It is worth noting that the product of the test function w(μ) with the objective function $\mathcal{O}(\mu )$ is only evaluated at a few locations (the ones corresponding to the available sampled data). Since information is only known at these S sampling points μ^{j}, j = 1, …, S, it seems reasonable to express the test function not in a finite element context, but as a set of Dirac delta functions collocated at the sampling points, $$w(\mu )={\tilde{\mathcal{O}}}^{*}(\mu ){\sum}_{j=1}^{S}\delta (\mu -{\mu}^{j}).$$(9)
In the expressions above, nothing has been specified about the basis in which each of the one-dimensional modes is expressed. An appealing choice, ensuring accuracy and avoiding spurious oscillations, consists of using interpolants based on Kriging techniques.
The just-described procedure defines a powerful nonlinear regression. It is important to note that when calculating the functions ${M}_{i}^{j}({\mu}_{j})$ (each defining a one-dimensional problem in the coordinate μ_{j}), all S data points are available. Thus, S points enable quite rich approximations in each parametric dimension. The only drawback, as mentioned at the beginning of the present section, is the use of the L2 norm in the Galerkin projection (6), leading to one particular solution of the underdetermined problem among the infinity of possible solutions. The use of an L1 minimization should lead to sparser approximations. This last route constitutes a work in progress.
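To fix ideas, a simplified greedy rank-one regression in the spirit of the sPGD can be sketched as follows. This toy version uses polynomial one-dimensional modes and alternating least-squares sweeps; it is our own illustrative reconstruction, not the ESI implementation (which, as stated above, relies on Kriging-based interpolants):

```python
import numpy as np


def spgd_fit(mu, y, n_modes=3, degree=2, n_sweeps=30):
    """Greedy rank-one sPGD-like regression sketch (illustrative only).

    Each mode is a product of 1D polynomials, one per parameter, fitted by
    alternating least squares on the residual left by the previous modes.
    mu : (S, M) sampled parameters, y : (S,) sampled QoI values.
    """
    S, M = mu.shape
    vander = [np.vander(mu[:, k], degree + 1, increasing=True) for k in range(M)]
    residual = y.astype(float).copy()
    modes = []  # each mode: list of M coefficient arrays (increasing powers)
    for _ in range(n_modes):
        coef = [np.zeros(degree + 1) for _ in range(M)]
        for c in coef:
            c[0] = 1.0  # start each 1D factor from the constant function
        for _ in range(n_sweeps):  # alternated-directions sweeps
            for k in range(M):
                others = np.ones(S)
                for j in range(M):
                    if j != k:
                        others *= vander[j] @ coef[j]
                A = vander[k] * others[:, None]  # 1D problem in direction k
                coef[k], *_ = np.linalg.lstsq(A, residual, rcond=None)
        mode_vals = np.ones(S)
        for k in range(M):
            mode_vals *= vander[k] @ coef[k]
        residual -= mode_vals  # greedy rank-one update
        modes.append(coef)
    return modes


def spgd_eval(modes, mu_point):
    out = 0.0
    for coef in modes:
        term = 1.0
        for k, c in enumerate(coef):
            term *= np.polyval(c[::-1], mu_point[k])  # polyval wants high-to-low
        out += term
    return out


# Smoke test on an exactly separable two-parameter function
rng = np.random.default_rng(2)
mu = rng.uniform(-1.0, 1.0, (40, 2))
y = (1.0 + mu[:, 0]**2) * (2.0 - mu[:, 1])
modes = spgd_fit(mu, y, n_modes=2)
print(spgd_eval(modes, [0.5, 0.5]))  # close to (1 + 0.25) * (2 - 0.5) = 1.875
```

Note how each one-dimensional solve uses all S samples, which is precisely the low-data advantage claimed above.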
3.2 Code2Vect
Defining distances between qualitative data makes no sense, and usual learning approaches suffer from this issue. For instance, one could consider that yellow and red are quite close because both represent colors; however, from the words alone such a proximity is difficult to quantify.
In what follows we propose a technique, sketched in Figure 8, for mapping points of a representation space into a target space equipped with a Euclidean metric allowing the quantification of distances, which is crucial for constructing approximations.
We assume that points in the origin space (the representation space) consist of S arrays composed of M entries, denoted by μ^{j}. Their images in the vector space are denoted by ${x}^{j}\in {\mathbb{R}}^{D}$. That vector space is equipped with the standard scalar product and the associated Euclidean distance. The mapping is described by the D × M matrix W, $$x=W\mu ,$$(10)
where both the components of W and the images ${x}^{j}\in {\mathbb{R}}^{D}$ of μ^{j}, j = 1, …, S, must be calculated.
Each point x^{j} keeps the label (the value of the output of interest) associated with its origin point μ^{j}, denoted by ${\mathcal{O}}^{j}$.
We would like to place the points x^{j} such that the Euclidean distance between each pair of points scales with their output difference, i.e. $$(W{\mu}^{i}-W{\mu}^{j})\cdot (W{\mu}^{i}-W{\mu}^{j})=\Vert {x}^{i}-{x}^{j}{\Vert}^{2}=\left|{\mathcal{O}}^{i}-{\mathcal{O}}^{j}\right|,$$(11)
where the coordinates of one of the points can be arbitrarily chosen.
Thus, there are $\frac{{S}^{2}}{2}-S$ relations to determine the M × D unknowns (the components of W).
Linear mappings are limited and do not allow proceeding in nonlinear settings. Thus, a better choice consists of a nonlinear mapping W(μ), suitably approximated [10].
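A minimal sketch of the linear mapping of equations (10)–(11) is given below, with W obtained by gradient descent on the mismatch of the pairwise relations. The data, learning rate and target dimension are illustrative assumptions, and qualitative parameters are supposed to have been one-hot encoded beforehand:

```python
import numpy as np


def fit_code2vect(mu, O, D=2, lr=1e-2, n_iter=300, seed=0):
    """Sketch of the linear Code2Vect mapping x = W mu (eqs. (10)-(11)).

    W is adjusted by gradient descent so that squared pairwise distances
    ||W mu^i - W mu^j||^2 approach the output differences |O^i - O^j|.
    """
    rng = np.random.default_rng(seed)
    S, M = mu.shape
    W = rng.normal(scale=0.1, size=(D, M))
    n_pairs = S * (S - 1) / 2
    losses = []
    for _ in range(n_iter):
        grad = np.zeros_like(W)
        loss = 0.0
        for i in range(S):
            for j in range(i + 1, S):
                d = W @ (mu[i] - mu[j])          # x^i - x^j
                err = d @ d - abs(O[i] - O[j])   # mismatch in relation (11)
                loss += err**2
                grad += 4.0 * err * np.outer(d, mu[i] - mu[j])
        losses.append(loss / n_pairs)
        W -= lr * grad / n_pairs
    return W, losses


# Synthetic data: outputs driven by a linear combination of the parameters
rng = np.random.default_rng(1)
mu = rng.uniform(0.0, 1.0, (10, 3))
O = mu @ np.array([1.0, 2.0, 0.0])
W, losses = fit_code2vect(mu, O)
print(losses[0], losses[-1])  # the pairwise-distance mismatch decreases
```

Once W is known, a new point μ is simply mapped to x = Wμ, where standard distance-based interpolation of the labels becomes legitimate.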
Fig. 8 Input space (left) and target vector space (right). 
3.3 Sensitivities and uncertainty propagation
With the QoI expressed parametrically, $$\mathcal{O}(\mu )\approx {\displaystyle \sum _{i=1}^{N}{M}_{i}^{1}}({\mu}_{1})\cdots {M}_{i}^{M}({\mu}_{M}),$$(12)
sensitivity of the output to a given parameter, e.g. to μ_{1} reads $$\frac{\partial \mathcal{O}(\mu )}{\partial {\mu}_{1}}\approx {\displaystyle \sum _{i=1}^{N}\frac{\partial {M}_{i}^{1}({\mu}_{1})}{\partial {\mu}_{1}}}{M}_{i}^{2}({\mu}_{2})\cdots {M}_{i}^{M}({\mu}_{M}).$$(13)
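Because differentiation acts on a single factor of each mode, sensitivities such as equation (13) are cheap to evaluate. The sketch below stores each one-dimensional mode as polynomial coefficients, a toy representation chosen purely for illustration:

```python
import numpy as np


def separated_eval(modes, mu):
    """O(mu) = sum_i prod_k polyval(modes[i][k], mu_k), as in eq. (12)."""
    return sum(np.prod([np.polyval(c, m) for c, m in zip(mode, mu)])
               for mode in modes)


def separated_sensitivity(modes, mu, k):
    """dO/dmu_k (eq. (13)): differentiate only the k-th factor of each mode."""
    out = 0.0
    for mode in modes:
        term = np.polyval(np.polyder(mode[k]), mu[k])
        for j, c in enumerate(mode):
            if j != k:
                term *= np.polyval(c, mu[j])  # the other factors are untouched
        out += term
    return out


# Toy model: O(mu) = mu_1^2 (mu_2 + 1) + 2 mu_1 * 3,
# so dO/dmu_1 = 2 mu_1 (mu_2 + 1) + 6; at (1, 2) this gives 2*3 + 6 = 12.
modes = [[np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0])],
         [np.array([2.0, 0.0]), np.array([3.0])]]
print(separated_sensitivity(modes, [1.0, 2.0], k=0))  # 12.0
```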
Now, if the parameters are totally uncorrelated, their probability distributions become independent and the joint probability density function can be expressed as $$\Xi ({\mu}_{1},\dots ,{\mu}_{M})={\xi}_{1}({\mu}_{1})\cdots {\xi}_{M}({\mu}_{M}).$$(14)
When correlations cannot be totally avoided, we can express the joint probability density Ξ(μ_{1}, …, μ_{M}) in a separated form (by invoking the SSL, the sPGD or even the HOSVD) [4]: $$\Xi ({\mu}_{1},\dots ,{\mu}_{M})\approx {\displaystyle \sum _{i=1}^{R}{F}_{i}^{1}}({\mu}_{1})\cdots {F}_{i}^{M}({\mu}_{M}).$$(15)
Now, with both the output and joint probability density expressed in a separated form, the calculation of the different statistical moments becomes straightforward. Thus, the first moment, the average field results $$\overline{\mathcal{O}}={\displaystyle {\int}_{{\Omega}_{1}\times \cdots \times {\Omega}_{M}}\mathcal{O}}({\mu}_{1},\dots ,{\mu}_{M})\text{\hspace{0.17em}}\Xi ({\mu}_{1},\dots ,{\mu}_{M})\text{\hspace{0.17em}}d{\mu}_{1}\cdots d{\mu}_{M},$$(16)
where Ω_{k} defines the domain of the parameter μ_{k}. The separated representation is the key point for the efficient evaluation of this multidimensional integral, which becomes a sum of products of one-dimensional integrals.
The calculation of higher-order statistical moments (variance, ...) requires the precalculation of the output powers ${\mathcal{O}}^{s}$, s > 1, within the SSL or sPGD frameworks, so as to define a separated representation to be introduced in the calculation of the s-th statistical moment: $${\int}_{{\Omega}_{1}\times \cdots \times {\Omega}_{M}}{\mathcal{O}}^{s}({\mu}_{1},\dots ,{\mu}_{M})\text{\hspace{0.17em}}\Xi ({\mu}_{1},\dots ,{\mu}_{M})\text{\hspace{0.17em}}d{\mu}_{1}\cdots d{\mu}_{M}.$$(17)
An alternative procedure for propagating uncertainty consists in using Monte Carlo techniques. The parametric QoI can be particularized in almost real time for any choice of the parameters, drawn according to their probability distributions, in order to evaluate the QoI probability distribution. These simple and cheap particularizations can also be used for performing a variety of data analytics.
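A minimal Monte Carlo sketch of this idea, propagating assumed parameter distributions through a cheap parametric QoI (both the QoI and the distributions below are illustrative, not the B-Pillar model):

```python
import numpy as np


def monte_carlo_qoi(qoi, samplers, n=100_000, seed=0):
    """Propagate parameter uncertainty through a parametric QoI surrogate.

    qoi      : callable mu -> O(mu), the (cheap) parametric expression
    samplers : list of callables (rng, n) -> n samples of one parameter
    Returns the empirical mean and standard deviation of the QoI.
    """
    rng = np.random.default_rng(seed)
    mu = np.column_stack([s(rng, n) for s in samplers])
    vals = np.array([qoi(m) for m in mu])  # each evaluation is near-instant
    return vals.mean(), vals.std()


# Illustrative QoI and parameter distributions
qoi = lambda m: m[0] + m[1]**2
samplers = [lambda rng, n: rng.normal(0.0, 1.0, n),
            lambda rng, n: rng.uniform(0.0, 1.0, n)]
mean, std = monte_carlo_qoi(qoi, samplers)
print(mean)  # ~ E[mu_1] + E[mu_2^2] = 0 + 1/3
```

Since each surrogate evaluation is near-instant, drawing 10^5 samples costs a negligible fraction of a single high-fidelity crash run.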
4 Numerical results
The numerical results are structured as follows. First, the performance in creating a surrogate model of the maximum intrusion displacement is shown. Afterwards, the sPGD is employed to capture not only the maximum intrusion displacement but also the temporal evolution of the intrusion. In the last application of the sPGD, a surrogate model of the temporal evolution of the intrusion velocity is created as well. Finally, Code2Vect is used for two purposes: first, to find a representation space where all the intrusion data is displayed; second, to study its prediction capability for calculating the maximum intrusion value.
4.1 Maximum intrusion from sPGD
Our parameter space consists of five independent parameters, namely z, t_{2}, t_{1}, t_{3} and y. The Design of Experiments (DoE) consists of 22 high-fidelity simulations in which different values of the independent parameters have been used, as shown in Table 1.
As previously mentioned, the B-Pillar structure inside the car plays an important role in guaranteeing the safety of all passengers. Therefore, four spatial points placed at different heights on this B-Pillar structure will be our objects of study. These four points, illustrated in Figure 5, will be referred to in the sequel as "Head", "Thorax", "Abdomen" and "Pelvis", and sometimes by their first letters: H, T, A and P, respectively. Indeed, the intrusion of these four points when a crash occurs determines the degree of safety of a given configuration.
Figure 9 shows the temporal evolution of the intrusion at the four positions of the B-Pillar structure for each of the DoE points appearing in Table 1. As can be seen, all of them follow the same main trend: from 0 to 0.05 seconds there is an increase of the intrusion due to the crash; afterwards, once the main impact has finished, the intrusion relaxes due to spring-back effects.
Since the maximum intrusion of the B-Pillar provides a reliable safety indicator, the first surrogate model is based on the maximum intrusion at the H, T, A and P points as a function of the five parameters previously introduced. The parameters are gathered in the vector μ $$\mu =({\mu}_{1},\dots ,{\mu}_{5})=[\text{z},\text{\hspace{0.17em}}{\text{t}}_{2},{\text{t}}_{1},{\text{t}}_{3},\text{y}].$$(18)
The maximum intrusion I_{M} at point Q (Q refers to locations H, T, A and P) is defined from $${I}_{M}^{Q}(\mu )={\mathrm{max}}_{t}I(t;\mu ,Q).$$(19)
To test the performance of the sPGD algorithm, only the maximum intrusions related to points [1, 4, 13, 15, 17, 21] of the DoE are used to construct the regression. The other 16 points serve as an error indicator showing how accurate the regression is. It is important to remark that the small amount of data available to build the sPGD regression forces the algorithm to work with low-order interpolation bases. Indeed, when generating this surrogate model, at most linear interpolations are used in each of the directions. More precisely, the greedy nature of the sPGD algorithm is exploited to adapt the basis for each of the modes, i.e. the first mode is constant in each direction, the second mode is linear in the first direction and constant in the others, the third mode is linear in the second direction and constant in the others, and so on.
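This greedy, mode-by-mode enrichment can be sketched in a few lines. The toy below is a crude two-parameter stand-in for the sPGD (not the authors' implementation): each mode is a rank-one product u_m(μ₁)·v_m(μ₂) fitted on the current residual by alternating least squares, with per-direction polynomial degrees growing with the mode index (constant, then linear along μ₁, then linear along μ₂). The six samples and the affine QoI are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# 6 training samples of an invented 2-parameter QoI: deliberately "low data".
mu = rng.uniform(0.0, 1.0, (6, 2))
y = 2.0 + 3.0 * mu[:, 0] - 1.5 * mu[:, 1]

def vander(x, deg):
    return np.vander(x, deg + 1, increasing=True)   # 1, x, x^2, ...

def sparse_pgd(mu, y, degrees, sweeps=50):
    """Greedy sPGD-like sketch: one rank-one mode per (d1, d2) pair,
    fitted on the residual by alternating least squares."""
    resid, modes = y.copy(), []
    for d1, d2 in degrees:
        P1, P2 = vander(mu[:, 0], d1), vander(mu[:, 1], d2)
        a, b = np.ones(d1 + 1), np.ones(d2 + 1)     # coeffs of u_m, v_m
        for _ in range(sweeps):                      # alternating sweeps
            b = np.linalg.lstsq(P2 * (P1 @ a)[:, None], resid, rcond=None)[0]
            a = np.linalg.lstsq(P1 * (P2 @ b)[:, None], resid, rcond=None)[0]
        resid = resid - (P1 @ a) * (P2 @ b)          # deflate the residual
        modes.append((a, b))
    return modes, resid

# degrees mimic the adaptation in the text: constant, then linear per direction
modes, resid = sparse_pgd(mu, y, degrees=[(0, 0), (1, 0), (0, 1)])
print(f"final training residual norm: {np.linalg.norm(resid):.3f}")
```

Each greedy mode can only decrease the training residual, which is the mechanism behind the convergence curves discussed below; the low per-direction degrees are what make six samples viable at all.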
Figure 10 shows the real versus the estimated maximum intrusion for the Head (H), Thorax (T), Abdomen (A) and Pelvis (P) points, i.e. ${I}_{M}^{H}$, ${I}_{M}^{T}$, ${I}_{M}^{A}$ and ${I}_{M}^{P}$, respectively. The yellow points are the ones used to build the sPGD regression, whereas the blue points are only used to measure its predictive accuracy. If all points lay on the red line, the surrogate model would be perfect; the dispersion of the points with respect to the red line therefore gives a visual indicator of how good the surrogate model is. The relative error based on the blue points is reported in Table 2. As can be seen, the highest error, reaching 4.8%, occurs at the Pelvis (P) point, the one closest to the soft zone.
Design of Experiments (DoE).
Fig. 9 Temporal evolution of the intrusion I at four different locations of the BPillar: I^{H}, I^{T}, I^{A}, & I^{P}. 
Relative error in maximum intrusion for sPGD.
Fig. 10 Estimated versus real maximum intrusion for Head (H) ${I}_{M}^{H}$, Thorax (T) ${I}_{M}^{T}$, Abdomen (A) ${I}_{M}^{A}$ and Pelvis (P) ${I}_{M}^{P}$ Bpillar points. Yellow points, used in the sPGD. Blue points, used as error indicator of the regression model. 
4.2 Modeling the intrusion time evolution using the sPGD
The previous section analyzed surrogate models based on the maximum intrusion. However, it is also important to understand how the intrusion evolves in time. Indeed, this temporal evolution contains the information of the maximum intrusion as well as, for instance, the cumulated energy stored in the B-Pillar structure. Nevertheless, the generation of a surrogate model involving the time coordinate is more complex.
To be consistent with the former subsection, only the time evolution of the intrusion associated with the parameters of points [1, 4, 13, 15, 17, 21] in Table 1 is taken into consideration to build the sPGD model. It is important to note that the time coordinate exhibits a much richer behavior than the other coordinates. However, it is possible to increase the interpolation order along the time coordinate while keeping low orders in the other coordinates. Thus, Chebyshev polynomials of degree 40 are used in the time approximation, while keeping either constant or linear interpolation in the parametric μ space. The intrusion now reads I^{Q}(t; μ).
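The idea of using a high-degree Chebyshev basis only along time can be illustrated with numpy's polynomial module. The intrusion history below is an invented stand-in for the solver output (a rise followed by relaxation); the fit must be performed on [-1, 1], hence the rescaling of the time axis:

```python
import numpy as np

# Invented intrusion history: sharp rise then spring-back relaxation,
# sampled on [0, 0.1] s, qualitatively as in Figure 9.
t = np.linspace(0.0, 0.1, 400)
intrusion = 100.0 * (t / 0.1) ** 2 * np.exp(-30.0 * t)

# Chebyshev polynomials live on [-1, 1], so rescale the time axis first.
# A high degree along time captures the rich transient, while the
# parametric directions would keep constant/linear bases (not shown here).
tau = 2.0 * t / 0.1 - 1.0
coeffs = np.polynomial.chebyshev.chebfit(tau, intrusion, deg=40)
approx = np.polynomial.chebyshev.chebval(tau, coeffs)

rel_err = np.linalg.norm(approx - intrusion) / np.linalg.norm(intrusion)
print(f"relative L2 error of the degree-40 fit: {rel_err:.2e}")
```

For smooth histories such a fit is essentially exact; the practical limit is rather the parametric variability of the curves, which the low-order μ bases must absorb.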
Figure 11 depicts both the error considering only the points included in the sPGD regression and the error considering the regression points plus the ones outside the training dataset. As can be seen, the error decreases as a function of the number of modes introduced in the sPGD approximation. The most important decrease occurs at the fourth sPGD mode, which involves a linear interpolation along μ_{3} (corresponding to t_{1}); hence, it can be assumed that this parameter plays an important role in explaining the variation of the QoI within the parameter space. Moreover, the error decreases almost monotonically while exhibiting different plateaux.
Figure 12 shows the predicted temporal evolution (red) against the real temporal evolution (blue) for a Head point inside the training set (i.e. DoE 1, left) and outside the training set (i.e. DoE 22, right). As can be seen, there is almost no difference between the predicted and real values for the point in the training set, i.e. the blue and red curves almost overlap, whereas for the point outside the training set there is a very slight difference between the red and blue curves. Nevertheless, points outside the training set present acceptable errors in this particular case.
Analogously, Figures 13–15 contain the same information as Figure 12 but for the Thorax, Abdomen and Pelvis points, respectively. The predicted behavior is in good agreement with the real behavior.
Fig. 11 Convergence with respect to the number of sPGD modes for Head (top left), Thorax (top right), Abdomen (bottom left) and Pelvis (bottom right). 
Fig. 12 Predicted temporal evolution (red) versus real temporal evolution (blue) for a Head point inside the training set (left) and outside the training set (right). 
Fig. 13 Predicted temporal evolution (red) versus real temporal evolution (blue) for a Thorax point inside the training set (left) and outside the training set (right). 
Fig. 14 Predicted temporal evolution (red) versus real temporal evolution (blue) for an Abdomen point inside the training set (left) and outside the training set (right). 
Fig. 15 Predicted temporal evolution (red) versus real temporal evolution (blue) for a Pelvis point inside the training set (left) and outside the training set (right). 
4.3 Modeling the intrusion velocity time evolution by using the sPGD
Another quantity of interest that is important from a safety point of view is the velocity magnitude experienced at certain points of the passenger's body throughout the impact. Indeed, the velocity magnitudes at three points placed at the Thorax, Abdomen and Pelvis, namely V_{T} (μ, t), V_{A} (μ, t) and V_{P} (μ, t), respectively, are our subject of study.
Figure 16 depicts the temporal evolution of the velocity magnitude throughout the crash for different sets of parameters μ. As can be seen, these curves change notably within the parameter space. Initially the car is static, and consequently the velocity magnitude is zero; it then starts to grow due to the impact. Hence, the main task is to predict such a temporal evolution for different values of the parameters. Again, a richer approximation along the time coordinate is expected.
Figure 17 shows the convergence of the estimated solution for different numbers of sPGD modes. The error is measured using the relative L2 norm. The points used in the sPGD regression are again [1, 4, 13, 15, 17, 21] of Table 1; the other points are only used to measure the regression accuracy. As can be seen, the magnitudes of all the error indicators are acceptable (around 1–2%), the highest error being the one associated with the pelvis velocity. Indeed, the pelvis velocity presents the highest error because the variation of its curve across the parameter space is also the largest.
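The relative L2 error used for these convergence curves is simply the norm of the discrepancy between two histories divided by the norm of the reference. A minimal version, on invented curves, reads:

```python
import numpy as np

# Relative L2 error between a predicted and a reference time history,
# as used for the convergence curves discussed above.
def l2_relative_error(pred, ref):
    return np.linalg.norm(pred - ref) / np.linalg.norm(ref)

t = np.linspace(0.0, 0.1, 200)
ref = np.sin(np.pi * t / 0.1)            # invented reference velocity curve
pred = ref + 0.01 * np.cos(20.0 * t)     # invented prediction, small error
print(f"relative L2 error: {l2_relative_error(pred, ref):.3%}")
```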
Figures 18–20 show the predicted temporal evolution (red) versus the real temporal evolution (blue) for velocity magnitudes inside the training set (left) and outside the training set (right). It can be noticed that the predicted curve is closer to the real curve when the point is inside the training dataset. Nevertheless, the predictions outside the training points also capture the main features of the real curves.
Fig. 16 Temporal evolution of the thorax, abdomen and pelvis velocity magnitude throughout the crash simulation for different values of the parametric space. Since the velocities result from time differentiation of the intrusions, they are less smooth. 
Fig. 17 Convergence with respect to the number of sPGD modes for thorax velocity (top left), abdomen velocity (top right) and pelvis velocity (bottom). 
Fig. 18 Predicted temporal evolution (red) versus real temporal evolution (blue) for a thorax velocity magnitude inside the training set (left) and outside the training set (right). 
Fig. 19 Predicted temporal evolution (red) versus real temporal evolution (blue) for an abdomen velocity magnitude inside the training set (left) and outside the training set (right). 
Fig. 20 Predicted temporal evolution (red) versus real temporal evolution (blue) for a pelvis velocity magnitude inside the training set (left) and outside the training set (right). 
4.4 Classifying multidimensional data by employing the Code2Vect technique
In this section the application of Code2Vect to classify multidimensional data is discussed. As stated before, the GESTAMP dataset comprises five parameters, the maximum intrusion being the QoI. The obtained low-dimensional vectors ${x}^{j}\in {\mathbb{R}}^{2}$ (see Section 3.2) are depicted for the Head and Thorax cases in Figure 21 and for the Abdomen and Pelvis ones in Figure 22. Each sampling point of the dataset becomes a point in the target vector space, as described in Section 3.2. For ease of visualization, the vector space was enforced to have low dimensionality, 2D in the examples discussed below.
Once all the points were mapped into the 2D vector space, a color was assigned to each of them, corresponding either to the QoI or to the value of each of the parameters, in order to see which of them cluster in a similar way to the QoI. Such correlations serve to identify which parameters directly explain the QoI.
Figure 21 proves that the third parameter μ_{3} ≡ t_{1} has a direct influence on the output: when t_{1} is maximum, the intrusion is minimum, and vice versa. This conclusion reinforces the observation made when using the sPGD. The other parameters are not individually correlated, and consequently, if they influence the output, they must do so in a combined manner. It has been verified that by removing them (all parameters except the third parameter μ_{3} ≡ t_{1}) the predictions remain very accurate, mainly at the H, T and A locations, proving that the responses are almost entirely dictated by the third parameter. This fact is physically interpretable: as soon as the soft zone localizes the deformation, the mechanical response is almost determined by the parameter μ_{3} ≡ t_{1} related to the outer thickness t_{1}, being quite insensitive to the soft-zone location and grade as well as to the inner and patch thicknesses. Obviously, when approaching the soft zone (Pelvis) this tendency becomes, as expected, less pronounced, as Figure 22 (right) proves.
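The Code2Vect mapping itself is described in Section 3.2 and in the cited reference; as a crude illustration of the underlying idea (not the authors' implementation), one can learn a linear map W into 2D by gradient descent so that squared target-space distances match QoI differences, ||W(x_i − x_j)||² ≈ |y_i − y_j|. All data below are invented, with the QoI deliberately driven by the third parameter to mimic the role of t_1:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented dataset: 22 samples of 5 parameters, QoI driven by parameter 3.
X = rng.uniform(0.0, 1.0, (22, 5))
y = 4.0 * X[:, 2] + 0.2 * rng.normal(size=22)

def loss(W):
    """Sum over pairs of (||W(x_i - x_j)||^2 - |y_i - y_j|)^2."""
    s = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = W @ (X[i] - X[j])
            s += (d @ d - abs(y[i] - y[j])) ** 2
    return s

W = 0.1 * rng.normal(size=(2, 5))
loss_start = loss(W)
lr = 0.01
for _ in range(500):                    # plain gradient descent over pairs
    grad = np.zeros_like(W)
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = X[i] - X[j]
            misfit = (W @ d) @ (W @ d) - abs(y[i] - y[j])
            grad += 4.0 * misfit * np.outer(W @ d, d)
    W -= lr * grad / len(X) ** 2
V = X @ W.T                             # mapped 2D points, ready to colour
print(f"loss: {loss_start:.1f} -> {loss(W):.1f}, mapped shape {V.shape}")
```

Colouring the rows of `V` by the QoI and then by each parameter reproduces, on this toy, the kind of clustering comparison described above.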
Fig. 21 Mapped data concerning Head (top) and Thorax (bottom) intrusion. 
Fig. 22 Mapped data concerning the Abdomen (top) and Pelvis (bottom) intrusion. 
4.5 Maximum intrusion using the Code2Vect technique
In this section, Code2Vect is used as a regression procedure. The same points considered in Section 4.1 were used here for training purposes, that is, for computing the matrix W. Then, the remaining points were mapped and the QoI interpolated from the neighboring data. The error between the known QoI at those points and the predicted one was calculated, as reported in Table 3 and depicted in Figure 23. The best prediction is achieved for the intrusion at the Pelvis location, while the worst corresponds to the Abdomen.
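The interpolation step can be sketched independently of how the mapping was obtained. Assuming the training points already live in the 2D vector space, a plausible stand-in for "interpolated from the neighboring data" is inverse-distance weighting over the nearest mapped neighbours (the exact interpolation rule used by the authors is not specified here):

```python
import numpy as np

# Inverse-distance-weighted interpolation of the QoI from the k nearest
# mapped neighbours, once training points live in the 2D vector space.
def knn_predict(V_train, y_train, v_new, k=3, eps=1e-12):
    d = np.linalg.norm(V_train - v_new, axis=1)   # distances in target space
    idx = np.argsort(d)[:k]                       # k nearest neighbours
    w = 1.0 / (d[idx] + eps)                      # inverse-distance weights
    return np.sum(w * y_train[idx]) / np.sum(w)

rng = np.random.default_rng(3)
V_train = rng.uniform(0.0, 1.0, (6, 2))           # invented mapped points
y_train = V_train[:, 0] * 10.0                    # invented QoI
v_new = np.array([0.5, 0.5])
print(knn_predict(V_train, y_train, v_new))
```

Being a weighted average, the prediction always stays within the range of the neighbouring QoI values, which is consistent with the bounded errors of Table 3.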
Relative error in the prediction of the maximal intrusion when using the Code2Vectbased regression.
Fig. 23 Estimated versus real maximum intrusion for Head (H), Thorax (T), Abdomen (A) and Pelvis (P) B-Pillar points. Yellow points are used for training purposes in Code2Vect, whereas blue points serve to quantify the prediction accuracy. 
4.6 Sensitivity analysis and uncertainty propagation
Part thickness and material properties can slightly change from one steel blank to another due to process variability. To perform a statistical analysis on the intrusions at the points of interest, we assumed that the parameters [t_{2}, t_{1}, t_{3}, y] follow Gaussian distributions (Fig. 24), respectively $\mathcal{N}(1.1,0.06)$, $\mathcal{N}(1.4,0.09)$, $\mathcal{N}(1.3,0.08)$ and $\mathcal{N}(1.09,0.07)$, while the discrete parameter is fixed at z = 107. Using the PGD regression constructed from points [1, 4, 13, 15, 17, 21] (previously discussed), 10 000 extra configurations were generated by varying the parameter values.
Maximum intrusions were calculated at the four points of interest (Head, Thorax, Abdomen and Pelvis), and the resulting distributions are represented in Figure 25. Two probability density functions are shown for each quantity: (i) the Gaussian density and (ii) a kernel density estimation (KDE). On the one hand, the Abdomen intrusion distribution is relatively wide because local bending may appear if the B-Pillar is not strong enough to withstand the resulting forces. On the other hand, the Pelvis intrusion displays a low standard deviation, which can be explained physically by the fact that the lower grade applied in this area is used to localize the deformation; the bending will hence always occur there. Moreover, the resulting displacement is also driven by the deformation of the side-sill component, which remains unchanged. The energy absorption by the lower area has a greater effect on the upper intrusions than on the lower intrusion because it drives the overall distribution of forces in the B-Pillar.
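The two density estimates of Figure 25 can be reproduced on invented data: a Gaussian fit uses only the sample mean and standard deviation, while a Gaussian-kernel KDE (here with Silverman's rule-of-thumb bandwidth, one plausible default) follows the sample shape without assuming normality:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented intrusion samples standing in for the 10 000 particularizations
# of the surrogate: slightly non-Gaussian on purpose.
samples = rng.normal(37.0, 1.1, 10_000) + 0.3 * rng.random(10_000)

x = np.linspace(samples.min(), samples.max(), 200)

# (i) Gaussian density with the sample mean and standard deviation
mu, sigma = samples.mean(), samples.std()
gauss = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# (ii) Gaussian-kernel KDE with Silverman's rule-of-thumb bandwidth
h = 1.06 * sigma * len(samples) ** (-1 / 5)
kde = (np.exp(-0.5 * ((x[:, None] - samples[None, :]) / h) ** 2).mean(axis=1)
       / (h * np.sqrt(2.0 * np.pi)))

print(f"integrals: Gaussian {np.trapz(gauss, x):.3f}, KDE {np.trapz(kde, x):.3f}")
```

When the two curves diverge, as for the Abdomen intrusion, the KDE is the more faithful description of the propagated uncertainty.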
To explore the design space further, 20 000 new configurations were generated and the related maximum intrusions at the points of interest (H, T, A and P) computed using the PGD nonlinear regression model. Using the analytics features of Mineset^{TM}, sensitivity analysis and optimization studies were then performed.
Figure 26 displays the influence of the parameters on the intrusion. As expected, it confirms that μ_{3} ≡ t_{1} is the critical design parameter for this QoI. Similar conclusions can be drawn for the other intrusion points: the outer thickness has an overwhelming importance compared to the other parameters.
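Figure 26 is produced with the Mineset™ analytics; a back-of-the-envelope substitute for such an importance ranking is to correlate each parameter with the surrogate-predicted QoI over a large random design. The surrogate below is a hypothetical t_1-dominated toy model, not the fitted one:

```python
import numpy as np

rng = np.random.default_rng(5)

# Crude importance check: correlate each parameter with the predicted
# maximum intrusion over a large random design, mimicking an importance
# ranking. The QoI model here is an invented t1-dominated stand-in.
names = ["z", "t2", "t1", "t3", "y"]          # order of equation (18)
P = rng.uniform(0.0, 1.0, (20_000, 5))
qoi = 55.0 - 10.0 * P[:, 2] + 0.5 * P[:, 3]

importance = [abs(np.corrcoef(P[:, k], qoi)[0, 1]) for k in range(5)]
ranking = sorted(zip(names, importance), key=lambda p: -p[1])
print(ranking[0][0])                          # the dominant parameter
```

Correlation only captures individual, roughly monotone influences; parameters acting in a combined manner, as discussed for Code2Vect, would need interaction-aware measures.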
Finally, two optimization studies were carried out, one considering a single constraint related to the Abdomen (Fig. 27) and the other considering constraints on every measurement point (Fig. 28). The optimized parameter values are consistent for this type of crash example. Since several values are proposed for the z and y parameters, their influence seems to be low for the selected constraints.
Fig. 24 Distributions of (topleft) t_{2}; (topright) t_{1}; (bottomleft) t_{3} and (bottomright) y. 
Fig. 25 Maximum intrusion distributions: Head (top left), Thorax (top right), Pelvis (bottom left) and Abdomen (bottom right). 
Fig. 26 Columns of importance for the maximum intrusion at the four points. 
Fig. 27 Optimization using parallel coordinates considering Abdomen intrusions. Blue lines represent the parameter combinations associated with the allowed outputs. 
Fig. 28 Optimization using parallel coordinates considering all constraints. Blue lines represent the parameter combinations associated with the allowed outputs. 
5 Conclusions
In this paper we proposed two different techniques able to extract quantities of interest (QoI) and express them parametrically in the low-data limit. These procedures were successfully applied to the parametric analysis of a B-Pillar within a full-vehicle crash test. Indeed, they allowed a significant reduction of the number of simulations needed to perform prediction, optimization and sensitivity analyses.
It was proved that, in this particular case, the computational complexity scales linearly with the number of parameters, i.e., six data points suffice for expressing a parametric solution involving five parameters, a quite impressive performance due to the features of sparse PGD formulations. Moreover, the data generated by using these parametric solutions were used for performing sensitivity and uncertainty-propagation analyses.
The prediction of the maximum intrusions and of the time evolution of the intrusion and intrusion velocity was successfully accomplished by invoking the sPGD. By using Code2Vect, the influence of the different parameters on the quantity of interest (QoI) was determined, and it was concluded that the QoI remains quite insensitive to the variation of four of the five parameters: it obviously depends on those parameters, but it does not vary significantly when they vary within their considered intervals. These tendencies were confirmed by all the analysis techniques considered: sPGD, Code2Vect and the MINESET^{TM} data-analytics software, the last being used in the present work for performing uncertainty propagation.
To go further, the performance of these techniques could be evaluated on more complex cases, involving highly sensitive parameters and bifurcations.
Cite this article as: V. Limousin, X. Delgerie, E. Leroy, R. Ibáñez, C. Argerich, F. Daim, J.L. Duval, F. Chinesta, Advanced model order reduction and artificial intelligence techniques empowering advanced structural mechanics simulations: application to crash test analyses, Mechanics & Industry 20, 804 (2019)