Program

Day 1: Monday April 3

3/4/23, 9:00 - 10:40 am

Enrollment / Schools

3/4/23, 10:40 - 11:00 am

Welcome remarks

3/4/23, 11:00 am - 12:00 pm

Invited Session 1: Binary and compositional data

Jorge Luis Bazán (UFSCar - Brazil)

Title:

A new class of binary regression models for unbalanced data with applications in medical data

Abstract:

Unbalanced data for binary regression may be more common than expected in medical trials. In this paper, we propose the use of the skew-probit link function for random-effects models under a mixed approach, which is applied to longitudinal and unbalanced medical data. Additionally, we extend this link to a new class of link functions for binary responses based on scale mixtures of skew-normal distributions. The new class has, as special cases, several link functions proposed in the literature and can be applied to unbalanced and balanced data. A Bayesian inference approach is developed for both frameworks, and measures of model comparison and case influence diagnostics are introduced. The applications considered illustrate the potential of the proposed class of models as an alternative to binary regression models with common links, which appears to be more flexible when unbalanced medical datasets are analyzed.

Victor Eduardo Lachos Olivares (USP - Brazil)

Title:

Principal component analysis for compositional data in votes of the Peruvian presidential election 2021

Abstract:

The purpose of this work is to present a compositional data methodology to analyze vote data from Peruvian elections by electoral circumscription. Principal component analysis is applied to multiparty data after a centered log-ratio transformation. The methodology is applied to data from the first round of the 2021 Peruvian presidential election. Results show that the electoral data are naturally compositional, and that the proportion of votes is more informative than the raw number of votes. Additionally, we identify a polarization of the votes by circumscription in Peru that can be explained by two components: conservative versus progressive parties and new versus old parties. Finally, we analyze the two principal components taking into consideration the scores and their variation, which need to be normalized for better interpretation. Afterwards, we propose a Beta regression model with the main indicators of human development (health, education, and income) as covariates and the scores as response variables.
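
For readers who want to reproduce the first step, here is a minimal sketch of the clr-plus-PCA pipeline (not the authors' code; numpy and scikit-learn are assumed, and synthetic Dirichlet vote shares stand in for the election data):

```python
import numpy as np
from sklearn.decomposition import PCA

def clr(shares, eps=1e-9):
    """Centered log-ratio transform: each row is a composition (shares sum to 1)."""
    logs = np.log(shares + eps)                   # eps guards against zero vote shares
    return logs - logs.mean(axis=1, keepdims=True)

# Synthetic stand-in: rows = circumscriptions, columns = party vote shares.
rng = np.random.default_rng(2021)
shares = rng.dirichlet(alpha=[2.0, 1.5, 1.0, 0.5], size=100)

scores = PCA(n_components=2).fit_transform(clr(shares))  # per-circumscription scores
```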

3/4/23, 12:00 - 1:00 pm

Keynote Speaker 1

Daniela Castro-Camilo (University of Glasgow - UK)

Title:

Beyond the mean: Statistics of extremes for environmental problems

Summary:

In this talk, I will review the basics of Extreme Value Theory, which has gained considerable attention in environmental statistics in recent years as extreme observations have increased in size and frequency. Using two case studies, I will show different ways to do statistics of extremes and to provide risk measures associated with extreme events.

3/4/23, 1:00 - 3:00 pm

Lunch time

3/4/23, 3:00 - 4:00 pm

Thematic session 1

Vicente Cancho (USP - Brazil)

Title:

Bayesian Survival Regression Models Using Hamiltonian Monte Carlo Methods

Abstract:

In this paper, we deal with a Bayesian analysis for right-censored lifetime data suitable for populations with long-term survivors. We consider a long-term survival model based on the generalized Poisson distribution, which encompasses the bound cumulative hazard model as a special case. Bayesian analysis is based on Hamiltonian Monte Carlo (HMC) methods. We also present some discussion of model comparison and Bayesian diagnostics, and a real data set from the medical area (Acquired Immunodeficiency Syndrome, AIDS) is analyzed under the proposed regression model.

Alex de la Cruz Huayanay (USP - UFSCar - Brazil)

Title:

Performance of evaluation metrics for classification in imbalanced data

Abstract:

In this paper, we study the performance of metrics for selecting an adequate model for binary classification when the data set is imbalanced. We conduct an extensive simulation study considering 12 common metrics. The results suggest that the Matthews Correlation Coefficient (MCC), G-Mean (GM), and Cohen's kappa (KAPPA) perform well, that the area under the curve (AUC) and Accuracy (ACC) do not perform well in all scenarios studied, and that the other seven metrics perform well in some scenarios. Finally, in an application in the financial area we show that these metrics work very well for choosing the best model among alternative links.
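
A small illustration of how the compared metrics can be computed (an assumed sketch using scikit-learn on a synthetic imbalanced sample; the `evaluate` helper and the toy classifier are hypothetical, not from the paper):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score, confusion_matrix,
                             matthews_corrcoef, roc_auc_score)

def evaluate(y_true, y_score, threshold=0.5):
    """Compute a few of the compared metrics; G-mean comes from the confusion matrix."""
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    gmean = np.sqrt(tp / (tp + fn) * tn / (tn + fp))   # sqrt(sensitivity * specificity)
    return {"MCC": matthews_corrcoef(y_true, y_pred),
            "GM": gmean,
            "KAPPA": cohen_kappa_score(y_true, y_pred),
            "AUC": roc_auc_score(y_true, y_score),
            "ACC": accuracy_score(y_true, y_pred)}

# Toy 95/5 imbalanced sample scored by a weak synthetic classifier.
rng = np.random.default_rng(0)
y = (rng.random(2000) < 0.05).astype(int)
score = np.clip(0.3 * y + 0.7 * rng.random(2000), 0, 1)
print(evaluate(y, score))   # ACC is high regardless; MCC/GM/KAPPA are more informative
```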

3/4/23, 4:00 - 4:20 pm

Coffee Break

3/4/23, 4:20 - 5:20 pm

Invited Session 2: Recent advances in classical and machine learning techniques for time series forecasting (I)

Paulo Canas (UFBA - Brazil)

Title:

The usefulness of singular spectrum analysis in hybrid methodologies for time series forecasting

Abstract:

Time series forecasting plays a key role in areas such as energy, environment, economy, and finance. Hybrid methodologies, combining the results of statistical and machine learning methods, have become popular for time series analysis and forecasting, as they allow researchers to compensate for the limitations of one approach with the strengths of the other and to combine them into new frameworks with improved forecasting accuracy. In this class of methods, algorithms for time series forecasting are applied sequentially, i.e., a second model is applied to the residuals not captured by the first, under the assumption that the observed data is a combination of linear and nonlinear components. In this talk, I will discuss several strategies for time series forecasting that use singular spectrum analysis in hybrid methodologies, with application to electricity load forecasting and to PM10 (inhalable particles, with diameters generally 10 micrometers and smaller) forecasting.
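
As a rough illustration of the decomposition step these hybrids build on, here is a basic SSA sketch (numpy only; the window length L and the number of grouped eigentriples are user choices made here for illustration, and the forecasting models applied to each component are omitted):

```python
import numpy as np

def ssa_decompose(x, L):
    """Basic SSA: embed into a trajectory matrix, SVD, then diagonal averaging."""
    N, K = len(x), len(x) - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # L x K Hankel matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])              # rank-one piece
        # average anti-diagonals to map the matrix back to a length-N series
        comps.append(np.array([Xi[::-1].diagonal(k).mean() for k in range(-L + 1, K)]))
    return np.array(comps)                                # comps.sum(axis=0) recovers x

t = np.linspace(0, 10, 200)
x = np.sin(np.pi * t) + 0.3 * np.random.default_rng(1).normal(size=200)
comps = ssa_decompose(x, L=40)
signal = comps[:2].sum(axis=0)      # leading eigentriples: the smooth part
residual = x - signal               # what the second (machine learning) model would handle
```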

Javier Linkolk López-Gonzales (Universidad Peruana Unión - Peru; Universidad de Valparaíso - Chile)

Title:

Air quality prediction based on singular spectrum analysis and artificial neural networks

Abstract:

Singular spectrum analysis (SSA) is a powerful nonparametric technique to decompose a time series into a set of components that can be interpreted as trend, seasonality, and noise. For their part, neural networks are a family of information-processing techniques capable of approximating highly nonlinear functions. This study proposes to improve the precision of air quality prediction. For this purpose, a hybrid adaptation is considered, based on an integration of singular spectrum analysis and the Long Short-Term Memory recurrent neural network: SSA is applied to the original time series to separate signal from noise, and these are then predicted separately and added together to obtain the final forecasts. This hybrid method provided better performance when compared to other methods.

3/4/23, 5:20 - 6:20 pm

Invited Session 3: Recent advances in classical and machine learning techniques for time series forecasting (II)

Carlos Trucios (UNICAMP - Brazil)

Title:

Forecasting risk measures in high-dimensional portfolios with additive outliers

Abstract:

Beyond their importance from the regulatory policy point of view, risk measures play an important role in risk management, portfolio allocation, capital level requirements, trading systems, and hedging strategies. However, due to the curse of dimensionality and the presence of extreme observations, their estimation and forecasting in large portfolios is a difficult task. To overcome these problems, we propose a new procedure based on filtered historical simulation, the general dynamic factor model, and robust volatility models. The new procedure is applied to US stocks, and the backtesting results indicate that the new proposal outperforms several existing alternatives.
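
For intuition, a univariate filtered historical simulation sketch (an assumed illustration with an EWMA volatility filter and a RiskMetrics-style lambda of 0.94; the talk's procedure is multivariate and uses the general dynamic factor model and robust volatility models instead):

```python
import numpy as np

rng = np.random.default_rng(11)
r = rng.standard_t(df=5, size=2000) * 0.01            # toy daily returns

# EWMA volatility filter.
lam = 0.94
s2 = np.empty_like(r)
s2[0] = r.var()
for t in range(1, len(r)):
    s2[t] = lam * s2[t - 1] + (1 - lam) * r[t - 1] ** 2
z = r / np.sqrt(s2)                                   # standardized (filtered) residuals

# One-step-ahead variance forecast, then historical simulation on the residuals.
s2_next = lam * s2[-1] + (1 - lam) * r[-1] ** 2
sim = np.sqrt(s2_next) * rng.choice(z, size=100_000, replace=True)
var_1pct = -np.quantile(sim, 0.01)                    # 1% Value-at-Risk
es_1pct = -sim[sim <= np.quantile(sim, 0.01)].mean()  # 1% expected shortfall
```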

Jorge Saavedra (Universidad de Valparaíso - Chile)

Title:

Machine learning methods applied to the detection of lake surface changes using satellite images

Abstract:

Climate change in Chile has caused a megadrought since 2010, which has produced a 30% decrease in rainfall. In the V Region of Chile, Lake Peñuelas has reduced its capacity, and since January 2021 the local water company has stopped using it as a source of supply. This study used the principal components method and the Normalized Difference Water Index (NDWI) to perform a multitemporal analysis of the lake surface between 2010 and 2021, using Landsat-8 and Landsat-9 satellite imagery. The results indicated an accelerated decrease in the surface of Lake Peñuelas since 2019, with 2021 being one of the most critical years. A recurrent neural network (RNN) model that considered precipitation, daily average temperature, and NDWI was also implemented to make more accurate and reliable predictions of lake surface changes. These analyses and models may provide valuable information for taking effective control measures and protecting the country's water resources.
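
The NDWI step is straightforward to reproduce; a hedged numpy sketch (assuming McFeeters' green/NIR definition, Landsat-8/9 bands 3 and 5, and a 30 m pixel; the threshold below is a common default, not the study's calibrated value):

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """McFeeters' NDWI = (green - NIR) / (green + NIR); bands 3 and 5 on Landsat-8/9."""
    green, nir = green.astype(float), nir.astype(float)
    return (green - nir) / (green + nir + eps)        # eps avoids division by zero

def lake_area_km2(green, nir, threshold=0.0, pixel_m=30.0):
    """Pixels above the NDWI threshold are water; area = pixel count * pixel area."""
    water = ndwi(green, nir) > threshold
    return water.sum() * pixel_m ** 2 / 1e6
```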

3/4/23, 6:20 - 7:20 pm

Board of directors

Day 2: Tuesday April 4

4/4/23, 9:00 - 10:00 am

Invited Session 4: Stochastic processes and copulas

Verónica Gonzales-López (UNICAMP - Brazil)

Title:

Building a Metric through the Efficient Determination Criterion

Abstract:

[1] proves that the Efficient Determination Criterion (EDC), see [2], is strongly consistent in estimating Partition Markov Models (PMM), introduced in [3]. PMM include Variable Length Markov Chains (VLMC) and have been used intensively due to their economical conception concerning the number of parameters that need to be established (see [3]). For quite some time, the Bayesian Information Criterion (BIC) has been used for their strongly consistent estimation, see [6], and, in practice, clustering algorithms using BIC-based metrics have been implemented for estimating PMM. Among the BIC-based metrics we find [3], [7], and [8]. In this paper, we show that the EDC criterion is also associated with a notion that can be used in clustering algorithms, and we formally prove that this notion is a metric. The advantage of using the EDC criterion and its associated metric lies in the fact that it is possible to redefine the penalty term of the criterion (which in the BIC is ln(n)), improving the estimation process for moderate sample sizes.

Jesús Enrique García (UNICAMP - Brazil)

Title:

Independence test for continuous random variables based on the Young diagram

Abstract:

Consider a bivariate sample of a continuous random vector. The scatter plot of the sample defines a permutation (see [1] and [2]). From the permutation, the corresponding Young tableau and the corresponding diagram can be computed (see [3]). The profile of the Young diagram depends on the copula defining the dependence between the random variables. In this work, we present an independence test based on a distance between the Young diagram of the sample and the typical Young diagram for the case of independence. This procedure expands the proposal introduced in [1] and [2], which used the length of the longest increasing subsequence of the permutation to build an independence test.
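
The earlier statistic that this work extends, the length of the longest increasing subsequence of the sample permutation, is easy to compute; a small sketch (numpy assumed; under independence with continuous marginals the normalized length approaches 2, the Vershik-Kerov / Logan-Shepp limit):

```python
import numpy as np
from bisect import bisect_left

def sample_permutation(x, y):
    """Permutation induced by the scatter plot: ranks of y taken in increasing-x order."""
    y_by_x = y[np.argsort(x)]
    return np.argsort(np.argsort(y_by_x))

def lis_length(perm):
    """Patience-sorting computation of the longest increasing subsequence length."""
    tails = []
    for v in perm:
        i = bisect_left(tails, v)
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    return len(tails)

rng = np.random.default_rng(0)
x, y = rng.normal(size=10_000), rng.normal(size=10_000)
print(lis_length(sample_permutation(x, y)) / np.sqrt(10_000))  # close to 2 if independent
```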

4/4/23, 10:00 - 10:20 am

Coffee Break

4/4/23, 10:20 - 11:30 am

Invited Session 5: Methods and Applications of Stochastic Simulation in Computational Statistics

Luis A. Moncayo Martínez (ITAM - Mexico)

Title:

A Model to Solve the Newsvendor Problem with Demand Parameter Uncertainty

Abstract:

In the newsvendor problem, a decision-maker must decide the order quantity (Q) for each ordering period before the real demand occurs. The classical model assumes that the decision-maker has access to the "real" distribution of the demand; thus, there is no variability in the demand parameters (Surti, 2020). This problem is relevant not only for its applications in manufacturing, investment, or emergency resources, but also for its role in inventory theory, given that it is the base model used to develop continuous- and periodic-review inventory policies. In this work, we propose a formulation of the newsvendor problem under a Bayesian framework that allows us to incorporate uncertainty in the parameters of the demand model induced by the estimation process.
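
A minimal sketch of the idea (assumptions: exponential demand with a conjugate Gamma prior, illustrative costs, and hypothetical variable names; the paper's formulation may differ):

```python
import numpy as np

rng = np.random.default_rng(7)
demand = rng.exponential(scale=50.0, size=12)      # small historical demand sample
cu, co = 4.0, 1.0                                  # underage / overage costs (assumed)
fractile = cu / (cu + co)                          # classical critical ratio

# Conjugate update for Exponential(rate) demand: Gamma(a0, b0) prior on the rate.
a0, b0 = 1.0, 1.0
a_n, b_n = a0 + len(demand), b0 + demand.sum()

# Posterior predictive sample: draw a rate, then a demand, many times.
rates = rng.gamma(a_n, 1.0 / b_n, size=100_000)
pred = rng.exponential(1.0 / rates)

Q_bayes = np.quantile(pred, fractile)              # order quantity under parameter uncertainty
Q_plugin = -demand.mean() * np.log(1 - fractile)   # plug-in exponential quantile, for contrast
```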

Javier Hernandez (ITAM - Mexico)

Title:

Empirical Comparison of Several Methods for Parameter Estimation of a Johnson Unbounded Distribution

Abstract:

Distribution fitting is one of the classical tools for modeling a random component of a stochastic simulation input and, due to its great flexibility and ease of modeling correlated data (by taking advantage of being transformations of a normal distribution), the Johnson system has been widely applied in diverse areas such as forestry, floods, and biology, among others. To model a random variable (rv) X, Johnson proposed three normalizing transformations having the general form Z = γ + δ f((X − ξ)/λ), where f denotes one of three proposed functions, Z is a standard normal rv, γ and δ are shape parameters, λ is a scale parameter, and ξ is a location parameter. For the case f(y) = arcsinh(y) we obtain the Johnson unbounded distribution, which is suitable to model an unbounded (two-tailed) rv X (the other two functions provide a bounded and a left-bounded distribution, respectively). As reported by several authors, the partial derivatives of the log-likelihood function with respect to ξ and λ are not simple and, for some samples, this function does not have a local maximum with respect to the parameters ξ and λ. This non-regularity of the likelihood function causes occasional non-convergence of algorithms for maximum likelihood (ML) estimation of the parameters of the distribution. This is why there have been several alternative proposals to estimate the parameters of the distribution, including the four-quantile matching rule, a method based on moments, and a method based on ML and least squares. However, all the above-mentioned methods need some conditions on the sample in order to be applied. In this article, we report a C++ implementation and empirical comparison of the methods of ML, Slifker-Shapiro, Tuenter, and George-Ramchandran, plus a new implementation based on the minimization of the Cramér-von Mises statistic. Our preliminary results show that the new implementation performs very well, with no requirements on the sample to produce reasonable estimates.
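
A Python sketch of the Cramér-von Mises idea (the paper's implementation is in C++; scipy's `johnsonsu` parameterization a = γ, b = δ is assumed, and the starting values are crude guesses):

```python
import numpy as np
from scipy import optimize, stats

def cvm_stat(params, x):
    """Cramér-von Mises distance between a Johnson SU cdf and the empirical cdf."""
    a, b, loc, scale = params
    if b <= 0 or scale <= 0:
        return np.inf                                  # keep the search in the valid region
    u = stats.johnsonsu.cdf(np.sort(x), a, b, loc=loc, scale=scale)
    n = len(x)
    return 1 / (12 * n) + np.sum((u - (2 * np.arange(1, n + 1) - 1) / (2 * n)) ** 2)

x = stats.johnsonsu.rvs(-1.0, 1.5, loc=2.0, scale=3.0, size=500, random_state=1)
start = (0.0, 1.0, np.median(x), x.std())              # crude starting values
res = optimize.minimize(cvm_stat, start, args=(x,), method="Nelder-Mead")
print(res.x)                                           # estimates of (gamma, delta, xi, lambda)
```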

David Muñoz (ITAM - Mexico)

Title:

Estimation of Expectations in Two-Level Nested Simulation Experiments

Abstract:

In contrast to the estimation of performance measures, input parameters of a simulation experiment are usually estimated from real-data observations (x) and, while most applications covered in the relevant literature assume that no uncertainty exists in the value of these parameters, the uncertainty can be significant when little data is available. In these cases, Bayesian statistics can be used to incorporate this uncertainty into the output analysis of simulation experiments via the use of a posterior distribution. A Bayesian approach using simulation as a forecasting tool has been reported in diverse areas, for example, healthcare and software development. A methodology currently proposed for the analysis of simulation experiments under parameter uncertainty, and for the estimation of expected values, is a two-level nested simulation method. In the outer level, we simulate n observations for the parameters from a posterior distribution, while in the inner level we simulate m observations for the response variable with the parameter fixed at the value generated in the outer level. In this paper, we focus on the output analysis of two-level simulation experiments for the case where the observations of the inner level are independent, showing how the variance of a simulated observation can be decomposed into parametric and stochastic variance components. Afterwards, we derive a Central Limit Theorem (CLT) for the estimator of the point forecast and for the estimators of the variance components. Our CLTs allow us to compute an asymptotic confidence interval (ACI) for each estimator. Our theoretical results are validated through experiments with a forecasting model for sporadic demand. Following an introduction, we present the proposed methodology for the construction of an ACI for the point forecast and the variance components in a two-level simulation experiment. Afterwards, we present our illustrative example and the results of experiments that illustrate the application and validity of the proposed methodologies. Finally, we present conclusions and directions for future research.
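
A toy two-level experiment showing the variance decomposition and a CLT-based interval (an assumed illustration with a Gamma posterior and Poisson responses; the paper's forecasting model for sporadic demand is different):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 50                                   # outer (parameter) / inner (stochastic) sizes

theta = rng.gamma(8.0, 0.5, size=n)               # outer level: posterior draws of a rate
Y = rng.poisson(theta[:, None], size=(n, m))      # inner level: m responses per draw

inner_means = Y.mean(axis=1)
inner_vars = Y.var(axis=1, ddof=1)

point = inner_means.mean()                        # point forecast (grand mean)
sigma2_stoch = inner_vars.mean()                  # stochastic variance component
sigma2_param = inner_means.var(ddof=1) - sigma2_stoch / m   # parametric component

se = np.sqrt(inner_means.var(ddof=1) / n)         # CLT standard error of the point forecast
aci = (point - 1.96 * se, point + 1.96 * se)      # 95% asymptotic confidence interval
```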

4/4/23, 11:30 am - 12:30 pm

Keynote Speaker 2

Natalia da Silva (Universidad de la República - Uruguay)

Title:

Black-box models interpretability for property valuation

Abstract:

Statistical learning methods are used in a wide variety of complex problems due to their flexibility, good predictive performance, and ability to capture complex relationships among variables. One of the main drawbacks of statistical learning methods is the lack of interpretability of their results, which is why they are called black-box models. In the past few years an important amount of research has focused on methods for interpreting black-box models. Having interpretable statistical learning methods is especially relevant in economic applications, such as real estate problems, where understanding the key drivers of property valuation is relevant for decision making. In this paper property valuation prices in Montevideo are analyzed and statistical learning methods are applied to predict property valuation. Additionally, to understand the key drivers of property valuation, some model-agnostic methods for interpreting statistical learning models are used, such as partial dependence plots (PD-plots), accumulated local effects plots (ALE-plots), and individual conditional expectation plots (ICE-plots). ICE-plots are grouped using the spatial information, which is relevant for this problem and simplifies ICE-plot interpretation. The empirical application shows that size and some location variables, such as neighborhood and distance to the beach, are the most relevant drivers of property valuation in Montevideo. The interpretability results show that in some cases the interpretation differs even for statistical learning methods that are comparable in terms of predictive performance.
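
For instance, PD and ICE curves can be produced with scikit-learn's inspection module (a hedged sketch on synthetic "size" and "distance to the beach" covariates; matplotlib and a recent scikit-learn are assumed, and ALE plots would require a separate package):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(2)
size = rng.uniform(30, 300, 500)                   # property size, synthetic
beach_km = rng.uniform(0, 10, 500)                 # distance to the beach, synthetic
price = 1000 * size - 5000 * beach_km + rng.normal(0, 5000, 500)

X = np.column_stack([size, beach_km])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, price)

# kind="both" overlays the ICE curves (one per property) on the PD curve (their average).
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1], kind="both")
```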

4/4/23, 12:30 - 2:30 pm

Lunch time

4/4/23, 2:30 - 3:30 pm

Poster Session / Regional Assembly

4/4/23, 3:30 - 3:50 pm

Coffee Break

4/4/23, 3:50 - 4:50 pm

Thematic Session 2

Sergio Camiz (Universidad La Sapienza de Roma - Italy)

Title:

Exploratory methods for complex multidimensional data

Abstract:

Dealing with two-dimensional real variables raises problems for principal component analysis (PCA), due both to the lack of correlation between them as two-dimensional variables and to the interpretation difficulties raised by components formed as linear combinations of both dimensions. For this reason, treating them as one-dimensional complex variables allows one to introduce complex principal component analysis (CPCA), based on the complex rotational correlation between them. Unfortunately, its raw use produces indeterminacies in the results, since the eigenvectors are defined up to a unit complex number, i.e., a rotation. For this reason, Denimal and Camiz (2022) proposed a second optimization that fixes it univocally, therefore allowing graphical representations that may be interpreted just as in real PCA. As a matter of fact, two correlations exist between complex variables, the second being the reflectional one, which CPCA may not take into account. In order to deal with both, widely linear components (WLC) were developed as particular transformations in the CPCA framework. In a submitted paper, Denimal and Camiz developed exploratory tools for WLC, in order to provide analogous interpretation tools and to help in the selection of the more suitable correlations between WLC and variables, whether rotational or reflectional. In the proposed communication, small examples of application will be shown, based on wind data sets.
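
The rotational indeterminacy being resolved is easy to see in code; a minimal complex-PCA sketch (numpy assumed, toy wind components; the second optimization of Denimal and Camiz is not implemented here):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
u = rng.normal(size=(n, 5))                     # east-west wind components (toy)
v = 0.6 * u + rng.normal(size=(n, 5))           # north-south components, correlated with u
Z = u + 1j * v                                  # each 2-D variable as one complex variable
Z = Z - Z.mean(axis=0)

C = Z.conj().T @ Z / (n - 1)                    # Hermitian covariance matrix
w, V = np.linalg.eigh(C)                        # real eigenvalues, complex eigenvectors
scores = Z @ V[:, ::-1][:, :2]                  # first two complex principal components
# Each eigenvector is defined only up to a unit complex factor: multiplying a column
# of V by exp(1j * phi) changes nothing above, which is exactly the indeterminacy
# that the second optimization resolves.
```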

Anna Sikov (Universidad Nacional de Ingeniería - Peru)

Title:

The Role of Neighborhood in Spatial Data Modeling: Results from the Study of Anemia Rates among Children in Peru

Abstract:

In this paper we attempt to answer the following question: Is it possible to obtain reliable estimates of the prevalence of anemia in children under five years of age in the districts of Peru? Specifically, the objective of the present paper is to understand to what extent the basic and spatial Fay-Herriot models can compensate for inadequate sample sizes in most of the sampled districts, and whether the way of choosing the spatial neighbors has an impact on the resulting inference. Furthermore, we raise the question of how to choose an optimal way to define the neighbors. We present an illustrative analysis using data from the Demographic and Family Health Survey of the year 2019 and the National Census carried out in 2017.
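
For reference, the basic (non-spatial) Fay-Herriot EBLUP has a compact closed form; a sketch (numpy assumed; the random-effect variance is taken as given rather than estimated, and the spatial variant adds a neighborhood structure on the random effects, which is omitted here):

```python
import numpy as np

def fay_herriot_eblup(y, X, D, sigma2_v):
    """Basic Fay-Herriot EBLUP sketch.
    y: direct survey estimates per area; X: area-level covariates;
    D: known sampling variances; sigma2_v: random-effect variance (assumed known)."""
    Vinv = np.diag(1.0 / (sigma2_v + D))
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)   # GLS regression coefficients
    gamma = sigma2_v / (sigma2_v + D)                        # shrinkage weights
    return gamma * y + (1 - gamma) * (X @ beta)              # composite small-area estimates

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(80), rng.normal(size=80)])
D = rng.uniform(0.01, 0.1, size=80)                          # known sampling variances
truth = X @ np.array([0.3, 0.1]) + rng.normal(0, 0.05, 80)
y = truth + rng.normal(0, np.sqrt(D))
est = fay_herriot_eblup(y, X, D, sigma2_v=0.05 ** 2)
```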

4/4/23, 5:00 - 6:00 pm

Keynote Speaker 3

Luciano Oliveira (UFBA - Brazil)

Title:

ChatGPT: inside the box, myths and truths

Abstract:

Large language models applied to natural language processing and text-to-image generation have been the subject of hype over the last two years, and ChatGPT is the one that has received the most attention. Even non-technical people have been surfing the wave of this trending topic, discussing the outputs of the generative pre-trained transformer (GPT) model and how it will impact the future of society. On the other hand, even specialists in AI tend to discuss the subject without a due deep dive into the technical details involved. This talk aims to present details about autoregressive transformers, the model used inside ChatGPT, and the myths and truths about that technology.

Day 3: Wednesday April 5

5/4/23, 9:00 - 10:00 am

Thematic Session 3

Doris Miranda (Universidad de Almería - Spain)

Title:

Dynamic functional Bayesian regression versus spatial spectral regression of curves

Abstract:

We adopt pure point and continuous spectral approaches for predicting COVID-19 incidence, from a Bayesian and a nonparametric framework, respectively. Firstly, we consider a particular example of the dynamical multiple linear regression model in function spaces. The functional regression parameter vector is estimated in terms of the Bayesian approximation of the functional entries of the inverse covariance matrix operator of the Hilbert-valued error term, by applying generalized least-squares estimation. Under this functional linear modeling, spatial correlations are reflected in the matrix covariance operator of the functional error term. Secondly, we adopt a continuous spectral approach, assuming spatial stationarity in the functional correlation model, representing possible interactions between the COVID-19 incidence curves at the Spanish Communities analyzed. We reformulate, for spatially distributed correlated curves, the nonparametric estimator of the spectral density operator, based on the periodogram operator, in the functional time series context. This estimator allows us to compute the functional regression parameter estimator in our spatial functional spectral context. To implement the proposed approach, a computation is developed in the real-data analysis of COVID-19 incidence. Particularly, the nonparametric estimator of the spatial spectral density kernels, at 1061 x 1061 cross-times, is computed over the 37 x 37 spatial nodes of the frequency grid. Finally, a comparative study is carried out to assess the performance of both approaches in the prediction of COVID-19 incidence.

Alex Rodrigo dos Santos Sousa (UNICAMP - Brazil)

Title:

A wavelet-based method in aggregated functional data analysis

Abstract:

In this paper we consider aggregated functional data composed of a linear combination of component curves, and the problem of estimating these component curves. We propose the application of a Bayesian wavelet shrinkage rule, based on a mixture of a point mass at zero and the logistic distribution as prior for the wavelet coefficients, to estimate the mean curves of the components. This procedure has the advantage of estimating component functions with important local characteristics, such as discontinuities, spikes, and oscillations, due to the features of wavelet basis expansions of functions. Simulation studies were done to evaluate the performance of the proposed method, and its results are compared with a spline-based method. An application to the so-called tecator dataset is also provided.
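
A generic wavelet-shrinkage sketch of the decompose-shrink-reconstruct cycle (PyWavelets assumed; a universal soft threshold is used here as a stand-in for the paper's logistic-prior Bayesian rule):

```python
import numpy as np
import pywt

def wavelet_denoise(y, wavelet="db4", level=4):
    """Decompose, shrink detail coefficients, reconstruct (universal soft threshold)."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from finest details
    thr = sigma * np.sqrt(2 * np.log(len(y)))
    shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[:len(y)]

t = np.linspace(0, 1, 512)
y = (t > 0.5).astype(float) + 0.1 * np.random.default_rng(0).normal(size=512)
smooth = wavelet_denoise(y)    # the jump at t = 0.5 survives the smoothing
```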

5/4/23, 10:00 - 10:30 am

Coffee Break / Official photo LACSC

5/4/23, 10:30 - 11:30 am

Invited Session 6 - Computational Statistics: Theory and Applications (I)

Tarik Faouzi (Universidad de Santiago de Chile - Chile)

Title:

Temporal tapering for spatially dynamically supported kernels

Abstract:

Our work provides the following contributions. We derive the analytic closed form of the tapered spectral density, related to the Generalized Wendland covariance function with dynamical compact supports, and we provide the asymptotic properties of the tapered spectrum. This opens the study of the parametric conditions ensuring equivalence of Gaussian measures when the true covariance is either the space-time Matérn class or the dynamically supported class, with the surrogate covariance being the tapered class proposed in this work. We then present the proposal of this work in concert with the theoretical results involving spectral properties. Next, we inspect conditions for equivalence of Gaussian measures. Finally, we present results for maximum likelihood estimation and prediction based on the proposed taper.
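
The basic tapering construction, multiplying a covariance by a compactly supported Wendland function, can be sketched as follows (scipy assumed; the Matérn and Wendland choices here are generic illustrations, not the Generalized Wendland class with dynamical supports studied in the talk):

```python
import numpy as np
from scipy.special import gamma, kv

def matern(r, nu=1.5, phi=1.0):
    """Matérn correlation with unit variance."""
    r = np.asarray(r, dtype=float)
    out = np.ones_like(r)
    m = r > 0
    u = np.sqrt(2 * nu) * r[m] / phi
    out[m] = (2 ** (1 - nu) / gamma(nu)) * u ** nu * kv(nu, u)
    return out

def wendland_taper(r, support=1.0):
    """A Wendland-type compactly supported taper: (1 - r)_+^4 (4 r + 1)."""
    t = np.clip(1 - r / support, 0, None)
    return t ** 4 * (4 * r / support + 1)

r = np.linspace(0, 2, 201)
tapered = matern(r) * wendland_taper(r)   # exact zeros beyond the taper support
```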

Luis Mauricio Castro (Pontificia Universidad Católica de Chile - Chile)

Title:

Modelling Point Referenced Spatial Count Data: A Poisson Process Approach

Abstract:

Random fields are useful mathematical tools for representing natural phenomena with complex dependence structures in space and/or time. In particular, the Gaussian random field is commonly used due to its attractive properties and mathematical tractability. However, this assumption is restrictive when dealing with count data. To deal with this situation, we propose a random field with a Poisson marginal distribution, built from a sequence of independent copies of a random field with an exponential marginal distribution acting as "inter-arrival times" in the counting renewal process framework. Our proposal can be viewed as a spatial generalization of the Poisson counting process. Unlike the classical hierarchical Poisson log-Gaussian model, our proposal generates a (non-)stationary random field that is mean-square continuous and has Poisson marginal distributions. For the proposed Poisson spatial random field, analytic expressions for the covariance function and the bivariate distribution are provided. In an extensive simulation study, we investigate the weighted pairwise likelihood as a method for estimating the Poisson random field parameters. Finally, the effectiveness of our methodology is illustrated by an analysis of reindeer pellet-group survey data, where a zero-inflated version of the proposed model is compared with zero-inflated Poisson log-Gaussian and Poisson Gaussian copula models.
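
The renewal construction is easy to check in the independent case, where it reproduces Poisson marginals exactly (a numpy sketch; the paper replaces the iid exponential inter-arrivals by copies of a spatially correlated exponential field, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(5)
lam, t, sites = 2.0, 3.0, 10_000

def renewal_counts(n_sites):
    """N = #{k : W_1 + ... + W_k <= t} per site, with W_i iid Exp(lam).
    With independent inter-arrivals this yields exact Poisson(lam * t) marginals."""
    counts = np.zeros(n_sites, dtype=int)
    remaining = np.full(n_sites, t, dtype=float)
    alive = np.ones(n_sites, dtype=bool)
    while alive.any():
        w = rng.exponential(1.0 / lam, size=alive.sum())
        remaining[alive] -= w
        still = remaining[alive] >= 0
        counts[alive] += still
        alive[alive] = still
    return counts

N = renewal_counts(sites)
print(N.mean(), N.var())   # both close to lam * t = 6, as a Poisson marginal requires
```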

5/4/23, 11:30 am - 12:30 pm

Invited Session 7 - Computational Statistics: Theory and Applications (II)

Alba Martínez-Ruiz (Universidad Diego Portales - Chile)

Title:

Incremental SVD for some numerical aspects of multiblock redundancy analysis and big data streams

Abstract:

Based on the incremental SVD algorithm, we propose a new procedure to solve the decomposition problem faced by multiblock redundancy analysis when analyzing streaming data, i.e., data that is generated continuously. The redundancy procedure involves the SVD of a square and symmetric matrix of size q × q, where q is the number of variables in the endogenous block of variables. If q is large, the factorization is a time- and resource-consuming task, all the more so if the matrix is continuously updated in real time. A good strategy is to analyze the data in small sets that are continually updated. To preserve the column-wise formulation of the incremental SVD algorithm, we derived the column-wise variant of the redundancy method and implemented an incremental approach for the procedure. Numerical experiments are reported to illustrate the accuracy and performance of the incremental solution for analyzing streaming multiblock data. In addition, we report results examining how the incremental SVD algorithm approximates singular vectors when varying a forgetting factor and the number of significant singular vectors kept at each iteration. The results provide evidence of the suitability of the new approach for the analysis of large streaming data.
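
A Brand-style rank-one column update, the building block of such incremental SVD schemes, can be sketched as follows (numpy assumed; the forgetting factor and rank truncation that the experiments vary are omitted):

```python
import numpy as np

def svd_append_column(U, s, Vt, c):
    """Update a thin SVD A = U diag(s) Vt when a new column c arrives."""
    p = U.T @ c                                    # component of c in the current subspace
    r = c - U @ p
    rho = np.linalg.norm(r)
    j = r / rho if rho > 1e-12 else np.zeros_like(c)
    # Small (k+1) x (k+1) core combining old singular values and the new column.
    K = np.block([[np.diag(s), p[:, None]],
                  [np.zeros((1, len(s))), np.array([[rho]])]])
    Uk, sk, Vtk = np.linalg.svd(K)
    U_new = np.column_stack([U, j]) @ Uk
    V_old = Vt.T
    V_new = np.block([[V_old, np.zeros((V_old.shape[0], 1))],
                      [np.zeros((1, len(s))), np.ones((1, 1))]]) @ Vtk.T
    return U_new, sk, V_new.T

rng = np.random.default_rng(8)
A = rng.normal(size=(40, 10))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
c = rng.normal(size=40)
U2, s2, Vt2 = svd_append_column(U, s, Vt, c)
# Check: (U2 * s2) @ Vt2 reproduces np.column_stack([A, c]).
```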

Luis Firinguetti (Universidad del Bío-Bío - Chile)

Title:

Asymptotic Confidence Intervals for Partial Least Squares: A Monte Carlo Evaluation

Abstract:

Partial Least Squares (PLS), initially developed by Wold, can be thought of as a data compression or dimension reduction method. In the context of a regression model, PLS can be used to describe the relationship between one or more dependent variables and a large set of explanatory variables. The assumption behind the method is that the large set of explanatory variables can be replaced by a smaller number of linear combinations of them. PLS has become a very useful tool in chemometrics and the calibration of instruments, where it is common to work with a large number of variables and fewer observations than variables. In the context of multiple regression this situation is commonly known as perfect multicollinearity. In this presentation we discuss asymptotic confidence intervals for Partial Least Squares in the context of the classical linear regression model. Due to the nonlinear character of Partial Least Squares, it is particularly difficult, if not impossible, to derive the exact small-sample properties of the estimator. For this reason, some authors have derived asymptotic confidence intervals. We investigate the behavior of the proposals of Denham and of Phatak, Reilly and Pendilis in small samples by means of a Monte Carlo study. An important conclusion of this study is that the asymptotic confidence intervals proposed by Denham and by Phatak, Reilly and Pendilis are comparable, even in small samples, to those produced by the Ordinary Least Squares estimator in terms of coverage probabilities.
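
As a point of comparison, interval estimates for PLS coefficients can also be obtained by brute force; a bootstrap sketch (scikit-learn assumed; this is a stand-in, not the asymptotic intervals of Denham or of Phatak, Reilly and Pendilis evaluated in the talk):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
n, p = 50, 10
F = rng.normal(size=(n, 2))                               # two latent factors
X = F @ rng.normal(size=(2, p)) + 0.05 * rng.normal(size=(n, p))  # near-collinear X
y = X @ rng.normal(size=p) + rng.normal(size=n)

# Percentile bootstrap intervals for the PLS coefficient vector.
B = 500
boot = np.empty((B, p))
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = PLSRegression(n_components=2).fit(X[idx], y[idx]).coef_.ravel()
ci = np.percentile(boot, [2.5, 97.5], axis=0)             # 95% interval per coefficient
```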

5/4/23, 3:00 - 3:30 pm

Thematic Session 4

Elias Arias (ITAM - Mexico)

Title:

An Accelerated Degradation Analysis of Cellulose Nanocrystals Bio-composites

Abstract:

Cellulose nanocrystals are produced from acid hydrolysis of different bio-fibers by removing most of the amorphous regions of cellulose. The incorporation of biopolymers such as PLA in the packaging and automotive industries has been limited mainly by poor properties of the material, such as thermal stability and poor barriers; researchers have turned their attention to possible solutions to this kind of limitation, and reinforcement material is a way to improve several properties of a biopolymer, including mechanical, physical, and even appearance. The incorporation of cellulose nanocrystals (CNCs) has attracted significant attention as reinforcement material for biodegradable plastic matrices to develop completely degradable nanocomposites. In this research, an experiment is conducted to evaluate the degradation rate of the material through mechanical testing. The degradation experiment included exposures to multiple accelerating factors (heat, UV light, and humidity) under ASTM guidelines. The manufacturing process to fabricate specimens for testing includes two sequential processes: extrusion and injection molding. The combination of the statistical techniques, analysis of variance and repeated measures, supports the goal of this research. The results allowed a better understanding of the degradation path of the cellulose nanocrystal composites. A revision of methods and analysis techniques for different composites, wood plastic and nanocrystals, was carried out for this research. Statistical analysis was performed to evaluate the effect of the cellulose nanocrystal composite in combination with polylactic acid through an analysis of variance (ANOVA). The methodology used in this study to analyze the different changes in PLA might be crucial for further studies. The statistical results concluded that changes in the structure of the composite in terms of the content of nanocrystals have a direct effect on the mechanical properties of the material after an accelerated degradation exposure. The cellulose nanocrystal composite ended up with a lower ultimate tensile strength (UTS) in the mechanical testing. The control group (PLA 0), having a higher UTS during the mechanical test, helped this study to separate the effect of the aging process from the nanocrystal content effect. The methodology and analysis proposed in this paper are intended to be used as a tool for characterization and understanding of material behavior with regard to its durability. One important output of this paper is that information related to a new cellulose nanocrystal composite was obtained and can be used for further analysis involving different biopolymers.

5/4/23, 3:30 - 3:50 pm

Coffee Break

5/4/23, 3:50 - 4:30 pm

Keynote Speaker 4

Christian Galarza (Escuela Superior Politécnica del Litoral - Ecuador)

Title:

Moments of Doubly Truncated Multivariate Distributions: Computation, Existence, and Applications

Abstract:

Can you imagine a variance-covariance matrix where some elements exist and some do not? In the context of multivariate truncated variables, this can happen. From a probabilistic perspective, doubly truncated moments have long been of interest. Truncated moments are commonly used for predictive models in areas such as the environment, survival analysis, and finance, among others. They are important not only for modeling restricted responses within some interval (e.g., indices, scores, etc.) but also in the context of interval-censored models. We focus on the family of elliptically contoured distributions, a broad class of multivariate asymmetric elliptical distributions that includes some well-known multivariate distributions such as the normal and Student's t, among others. We address moments for doubly truncated members of this family, establishing a formulation for high-order moments as well as for their first two moments. We establish sufficient and necessary conditions for their existence. Simulation studies are presented to confirm our results.
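
A univariate taste of the computation (scipy assumed; `truncnorm` parameterizes the truncation limits on the standard normal scale, and the multivariate elliptical case treated in the talk requires specialized recursions):

```python
import numpy as np
from scipy import stats

a, b = -1.0, 2.0                        # truncation limits on the standard normal scale
dist = stats.truncnorm(a, b)
mean, var = dist.stats(moments="mv")    # closed-form first two truncated moments

x = dist.rvs(size=200_000, random_state=0)
print(float(mean), x.mean(), float(var), x.var())   # Monte Carlo agreement check
```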

5/4/23, 4:30 - 5:40 pm

Prizes and Closure

5/4/23, 5:40 - 6:30 pm

Cocktail
