1 |
Stress analysis of mixed type finite element of circular plate
Chang, Jih-Yueh, 04 September 2001 (has links)
ABSTRACT
In the present study, it is emphasized that mixed-type finite element formulation, which is different from the conventional displacement-type formulation, has both displacements and stresses as its primary variables. Therefore, stress, as well as displacement boundary conditions, can be imposed easily and exactly. Except around the outer edge where support is placed, stresses obtained by both displacement and mixed formulation are close to each other when the circular plate is subject to transverse uniform loading. However, large discrepancies exist around the locations of constraints, where the stresses are always significant and critical. Since mixed formulation of the present study can completely satisfy the stress and displacement boundary conditions, it can theoretically provide more accurate stress analysis and should be considered as a more appropriate analysis tool.
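To make the distinction concrete, the two formulations can be contrasted schematically; the notation below is ours, not the thesis's, and the plate-bending specifics are omitted. In the displacement formulation only the displacement field u is varied, whereas a mixed formulation of Hellinger-Reissner type varies the stress field σ independently, which is what allows stress boundary conditions to be imposed directly:

```latex
% Displacement formulation: u is the only primary variable
\Pi(u) = \tfrac{1}{2}\int_\Omega \varepsilon(u) : C : \varepsilon(u)\, d\Omega
       - \int_\Omega b \cdot u \, d\Omega
       - \int_{\Gamma_t} \bar{t} \cdot u \, d\Gamma

% Mixed (Hellinger-Reissner) formulation: u and \sigma are varied independently,
% so conditions on \sigma can be imposed exactly on the boundary
\Pi_{HR}(u,\sigma) = \int_\Omega \sigma : \varepsilon(u)\, d\Omega
       - \tfrac{1}{2}\int_\Omega \sigma : C^{-1} : \sigma \, d\Omega
       - \int_\Omega b \cdot u \, d\Omega
       - \int_{\Gamma_t} \bar{t} \cdot u \, d\Gamma
```

Stationarity of Π_HR with respect to σ recovers the constitutive law ε(u) = C⁻¹ : σ, while stationarity with respect to u recovers equilibrium, so both fields are approximated on an equal footing.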
|
2 |
Robust Feature Screening Procedures for Mixed Type of Data
Sun, Jinhui, 16 December 2016 (has links)
High dimensional data have been frequently collected in many fields of scientific research and technological development. The traditional idea of best subset selection, which uses penalized L_0 regularization, is computationally too expensive for many modern statistical applications. A large number of variable selection approaches via various forms of penalized least squares or likelihood have been developed to select significant variables and estimate their effects simultaneously in high dimensional statistical inference. However, in modern applications in areas such as genomics and proteomics, ultra-high dimensional data are often collected, where the dimension of the data may grow exponentially with the sample size. In such problems, the regularization methods can become computationally unstable or even infeasible. To deal with the ultra-high dimensionality, Fan and Lv (2008) proposed a variable screening procedure via correlation learning to reduce dimensionality in sparse ultra-high dimensional models. Since then many authors have further developed the procedure and applied it to various statistical models. However, they all focused on a single type of predictor, that is, the predictors are either all continuous or all discrete. In practice, we often collect mixed type of data, which contain both continuous and discrete predictors. For example, in genetic studies, we can collect information on both gene expression profiles and single nucleotide polymorphism (SNP) genotypes. Furthermore, outliers are often present in the observations due to experimental errors and other reasons, and the true trend underlying the data might not follow the parametric models assumed in many existing screening procedures. Hence a screening procedure that is robust against outliers and model misspecification is desired. In my dissertation, I shall propose a robust feature screening procedure for mixed type of data.
To gain insights on screening for individual types of data, I first studied feature screening procedures for a single type of data in Chapter 2, based on marginal quantities. For each type of data, new feature screening procedures are proposed and simulation studies are performed to compare their performance with existing procedures. The aim is to identify the best robust screening procedure for each type of data. In Chapter 3, I combine these best screening procedures to form the robust feature screening procedure for mixed type of data. Its performance will be assessed by simulation studies. I shall further illustrate the proposed procedure by the analysis of a real example. / Ph. D. / In modern applications in areas such as genomics and proteomics, ultra-high dimensional data are often collected, where the dimension of the data may grow exponentially with the sample size. To deal with the ultra-high dimensionality, Fan and Lv (2008) proposed a variable screening procedure via correlation learning to reduce dimensionality in sparse ultra-high dimensional models. Since then many authors have further developed the procedure and applied it to various statistical models. However, they all focused on a single type of predictor, that is, the predictors are either all continuous or all discrete. In practice, we often collect mixed type of data, which contain both continuous and discrete predictors. Furthermore, outliers are often present in the observations due to experimental errors and other reasons. Hence a screening procedure that is robust against outliers and model misspecification is desired. In my dissertation, I shall propose a robust feature screening procedure for mixed type of data. I first studied feature screening procedures for a single type of data based on marginal quantities. For each type of data, new feature screening procedures are proposed and simulation studies are performed to compare their performance with existing procedures.
The aim is to identify the best robust screening procedure for each type of data. Then I combined these best screening procedures to form the robust feature screening procedure for mixed type of data. Its performance will be assessed by simulation studies and the analysis of real examples.
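The correlation-learning screening of Fan and Lv (2008) that this work builds on reduces, in its simplest form, to ranking predictors by absolute marginal correlation with the response and keeping the top d. A minimal sketch (the function name and toy data are illustrative, not from the dissertation):

```python
import numpy as np

def sis_screen(X, y, d):
    """Rank columns of X by |marginal correlation| with y; keep the top d indices."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # column-wise Pearson correlation with the response
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
    )
    return np.argsort(-np.abs(corr))[:d]

# toy ultra-high dimensional data: only columns 0 and 3 drive the response
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(size=200)
kept = sis_screen(X, y, d=10)
```

With n = 200 observations and p = 1000 predictors, the two active columns dominate the null correlations (which are of order 1/√n) and survive the screen, after which a standard penalized regression can be run on the reduced set.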
|
3 |
Supersonic Euler and Magnetohydrodynamic Flow Past Cones
Holloway, Ian C., 18 December 2019 (has links)
No description available.
|
4 |
Unsupervised learning with mixed type data: for detecting money laundering / Klusteranalys av heterogen data
Engardt, Sara, January 2018 (has links)
The purpose of this master's thesis is to perform a cluster analysis on parts of Handelsbanken's customer database. The ambition is to explore whether this could aid in identifying customer types at risk of involvement in illegal activities such as money laundering. A literature study is conducted to help determine which of the clustering methods described in the literature are most suitable for the problem at hand. The most important constraints are that the data consist of mixed type attributes (categorical and numerical) and that outliers are strongly present in the data. An extension of the self-organising map and the k-prototypes algorithm were chosen for the clustering. It is concluded that clusters exist in the data, though in the presence of outliers. More work is needed on handling missing values in the dataset.
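For reference, the k-prototypes algorithm used here assigns each observation to the cluster whose prototype minimizes a combined dissimilarity: squared Euclidean distance on the numerical attributes plus a weight γ times the number of categorical mismatches. A minimal sketch of that dissimilarity (toy values, not the bank's data):

```python
import numpy as np

def kproto_dissim(x_num, x_cat, proto_num, proto_cat, gamma):
    """k-prototypes dissimilarity: squared Euclidean distance on the numeric
    part plus gamma times the number of categorical mismatches."""
    num_part = float(np.sum((x_num - proto_num) ** 2))
    cat_part = sum(a != b for a, b in zip(x_cat, proto_cat))
    return num_part + gamma * cat_part

d = kproto_dissim(
    np.array([1.0, 2.0]), ["red", "yes"],
    np.array([1.0, 0.0]), ["red", "no"],
    gamma=0.5,
)
# numeric part (2.0 - 0.0)^2 = 4.0, one categorical mismatch weighted by 0.5
```

The weight γ controls the trade-off between the two attribute types; choosing it is one of the practical tuning questions when clustering mixed data.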
|
5 |
Estimating Veterans' Health Benefit Grants Using the Generalized Linear Mixed Cluster-Weighted Model with Incomplete Data
Deng, Xiaoying, January 2018 (has links)
The poverty rate among veterans in the US has increased over the past decade, according to the U.S. Department of Veterans Affairs (2015). Thus, it is crucial for veterans who live below the poverty level to get sufficient benefit grants. A study on prudently managing health benefit grants for veterans may help government and policy-makers make appropriate decisions and investments. The purpose of this research is to find an underlying group structure for the veterans' benefit grants dataset and then estimate the benefit grants sought using incomplete data. The generalized linear mixed cluster-weighted model, based on mixture models, is carried out by grouping similar observations into the same cluster. Finally, the estimates of veterans' benefit grants sought will provide a reference for future public policies. / Thesis / Master of Science (MSc)
|
6 |
Chemoproteomic Profiling of a Pharmacophore-Focused Chemical Library / ファーマコフォアに焦点を当てたケミカルライブラリーのケモプロテオミクスプロファイリング
PUNZALAN, LOUVY LYNN CALVELO, 23 September 2020 (has links)
Kyoto University / 0048 / New system, doctoral program / Doctor of Medical Science / Degree No. 22733 (Kō) / Medical Doctorate No. 4651 / 新制||医||1046 (University Library) / Kyoto University Graduate School of Medicine, Major in Medical Science / (Chief examiner) Professor Masatoshi Hagiwara, Professor So Iwata, Professor Naoki Watanabe / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
|
7 |
Scalable And Efficient Outlier Detection In Large Distributed Data Sets With Mixed-type Attributes
Koufakou, Anna, 01 January 2009 (has links)
An important problem that appears often when analyzing data involves identifying irregular or abnormal data points called outliers. This problem broadly arises under two scenarios: when outliers are to be removed from the data before analysis, and when useful information or knowledge can be extracted by the outliers themselves. Outlier Detection in the context of the second scenario is a research field that has attracted significant attention in a broad range of useful applications. For example, in credit card transaction data, outliers might indicate potential fraud; in network traffic data, outliers might represent potential intrusion attempts. The basis of deciding if a data point is an outlier is often some measure or notion of dissimilarity between the data point under consideration and the rest. Traditional outlier detection methods assume numerical or ordinal data, and compute pair-wise distances between data points. However, the notion of distance or similarity for categorical data is more difficult to define. Moreover, the size of currently available data sets dictates the need for fast and scalable outlier detection methods, thus precluding distance computations. Additionally, these methods must be applicable to data which might be distributed among different locations. In this work, we propose novel strategies to efficiently deal with large distributed data containing mixed-type attributes. Specifically, we first propose a fast and scalable algorithm for categorical data (AVF), and its parallel version based on MapReduce (MR-AVF). We extend AVF and introduce a fast outlier detection algorithm for large distributed data with mixed-type attributes (ODMAD). Finally, we modify ODMAD in order to deal with very high-dimensional categorical data. 
Experiments with large real-world and synthetic data show that the proposed methods exhibit large performance gains and high scalability compared to the state-of-the-art, while achieving similar accuracy detection rates.
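The AVF score mentioned above is simple enough to sketch: each categorical record is scored by the average frequency of its attribute values, and records with the lowest scores are flagged as outliers, with no pair-wise distance computations. A toy illustration (data invented for the example):

```python
from collections import Counter

def avf_scores(data):
    """Attribute Value Frequency: score each categorical record by the mean
    frequency of its attribute values; low scores flag likely outliers."""
    m = len(data[0])
    # one frequency table per attribute, built in a single pass
    counts = [Counter(row[j] for row in data) for j in range(m)]
    return [sum(counts[j][row[j]] for j in range(m)) / m for row in data]

data = [
    ("a", "x"), ("a", "x"), ("a", "x"),
    ("b", "x"),   # rare in the first attribute only
    ("c", "y"),   # rare in both attributes
]
scores = avf_scores(data)
outlier = scores.index(min(scores))
```

Because the score only needs per-attribute frequency tables, it takes a single pass over the data (plus one to score), which is what makes the approach scalable and easy to parallelize with MapReduce, as in the MR-AVF variant described above.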
|
8 |
The Null-Field Methods and Conservative Schemes of Laplace's Equation for Dirichlet and Mixed Types Boundary Conditions
Liaw, Cai-Pin, 12 August 2011 (has links)
In this thesis, the boundary errors are defined for the NFM to explore the convergence rates, and the condition numbers are derived for simple cases to explore numerical stability. The optimal (or exponential) convergence rates are discovered numerically. This thesis is also devoted to seeking a better choice of locations for the field nodes of the FS expansions. It is found that the location of the field nodes Q does not affect the convergence rates much, but does influence stability. Let δ denote the distance of Q to ∂S. The larger δ is chosen, the worse the instability of the NFM. As a result, δ = 0 (i.e., Q ∈ ∂S) is best for stability. However, when δ > 0, the errors are slightly smaller. Therefore, a small δ is a favorable choice for both high accuracy and good stability. This new discovery enhances the proper application of the NFM.
However, even for the Dirichlet problem of Laplace's equation, when the logarithmic capacity (transfinite diameter) C_Γ = 1, the solutions may not exist, or may not be unique if they exist, causing a singularity of the discrete algebraic equations. The problem with C_Γ = 1 in the BEM is called the degenerate scale problem. The original explicit algebraic equations do not satisfy the conservative law, and may fall into the degenerate scale problem discussed in Chen et al. [15, 14, 16], Christiansen [35] and Tomlinson [42]. An analysis of the degenerate scale problem of the NFM is given in this thesis. New conservative schemes are derived, in which an equation between two unknown variables must be satisfied, so that one of them can be removed from the unknowns to yield the conservative schemes. The conservative schemes always bypass the degenerate scale problem, but they cause severe instability. To restore good stability, the overdetermined system and the truncated singular value decomposition (TSVD) are proposed. Moreover, the overdetermined system is more advantageous due to its simpler algorithms and slightly better performance in error and stability. More importantly, such numerical techniques can also be used to deal with the degenerate scale problems of the original NFM in [15, 14, 16].
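The TSVD stabilization mentioned above can be sketched for a generic overdetermined system: singular values below a relative threshold are discarded before back-substitution. A minimal illustration with an artificial rank-deficient matrix (not the NFM matrices of the thesis):

```python
import numpy as np

def tsvd_solve(A, b, tol=1e-8):
    """Least-squares solution of an overdetermined system A x = b,
    truncating singular values below tol * s_max to restore stability."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]            # drop near-zero singular values
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# a 4x3 system whose third column is the sum of the first two (rank 2)
A = np.array([[1.0,  0.0, 1.0],
              [0.0,  1.0, 1.0],
              [1.0,  1.0, 2.0],
              [1.0, -1.0, 0.0]])
b = np.array([1.0, 2.0, 3.0, -1.0])  # lies in the column space of A
x = tsvd_solve(A, b, tol=1e-6)
residual = np.linalg.norm(A @ x - b)
```

Truncation trades a small bias for a bounded condition number, which is exactly the cure needed when a conservative scheme makes the raw system severely ill-conditioned.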
For the boundary integral equation (BIE) of the first kind, trigonometric functions are used in Arnold [3], and an error analysis is made for infinitely smooth solutions, deriving exponential convergence rates. In Cheng's Ph.D. dissertation [18], for the BIE of the first kind, the source nodes are located outside the solution domain, linear combinations of fundamental solutions are used, and the error analysis is made only for circular domains. So far there seems to be no error analysis for the new NFM of Chen, which is one of the goals of this thesis. First, the solution of the NFM is equivalent to that of the Galerkin method involving the trapezoidal rule, and the related analysis can be found in finite element theory. In this thesis, error bounds are derived for the Dirichlet and Neumann problems and their mixed types. For certain regularity of the solutions, the optimal convergence rates are derived under certain circumstances. Numerical experiments are carried out to support the error bounds derived.
|
9 |
Change Detection and Analysis of Data with Heterogeneous Structures
Chu, Shuyu, 28 July 2017 (has links)
Heterogeneous data with different characteristics are ubiquitous in the modern digital world. For example, the observations collected from a process may change in mean or variance. In numerous applications, data are often of mixed types, including both discrete and continuous variables. Heterogeneity also commonly arises when the underlying models vary across different segments of the data. Besides, the underlying pattern of the data may change along different dimensions, such as time and space. The diversity of heterogeneous data structures makes statistical modeling and analysis challenging.
Detection of change-points in heterogeneous data has attracted great attention from a variety of application areas, such as quality control in manufacturing, protest event detection in social science, purchase likelihood prediction in business analytics, and organ state change in biomedical engineering. However, due to the extraordinary diversity of heterogeneous data structures and the complexity of the underlying dynamic patterns, change detection and analysis for such data is quite challenging.
This dissertation aims to develop novel statistical modeling methodologies to analyze four types of heterogeneous data and to find change-points efficiently. The proposed approaches have been applied to solve real-world problems and can potentially be applied to a broad range of areas. / Ph. D. / Heterogeneous data with different characteristics are ubiquitous in the modern digital world. Detection of change-points in heterogeneous data has attracted great attention from a variety of application areas, such as quality control in manufacturing, protest event detection in social science, purchase likelihood prediction in business analytics, and organ state change in biomedical engineering. However, due to the extraordinary diversity of heterogeneous data structures and the complexity of the underlying dynamic patterns, change detection and analysis for such data is quite challenging.
This dissertation focuses on modeling and analysis of data with heterogeneous structures. In particular, four types of heterogeneous data are analyzed and different techniques are proposed in order to find change-points efficiently. The proposed approaches have been applied to solve real-world problems and can potentially be applied to a broad range of areas.
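As a point of reference for the mean-change setting described above, a classical two-sided CUSUM detector accumulates deviations from a target mean and signals when either cumulative sum exceeds a threshold; this is a standard textbook illustration, not one of the dissertation's proposed methods:

```python
def cusum_mean_shift(xs, target, k=0.5, h=5.0):
    """Two-sided CUSUM for a shift in the mean away from `target`.
    k is the allowance (slack), h the decision threshold.
    Returns the 1-based index of the first alarm, or None."""
    s_hi = s_lo = 0.0
    for i, x in enumerate(xs, start=1):
        s_hi = max(0.0, s_hi + (x - target - k))  # detects upward shifts
        s_lo = max(0.0, s_lo + (target - x - k))  # detects downward shifts
        if s_hi > h or s_lo > h:
            return i
    return None

# in-control observations around 0, then a sustained shift to 2 at index 21
xs = [0.1, -0.2, 0.0, 0.3, -0.1] * 4 + [2.0] * 10
alarm = cusum_mean_shift(xs, target=0.0, k=0.5, h=5.0)
```

The allowance k and threshold h trade false alarms against detection delay; here the shift at observation 21 triggers an alarm a few observations later, once the upper sum crosses h.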
|
10 |
Eletrodinâmica variacional e o problema eletromagnético de dois corpos / Variational Electrodynamics and the Electromagnetic Two-Body Problem
Souza, Daniel Câmara de, 18 December 2014 (links)
We study Wheeler-Feynman electrodynamics using a variational principle for a finite action functional coupled to a boundary value problem. For piecewise C2 trajectories, the critical-point condition for this functional gives the Wheeler-Feynman equations of motion plus a continuity condition on partial momenta and partial energies, known as the Weierstrass-Erdmann corner conditions. In the simplest case, for the boundary value problem of shortest length, we show that the critical-point condition reduces to a two-point boundary value problem for a state-dependent neutral differential-delay equation of mixed type. We solve this special problem numerically using a shooting method and a fourth-order Runge-Kutta method. For the cases where the boundary segment has discontinuous velocities, we developed a technique to solve the Weierstrass-Erdmann corner conditions together with the two-point boundary value problem. The trajectories with discontinuous velocities predicted by the variational method were verified by numerical experiments. In a second development, for the harder case of boundaries of arbitrary length, we implemented a weak-gradient minimization method for the variational principle and boundary value problem quoted above. Two numerical methods, both implemented in MATLAB, were developed to find solutions of the electromagnetic two-body problem. The first combines the finite element method with Newton's method to find the solutions that annul the weak gradient of the functional for generic boundaries. The second uses the method of steepest descent to find the solutions that minimize the action. In both methods the trajectories are approximated within a finite-dimensional space generated by a Galerkin basis that supports discontinuous velocities.
Many tests and numerical experiments were performed to verify the convergence of the numerically calculated trajectories; the numerically computed values of the functional were also compared with some known analytical results on circular orbits.
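The shooting strategy described in this abstract can be illustrated on a much simpler two-point boundary value problem, u'' = -u with u(0) = 0 and u(π/2) = 1, whose exact solution u(t) = sin t has initial slope 1; the state-dependent delay structure of the actual Wheeler-Feynman problem is omitted here:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, y0, t0, t1, n):
    t, y, h = t0, np.array(y0, dtype=float), (t1 - t0) / n
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

def shoot(slope):
    """Integrate u'' = -u from u(0)=0, u'(0)=slope; return u at t = pi/2."""
    f = lambda t, y: np.array([y[1], -y[0]])
    return integrate(f, [0.0, slope], 0.0, np.pi / 2, 200)[0]

# bisect on the unknown initial slope so that the end condition u(pi/2)=1 holds
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
slope = (lo + hi) / 2   # exact answer is u'(0) = 1, since u(t) = sin(t)
```

The boundary value problem is thus reduced to a one-dimensional root-finding problem in the unknown initial slope, which is the essence of any shooting method; the delay equation of the thesis requires the same outer iteration around a more elaborate inner integrator.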
|