  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Optimal regression design under second-order least squares estimator: theory, algorithm and applications

Yeh, Chi-Kuang 23 July 2018 (has links)
In this thesis, we first review the current development of optimal regression designs under the second-order least squares estimator in the literature. The criteria include A- and D-optimality. We then introduce a new formulation of the A-optimality criterion so that the results can be extended to c-optimality, which has not been studied before. Following Kiefer's equivalence results, we derive the optimality conditions for A-, c- and D-optimal designs under the second-order least squares estimator. In addition, we study the number of support points for various regression models, including Peleg models, trigonometric models, and regular and fractional polynomial models. A generalized scale invariance property for D-optimal designs is also explored. Furthermore, we discuss a computing algorithm for finding optimal designs numerically. Several interesting applications are presented, and the related MATLAB code is provided in the thesis. / Graduate
72

The Impact of Partial Measurement Invariance on Between-group Comparisons of Latent Means for a Second-Order Factor

January 2016 (has links)
abstract: A simulation study was conducted to explore the influence of partial loading invariance and partial intercept invariance on the latent mean comparison of the second-order factor within a higher-order confirmatory factor analysis (CFA) model. Noninvariant loadings or intercepts were generated to be at one of the two levels or at both levels of a second-order CFA model. The numbers and directions of differences in noninvariant loadings or intercepts were also manipulated, along with total sample size and effect size of the second-order factor mean difference. Data were analyzed using correct and incorrect specifications of noninvariant loadings and intercepts. Results summarized across the 5,000 replications in each condition included Type I error rates and power for the chi-square difference test and the Wald test of the second-order factor mean difference, estimation bias and efficiency for this latent mean difference, and means of the standardized root mean square residual (SRMR) and the root mean square error of approximation (RMSEA). When the model was correctly specified, no obvious estimation bias was observed; when the model was misspecified by constraining noninvariant loadings or intercepts to be equal, the latent mean difference was overestimated if the direction of the difference in loadings or intercepts was consistent with the direction of the latent mean difference, and vice versa. Increasing the number of noninvariant loadings or intercepts resulted in larger estimation bias if these noninvariant loadings or intercepts were constrained to be equal. Power to detect the latent mean difference was influenced by estimation bias and by the estimated variance of the difference in the second-order factor mean, in addition to sample size and effect size. Constraining more parameters to be equal between groups—even when unequal in the population—led to a decrease in the variance of the estimated latent mean difference, which increased power somewhat. 
Finally, RMSEA was very sensitive in detecting misspecification due to improper equality constraints in all conditions in the current scenario, including the nonzero latent mean difference, but SRMR did not increase as expected when noninvariant parameters were constrained. / Dissertation/Thesis / Masters Thesis Educational Psychology 2016
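The core quantity tabulated in this simulation — the Type I error rate of a Wald test for a between-group mean difference — can be illustrated with a minimal Monte Carlo sketch. This is a heavily simplified stand-in (observed means rather than second-order latent factor means); the function name, sample sizes, and replication count are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def wald_type1_rate(n_per_group=200, n_rep=5000):
    """Empirical Type I error of a two-sided Wald test of a mean difference
    when both groups share the same population mean (null is true)."""
    rejections = 0
    crit = 1.959963984540054          # two-sided 5% normal critical value
    for _ in range(n_rep):
        g1 = rng.normal(0.0, 1.0, n_per_group)
        g2 = rng.normal(0.0, 1.0, n_per_group)
        se = np.sqrt(g1.var(ddof=1) / n_per_group + g2.var(ddof=1) / n_per_group)
        z = (g1.mean() - g2.mean()) / se   # Wald statistic: estimate / std. error
        rejections += abs(z) > crit
    return rejections / n_rep
```

Under a correctly specified model the rejection rate should sit near the nominal 5%; the study's point is that misspecified invariance constraints bias the latent-mean estimate and its variance, pushing this rate away from nominal.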
73

Um modelo de calibração de segunda ordem para determinação espectrofluorimétrica de hidrocarbonetos policíclicos aromáticos em bebidas destiladas / A second-order calibration model for the spectrofluorimetric determination of polycyclic aromatic hydrocarbons in distilled beverages

Silva, Amanda Cecília da Silva 06 June 2015 (has links)
Alcoholic beverage consumption increases annually worldwide, and with it the intake of harmful compounds present in these products, such as polycyclic aromatic hydrocarbons (PAHs), which have attracted the attention of researchers because of their carcinogenic potential. Spirits are the class of beverages most affected by this group of contaminants, which reach the product through the burning of the raw material used in production. Despite this concern, there is still no legislation or control for these contaminants in spirits, so legislation is needed as soon as possible; to support it, rapid, robust analytical methods with low waste generation must be developed. Most quantitation methods for PAHs in food use HPLC-FLU or GC-MS, but chromatographic techniques generate large amounts of waste, in addition to long analysis times and high associated costs. This work presents a rapid, relatively simple, and low-cost methodology for the simultaneous quantification of five PAHs (BaP, FL, AC, AN and P) in three types of spirits (rum, cachaça and vodka) using 3D excitation-emission (EEM) fluorescence spectroscopy and second-order calibration, exploiting the second-order advantage to circumvent the problems caused by the complexity of the matrix. 
Calibration models were built with PARAFAC and U-PLS/RBL using individual standard solutions of the pure analytes, and were validated with a set of analyte mixtures to which an interferent (FE) was added; the validation mixtures were designed with a Taguchi design. The validation parameters were satisfactory for both models, with REP ranging from 4.58% to 8.55% for PARAFAC and from 1.75% to 9.16% for U-PLS/RBL. Applied to real samples of spirits, the models showed good performance, with recoveries of 85.99% to 115.18% for PARAFAC and 81.02% to 106.05% for U-PLS/RBL. The models therefore performed satisfactorily for the determination of PAHs in spirits, achieving the second-order advantage with little waste generation, simplicity, and low associated cost.
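The PARAFAC decomposition at the heart of this kind of second-order calibration can be sketched in a few lines of NumPy: a trilinear model X[i,j,k] = Σ_r A[i,r]B[j,r]C[k,r] fitted by alternating least squares. This is a generic illustration on synthetic data, not the thesis's implementation; `khatri_rao` and `parafac_als` are names chosen for the sketch.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of A (I, R) and B (J, R) -> (I*J, R)."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def parafac_als(X, rank, n_iter=500, seed=0):
    """Rank-R PARAFAC decomposition of a 3-way array X (I, J, K) by
    alternating least squares on the three mode unfoldings."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X0 = X.reshape(I, J * K)                     # mode-1 unfolding
    X1 = np.moveaxis(X, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        A = X0 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X1 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X2 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# synthetic rank-2 EEM-like array: 5 samples x 6 excitation x 4 emission channels
rng = np.random.default_rng(1)
At, Bt, Ct = (rng.standard_normal((n, 2)) for n in (5, 6, 4))
X = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)
A, B, C = parafac_als(X, rank=2)
```

In a real EEM application one mode carries the samples, so the recovered sample-mode loadings of each analyte component serve directly for quantification; that is what makes the "second-order advantage" possible in the presence of uncalibrated interferents.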
74

Comprimento efetivo de colunas de aço em pórticos deslocáveis / Effective length for steel columns of plane un-braced frames

Maurício Carmo Antunes 14 September 2001 (has links)
In the practice of the analysis and design of steel structures, instability calculations play an important role, since steel, because of its high strength, encourages the use of significantly slender columns. In the analysis of multi-storey steel plane frames it is usual to employ the well-known K factor, which defines an effective length for each column. This factor is usually obtained from nomograms built on two distinct hypotheses for the instability mode: buckling with lateral displacement of the storey (sway) and buckling with that displacement prevented. 
This division, and the models usually used to treat it, prove incomplete for frames whose behaviour departs from the adopted simplifying hypotheses, and they can lead to confusion and misunderstanding in the use of the K factor. In this work, alternative models for determining the K factor are presented, seeking greater generality, together with attempts to clarify some possible ambiguities in its use; the models are then applied to a number of examples. As a complement, a computer program was developed to determine first- and second-order nodal displacements and member forces for this type of building, as well as alternative nomograms. The results obtained in the examples are contrasted with those given by the program and by the nomograms.
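The nomogram values discussed above are roots of a transcendental characteristic equation. As a sketch — assuming the standard sway-permitted alignment-chart equation (G_A·G_B·(π/K)² − 36) / (6(G_A + G_B)) = (π/K) / tan(π/K), which is the textbook form rather than anything taken from this thesis — K can be found by bisection:

```python
import math

def k_factor_sway(GA, GB):
    """Effective length factor K for a column in a sway-permitted frame,
    solving the alignment-chart (nomogram) equation by bisection.
    GA, GB: stiffness ratios (sum EI/L of columns over beams) at each end."""
    def f(K):
        u = math.pi / K
        lhs = (GA * GB * u * u - 36.0) / (6.0 * (GA + GB))
        rhs = u * math.cos(u) / math.sin(u)   # u/tan(u), well-behaved at u = pi/2
        return lhs - rhs
    lo, hi = 1.0 + 1e-9, 100.0                # f decreases from +inf to < 0 here
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

K = k_factor_sway(1.0, 1.0)   # roughly 1.3, matching the sway nomogram
```

Writing u/tan(u) as u·cos(u)/sin(u) avoids the spurious singularity of tan at u = π/2 (i.e. K = 2); on 1 < K < ∞ the left-hand side minus the right-hand side is strictly decreasing, so the bisection root is unique.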
75

Algoritimos geneticos para seleção de variaveis em metodos de calibração de segunda ordem / Genetic algorithm for selection of variables in second-order calibration methods

Carneiro, Renato Lajarim 07 October 2007 (has links)
Advisor: Ronei Jesus Poppi / Master's dissertation - Universidade Estadual de Campinas, Instituto de Química / The aim of this work was to develop a MATLAB program based on a Genetic Algorithm (GA) and to verify its main advantages for variable selection in second-order calibration methods (BLLS-RBL, PARAFAC and N-PLS). Three data sets were used: 1. Determination of pesticides and a metabolite in red wines by HPLC-DAD in three distinct situations, in which the interferents overlap the compounds of interest. These compounds were the pesticides carbaryl (CBL), methyl thiophanate (TIO), simazine (SIM) and dimethoate (DMT) and the metabolite phthalimide (PTA). 2. Quantification of vitamins B2 (riboflavin) and B6 (pyridoxine) by excitation-emission spectrofluorimetry in commercial infant formulations: three powdered milks and two food supplements. 3. Analysis of ascorbic acid (AA) and acetylsalicylic acid (AAS) in pharmaceutical formulations by FIA with a pH gradient and diode-array detection, where the variation in pH alters the structure of the drug molecules and shifts their spectra in the ultraviolet region. 
The performance of the models, with and without variable selection, was compared through their errors, expressed as the root mean square error of prediction (RMSEP) and the relative error of prediction (REP). Clearly better results were obtained when the GA was used for variable selection in the second-order calibration methods. / Master's / Analytical Chemistry / Master in Chemistry
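The GA variable-selection loop described here — binary chromosomes marking which variables enter the model, fitness given by prediction error — can be sketched generically. The sketch below uses plain least squares and synthetic data in place of the second-order calibration models; every name and parameter value is chosen for the illustration, not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(7)

def rmsep(mask, Xtr, ytr, Xval, yval):
    """Validation RMSE of an ordinary least squares fit on the selected columns."""
    if not mask.any():
        return np.inf
    coef, *_ = np.linalg.lstsq(Xtr[:, mask], ytr, rcond=None)
    resid = yval - Xval[:, mask] @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

def ga_select(Xtr, ytr, Xval, yval, pop_size=30, n_gen=40, p_mut=0.05):
    """Tiny genetic algorithm for variable selection: binary chromosomes,
    tournament selection, uniform crossover, bit-flip mutation, elitism."""
    n_var = Xtr.shape[1]
    pop = rng.random((pop_size, n_var)) < 0.5
    pop[0] = True                                   # seed with the full model
    for _ in range(n_gen):
        fit = np.array([rmsep(c, Xtr, ytr, Xval, yval) for c in pop])
        order = np.argsort(fit)
        new_pop = [pop[order[0]].copy()]            # elitism: keep the best
        while len(new_pop) < pop_size:
            i, j = rng.integers(pop_size, size=2)   # tournament of two
            a = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if fit[i] < fit[j] else pop[j]
            cross = rng.random(n_var) < 0.5         # uniform crossover
            child = np.where(cross, a, b)
            child ^= rng.random(n_var) < p_mut      # bit-flip mutation
            new_pop.append(child)
        pop = np.array(new_pop)
    fit = np.array([rmsep(c, Xtr, ytr, Xval, yval) for c in pop])
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])

# synthetic example: y depends on only 3 of 20 candidate variables
Xtr = rng.standard_normal((60, 20))
Xval = rng.standard_normal((40, 20))
beta = np.zeros(20); beta[[2, 7, 11]] = [1.0, -2.0, 1.5]
ytr = Xtr @ beta + 0.1 * rng.standard_normal(60)
yval = Xval @ beta + 0.1 * rng.standard_normal(40)
mask, best_rmsep = ga_select(Xtr, ytr, Xval, yval)
```

Because the full model is seeded into the initial population and elitism never discards the best chromosome, the selected subset can never predict worse than using all variables — the same comparison the dissertation reports via RMSEP and REP.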
76

Contribuição da rigidez transversal à flexão das lajes na distribuição dos esforços em estruturas de edifícios de andares múltiplos, em teoria de segunda ordem / Contribution of bending stiffness transverse of slabs in the forces distribution in structures of multistory buildings, in second order theory

Carlos Humberto Martins 10 August 1998 (has links)
The main aim of this work is to calculate forces and displacements of three-dimensional structures of multistory buildings, subjected to vertical and lateral loads, considering the transverse bending stiffness of the slabs, in second-order theory. The plate finite element adopted in the floor discretization, responsible for including the bending stiffness contribution of the slabs in the analysis of the building, is the DKT (Discrete Kirchhoff Theory) element. For the columns, force equilibrium is verified in their deformed position — known in the technical literature as second-order analysis — considering geometric non-linearity. Serial and parallel substructuring techniques are applied to the global stiffness matrix to compute the forces and displacements in the structure. 
A computer program was developed for the calculation process, written in Fortran Power Station 90 with pre- and post-processors in Visual Basic 4.0 for the Windows environment. Finally, some examples are presented to verify the validity of the calculation procedure.
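The thesis performs a full three-dimensional second-order analysis with DKT plate elements; as a far simpler illustration of the underlying idea — checking equilibrium in the deformed position — the sketch below runs the classical iterative P-Δ (fictitious lateral load) procedure on a single cantilever column. The formulas are textbook simplifications chosen for this sketch, not the thesis's method.

```python
# Iterative P-Delta (second-order) analysis of a cantilever column:
# the axial load P acting through the drift Delta is replaced by an
# equivalent fictitious lateral load P*Delta/L, and the first-order
# analysis is repeated until the drift converges.
def p_delta_cantilever(H, P, L, EI, tol=1e-12, max_iter=1000):
    d1 = H * L**3 / (3.0 * EI)                 # first-order tip drift
    d = d1
    for _ in range(max_iter):
        H_fict = P * d / L                     # fictitious storey shear
        d_new = d1 + H_fict * L**3 / (3.0 * EI)
        if abs(d_new - d) < tol:
            return d_new
        d = d_new
    return d

# H = 10 kN lateral, P = 200 kN axial, L = 3 m, EI = 2e7 N*m^2
drift = p_delta_cantilever(H=10e3, P=200e3, L=3.0, EI=2.0e7)
```

The iteration is a geometric series with ratio P·L²/(3EI), so it converges to the first-order drift amplified by 1/(1 − P·L²/(3EI)) — the same amplification structure that a full geometric-stiffness analysis produces for a whole frame.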
77

Phasor Measurement Unit Data-based States and Parameters Estimation in Power System

Ghassempour Aghamolki, Hossein 08 November 2016 (has links)
The dissertation research investigates the estimation of power system static and dynamic states (e.g. rotor angle, rotor speed, mechanical power, voltage magnitude, voltage phase angle, mechanical reference point) as well as the identification of synchronous generator parameters. The research has two focuses: i. synchronous generator dynamic model state and parameter estimation using real-time PMU data; ii. integrating PMU data and conventional measurements to carry out static state estimation. The first part of the work focuses on Phasor Measurement Unit (PMU) data-based synchronous generator state and parameter estimation. PMU data-based synchronous generator model identification is carried out using an Unscented Kalman Filter (UKF). The identification gives not only the states and parameters related to synchronous generator swing dynamics, but also those related to the turbine-governor and to primary and secondary frequency control. PMU measurements of active power and voltage magnitude are treated as the inputs to the system, while voltage phasor angle, reactive power, and frequency measurements are treated as the outputs. UKF-based estimation can be carried out in real time. Validation is achieved through event playback: given the same input data, the outputs of the simplified simulation model are compared with the PMU measurements. Case studies are conducted not only on measurements collected from a simulation model, but also on a set of real-world PMU data. The research results have been disseminated in one published article. In the second part of the research, a new state estimation algorithm is designed for static state estimation, containing a new solving strategy together with simultaneous bad data detection. 
The primary challenge in state estimation solvers is the inherent non-linearity and non-convexity of the measurement functions, which requires an interior point algorithm with no guarantee of a globally optimal solution and with higher computational time. This non-linearity and non-convexity come from the nature of the power flow equations. The second major challenge in static state estimation is bad data detection. Traditionally, the Largest Normalized Residual Test (LNRT) has been used to identify bad data in static state estimation; it can only be applied after state estimation, so whenever a bad datum is found, the estimator has to be rerun with the identified bad data removed. A new simultaneous and robust algorithm is therefore designed for static state estimation and bad data identification. Second Order Cone Programming (SOCP) is used to improve the solving technique for the power system state estimator. However, the non-convex feasibility constraints in an SOCP-based estimator force the use of a local solver, such as an interior point method, again with no guarantee of a quality answer. Therefore, a cycle-based SOCP relaxation is applied to the state estimator, and a least squares estimation (LSE) based method is implemented to generate positive semi-definite programming (SDP) cuts. With this approach, the state estimator with SOCP relaxation is strengthened. Since the SDP relaxation leads the power flow problem to a higher-quality solution, adding SDP cuts to the SOCP relaxation brings the problem's feasible region close to the SDP feasible region while avoiding the computational difficulty associated with SDP solvers. The improved solver is effective in reducing the feasible region and eliminating unwanted solutions that violate the cycle constraints. 
Case studies are carried out to demonstrate the effectiveness and robustness of the method. After introducing the new solving technique, a novel co-optimization algorithm for simultaneous nonlinear state estimation and bad data detection is introduced in this dissertation. ${\ell}_1$-norm optimization of the sparse residuals is used as a constraint on the state estimation problem to make the co-optimization possible. Numerical case studies demonstrate more accurate results from the SOCP-relaxed state estimation, successful implementation of the algorithm for simultaneous state estimation and bad data detection, and better state estimation recovery against single and multiple Gaussian bad data compared to the traditional LNRT algorithm.
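The traditional LNRT baseline that the dissertation improves upon can be sketched for a linear(ized) measurement model z = Hx + e: a weighted least squares estimate followed by the largest normalized residual test. This is the textbook procedure, not the dissertation's SOCP formulation; the function name and test data are invented for the example.

```python
import numpy as np

def lnrt_bad_data(H, z, sigma, threshold=3.0):
    """Weighted least squares state estimation plus the Largest Normalized
    Residual Test (LNRT) on a linear measurement model z = Hx + e."""
    R = np.diag(sigma ** 2)                  # measurement error covariance
    W = np.diag(1.0 / sigma ** 2)
    G = H.T @ W @ H                          # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)  # WLS state estimate
    r = z - H @ x_hat                        # measurement residuals
    Omega = R - H @ np.linalg.solve(G, H.T)  # residual covariance
    r_norm = np.abs(r) / np.sqrt(np.diag(Omega))
    worst = int(np.argmax(r_norm))
    bad = worst if r_norm[worst] > threshold else None
    return x_hat, bad, r_norm

# toy system: 3 states, 8 redundant measurements, one corrupted
rng = np.random.default_rng(3)
H = rng.standard_normal((8, 3))
x_true = np.array([1.0, -2.0, 0.5])
sigma = np.full(8, 0.01)
z = H @ x_true + rng.normal(0.0, sigma)
z[4] += 0.5                                  # inject a gross error
x_hat, bad, r_norm = lnrt_bad_data(H, z, sigma)
```

As the abstract notes, this classical scheme is sequential: after `bad` is flagged, the measurement must be removed and the whole estimation rerun, which is precisely the inefficiency the proposed ${\ell}_1$-norm co-optimization avoids.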
78

A Dirichlet-Dirichlet DD-pre-conditioner for p-FEM

Beuchler, Sven 31 August 2006 (has links) (PDF)
In this paper, a uniformly elliptic second order boundary value problem in 2D is discretized by the p-version of the finite element method. An inexact Dirichlet-Dirichlet domain decomposition pre-conditioner for the system of linear algebraic equations is investigated. The solver for the problem in the sub-domains and a pre-conditioner for the Schur-complement are proposed as ingredients for the inexact DD-pre-conditioner. Finally, several numerical experiments are given.
79

Approximation numérique sur maillage cartésien de lois de conservation : écoulements compressibles et élasticité non linéaire / Numerical approximation of conservation laws on Cartesian meshes: compressible flows and nonlinear elasticity

Gorsse, Yannick 09 November 2012 (has links)
In this thesis, we are interested in the numerical simulation of compressible flows involving interfaces. These interfaces can separate a fluid and a rigid solid, two fluids with different equations of state, or a fluid and an elastic solid. First, we developed an immersed boundary method to impose a slip (non-penetration) condition accurately at the boundary of a rigid obstacle. We then studied and validated a sharp-interface scheme for compressible multi-material flows, with a view to applying the immersed boundary method to deformable solids.
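As a minimal illustration of a conservative scheme on a Cartesian mesh (here in 1D, far simpler than the multi-material solver of the thesis), the sketch below advances the inviscid Burgers equation with the Rusanov (local Lax-Friedrichs) flux. The function name and grid parameters are chosen for this sketch.

```python
import numpy as np

def rusanov_burgers(u0, dx, dt, n_steps):
    """Finite-volume update for the 1D Burgers equation u_t + (u^2/2)_x = 0
    on a periodic Cartesian grid, using the Rusanov (local Lax-Friedrichs) flux."""
    u = u0.copy()
    for _ in range(n_steps):
        ul = u
        ur = np.roll(u, -1)                      # right neighbour (periodic)
        a = np.maximum(np.abs(ul), np.abs(ur))   # local wave speed estimate
        # flux[i] approximates the numerical flux at interface i+1/2
        flux = 0.5 * (0.5 * ul**2 + 0.5 * ur**2) - 0.5 * a * (ur - ul)
        u = u - dt / dx * (flux - np.roll(flux, 1))
    return u

n = 200
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx                    # cell centres
u0 = np.sin(2 * np.pi * x) + 1.5                 # smooth data that steepens
u = rusanov_burgers(u0, dx, dt=0.4 * dx / 2.5, n_steps=300)
```

Because each interface flux is added to one cell and subtracted from its neighbour, the sum of cell averages is conserved to round-off even after the shock forms — the defining property that the thesis's sharp-interface multi-material schemes must also preserve across interfaces.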
80

A systemic stigmatization of fat people

Brandheim, Susanne January 2017 (has links)
The aim of this work was to develop knowledge about and awareness of fatness stigmatization from a systemic perspective. The stigmatization of fat people was located as a social problem in a second-order reality in which human fatness is observed and responded to, in turn providing it with negative meaning. Four separate studies of processes involved in this systemic stigmatization were performed. In study I, the association between weight and psychological distress was investigated. When controlling for an age-gender variable, this association was almost erased, questioning the certainty with which a higher weight in general is approached as a medical issue. In study II, the focus was on stigma internalization, where negative and positive responses combined were connected to fat individuals' distress. Both types of response seemed to have a larger impact on fat individuals, suggesting that the embodied stigma of being fat sensitizes them to responses in general. In study III, justifications of fatness stigmatization were explored through a content analysis of a reality TV weight-loss show. The analysis showed how explicit bullying of a fat partner could be justified by animating the thin Self as violated by the fat Other, thus downplaying the evils of the bullying act in favor of highlighting the ideological value of thinness. The implications of these studies were related and situated in a context comprising a historical aversion toward the fat body, a declared obesity epidemic, a new public health ideology, a documented failure to reverse this obesity epidemic, and a market of weight-loss stakeholders who thrive on keeping the negative meanings of being fat alive. The stigmatization of fat people was intelligible from a systemic perspective, in which processes of structural ignorance, internalized self-discrimination, and applied prejudice reinforce one another to form a larger stigmatizing process. 
In study IV, it was argued that viewing fatness stigmatization as oppression rather than misrecognition could hold transformative keys to social change. / There are social groups in society that are categorically connected, for example by their physical, cultural or psychological markers. For political, or moral, reasons, some of these groups seem to trigger special attention in the form of forceful response processes at several societal levels. This is the case with the contemporary 'obesity epidemic' phenomenon, postulated by the World Health Organization as one of the most severe threats to the health of future mankind. One of the downsides of such special attention is that fat individuals find themselves caught up in seemingly unavoidable processes of devaluation. Instead of investigating the catastrophic (well-known) psycho-social consequences for these individuals, this work focuses on connecting the devaluing processes that form a systemic stigmatization of fat individuals. From this critical perspective, it is argued that the pervasive stigmatization of fat people is not an unfortunate consequence of structural norms that passively exclude their 'non-fits', but an intelligible outcome of a highly active set of processes that continuously construct and re-construct a historical aversion towards fat people.
