About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
631

Planejamento da expansão de sistemas de transmissão usando técnicas especializadas de programação inteira mista / Transmission network expansion planning via efficient mixed-integer linear programming techniques

Vanderlinde, Jeferson Back [UNESP] 06 September 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / This research considers the theoretical analysis and computational implementation of specialized primal simplex (PSA) and dual simplex (DSA) algorithms for bounded variables. These algorithms were incorporated into a branch-and-bound (B&B) algorithm to solve the transmission network expansion planning (TNEP) problem. The TNEP problem is modeled using the transportation model and the linear disjunctive model (DM), which produces a mixed-integer linear programming (MILP) problem. After relaxing the integrality of the investment variables of the original MILP problem, the PSA is used to solve the initial linear programming (LP) problem. A strategy was also implemented in the PSA to reduce the number of artificial variables added to the LP problem, and consequently the number of PSA iterations. Starting from the optimal tableau of the initial LP, the DSA efficiently reoptimizes the subproblems generated by the B&B algorithm, removing the need to solve each subproblem from scratch and thus reducing CPU time and memory consumption. The proposed approach is implemented in the FORTRAN programming language and operates independently of any commercial solver.
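Not part of the original record: a minimal sketch of the LP-relaxation branch-and-bound loop the abstract describes, using SciPy's HiGHS solver as a stand-in for the thesis's specialized FORTRAN simplex routines (problem data and function names are illustrative).

```python
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, integer_idx, tol=1e-6):
    """Minimize c @ x s.t. A_ub @ x <= b_ub, x >= 0, x[i] integer for i in integer_idx."""
    best_val, best_x = math.inf, None
    stack = [[(0, None)] * len(c)]          # per-variable (lower, upper) bounds
    while stack:
        bounds = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if not res.success or res.fun >= best_val:   # infeasible or pruned by bound
            continue
        frac = [i for i in integer_idx if abs(res.x[i] - round(res.x[i])) > tol]
        if not frac:                                 # integer feasible: new incumbent
            best_val, best_x = res.fun, res.x
            continue
        i = frac[0]                                  # branch on a fractional variable
        lo, hi = bounds[i]
        down, up = list(bounds), list(bounds)
        down[i] = (lo, math.floor(res.x[i]))         # left child:  x[i] <= floor
        up[i] = (math.ceil(res.x[i]), hi)            # right child: x[i] >= ceil
        stack.extend([down, up])
    return best_val, best_x

# Tiny example: min -x0 - x1 s.t. 2*x0 + x1 <= 4, x0 + 2*x1 <= 4, x integer.
print(branch_and_bound([-1, -1], [[2, 1], [1, 2]], [4, 4], integer_idx=[0, 1]))
```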
632

Flow-shop with time delays, linear modeling and exact solution approaches / Flow-shop avec temps de transport, modélisation linéaire et approches de résolution exacte

Mkadem, Mohamed Amine 07 December 2017 (has links)
In this thesis, we study the two-machine flow-shop problem with time delays with the objective of minimizing the makespan. First, we propose a set of mixed-integer programming (MIP) formulations for the problem. In particular, we introduce a new compact mathematical formulation, based on a non-trivial generalization of the assignment model, for the case where operations are identical per machine. The proposed formulations are then used to develop lower bounds and a branch-and-cut method. A set of valid inequalities is proposed to improve the linear relaxation of the MIPs and accelerate their convergence. These inequalities are based on new dominance rules and on optimal solutions of polynomial-time-solvable sub-instances; these sub-instances are extracted by computing all maximal cliques of a particular interval graph. In addition to the valid inequalities, the branch-and-cut method includes a heuristic method and a node-pruning procedure. Finally, we propose a branch-and-bound method, for which we introduce dominance rules and a local-search-based heuristic. Experiments were conducted on a variety of instance classes, including both instances from the literature and newly proposed ones on which published algorithms perform poorly; they show the efficiency of our approaches, which outperform the leading methods in the research literature.
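Not part of the record: a small sketch of the objective this entry studies, computing the makespan of a given job sequence in the two-machine flow shop with time delays (job data are made up; the thesis's MIP formulations and branch-and-cut are not reproduced here).

```python
# Job j runs p1[j] on machine 1, must then wait at least delay[j] before
# starting its p2[j] units on machine 2.
def makespan(seq, p1, p2, delay):
    c1 = 0.0   # completion time on machine 1
    c2 = 0.0   # completion time on machine 2
    for j in seq:
        c1 += p1[j]                          # machine 1 processes jobs back to back
        c2 = max(c1 + delay[j], c2) + p2[j]  # machine 2 waits for delay and availability
    return c2

# Example with 3 jobs; an exact method searches over sequences (MIP / B&B).
print(makespan([0, 1, 2], p1=[2, 3, 1], p2=[2, 1, 4], delay=[1, 0, 2]))
```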
633

Otimização de estruturas reticuladas planas com comportamento geometricamente não linear / Optimization of plane frame structures with geometrically nonlinear behavior

ASSIS, Lilian Pureza de 20 October 2006 (has links)
The aim of this work is to present a formulation and corresponding computational implementation for the sizing optimization of plane frames and cable-stayed columns considering geometrically nonlinear behavior. The structural analysis is based on the finite element method, using the updated Lagrangian approach for plane frame and cable elements, the latter represented by plane truss elements. The nonlinear system is solved by the Newton-Raphson method coupled with load-increment strategies such as the arc-length method and the generalized displacement parameter method, which allow the algorithm to traverse any critical point appearing along the equilibrium path. In the optimization process the design variables are the heights of the cross-sections of the frame elements, the objective function represents the volume of the structure, and the constraints impose limits on displacements and on the critical load; lateral constraints bound the design variables. The sensitivity of the objective function is obtained by direct differentiation, and the sensitivity of the displacement and critical-load constraints by the finite difference method. The optimization is carried out with three different strategies: a sequential quadratic programming algorithm, an interior-point algorithm, and the branch-and-bound method. Numerical experiments validate the nonlinear analysis and sensitivity strategies, as well as the optimization formulation implemented in this dissertation.
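Not part of the record: a one-degree-of-freedom sketch of the incremental-load Newton-Raphson iteration the abstract mentions (the arc-length and generalized-displacement strategies for passing critical points are omitted; the spring law is invented for illustration).

```python
# Find u with R(u, lam) = fint(u) - lam * fext = 0 at each load increment.
def newton_raphson_path(fint, dfint, fext, steps=10, tol=1e-10, max_iter=50):
    u, path = 0.0, []
    for k in range(1, steps + 1):
        lam = k / steps                    # load factor for this increment
        for _ in range(max_iter):
            r = fint(u) - lam * fext       # residual force
            if abs(r) < tol:
                break
            u -= r / dfint(u)              # Newton correction with tangent stiffness
        path.append((lam, u))
    return path

# Example: softening spring fint(u) = 10*u - u**3 under total load 5.0.
path = newton_raphson_path(lambda u: 10*u - u**3, lambda u: 10 - 3*u**2, fext=5.0)
print(path[-1])
```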
634

Essays in quantitative macroeconomics : assessment of structural models with financial and labor market frictions and policy implications / Essais de macroéconomie quantitative : évaluation des modèles structurels avec des frictions financières et du marché du travail et implications aux politiques macroéconomiques

Zhutova, Anastasia 21 November 2016 (has links)
In this thesis, I provide an empirical assessment of the relations between the main macroeconomic variables that drive the business cycle. Each of the three chapters treats an empirical question using Bayesian estimation. In the first chapter, we investigate the conditional contribution of the labor market transition rates (the job finding rate and the separation rate) to unemployment. The literature has not reached a consensus on which rate dominates labor market dynamics: while Blanchard and Diamond (1990) concluded that the fall in employment during slumps results from a higher separation rate, Shimer (2012), as well as Hall (2005), explains unemployment variations mainly by the job finding rate. Our result, obtained through the estimation of a structural VAR model, shows that the importance of the transition rates depends on the shocks that hit the economy and hence on the labor market institutions.
In the second chapter, we assess the impact of the labor market policies of US President H. Hoover implemented at the beginning of the Great Depression. We show that these policies prevented the US economy from entering a deep deflationary spiral. Estimating a medium-scale DSGE model, we compare the two opposite effects these policies produce: a negative effect through a fall in employment, and a positive effect through inflationary expectations, which are expansionary when monetary policy is unresponsive to the rise in prices. The results depend on the monetary policy rule we assume: the Taylor principle or price-level targeting. The third chapter is devoted to the relation between the real interest rate and economic activity, which depends on the number of asset market participants. Using a DSGE model and allowing the proportion of these agents to be stochastic and to follow a Markov chain, we identify the historical sub-periods where this proportion was low enough to reverse the IS curve. For the US, we find this relation to be positive during the Great Inflation period and for a short period at the onset of the Great Recession. In the euro area, the proportion of non-participants increased during 2009-2015, but only to amplify the negative correlation between the real interest rate and output growth.
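Not part of the record: the first chapter's object of study can be summarized by the textbook steady-state flow identity linking unemployment to the two transition rates, sketched below with hypothetical rates (the thesis's structural VAR estimation is not reproduced).

```python
# In steady state, inflows s*(1-u) balance outflows f*u, so u* = s / (s + f).
def steady_state_unemployment(separation_rate, job_finding_rate):
    return separation_rate / (separation_rate + job_finding_rate)

# Example: monthly rates s = 0.02, f = 0.45 imply u* of about 4.3%.
print(steady_state_unemployment(0.02, 0.45))
```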
635

Quantum coin flipping and bit commitment : optimal bounds, practical constructions and computational security / Pile-ou-face et mise-en-gage de bit quantique : bornes optimales, constructions pratiques et sécurité calculatoire

Chailloux, André 24 June 2011 (has links)
Quantum computing allows us to revisit cryptographic primitives with information-theoretic security, i.e. security even against all-powerful adversaries. In 1984, Bennett and Brassard presented a quantum key distribution protocol in which two players, Alice and Bob, cooperate to share a secret key that remains unknown to a third party, Eve, with access to the communication channel. This task is achieved with information-theoretic security, which is impossible classically.
In my thesis, I study two-player cryptographic primitives where the players do not trust each other, mainly quantum coin flipping and quantum bit commitment. Classically, these primitives are achievable only under computational assumptions, i.e. by assuming the hardness of a given problem. Quantum protocols have been constructed for these primitives in which a dishonest player can cheat only with constant probability strictly smaller than 1, which remains impossible classically. However, Lo, Chau and Mayers showed that these primitives cannot be achieved perfectly even quantumly if one requires information-theoretic security. I study to what extent imperfect protocols can be achieved in this setting.
In the first part, I construct a quantum coin flipping protocol with cheating probability at most 1/√2 + ε for any ε > 0. This complements a result by Kitaev stating that in any quantum coin flipping protocol, one of the players can cheat with probability at least 1/√2. I also construct a quantum bit commitment protocol with cheating probability at most 0.739 + ε for any ε > 0 and show that this protocol is essentially optimal. Finally, I derive upper and lower bounds for quantum oblivious transfer, a universal cryptographic primitive.
In the second part, I study practical aspects of these primitives, taking into account the losses that can occur when measuring a quantum state. I construct loss-tolerant quantum coin flipping and quantum bit commitment protocols with cheating probability 0.859. I also study these primitives in the device-independent model, where the players do not trust their quantum devices.
Finally, in the third part, I study these cryptographic primitives under computational security. More precisely, I study the relationship between computational quantum bit commitment and quantum zero-knowledge protocols.
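Not part of the record: the optimality claims above revolve around Kitaev's bound for strong quantum coin flipping, which in this notation reads (statement only, not its semidefinite-programming proof):

```latex
% Kitaev: in any strong quantum coin flipping protocol, the optimal cheating
% probabilities of Alice and Bob satisfy
\[
  P_A^{*} \, P_B^{*} \;\ge\; \tfrac{1}{2}
  \quad\Longrightarrow\quad
  \max\{P_A^{*}, P_B^{*}\} \;\ge\; \tfrac{1}{\sqrt{2}} \approx 0.7071,
\]
% a bound the protocol in the first part matches up to any eps > 0.
```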
636

Elektrochemická charakterizace nanostrukturovaných povrchů modifikovaných biolátkami s thiolovou vazbou / Electrochemical Characterization of Nanostructured Surfaces Modified by Substances with a Thiol Bond

Urbánková, Kateřina January 2014 (has links)
This master's thesis deals with nanotechnology, nanoparticles and nanostructured surfaces, and with electrochemical methods, especially voltammetry, cyclic voltammetry, electrochemical impedance spectroscopy and contact angle measurement. One part focuses on electrodes that are nanostructured and modified by substances with a thiol bond. The practical section presents a procedure for the preparation of gold nanostructured electrodes, including SEM images of the electrode surface. Nanostructured and bare gold electrodes were modified with 11-mercaptoundecanoic acid, streptavidin, glycine and biotin, and characterized by cyclic voltammetry, electrochemical impedance spectroscopy and contact angle measurement.
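Not part of the record: a sketch of the triangular potential sweep that defines a cyclic voltammetry experiment, with invented sweep parameters (the thesis's instrumentation and settings are not reproduced).

```python
# Ramp the electrode potential from E_start to E_vertex at a fixed scan rate,
# then sweep back, producing the triangular waveform used in CV.
import numpy as np

def cv_waveform(e_start, e_vertex, scan_rate, dt=0.001):
    t_half = abs(e_vertex - e_start) / scan_rate       # duration of one sweep [s]
    t = np.arange(0.0, 2 * t_half, dt)
    sign = 1.0 if e_vertex > e_start else -1.0
    up = e_start + sign * scan_rate * t                # forward sweep
    down = e_vertex - sign * scan_rate * (t - t_half)  # reverse sweep
    return t, np.where(t <= t_half, up, down)

t, e = cv_waveform(e_start=-0.2, e_vertex=0.6, scan_rate=0.1)  # 100 mV/s
print(e.min(), e.max())
```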
637

Algorithme de branch-and-price-and-cut pour le problème de conception de réseaux avec coûts fixes, capacités et un seul produit / Branch-and-price-and-cut algorithm for the single-commodity capacitated fixed-charge network design problem

Kéloufi, Ghalia K. 12 1900 (has links)
No description available.
638

[en] DISCRETE PRECODING AND ADJUSTED DETECTION FOR MULTIUSER MIMO SYSTEMS WITH PSK MODULATION / [pt] PRECODIFICAÇÃO DISCRETA E DETECÇÃO CORRESPONDENTE PARA SISTEMAS MIMO MULTIUSUÁRIO QUE UTILIZAM MODULAÇÃO PSK

ERICO DE SOUZA PRADO LOPES 10 September 2021 (has links)
With an increasing number of antennas in multiple-input multiple-output (MIMO) systems, the energy consumption and cost of the corresponding front ends become significant. In this context, a promising approach is the use of low-resolution data converters. In this study, two novel optimal branch-and-bound precoding algorithms constrained to constant-envelope signals and phase quantization are proposed. The first maximizes the minimum distance to the decision threshold (MMDDT) at the receivers, while the second minimizes the MSE between the users' data symbols and the receive signal. The MMDDT design presented in this study is a generalization of prior designs that rely on 1-bit quantization. Moreover, unlike the prior MMSE design that relies on 1-bit resolution, the proposed MMSE approach employs uniform phase quantization, and the bounding step in the branch-and-bound method differs in considering the most restrictive relaxation of the nonconvex problem, which is then also utilized for a suboptimal design. In addition, three different soft detection methods and an iterative detection and decoding scheme are proposed that allow the use of channel coding in conjunction with low-resolution precoding. Besides an exact approach for computing the extrinsic information, two approximations with reduced computational complexity are devised. The proposed branch-and-bound precoding algorithms are superior to existing methods in terms of bit error rate, and numerical results show that they have significantly lower complexity than exhaustive search. Finally, results based on an LDPC block code indicate that the proposed receive processing schemes yield a lower bit error rate than the conventional design.
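Not part of the record: an exhaustive-search baseline for the phase-quantized constant-envelope MMSE precoding problem described above, with made-up channel and symbol data; the thesis's branch-and-bound reaches the same optimum without enumerating all Q**M candidates.

```python
# Per antenna, pick one of Q uniform phases so the noiseless receive signal
# H @ x is closest (in squared error) to the users' PSK symbols s.
import itertools
import numpy as np

def mse_exhaustive_precoder(H, s, Q=4):
    M = H.shape[1]                                   # number of transmit antennas
    phases = np.exp(2j * np.pi * np.arange(Q) / Q)   # uniform phase alphabet
    best_x, best_err = None, np.inf
    for combo in itertools.product(phases, repeat=M):
        x = np.array(combo) / np.sqrt(M)             # constant-envelope candidate
        err = np.linalg.norm(H @ x - s) ** 2
        if err < best_err:
            best_x, best_err = x, err
    return best_x, best_err

rng = np.random.default_rng(0)
H = (rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))) / np.sqrt(2)
s = np.exp(1j * np.pi / 4) * np.ones(2)              # QPSK symbols for 2 users
print(mse_exhaustive_precoder(H, s)[1])
```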
639

Sensitivity Analysis and Material Parameter Estimation using Electromagnetic Modelling / Känslighetsanalys och estimering av materialparametrar med elektromagnetisk modellering

Sjödén, Therese January 2012 (has links)
Estimating parameters is the problem of finding their values from measurements and modelling. Parameters describe properties of a system; materials, for instance, are defined by mechanical, electrical, and chemical parameters. Fisher information is an information measure, indicating how changes in a parameter affect the estimation; it incorporates the physical model of the problem and the statistical model of the noise. The Cramér-Rao bound is the inverse of the Fisher information and gives the best possible variance of any unbiased estimator. This thesis considers aspects of sensitivity analysis in two applied material parameter estimation problems. Sensitivity analysis with the Fisher information and the Cramér-Rao bound is used as a tool for evaluating measurement feasibility, comparing measurement set-ups, and quantifying the trade-off between accuracy and resolution in inverse imaging. The first application is the estimation of the wood grain angle in trees and logs. The grain angle is the angle between the direction of the wood fibres and the direction of growth; a large grain angle correlates strongly with twist in sawn timber. In the thesis, microwave measurement is advocated as a fast and robust technique, and electromagnetic modelling is applied, exploiting the anisotropic properties of wood. Both two-dimensional and three-dimensional modelling are considered. Mathematical modelling is essential, lowering the complexity and speeding up the computations. According to a sensitivity analysis with the Cramér-Rao bound, estimation of the wood grain angle with microwaves is feasible. The second application is electrical impedance tomography, where the conductivity of an object is estimated from surface measurements. Electrical impedance tomography has applications in, for example, medical imaging, geological surveillance, and wood evaluation. Different configurations and noise models are evaluated with sensitivity analysis for a two-dimensional electrical impedance tomography problem. The relation between accuracy and resolution is also analysed using the Fisher information. In conclusion, sensitivity analysis is employed in this thesis as a method to enhance material parameter estimation; the methods are general and applicable to other parameter estimation problems as well.
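Not part of the record: the Fisher-information/Cramér-Rao recipe the abstract relies on, sketched for a linear Gaussian measurement model with an invented Jacobian (the thesis applies the same recipe to electromagnetic models).

```python
# For y = J @ theta + noise with i.i.d. noise variance sigma2, the Fisher
# information matrix is J.T @ J / sigma2 and the CRB is its inverse.
import numpy as np

def cramer_rao_bound(J, sigma2):
    fim = J.T @ J / sigma2            # Fisher information matrix
    return np.linalg.inv(fim)         # best covariance of any unbiased estimator

J = np.array([[1.0, 0.5],
              [0.3, 2.0],
              [1.2, 0.1]])            # sensitivities of 3 measurements to 2 parameters
crb = cramer_rao_bound(J, sigma2=0.01)
print(np.sqrt(np.diag(crb)))          # lower bounds on parameter standard deviations
```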
640

Dynamics of few-cluster systems.

Lekala, Mantile Leslie 30 November 2004 (has links)
The three-body bound state problem is considered using configuration-space Faddeev equations within the framework of the total-angular-momentum representation. Different three-body systems are considered, the main concerns of the investigation being i) the calculation of binding energies for weakly bound trimers, ii) the handling of systems with a plethora of states, iii) the importance of three-body forces in trimers, and iv) the development of a numerical technique for reliably handling three-dimensional integrodifferential equations. In this respect we considered the three-body nuclear problem, the 4He trimer, and the ozone (16O3) system. In practice, we solve the three-dimensional equations using the orthogonal collocation method with triquintic Hermite splines. The resulting eigenvalue equation is handled using the explicitly restarted Arnoldi method in conjunction with Chebyshev polynomials to improve convergence. To further facilitate convergence, the grid knots are distributed quadratically, so that there are more grid points in regions where the potential is stronger. The so-called tensor-trick technique is also employed to handle the large matrices involved. The computation of the many, densely spaced states in the ozone case is best implemented using the global minimization program PANMIN, based on the well-known MERLIN optimization program. Stable results comparable to those of other methods were obtained for both the nucleonic and the molecular systems considered. / Physics / D.Phil. (Physics)
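Not part of the record: large sparse eigenproblems of this kind are commonly handled with restarted Arnoldi iterations; the sketch below uses SciPy's ARPACK wrapper (implicitly restarted) on a stand-in tridiagonal matrix, not the thesis's explicitly restarted solver or its collocation matrices.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n = 2000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")  # model Hamiltonian

# Smallest-magnitude eigenvalues play the role of bound-state energies;
# sigma=0.0 selects shift-invert mode targeting eigenvalues near zero.
vals, vecs = eigs(A, k=3, sigma=0.0)
print(np.sort(vals.real))
```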
