11

Formulations and Exact Solution Methods For a Class of New Continuous Covering Problems

Cakir, Ozan. January 2009.
This thesis is devoted to introducing new problem formulations and exact solution methods for a class of continuous covering location models. The manuscript comprises three self-contained studies, organized as follows.

In the first study, we introduce the planar expropriation problem with non-rigid rectangular facilities, which has many applications in regional planning and undesirable facility location. This model determines the locations and formations of two-dimensional rectangular facilities. Based on the geometric properties of such facilities, we developed a new formulation that does not require distance measures. The resulting model is a mixed integer nonlinear program. For solving it, we derived a continuous branch-and-bound framework that uses linear approximations of the tradeoff curve associated with the facility formation alternatives, and we developed new problem generation and bounding strategies tailored to this particular branch-and-bound procedure. In a computational study comparing this algorithm with two well-known mixed integer nonlinear programming solvers, our branch-and-bound procedure outperformed BARON and SBB in both processing time and branching-tree size.

The second study addresses the planar maximal covering problem with single convex polygonal shapes, which has ample applications in transmitter location, inspection of geometric shapes and directional antenna location. Here we investigated maximal point containment by an arbitrary convex polygonal shape in the Euclidean plane. Using a fundamental separation property of convex sets, we derived a mixed integer linear formulation for this problem. We identified two types of special cuts based on the geometric properties of the shapes under study, employed them in a branch-and-cut procedure for solving this particular location model, and evaluated the resulting bound quality.

In the third study, we discuss the dynamic planar expropriation problem with single convex polygonal shapes. We showed how the basic formulations of the first two studies extend to their diametric opposites, and further to models in higher dimensions. We then allowed a dynamic setting in which the shape under study functions over a finite planning horizon and the system parameters, such as the fixed point locations and expropriation costs, are subject to change; the shape may relocate at the beginning of each time period according to particular relocation costs. We showed that this dynamic structure decomposes into a set of static problems under a particular vector of relocations, and discussed its solution by two enumeration procedures. We then derived an incomplete dynamic programming procedure suited to this distinct problem structure: not all branches of the branching tree need to be evaluated, one proceeds by keeping the minimum stage cost, and the evaluation of a branch is postponed until relocation takes place in the lower-level problems. With this postponing structure, the procedure proved superior to the two enumeration procedures in terms of tree size. / Thesis / Doctor of Philosophy (PhD)
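The separation property exploited in the second study can be made concrete: a point lies inside a convex polygon exactly when it sits on the inner side of every edge's supporting half-plane. The following is a minimal illustrative sketch of that geometric test, not the thesis's actual formulation; the function and variable names are hypothetical, and vertices are assumed to be given in counter-clockwise order.

```python
# Half-plane containment test for a convex polygon -- the geometric fact
# underlying separation-based covering formulations (illustrative sketch).

def contains(polygon, point, eps=1e-12):
    """Return True if `point` lies inside (or on) the convex polygon,
    whose vertices are listed in counter-clockwise order."""
    x, y = point
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Cross product; a negative value means the point is strictly
        # to the right of the directed edge, hence outside.
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < -eps:
            return False
    return True

# Example: unit square, counter-clockwise vertices.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(contains(square, (0.5, 0.5)))  # True
print(contains(square, (1.5, 0.5)))  # False
```

In a mixed integer formulation, each such half-plane condition becomes a linear constraint, which is what allows point containment by a convex polygon to be modelled without distance measures.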
12

Theoretical and numerical aspects of advection-pressure splitting for 1D blood flow models

Spilimbergo, Alessandra. 19 April 2024.
In this Thesis we explore, both theoretically and numerically, splitting strategies for a hyperbolic system of one-dimensional (1D) blood flow equations with a passive scalar transport equation. Our analysis involves a two-step framework that includes splitting at the level of the partial differential equations (PDEs) and numerical methods for discretizing the ensuing problems. This study is inspired by the flux splitting approach of Toro and Vázquez-Cendón (2012), originally developed for the conservative Euler equations of compressible gas dynamics. In this approach the flux vector in the conservative case, and the system matrix in the non-conservative one, are split into advection and pressure terms, yielding two systems of partial differential equations: the advection system and the pressure system. From both the mathematical and the numerical point of view, a basic problem to be solved is the special Cauchy problem called the Riemann problem. The latter provides an analytical solution against which to evaluate the performance of numerical methods and, in our approach, it is of primary importance in building the presented numerical schemes.

In the first part of the Thesis a detailed theoretical analysis is presented, involving the exact solution of the Riemann problem for the 1D blood flow equations, derived for a general constant momentum correction coefficient and a tube law that can describe both arteries and veins with continuous or discontinuous mechanical and geometrical properties, together with an advection equation for passive scalar transport. In the literature this topic had been studied only for a momentum correction coefficient equal to one; this coefficient is determined by the prescribed velocity profile, and the value one corresponds to a flat profile, i.e. an inviscid fluid. In the case of discontinuous properties, only the subsonic regime is considered. In addition we propose a procedure to compute the exact solution, and we validate it numerically by comparing exact solutions to those obtained with well-known numerical schemes on a carefully designed set of test problems. An analogous theoretical analysis and resolution algorithm are presented for the advection system and the pressure system arising from the splitting of the complete system of 1D blood flow equations at the level of the PDEs. It is worth noting that the pressure system, in the case of veins, presents a loss of genuine non-linearity resulting in the formation of rarefactions, shocks and compound waves, the latter being compositions of rarefactions and shocks.

In the second part of the Thesis we present novel finite-volume-type, flux-splitting-based numerical schemes for the conservative 1D blood flow equations, and splitting-based numerical schemes for the non-conservative 1D blood flow equations, both incorporating an advection equation for passive scalar transport, considering tube laws that model blood flow in arteries and veins, and taking into account a general constant momentum correction coefficient. A detailed efficiency analysis showcases the advantages of the proposed methodologies in comparison to standard approaches.
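For orientation, the system on which the splitting operates can be sketched as follows. This is a common textbook form of the 1D blood flow equations, written here under assumed notation (A cross-sectional area, q flow rate, α the momentum correction coefficient, p(A) the tube law, ρ the blood density); the thesis treats both conservative and non-conservative variants, so the exact terms may differ.

```latex
\begin{aligned}
&\partial_t A + \partial_x q = 0, \\
&\partial_t q + \partial_x\!\left(\alpha\,\frac{q^2}{A}\right)
  + \frac{A}{\rho}\,\partial_x p(A) = 0 .
\end{aligned}
```

In the spirit of the Toro and Vázquez-Cendón (2012) splitting, the transport terms (together with the passive scalar equation) form the advection system, while the pressure-gradient term forms the pressure system; each subsystem is then analysed and discretized separately, and the two updates are combined in the full scheme.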
13

Simulation and Optimal Design of Nuclear Magnetic Resonance Experiments

Nie, Zhenghua. 10 1900.
In this study, we concentrate on spin-1/2 systems. A series of tools based on the Liouville space method have been developed for simulating NMR experiments with arbitrary pulse sequences.

We have calculated steady states symbolically for one- and two-spin systems, and numerically for larger systems. The one-spin calculations show how SSFP converges to continuous-wave NMR. For two-spin systems, a general formula has been derived for the creation of double-quantum signals as a function of irradiation strength, coupling constant, and chemical shift difference. The formalism is general and can be extended to more complex spin systems.

Estimates of transverse relaxation, R₂, are affected by frequency offset and field inhomogeneity. We find that in the presence of expected B₀ inhomogeneity, off-resonance effects can be removed from R₂ measurements by fitting exact solutions of the Bloch equations given in the Lagrange form, provided |ω| ≤ γB₁/2 in Hahn echo experiments, or |ω| ≤ γB₁ in CPMG experiments with specific phase variations.

Approximate solutions of CPMG experiments show that the specific phase variations can significantly smooth the dependence of measured intensities on frequency offset in the range ±γB₁/2. The effective R₂ of CPMG experiments using a phase variation scheme can be expressed as a second-order formula in the ratio of the offset to the π-pulse amplitude.

Optimization problems using the exact or approximate solutions of the Bloch equations are formulated for designing optimal broadband universal rotation (OBUR) pulses. OBUR pulses are independent of the initial magnetization and can replace any pulse of the same flip angle in a pulse sequence. We demonstrate how to calculate the first- and second-order derivatives with respect to the pulses exactly and efficiently; using these exact derivatives, a second-order optimization method is employed to design the pulses. Experiments and simulations show that OBUR pulses provide more uniform spectra over the designed offset range and offer advantages in CPMG experiments. / Doctor of Philosophy (PhD)
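The quantity being estimated can be illustrated with the simplest possible model: a mono-exponential CPMG decay I(t) = I₀·exp(−R₂·t). The sketch below fits R₂ to synthetic echo intensities; it is a simplified stand-in for the exact Bloch-equation fitting described above, and all numbers (echo times, R₂ = 2 s⁻¹, noise level) are made up for illustration.

```python
# Mono-exponential fit of the transverse relaxation rate R2 from CPMG
# echo intensities -- a simplified sketch, not the thesis's exact-Bloch fit.
import numpy as np
from scipy.optimize import curve_fit

def cpmg_decay(t, i0, r2):
    """Ideal on-resonance CPMG envelope: I(t) = I0 * exp(-R2 * t)."""
    return i0 * np.exp(-r2 * t)

# Synthetic echo-train data (hypothetical values, true R2 = 2.0 s^-1).
t = np.linspace(0.01, 1.0, 20)                # echo times in seconds
rng = np.random.default_rng(0)
intensities = cpmg_decay(t, 1.0, 2.0) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(cpmg_decay, t, intensities, p0=(1.0, 1.0))
print(f"fitted I0 = {popt[0]:.3f}, R2 = {popt[1]:.3f} s^-1")
```

The point of the thesis's approach is precisely that this idealized envelope breaks down off resonance, which is why exact Bloch-equation solutions (and phase-varied CPMG schemes) are needed to keep R₂ estimates unbiased.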
14

Rank aggregation with ties: algorithms, user guidance and applications to biological data

Brancotte, Bryan. 25 September 2015.
Rank aggregation consists in building a consensus among a set of rankings (ordered lists of elements). Although this problem has numerous applications (consensus among user votes, consensus among results ordered differently by different search engines, ...), computing an exact consensus is rarely feasible in real applications, the problem being NP-hard, and many approximation algorithms and heuristics have therefore been designed. Their performance, however, both in running time and in the quality of the consensus produced, varies widely and depends on the datasets to be aggregated. Several studies have compared these algorithms, but they generally did not consider the case, common in real datasets, of ties between elements (elements ranked at the same position). Choosing a consensus algorithm appropriate for a given dataset is therefore a particularly important problem to study, given the large number of applications, and an open one in the sense that none of the existing studies answers it. More formally, a consensus ranking is a ranking that minimizes the sum of the distances between itself and each of the input rankings. Like much of the state of the art, our studies use the generalized Kendall-Tau distance and its variants.

This thesis makes three contributions. First, we propose new complexity results for the cases encountered in real data, where rankings may be incomplete and several elements may be tied. We isolate the different parameters that can explain variations in the results produced by aggregation algorithms (for example, use of the generalized Kendall-Tau distance or of variants, or pre-processing of the datasets by unification or projection). We propose a guide for characterizing a user's context and needs, so as to guide the choice of both a pre-processing of the data and the distance used to compute the consensus, and we adapt existing algorithms to this new setting.

Second, we evaluate these algorithms on a large and varied collection of real and synthetic datasets reproducing real-world characteristics such as similarity between rankings, the presence of ties, and the different pre-processings. This large evaluation required a new method, based on a Markov-chain model, for generating synthetic data with similarities. The evaluation isolated the dataset characteristics that affect the performance of aggregation algorithms, and led to a guide that characterizes a user's needs and recommends the algorithm to favour. A web platform for reproducing and extending these analyses is available (rank-aggregation-with-ties.lri.fr).

Finally, we demonstrate the value of the rank aggregation approach in two use cases. We provide a tool that reformulates textual user queries on the fly using biomedical terminologies, then queries biological databases, and finally produces a consensus of the results obtained for each reformulation (conqur-bio.lri.fr); compared with the reference platform, it yields a clear improvement in result quality. We also compute consensus rankings between lists of workflows established by experts in the context of scientific workflow similarity, and observe that the computed consensuses agree with the experts in a very large majority of cases.
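The distance at the heart of this work is easy to state concretely: the generalized Kendall-Tau distance counts pairwise disagreements between two rankings, with a penalty for pairs that are tied in one ranking but strictly ordered in the other. The sketch below is one common variant (rankings given as lists of buckets of tied elements; the tie penalty p is a free parameter, often taken as 1/2) and is illustrative rather than the thesis's exact definition.

```python
# Generalized Kendall-Tau distance with ties (sketch).
# A ranking is a list of "buckets": elements in the same bucket are tied.
from itertools import combinations

def positions(ranking):
    """Map each element to the index of the bucket containing it."""
    return {e: i for i, bucket in enumerate(ranking) for e in bucket}

def kendall_tau_ties(r1, r2, p=0.5):
    pos1, pos2 = positions(r1), positions(r2)
    elements = sorted(pos1)        # assumes both rankings cover the same elements
    dist = 0.0
    for a, b in combinations(elements, 2):
        d1 = pos1[a] - pos1[b]
        d2 = pos2[a] - pos2[b]
        if d1 * d2 < 0:                 # strictly opposite orders: full disagreement
            dist += 1.0
        elif (d1 == 0) != (d2 == 0):    # tied in exactly one ranking: penalty p
            dist += p
    return dist

# Example: [["A"], ["B", "C"]] ranks A first, with B and C tied second.
print(kendall_tau_ties([["A"], ["B", "C"]], [["B"], ["A"], ["C"]]))  # 1.5
```

A consensus ranking is then any ranking minimizing the sum of this distance to all input rankings, which is exactly the objective the compared aggregation algorithms approximate.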
15

Fixed cardinality linear ordering problem: polyhedral studies and solution methods

Neamatian Monemi, Rahimeh. 2 December 2014.
The Linear Ordering Problem (LOP) has received significant attention in different areas of application, ranging from transportation and scheduling to economics, and even archaeology and mathematical psychology; it is NP-hard. This thesis considers a cardinality-constrained variant. Consider a complete weighted directed graph on a vertex set Vₙ, |Vₙ| = n; a linear order is a permutation of the vertices, and a feasible LOP solution is a complete cycle-free digraph (exactly one directed arc between every pair of vertices) whose total arc weight is to be maximized. Now let p be a fixed integer, 0 ≤ p ≤ n. The p-Fixed Cardinality Linear Ordering Problem (FCLOP) asks for a subset S ⊆ Vₙ with |S| = p and a linear order on the vertices of S, the objective being to choose the best subset and order so as to maximize the sum of the arc weights in the solution. Graphically, a feasible FCLOP solution is a complete cycle-free digraph on S plus n − p vertices that are not connected to any other vertex. The literature contains several studies of polyhedral aspects of the linear ordering problem, as well as various exact and heuristic solution methods, but to the best of our knowledge the fixed cardinality variant is studied for the first time in this thesis. Notably, many LOP instances in the literature can be solved by CPLEX in under 10 seconds when p = n, yet once the cardinality is limited to p < n, the same instances become unsolvable due to memory limits.

We studied the polytope corresponding to the FCLOP for different cardinality values: we identified its dimension, proposed several classes of valid inequalities, and showed that some of these inequalities define facets of the FCLOP polytope for different cardinality values. Based on these results, we introduced a Relax-and-Cut algorithm for solving instances of the problem. To solve the instances, we first applied Lagrangian relaxation, studying different relaxation strategies and comparing the dual bounds obtained from each in order to identify the most suitable subproblem; numerical results show that some strategies yield better dual bounds, while others contribute more to reducing the computation time and provide reasonably good dual bounds in a shorter time. We also implemented a Lagrangian decomposition algorithm, decomposing the FCLOP model into three subproblems (instead of only two), associated with the tournament, cycle-free-digraph and cardinality constraints; the interest of this decomposition lies mostly in the nature of the three subproblems, which are considerably easier to solve than the initial FCLOP model. Numerical results show a significant improvement in the quality of the dual bounds for several instances, obtained in a shorter time than with the other relaxation strategies. Finally, we proposed a cutting plane algorithm based on a pure relaxation strategy: we first relax a subset of constraints of which, owing to the problem structure, very few are active; then, in the course of the branch-and-bound tree, we check whether any of the relaxed constraints are violated, and the violated constraints found are added globally to the model. Numerical results show promising performance in reducing computation time and in solving hard instances out of reach of classical MILP solvers.
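For reference, the classical LOP integer program that these relaxations and decompositions act on can be written in its standard form, with x_{ij} = 1 when i precedes j:

```latex
\begin{aligned}
\max \quad & \sum_{i \neq j} w_{ij}\, x_{ij} \\
\text{s.t.} \quad & x_{ij} + x_{ji} = 1 && \forall\, i < j
    && \text{(tournament)} \\
& x_{ij} + x_{jk} + x_{ki} \le 2 && \forall\, i,j,k \text{ distinct}
    && \text{(3-cycle elimination)} \\
& x_{ij} \in \{0,1\}.
\end{aligned}
```

A natural way to impose the fixed cardinality, given here only as an illustrative sketch that may differ from the thesis's actual formulation, is to add selection variables y_i ∈ {0,1} with Σᵢ yᵢ = p, relax the tournament equalities to x_{ij} + x_{ji} ≤ 1 together with x_{ij} + x_{ji} ≥ y_i + y_j − 1, and force x_{ij} ≤ y_i and x_{ij} ≤ y_j so that unselected vertices carry no arcs.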
16

Zero-Error Capacity of Quantum Channels

MEDEIROS, Rex Antonio da Costa. 1 August 2018.
In this thesis, the zero-error capacity of discrete memoryless channels is generalized to quantum channels, and a new capacity for the transmission of classical information through quantum channels is proposed. The quantum zero-error capacity (QZEC) is defined as the maximum amount of information per channel use that can be sent through a noisy quantum channel with an error probability of exactly zero. The communication protocol restricts codewords to tensor products of input quantum states, while collective measurements across several channel outputs are allowed; the protocol is thus similar to the Holevo-Schumacher-Westmoreland protocol. The problem of finding the QZEC is reformulated in graph-theoretic terms, and this equivalent definition is used to prove properties of the families of quantum states and measurements that attain the QZEC. We show that the capacity of a quantum channel on a Hilbert space of dimension d can always be achieved using families of at most d pure states. Regarding measurements, we prove that collective von Neumann measurements are necessary and sufficient to attain the capacity. We discuss whether the QZEC is a non-trivial generalization of the classical zero-error capacity, where non-trivial refers to the existence of quantum channels for which the QZEC can only be reached by families of non-orthogonal quantum states and codes of length greater than or equal to two. We investigate the QZEC of several quantum channels, and show that computing the QZEC of classical-quantum channels is a purely classical problem. In particular, we exhibit a quantum channel for which we conjecture that the QZEC can only be attained using a family of non-orthogonal quantum states; if the conjecture is true, the exact value of the capacity can be computed and a quantum block code attaining it can be constructed. Finally, we prove that the QZEC is upper-bounded by the Holevo-Schumacher-Westmoreland capacity.
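The graph-theoretic reformulation parallels Shannon's classical construction: non-adjacent vertices of the confusability graph correspond to inputs that can never be confused, so log α(G), with α the independence number, lower-bounds the zero-error capacity, and the capacity itself is the limit over strong powers of the graph. The sketch below illustrates only this classical ingredient on Shannon's pentagon example; it is not the quantum computation from the thesis.

```python
# Zero-error codes from a confusability graph: one-shot zero-error
# transmission uses an independent set (pairwise non-confusable inputs).
# Brute-force independence number -- adequate for tiny graphs only.
from itertools import combinations
from math import log2

def independence_number(n, edges):
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):            # try largest subsets first
        for subset in combinations(range(n), size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return size
    return 0

# C5 (pentagon) confusability graph: adjacent inputs are confusable.
pentagon = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
alpha = independence_number(5, pentagon)
print(alpha, log2(alpha))   # 2 -> 1 bit per use with length-1 codes;
# longer codes do better: Lovász showed C0(C5) = (1/2) * log2(5).
```

The thesis's point is that in the quantum setting the analogous optimization may additionally require non-orthogonal input states, which has no classical counterpart.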
