About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
341

Models and algorithms for network design problems

Poss, Michael 22 February 2011 (has links)
In this thesis, we study various models, deterministic and stochastic, for network design problems. We also examine the stochastic knapsack problem and, more generally, probabilistic capacity constraints. In the first part, we focus on deterministic network design models featuring numerous technical constraints that approach realistic situations. We begin by studying two telecommunications network models. The first considers multi-layer networks with capacities on the arcs, while the second studies single-layer, uncapacitated networks in which each commodity must be routed over K disjoint paths of length at most L. We solve both problems with a branch-and-cut algorithm based on the Benders decomposition of linear formulations of these problems. The novelty of our approach lies mainly in the empirical study of the optimal frequency of cut generation during the algorithm. We then study a power transmission network expansion problem. Our work examines different models and formulations for the problem, comparing them on real Brazilian networks. In particular, we show that re-dimensioning allows significant cost reductions. In the second part, we examine stochastic programming models. First, we prove that three special cases of the knapsack problem with simple recourse can be solved by dynamic programming algorithms. We then reformulate the problem as a nonlinear integer program and test a branch-and-cut algorithm based on outer approximation of the objective function. This algorithm is then turned into a branch-and-cut-and-price algorithm, used to solve a stochastic network design problem with simple recourse. Finally, we show how to linearize probabilistic capacity constraints with binary variables when the coefficients are random variables satisfying certain properties.
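As an illustration of the last point, the sketch below shows a standard deterministic equivalent of a probabilistic capacity constraint over binary variables, assuming independent normally distributed coefficients (one of the well-known "certain properties" cases; the distributional assumption, the data, and the helper names are ours, not the thesis'):

```python
# Minimal sketch (not from the thesis): the deterministic equivalent of a
# probabilistic capacity constraint with binary variables and independent
# normally distributed coefficients a_i ~ N(mu_i, sigma_i^2):
#   P(sum_i a_i x_i <= c) >= 1 - eps
# is equivalent to
#   sum_i mu_i x_i + z_{1-eps} * sqrt(sum_i sigma_i^2 x_i) <= c,
# where x_i^2 = x_i has been used because the variables are binary.
import numpy as np
from scipy.stats import norm

mu    = np.array([4.0, 3.0, 5.0])     # assumed means
sigma = np.array([1.0, 0.5, 2.0])     # assumed standard deviations
c, eps = 10.0, 0.05
z = norm.ppf(1 - eps)

def chance_feasible(x):
    """Check the deterministic equivalent of the chance constraint."""
    x = np.asarray(x, dtype=float)
    return mu @ x + z * np.sqrt(sigma**2 @ x) <= c

def monte_carlo_prob(x, n=200_000, seed=0):
    """Estimate P(sum a_i x_i <= c) by simulation, for comparison."""
    rng = np.random.default_rng(seed)
    a = rng.normal(mu, sigma, size=(n, len(mu)))
    return np.mean(a @ np.asarray(x, dtype=float) <= c)

x = [1, 1, 0]
print(chance_feasible(x), monte_carlo_prob(x))
```

Because the variables are binary, x_i^2 = x_i, so the term under the square root stays linear in the same binaries; this is the kind of structure that linearizations of such constraints exploit, although the thesis' own linearization may differ in its details.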
342

Methods for phylogenetic analysis

Krig, Kåre January 2010 (has links)
In phylogenetic analysis one studies the relationships between different species. By comparing DNA from two species it is possible to obtain a numerical value representing the difference between them. For a set of species, all pair-wise comparisons result in a dissimilarity matrix d. In this thesis I present a few methods for constructing a phylogenetic tree from d. The common denominator of these methods is that they do not generate a tree, but instead give a connected graph. The resulting graph will be a tree in the regions where the data perfectly match a tree. When d does not perfectly match a tree, the resulting graph will instead show the different possible topologies and how strong their support is in the data. Finally, I have tested the methods both on real measured data and on constructed test cases.
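A minimal sketch of how "the data perfectly match a tree" can be made concrete (our illustration, not code from the thesis): a dissimilarity matrix d is an exact tree metric precisely when every quadruple of taxa satisfies the four-point condition, and the size of any violation indicates how strongly the data depart from a single topology.

```python
# Minimal sketch (not from the thesis): checking how "tree-like" a
# dissimilarity matrix d is via the four-point condition. For every
# quadruple (i, j, k, l) the two largest of the three sums
#   d[i,j]+d[k,l], d[i,k]+d[j,l], d[i,l]+d[j,k]
# must be equal if d is exactly a tree metric; the gap between them
# measures how far the data are from fitting a single tree topology.
from itertools import combinations
import numpy as np

def four_point_gaps(d):
    """Return largest-minus-second-largest sum for each quadruple of taxa."""
    n = d.shape[0]
    gaps = {}
    for i, j, k, l in combinations(range(n), 4):
        sums = sorted([d[i, j] + d[k, l], d[i, k] + d[j, l], d[i, l] + d[j, k]])
        gaps[(i, j, k, l)] = sums[2] - sums[1]   # 0 for a perfect tree metric
    return gaps

# Toy dissimilarity matrix for four taxa that is an exact tree metric.
d = np.array([[0, 3, 7, 8],
              [3, 0, 8, 9],
              [7, 8, 0, 5],
              [8, 9, 5, 0]], dtype=float)
print(four_point_gaps(d))   # all gaps are 0.0 here
```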
343

Application of Sputtering Technology on Preparing Visible-light Nano-sized Photocatalysts for the Decomposition of Acetone

Wu, Yi-chen 05 September 2007 (has links)
This study investigated the decomposition efficiency of acetone using unmodified (pure TiO2) and modified TiO2 (TiO2/ITO and TiO2/N) photocatalysts prepared by sputtering technology. The influence of operating parameters, including wavelength and relative humidity, on the decomposition efficiency of acetone was also discussed. Operating parameters investigated included light wavelength (350~400, 435~500, and 506~600 nm), photocatalyst (TiO2/ITO, TiO2/N, and TiO2), and relative humidity (RH) (0%, 50%, and 100%). In the experiments, acetone was degraded by the photocatalysts in a self-designed batch photocatalytic reactor. Samples coated with TiO2 were placed in the batch reactor. The incident light at each wavelength was provided by a 20-watt lamp; a low-pressure mercury lamp for UV light or LED lamps for blue and green light were placed on top of the reactor. Acetone was injected into the reactor using a gas-tight syringe. Reactants and products were analyzed quantitatively by gas chromatography with a flame ionization detector and a methanizer (GC/FID-Methanizer). The photocatalyst film surface showed a tapered columnar structure, with column widths ranging from 100 to 200 nm. The film showed a crystalline columnar surface, and the thickness of the photocatalyst film was in the range of 4.0-4.3 μm. The highest decomposition efficiency of acetone was observed with TiO2/ITO under visible light at 50% RH. The TiO2 synthesized in the tested photocatalysts was mainly anatase. AFM images showed that the photocatalyst surface was rugged, with a mountain-ridge-like distribution. Keywords: sputtering technology, modified photocatalysts, photosensitive, acetone, photocatalytic oxidation, acetone decomposition
344

Multiscale/multimodel numerical approaches to the degradation of composite materials

Touzeau, Josselyn 30 October 2012 (has links) (PDF)
Our work concerns the implementation of a multiscale method to facilitate the numerical simulation of complex structures, applied to the modelling of aeronautical components (in particular rotating turbojet parts and laminated composite structures). These developments are built around the Arlequin method, which enriches numerical models by means of patches around zones of interest where complex phenomena occur. The method is implemented in a general framework allowing the superposition of incompatible meshes within the Z-set/Zébulon finite element code, using an optimal formulation of the coupling operators. The accuracy and robustness of the approach were assessed on various numerical problems. To increase the performance of the Arlequin method, a dedicated solver based on domain decomposition techniques was developed in order to exploit the computing power offered by parallel architectures. Its performance was evaluated on several academic and quasi-industrial test cases. Finally, these developments were applied to the simulation of laminated composite structures.
345

Depth Map Compression Based on Platelet Coding and Quadratic Curve Fitting

Wang, Han 26 October 2012 (has links)
Owing to the rapid development of 3D technology in recent decades, many 3D representation approaches have been proposed worldwide. To obtain accurate information for rendering a 3D representation, more data need to be recorded than for a normal video sequence. Finding an efficient way to transmit these data therefore becomes an important part of the overall 3D representation technology. In recent years, many coding schemes based on encoding the depth have been proposed. Compared to traditional multiview coding schemes, these schemes achieve higher compression efficiency. With the development of depth-capturing technology, the accuracy and quality of the reconstructed depth image have also improved. In this thesis we propose an efficient depth data compression scheme for 3D images: platelet-based coding using Lagrangian optimization, quadtree decomposition, and quadratic curve fitting. We study and improve the original platelet-based coding scheme and achieve a compression improvement of 1-2 dB over it. The experimental results illustrate the improvement provided by our scheme: the quality of the reconstructions produced by the proposed curve-fitting-based platelet coding scheme is better than that of the original scheme.
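A minimal sketch of the rate-distortion-driven quadtree splitting that platelet-style coders rely on (our simplified illustration, not the thesis code: it fits one plane per block rather than the full set of platelet or quadratic models, uses a greedy one-level split test, and the multiplier and rate costs are assumed values):

```python
# Minimal sketch (not the thesis code): Lagrangian quadtree splitting of a
# depth map. Each square block is approximated by a least-squares plane
# z ~ a + b*x + c*y; a block is split into four children whenever the
# children's total cost D + lambda*R is lower than the parent's.
import numpy as np

LAMBDA = 50.0        # Lagrangian multiplier (assumed value)
BITS_PER_MODEL = 48  # assumed rate of one planar model (3 coefficients)

def plane_cost(block):
    """Squared error of the best-fit plane plus its (assumed) rate cost."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(block.size), xs.ravel(), ys.ravel()])
    coef, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    distortion = float(np.sum((A @ coef - block.ravel()) ** 2))
    return distortion + LAMBDA * BITS_PER_MODEL

def quadtree(block, x=0, y=0, min_size=4):
    """Return the leaf blocks (x, y, size) chosen by the greedy split test."""
    size = block.shape[0]
    if size <= min_size:
        return [(x, y, size)]
    h = size // 2
    children = [(block[:h, :h], x, y), (block[:h, h:], x + h, y),
                (block[h:, :h], x, y + h), (block[h:, h:], x + h, y + h)]
    if sum(plane_cost(b) for b, _, _ in children) < plane_cost(block):
        leaves = []
        for b, cx, cy in children:
            leaves += quadtree(b, cx, cy, min_size)
        return leaves
    return [(x, y, size)]

depth = np.fromfunction(lambda i, j: np.minimum(i, 20.0) + 0.1 * j, (32, 32))
print(len(quadtree(depth)), "leaf blocks")
```

Replacing the per-block plane with piecewise-linear platelets or with the quadratic curve fits of the thesis only changes plane_cost; the Lagrangian split decision stays the same.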
346

Boundary integral methods for Stokes flow : Quadrature techniques and fast Ewald methods

Marin, Oana January 2012 (has links)
Fluid phenomena dominated by viscous effects can, in many cases, be modeled by the Stokes equations. The boundary integral form of the Stokes equations reduces the number of degrees of freedom in a numerical discretization by reformulating the three-dimensional problem as two-dimensional integral equations to be discretized over the boundaries of the domain. Hence for the study of objects immersed in a fluid, such as drops or elastic/solid particles, integral equations are discretized over the surfaces of these objects only. As outer boundaries or confinements are added, these must also be included in the formulation. An inherent difficulty in the numerical treatment of boundary integrals for Stokes flow is the integration of the singular fundamental solution of the Stokes equations, the so-called Stokeslet. To alleviate this problem we developed a set of high-order quadrature rules for the numerical integration of the Stokeslet over a flat surface. Such a quadrature rule was first designed for singularities of the type 1/|x|. To assess the convergence properties of this quadrature rule a theoretical analysis has been performed. The slightly more complicated singularity of the Stokeslet required certain modifications of the integration rule developed for 1/|x|. An extension of this type of quadrature rule to a cylindrical surface is also developed. These quadrature rules are also tested on physical problems that have an analytical solution in the literature. Another difficulty associated with boundary integral problems is introduced by periodic boundary conditions. For a set of particles in a periodic domain, periodicity is imposed by requiring that the motion of each particle has an added contribution from all periodic images of all particles all the way up to infinity. This leads to an infinite sum which is not absolutely convergent, and an additional physical constraint which removes the divergence needs to be imposed. The sum is decomposed into two fast-converging sums: one that handles the short-range interactions in real space and one that sums up the long-range interactions in Fourier space. Such decompositions are already available in the literature for kernels that are commonly used in boundary integral formulations. Here a decomposition into faster-decaying sums than those in the literature is derived for the periodic kernel of the stress tensor. However, the computational complexity of the sums, regardless of the decomposition they stem from, is O(N^2). This complexity can be lowered using a fast summation method, as we introduce here for simulating a sedimenting fiber suspension. The fast summation method was initially designed for point particles and could be used for numerically discretized fibers almost without changes. However, when two fibers are very close to each other, analytical integration is used to eliminate numerical inaccuracies due to the nearly singular behavior of the kernel, and the real-space part of the fast summation method was modified to allow for this analytical treatment.
The method we have developed for sedimenting fiber suspensions allows for simulations in large periodic domains, and we have performed a set of such simulations at a larger scale (larger domain/more fibers) than previously feasible.
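The Ewald-type decomposition described above can be illustrated on the simplest singular kernel, 1/r, rather than the Stokeslet or stress tensor treated in the thesis (a sketch under that simplification; the splitting parameter is an arbitrary choice):

```python
# Minimal illustration (not the thesis code) of the Ewald idea on the
# scalar kernel 1/r: the kernel is split into a short-ranged part,
# erfc(xi*r)/r, summed directly in real space, and a smooth long-ranged
# part, erf(xi*r)/r, which decays fast in Fourier space. The split is
# exact for every r and any splitting parameter xi.
import numpy as np
from scipy.special import erf, erfc

xi = 1.5                      # splitting parameter (assumed value)
r = np.linspace(0.1, 5.0, 6)  # pairwise distances

short_range = erfc(xi * r) / r   # decays like exp(-(xi*r)^2): truncate in real space
long_range  = erf(xi * r) / r    # smooth everywhere: sum in Fourier space

# The two parts recombine to the original kernel to machine precision.
assert np.allclose(short_range + long_range, 1.0 / r)
print(np.column_stack([r, short_range, long_range]))
```

The short-range part is truncated at a cutoff radius in real space, while the smooth long-range part is summed over a modest number of Fourier modes; the parameter xi shifts work between the two sums without changing their total.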
347

Arthropod succession in Whitehorse, Yukon Territory and compared development of Protophormia terraenovae (R.-D.) from Beringia and the Great Lakes Region

Bygarski, Katherine 01 July 2012 (has links)
Forensic medicocriminal entomology is used in the estimation of post-mortem intervals in death investigations, by means of arthropod succession patterns and the development rates of individual insect species. The purpose of this research was to determine arthropod succession patterns in Whitehorse, Yukon Territory, and to compare the development rates of the dominant blowfly species (Protophormia terraenovae R.-D.) with those of another population collected in Oshawa, Ontario. Decomposition in Whitehorse occurred at a much slower rate than is expected for the summer season, and the singularly dominant blowfly species is not considered dominant or a primary colonizer in more southern regions. Development rates of P. terraenovae were determined under natural fluctuating conditions and at two constant temperatures. Under natural fluctuating conditions, there was no significant difference in growth rate between the studied biotypes. Results at repeated 10°C conditions varied, though neither biotype completed development, indicating that the published minimum development thresholds for this species are underestimated.
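Development-rate comparisons of this kind are commonly summarized as accumulated degree-hours above a species' minimum development threshold; a minimal sketch follows (our illustration, with a placeholder threshold rather than a value from the thesis):

```python
# Minimal sketch (illustrative, not from the thesis): accumulated
# degree-hours above a minimum development threshold, the quantity used
# to compare insect development across fluctuating and constant
# temperature regimes. The threshold below is a placeholder value.
def accumulated_degree_hours(hourly_temps_c, threshold_c=10.0):
    """Sum of (T - threshold) over all hours where T exceeds the threshold."""
    return sum(max(0.0, t - threshold_c) for t in hourly_temps_c)

# Example: one day of hourly temperatures in a fluctuating regime.
day = [8, 8, 9, 10, 12, 14, 16, 18, 19, 20, 21, 21,
       20, 19, 18, 17, 15, 14, 13, 12, 11, 10, 9, 8]
print(accumulated_degree_hours(day))  # degree-hours above 10 degrees C
```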
348

Energy, exergy and cost analyses of nuclear-based hydrogen production via thermochemical water decomposition using a copper-chlorine (Cu-Cl) cycle

Orhan, Mehmet Fatih 01 April 2008 (has links)
In this thesis the copper-chlorine (Cu-Cl) thermochemical cycle and its components, as well as the operational and environmental conditions, are defined, and a comprehensive thermodynamic analysis of the Cu-Cl thermochemical cycle, including the relevant chemical reactions, is performed. The performance of each component/process is also evaluated through energy and exergy efficiencies. Various parametric studies on energetic and exergetic aspects with variable reaction and reference-environment temperatures are carried out. A detailed analysis of the general methodology of cost estimation for the proposed process, including all cost items with their percentages, the factors that affect accuracy, and a scaling method, is also presented.
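For readers unfamiliar with the two efficiency measures, the sketch below shows the standard definitions such analyses use, with placeholder numbers (neither the values nor the heat-source temperature are results from the thesis):

```python
# Minimal sketch (illustrative numbers, not thesis results) of the
# efficiency definitions used in this kind of analysis: energy efficiency
# relates the heating value of the hydrogen produced to the heat input;
# exergy efficiency relates the chemical exergy of the hydrogen to the
# exergy of the heat input (Carnot-weighted for a source at T_SOURCE).
LHV_H2 = 120.0e6        # J/kg, lower heating value of hydrogen
EX_CH_H2 = 117.0e6      # J/kg, standard chemical exergy of hydrogen (approx.)
T0, T_SOURCE = 298.15, 800.0 + 273.15   # K, reference and heat-source temperatures

def energy_efficiency(m_h2_kg, q_in_j):
    return m_h2_kg * LHV_H2 / q_in_j

def exergy_efficiency(m_h2_kg, q_in_j):
    ex_in = q_in_j * (1.0 - T0 / T_SOURCE)   # exergy content of heat at T_SOURCE
    return m_h2_kg * EX_CH_H2 / ex_in

m_h2, q_in = 1.0, 280.0e6    # 1 kg of H2 from 280 MJ of heat (placeholder values)
print(energy_efficiency(m_h2, q_in), exergy_efficiency(m_h2, q_in))
```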
349

An Economic Analysis of School and Labor Market Outcomes For At-Risk Youth

Kagaruki-Kakoti, Generosa 12 May 2005 (has links)
Federal education policy has targeted disadvantaged children in order to improve their academic performance. The most recent federal education policy is the No Child Left Behind law signed by President Bush in 2001. Indicators often used to identify an at-risk youth span economic, personal, family, and neighborhood characteristics. A probit model is used in this study to estimate the probability that a student graduates from high school as a function of 8th grade variables. Students are classified as at-risk of dropping out of high school or non at-risk based on having one or more risk factors. The main measures of academic outcomes are high school completion and post-secondary academic achievements. The main measures of labor market outcomes are short-term and long-term earnings. The results show that a student who, in the eighth grade, comes from a low-income family, has a sibling who dropped out, has parents with low education, is home alone after school for three hours or more, or comes from a step family is at-risk of dropping out of high school. At-risk students are less likely than non at-risk students to graduate from high school. They appear to be more sensitive to existing conditions that may impair or assist their academic progress while they are in high school. At-risk students are also less likely to select a bachelor's degree. However, when compared to otherwise comparable non at-risk students, a greater percentage of at-risk students select a bachelor's degree or post-graduate degrees. At-risk individuals face a long-term disadvantage in the labor market, receiving lower wage offers than the non at-risk group. Comparing only those without post-secondary education shows that the average earnings offered to at-risk individuals were lower than those offered to non at-risk individuals. At-risk college graduates also receive lower earnings than non at-risk college graduates. The wage differential is largely due to the disadvantage at-risk individuals face in the labor market.
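A minimal sketch of the kind of probit specification described (synthetic data and hypothetical variable names; the actual 8th-grade regressors and estimates are those reported in the thesis):

```python
# Minimal sketch (hypothetical variable names, synthetic data): the
# probability that a student graduates from high school modeled as a
# probit function of 8th-grade risk indicators.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
low_income   = rng.integers(0, 2, n)       # synthetic 8th-grade risk indicators
sibling_drop = rng.integers(0, 2, n)
home_alone   = rng.integers(0, 2, n)
latent = 1.0 - 0.8 * low_income - 0.6 * sibling_drop - 0.4 * home_alone
graduate = (latent + rng.standard_normal(n) > 0).astype(int)

X = sm.add_constant(np.column_stack([low_income, sibling_drop, home_alone]))
model = sm.Probit(graduate, X).fit(disp=False)
print(model.summary())   # negative coefficients -> lower graduation probability
```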
350

Quantitative Tissue Classification via Dual Energy Computed Tomography for Brachytherapy Treatment Planning : Accuracy of the Three Material Decomposition Method

Gürlüler, Merve January 2013 (has links)
Dual Energy Computed Tomography (DECT) is an emerging technique that offers new possibilities to determine the composition of tissues in clinical applications. Accurate knowledge of tissue composition is important, for instance, for brachytherapy (BT) treatment planning. However, the accuracy of CT numbers measured with contemporary clinical CT scanners is relatively low, since CT numbers are affected by image artifacts. The aim of this work was to estimate the accuracy of CT numbers measured with the Siemens SOMATOM Definition Flash DECT scanner and the accuracy of the resulting volume or mass fractions calculated via the three material decomposition method. CT numbers of mixtures of water, gelatin, and a third component (salt, hydroxyapatite, or protein powder) were measured using the Siemens SOMATOM Definition Flash DECT scanner. The accuracy of the CT numbers was determined by (i) a comparison with theoretical (true) values and (ii) using different measurement conditions (configurations) and assessing the resulting variations in CT numbers. The accuracy of mass fractions determined via the three material decomposition method was estimated by a comparison with mass fractions measured with calibrated scales; the latter method was assumed to provide highly accurate results. It was found that (i) axial scanning biased CT numbers for some detector rows; (ii) a large volume of air surrounding the measured region shifted CT numbers compared to a configuration where the region was surrounded by water; (iii) a highly attenuating object shifted the CT numbers of surrounding voxels; and (iv) some image kernels caused overshooting and undershooting of CT numbers close to edges. The three material decomposition method produced mass fractions differing from the true values by 8% and 15% for the salt and hydroxyapatite mixtures, respectively. In this case, the analyzed CT numbers were averaged over a volumetric region. For individual voxels, the volume fractions were affected by statistical noise. The method failed when statistical noise was high or when the CT numbers of the decomposition triplet were similar. Contemporary clinical DECT scanners produced image artifacts that strongly affected the accuracy of the three material decomposition method; Siemens' image reconstruction algorithm is not well suited for quantitative CT. The three material decomposition method worked relatively well for averages of CT numbers taken from volumetric regions, as these averages lowered the statistical noise in the analyzed data.
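At its core the three material decomposition is a small linear solve per voxel; the sketch below uses illustrative CT numbers rather than values measured in the thesis:

```python
# Minimal sketch (illustrative CT numbers, not measured values from the
# thesis) of the three material decomposition method: a voxel's CT numbers
# at the low and high tube voltages are written as volume-fraction-weighted
# mixtures of three base materials, and the constraint that the fractions
# sum to one closes the 3x3 linear system.
import numpy as np

# Assumed CT numbers (HU) of the three base materials at the two energies:
#                  low kV  high kV
water   = np.array([  0.0,   0.0])
gelatin = np.array([ 60.0,  55.0])
salt    = np.array([450.0, 300.0])

def three_material_fractions(hu_low, hu_high):
    """Solve for (f_water, f_gelatin, f_salt) from one voxel's two CT numbers."""
    A = np.array([[water[0], gelatin[0], salt[0]],
                  [water[1], gelatin[1], salt[1]],
                  [1.0,      1.0,        1.0]])      # fractions sum to one
    b = np.array([hu_low, hu_high, 1.0])
    return np.linalg.solve(A, b)

# A voxel that is truly 70% water, 20% gelatin and 10% salt would measure
# HU_low = 0.2*60 + 0.1*450 = 57 and HU_high = 0.2*55 + 0.1*300 = 41:
print(three_material_fractions(57.0, 41.0))   # -> approx. [0.7, 0.2, 0.1]
```

Because the system is solved voxel by voxel, noise in the two measured CT numbers propagates directly into the fractions, which is why the method behaves much better when CT numbers are first averaged over a volumetric region.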
