About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Predicting Motion of Engine-Ingested Particles Using Deep Neural Networks

Bowman, Travis Lynn 01 August 2022 (has links)
The ultimate goal of this work is to facilitate the design of gas turbine engine particle separators by reducing the computational expense of accurately simulating the fluid flow and particle motion inside the separator. It is well documented that particle ingestion has many detrimental impacts on gas turbine engines. The consequences of ice particle ingestion range from surface-wear abrasion to engine power loss. It is known that sufficiently small particles, characterized by small particle response times (τp), closely follow the fluid trajectory, whereas large particles deviate from the streamlines. Rather than manually deriving how the particle acceleration varies from the fluid acceleration, this work implicitly derives this relationship using machine learning (ML). Inertial particle separators are devices designed to remove particles from the engine intake flow, which both extends the lifespan and promotes safer operation of aviation gas turbine engines. Complex flows, such as the flow through a particle separator, naturally have rotation and strain present throughout the flow field. This study attempts to understand whether the motion of particles within rotational and strained canonical flows can be accurately predicted using supervised ML. This report suggests that preprocessing the ML training data into fluid streamline coordinates can improve model training. ML models were developed for predicting particle acceleration in laminar, fully rotational/irrotational flows and in combined laminar flows with rotation and strain. Lastly, the ML model is applied to particle data extracted from a Computational Fluid Dynamics (CFD) study of particle-laden flow around a louver geometry. However, the model trained with particle data from combined canonical flows fails to accurately predict particle accelerations in the CFD flow field. / Master of Science / Aviation gas turbine engine particle ingestion is known to reduce engine lifespans and, in the worst case, even pose a threat to safe operation. Particles being ingested into an engine can be modeled using multiphase flow techniques. Devices called inertial particle separators are designed to remove particles from the flow into the engine. One challenge in designing such a separator is efficiently expelling small particles from the flow without unnecessarily increasing pressure loss through excessive twists and turns in the geometry. Designers usually have to develop such geometries using multiphase computational fluid dynamics (CFD) simulations that solve the fluid and particle dynamics. The abundance of data associated with CFD, and especially with multiphase flows, makes it an ideal application for machine learning (ML). Because such multiphase simulations are very computationally expensive, it is desirable to develop "cheaper" methods. This is the long-term goal of this work: we want to create ML surrogates that decrease the computational cost of simulating the particle and fluid flow in particle separator geometries so that designs can be iterated more quickly. In this work we introduce how artificial neural networks (ANNs), a tool used in ML, can be used to predict particle acceleration in fluid flow. The ANNs are shown to learn the acceleration predictions with acceptable accuracy on training data generated from canonical flow cases. However, the ML model struggles to generalize to actual CFD simulations.
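The abstract does not specify the network architecture, so the following is only a minimal sketch of the supervised set-up it describes: a small multilayer perceptron mapping local flow features to particle acceleration. The feature choice (fluid velocity, particle velocity, response time τp) and the synthetic Stokes-drag training data are illustrative assumptions, not details from the thesis.

```python
# A minimal sketch (not the thesis's actual architecture): an MLP that maps
# local flow features to particle acceleration, trained on synthetic data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic training data from linear Stokes drag, a_p = (u_f - u_p) / tau_p,
# for random fluid/particle velocities and response times tau_p. Real training
# data would come from the canonical flow cases described in the abstract.
n = 4096
u_f = torch.randn(n, 3)                  # local fluid velocity
u_p = u_f + 0.3 * torch.randn(n, 3)      # particle velocity
tau_p = torch.rand(n, 1) * 0.9 + 0.1     # particle response time
x = torch.cat([u_f, u_p, tau_p], dim=1)  # input features (7 per sample)
y = (u_f - u_p) / tau_p                  # target: particle acceleration

model = nn.Sequential(
    nn.Linear(7, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),                    # predicted acceleration components
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.4e}")
```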
252

Exploring Changes in Poverty in Zimbabwe between 1995 and 2001 using Parametric and Nonparametric Quantile Regression Decomposition Techniques

Eriksson, Katherine 27 November 2007 (has links)
This paper applies and extends Machado and Mata's parametric quantile decomposition method and a similar nonparametric technique to explore changes in welfare in Zimbabwe between 1995 and 2001. These methods allow us to construct a counterfactual distribution in order to decompose the shift into the part due to changes in endowments and the part due to changes in returns. We examine two subsets of a nationally representative dataset and find that endowments had a positive effect but that returns accounted for more of the difference. In communal farming areas, the effect of returns was positive while, in urban Harare, it was negative. / Master of Science
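As an illustration of the Machado-Mata procedure the abstract builds on, the sketch below constructs a counterfactual distribution on synthetic data: quantile regressions estimated on the 2001 sample are combined with covariates drawn from the 1995 sample. The two-covariate set-up and all numbers are assumptions for illustration, not the thesis's specification.

```python
# A sketch of the Machado-Mata counterfactual exercise on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def draw_sample(n, beta):
    """Synthetic 'household survey': X = [1, education], welfare y."""
    X = sm.add_constant(rng.normal(2.0, 1.0, size=n))
    y = X @ beta + rng.normal(0.0, 1.0, size=n)
    return X, y

X95, y95 = draw_sample(2000, beta=np.array([1.0, 0.5]))  # 1995 "returns"
X01, y01 = draw_sample(2000, beta=np.array([0.8, 0.7]))  # 2001 "returns"

# Machado-Mata: draw random quantiles, fit a quantile regression at each on
# the 2001 data, then predict with covariates resampled from 1995. This
# yields a counterfactual combining 1995 endowments with 2001 returns.
m = 200
taus = rng.uniform(0.05, 0.95, size=m)
counterfactual = np.empty(m)
for i, tau in enumerate(taus):
    coef = sm.QuantReg(y01, X01).fit(q=tau).params
    x_draw = X95[rng.integers(len(X95))]
    counterfactual[i] = x_draw @ coef

print("median, 1995 actual:    %.2f" % np.median(y95))
print("median, 2001 actual:    %.2f" % np.median(y01))
print("median, counterfactual: %.2f" % np.median(counterfactual))
```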
253

Activity Recognition Processing in a Self-Contained Wearable System

Chong, Justin Brandon 05 November 2008 (has links)
Electronic textiles provide an effective platform for wearable computing elements, especially components geared towards activity recognition. An activity recognition system built into a wearable textile substrate can be utilized in a variety of areas including health monitoring, military applications, entertainment, and fashion. Many previously developed activity recognition and motion capture systems have drawbacks and limitations in their designs and implementations: such systems are often expensive, not conducive to mass production, and difficult to calibrate. An effective system must also be scalable and deployable in a variety of environments and contexts. This thesis presents the design and implementation of a self-contained motion-sensing wearable electronic textile system with an emphasis on activity recognition. The system is developed with scalability and deployability in mind, and as such utilizes a two-tier hierarchical model combined with a network infrastructure and wireless connectivity. An example prototype system, in the form of a jumpsuit garment, is presented, constructed from relatively inexpensive components and materials. / Master of Science
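As an indication of the processing such a garment's sensing tier would feed, here is a generic activity-recognition sketch: a raw accelerometer stream is split into fixed-size windows, summarized by simple per-axis statistics, and classified. The window length, features, and classifier are common choices assumed for illustration and are not taken from the thesis.

```python
# Generic windowed-feature activity recognition on a synthetic stream.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def windows(signal, labels, size=50):
    """Split a (t, 3) accelerometer stream into fixed-size windows and
    summarize each with per-axis mean and standard deviation."""
    feats, ys = [], []
    for start in range(0, len(signal) - size + 1, size):
        w = signal[start:start + size]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
        ys.append(labels[start + size - 1])
    return np.array(feats), np.array(ys)

# Synthetic stream: "still" (low variance) vs "walking" (oscillatory).
t = np.arange(5000)
still = rng.normal(0, 0.05, size=(5000, 3))
walk = 0.5 * np.sin(t / 5.0)[:, None] + rng.normal(0, 0.2, size=(5000, 3))
stream = np.vstack([still, walk])
labels = np.array([0] * 5000 + [1] * 5000)

X, y = windows(stream, labels)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("training accuracy: %.2f" % clf.score(X, y))
```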
254

An Analysis of the Impact of Low Cost Airlines on Tourist Stay Duration and Expenditures

Qiu, W., Rudkin, Simon, Sharma, Abhijit 2017 September 1914 (has links)
Yes / Low cost carriers (budget airlines) have a significant share of the air travel market, but little research has been done to understand the distributional effect of their operation on key tourism indicators such as length of stay and expenditure. Using data on European visitors to the United Kingdom, we demonstrate how counterfactual decompositions can inform us of the true impact of the mode of travel. Passengers on low cost carriers tend to spend less, particularly at the upper end of the distribution. Budget airline users typically stay longer, though differences in the characteristics of the observed groups are important to this result. Counterfactual techniques provide additional valuable insights not obtained from the conventional econometric models used in the literature. Illustrating an application of the methodology to policy, we demonstrate that enabling respondents to extend their stay generates the greatest additional expenditure at the lower end of the distribution. We also show that nationality is a significant characteristic, with important impacts across the expenditure distribution.
255

Long-term effects of hydrated lime and quicklime on the decay of human remains using pig cadavers as human body analogues

Schotsmans, Eline M.J., Fletcher, Jonathan N., Denton, J., Janaway, Robert C., Wilson, Andrew S. January 2014 (has links)
No / An increased number of police enquiries involving human remains buried with lime have demonstrated the need for more research into the effect of different types of lime on cadaver decomposition and its micro-environment. This study follows previous work by the authors investigating the effects of lime on the decay of human remains in laboratory conditions and in 6 months of field experiments. Six pig carcasses (Sus scrofa), used as human body analogues, were buried without lime, with hydrated lime (Ca(OH)2) and with quicklime (CaO) in shallow graves in sandy-loam soil in Belgium, and recovered after 17 and 42 months of burial. Analysis of the soil, lime and carcasses included entomology, pH, moisture content, microbial activity, histology and lime carbonation. The results of this study demonstrate that, despite conflicting evidence in the literature, decomposition is slowed by burial with both hydrated lime and quicklime. The more advanced the decay process, the more similar the degree of liquefaction between the limed and unlimed remains. Each mode of burial will ultimately result in skeletonisation. This study has implications for the investigation of clandestine burials, for a better understanding of archaeological plaster burials, and potentially for the interpretation of mass graves and the management of mass disasters by humanitarian organisations and DVI teams.
256

The Hair

Wilson, Andrew S. January 2008 (has links)
No
257

Hair and nail

Wilson, Andrew S., Gilbert, M.T.P. January 2007 (has links)
No
258

The effect of inorganic fertilizer application on compost and crop litter decomposition dynamics in sandy soil

Van der Ham, Ilana 03 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: Inorganic fertilizer applications are common practice in commercial agriculture, yet little is known about their interaction with organic matter and soil biota. Much research has been done on the effect of inorganic N on forest litter decomposition, but very little has focused on the effect of inorganic fertilizers on crop litters and, to our knowledge, none on composted organic matter. Furthermore, none of this research has been done in South Africa. The main aim of this research project was to determine the effect of inorganic fertilizer applications on the decomposition of selected organic matter sources commonly used in South African agriculture and forestry. Two decomposition studies were conducted over a 3-month period, one on composts and the other on plant litters, using a local sandy soil. In the first experiment a lower-quality compost, compost A (C:N ratio 17.67), and a higher-quality compost, compost B (C:N ratio 4.92), were treated with three commercially used fertilizer treatments. Two were typical blends used for vegetable (tomato and cabbage) production: tomato fertilizer (10:2:15) (100 kg N, 20 kg P, 150 kg K per ha) and cabbage fertilizer (5:2:4) (250 kg N, 100 kg P, 200 kg K per ha). The third blend, an equivalent mass application of N and P at 150 kg of each element per ha, is more commonly used in pastures. In the second experiment, five commonly encountered crop and forestry litters, namely kikuyu grass, lucerne residues, pine needles, sugar cane trash and wheat straw, were selected to represent the labile organic matter sources. The litters were treated with the tomato and cabbage fertilizer application rates. Both decomposition experiments were conducted under ambient laboratory conditions at field water capacity. Decomposition rates were monitored by determining CO2 emissions, dissolved organic carbon (DOC) production, and β-glucosidase and polyphenol oxidase (PPO) activity. At the start and end of each decomposition study, loss on ignition was performed to assess the total loss of organic matter. Based on the results of these two experiments, it was concluded that the addition of high-N inorganic fertilizers enhanced the decomposition of both composted and labile organic matter. For both composts and plant litters, DOC production was greatly enhanced by the addition of inorganic fertilizers, regardless of organic matter quality. The inherent N content of the organic matter played a role in the decomposition response to inorganic fertilizer application, with organic matter low in inherent N showing the greater response. For labile organic matter, polyphenol and cellulose content also played a role in the responses observed.
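The study tracks decomposition through CO2 emissions; as a sketch of how such incubation data are commonly summarized (the kinetic model and all numbers here are assumptions, not results from the thesis), the following fits a first-order model for cumulative CO2-C release.

```python
# Fitting first-order decay kinetics, C(t) = C0 * (1 - exp(-k t)), to
# synthetic cumulative CO2-C incubation data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def cumulative_co2(t, c0, k):
    """Cumulative CO2-C evolved by time t under first-order kinetics."""
    return c0 * (1.0 - np.exp(-k * t))

# Synthetic 3-month incubation: weekly cumulative CO2-C measurements.
days = np.arange(7, 91, 7, dtype=float)
true_c0, true_k = 120.0, 0.03          # mg CO2-C per kg soil, 1/day
obs = cumulative_co2(days, true_c0, true_k) + rng.normal(0, 3, days.size)

(c0_hat, k_hat), _ = curve_fit(cumulative_co2, days, obs, p0=(100.0, 0.01))
print(f"fitted pool size C0 = {c0_hat:.1f} mg/kg, rate k = {k_hat:.4f} 1/day")
```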
259

(Méta)-noyaux constructifs et linéaires dans les graphes peu denses / Constructive and Linear (Meta)-Kernelisations on Sparse Graphs

Garnero, Valentin 04 July 2016 (has links)
In the fields of algorithmics and complexity, a large area of research is based on the assumption that P ≠ NP (Polynomial time and Non-deterministic Polynomial time), meaning that there are problems for which a solution can be verified but not constructed in polynomial time. Under this assumption many natural problems are not in P, that is, they admit no efficient algorithm, and several branches of algorithmics have been developed to tackle them. One of them is parameterized complexity. It consists in developing exact algorithms whose complexity is measured as a function of both the size of the instance and a parameter; the parameter allows a finer-grained analysis of the complexity. In this context, an algorithm is considered efficient if it is fixed-parameter tractable (fpt), that is, if its complexity is exponential in the parameter but polynomial in the size of the instance. Problems that can be solved by such an algorithm form the class FPT.
Kernelisation is one technique, among others, for producing fpt algorithms. It can be viewed as a preprocessing of the instance with a guarantee on the compression of the data. More formally, a kernelisation is a polynomial reduction from a problem to itself, with the additional constraint that the size of the kernel (the reduced instance) is bounded by a function of the parameter. To obtain an fpt algorithm, it suffices to solve the problem on the kernel, for example by brute force (which is exponential in the parameter). Hence the existence of a kernelisation implies the existence of an fpt algorithm, and the converse also holds. Nevertheless, the existence of an efficient fpt algorithm does not guarantee a small kernel, meaning a kernel of linear or polynomial size. Under certain hypotheses, some problems cannot have a kernel at all (namely, those outside FPT) and some problems in FPT do not admit a polynomial kernel.
One of the main results in the field of kernelisation is the construction of a linear kernel for the Dominating Set problem on planar graphs, by Alber, Fellows and Niedermeier. Their region decomposition method has been reused many times to develop kernels for variants of Dominating Set on planar graphs. The method, however, contained a number of inaccuracies that invalidated the proofs. In the first part of this thesis, we present a more rigorous version of the method and illustrate it with two problems: Red Blue Dominating Set and Total Dominating Set. The method has since been generalised, on the one hand, to larger classes of graphs (bounded genus, minor-free, topological-minor-free) and, on the other hand, to larger families of problems. These meta-results prove the existence of a linear or polynomial kernel for every problem satisfying certain generic conditions, on a class of sparse graphs. As the price of such generality, the proofs are not constructive: they provide no extraction algorithm, and the bound on the size of the kernel is not explicit. In the second part of this thesis, we take a first step towards constructive meta-results: we propose a general framework for building linear kernels, inspired by the principles of dynamic programming and by a meta-result of Bodlaender, Fomin, Lokshtanov, Penninkx, Saurabh and Thilikos.
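For readers unfamiliar with kernelisation, the sketch below shows the idea on a classic and much simpler case than the planar Dominating Set kernels studied in the thesis: Buss's reduction rules for Vertex Cover, which shrink any instance to at most k² edges or answer it outright.

```python
# Buss kernelisation for Vertex Cover (a textbook example of a kernel,
# not the thesis's planar Dominating Set construction).
def vertex_cover_kernel(edges, k):
    """Return (kernel edges, remaining budget), or None when no vertex
    cover of size <= k can exist."""
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:
                # Any cover of size <= k must contain v: delete its edges
                # and spend one unit of budget.
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0:
        return None
    # With max degree <= k, a cover of size k covers at most k^2 edges.
    if len(edges) > k * k:
        return None
    return edges, k

# A star with 6 leaves and k = 1: the centre is forced into the cover,
# leaving an empty kernel -> a "yes" instance.
star = [(0, i) for i in range(1, 7)]
print(vertex_cover_kernel(star, 1))    # (set(), 0)
```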
260

Minimização ótima de classes especiais de funções booleanas / On the optimal minimization of special classes of Boolean functions

Callegaro, Vinicius January 2016 (has links)
The problem of factoring and decomposing Boolean functions is Σ₂ᵖ-complete for general functions, but efficient, exact algorithms can be created for known classes of functions such as read-once, disjoint-support decomposable and read-polarity-once functions. A factored form is called read-once (RO) if each variable appears only once, and a Boolean function is RO if it can be represented by an RO form. For example (primes denoting complemented variables throughout), the function f = x1x2 + x1x3x4 + x1x3x5 is RO, since it can be factored into f = x1(x2 + x3(x4 + x5)). A Boolean function f(X) can be decomposed using simpler subfunctions g and h such that f(X) = h(g(X1), X2), with X1, X2 ≠ ∅ and X1 ∪ X2 = X. A disjoint-support decomposition (DSD) is the special case of functional decomposition in which the input sets X1 and X2 share no element, i.e., X1 ∩ X2 = ∅. Roughly speaking, DSD functions can be represented by a read-once expression in which the exclusive-or operator (⊕) may also be used as a base operation. For example, the function f = x1x2'x3 + x1x2x3'x4' + x1x2'x4 is DSD, since it admits the decomposition f = x1(x2 ⊕ (x3 + x4)). A read-polarity-once (RPO) form is a factored form in which each polarity (positive or negative) of a variable appears at most once, and a Boolean function is RPO if it can be represented by an RPO factored form. For example, the function f = x1'x2x4 + x1x3 + x2x3 is RPO, since it can be factored into f = (x1'x4 + x3)(x1 + x2). This dissertation presents four new algorithms for the synthesis of Boolean functions. The first contribution is a synthesis method for read-once functions based on a divide-and-conquer strategy. The second and third contributions are two algorithms for the synthesis of DSD functions: a top-down approach that checks for an OR, AND or XOR decomposition starting from sum-of-products, product-of-sums and exclusive-sum-of-products inputs, respectively; and a bottom-up method based on Boolean difference and cofactor analysis. The last contribution is a new method for synthesizing RPO functions based on the analysis of positive and negative transition sets. Results show the efficacy and efficiency of the four proposed methods.
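As a quick sanity check of the factored forms quoted above (an exhaustive truth-table comparison, not one of the four synthesis algorithms contributed by the dissertation), the following verifies the read-once and read-polarity-once examples.

```python
# Exhaustive equivalence check of the RO and RPO factorizations quoted above.
from itertools import product

def equivalent(f, g, n):
    """Compare two n-input Boolean functions on all 2^n assignments."""
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=n))

# RO example: x1x2 + x1x3x4 + x1x3x5 == x1(x2 + x3(x4 + x5))
sop_ro = lambda x1, x2, x3, x4, x5: (x1 & x2) | (x1 & x3 & x4) | (x1 & x3 & x5)
fac_ro = lambda x1, x2, x3, x4, x5: x1 & (x2 | (x3 & (x4 | x5)))
print(equivalent(sop_ro, fac_ro, 5))    # True

# RPO example: x1'x2x4 + x1x3 + x2x3 == (x1'x4 + x3)(x1 + x2)
sop_rpo = lambda x1, x2, x3, x4: ((1 - x1) & x2 & x4) | (x1 & x3) | (x2 & x3)
fac_rpo = lambda x1, x2, x3, x4: (((1 - x1) & x4) | x3) & (x1 | x2)
print(equivalent(sop_rpo, fac_rpo, 4))  # True
```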
