291

Partial Volume Quantification Using Magnetic Resonance Fingerprinting

Deshmane, Anagha Vishwas 02 June 2017 (has links)
No description available.
292

Qualification and quantification of bacterial pathogen load in acute bovine respiratory disease cases.

Roof, Clinton January 1900 (has links)
Master of Science / Department of Clinical Sciences / Michael D. Apley / One hundred ninety-four steers, bulls, and heifers weighing 182-318 kg were purchased at an Arkansas sale barn and shipped 12 hours to a northern Kansas feedlot. There was no previous history of treatment, and the cattle had been delivered to the sale barn within the 24-hour period prior to the sale. The objectives of the study were to (1) evaluate bacterial pathogen isolates in different locations in the respiratory tract, (2) evaluate pathogen load in clinically ill and clinically normal calves, and (3) compare histological damage that may be a result of clinical disease. Fifteen calves were identified with signs of acute bovine respiratory disease (BRD) based on clinical score and a minimum rectal temperature of 40 °C. An additional 5 calves with no clinical signs and rectal temperatures < 40 °C were selected as controls. Cattle were humanely euthanized following recording of antemortem clinical observations. At postmortem, samples for microbiologic and histologic (hematoxylin and eosin stain) analysis were collected from grossly normal and/or consolidated tissue in each lung lobe. Samples were also collected from the tonsils and trachea. Quantification of the BRD pathogens per gram was determined for each positive site and then converted to total counts for each animal. Total colony-forming units (CFU) of pathogens in the entire lung for cattle with identified pathogens ranged from 2×10⁷ to 2×10⁸ CFU for Pasteurella multocida and 9×10⁶ to 9×10⁸ CFU for Mannheimia haemolytica. Total visually estimated percent consolidation ranged from 0.0% to 45.0% of the total lung. Isolated pathogens from the upper and lower respiratory tract were compared and showed no significant agreement. 
Histology scores of 0-4 were assigned to the tissue samples and compared to the quantified BRD pathogens to test for a possible association between the pathologic process and the total agents in that tissue sample. A significant difference in bacterial counts between histology scores of 0 or 1 and a histology score of 4 was observed.
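The per-gram-to-total-CFU conversion the abstract describes can be sketched as follows; the per-gram counts and tissue masses below are hypothetical illustrations, not study data.

```python
# Sketch of converting per-gram pathogen counts to whole-lung totals,
# as described above. All numbers are hypothetical, not study data.

def total_cfu(samples):
    """Total CFU across culture-positive sites: sum of CFU/g x tissue mass (g)."""
    return sum(cfu_per_gram * grams for cfu_per_gram, grams in samples)

# (CFU per gram, sampled tissue mass in grams) for each positive site
sites = [(1.0e5, 120.0), (5.0e4, 160.0)]
print(f"{total_cfu(sites):.1e}")  # 2.0e+07, the low end of the reported P. multocida range
```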
293

The caking and swelling of South African large coal particles / Sansha Coetzee

Coetzee, Sansha January 2015 (has links)
The swelling and caking propensity of coals may cause operational problems such as channelling and excessive pressure build-up in combustion, gasification and specifically in fluidised-bed and fixed-bed operations. As a result, the swelling and caking characteristics of certain coals make them less suitable for use as feedstock in applications where swelling and/or caking is undesired. Therefore, various studies have focused on the manipulation of the swelling and/or caking propensity of coals, and have proven the viability of using additives to reduce the swelling and caking of powdered coal (<500 μm). However, there is still a lack of research specifically focused on large coal particle devolatilisation behaviour, particularly swelling and caking, and the reduction thereof using additives. A comprehensive study was therefore proposed to investigate the swelling and caking behaviour of large coal particles (5, 10, and 20 mm) of typical South African coals, and the influence of the selected additive (potassium carbonate) thereon. Three different South African coals were selected based on their Free Swelling Index (FSI): coal TSH is a high-swelling coal (FSI 9) from the Limpopo province, GG is a medium-swelling coal (FSI 5.5-6.5) from the Waterberg region, and TWD is a non-swelling coal (FSI 0) from the Highveld region. Image analysis was used to semi-quantitatively describe the transient swelling and shrinkage behaviour of large coal particles (-20+16 mm) during low-temperature devolatilisation (700 °C, N2 atmosphere, 7 K/min). X-ray computed tomography and mercury submersion were used to quantify the degree of swelling of large particles, and were compared to conventional swelling characteristics of powdered coals. The average swelling ratios obtained for TWD, GG, and TSH were respectively 1.9, 2.1 and 2.5 from image analysis and 1.8, 2.2 and 2.5 from mercury submersion. 
The results showed that coal swelling measurements such as FSI, and other conventional techniques used to describe the plastic behaviour of powdered coal, cannot in general be used for the prediction of large coal particle swelling. The large coal particles were impregnated for 24 hours, using an excess 5.0 M K2CO3 impregnation solution. The influence of K2CO3-addition on the swelling behaviour of different coal particle sizes was compared, and results showed that the addition of K2CO3 resulted in a reduction in swelling for powdered coal (-212 μm), as well as large coal particles (5, 10, and 20 mm). For powdered coal, the addition of 10 wt.% K2CO3 decreased the free swelling index of GG and TSH coals from 6.5 to 0 and from 9.0 to 4.5, respectively. The volumetric swelling ratios (SRV) of the 20 mm particles were reduced from 3.0 to 1.8 for the GG coal, and from 5.7 to 1.4 for TSH. In contrast to the non-swelling (FSI 0) behaviour of the TWD powders, the large particles exhibited average SRV values of 1.7, which were found not to be influenced by K2CO3-impregnation. It was found that the maximum swelling coefficient, kA, was reduced from 0.025 to 0.015 °C⁻¹ for GG, and from 0.045 to 0.027 °C⁻¹ for TSH, as a result of impregnation. From the results it was concluded that K2CO3-impregnation reduces the extent of swelling of coals such as GG (medium-swelling) and TSH (high-swelling), which exhibit significant plastic deformation. Results obtained from the caking experiments indicated that K2CO3-impregnation influenced the physical behaviour of the GG coal particles (5, 10, and 20 mm) the most. The extent of caking of GG was largely reduced due to impregnation, while the wall thickness and porosity also decreased. The coke from the impregnated GG samples had a less fluid-like appearance compared to coke from the raw coal. Bridging neck size measurements were performed, which quantitatively showed a 25-50% decrease in the caking propensity of GG particles. 
Coal TWD did not exhibit any caking behaviour. The K2CO3-impregnation did not influence the surface texture or porosity of the TWD char, but increased the overall brittleness of the devolatilised samples. Both the extent of caking and porosity of TSH coke were not influenced by impregnation. However, impregnation resulted in significantly fewer and smaller open pores on the surface of the devolatilised samples, and also reduced the average wall thickness of the TSH coke. The overall conclusion from this investigation is that K2CO3 (using solution impregnation) can be used to significantly reduce the caking and swelling tendency of large coal particles which exhibit a moderate degree of fluidity, such as GG (Waterberg region). The results obtained during this investigation show the viability of using additives to reduce the caking and swelling tendency of large coal particles. Together with further development, this may be a suitable method for modifying the swelling and caking behaviour of specific coals for use in fixed-bed and fluidised-bed gasification operations. / PhD (Chemical Engineering), North-West University, Potchefstroom Campus, 2015
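As a rough illustration of the volumetric swelling ratio reported above, SRV can be estimated from equivalent particle diameters before and after devolatilisation; the equivalent-sphere assumption and the diameters used here are hypothetical, not measurements from the thesis.

```python
import math

def volumetric_swelling_ratio(d_before_mm, d_after_mm):
    """SRV = devolatilised volume / original volume, using an
    equivalent-sphere approximation of the particle."""
    volume = lambda d: math.pi / 6.0 * d ** 3
    return volume(d_after_mm) / volume(d_before_mm)

# A 20 mm particle growing to ~28.8 mm equivalent diameter gives SRV ~ 3.0,
# comparable to the raw GG value quoted above (geometry is hypothetical).
print(round(volumetric_swelling_ratio(20.0, 28.8), 1))  # 3.0
```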
295

Efficient real-time voice transmission over wireless ad hoc networks

Kwong, Mylène D January 2008 (has links)
Mobile telephony is becoming widespread and new types of networks are emerging, notably ad hoc networks. Without focusing exclusively on these particular networks, the number of voice calls made each minute is constantly increasing, yet networks still frequently suffer transmission errors. This thesis addresses the use of coding methods for voice transmission that is robust to packet loss over a disturbed mobile wireless network allowing multipath routing. The proposed method applies Multiple Description Coding (MDC) to the bitstream of a low-bit-rate speech codec, specifically AMR-WB (Adaptive Multi Rate - Wide Band). Among the parameters encoded by AMR-WB, the linear prediction coefficients are computed once per frame, unlike the other parameters, which are computed four times. The main difficulty lies in properly creating descriptions for the linear prediction parameters. The chosen method applies conjugate vector quantization with four descriptions. To reduce search complexity, the process is supported by a pre-classifier that performs a localized search in the full codebook according to the position of an input vector. Applying the MDC model to speech signals shows that using four descriptions yields better results when the network is subject to packet loss. Optimizing the communication between routing and the description-creation process leads to an adaptive multiple description coding method. The work in this thesis aimed at reconstructing a high-quality speech signal while suitably optimizing storage resources, complexity, and computation. 
The adaptive MDC method meets these expectations and proves very robust in a packet-loss context.
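A minimal sketch of the multiple-description idea: four interleaved descriptions with simple loss concealment. This is a simplified stand-in for the conjugate vector quantization the thesis actually applies to the AMR-WB parameters; all names here are illustrative.

```python
def make_descriptions(frames, n=4):
    """Split a frame sequence into n interleaved descriptions (odd/even-style
    interleaving generalised to n streams), a simplified stand-in for the
    conjugate vector quantization used in the thesis."""
    return [frames[i::n] for i in range(n)]

def reconstruct(descriptions, received, n_frames):
    """Rebuild the stream from the received descriptions; missing frames
    are filled with the nearest earlier received frame (naive concealment)."""
    n = len(descriptions)
    out = [None] * n_frames
    for idx in received:
        for j, frame in enumerate(descriptions[idx]):
            out[idx + j * n] = frame
    last = None
    for i, frame in enumerate(out):
        if frame is None:
            out[i] = last  # conceal the loss; stays None before the first hit
        else:
            last = frame
    return out
```

Losing two of the four descriptions still leaves every other frame intact, which is exactly the graceful degradation that makes MDC attractive on lossy multipath networks.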
296

Partial volume effect correction in positron emission tomography in animal models

Guillette, Nicolas January 2011 (has links)
Quantifying radiotracer concentration in positron emission tomography (PET) imaging is essential, both clinically and, above all, in research, notably for pharmacokinetic analysis in humans and animals. Among the important corrections in PET imaging, the partial volume effect (PVE) is one of the most complex to correct. It depends on various physical phenomena related to the scanner's performance, to image reconstruction, and to the properties of the measured object. Correcting for the PVE is crucial to compensate for the low spatial resolution of the PET scanner. The main consequences of the PVE are an underestimation of radiotracer concentration in small objects and difficulty in correctly delineating the contours of the observed object. Many methods applied directly to the reconstructed images have been proposed in the literature and are used in some cases, but no consensus on a method has yet been established. This work concerns the understanding of the PVE problem through various mathematical tests and simulations, as well as the development and application of a new correction method for PET imaging. The main argument of this correction technique is that the loss of spatial resolution is recorded at the detector level, so the correction should be applied before image reconstruction. Moreover, the proposed technique has the advantage of accounting not only for the size but also for the shape of the object across the different projections, a benefit over approaches applied directly to the image. The correction is based on deconvolving the measured projections with appropriate spread functions. 
These are obtained experimentally from phantoms of cylinders of different diameters. To test the capabilities of this technique, the study was limited to correcting objects initially isolated by regions of interest on the image. The technique was then tested on images of isolated tumours and of the heart in rats, obtained by injection of ¹⁸F-fluorodeoxy-D-glucose (FDG). Finally, the consequences of PVE correction were studied by applying the compartmental pharmacokinetic analysis model of FDG in the rat heart.
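One common way to realise projection-space deconvolution of this kind is Richardson-Lucy iteration; the sketch below is illustrative only and is not necessarily the algorithm used in the thesis, whose spread functions come from cylinder phantom measurements.

```python
import numpy as np

def richardson_lucy_1d(projection, psf, n_iter=50):
    """Deconvolve one measured projection by a detector spread function
    using Richardson-Lucy iteration (illustrative; the thesis may use a
    different deconvolution scheme)."""
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]
    estimate = np.full_like(projection, projection.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = projection / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

Applied to a blurred narrow source, the iteration sharpens the profile and partially restores the peak amplitude that the partial volume effect suppressed, which is the behaviour the correction targets for small structures.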
297

Evaluating the security of anonymized big graph/structural data

Ji, Shouling 27 May 2016 (has links)
We studied the security of anonymized big graph data. Our main contributions include: new De-Anonymization (DA) attacks, comprehensive anonymity, utility, and de-anonymizability quantifications, and a secure graph data publishing/sharing system SecGraph. New DA Attacks. We present two novel graph DA frameworks: cold start single-phase Optimization-based DA (ODA) and De-anonymizing Social-Attribute Graphs (De-SAG). Unlike existing seed-based DA attacks, ODA does not require prior knowledge. In addition, ODA’s DA results can facilitate existing DA attacks by providing more seed information. De-SAG is the first attack that takes into account both graph structure and attribute information. Through extensive evaluations leveraging real-world graph data, we validated the performance of both ODA and De-SAG. Graph Anonymity, Utility, and De-anonymizability Quantifications. We developed new techniques that enable comprehensive graph data anonymity, utility, and de-anonymizability evaluation. First, we proposed the first seed-free graph de-anonymizability quantification framework under a general data model which provides the theoretical foundation for seed-free DA attacks. Second, we conducted the first seed-based quantification on the perfect and partial de-anonymizability of graph data. Our quantification closes the gap between seed-based DA practice and theory. Third, we conducted the first attribute-based anonymity analysis for Social-Attribute Graph (SAG) data. Our attribute-based anonymity analysis together with existing structure-based de-anonymizability quantifications provide data owners and researchers a more complete understanding of the privacy of graph data. Fourth, we conducted the first graph Anonymity-Utility-De-anonymizability (AUD) correlation quantification and provided closed-form expressions to explicitly demonstrate such correlation. 
Finally, based on our quantifications, we conducted large-scale evaluations leveraging 100+ real world graph datasets generated by various computer systems and services. Using the evaluations, we demonstrated the datasets’ anonymity, utility, and de-anonymizability, as well as the significance and validity of our quantifications. SecGraph. We designed, implemented, and evaluated the first uniform and open-source Secure Graph data publishing/sharing (SecGraph) system. SecGraph enables data owners and researchers to conduct accurate comparative studies of anonymization/DA techniques, and to comprehensively understand the resistance/vulnerability of existing or newly developed anonymization techniques, the effectiveness of existing or newly developed DA attacks, and graph and application utilities of anonymized data.
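For context, a toy seed-based propagation attack (the family the abstract contrasts the seed-free ODA and attribute-aware De-SAG against) can be sketched as follows; real attacks use far more robust similarity measures and tie-breaking, and everything here is illustrative.

```python
import itertools

def seed_propagation(g1, g2, seeds):
    """Minimal seed-based de-anonymization sketch: greedily match nodes
    whose neighbourhoods share the most already-mapped pairs.
    g1/g2 are adjacency dicts (node -> set of neighbours); seeds maps
    known g1 nodes to their g2 identities. Illustrative only."""
    mapping = dict(seeds)
    changed = True
    while changed:
        changed = False
        unmapped1 = [u for u in g1 if u not in mapping]
        unmapped2 = [v for v in g2 if v not in mapping.values()]
        best = None
        for u, v in itertools.product(unmapped1, unmapped2):
            # Count neighbours of u whose images are neighbours of v.
            score = sum(1 for w in g1[u]
                        if w in mapping and mapping[w] in g2[v])
            if score > 0 and (best is None or score > best[0]):
                best = (score, u, v)
        if best:
            mapping[best[1]] = best[2]
            changed = True
    return mapping
```

Each newly matched pair enlarges the evidence available for the next match, which is why, as the abstract notes, even a few seeds (or ODA's seed-free output) can bootstrap large-scale de-anonymization.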
298

Methods for solving problems in financial portfolio construction, index tracking and enhanced indexation

Mezali, Hakim January 2013 (has links)
The focus of this thesis is on index tracking, which aims to replicate the movements of an index of a specific financial market. It is a form of passive portfolio (fund) management that attempts to mirror the performance of a specific index and generate returns equal to those of the index, but without purchasing all of the stocks that make up the index. Additionally, we consider the problem of out-performing the index - enhanced indexation. It attempts to generate modest excess returns compared to the index. Enhanced indexation is related to index tracking in that it is a relative return strategy: one seeks a portfolio that will achieve more than the return given by the index (excess return). In the first approach, we propose two models for the objective function associated with the choice of a tracking portfolio, namely: minimise the maximum absolute difference between the tracking portfolio return and the index return, and minimise the average of the absolute differences between tracking portfolio return and index return. We illustrate and investigate the performance of our models from two perspectives: with and without the fixed and variable costs associated with buying or selling each stock. The second approach studied is that of using quantile regression for both index tracking and enhanced indexation. We present a mixed-integer linear programming formulation of these problems based on quantile regression. The third approach considered concerns quantifying the level of uncertainty associated with the portfolio selected. The quantification of uncertainty is of importance as this provides investors with an indication of the degree of risk that can be expected as a result of holding the selected portfolio over the holding period. Here a bootstrap approach is employed to quantify the uncertainty of the portfolio selected from our quantile regression model.
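The first objective above (minimise the maximum absolute deviation between tracking portfolio and index returns) is linearisable as a standard LP. Below is a bare-bones sketch of that formulation, without the thesis' transaction-cost or cardinality features, assuming scipy is available; variable names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_tracking(stock_returns, index_returns):
    """Long-only, fully invested weights minimising the maximum absolute
    difference between portfolio and index returns (a basic sketch of the
    first objective above). stock_returns: (T, N); index_returns: (T,)."""
    T, N = stock_returns.shape
    # Variables: [w_1..w_N, t]; minimise t (the worst-case deviation).
    c = np.zeros(N + 1)
    c[-1] = 1.0
    # w.R_t - index_t <= t   and   index_t - w.R_t <= t, for every period t
    A_ub = np.vstack([
        np.hstack([stock_returns, -np.ones((T, 1))]),
        np.hstack([-stock_returns, -np.ones((T, 1))]),
    ])
    b_ub = np.concatenate([index_returns, -index_returns])
    A_eq = np.hstack([np.ones((1, N)), np.zeros((1, 1))])  # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (N + 1))
    return res.x[:N], res.x[-1]
```

When the index is exactly replicable from the candidate stocks, the optimal worst-case deviation t collapses to zero and the weights recover the replicating combination; with real data t stays positive and measures the tracking quality.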
299

John Ruskin : conservative attitudes to the modern 1836-1860

Williams, Michael A. January 1997 (has links)
I examine the way in which, in his work of the 1840s, Ruskin uses methods and assumptions derived from eighteenth-century Materialist, Mechanist and Vitalist Natural Philosophy, especially his assertion that the meanings which he reads into natural phenomena are objectively present and can be quantified, and the way in which therefore aesthetic concepts, responses and judgements can be quantified, and their values fixed. I examine the ways in which Ruskin seeks to demonstrate the relationship between the Unity of Nature and the Multiplicity of Phenomena, not only as existing objectively in the external world, but also as reflected in the paintings of Turner. I suggest that his attempt at demonstration features a problematic relationship between his accounting for a material reality and the spiritual significances which he sees as immanent in it, and that resistance to the dynamism of contemporary industrial and social change is implicit in his celebration of an eternalised natural order. I examine four features of his correspondence during the 1840s: his dealings in the art market, his outright opposition to a number of modern developments, his urgent desires to see his favourite European architectural heritage preserved, and his strident xenophobia, and suggest relationships between the last two and his resistance to the modern. I examine the shift in his interests in the 1840s and 1850s from Nature and Art to Architecture and Man, and thence to Political Economy, and examine available accounts which rely too heavily on references to his psychological development, or on his claims to regular epiphanies, or on a significant shift in focus which can be explained by revealing the internal continuities in his work. 
I conclude with an attempt to demonstrate that what I have called the "broad sweep" approach obscures the confusions and contradictions in his position in the late 1840s and 1850s, and suggest that his social and intellectual inheritance, which is of a highly conservative and unremittingly paternalistic nature, crucially limits his work as a social critic. I offer three appendices: on the problem of the relationship between the Unity of Nature and the Multiplicity of Phenomena as that had been addressed in the Natural Philosophy on whose assumptions Ruskin draws; on eighteenth-century Materialist, Mechanist and Vitalist theories of matter; and on the work of Edmund Burke and Sir Charles Bell.
300

Statistical adjustment, calibration, and uncertainty quantification of complex computer models

Yan, Huan 27 August 2014 (has links)
This thesis consists of three chapters on the statistical adjustment, calibration, and uncertainty quantification of complex computer models with applications in engineering. The first chapter systematically develops an engineering-driven statistical adjustment and calibration framework, the second chapter deals with the calibration of a potassium current model in a cardiac cell, and the third chapter develops an emulator-based approach for propagating input parameter uncertainty in a solid end milling process. Engineering model development involves several simplifying assumptions for the purpose of mathematical tractability which are often not realistic in practice. This leads to discrepancies in the model predictions. A commonly used statistical approach to overcome this problem is to build a statistical model for the discrepancies between the engineering model and observed data. In contrast, an engineering approach would be to find the causes of discrepancy and fix the engineering model using first principles. However, the engineering approach is time consuming, whereas the statistical approach is fast. The drawback of the statistical approach is that it treats the engineering model as a black box and therefore, the statistically adjusted models lack physical interpretability. In the first chapter, we propose a new framework for model calibration and statistical adjustment. It tries to open up the black box using simple main effects analysis and graphical plots and introduces statistical models inside the engineering model. This approach leads to simpler adjustment models that are physically more interpretable. The approach is illustrated using a model for predicting the cutting forces in a laser-assisted mechanical micromachining process and a model for predicting the temperature of outlet air in a fluidized-bed process. The second chapter studies the calibration of a computer model of potassium currents in a cardiac cell. 
The computer model is expensive to evaluate and contains twenty-four unknown parameters, which makes the calibration challenging for the traditional methods using kriging. Another difficulty with this problem is the presence of large cell-to-cell variation, which is modeled through random effects. We propose physics-driven strategies for the approximation of the computer model and an efficient method for the identification and estimation of parameters in this high-dimensional nonlinear mixed-effects statistical model. Traditional sampling-based approaches to uncertainty quantification can be slow if the computer model is computationally expensive. In such cases, an easy-to-evaluate emulator can be used to replace the computer model to improve the computational efficiency. However, the traditional technique using kriging is found to perform poorly for the solid end milling process. In chapter three, we develop a new emulator, in which a base function is used to capture the general trend of the output. We propose optimal experimental design strategies for fitting the emulator. We call our proposed emulator local base emulator. Using the solid end milling example, we show that the local base emulator is an efficient and accurate technique for uncertainty quantification and has advantages over the other traditional tools.
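The base-function idea can be illustrated with a toy emulator: a least-squares trend captures the general shape of the output, and an interpolator models the residuals (here a radial basis function as a simple stand-in for the kriging discussed above). The class and names are illustrative, not the thesis' implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

class LocalBaseEmulator:
    """Toy emulator in the spirit described above: a parametric base
    function captures the output's general trend, and a radial-basis
    interpolator models the residuals (a stand-in for kriging)."""

    def __init__(self, base_features):
        self.base_features = base_features  # maps X -> design matrix

    def fit(self, X, y):
        F = self.base_features(X)
        self.beta, *_ = np.linalg.lstsq(F, y, rcond=None)  # trend coefficients
        self.resid_model = RBFInterpolator(X, y - F @ self.beta)
        return self

    def predict(self, X):
        return self.base_features(X) @ self.beta + self.resid_model(X)
```

Because the base function already tracks the trend, the residual surface is flatter and easier to interpolate, which is the intuition behind preferring such an emulator over plain kriging when the latter performs poorly.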
