101

Numerical simulation of multi-dimensional fractal soot aggregates

Suarez, Andres January 2018 (has links)
Superaggregates are clusters formed by different aggregation mechanisms acting at different scales; they are found, for example, in fluidized nanoparticles and in soot formation. An aggregate produced by a single aggregation mechanism can be described by the fractal dimension, df, which measures how the primary particles are distributed and arranged within the aggregate. Similarly, a superaggregate can be analyzed through the different fractal dimensions found at each of its scales. A fractal aggregate exhibits self-similarity across scales and follows a power-law relation between mass and aggregate size, which can be related to properties such as density or light scattering. The fractal dimension, df, is influenced by the aggregation mechanism, particle concentration, temperature, and residence time, among other variables. Moreover, this parameter helps in estimating aggregate properties, which in turn supports the design of new processes, the analysis of health effects, and the characterization of new materials.

A multi-dimensional soot aggregate was simulated with the following approach. The first aggregation stage was modeled with a diffusion-limited cluster-cluster aggregation (DLCA) mechanism, producing primary clusters with a fractal dimension, df1, close to 1.44. The second aggregation stage used a ballistic aggregation (BA) mechanism, in which the primary clusters generated in the first stage were assembled into a superaggregate. All models were validated against data reported from experiments and other computer models. Using the BA model with primary particles as building blocks, the fractal dimension, df2, was close to 2.0, the value expected from the literature. When primary clusters from the DLCA model are used as building blocks instead, df2 decreases, because the primary particles are distributed less compactly within the superaggregate's structure.

In the second aggregation stage, the fractal dimension, df2, increases with superaggregate size, approaching 2.0 asymptotically at larger scales. Partial reorganization was implemented in the BA mechanism, in which two contact points between primary clusters were enforced for stabilization. With this implementation, df2 increases faster than without partial reorganization; this results from a more tightly packed distribution of primary clusters at short-range scales, but it does not affect the scaling behavior of multi-dimensional fractal structures. The same results were obtained in scenarios where the building-block sizes ranged from 200 to 300 and from 700 to 800 primary particles.

These results demonstrate the importance of the fractal dimension, df, for aggregate characterization. The parameter is powerful, universal, and accurate, since identifying the different aggregation stages within a superaggregate improves the accuracy of property estimation, which is crucial in physics and process modeling.
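The mass-size power law underlying this analysis is commonly written N = kf (Rg/a)^df, relating the number of primary particles N to the radius of gyration Rg. The Python sketch below is purely illustrative and is not the author's simulation code: it assumes monodisperse primary particles of radius a, and the helper names are hypothetical. It estimates df by a log-log fit over a set of aggregates of different sizes.

```python
import numpy as np

def radius_of_gyration(coords):
    """Radius of gyration of an aggregate of equal-mass primary particles."""
    center = coords.mean(axis=0)
    return np.sqrt(((coords - center) ** 2).sum(axis=1).mean())

def estimate_fractal_dimension(aggregates, a=1.0):
    """Fit N = kf * (Rg / a)**df over a set of aggregates.

    aggregates : list of (N, 3) arrays of primary-particle centres
    a          : primary-particle radius (monodisperse assumption)
    Returns (df, kf) from a least-squares fit in log-log space.
    """
    N = np.array([len(c) for c in aggregates], dtype=float)
    Rg = np.array([radius_of_gyration(np.asarray(c, dtype=float)) for c in aggregates])
    slope, intercept = np.polyfit(np.log(Rg / a), np.log(N), 1)
    return slope, np.exp(intercept)  # fractal dimension df, prefactor kf
```

Applied separately to the small primary clusters and to the full superaggregate, such a fit would return the two exponents (df1 and df2) discussed in the abstract.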
102

Three-dimensional stress measurement technique based on electrical resistivity tomography / 電気比抵抗トモグラフィーに基づく三次元応力計測技術

Lu, Zirui 25 September 2023 (has links)
Kyoto University / New degree system, doctoral program / Doctor of Engineering / Kō No. 24896 / Kōhaku (Engineering) No. 5176 / 新制||工||1988 (University Library) / Department of Urban Management, Graduate School of Engineering, Kyoto University / Examining committee: (chief examiner) Associate Professor PIPATPONGSA Thirapong, Professor 肥後 陽介, Professor 岸田 潔, Professor 安原 英明 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
103

Pregnancy, Transition to Motherhood, Infant Feeding Attitudes and Health Locus of Control in Nigeria

Adegbayi, Adenike January 2022 (has links)
Exclusive breastfeeding and holistic maternity care are strategic to improving maternal and infant health outcomes in Nigeria. This thesis aimed to inform policies and interventions that promote breastfeeding and improve Nigerian mothers' experiences of antenatal and intrapartum care, focusing on the psychological dynamics underlying societal culture around maternity and breastfeeding. In the first, quantitative study, attitudes toward breastfeeding and health orientation were surveyed in 400 Nigerian men and women using the Iowa Infant Feeding Attitude Scale (IIFAS) and the Multidimensional Health Locus of Control Scale (MHLoC). Attitudes toward breastfeeding were more positive among males, participants in the 20-29-year-old age category, and those who identified as single. Higher internal health locus of control (HLoC) was associated with more positive attitudes to breastfeeding, and higher external HLoC (EHLoC) scores were associated with more negative attitudes to formula feeding. The second study explored the experience of pregnancy and childbirth in Nigerian women. Qualitative interviews with 12 women suggested that Nigerian women perceive pregnancy and childbirth as a multidimensional experience comprising physiological and psychological elements, and also as risky. Control mechanisms reflecting internal HLoC included choosing multiple antenatal care sources to obtain holistic care, adopting new technology to bridge perceived communication gaps with health care providers, and adopting physical and mental strategies to control the somatic and sensory changes that accompany pregnancy. Pregnancy and childbirth were also viewed through an external HLoC lens as spiritual, reflected in an entrenched belief in the intervention of a deity to mitigate the pain and risk associated with childbirth. These results have implications for practice, interventions, and policy to promote breastfeeding at the societal level and to improve maternity services for the current and next child-bearing generations.
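As a rough illustration of the kind of association analysis described above, the IIFAS total could be correlated with an internal-HLoC subscale score. This is a hedged sketch only: the file name and column names are hypothetical, and the thesis's actual dataset, item scoring and statistical procedures are not reproduced here.

```python
import pandas as pd
from scipy import stats

# Hypothetical file and column names; reverse-keyed IIFAS items are assumed already rescored upstream.
df = pd.read_csv("survey_responses.csv")                      # one row per participant

iifas_total = df.filter(like="iifas_").sum(axis=1)            # higher = more positive toward breastfeeding
ihloc_total = df.filter(like="mhloc_internal_").sum(axis=1)   # internal health locus of control subscale

r, p = stats.pearsonr(ihloc_total, iifas_total)
print(f"Internal HLoC vs IIFAS attitudes: r = {r:.2f}, p = {p:.3f}")
```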
104

An Interpretative Phenomenological Analysis of Positive Transformation: Fostering New Possibilities through High-Quality Connections, Multi-Dimensional Diversity, and Individual Transformation

Ewing, H. Timothy January 2011 (has links)
No description available.
105

SENIOR INFORMATION TECHNOLOGY (IT) LEADER CREDIBILITY: KNOWLEDGE SCALE, MEDIATING KNOWLEDGE MECHANISMS, AND EFFECTIVENESS

Shoop, Jessica A. 05 June 2017 (has links)
No description available.
106

Electronically-Scanned Wideband Digital Aperture Antenna Arrays using Multi-Dimensional Space-Time Circuit-Network Resonance

Pulipati, Sravan Kumar January 2017 (has links)
No description available.
107

Improving the Security of Mobile Devices Through Multi-Dimensional and Analog Authentication

Gurary, Jonathan 28 March 2018 (has links)
No description available.
108

Contributions to Mean Shift filtering and segmentation : Application to MRI ischemic data / Contributions au filtrage Mean Shift à la segmentation : Application à l’ischémie cérébrale en imagerie IRM

Li, Thing 04 April 2012 (has links)
Medical studies increasingly use multi-modality imaging, producing multidimensional data that bring additional information but are also challenging to process and interpret. For example, cerebral ischemia studies that combine several MRI modalities (e.g. DWI, PWI) to predict salvageable tissue give considerably better results than studies based on a single modality. However, the multi-modality approach requires more advanced algorithms to perform otherwise standard image processing tasks such as filtering, segmentation and clustering. A robust method for processing such multidimensional data is Mean Shift, which is based on feature-space analysis and non-parametric kernel density estimation and can be used for multi-dimensional filtering, segmentation and clustering. In this thesis, we analyze the factors that influence the Mean Shift process and optimize its parameters. In particular, we examine the effect of noise and blur in the feature space and how Mean Shift should be tuned for optimal de-noising and blur reduction. The wide success of Mean Shift is largely due to the intuitive tuning of its bandwidth parameters, which represent the scale at which each feature is analyzed. Building on the univariate Plug-In (PI) bandwidth selector of kernel density estimation, frequently used in Mean Shift filtering to approximate the optimal bandwidth, we propose a bandwidth matrix estimation method based on the multivariate PI rule for Mean Shift filtering, and we evaluate the interest of diagonal and full bandwidth matrices computed from PI rules on synthetic and natural images. Finally, we propose an automatic volume-based segmentation framework combining Mean Shift filtering and region-growing segmentation, together with a probability-map optimization. The framework is first developed on synthesized MRI images, on which it yields a perfect segmentation (DICE similarity of 1). Tests on real MRI data from studies of cerebral ischemia in rats and in humans are then conducted to assess how well the approach predicts the evolution of the ischemic penumbra several days after stroke onset, using only the MRI scans acquired shortly after the event. Compared with manual segmentations produced by medical experts several days after the stroke, the results are mixed: where a perfect segmentation would give a DICE coefficient of 1, the approach reaches 0.8 for the rat study and 0.53 for the human study. Using the DICE coefficient, we also determine the combination of MRI modalities that yields the best prediction.
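The core Mean Shift idea referred to above can be sketched compactly: every feature vector (voxel position plus multi-modal intensities) is shifted iteratively toward the nearest mode of the kernel density estimate, with one scale per dimension. The snippet below is only an illustrative sketch under a Gaussian kernel and a diagonal bandwidth matrix; it is not the thesis implementation and omits the multivariate Plug-In bandwidth selection.

```python
import numpy as np

def mean_shift_point(x, data, bandwidth, max_iter=100, tol=1e-4):
    """Shift one feature-space point to the density mode it converges to.

    x         : (d,) starting point in the joint spatial-range feature space
    data      : (n, d) feature vectors, e.g. voxel coordinates + multi-modal MRI intensities
    bandwidth : (d,) per-dimension scales (a diagonal bandwidth matrix H)
    """
    x = np.asarray(x, dtype=float)
    h = np.asarray(bandwidth, dtype=float)
    for _ in range(max_iter):
        u = (data - x) / h                           # differences scaled by the bandwidth
        w = np.exp(-0.5 * (u ** 2).sum(axis=1))      # Gaussian kernel weights
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:          # converged to a local density mode
            break
        x = x_new
    return x

# Filtering replaces each point by the mode it converges to; clustering groups points sharing a mode.
```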
109

Multi-dimensional Teager-Kaiser signal processing for improved characterization using white light interferometry / Traitement du signal Teager-Kaiser multi-dimensionel pour la caractérisation améliorée avec l'interférométrie en lumière blanche

Gianto, Gianto 14 September 2018 (has links)
The use of white light interference fringes as an optical probe in microscopy is of growing importance in materials characterization, surface metrology and medical imaging. Coherence Scanning Interferometry (CSI, also known as White Light Scanning Interferometry, WLSI) is well established for surface roughness and topography measurement [1], while Full-Field Optical Coherence Tomography (FF-OCT) is the variant used for tomographic analysis of complex transparent layers. Both techniques generally scan the fringes along the optical axis and acquire a stack of xyz images; image processing is then used to identify the fringe envelope along z at each pixel in order to locate either a single surface or multiple scattering objects within a layer. In CSI, measuring surface shape generally requires peak or phase extraction from the one-dimensional fringe signal, and most methods are based on an AM-FM model of the intensity variation measured along the optical axis of an interference microscope [2]. We have demonstrated earlier [3, 4] the ability of 2D approaches to compete with classical interferometry methods in terms of robustness and computing time. Moreover, whereas most methods use only the 1D data, it appears advantageous to take the spatial neighborhood into account through multidimensional approaches (2D, 3D, 4D), including the time parameter, in order to improve the measurements. The purpose of this PhD project is to develop new n-D approaches suitable for improved characterization of more complex surfaces and transparent layers. In addition, the study is enriched by heterogeneous image processing from multiple sensor sources (heterogeneous data fusion). The applications considered are in materials metrology, biomaterials and medical imaging.
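The fringe-envelope idea can be made concrete with the discrete Teager-Kaiser energy operator, Ψ[x(n)] = x(n)² − x(n−1)·x(n+1), applied along the scanning axis: the operator responds strongly where the AM-FM fringe signal has large local amplitude, so its maximum approximates the envelope peak. The sketch below is a hedged 1D illustration of that principle only, not the thesis's multi-dimensional (2D/3D/4D) processing, and the function name is hypothetical.

```python
import numpy as np

def fringe_peak_positions(stack):
    """Rough surface-height map from an interference image stack.

    stack : (nz, ny, nx) array of intensities acquired while scanning along z.
    Returns, for every pixel, the z index where the Teager-Kaiser energy of the
    fringe signal is maximal, a simple proxy for the fringe-envelope peak.
    """
    x = np.asarray(stack, dtype=float)
    ac = x - x.mean(axis=0)                           # remove the background (DC) intensity
    psi = np.zeros_like(ac)
    psi[1:-1] = ac[1:-1] ** 2 - ac[:-2] * ac[2:]      # discrete TKEO applied along the z axis
    return psi.argmax(axis=0)                         # envelope-peak index per pixel
```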
110

A Study of the Content Analysis of Science and Technology Policy Websites / 科技政策網站內容分析之研究

賴昌彥, Lai, Chang-Yen Unknown Date (has links)
With the rapid growth of World Wide Web (WWW) applications, the Internet is flooded with information resources of every kind, and how to manage and retrieve these data effectively has become one of the key issues in information management today. The most common tool for discovering information is the search engine, which matches a query string against an index table, finds relevant web documents, and returns the results. Because the descriptive information of web pages is insufficient, however, search engines return large numbers of irrelevant results and waste much of the user's time. To address this problem from the information-retrieval perspective, this study proposes an architecture that uses text-mining techniques to analyze web page content, converts the content into dimensional descriptions, and stores it in a multi-dimensional database, serving as a reference architecture for improving current information retrieval. From the information-description perspective, this study proposes using RDF (Resource Description Framework) to describe web page metadata. Describing web resources in this common data format provides a cross-domain standard for representing information and facilitates communication among Web applications, with the aim of remedying the shortcomings of current Internet resource description and substantially improving search quality.
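To make the RDF proposal concrete, a web page's metadata can be expressed as subject-predicate-object triples using a common vocabulary such as Dublin Core. The snippet below is only an illustrative sketch using the rdflib library; the resource URI, the example namespace and the property choices are hypothetical and do not reproduce the study's actual schema.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

EX = Namespace("http://example.org/terms/")              # hypothetical vocabulary
page = URIRef("http://example.org/policy/report-2001")   # hypothetical web resource

g = Graph()
g.add((page, RDF.type, EX.PolicyDocument))
g.add((page, DC.title, Literal("National science and technology policy white paper")))
g.add((page, DC.language, Literal("zh-TW")))
g.add((page, EX.topic, Literal("science and technology policy")))

# Serialize to Turtle: machine-readable metadata that Web applications can exchange.
print(g.serialize(format="turtle"))
```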
