151

DOSY External Calibration Curve Molecular Weight Determination as a Valuable Methodology in Characterizing Reactive Intermediates in Solution

Neufeld, Roman 14 March 2016 (has links)
No description available.
152

A Study of Several Statistical Methods for Classification with Application to Microbial Source Tracking

Zhong, Xiao 30 April 2004 (has links)
With the advent of computers and the information age, the vast amounts of data generated in many fields of science and industry demand further statistical exploration. In particular, statistical and computational problems in biology and medicine have created the new field of bioinformatics, which is attracting more and more statisticians, computer scientists, and biologists. Several procedures have been developed for tracing the source of fecal pollution in water resources based on characteristics of certain microorganisms; this collection of techniques has been termed microbial source tracking (MST). Most current methods for MST are based on patterns of either phenotypic or genotypic variation in indicator organisms. Studies also suggest that patterns of genotypic variation may be more reliable, because they are less strongly associated with environmental factors than phenotypic patterns. Among the genotypic methods for source tracking, fingerprinting via rep-PCR is the most common. Thus, identifying specific pollution sources in contaminated waters based on rep-PCR fingerprinting, viewed as a classification problem, has become an increasingly popular research topic in bioinformatics. In this project, several statistical methods for classification were studied, including linear discriminant analysis, quadratic discriminant analysis, logistic regression, $k$-nearest-neighbor rules, neural networks, and support vector machines. This report summarizes each of these methods and the relevant statistical theory. In addition, an application of these methods to a particular set of MST data is presented and comparisons are made.
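The abstract names six classifier families. As a hedged illustration of how such a comparison might be set up, the sketch below uses scikit-learn on synthetic stand-in data, not the rep-PCR MST dataset of the thesis; all names, sizes, and parameter choices here are our own assumptions.

```python
# Minimal sketch of comparing the classifiers named in the abstract.
# The data are synthetic stand-ins for rep-PCR fingerprint features,
# NOT the MST dataset used in the project.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic "fingerprint" data: 4 host-source classes, 30 band-intensity features.
X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                           n_classes=4, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "Neural network": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                    random_state=0),
    "SVM (RBF)": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name:22s} mean accuracy = {scores.mean():.3f}")
```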
153

Použití koherentních metod měření rizika v modelování operačních rizik / The use of coherent risk measures in operational risk modeling

Lebovič, Michal January 2012 (has links)
The debate on quantitative operational risk modeling only started at the beginning of the last decade, and best practices are still far from established. Estimation of capital requirements for operational risk under the Advanced Measurement Approaches of Basel II is critically dependent on the choice of risk measure, which quantifies the risk exposure based on the underlying simulated distribution of losses. Despite its well-known caveats, Value-at-Risk remains the predominant risk measure used in the context of operational risk management. We describe several serious drawbacks of Value-at-Risk and explain why it can lead to misleading conclusions. As a remedy we suggest the use of coherent risk measures - namely the statistic known as Expected Shortfall - as a suitable alternative or complement for the quantification of operational risk exposure. We demonstrate that the application of Expected Shortfall in operational loss modeling is feasible and produces reasonable and consistent results. We also consider a variety of statistical techniques for modeling the underlying loss distribution and find the extreme value theory framework the most suitable for this purpose. Using stress tests we further compare the robustness and consistency of selected models and their implied risk capital estimates...
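For readers unfamiliar with the two risk measures, the sketch below shows how Value-at-Risk and Expected Shortfall are typically estimated from a simulated loss distribution. The lognormal losses are an illustrative assumption, not the loss models fitted in the thesis.

```python
# Minimal sketch: empirical VaR and Expected Shortfall from simulated losses.
# The lognormal loss distribution is a placeholder, not the thesis's fitted model.
import numpy as np

rng = np.random.default_rng(seed=1)
losses = rng.lognormal(mean=10.0, sigma=2.0, size=1_000_000)  # simulated losses

alpha = 0.999  # confidence level typical for operational risk capital

# Value-at-Risk: the alpha-quantile of the loss distribution.
var = np.quantile(losses, alpha)

# Expected Shortfall: the mean loss conditional on exceeding VaR.
# Unlike VaR, it reflects the severity of tail losses and is a coherent measure.
es = losses[losses >= var].mean()

print(f"VaR({alpha:.1%}) = {var:,.0f}")
print(f"ES({alpha:.1%})  = {es:,.0f}  (always >= VaR)")
```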
154

Doppler Radar Data Processing And Classification

Aygar, Alper 01 September 2008 (has links) (PDF)
In this thesis, improving the performance of automatic recognition of Doppler radar targets is studied. The radar used in this study is a ground-surveillance Doppler radar. The target types are car, truck, bus, tank, helicopter, moving man, and running man. The input to this thesis is real Doppler radar signals that were normalized and preprocessed into Target Recognition Pattern (TRP) vectors in the doctoral thesis by Erdogan (2002). TRP vectors are Doppler radar target signals normalized and homogenized with respect to target speed, target aspect angle, and target range. Some target classes have repetitions in time in their TRPs, and the use of these repetitions to improve target type classification performance is studied. K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms are used for Doppler radar target classification and the results are evaluated. Before classification, PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), NMF (Nonnegative Matrix Factorization), and ICA (Independent Component Analysis) are applied to the normalized Doppler radar signals for efficient feature extraction and dimension reduction. These techniques transform the input vectors, the normalized Doppler radar signals, into another space. The effects of these feature extraction algorithms and of the use of repetitions in Doppler radar target signals on classification performance are studied.
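As a rough illustration of the processing chain described above (dimension reduction followed by KNN or SVM classification), the sketch below builds such pipelines in scikit-learn on placeholder data; it does not use the TRP vectors, feature counts, or parameter choices from the thesis.

```python
# Sketch of dimension-reduction + classification pipelines of the kind
# described in the abstract. The random data stand in for TRP vectors, so the
# printed accuracies are meaningless; only the structure is illustrative.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA, NMF, FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(700, 128)))   # placeholder "TRP vectors" (non-negative for NMF)
y = rng.integers(0, 7, size=700)          # 7 target classes (car, truck, bus, ...)

pipelines = {
    "PCA + SVM": Pipeline([("reduce", PCA(n_components=20)), ("clf", SVC())]),
    "PCA + KNN": Pipeline([("reduce", PCA(n_components=20)),
                           ("clf", KNeighborsClassifier(n_neighbors=5))]),
    "NMF + SVM": Pipeline([("reduce", NMF(n_components=20, max_iter=500)),
                           ("clf", SVC())]),
    "ICA + SVM": Pipeline([("reduce", FastICA(n_components=20)), ("clf", SVC())]),
    "LDA + SVM": Pipeline([("reduce", LinearDiscriminantAnalysis(n_components=6)),
                           ("clf", SVC())]),
}

for name, pipe in pipelines.items():
    acc = cross_val_score(pipe, X, y, cv=3).mean()
    print(f"{name}: cross-validated accuracy = {acc:.3f}")
```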
155

Electronic structure and exchange integrals of low-dimensional cuprates

Rosner, Helge 19 September 1999 (has links) (PDF)
The physics of cuprates is strongly influenced by the dimension of the copper-oxygen network in the crystals considered. Owing to the rich variety of network geometries realized in nature, cuprates are ideal model systems for experimental and theoretical studies of low-dimensional, strongly correlated systems. The dimensionality of the model compounds considered here varies between zero and three, with a focus on one- and two-dimensional compounds. Starting from LDA band structure calculations, the orbitals relevant for the low-energy physics are characterized, together with a discussion of the chemical bonding in the investigated compounds. By means of a systematic approach across various compounds, the influence of particular structural components on the electronic structure could be inferred. For the undoped cuprate compounds, paramagnetic LDA band structure calculations yield a metallic ground state instead of the experimentally observed insulating behavior. The strong correlations were taken into account using Hubbard- or Heisenberg-like models for the investigation of the magnetic couplings in cuprates; the necessary parameters were obtained from tight-binding parameterizations of the LDA band structures. Finally, several ARPES as well as XAS measurements were interpreted. The present work shows that the combination of experiment, LDA, and model calculations is a powerful tool for the investigation of the electronic structure of strongly correlated systems.
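At the simplest level of perturbation theory, the link between tight-binding parameters extracted from LDA bands and the exchange integrals of a Heisenberg model is the textbook estimate J = 4t²/U_eff. The sketch below evaluates this estimate for illustrative parameter values that are not taken from the thesis.

```python
# Textbook one-band estimate of the antiferromagnetic exchange integral from
# tight-binding parameters: J = 4 t^2 / U_eff (second-order perturbation theory).
# The numbers below are illustrative orders of magnitude, not fitted values.

def exchange_integral(t_ev: float, u_eff_ev: float) -> float:
    """Return J in eV for hopping t and effective on-site repulsion U_eff."""
    return 4.0 * t_ev**2 / u_eff_ev

t = 0.40      # nearest-neighbor hopping (eV), cuprate-like order of magnitude
u_eff = 4.0   # effective Coulomb repulsion (eV)

j_ev = exchange_integral(t, u_eff)
print(f"J = {j_ev * 1000:.0f} meV (~{j_ev / 8.617e-5:.0f} K)")  # k_B = 8.617e-5 eV/K
```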
156

Analyse chimique des matières résineuses employées dans le domaine artistique pré-hispanique au Mexique : application aux échantillons archéologiques aztèque et maya / Chemical analysis of resinous materials used in pre-Hispanic art in Mexico: application to Aztec and Maya archaeological samples

Lucero, Paola 14 September 2012 (has links) (PDF)
In the present research work, our team studied the chemical composition - at the molecular level - of a group of resinous samples taken from Aztec and Maya objects. The ultimate aim of this study was to establish the botanical origin of these resins and to gain an overview of the formulations used in adhesives or figurines, in order to better understand their physical properties. To reach this objective, a very specific analytical strategy was designed and implemented. This strategy included the analysis of archaeological resin samples as well as resins of certified botanical origin and commercial resins purchased in traditional markets in Mexico, in a region corresponding to the geographical zone formerly occupied by the Aztec empire. The study of the materials relied on techniques such as microscopic observation, infrared spectroscopy (FTIR), liquid chromatography with diode-array detection (HPLC-UV/Vis), and gas chromatography coupled with mass spectrometry (GC-MS). The molecular study of these samples, and more precisely of their triterpenic fraction, made it possible to: establish a molecular profile of the resins for each botanical origin studied; precisely identify certain triterpenic compounds present in all the samples; pinpoint the triterpenic molecules that could be used in the future as molecular markers of botanical origin for fresh resins of the species studied; suggest a botanical origin for the Aztec archaeological samples; rule out possible botanical origins for the Maya sample; obtain an overview of the botanical origin of the resins sold in Mexican territory under the name "copal"; create a simple analytical protocol enabling conservation and biomaterials professionals to establish the botanical origin and a molecular profile of resins used in various contexts; characterize the behavior of copal during natural and artificial ageing; identify molecules likely to serve as molecular markers for both types of ageing; and establish that the botanical origin of a resinous sample can be recovered despite its (archaeological) age.
157

Near-Field Study of Multiple Interacting Jets : Confluent Jets

Ghahremanian, Shahriar January 2015 (has links)
This thesis deals with the near-field of confluent jets, which is of interest in many engineering applications such as the design of ventilation supply devices. The physical effect of the interaction between multiple closely spaced jets is studied using experimental and numerical methods. The primary aim of this study is to develop a better understanding of the flow and turbulence behavior of multiple interacting jets, and in particular to gain insight into the confluence of jets occurring in their near-field. The array of multiple interacting jets is studied when the jets are placed on a flat and on a curved surface. To obtain the boundary conditions at the nozzle exits of the confluent jets on a curved surface, numerical predictions of a cylindrical air supply device using two turbulence models (realizable k-ε and the Reynolds stress model) are validated against hot-wire anemometry (HWA) measurements near different nozzle discharges in the array. A single round jet is then studied to find appropriate turbulence models for the prediction of the three-dimensional flow field and to understand the effect of the boundary conditions prescribed at the nozzle inlet. In comparison with HWA measurements, the turbulence models with low Reynolds number corrections (k-ε and shear stress transport [SST] k-ω) give reasonable flow predictions for the single round jet with the prescribed inlet boundary conditions, while the transition models (k-kl-ω and transition SST k-ω) are unable to predict the flow in the turbulent region. The numerical predictions (low Reynolds SST k-ω model) using the prescribed inlet boundary conditions agree well with the HWA measurements in the near-field of confluent jets on a curved surface, except in the merging region. Instantaneous velocity measurements are performed with laser Doppler anemometry (LDA) and particle image velocimetry (PIV) in two configurations, a single row of parallel coplanar jets and an inline array of jets on a flat surface. The LDA and PIV results are compared and show good agreement except near the nozzle exits. The streamwise velocity profile of the jets in the initial region shows a saddle-back shape, with attenuated turbulence in the core region and two off-centered narrow peaks. When confluent jets issue from an array of closely spaced nozzles, they may converge, merge, and combine after a certain distance downstream of the nozzle edge. Deflection plays a salient role for the multiple interacting jets (except in the single-row configuration), where all the jets converge towards the center of the array. The jet position (central, side, or corner) significantly influences development features such as velocity decay and lateral displacement. The flow field of confluent jets exhibits asymmetrical distributions of the Reynolds stresses around the jet axes and highly anisotropic turbulence. The velocity decays more slowly in the combined region of confluent jets than in a single jet. Using the response surface methodology, the correlations between characteristic points (merging and combined points) and the statistically significant terms of the three design factors (inlet velocity, spacing between the nozzles, and diameter of the nozzles) are determined for the single row of coplanar parallel jets. The computational parametric study of the single-row configuration shows that spacing has the greatest impact on the near-field characteristics.
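As a rough illustration of how a response surface can relate a characteristic point to the three design factors mentioned above, the sketch below fits a quadratic surface to entirely synthetic data; the factor ranges, the "merging point" response, and the fitted coefficients are placeholders, not results from the thesis.

```python
# Sketch of response surface methodology: fit a quadratic surface relating a
# response (a fictitious "merging point" location) to three design factors.
# All data are synthetic placeholders, not measurements from the thesis.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Design factors: inlet velocity U0 [m/s], nozzle spacing S [mm], nozzle diameter d [mm].
n = 60
U0 = rng.uniform(2.0, 10.0, n)
S = rng.uniform(5.0, 25.0, n)
d = rng.uniform(3.0, 10.0, n)
X = np.column_stack([U0, S, d])

# Fictitious response: merging point (in nozzle diameters) with measurement noise.
y = 1.5 + 0.8 * (S / d) + 0.05 * U0 + rng.normal(scale=0.3, size=n)

# Quadratic response surface: linear, interaction, and squared terms.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, y)

print("R^2 on training data:", round(model.score(X, y), 3))
print("Predicted merging point at U0=6 m/s, S=15 mm, d=6 mm:",
      round(float(model.predict([[6.0, 15.0, 6.0]])[0]), 2), "diameters")
```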
158

Density functional simulations of defect behavior in oxides for applications in MOSFET and resistive memory

Li, Hongfei January 2018 (has links)
Defects in functional oxides play an important role in electronic devices such as metal oxide semiconductor field effect transistors (MOSFETs) and resistive random-access memories (ReRAMs). The continuous scaling of CMOS has brought the Si MOSFET to its physical technology limit, and replacement of the Si channel with a Ge channel is required. However, the performance of Ge MOSFETs suffers from Ge/oxide interface quality and reliability problems, which originate from charge traps and defect states in the oxide or at the Ge/oxide interface. Sub-oxide layers composed of Ge(II) states at the Ge/GeO2 interface seem unavoidable with normal passivation methods such as hydrogen treatment; these layers have poor electrical properties and are related to the reliability problem. On the other hand, ReRAM works by the formation and rupture of O-vacancy conducting filaments, but how this process happens at the atomic scale remains unclear. In this thesis, density functional theory is applied to investigate defect behaviours in oxides and address existing issues in these electronic devices. In chapter 3, the amorphous atomic structures of doped GeO2 and of Ge/GeO2 interface networks are investigated to explain the improved MOSFET reliability observed in experiments. The reliability improvement is attributed to the passivation of valence alternation pair (VAP) type O deficiency defects by doped rare earth metals. In chapter 4, the oxidation mechanism of GeO2 is investigated by transition state simulations of intrinsic defect diffusion in the network. It is proposed that GeO2 grows from the Ge substrate through lattice O interstitial diffusion, unlike SiO2, which is oxidized by O2 molecule diffusion. This new mechanism fully explains the puzzling isotope tracer results reported in the literature. In chapter 5, the Fermi level pinning effect is explored for metal-semiconductor electrical contacts in Ge MOSFETs. It is found that germanides show much weaker Fermi level pinning than normal metals on Ge, which is well explained by interfacial dangling bond states. These results are important for tuning Schottky barrier heights (SBHs) of n-type contacts on Ge for use on high-mobility Ge substrates in future CMOS devices. In chapter 6, we investigate surface and subsurface O vacancy defects in three stable TiO2 surfaces. The low formation energy under O-poor conditions and the fact that the +2 charge state is the most stable O vacancy are beneficial to the formation and rupture of the conducting filament in ReRAM, which makes TiO2 a good candidate ReRAM material. In chapter 7, we investigate hydrogen behaviour in amorphous ZnO. It is found that hydrogen exists as hydrogen pairs trapped at oxygen vacancies, forming Zn-H bonds. This differs from c-ZnO, where H acts as a shallow donor. The O vacancy/2H complex defect introduces defect states in the lower part of the gap, which is proposed to be the origin of the negative-bias light-induced stress instability.
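The comparison between charge states of the O vacancy rests on the standard supercell defect formation energy expression, E_f(q) = E_def(q) - E_bulk - Σ_i n_i μ_i + q(E_VBM + E_F) + E_corr. The sketch below evaluates it for invented energies, purely to show how the most stable charge state is identified as the Fermi level moves through the gap; none of the numbers come from the thesis.

```python
# Sketch of the standard supercell defect-formation-energy bookkeeping used to
# compare charge states of an O vacancy. All energies below are invented
# placeholders, not DFT results from the thesis.

def formation_energy(e_defect: float, e_bulk: float, n_o_removed: int,
                     mu_o: float, q: int, e_vbm: float, e_fermi: float,
                     e_corr: float = 0.0) -> float:
    """E_f(q) = E_def - E_bulk + n*mu_O + q*(E_VBM + E_F) + E_corr (eV)."""
    return e_defect - e_bulk + n_o_removed * mu_o + q * (e_vbm + e_fermi) + e_corr

# Hypothetical total energies for an oxide supercell with one O vacancy (eV).
e_bulk = -1000.00
e_def = {0: -991.50, 1: -993.20, 2: -994.90}     # neutral, +1, +2 vacancy
mu_o = -5.00          # O chemical potential under O-poor conditions
e_vbm = 0.00          # valence band maximum as the energy reference
e_corr = {0: 0.0, 1: 0.1, 2: 0.4}                # finite-size charge corrections

for e_fermi in (0.0, 1.0, 2.0, 3.0):             # Fermi level position in the gap (eV)
    ef = {q: formation_energy(e_def[q], e_bulk, 1, mu_o, q, e_vbm, e_fermi,
                              e_corr[q]) for q in e_def}
    stable = min(ef, key=ef.get)
    print(f"E_F = {e_fermi:.1f} eV -> most stable vacancy charge state q = +{stable}"
          f" (E_f = {ef[stable]:.2f} eV)")
```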
159

Lattice Codes for Secure Communication and Secret Key Generation

Vatedka, Shashank January 2017 (has links) (PDF)
In this work, we study two problems in information-theoretic security. First, we study a wireless network where two nodes want to securely exchange messages via an honest-but-curious bidirectional relay. There is no direct link between the user nodes, and all communication must take place through the relay. The relay behaves like a passive eavesdropper, but otherwise follows the protocol it is assigned. Our objective is to design a scheme where the user nodes can reliably exchange messages such that the relay gets no information about the individual messages. We first describe a perfectly secure scheme using nested lattices, and show that our scheme achieves secrecy regardless of the distribution of the additive noise, even if this distribution is unknown to the user nodes. Our scheme is explicit, in the sense that for any pair of nested lattices, we give the distribution used for randomization at the encoders to guarantee security. We then give a strongly secure lattice coding scheme, and we characterize the performance of both these schemes in the presence of Gaussian noise. We then extend our perfectly secure and strongly secure schemes to obtain a protocol that guarantees end-to-end secrecy in a multihop line network. We also briefly study the robustness of our bidirectional relaying schemes to channel imperfections. In the second problem, we consider the scenario where multiple terminals have access to private correlated Gaussian sources and a public noiseless communication channel. The objective is to generate a group secret key using their sources and public communication in such a way that an eavesdropper with access to the public communication obtains no information about the key. We give a nested lattice-based protocol for generating strongly secure secret keys from independent and identically distributed copies of the correlated random variables. Under certain assumptions on the joint distribution of the sources, we derive achievable secret key rates. The tools used in designing protocols for both these problems are nested lattice codes, which have been widely used in several problems of communication and security. In this thesis, we also study lattice constructions that permit polynomial-time encoding and decoding. In this regard, we first look at a class of lattices obtained from low-density parity-check (LDPC) codes, called Low-density Construction-A (LDA) lattices. We show that high-dimensional LDA lattices have several “goodness” properties that are desirable in many problems of communication and security. We also present a new class of low-complexity lattice coding schemes that achieve the capacity of the AWGN channel. Codes in this class are obtained by concatenating an inner Construction-A lattice code with an outer Reed-Solomon code or an expander code. We show that this class of codes can achieve the capacity of the AWGN channel with polynomial encoding and decoding complexities. Furthermore, the probability of error decays exponentially in the block length for a fixed transmission rate R strictly less than the capacity. To the best of our knowledge, this is the first capacity-achieving coding scheme for the AWGN channel with an exponentially decaying probability of error and polynomial encoding/decoding complexities.
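To convey the flavor of the perfect-secrecy argument for bidirectional relaying, the following toy sketch uses the simplest possible nested lattice pair (the integers Z inside the coarse lattice qZ, i.e. arithmetic mod q) and checks empirically that the modulo-sum seen by the relay is statistically independent of either individual message. This one-dimensional, noiseless toy is only an analogue of the Gaussian-channel nested-lattice schemes in the thesis, not the schemes themselves.

```python
# Toy 1-D analogue of nested-lattice bidirectional relaying: with fine lattice Z
# and coarse lattice qZ, the relay observes (x1 + x2) mod q. If x2 is uniform on
# {0,...,q-1} and independent of x1, the observation is uniform and reveals
# nothing about x1 (a one-time-pad argument). Illustration only.
import numpy as np

q = 16                                 # size of the quotient group Z / qZ
rng = np.random.default_rng(7)
n_trials = 200_000

x1 = rng.integers(0, q, n_trials)      # node A's messages
x2 = rng.integers(0, q, n_trials)      # node B's messages (uniform, independent)
relay_obs = (x1 + x2) % q              # what the honest-but-curious relay sees

# Empirical check: the distribution of the relay's observation given x1 = 0
# matches its unconditional distribution (both approximately uniform).
cond = np.bincount(relay_obs[x1 == 0], minlength=q) / np.sum(x1 == 0)
uncond = np.bincount(relay_obs, minlength=q) / n_trials
print("max |P(obs | x1=0) - P(obs)| =", float(np.max(np.abs(cond - uncond))))

# Each node can still recover the other's message from the relay's broadcast:
decoded_at_a = (relay_obs - x1) % q    # node A subtracts its own message
print("node A recovers x2 correctly:", bool(np.all(decoded_at_a == x2)))
```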
160

以推敲可能性模式探討影響評論幫助性之因素 / Factors Affecting Review Helpfulness : An Elaboration Likelihood Model Perspective

熊耿得, Hsiung, Keng-Te Unknown Date (has links)
在電子商務中,評論會影響消費者的購買決策,透過評論幫助性可以篩選出關鍵的評論,以利消費者進行決策。本研究以推敲可能性模式作為研究架構,透過文字探勘挖掘評論的文本特性來探討影響幫助性之要素,中央線索除了評論長度與可讀性外,利用LDA主題模型衡量評論主題廣度;周邊線索則是透過環狀情緒模型進行情感分析,並透過評論者排名來衡量來源可信度,利用亞馬遜商店中的資料進行驗證分析。結果發現,消費者在判斷評論幫助性時,會參考中央以及周邊線索。具備高論點品質的中央線索將有效提升評論幫助性;周邊線索整體而言,證實了社會中存在負向偏誤,具備喚起度的負向情感較容易提升評論幫助性,而評論是否被認為有幫助確實會受到評論者的排名所影響。進階分析結果顯示,周邊的情感效果會受到評論者排名高低的影響,前段評論者應保持中立避免帶有個人情緒;中段評論者的評論幫助性會隨著情緒喚起度而增加;後段評論者則需要增加自身的負向情感,才能夠對於評論幫助性有正向影響。 / Online reviews are important factors in consumers' purchase decisions, and review helpfulness allows consumers to quickly identify useful reviews. The purpose of this study is to investigate the characteristics of online reviews that affect their helpfulness through the lens of the elaboration likelihood model. For the central cues, we adopt latent Dirichlet allocation to measure review breadth in addition to review length and review readability. For the peripheral cues, we use sentiment analysis based on the circumplex model to capture emotional effects and use reviewer ranking to measure source credibility. We used a dataset collected from Amazon.com to evaluate our model. The results suggest that consumers attend to both central and peripheral cues when reading reviews: they care about the length, breadth, and readability of reviews (the central route) and about emotional effects (the peripheral route). In a further analysis, we split our sample into three groups by reviewer ranking. We found that top reviewers should stay neutral and avoid personal feelings to make their reviews more helpful; middle-ranked reviewers can use more arousing words to improve their review helpfulness; and bottom-ranked reviewers must strengthen their emotional valence, especially negative emotion, to increase perceived review helpfulness.
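One of the central cues above, review breadth, is operationalized through an LDA topic model. The sketch below shows one plausible way to compute such a breadth score (topic-distribution entropy) with scikit-learn on toy reviews rather than the Amazon dataset; the entropy-based measure is our own illustrative choice, not necessarily the exact operationalization used in the study.

```python
# Sketch: measuring the topical "breadth" of a review with an LDA topic model.
# Toy reviews and the entropy-based breadth score are illustrative choices,
# not the corpus or the exact measure used in the study.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "battery life is great and the screen is bright",
    "shipping was slow but customer service resolved the issue quickly",
    "great battery, sharp screen, fast shipping, helpful support, fair price",
    "terrible build quality, the case cracked after one week",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(reviews)            # document-term matrix

lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(dtm)                     # per-review topic mixtures

# Breadth score: Shannon entropy of the topic distribution. A review spread
# over many topics scores higher than one focused on a single topic.
breadth = -(theta * np.log(theta + 1e-12)).sum(axis=1)

for text, b in zip(reviews, breadth):
    print(f"breadth = {b:.2f} | {text[:50]}")
```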
