251

Detection of Fast Moving Pulses in a Noisy Environment

Renault, Raphael 01 February 2001 (has links)
We develop and analyze a combination of techniques to improve the timing-measurement accuracy of systems processing Gaussian pulses distorted by noise. The approach combines M-of-N detection, integration, and either correlation-based or threshold-based timing measurement. The process improves the detection capabilities of the system: it raises the detection probability, lowers the false-alarm probability, reduces pulse distortion, and increases the accuracy of time-delay measurements between pulses under either timing method. Each element of the proposed architecture is studied separately and modeled analytically. From these models, a design method is proposed for developing an appropriate solution for any system requiring accurate time-delay measurements in noisy environments. The method is then applied to a real system, and the results meet expectations in both detection improvement and rms timing error: the signal-to-noise ratio (SNR) operating point of the system is lowered by 10 dB, and correlation yields 2 dB less rms timing error than thresholding. / Master of Science
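The M-of-N detection, integration, and threshold-timing steps described in this abstract can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation; the pulse shape, noise level, thresholds, and M/N values are all assumed for demonstration.

```python
import math
import random

def gaussian_pulse(t, t0=5.0, sigma=0.6):
    return math.exp(-((t - t0) ** 2) / (2 * sigma ** 2))

random.seed(1)
dt = 0.05
ts = [i * dt for i in range(200)]
noise_sigma = 0.5
n_looks = 8        # N noisy observations of the same repeating pulse
m_required = 4     # M-of-N detection: require M threshold crossings

looks = [[gaussian_pulse(t) + random.gauss(0, noise_sigma) for t in ts]
         for _ in range(n_looks)]

# M-of-N detection at the sample nearest the true pulse peak
peak_idx = min(range(len(ts)), key=lambda i: abs(ts[i] - 5.0))
crossings = sum(look[peak_idx] > 0.4 for look in looks)
detected = crossings >= m_required

# integration: averaging N looks lowers the noise std by sqrt(N)
integrated = [sum(look[i] for look in looks) / n_looks for i in range(len(ts))]

# threshold timing: leading half-maximum edge of the integrated pulse
half = max(integrated) / 2
i = max(range(len(ts)), key=lambda k: integrated[k])
while i > 0 and integrated[i] >= half:
    i -= 1
t_thresh = ts[i + 1]
print(detected, round(t_thresh, 2))
```

Averaging the N looks before timing is what reduces pulse distortion: the noise standard deviation drops by a factor of sqrt(N), so the half-maximum crossing is located more reliably.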
252

Effects of Site Response on the Correlation Structure of Ground Motion Residuals

Motamed, Maryam 06 February 2014 (has links)
Seismic hazard analyses require an estimate of earthquake ground motions from future events. These predictions are achieved through Ground Motion Prediction Equations, which include a prediction of the median and the standard deviation of ground motion parameters. The differences between observed and predicted ground motions, when normalized by the standard deviation, are referred to as epsilon (𝜖). For spectral accelerations, the correlation structure of normalized residuals across oscillator periods is important for guiding ground motion selection. Correlation structures for large global datasets have been studied extensively. These correlation structures reflect effects that are averaged over the entire dataset underlying the analyses. This paper considers the effects of site response, at given sites, on the correlation structure of normalized residuals. This is achieved by performing site response analyses for two hypothetical soil profiles using a set of 85 rock input motions. Results show that there is no significant difference between correlation coefficients for rock ground motions and correlation coefficients after considering the effects of site response for the chosen sites. / Master of Science
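The inter-period correlation structure of normalized residuals (epsilon) discussed above can be illustrated with a small simulation. This is a hedged sketch, not the thesis's analysis: residuals at two oscillator periods are generated with an assumed correlation of 0.7 via a shared component, and the sample correlation coefficient is recovered.

```python
import math
import random
import statistics

random.seed(0)
rho = 0.7    # assumed correlation between residuals at two oscillator periods
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]                  # shared component
eps_t1 = list(z)
eps_t2 = [rho * zi + math.sqrt(1 - rho ** 2) * random.gauss(0, 1) for zi in z]

def corr(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

print(round(corr(eps_t1, eps_t2), 2))
```

In the thesis's setting, eps_t1 and eps_t2 would instead come from observed-minus-predicted spectral accelerations normalized by the GMPE standard deviation, computed with and without site response.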
253

Modelling and simulation of MSF desalination process using gPROMS and neural network based physical property correlation

Sowgath, Md Tanvir, Mujtaba, Iqbal January 2006 (has links)
Multi-Stage Flash (MSF) desalination plants are a sustainable source of fresh water in arid regions. Modelling plays an important role in the simulation, optimisation and control of MSF processes. In this work an MSF process model is developed using the gPROMS modelling tool. Accurate estimation of the Temperature Elevation (TE) due to salinity is important in developing a reliable process model. Here, instead of using empirical correlations from the literature, a neural-network-based correlation is used to determine the TE. This correlation is embedded in the gPROMS-based process model. We obtained good agreement between the results reported by Rosso et al. (1996) and those predicted by our model. The effects of seawater temperature (Tseawater) and steam temperature (Tsteam) on the performance of the MSF process are also studied and reported.
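The idea of replacing an empirical TE correlation with a trained neural network can be sketched as below. This is an illustrative toy, not the thesis's model: the "measurements" are synthetic, the TE coefficients are hypothetical (real TE correlations also depend on temperature), and the network is a two-unit tanh layer trained by plain gradient descent rather than anything embedded in gPROMS.

```python
import math
import random

random.seed(4)
# synthetic TE-vs-salinity data standing in for plant measurements
xs = [40 + 30 * i / 29 for i in range(30)]            # salinity, g/kg
te = [0.3 + 0.01 * x + 0.0002 * x * x for x in xs]    # TE, degC (hypothetical)
xn = [(x - 40) / 30 for x in xs]                      # normalized input

# one hidden layer with two tanh units, full-batch gradient descent
w1 = [random.uniform(-0.5, 0.5) for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-0.5, 0.5) for _ in range(2)]
b2 = 0.0

def predict(x):
    return sum(w2[j] * math.tanh(w1[j] * x + b1[j]) for j in range(2)) + b2

def mse():
    return sum((predict(x) - t) ** 2 for x, t in zip(xn, te)) / len(xn)

mse0 = mse()
lr = 0.2
for _ in range(5000):
    gw1 = [0.0, 0.0]; gb1 = [0.0, 0.0]; gw2 = [0.0, 0.0]; gb2 = 0.0
    for x, t in zip(xn, te):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(2)]
        g = 2 * (sum(w2[j] * h[j] for j in range(2)) + b2 - t) / len(xn)
        for j in range(2):
            gw2[j] += g * h[j]
            gw1[j] += g * w2[j] * (1 - h[j] ** 2) * x
            gb1[j] += g * w2[j] * (1 - h[j] ** 2)
        gb2 += g
    for j in range(2):
        w1[j] -= lr * gw1[j]; b1[j] -= lr * gb1[j]; w2[j] -= lr * gw2[j]
    b2 -= lr * gb2

print(round(mse0, 4), round(mse(), 6))
```

Once trained, such a correlation is just a closed-form expression in the weights, which is what makes it convenient to embed inside an equation-oriented process model.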
254

Estimates of edge detection filters in human vision

McIlhagga, William H. 10 October 2018 (has links)
Edge detection is widely believed to be an important early stage in human visual processing. However, there have been relatively few attempts to map human edge-detection filters. In this study, observers had to locate a randomly placed step edge in brown noise (the integral of white noise), which has a 1/f² power spectrum. Their responses were modelled by assuming that the probability of the observer choosing an edge location depended on the response of their own edge-detection filter at that location. The observer's edge-detection filter was then estimated by maximum-likelihood methods. The filters obtained were odd-symmetric and similar to a derivative of Gaussian, with a peak-to-trough width of 0.1–0.15 degrees. These filters are compared with previous estimates of edge detectors in humans, and with neurophysiological receptive fields and theoretical edge detectors.
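The stimulus and filter model described above can be sketched in one dimension: a step edge buried in brown noise, located by the peak response of an odd-symmetric derivative-of-Gaussian filter. All parameters (step height, noise level, filter width) are assumptions for illustration, not values from the study.

```python
import math
import random

random.seed(3)
n = 400
edge_pos = 250
# brown noise = cumulative sum of white noise (1/f^2 power spectrum)
acc, brown = 0.0, []
for _ in range(n):
    acc += random.gauss(0, 0.3)
    brown.append(acc)
signal = [(5.0 if i >= edge_pos else 0.0) + brown[i] for i in range(n)]

# odd-symmetric derivative-of-Gaussian filter (assumed width)
sigma = 8.0
half = 3 * int(sigma)
kernel = [x / sigma ** 2 * math.exp(-x * x / (2 * sigma ** 2))
          for x in range(-half, half + 1)]

# the edge estimate is the location of the maximum filter response
resp = []
for i in range(half, n - half):
    resp.append(sum(kernel[k] * signal[i + k - half] for k in range(len(kernel))))
best = max(range(len(resp)), key=lambda j: resp[j]) + half
print(best)
```

In the study itself the direction is reversed: the filter is unknown, and it is recovered by maximum likelihood from which locations observers actually chose.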
255

Scalable algorithms for correlation clustering on large graphs

Cordner, Nathan 01 October 2024 (has links)
Correlation clustering (CC) is a widely-used clustering paradigm, where objects are represented as graph nodes and clustering is performed based on relationships between objects (positive or negative edges between pairs of nodes). The CC objective is to obtain a graph clustering that minimizes the number of incorrectly assigned edges (negative edges within clusters, and positive edges between clusters). Many of the state-of-the-art algorithms for solving correlation clustering rely on subroutines that cause significant memory and run time bottlenecks when applied to larger graphs. Several algorithms with the best theoretical guarantees for clustering quality need to first solve a relatively large linear program; others perform brute-force searches over sizeable sets, or store large amounts of unnecessary information. Because of these issues, algorithms that run quicker (e.g. in linear time) but have lower quality approximation guarantees have still remained popular. In this thesis we examine three such popular linear time CC algorithms: Pivot, Vote, and LocalSearch. For the general CC problem we show that these algorithms perform well against slower state-of-the-art algorithms; we also develop a lightweight InnerLocalSearch method that runs much faster and delivers nearly the same quality of results as the full LocalSearch. We adapt Pivot, Vote, and LocalSearch for two constrained CC variants (limited cluster sizes, and limited total number of clusters), and show their practicality when compared against slower algorithms with better approximation guarantees. Finally, we give two practical run time improvements for applying CC algorithms to the related consensus clustering problem.
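The Pivot algorithm discussed in this abstract is simple enough to sketch directly. This is the standard textbook version (a 3-approximation in expectation), shown on a hypothetical signed graph; it is not code from the thesis.

```python
import random

def pivot_cc(nodes, positive):
    """Pivot for correlation clustering: pick a random pivot, cluster it
    with its remaining positive neighbors, remove them, and repeat."""
    remaining = set(nodes)
    clusters = []
    while remaining:
        p = random.choice(sorted(remaining))
        cluster = {p} | {v for v in remaining
                         if (p, v) in positive or (v, p) in positive}
        clusters.append(cluster)
        remaining -= cluster
    return clusters

# toy signed graph: listed edges are "+", every other pair is "-"
nodes = [1, 2, 3, 4, 5, 6]
positive = {(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6)}
random.seed(0)
print(pivot_cc(nodes, positive))
```

Each iteration is linear in the pivot's degree, which is why Pivot (and neighbors like Vote and LocalSearch) remain attractive on graphs where LP-based algorithms run out of memory.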
256

The spatial distribution of birds in southern Sweden : A descriptive study of willow warbler, nightingale, blackbird, robin and grey flycatcher in Svealand and Götaland.

Sjöström, Lars January 2016 (has links)
This thesis describes the spatial distribution of the willow warbler, nightingale, blackbird, robin and grey flycatcher in Svealand and Götaland, the southern third of Sweden. It explores the possibility of using statistics to describe the distribution and variation of birds in a given region. The data were collected by observing birds at sites called standard routes, spaced 25 kilometres apart; the standard routes are the points of a grid laid over the map of Sweden and are intended to represent the birds of Sweden both geographically and by habitat. The thesis compares the results from kriging, variograms and four alternative Poisson regressions. I conclude with the information provided by kriging and the variogram, and identify which Poisson regression best estimates the population size of the birds at a given site from the year, the mean temperature from January to May, and the kind of environment or habitat found at the site.
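The variogram analysis mentioned above rests on the empirical semivariogram, which can be sketched on synthetic data. This is an illustration only: the counts below are simulated with an arbitrary autoregressive model standing in for spatially correlated bird counts along a transect of survey sites, not the thesis's survey data.

```python
import random

random.seed(2)
# hypothetical bird counts at survey sites 25 km apart along a transect
sites = list(range(40))
counts, level = [], 10.0
for _ in sites:
    level = 0.8 * level + random.gauss(2.0, 1.5)   # spatially autocorrelated
    counts.append(max(0.0, level))

def semivariance(values, lag):
    """Empirical semivariogram: half the mean squared difference
    between all pairs of observations separated by `lag` sites."""
    pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))

for lag in (1, 2, 4, 8):
    print(lag * 25, "km:", round(semivariance(counts, lag), 2))
```

For spatially autocorrelated data the semivariance rises with distance before levelling off at a sill; the fitted variogram then supplies the weights that kriging uses to interpolate counts at unvisited sites.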
257

STATISTICAL PROPERTIES OF PSEUDORANDOM SEQUENCES

Gu, Ting 01 January 2016 (has links)
Random numbers (in one sense or another) have applications in computer simulation, Monte Carlo integration, cryptography, randomized computation, radar ranging, and other areas. Since truly random numbers are impractical to generate, real applications instead use sequences of numbers (or of bits) that appear random yet are repeatable. These sequences are called pseudorandom sequences. To determine the suitability of pseudorandom sequences for applications, we need to study their properties, in particular their statistical properties. The simplest property is the minimal period of the sequence, that is, the smallest number of steps until the sequence repeats. One important class of pseudorandom sequences comprises those generated by feedback with carry shift registers (FCSRs). In this dissertation, we study the statistical properties of N-ary FCSR sequences with odd prime connection integer q and least period (q-1)/2. These are called half-ℓ-sequences. More precisely, our work covers: the number of occurrences of one symbol within one period of a half-ℓ-sequence; the number of pairs of symbols a fixed distance apart within one period; and the number of triples of consecutive symbols within one period. In particular, we bound the number of occurrences of one symbol within one period of a binary half-ℓ-sequence, as well as the autocorrelation in the binary case. The results show that the distributions of half-ℓ-sequences are fairly flat. However, in the binary case these sequences also have some undesirable features, such as high autocorrelation values. We give bounds on the number of occurrences of two symbols a fixed distance apart in an ℓ-sequence whose period reaches the maximum, and obtain conditions on the connection integer that guarantee the distribution is highly uniform.
In another study of a cryptographically important statistical property, we study a generalization of correlation immunity (CI). CI is a measure of resistance to Siegenthaler's divide and conquer attack on nonlinear combiners. In this dissertation, we present results on correlation immune functions with regard to the q-transform, a generalization of the Walsh-Hadamard transform, to measure the proximity of two functions. We give two definitions of q-correlation immune functions and the relationship between them. Certain properties and constructions for q-correlation immune functions are discussed. We examine the connection between correlation immune functions and q-correlation immune functions.
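The full-period binary ℓ-sequences underlying this work are easy to generate: for odd q with 2 a primitive root mod q, the periodic part of the binary expansion of 1/q is an ℓ-sequence of least period q-1, and one period contains exactly (q-1)/2 ones. A minimal sketch (q = 11 chosen as a small example):

```python
def l_sequence(q, length):
    """Binary l-sequence: digits of the base-2 expansion of 1/q.
    For odd q with 2 a primitive root mod q, the least period is q - 1."""
    bits, r = [], 1
    for _ in range(length):
        r *= 2
        bits.append(r // q)   # next binary digit of 1/q
        r %= q
    return bits

q = 11                        # 2 is a primitive root mod 11
seq = l_sequence(q, q - 1)
print(seq, sum(seq))          # one period is balanced: (q-1)/2 ones
```

Half-ℓ-sequences, the dissertation's main object, arise similarly but have least period (q-1)/2, which is exactly why their symbol distributions need the finer analysis described above rather than following from the balance of the full period.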
258

Pricing Synthetic Collateralized Debt Obligations: Considering Heterogeneous Distributions and Random Factor Loadings

張立民 Unknown Date (has links)
Building on the models of Hull and White (2004) and Andersen and Sidenius (2004), this thesis investigates, within the one-factor copula framework, how changing the distributional assumptions on the risk factors, or introducing random factor loadings, affects the loss distribution of collateralized debt obligations and the credit spreads of the individual tranches. The models are also applied to actual market data: two sets of five-year Dow Jones iTraxx EUR index tranches and one set of Dow Jones CDX NA IG index tranches are priced and analyzed. For all three data sets, the double t-distribution copula model and the random factor loading model both fit market quotes more closely than the Gaussian copula model. Finally, beyond pricing the index tranches, the implied correlation and base correlation of each tranche are computed.
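The baseline one-factor Gaussian copula against which the thesis compares its alternatives can be sketched by Monte Carlo. This is an illustrative toy, not the thesis's calibration: portfolio size, default probability, and factor correlation are assumed, and recovery is ignored so loss equals the default fraction.

```python
import math
import random
from statistics import NormalDist

random.seed(7)
nd = NormalDist()
n_names, p_default, rho = 100, 0.02, 0.3   # assumed portfolio parameters
c = nd.inv_cdf(p_default)                  # default threshold on the latent variable

def portfolio_losses(trials):
    losses = []
    for _ in range(trials):
        m = random.gauss(0, 1)             # common market factor
        defaults = 0
        for _ in range(n_names):
            # one-factor latent variable: corr(x_i, x_j) = rho for i != j
            x = math.sqrt(rho) * m + math.sqrt(1 - rho) * random.gauss(0, 1)
            defaults += x < c
        losses.append(defaults / n_names)
    return losses

losses = portfolio_losses(2000)
mean_loss = sum(losses) / len(losses)
tail = sum(l > 0.10 for l in losses) / len(losses)   # P(loss > 10%)
print(round(mean_loss, 3), round(tail, 3))
```

Tranche spreads are driven by this loss distribution, and especially by its tail; replacing the Gaussian factors with t-distributed ones, or making sqrt(rho) depend on the factor realization (random factor loadings), fattens the tail, which is how those models get closer to market quotes.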
259

Definition and assessment of a mechanism for the generation of environment-specific correlation rules

Godefroy, Erwan 30 September 2016 (has links)
In information systems, detection tools continuously produce a large number of alerts. Correlation tools reduce the number of alerts and synthesize, within meta-alerts, the information that matters to administrators. However, the complexity of correlation rules makes them difficult to write and maintain. This thesis therefore proposes a method for generating correlation rules semi-automatically from an attack scenario expressed in a high-level language. The method relies on building and using a knowledge base that models the essential elements of the information system (for example, the nodes and the deployment of the detection tools). The correlation-rule generation process consists of several steps that progressively transform an attack tree into correlation rules. We evaluated this work in two stages. First, we applied the method to a use case involving a network representative of a small-business system. Second, we measured the influence of faults in the knowledge base on the generated correlation rules and on detection quality.
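The core transformation (attack tree in, correlation rule out) can be sketched as a recursive traversal. This is a minimal illustration of the idea, not the thesis's mechanism: the alert names are hypothetical, and the real method additionally consults the knowledge base to map each abstract action to the concrete alerts the deployed sensors can emit.

```python
# attack tree node: ("AND"/"OR", [children]) or a leaf alert name
tree = ("AND",
        [("OR", ["scan_detected", "bruteforce_detected"]),
         "privilege_escalation",
         "data_exfiltration"])

def to_rule(node):
    """Recursively flatten an attack tree into a boolean correlation rule."""
    if isinstance(node, str):
        return node
    op, children = node
    return "(" + f" {op} ".join(to_rule(c) for c in children) + ")"

print(to_rule(tree))
```

A fault in the knowledge base (a missing sensor, a wrong node) would surface here as a missing or incorrect leaf, which is exactly the kind of degradation the evaluation measures.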
260

Describing strong correlations with mean-field approximations

Tsuchimochi, Takashi 06 September 2012 (has links)
Strong electron correlations in electronic structure theory are purely quantum effects arising from degeneracies in molecules and materials, and they exhibit markedly different, and interesting, character compared with weak correlations. Although weak correlations can now be described very efficiently and accurately within single-particle pictures, good prescriptions for treating strong correlations efficiently are less well known. Brute-force calculations of strong correlations in wave function theories tend to be very computationally intensive and are usually limited to small molecules in applications. Breaking symmetry in a mean-field approximation is an efficient alternative that captures strong correlations with, in many cases, qualitatively accurate results. The symmetry broken in quantum chemistry has traditionally been spin symmetry, in so-called unrestricted methods, which in most situations break spatial symmetry as a consequence, and vice versa. In this work, we present a novel approach to accurately describing strong correlations at mean-field cost by means of Hartree-Fock-Bogoliubov (HFB) theory. We are inspired by the number-symmetry breaking in HFB, which, with an attractive particle interaction, accounts for strong correlations while maintaining spin and spatial symmetry. We show that this attractive interaction must be restricted to the chemically relevant orbitals in an active space to obtain physically meaningful results. With such constraints, our constrained-pairing mean-field theory (CPMFT) can accurately describe the potential energy curves of various strongly correlated molecular systems by cleanly separating strong and weak correlations. To achieve the correct dissociation limits in hetero-atomic molecules, we have modified our CPMFT functional by adding asymptotic constraints.
We also include weak correlations by combining CPMFT with density functional theory for chemically accurate results, and reveal the connection between CPMFT and traditional unrestricted methods. The similarity between CPMFT and unrestricted methods leads us to the idea of constrained active space unrestricted mean-field approaches. Motivated by CPMFT, we partially retrieve spin-symmetry that has been fully broken in unrestricted methods. We allow symmetry breaking only in an active space. This constrained unrestricted Hartree-Fock (CUHF) is an interpolation between two extrema: the fully broken-symmetry solution and the symmetry preserved solution. This thesis defines the theory behind and reports the results of CUHF. We first show that, if an active space is chosen to include only open-shell electrons, CUHF reduces to restricted open-shell Hartree-Fock (ROHF), and such CUHF proves in many ways significantly
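The idea of a symmetry-broken mean field capturing strong correlation can be illustrated on the smallest possible example: unrestricted Hartree-Fock for a two-site Hubbard model at half filling. This sketch is not from the thesis (which concerns ab initio CPMFT/CUHF); it only shows the generic mechanism, where at large U/t the self-consistent spin densities localize on opposite sites, breaking spin symmetry.

```python
import math

t, U = 1.0, 8.0   # hopping and on-site repulsion; U >> t is strongly correlated

def ground_state(h):
    """Lowest eigenvalue/eigenvector of a symmetric 2x2 matrix [[a,b],[b,d]]."""
    (a, b), (_, d) = h
    e = 0.5 * (a + d) - math.sqrt((0.5 * (a - d)) ** 2 + b * b)
    if abs(b) < 1e-12:
        v = (1.0, 0.0) if a <= d else (0.0, 1.0)
    else:
        v = (b, e - a)
        norm = math.hypot(*v)
        v = (v[0] / norm, v[1] / norm)
    return e, v

# broken-symmetry initial guess: spin-up on site 1, spin-down on site 2
n_up, n_dn = [1.0, 0.0], [0.0, 1.0]
for _ in range(200):
    # each spin moves in the mean field of the opposite spin's density
    _, vu = ground_state([[U * n_dn[0], -t], [-t, U * n_dn[1]]])
    _, vd = ground_state([[U * n_up[0], -t], [-t, U * n_up[1]]])
    new_up = [vu[0] ** 2, vu[1] ** 2]
    new_dn = [vd[0] ** 2, vd[1] ** 2]
    # damped update for stable self-consistency
    n_up = [0.5 * a + 0.5 * b for a, b in zip(n_up, new_up)]
    n_dn = [0.5 * a + 0.5 * b for a, b in zip(n_dn, new_dn)]

print([round(x, 3) for x in n_up], [round(x, 3) for x in n_dn])
```

Restoring the symmetric solution (n_up = n_dn = [0.5, 0.5]) recovers restricted Hartree-Fock; CUHF interpolates between these extremes by confining the symmetry breaking to an active space.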
