251

Modelling and simulation of MSF desalination process using gPROMS and neural network based physical property correlation

Sowgath, Md Tanvir, Mujtaba, Iqbal January 2006 (has links)
No / Multi Stage Flash (MSF) desalination plants are a sustainable source of fresh water in arid regions. Modelling plays an important role in the simulation, optimisation and control of MSF processes. In this work an MSF process model is developed using the gPROMS modelling tool. Accurate estimation of the Temperature Elevation (TE) due to salinity is important in developing a reliable process model. Here, instead of using empirical correlations from the literature, a Neural Network based correlation is used to determine the TE. This correlation is embedded in the gPROMS based process model. We obtained good agreement between the results reported by Rosso et al. (1996) and those predicted by our model. The effects of seawater temperature (Tseawater) and steam temperature (Tsteam) on the performance of the MSF process are also studied and reported.
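A minimal sketch of the idea in Python, assuming a small feedforward network trained on hypothetical (salinity, temperature) → TE data; it stands in for the authors' trained network and gPROMS embedding, and the generating formula below is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: Temperature Elevation TE (K) as a function of
# salinity (g/kg) and brine temperature (deg C). The target formula is a
# made-up stand-in for plant or literature data.
S = rng.uniform(40, 80, 500)
T = rng.uniform(60, 110, 500)
TE = 0.005 * S + 0.0002 * S * T / 10 + rng.normal(0, 0.005, 500)

X = np.column_stack([S / 80.0, T / 110.0])   # normalised inputs
y = TE.reshape(-1, 1)

# One hidden layer with tanh activation, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                 # hidden layer
    err = (H @ W2 + b2) - y                  # prediction error
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)           # backpropagate through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

def te_correlation(salinity, temperature):
    """Neural-network TE correlation, callable from a flash-stage model."""
    x = np.array([salinity / 80.0, temperature / 110.0])
    return (np.tanh(x @ W1 + b1) @ W2 + b2).item()

print(te_correlation(60.0, 90.0))            # TE estimate for one stage
```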
252

Walking Speed, Gait Asymmetry, and Motor Variability

Hughes-Oliver, Cherice January 2018 (has links)
Study design is among the most fundamental factors influencing collection and interpretation of data. The purpose of this study is to understand the effect of design choices by evaluating gait mechanics in healthy control participants using three primary objectives: 1) determine the repeatability of marker placement, 2) determine the effect of set versus self-selected walking speed, and 3) examine the correlation between gait asymmetry and motor variability. Ten and fifty-one healthy control participants were recruited for aim 1 and aims 2/3, respectively. Reflective markers were placed on lower-extremity bony landmarks and participants walked on an instrumented treadmill while 3D motion capture data were collected. For aim 1, this procedure was repeated at two time points 30 minutes apart. For aims 2 and 3, participants completed set and self-selected speed trials. JMP Pro 13 was used to compare joint kinetics and gait kinematics for all aims. Marker placement was repeatable between time points. Participants walked slower in the self-selected walking speed trial, which resulted in both kinematic and kinetic gait mechanics alterations. Gait asymmetry was significantly correlated with motor variability for both spatial and temporal measures. The findings of the current study reiterate the importance of walking speed when evaluating gait symmetry, joint kinetics, and kinematics. The decision regarding whether to utilize a set or self-selected speed condition within a study design should be made based on whether the measures of interest are independent of walking speed. Gait asymmetry and motor variability are related and should not be treated as independent components of gait. / Master of Science / This study aims to evaluate gait mechanics in healthy young adults by evaluating the impact of multiple study design choices and relationships between different aspects of gait (walking). Loading and movement data during walking were collected from a total of sixty-one participants. These data were then used to calculate several measures of gait, including symmetry between limbs, joint ranges of motion, and variability of movement. The potential impacts of study design choices, including setting a fixed walking speed for all participants and evaluating loading asymmetry and movement variability independently, are discussed based on the findings of the current study.
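For illustration, a hedged sketch of how such measures might be computed: the symmetry index and coefficient of variation below are common choices in the gait literature, not necessarily the exact measures used in this thesis, and all numbers are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical stride times (s) for one participant's left and right limbs,
# standing in for measures derived from the motion-capture data.
left = rng.normal(1.10, 0.03, 50)
right = rng.normal(1.08, 0.03, 50)

# A common symmetry index: percent difference between limb means.
si = 100 * abs(left.mean() - right.mean()) / (0.5 * (left.mean() + right.mean()))

# Motor variability as the coefficient of variation of stride time.
strides = np.concatenate([left, right])
cv = 100 * strides.std(ddof=1) / strides.mean()
print(f"symmetry index: {si:.2f}%  stride-time CV: {cv:.2f}%")

# Across a cohort, correlate asymmetry with variability (fabricated values).
si_all = rng.normal(2.0, 0.8, 51)
cv_all = 1.5 + 0.4 * si_all + rng.normal(0, 0.3, 51)
r, p = stats.pearsonr(si_all, cv_all)
print(f"r = {r:.2f}, p = {p:.3g}")
```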
253

Methodologies for Quantifying and Characterizing Strain Fields Resulting from Focused Ultrasound Therapies in Mouse Achilles Tendon using Ultrasound Imaging and Digital Image Correlation

Salazar, Steven Anthony 04 August 2022 (has links)
Tendinopathy is a common pathology of tendons characterized by pain and a decrease in function resulting from changes in the tissue's structure and/or composition due to injury. Diagnosis of tendinopathy is determined by the qualitative analysis of a trained physician usually with assistance from an imaging modality. Although physicians can often identify tendinopathy, there are no quantitative metrics to evaluate tendon fatigue, damage, or healing. Physical therapy (PT) is a common treatment for patients with tendinopathy, and recent studies have investigated Focused Ultrasound (FUS) for its treatment of tendons. Developments in the use of FUS as a therapeutic have led to studies of the underlying mechanisms by which it operates. Digital Image Correlation (DIC) is a non-contact method of quantifying tissue displacements and strains of a deforming material using high resolution imaging. DIC programs can evaluate and interpolate strain data by applying statistical image processing algorithms and solid continuum mechanics principles using a set of sequential image frames capturing the mechanical deformation of the specimen during testing. The studies presented in this thesis investigate methodologies for using DIC with ultrasound imaging of mouse Achilles tendons to characterize strains resulting from FUS therapies. The first method is based upon an orthogonal configuration of therapy and imaging transducers while the second investigates a coaxial experimental configuration. This work explores DIC as a viable means of quantifying the mechanical stimulation caused by FUS therapies on tendon tissue through ultrasound imaging to better understand the underlying mechanisms of FUS therapy. / Master of Science / Tendinopathy is a common injury that many people will experience in their lifetime. Pain and swelling are common symptoms and can make daily actions uncomfortable to perform. Physical therapy (PT) is one of the most common ways to help relieve the symptoms of this condition. A therapy being investigated to help treat tendinopathy utilizes Focused Ultrasound (FUS) technology to help the healing process. PT can be difficult and painful for those experiencing tendinopathy, but if a therapeutic like FUS could mimic the effects of PT, then some patients would not need to perform these physically demanding tasks. To understand if this treatment is viable, we need to better understand the underlying mechanisms by which it operates. Therefore, we are investigating the mechanical stimulation that FUS imparts on tendons because it is believed that the mechanical stimulations from exercise are a primary contributor to healing. Specifically, we want to evaluate the kind of strains applied by FUS therapies to inform decisions about dosage. One method uses Digital Image Correlation (DIC). DIC is a method of evaluating displacements and strains using non-contact high resolution imaging. DIC works using statistically motivated algorithms to calculate the deformation between subsequent video frames in a given material undergoing a state of stress. Using this technology along with ultrasound imaging, this work gives a preliminary exploration of using DIC as a means of quantifying strain to better understand the underlying mechanisms of the mechanical stimulations caused by FUS therapy.
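A minimal sketch of the subset-matching step at the core of DIC, assuming plain NumPy and a synthetic speckle image; production DIC software adds subpixel interpolation, regularization, and strain computation from the full displacement field.

```python
import numpy as np

rng = np.random.default_rng(2)

def ncc(a, b):
    """Zero-normalized cross-correlation of two equal-size subsets."""
    a = a - a.mean(); b = b - b.mean()
    return (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())

# Synthetic speckle frames: frame2 is frame1 shifted 3 px down, 1 px right.
frame1 = rng.random((128, 128))
frame2 = np.roll(frame1, shift=(3, 1), axis=(0, 1))

# Track one 21x21 subset centred at (64, 64) over a +/-5 px search window.
half, search = 10, 5
ref = frame1[64-half:64+half+1, 64-half:64+half+1]
best, best_dv = -2.0, (0, 0)
for dy in range(-search, search + 1):
    for dx in range(-search, search + 1):
        cur = frame2[64+dy-half:64+dy+half+1, 64+dx-half:64+dx+half+1]
        score = ncc(ref, cur)
        if score > best:
            best, best_dv = score, (dy, dx)

print("estimated displacement (dy, dx):", best_dv)   # expect (3, 1)
# Repeating this on a grid of subsets yields a displacement field;
# spatial gradients of that field give the strain field.
```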
254

Scalable algorithms for correlation clustering on large graphs

Cordner, Nathan 01 October 2024 (has links)
Correlation clustering (CC) is a widely-used clustering paradigm, where objects are represented as graph nodes and clustering is performed based on relationships between objects (positive or negative edges between pairs of nodes). The CC objective is to obtain a graph clustering that minimizes the number of incorrectly assigned edges (negative edges within clusters, and positive edges between clusters). Many of the state-of-the-art algorithms for solving correlation clustering rely on subroutines that cause significant memory and run time bottlenecks when applied to larger graphs. Several algorithms with the best theoretical guarantees for clustering quality need to first solve a relatively large linear program; others perform brute-force searches over sizeable sets, or store large amounts of unnecessary information. Because of these issues, algorithms that run quicker (e.g. in linear time) but have lower quality approximation guarantees have still remained popular. In this thesis we examine three such popular linear time CC algorithms: Pivot, Vote, and LocalSearch. For the general CC problem we show that these algorithms perform well against slower state-of-the-art algorithms; we also develop a lightweight InnerLocalSearch method that runs much faster and delivers nearly the same quality of results as the full LocalSearch. We adapt Pivot, Vote, and LocalSearch for two constrained CC variants (limited cluster sizes, and limited total number of clusters), and show their practicality when compared against slower algorithms with better approximation guarantees. Finally, we give two practical run time improvements for applying CC algorithms to the related consensus clustering problem.
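For reference, a sketch of the Pivot algorithm discussed above (in the style of Ailon et al.), assuming positive edges are given as an adjacency dictionary; any pair not listed is treated as a negative edge.

```python
import random

def pivot(nodes, pos_neighbors, seed=0):
    """Pivot correlation clustering: repeatedly pick a random unclustered
    node as pivot and cluster it with its unclustered positive neighbors."""
    rng = random.Random(seed)
    order = list(nodes)
    rng.shuffle(order)
    clustered = set()
    clusters = []
    for v in order:
        if v in clustered:
            continue
        cluster = {v} | {u for u in pos_neighbors.get(v, set())
                         if u not in clustered}
        clustered |= cluster
        clusters.append(cluster)
    return clusters

# Toy graph: two positive triangles joined by one positive edge (3, 4).
pos = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
       4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(pivot(pos.keys(), pos))
```

Each node is touched a constant expected number of times, which is why Pivot runs in linear time and stays practical on large graphs where LP-based methods stall.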
255

Estimates of edge detection filters in human vision

McIlhagga, William H. 10 October 2018 (has links)
Yes / Edge detection is widely believed to be an important early stage in human visual processing. However, there have been relatively few attempts to map human edge detection filters. In this study, observers had to locate a randomly placed step edge in brown noise (the integral of white noise) with a 1/f² power spectrum. Their responses were modelled by assuming the probability the observer chose an edge location depended on the response of their own edge detection filter to that location. The observer's edge detection filter was then estimated by maximum likelihood methods. The filters obtained were odd-symmetric and similar to a derivative of Gaussian, with a peak-to-trough width of 0.1–0.15 degrees. These filters are compared with previous estimates of edge detectors in humans, and with neurophysiological receptive fields and theoretical edge detectors.
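A small sketch of the stimulus and filter described above: brown noise with a step edge, filtered by an odd-symmetric derivative-of-Gaussian kernel. Parameters are illustrative, and the study's actual estimation is by maximum likelihood over observer responses rather than the simple peak-picking used here.

```python
import numpy as np

rng = np.random.default_rng(3)

n, true_edge, amplitude = 512, 300, 2.0

# Brown noise (cumulative sum of white noise, 1/f^2 spectrum) plus a step.
signal = np.cumsum(rng.normal(0, 0.1, n))
signal[true_edge:] += amplitude

# Odd-symmetric derivative-of-Gaussian kernel, like the estimated filters.
sigma = 8.0
x = np.arange(-4 * sigma, 4 * sigma + 1)
dog = -x / sigma**2 * np.exp(-x**2 / (2 * sigma**2))

# The filter's peak response localizes the step edge.
response = np.convolve(signal, dog, mode="same")
print("true edge:", true_edge,
      "estimated:", int(np.argmax(np.abs(response))))
```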
256

Comparing Raw Score Difference, Multilevel Modeling, and Structural Equation Modeling Methods for Estimating Discrepancy in Dyads

McEnturff, Amber L 05 1900 (has links)
The current study focused on dyadic discrepancy, the difference between two individuals. A Monte Carlo simulation was used to compare three dyadic discrepancy estimation methods across a variety of potential research conditions, including variations on intraclass correlation, cluster number, reliability, effect size, and effect size variance. The methods compared were: raw score difference (RSD); empirical Bayes estimate of slope in multilevel modeling (EBD); and structural equation modeling estimate (SEM). Accuracy and reliability of the discrepancy estimate and the accuracy of prediction when using the discrepancy to predict an outcome were examined. The results indicated that RSD and SEM, despite having poor reliability, performed better than EBD when predicting an outcome. The results of this research provide methodological guidance to researchers interested in dyadic discrepancies.
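A hedged sketch of one cell of such a simulation, showing only the raw score difference (RSD) method; the reliability parameterization and effect size below are invented, and the EBD and SEM estimators are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

n_dyads, reliability = 500, 0.8

# True scores for the two dyad members and the true discrepancy.
t1 = rng.normal(0, 1, n_dyads)
t2 = 0.5 * t1 + rng.normal(0, np.sqrt(0.75), n_dyads)  # correlated partners
true_disc = t1 - t2

# Observed scores carry measurement error determined by the reliability.
err_sd = np.sqrt((1 - reliability) / reliability)
x1 = t1 + rng.normal(0, err_sd, n_dyads)
x2 = t2 + rng.normal(0, err_sd, n_dyads)

# Outcome generated from the true discrepancy (hypothetical effect size 0.4).
y = 0.4 * true_disc + rng.normal(0, 1, n_dyads)

# Raw score difference (RSD) estimate of the discrepancy.
rsd = x1 - x2
print("RSD vs true discrepancy r:", np.corrcoef(rsd, true_disc)[0, 1].round(3))
print("RSD predicting outcome  r:", np.corrcoef(rsd, y)[0, 1].round(3))
```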
257

The spatial distribution of birds in southern Sweden : A descriptive study of willow warbler, nightingale, blackbird, robin and grey flycatcher in Svealand and Götaland.

Sjöström, Lars January 2016 (has links)
This is a thesis about the spatial distribution of the willow warbler, nightingale, blackbird, robin and grey flycatcher in Svealand and Götaland, that is, the southern third of Sweden. It explores the possibilities of using statistics to describe the distribution and variation of birds in a given region. The data were collected by observing birds at sites called standard routes, placed 25 kilometres apart. The standard routes are the points of a grid laid over the map of Sweden; their purpose is to represent the birds of Sweden both geographically and by biotope. The thesis compares the results from kriging, the variogram and four alternative Poisson regressions. In the end I summarise the information provided by kriging and the variogram, and identify which Poisson regression best estimates the population sizes of the birds at a given site, using information about the year, the mean temperature from January to May, and the kind of environment or habitat the site consists of.
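A sketch of the Poisson-regression part under stated assumptions: hypothetical counts for one species regressed on year, mean January–May temperature, and a crude habitat indicator using statsmodels GLM. The thesis's actual routes, species data, and model comparison are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200  # hypothetical standard-route visits

year = rng.integers(1996, 2016, n)
temp = rng.normal(5.0, 2.0, n)      # mean Jan-May temperature, deg C
forest = rng.integers(0, 2, n)      # crude habitat indicator

# Fabricated counts of one species with mild temperature/habitat effects.
lam = np.exp(0.5 + 0.08 * temp + 0.4 * forest)
count = rng.poisson(lam)

X = sm.add_constant(np.column_stack([year - 2000, temp, forest]))
fit = sm.GLM(count, X, family=sm.families.Poisson()).fit()
print(fit.params)   # intercept, year, temperature, habitat coefficients

# Predicted count for a forest site in 2010 with a 6 degree C spring.
print(fit.predict(np.array([[1.0, 10, 6.0, 1]])))
```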
258

STATISTICAL PROPERTIES OF PSEUDORANDOM SEQUENCES

Gu, Ting 01 January 2016 (has links)
Random numbers (in one sense or another) have applications in computer simulation, Monte Carlo integration, cryptography, randomized computation, radar ranging, and other areas. It is impractical to generate truly random numbers in real life; instead, sequences of numbers (or of bits) that appear to be "random" yet are repeatable are used in real-life applications. These sequences are called pseudorandom sequences. To determine the suitability of pseudorandom sequences for applications, we need to study their properties, in particular their statistical properties. The simplest property is the minimal period of the sequence, that is, the shortest number of steps until the sequence repeats. One important type of pseudorandom sequence is the sequences generated by feedback with carry shift registers (FCSRs). In this dissertation, we study statistical properties of N-ary FCSR sequences with odd prime connection integer q and least period (q-1)/2. These are called half-ℓ-sequences. More precisely, our work includes: the number of occurrences of one symbol within one period of a half-ℓ-sequence; the number of pairs of symbols with a fixed distance between them within one period of a half-ℓ-sequence; and the number of triples of consecutive symbols within one period of a half-ℓ-sequence. In particular, we give a bound on the number of occurrences of one symbol within one period of a binary half-ℓ-sequence, and also on the autocorrelation value in the binary case. The results show that the distributions of half-ℓ-sequences are fairly flat. However, these sequences in the binary case also have some undesirable features, such as high autocorrelation values. We give bounds on the number of occurrences of two symbols with a fixed distance between them in an ℓ-sequence whose period reaches the maximum, and we obtain conditions on the connection integer that guarantee the distribution is highly uniform. In another study of a cryptographically important statistical property, we study a generalization of correlation immunity (CI). CI is a measure of resistance to Siegenthaler's divide-and-conquer attack on nonlinear combiners. In this dissertation, we present results on correlation immune functions with regard to the q-transform, a generalization of the Walsh-Hadamard transform used to measure the proximity of two functions. We give two definitions of q-correlation immune functions and the relationship between them. Certain properties and constructions of q-correlation immune functions are discussed. We examine the connection between correlation immune functions and q-correlation immune functions.
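A minimal sketch of a binary Fibonacci-style FCSR, illustrating the periodicity and symbol counts the dissertation studies; q = 37 (where 2 is a primitive root) gives an ℓ-sequence of period 36, while half-ℓ-sequences arise when 2 instead has multiplicative order (q-1)/2.

```python
def fcsr_bits(q, init_bits, mem=0, n_out=112):
    """Binary Fibonacci FCSR with odd connection integer q.

    Taps d_i are read off q + 1 = sum_i d_i 2^i; the update is
    sigma = sum_i d_i * a_{n-i} + memory, new bit = sigma mod 2,
    new memory = sigma div 2.
    """
    r = (q + 1).bit_length() - 1                 # register length
    taps = [(q + 1) >> i & 1 for i in range(1, r + 1)]
    state = list(init_bits)                      # state[0] is the newest bit
    out = []
    for _ in range(n_out):
        sigma = sum(t * b for t, b in zip(taps, state)) + mem
        bit, mem = sigma & 1, sigma >> 1         # output bit, carried memory
        out.append(bit)
        state = [bit] + state[:-1]
    return out

# q = 37 is prime and 2 is a primitive root mod 37, so after any short
# transient the output cycles with period q - 1 = 36 (an l-sequence).
seq = fcsr_bits(37, [1, 0, 0, 0, 0])
tail = seq[40:]                                  # drop a generous transient
print(tail[:36] == tail[36:72])                  # True: period divides 36
print("ones per period:", sum(tail[:36]))        # near-balanced symbol count
```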
259

• Pricing Synthetic Collateralized Debt Obligations: Considering Heterogeneous Distributions and Random Factor Loadings

張立民 Unknown Date (has links)
Based on the theoretical models of Hull and White (2004) and Anderson and Sidenius (2004), this thesis investigates, under the one-factor copula model, how changing the distributional assumptions of the risk factor or introducing random factor loadings affects the loss distribution of collateralized debt obligations and, in turn, the credit spreads of the individual tranches. In addition, the models are applied to actual market data to price and analyze two sets of five-year Dow Jones iTraxx EUR index tranches and one set of Dow Jones CDX NA IG index tranches. We find that, for all three data sets, the double t-distribution copula model and the random factor loading model both match market quotes more closely than the Gaussian copula model. Finally, in addition to pricing the index tranches, this study also computes the implied correlation and base correlation of each tranche.
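A hedged sketch of the one-factor Gaussian copula baseline the thesis compares against, with invented portfolio parameters; the double t-distribution copula and random factor loading variants would replace the conditional default probability computed below.

```python
import numpy as np
from scipy.stats import norm, binom

# One-factor Gaussian copula: conditional on the market factor z, names
# default independently with probability p(z). All parameters are invented.
n_names, p, rho = 125, 0.02, 0.3          # portfolio size, PD, correlation
attach, detach = 0.03, 0.06               # tranche attachment / detachment
recovery = 0.4

# Gauss-Hermite (probabilists') nodes to integrate over z ~ N(0, 1).
z, w = np.polynomial.hermite_e.hermegauss(40)
w = w / w.sum()

c = norm.ppf(p)
p_z = norm.cdf((c - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))

# Unconditional portfolio loss distribution: mixture of binomials over z.
k = np.arange(n_names + 1)
loss = k * (1.0 - recovery) / n_names     # loss as a fraction of notional
pmf = (binom.pmf(k[None, :], n_names, p_z[:, None]) * w[:, None]).sum(axis=0)

# Expected loss on the tranche, as a fraction of tranche notional.
tranche_loss = np.clip(loss - attach, 0.0, detach - attach) / (detach - attach)
print("expected tranche loss:", round(float((pmf * tranche_loss).sum()), 4))
```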
260

Definition and assessment of a mechanism for the generation of environment-specific correlation rules

Godefroy, Erwan 30 September 2016 (has links)
In information systems, detection tools continuously produce a large number of alerts. Correlation tools reduce the number of alerts and synthesize, within meta-alerts, the information that matters to administrators. However, the complexity of correlation rules makes them difficult to write and maintain. This thesis therefore proposes a method for generating correlation rules semi-automatically from an attack scenario expressed in a high-level language. The method relies on building and using a knowledge base that models the essential elements of the information system (for example, the nodes and the deployment of detection tools). The rule-generation process consists of several steps that progressively transform an attack tree into correlation rules. We assessed this work in two ways. First, we applied the method to a use case involving a network representative of a small business's system. Second, we measured the influence of faults in the knowledge base on the generated correlation rules and on the quality of detection.
