  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Étude des M-estimateurs et leurs versions pondérées pour des données clusterisées / A study of M-estimators and weighted M-estimators in the case of clustered data

El Asri, Mohamed 15 December 2014 (has links)
The class of M-estimators includes classical estimators of a multidimensional location parameter, such as the maximum likelihood estimator, the empirical mean, and the spatial median. Huber (1964) introduced M-estimators in the framework of robust estimation. Within the literature devoted to these estimators, the books of Huber (1981) and Hampel et al. (1986) in particular address asymptotic behavior and robustness via the breakdown point and the influence function (see Ruiz-Gazen (2012) for a survey of these notions). More recently, results on consistency and asymptotic normality were established by Van der Vaart (2000) in the multidimensional setting. Nevalainen et al. (2006, 2007) study the particular case of the weighted and unweighted spatial median for clustered data. We generalize these results to weighted M-estimators, studying their almost sure convergence, their asymptotic normality, and their robustness for clustered data. / M-estimators were first introduced by Huber (1964) as robust estimators of location and gave rise to a substantial literature. For results on their asymptotic behavior and robustness (using the study of the influence function and the breakdown point), we may refer in particular to the books of Huber (1981) and Hampel et al. (1986). For more recent references, we may cite the work of Ruiz-Gazen (2012), with a nice introductory presentation of robust statistics, and the book of Van der Vaart (2000) for results, in the independent and identically distributed setting, concerning convergence and asymptotic normality in the multivariate setting considered throughout this paper. Most of the references address the case where the data are independent and identically distributed. However, clustered and hierarchical data frequently arise in applications.
Typically, the facility location problem is an important research topic in spatial data analysis for the geographic location of some economic activity. In this field, recent studies perform spatial modelling with clustered data (see e.g. Liao and Guo, 2008; Javadi and Shahrabi, 2014, and references therein). Concerning robust estimation, Nevalainen et al. (2006) study the spatial median for the multivariate one-sample location problem with clustered data. They show that the intra-cluster correlation has an impact on the asymptotic covariance matrix. The weighted spatial median, introduced in their pioneering paper of 2007, has superior efficiency with respect to its unweighted version, especially when cluster sizes are heterogeneous or in the presence of strong intra-cluster correlation. The class of weighted M-estimators (introduced in El Asri, 2013) may be viewed as a generalization of this work to a broad class of estimators: weights are assigned to the objective function that defines M-estimators. The aim is, for example, to adapt M-estimators to the clustered structure, to the sizes of clusters, or to clusters including extremal values, in order to increase their efficiency or robustness. In this thesis, we study the almost sure convergence of unweighted and weighted M-estimators and establish their asymptotic normality. Then, we provide consistent estimators of the asymptotic variance and derive, numerically, optimal weights that improve the relative efficiency with respect to the unweighted versions. Finally, from a weight-based formulation of the breakdown point, we illustrate how these optimal weights lead to an altered breakdown point.
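The weighted spatial median described above minimizes a weighted sum of Euclidean distances; a minimal sketch is the classical Weiszfeld-type iteration (a standard algorithm, not the thesis's own estimators, and the data and weighting choice below are purely illustrative):

```python
import numpy as np

def weighted_spatial_median(x, w, n_iter=200, eps=1e-9):
    """Weiszfeld-type iteration for argmin_theta sum_i w_i * ||x_i - theta||."""
    theta = np.average(x, axis=0, weights=w)   # start from the weighted mean
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(x - theta, axis=1), eps)  # guard zero distances
        coef = w / d
        theta = (coef[:, None] * x).sum(axis=0) / coef.sum()
    return theta

rng = np.random.default_rng(0)
# two clusters of very different sizes drawn around the same centre
big = rng.normal(0.0, 1.0, size=(50, 2))
small = rng.normal(0.0, 1.0, size=(5, 2))
x = np.vstack([big, small])
# one illustrative weighting: give each cluster equal total weight
w = np.concatenate([np.full(50, 1 / 50), np.full(5, 1 / 5)])
theta_hat = weighted_spatial_median(x, w)      # weighted spatial median estimate
```

Setting all weights equal recovers the ordinary (unweighted) spatial median as a special case.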
32

Photophysical Properties and Applications of Fluorescent Probes in Studying DNA Conformation and Dynamics

January 2015 (has links)
abstract: Fluorescence spectroscopy is a popular technique that has been particularly useful in probing biological systems, especially with the invention of single molecule fluorescence. For example, Förster resonance energy transfer (FRET) is one tool that has been helpful in probing distances and conformational changes in biomolecules. In this work, important properties necessary in the quantification of FRET were investigated while FRET was also applied to gain insight into the dynamics of biological molecules. In particular, the dynamics of damaged DNA was investigated. While damage in DNA is known to affect DNA structure, what remains unclear is how the presence of a lesion, or multiple lesions, affects the flexibility of DNA, especially in relation to damage recognition by repair enzymes. DNA conformational dynamics was probed by combining FRET and fluorescence anisotropy along with biochemical assays. The focus of this work was to investigate the relationship between dynamics and enzymatic repair. In addition, to properly quantify fluorescence and FRET data, photophysical phenomena of fluorophores, such as blinking, need to be understood. The triplet formation of the single molecule dye TAMRA and the photoisomerization yield of two different modifications of the single molecule cyanine dye Cy3 were examined spectroscopically to aid in accurate data interpretation. The combination of the biophysical and physicochemical studies illustrates how fluorescence spectroscopy can be used to answer biological questions. / Dissertation/Thesis / Doctoral Dissertation Chemistry 2015
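The distance sensitivity that makes FRET useful for probing biomolecular conformations comes from the well-known Förster relation; a minimal sketch (the distances used here are illustrative, with efficiency 50% at the Förster radius by definition):

```python
def fret_efficiency(r, r0):
    """Förster relation: transfer efficiency E = 1 / (1 + (r/r0)**6),
    where r is the donor-acceptor distance and r0 the Förster radius."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# efficiency is 50% at r = r0 (e.g. ~5 nm for common dye pairs)
# and falls off steeply on either side of it
e_mid = fret_efficiency(5.0, 5.0)    # r equal to the Förster radius
e_near = fret_efficiency(2.5, 5.0)   # half the Förster radius: near-complete transfer
e_far = fret_efficiency(10.0, 5.0)   # twice the Förster radius: almost no transfer
```

The sixth-power dependence is what gives FRET its sharp "molecular ruler" behavior over the few-nanometer range.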
33

Interaction Effects in Multilevel Models

January 2015 (has links)
abstract: Researchers are often interested in estimating interactions in multilevel models, but many researchers assume that the same procedures and interpretations for interactions in single-level models apply to multilevel models. However, estimating interactions in multilevel models is much more complex than in single-level models. Because uncentered (RAS) or grand mean centered (CGM) level-1 predictors in two-level models contain two sources of variability (i.e., within-cluster variability and between-cluster variability), interactions involving RAS or CGM level-1 predictors also contain more than one source of variability. In this Master’s thesis, I use simulations to demonstrate that ignoring the four sources of variability in a total level-1 interaction effect can lead to erroneous conclusions. I explain how to parse a total level-1 interaction effect into four specific interaction effects, derive equivalencies between CGM and centering within context (CWC) for this model, and describe how the interpretations of the fixed effects change under CGM and CWC. Finally, I provide an empirical example using diary data collected from working adults with chronic pain. / Dissertation/Thesis / Masters Thesis Psychology 2015
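The two sources of variability in a level-1 predictor can be made concrete with a small sketch (toy data; the variable names and the data-generating step are illustrative, not from the thesis). Centering within context (CWC) keeps only the within-cluster part, while grand mean centering (CGM) shifts the predictor but leaves the within- and between-cluster parts mixed, which is why products of CGM predictors blend several interaction sources:

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_per = 10, 20
cluster = np.repeat(np.arange(n_clusters), n_per)
# a level-1 predictor with both within- and between-cluster variability
x = rng.normal(size=n_clusters * n_per) + 0.5 * cluster

cluster_mean = np.array([x[cluster == j].mean() for j in range(n_clusters)])
x_between = cluster_mean[cluster]   # between-cluster component (cluster means)
x_cwc = x - x_between               # CWC: within-cluster deviations only
x_cgm = x - x.mean()                # CGM: both components still present
```

A product such as `x_cgm * z_cgm` for two such predictors therefore contains within-by-within, within-by-between, between-by-within, and between-by-between pieces, which is the decomposition the thesis examines.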
34

Sparse Signal Recovery Based on Compressive Sensing and Exploration Using Multiple Mobile Sensors

Shekaramiz, Mohammad 01 December 2018 (has links)
The work in this dissertation is focused on two areas within the general discipline of statistical signal processing. First, several new algorithms are developed and exhaustively tested for solving the inverse problem of compressive sensing (CS). CS is a recently developed sub-sampling technique for signal acquisition and reconstruction which is more efficient than traditional Nyquist sampling. It enables compressed data-acquisition approaches that directly acquire just the important information of the signal of interest. Many natural signals are sparse or compressible in some domain, such as the pixel domain of images, time, frequency, and so forth. Compressibility or sparsity here means that many coefficients of the signal of interest are either zero or of low amplitude in some domain, while only a few coefficients dominate. Therefore, we may not need many direct or indirect samples from the signal or phenomenon to capture its important information. As a simple example, consider a system of linear equations with N unknowns. Traditional methods require N linearly independent equations to solve for the unknowns. However, if many of the variables are known to be zero or of low amplitude, then intuitively there is no need for N equations. Unfortunately, in many real-world problems, the number of non-zero (effective) variables is unknown. In these cases, CS is capable of solving for the unknowns in an efficient way. In other words, it enables us to collect the important information of the sparse signal with a low number of measurements. Then, given that the signal is sparse, extracting its important information is the challenge that needs to be addressed.
Since most of the existing recovery algorithms in this area need some prior knowledge or parameter tuning, applying them to real-world problems with good performance is difficult. In this dissertation, several new CS algorithms are proposed for the recovery of sparse signals. The proposed algorithms mostly do not require any prior knowledge of the signal or its structure. In fact, these algorithms can learn the underlying structure of the signal from the collected measurements and successfully reconstruct the signal with high probability. The other merit of the proposed algorithms is that they are generally flexible in incorporating any prior knowledge on the noise, the sparsity level, and so on. The second part of this study is devoted to the deployment of mobile sensors in circumstances where the number of sensors is inadequate to sample the entire region. Deciding where to deploy the sensors, so as to both explore new regions and refine knowledge in already visited areas, is therefore of high importance. Here, a new framework is proposed to decide on the trajectories of sensors as they collect measurements. The proposed framework has two main stages. The first stage performs interpolation/extrapolation to estimate the phenomenon of interest at unseen locations, and the second stage decides on the informative trajectory based on the collected and estimated data. This framework can be applied to various problems such as tuning the constellation of sensor-bearing satellites, robotics, or any type of adaptive sensor placement/configuration problem. Depending on the problem, some modifications of the constraints in the framework may be needed. As an application of this work, the proposed framework is applied to a surrogate problem related to the constellation adjustment of sensor-bearing satellites.
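The sparse-recovery setting sketched in the abstract (far fewer equations than unknowns, most unknowns zero) can be illustrated with orthogonal matching pursuit, a standard greedy CS solver rather than one of the algorithms proposed in the dissertation; the dimensions, sensing matrix, and signal below are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # column most correlated with residual
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s          # re-fit on the support, update residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(2)
m, n, k = 40, 100, 3                         # far fewer measurements than unknowns
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[[5, 30, 77]] = [1.0, -2.0, 1.5]       # 3-sparse signal
y = A @ x_true                               # 40 noiseless measurements of 100 unknowns
x_hat = omp(A, y, k)
```

Note that this baseline needs the sparsity level `k` as an input; removing exactly that kind of prior knowledge is part of what the proposed algorithms aim at.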
35

<i>In Silico</i> Studies of Mechanotransduction and Cell Adhesion Proteins

Walujkar, Sanket Pradeep January 2021 (has links)
No description available.
36

Design, Analysis, and Misspecification Sensitivity of Partially and Fully Nested Multisite Cluster-Randomized Designs

Xie, Yanli 22 August 2022 (has links)
No description available.
37

Novel design concepts for unconventional antenna array architectures in next generation communications systems

Gottardi, Giorgio 28 October 2019 (has links)
In this work, the formulation and implementation of innovative methodological paradigms for the design of unconventional array architectures for future-generation communication systems have been addressed. By exploiting the potentialities of a codesign strategy for the elementary radiators in irregularly clustered array architectures, and by introducing an innovative capacity-driven design paradigm, the proposed methodologies make it possible to effectively design unconventional array architectures with optimal trade-offs in terms of performance and complexity/costs. The codesign synthesis strategy is proposed to solve the arising massive multi-objective design problem, aimed at fitting the multiple objectives and requirements on the "free-space" performance of the array architecture. Afterward, the capacity-driven design paradigm is formulated and implemented for the design of MIMO array architectures to maximize the quality of the communication system in the first place, instead of considering "free-space" figures of merit. A set of numerical results is provided (i) to validate the proposed paradigms in realistic application scenarios and (ii) to provide insights on the effectiveness, limitations, and potentialities of the proposed design methodologies.
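A capacity-driven figure of merit of the kind referred to above is typically the Shannon capacity of the MIMO channel; a minimal sketch of that standard formula, under equal power allocation across transmit antennas (the channel used here is illustrative, not one of the thesis's designs):

```python
import numpy as np

def mimo_capacity(H, snr):
    """Shannon capacity (bits/s/Hz) of a MIMO channel H,
    with total SNR `snr` split equally across the transmit antennas."""
    nr, nt = H.shape
    gram = np.eye(nr) + (snr / nt) * H @ H.conj().T
    _, logdet = np.linalg.slogdet(gram)   # log det of a positive-definite matrix
    return logdet / np.log(2)             # convert from nats to bits

# a 2x2 identity channel at 0 dB SNR: two parallel, equal-gain sub-channels
c = mimo_capacity(np.eye(2), snr=1.0)
```

Optimizing the array geometry against such a link-level metric, rather than against "free-space" pattern metrics alone, is the essence of the capacity-driven paradigm.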
38

Bayesian Hierarchical Models for Partially Observed Data

Jaberansari, Negar January 2016 (has links)
No description available.
39

The determinants of credit spreads changes in global shipping bonds.

Kavussanos, M.G., Tsouknidis, Dimitris A. January 2014 (has links)
Yes / This paper investigates whether bond-, issuer-, industry- and macro-specific variables account for the observed variation of credit spread changes of global shipping bond issues before and after the onset of the subprime financial crisis. Results show that conclusions as to the significant determinants of spreads depend on whether two-way cluster-adjusted standard errors are utilized, thus rendering results in the extant literature ambiguous. The main determinants of global cargo-carrying companies' shipping bond spreads are found in this paper to be: the liquidity of the bond issue, the stock market's volatility, the bond market's cyclicality, freight earnings, and the credit rating of the bond issue.
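Two-way cluster-adjusted standard errors of the kind used above are commonly computed with the Cameron-Gelbach-Miller decomposition: cluster on the first dimension, plus on the second, minus on their intersection. The sketch below (a plain CR0 variant without small-sample corrections, on toy data with illustrative cluster labels) shows the idea, not the paper's exact implementation:

```python
import numpy as np

def cluster_cov(X, resid, groups):
    """CR0 cluster-robust covariance of OLS coefficients, clustering on `groups`."""
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        s = X[groups == g].T @ resid[groups == g]  # within-cluster score sum
        meat += np.outer(s, s)
    return bread @ meat @ bread

def twoway_cluster_cov(X, resid, g1, g2):
    """Two-way clustering: V(g1) + V(g2) - V(intersection of g1 and g2)."""
    g12 = np.array([f"{a}|{b}" for a, b in zip(g1, g2)])
    return (cluster_cov(X, resid, g1) + cluster_cov(X, resid, g2)
            - cluster_cov(X, resid, g12))

rng = np.random.default_rng(3)
n = 200
issuer = rng.integers(0, 10, n)    # e.g. cluster by bond issuer ...
year = rng.integers(0, 8, n)       # ... and by time period
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
V = twoway_cluster_cov(X, resid, issuer, year)
se = np.sqrt(np.diag(V))           # two-way cluster-adjusted standard errors
```

Comparing `se` against standard errors clustered on only one dimension (or none) is exactly the kind of comparison that can flip significance conclusions.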
40

Compiler-Assisted Energy Optimization For Clustered VLIW Processors

Nagpal, Rahul 03 1900 (has links)
Clustered architecture processors are preferred for embedded systems because centralized register file architectures scale poorly in terms of clock rate, chip area, and power consumption. Although clustering helps by improving clock speed, reducing energy consumption of the logic, and making the design simpler, it introduces extra overheads by way of inter-cluster communication. This communication happens over long wires with high load capacitance, which leads to delays in execution and significantly higher energy consumption. Inter-cluster communication also introduces many short idle cycles, thereby significantly increasing the overall leakage energy consumption in the functional units. The trend towards miniaturization of devices (and the associated reduction in threshold voltage) makes energy consumption in interconnects and functional units even worse and limits the usability of clustered architectures in smaller technologies. In the past, study of leakage energy management at the architectural level has mostly focused on storage structures such as caches. Relatively little work has been done on architecture-level leakage energy management in functional units in the context of superscalar processors, or on energy-efficient scheduling in the context of VLIW architectures. In the absence of any high-level model for interconnect energy estimation, the primary focus of research in the context of interconnects has been to reduce the latency of communication and to evaluate various inter-cluster communication models. To the best of our knowledge, there has been no prior work on energy efficiency specifically targeting clustered VLIW architectures in smaller technologies. Technological advancements now permit the design of interconnects and functional units with varying performance and power modes.
In this thesis, we propose scheduling algorithms that aggregate the scheduling slack of instructions and the communication slack of data values to exploit the low-power modes of interconnects and functional units. We also propose a high-level model for estimation of interconnect delay and energy (in contrast to the low-level circuit model proposed earlier) that makes it possible to carry out architectural and compiler optimizations specifically targeting the interconnect. Finally, we present a synergistic combination of these algorithms that simultaneously saves energy in functional units and interconnects, improving the usability of clustered architectures by achieving better overall energy-performance trade-offs. Our compiler-assisted leakage energy management scheme for functional units reduces their energy consumption by approximately 15% and 17% in the context of a 2-clustered and a 4-clustered VLIW architecture respectively, with negligible performance degradation over and above that offered by a hardware-only scheme. The interconnect energy optimization scheme improves the energy consumption of interconnects on average by 41% and 46% for a 2-clustered and a 4-clustered machine respectively, with 2% and 1.5% performance degradation. The combined scheme obtains a slightly better energy benefit in functional units and a 37% and 43% energy benefit in interconnects, with slightly higher performance degradation. Even with conservative estimates of the contribution of functional units and interconnects to overall processor energy consumption, the proposed combined scheme obtains on average an 8% and 10% improvement in overall energy-delay product, with 3.5% and 2% performance degradation for a 2-clustered and a 4-clustered machine respectively. We present a detailed experimental evaluation of the proposed schemes using the Trimaran compiler infrastructure.
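Why aggregating slack matters can be seen with a toy break-even model (an illustration of the general idea, not the thesis's scheduler): a unit can only be put into a low-power mode when an idle gap is longer than the break-even time needed to amortise the mode-switch overhead, so many short idle cycles save nothing while the same idle cycles merged into one long gap do:

```python
def leakage_savings(idle_gaps, breakeven):
    """Leakage cycles saved by gating a unit during idle gaps longer than
    the break-even threshold (shorter gaps cannot amortise the switch cost)."""
    return sum(g - breakeven for g in idle_gaps if g > breakeven)

# the same 8 idle cycles, scattered vs. aggregated by the scheduler
scattered = leakage_savings([2, 2, 2, 2], breakeven=3)   # no gap is gateable
aggregated = leakage_savings([8], breakeven=3)           # one long, gateable gap
```

The break-even threshold here is a stand-in for the real hardware's mode-transition energy and latency costs.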
