About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

A Fresh View of Digital Signal Processing for Software Defined Radios: Part II

Harris, Fred 10 1900 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / A DSP modem is often designed as a set of processing blocks that replace the corresponding blocks of an analog prototype. Such a design is sub-optimal, inheriting legacy compromises made in the analog design while discarding important design options unique to the DSP domain. In Part I of this two-part paper, we used multirate processing to transform a digital down converter from an emulation of the standard analog architecture into a DSP-based solution that reversed the order of frequency selection, filtering, and resampling. Here we continue this tack of embedding traditional processing tasks into multirate DSP solutions that perform multiple tasks simultaneously.
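
To make the baseline concrete, here is a minimal NumPy/SciPy sketch (not from the paper) of the conventional down-converter chain the abstract contrasts with its multirate redesign: heterodyne the channel of interest to baseband, low-pass filter at the full input rate, then decimate. The sample rate, carrier frequency, decimation factor, and filter length are made-up values; the multirate designs Harris describes fold the filtering into the rate change (for example with polyphase structures) so the heavy computation runs at the low output rate instead.

```python
# Illustrative only (not from the paper): a conventional digital down
# converter -- mix to baseband, low-pass filter at the input rate, decimate.
import numpy as np
from scipy import signal

fs = 1_000_000        # input sample rate in Hz (assumed)
f_c = 250_000         # carrier of the wanted channel in Hz (assumed)
decim = 10            # resampling factor (assumed)

# Test input: the wanted tone plus an out-of-band interferer.
t = np.arange(50_000) / fs
x = np.cos(2 * np.pi * (f_c + 1_000) * t) + 0.5 * np.cos(2 * np.pi * 400_000 * t)

# 1) Frequency selection: heterodyne the channel of interest to baseband.
baseband = x * np.exp(-2j * np.pi * f_c * t)

# 2) Filtering: low-pass FIR running at the full input rate -- the costly
#    step that a multirate (polyphase) design moves to the low output rate.
taps = signal.firwin(129, cutoff=fs / (2 * decim), fs=fs)
filtered = signal.lfilter(taps, 1.0, baseband)

# 3) Resampling: keep every decim-th output sample.
y = filtered[::decim]
print(y.shape)
```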
22

Transaction costs and resampling in mean-variance portfolio optimization

Asumeng-Denteh, Emmanuel 30 April 2004 (has links)
Transaction costs and resampling are two issues that deserve close attention in portfolio investment planning. In practice, costs are incurred whenever a portfolio is rebalanced, and every investor tries to avoid high transaction costs as much as possible. In this thesis, we investigated how transaction costs and resampling affect portfolio investment. We modified the basic mean-variance optimization problem to include the rebalancing costs incurred when transacting securities in the portfolio, and we reduced trading as much as possible by applying the resampling approach whenever the portfolio is rebalanced. Transaction costs are assumed to be a percentage of the amount of securities transacted. We applied the resampling approach and tracked the performance of portfolios over time, first assuming transaction costs are incurred and then assuming they are not. Finally, we compared the performance of the portfolio incorporating these two modifications with that of the basic mean-variance optimization.
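
As an illustration only (not the thesis implementation), the sketch below combines the two ingredients the abstract describes: Michaud-style resampling of the mean-variance inputs, and a proportional transaction cost charged on the turnover needed to move from the current holdings to the resampled target weights. The return and covariance estimates, risk aversion, and cost rate are assumed values.

```python
# Illustrative only: resampled mean-variance weights with a proportional
# transaction-cost charge on rebalancing.
import numpy as np

rng = np.random.default_rng(0)

mu_hat = np.array([0.08, 0.05, 0.03])            # estimated mean returns (assumed)
sigma_hat = np.array([[0.10, 0.02, 0.01],
                      [0.02, 0.05, 0.01],
                      [0.01, 0.01, 0.02]])       # estimated covariance (assumed)
w_prev = np.array([1/3, 1/3, 1/3])               # current holdings
risk_aversion = 5.0
cost_rate = 0.005                                # 0.5% of the amount traded
n_obs, n_draws = 60, 500                         # history length, resamples

def mv_weights(mu, sigma, gamma):
    """Unconstrained mean-variance weights, rescaled to sum to one."""
    w = np.linalg.solve(gamma * sigma, mu)
    return w / w.sum()

# Resampling: re-estimate the inputs from simulated histories, optimize on
# each draw, and average the resulting weight vectors.
draws = []
for _ in range(n_draws):
    sample = rng.multivariate_normal(mu_hat, sigma_hat, size=n_obs)
    draws.append(mv_weights(sample.mean(axis=0), np.cov(sample.T), risk_aversion))
w_target = np.mean(draws, axis=0)

# Proportional transaction cost of moving from current to target weights.
turnover = np.abs(w_target - w_prev).sum()
net_expected_return = w_target @ mu_hat - cost_rate * turnover
print(w_target, turnover, net_expected_return)
```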
23

Inference for impact evaluation on retrospective data: based on an application concerning collaboration within Resursteam in Uppsala, 2004–2007

Avdic, Daniel January 2008 (has links)
Matching observations on background variables is a practical way to avoid the problems that arise when we want to measure the potential effects of, for example, project initiatives in health care using non-deterministic observational studies. The problem is that our choice of potential control individuals is limited, since the controls often differ from the treated individuals in factors that are highly correlated with the outcome variable we are interested in. Using matching on retrospective data, we can nevertheless estimate counterfactual individuals that can then be used as controls when estimating the treatment effect. Inference for this treatment-effect estimator is problematic, however, because some individuals can potentially be used several times in the analysis, biasing the variance estimator. This thesis instead motivates an alternative method of inference: subsampling. The idea of the procedure is to approximate, asymptotically, the empirical distribution of the estimator of interest by resampling and then base inference on this approximation. The empirical part of the thesis builds on an application concerning Resursteam, whose overall goal is to shorten periods of illness for individuals at risk of long-term sick leave. As a comparison, an earlier evaluation of Resursteam is used in which a different county was used to select suitable controls.
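
The subsampling procedure motivated in the thesis can be sketched generically as follows (an illustration, not the thesis code): recompute the estimator on many subsamples of size b drawn without replacement, use the rescaled deviations to approximate its sampling distribution, and invert that approximation into a confidence interval. The statistic here is a plain mean standing in for the matching-based treatment-effect estimator, and the data are simulated.

```python
# Illustrative only (not the thesis code): subsampling inference for an
# estimator whose variance estimate may be unreliable, such as a matching
# estimator in which controls are reused.
import numpy as np

rng = np.random.default_rng(1)

def estimator(sample):
    """Stand-in statistic; the thesis would compute the matching-based
    treatment-effect estimate on the (sub)sample instead."""
    return sample.mean()

data = rng.normal(loc=0.3, scale=1.0, size=400)   # simulated outcomes
n = len(data)
b = 50                                            # subsample size, b << n
n_sub = 1000

theta_hat = estimator(data)

# Recompute the estimator on subsamples drawn WITHOUT replacement.
reps = np.array([
    estimator(rng.choice(data, size=b, replace=False)) for _ in range(n_sub)
])

# Subsampling approximation: sqrt(b) * (theta_b - theta_hat) mimics the law
# of sqrt(n) * (theta_hat - theta); invert it for a 95% confidence interval.
root = np.sqrt(b) * (reps - theta_hat)
q_lo, q_hi = np.quantile(root, [0.025, 0.975])
ci = (theta_hat - q_hi / np.sqrt(n), theta_hat - q_lo / np.sqrt(n))
print(theta_hat, ci)
```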
25

Landmark Prediction of Survival

Parast, Layla January 2012 (has links)
The importance of developing personalized risk prediction estimates has become increasingly evident in recent years. In general, patient populations may be heterogeneous and represent a mixture of different unknown subtypes of disease. When the source of this heterogeneity and resulting subtypes of disease are unknown, accurate prediction of survival may be difficult. However, in certain disease settings the onset time of an observable intermediate event may be highly associated with these unknown subtypes of disease and thus may be useful in predicting long term survival. Throughout this dissertation, we examine an approach to incorporate intermediate event information for the prediction of long term survival: the landmark model. In Chapter 1, we use the landmark modeling framework to develop procedures to assess how a patient’s long term survival trajectory may change over time given good intermediate outcome indications along with prognosis based on baseline markers. We propose time-varying accuracy measures to quantify the predictive performance of landmark prediction rules for residual life and provide resampling-based procedures to make inference about such accuracy measures. We illustrate our proposed procedures using a breast cancer dataset. In Chapter 2, we aim to incorporate intermediate event time information for the prediction of survival. We propose a fully non-parametric procedure to incorporate intermediate event information when only a single baseline discrete covariate is available for prediction. When a continuous covariate or multiple covariates are available, we propose to incorporate intermediate event time information using a flexible varying coefficient model. To evaluate the performance of the resulting landmark prediction rule and quantify the information gained by using the intermediate event, we use robust non-parametric procedures. We illustrate these procedures using a dataset of post-dialysis patients with end-stage renal disease. In Chapter 3, we consider improving efficiency by incorporating intermediate event information in a randomized clinical trial setting. We propose a semi-nonparametric two-stage procedure to estimate survival by incorporating intermediate event information observed before the landmark time. In addition, we present a testing procedure using these resulting estimates to test for a difference in survival between two treatment groups. We illustrate these proposed procedures using an AIDS dataset.
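
A bare-bones sketch of the landmark idea (illustrative only, not the dissertation's procedures): condition on patients still at risk at a landmark time t0, stratify them by whether the intermediate event has already occurred, and estimate residual survival in each stratum with a Kaplan-Meier curve. The simulated data, landmark time, and grouping rule are assumptions; the dissertation's accuracy measures and resampling-based inference are not reproduced here.

```python
# Illustrative only: landmark analysis of residual survival, stratified by
# whether an intermediate event occurred before the landmark time.
import numpy as np

rng = np.random.default_rng(2)
n = 500
event_time = rng.exponential(scale=5.0, size=n)     # simulated survival times
censor_time = rng.uniform(0.0, 10.0, size=n)
time = np.minimum(event_time, censor_time)          # observed follow-up
died = event_time <= censor_time                    # event indicator
intermediate = rng.exponential(scale=3.0, size=n)   # time of intermediate event

t0 = 2.0                                            # landmark time (assumed)
at_risk = time > t0                                 # still under observation at t0

def km_curve(t, d):
    """Kaplan-Meier estimate: (ordered event times, survival probabilities)."""
    times = np.sort(np.unique(t[d]))
    surv, s = [], 1.0
    for u in times:
        n_risk = np.sum(t >= u)
        n_event = np.sum((t == u) & d)
        s *= 1.0 - n_event / n_risk
        surv.append(s)
    return times, np.array(surv)

# Residual survival measured from t0, by intermediate-event status at t0.
for label, mask in [("intermediate event by t0", at_risk & (intermediate <= t0)),
                    ("no intermediate event by t0", at_risk & (intermediate > t0))]:
    times, surv = km_curve(time[mask] - t0, died[mask])
    print(label, np.round(surv[:5], 3))
```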
26

Understanding Patterns in Infant-Directed Speech in Context: An Investigation of Statistical Cues to Word Boundaries

Hartman, Rose 01 May 2017 (has links)
People talk about coherent episodes of their experience, leading to strong dependencies between words and the contexts in which they appear. Consequently, language within a context is more repetitive and more coherent than language sampled from across contexts. In this dissertation, I investigated how patterns in infant-directed speech differ under context-sensitive compared to context-independent analysis. In particular, I tested the hypothesis that cues to word boundaries may be clearer within contexts. Analyzing a large corpus of transcribed infant-directed speech, I implemented three different approaches to defining context: a top-down approach using the occurrence of key words from pre-determined context lists, a bottom-up approach using topic modeling, and a subjective coding approach where contexts were determined by open-ended, subjective judgments of coders reading sections of the transcripts. I found substantial agreement among the context codes from the three different approaches, but also important differences in the proportion of the corpus that was identified by context, the distribution of the contexts identified, and some characteristics of the utterances selected by each approach. I discuss implications for the use and interpretation of contexts defined in each of these three ways, and the value of a multiple-method approach in the exploration of context. To test the strength of statistical cues to word boundaries in context-specific sub-corpora relative to a context-independent analysis of cues to word boundaries, I used a resampling procedure to compare the segmentability of context sub-corpora defined by each of the three approaches to a distribution of random sub-corpora, matched for size for each context sub-corpus. Although my analyses confirmed that context-specific sub-corpora are indeed more repetitive, the data did not support the hypothesis that speech within contexts provides richer information about the statistical dependencies among phonemes than is available when analyzing the same statistical dependencies without respect to context. Alternative hypotheses and future directions to further elucidate this phenomenon are discussed. / 2019-02-17
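
The size-matched resampling comparison described above can be sketched generically (this is not the dissertation's code or corpus): compute a statistic on a context sub-corpus and compare it with the same statistic on many random sub-corpora of equal size drawn from the full corpus. For brevity the statistic here is a simple repetitiveness measure (tokens per type) and size is matched by utterance count; the dissertation's segmentability analysis over phoneme-level statistical cues is more involved.

```python
# Illustrative only: size-matched resampling comparison of a context
# sub-corpus against random sub-corpora drawn from the whole corpus.
import numpy as np

rng = np.random.default_rng(3)

corpus = [
    "look at the doggy", "the doggy is eating", "do you see the ball",
    "roll the ball to me", "the doggy wants the ball", "time to eat",
    "eat your peas", "more peas please", "look at the big ball",
]                                     # stand-in utterances
context_ids = [0, 1, 4]               # utterances tagged as one context (assumed)

def repetitiveness(utterances):
    """Tokens per type: higher means more repeated vocabulary."""
    tokens = " ".join(utterances).split()
    return len(tokens) / len(set(tokens))

observed = repetitiveness([corpus[i] for i in context_ids])

# Null distribution: random sub-corpora matched in size (utterance count).
n_draws = 2000
null = np.array([
    repetitiveness(list(rng.choice(corpus, size=len(context_ids), replace=False)))
    for _ in range(n_draws)
])

p_value = np.mean(null >= observed)
print(round(observed, 3), round(null.mean(), 3), p_value)
```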
27

Evaluation of Calibration Methods to Adjust for Infrequent Values in Data for Machine Learning

Dutra Calainho, Felipe January 2018 (has links)
The performance of supervised machine learning algorithms is highly dependent on the distribution of the target variable. Infrequent values are more difficult to predict, as there are fewer examples for the algorithm to learn patterns that contain those values. These infrequent values are a common problem with real data, being the object of interest in many fields such as medical research, finance and economics, just to mention a few. Problems regarding classification have been comprehensively studied. For regression, on the other hand, few contributions are available. In this work, two ensemble methods from classification are adapted to the regression case. Additionally, existing oversampling techniques, namely SmoteR, are tested. Therefore, the aim of this research is to examine the influence of oversampling and ensemble techniques on the accuracy of regression models when predicting infrequent values. To assess the performance of the proposed techniques, two data sets are used: one concerning house prices, the other concerning patients with Parkinson's Disease. The findings corroborate the usefulness of the techniques for reducing the prediction error of infrequent observations. In the best case, the proposed Random Distribution Sample Ensemble reduced the overall RMSE by 8.09% and the RMSE for infrequent values by 6.44% when compared with the best performing benchmark for the housing data set.
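
A simplified SmoteR-style oversampling step might look like the sketch below (an assumption-laden illustration, not the thesis code): flag the rare tail of the target variable, then create synthetic cases by interpolating between a rare case and one of its nearest rare neighbours, interpolating the target value the same way. Defining "rare" as the top decile of y stands in for SmoteR's relevance function.

```python
# Illustrative only: a simplified SmoteR-style oversampling step for
# regression on infrequent target values.
import numpy as np

rng = np.random.default_rng(4)

X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)

# "Rare" defined here simply as the top decile of y (assumed rule;
# SmoteR proper uses a relevance function instead).
rare = y >= np.quantile(y, 0.9)
X_rare, y_rare = X[rare], y[rare]

def smoter_like(X_r, y_r, n_synth, k=5):
    """Generate n_synth synthetic examples by interpolating rare cases."""
    synth_X, synth_y = [], []
    for _ in range(n_synth):
        i = rng.integers(len(X_r))
        # k nearest rare neighbours of case i (excluding itself).
        dist = np.linalg.norm(X_r - X_r[i], axis=1)
        neighbours = np.argsort(dist)[1:k + 1]
        j = rng.choice(neighbours)
        lam = rng.uniform()                      # interpolation weight
        synth_X.append(X_r[i] + lam * (X_r[j] - X_r[i]))
        synth_y.append(y_r[i] + lam * (y_r[j] - y_r[i]))
    return np.array(synth_X), np.array(synth_y)

X_new, y_new = smoter_like(X_rare, y_rare, n_synth=50)
X_aug = np.vstack([X, X_new])
y_aug = np.concatenate([y, y_new])
print(X_aug.shape, y_aug.shape)
```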
28

Statistical comparisons for nonlinear curves and surfaces

Zhao, Shi 31 May 2018 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Estimation of nonlinear curves and surfaces has long been the focus of semiparametric and nonparametric regression. The advances in related model fitting methodology have greatly enhanced the analyst’s modeling flexibility and have led to scientific discoveries that would be otherwise missed by the traditional linear model analysis. What has been less forthcoming are the testing methods concerning nonlinear functions, particularly for comparisons of curves and surfaces. Few of the existing methods are carefully disseminated, and most of these methods are subject to important limitations. In the implementation, few off-the-shelf computational tools have been developed with syntax similar to the commonly used model fitting packages, and thus are less accessible to practical data analysts. In this dissertation, I reviewed and tested the existing methods for nonlinear function comparison and examined their operational characteristics. Some theoretical justifications were provided for the new testing procedures. Real data examples were included illustrating the use of the newly developed software. A new R package and a more user-friendly interface were created for enhanced accessibility. / 2020-08-22
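
One common resampling-based way to compare two nonlinear curves, shown here purely as an illustration (in Python rather than the R tooling the dissertation describes, and not necessarily one of the methods it reviews), is a permutation test: fit a smooth to each group, take the integrated squared difference between the fits as the test statistic, and shuffle the group labels to build its null distribution.

```python
# Illustrative only: permutation test for equality of two nonlinear mean
# curves, using the mean squared gap between per-group smooths.
import numpy as np

rng = np.random.default_rng(5)

n = 120
x = rng.uniform(0, 1, size=2 * n)
group = np.repeat([0, 1], n)
y = np.sin(2 * np.pi * x) + 0.3 * group * x**2 + rng.normal(scale=0.2, size=2 * n)

grid = np.linspace(0, 1, 50)

def smooth(xg, yg):
    """Crude smoother for the sketch: cubic polynomial fit evaluated on the grid."""
    return np.polyval(np.polyfit(xg, yg, deg=3), grid)

def curve_gap(labels):
    f0 = smooth(x[labels == 0], y[labels == 0])
    f1 = smooth(x[labels == 1], y[labels == 1])
    return np.mean((f0 - f1) ** 2)

observed = curve_gap(group)

# Null distribution: recompute the gap after shuffling group labels.
null = np.array([curve_gap(rng.permutation(group)) for _ in range(1000)])
p_value = np.mean(null >= observed)
print(observed, p_value)
```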
29

Creation of a High Density Soybean Linkage Map, QTL Mapping and the Effects of Marker Number, Population Size and Significance Threshold on Characterization of Quantitative Trait Loci

Freewalt, Keith January 2014 (has links)
No description available.
30

EXPLORING BOOTSTRAP APPLICATIONS TO LINEAR STRUCTURAL EQUATIONS

PEI, HUILING 21 May 2002 (has links)
No description available.
