71

Spatial Audio for the Mobile User

Sánchez Pardo, Ignacio January 2005
Voice over the Internet Protocol (VoIP) is one of the latest and most successful Internet services. It takes advantage of Wireless Local Area Networks (WLANs) and broadband connections to provide high-quality, low-cost telephony over the Internet or an intranet. This project exploits features of VoIP to create a communication scenario where several conversations can be held at the same time, with each conversation placed at a virtual location in space. The report includes a theoretical analysis of psychoacoustic parameters and their experimental implementation, together with the design of a spatial audio module for the Session Initiation Protocol (SIP) User Agent “minisip”. Besides the 3D sound environment, this project introduces multitasking as an integrative feature for “minisip”, gathering the various sound inputs connected by SIP sessions to the “minisip” interface and combining them into a single output. This latter feature is achieved using resampling as a core technology. The effects of the increased traffic to and from the user due to the support of multiple streams are also discussed.
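The mixing step described above — resampling several streams to a common rate and combining them into one output — can be illustrated with a short sketch. This is an illustrative Python sketch under stated assumptions: the linear-interpolation resampler and all function names are inventions for exposition, not the actual resampling code of “minisip”.

```python
def resample_linear(samples, src_rate, dst_rate):
    """Resample a mono signal to dst_rate via linear interpolation."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

def mix_streams(streams, dst_rate):
    """Resample each (samples, rate) stream to dst_rate and sum them."""
    resampled = [resample_linear(s, r, dst_rate) for s, r in streams]
    n = min(len(s) for s in resampled)
    return [sum(s[i] for s in resampled) for i in range(n)]
```

A real VoIP mixer would additionally handle jitter, clipping, and per-stream spatialization; the sketch only shows why a common sample rate is the prerequisite for combining streams.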
72

On Sufficient Dimension Reduction via Asymmetric Least Squares

Soale, Abdul-Nasah, 0000-0003-2093-7645 January 2021
Accompanying the advances in computer technology is an increased collection of high-dimensional data in many scientific and social studies. Sufficient dimension reduction (SDR) is a statistical method that enables us to reduce the dimension of predictors without loss of regression information. In this dissertation, we introduce principal asymmetric least squares (PALS) as a unified framework for linear and nonlinear sufficient dimension reduction. Classical methods such as sliced inverse regression (Li, 1991) and principal support vector machines (Li, Artemiou and Li, 2011) often do not perform well in the presence of heteroscedastic error, while our proposal addresses this limitation by synthesizing different expectile levels. Through extensive numerical studies, we demonstrate the superior performance of PALS in terms of both computation time and estimation accuracy. For the asymptotic analysis of PALS for linear sufficient dimension reduction, we develop new tools to compute the derivative of an expectation of a non-Lipschitz function. PALS is not designed to handle a symmetric link function between the response and the predictors. As a remedy, we develop expectile-assisted inverse regression estimation (EA-IRE) as a unified framework for moment-based inverse regression. We propose to first estimate the expectiles through kernel expectile regression, and then carry out dimension reduction based on random projections of the regression expectiles. Several popular inverse regression methods in the literature, including sliced inverse regression, sliced average variance estimation, and directional regression, are extended under this general framework. The proposed expectile-assisted methods outperform existing moment-based dimension reduction methods in both numerical studies and an analysis of the Big Mac data. / Statistics
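The asymmetric least squares idea underlying PALS can be illustrated by computing a single expectile: the tau-th expectile minimizes a squared loss that weights positive residuals by tau and negative ones by 1 − tau. The fixed-point solver below is an illustrative Python sketch, not the PALS estimator itself.

```python
def expectile(data, tau, iters=100):
    """Compute the tau-th expectile by iteratively reweighted averaging.

    The expectile m solves sum_i w_i * (x_i - m) = 0 with asymmetric
    weights w_i = tau if x_i > m else (1 - tau), so we iterate
    m <- sum(w * x) / sum(w) until it stabilizes.
    """
    m = sum(data) / len(data)          # start at the mean (tau = 0.5 case)
    for _ in range(iters):
        w = [tau if x > m else 1 - tau for x in data]
        m = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return m
```

For tau = 0.5 the expectile is the ordinary mean; tau levels above 0.5 shift the solution toward the upper tail, which is what lets PALS capture heteroscedastic structure by synthesizing several levels.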
73

The Single Imputation Technique in the Gaussian Mixture Model Framework

Aisyah, Binti M.J. January 2018
Missing data is a common issue in data analysis, and numerous techniques have been proposed to deal with it. Imputation, the process of replacing missing values with plausible ones, is the most popular strategy for handling missing data. The two imputation techniques most frequently cited in the literature are single imputation and multiple imputation. Multiple imputation, often regarded as the gold-standard technique, was proposed by Rubin in 1987 to address missing data; however, inconsistency is its major problem. Single imputation is less popular in missing data research because of bias and reduced-variability issues. One solution to improve the single imputation technique in the basic regression model is to add a residual term to reduce the bias and restore variability; the residual is drawn under a normality assumption with a mean of 0 and a variance equal to the residual variance. Although newer single imputation methods, such as the stochastic regression model and hot-deck imputation, can improve the variability and bias issues, single imputation techniques still suffer from uncertainty that may lead to underestimated R-squared values or standard errors in the analysis results. The research reported in this thesis provides two imputation solutions for the single imputation technique. In the first procedure, the wild bootstrap is proposed to better capture the uncertainty of the residual variance in the regression model. In the second, predictive mean matching (PMM) is enhanced: the regression model generates the recipient values, the donor values are taken from the observed data, and each missing value is imputed by randomly drawing one of the observations in the donor pool.
The size of the donor pool is critical to the quality of the imputed values. A fixed donor pool size has been employed in many existing works on PMM imputation, but it may be inappropriate in certain circumstances, such as when the data distribution has high-density regions. Instead of a fixed donor pool size, the proposed method applies a radius-based solution to determine the size of the donor pool. Both proposed imputation procedures are combined with the Gaussian mixture model framework to preserve the original data distribution. The results reported in the thesis from experiments on benchmark and artificial data sets confirm the improvement for further data analysis. The proposed approaches are therefore worthwhile candidates for further investigation and experiments.
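The radius-based predictive mean matching step can be sketched roughly as follows. This is an illustrative Python sketch under assumptions: the function name, the fall-back rule, and the idea of passing precomputed regression predictions are inventions for exposition, not the thesis's implementation.

```python
import random

def pmm_impute(pred_missing, pred_observed, y_observed, radius, rng=random):
    """Impute one missing value by radius-based predictive mean matching.

    Donors are observed cases whose regression prediction falls within
    `radius` of the missing case's prediction; the imputed value is a
    random draw of an *observed* y from that pool, so imputed values
    always come from the data's own distribution.
    """
    donors = [y for p, y in zip(pred_observed, y_observed)
              if abs(p - pred_missing) <= radius]
    if not donors:                      # assumed fall-back: nearest donor
        i = min(range(len(pred_observed)),
                key=lambda j: abs(pred_observed[j] - pred_missing))
        return y_observed[i]
    return rng.choice(donors)
```

The point of the radius (versus a fixed-k pool) is visible here: in a high-density region the pool grows to include all comparable donors, while in a sparse region it shrinks rather than pulling in dissimilar cases.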
74

Comparing the Statistical Tests for Homogeneity of Variances

Mu, Zhiqiang 15 August 2006
Testing the homogeneity of variances is an important problem in many applications, since statistical methods in frequent use, such as ANOVA, assume equal variances for two or more groups of data. However, testing the equality of variances is a difficult problem because many of the tests are not robust against non-normality. It is known that the kurtosis of the distribution of the source data can affect the performance of tests for variance. We review the classical tests and their latest, more robust modifications, together with other tests that have recently appeared in the literature, and use bootstrap and permutation techniques to test for equal variances. We compare the performance of these tests under different types of distributions, sample sizes, and true ratios of the population variances. Monte Carlo methods are used in this study to calculate empirical powers and type I errors under different settings.
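One resampling-based approach of the kind compared in such studies can be sketched as a permutation test on mean-centered groups. This is an illustrative Python sketch, not one of the specific tests evaluated in the thesis; the centering step and the variance-difference statistic are assumptions chosen for simplicity.

```python
import random

def perm_var_test(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for equality of two group variances."""
    rng = random.Random(seed)

    def var(a):
        m = sum(a) / len(a)
        return sum((v - m) ** 2 for v in a) / (len(a) - 1)

    # Center each group so permuting only mixes spreads, not locations.
    xc = [v - sum(x) / len(x) for v in x]
    yc = [v - sum(y) / len(y) for v in y]
    obs = abs(var(xc) - var(yc))
    pooled = xc + yc
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(var(pooled[:len(x)]) - var(pooled[len(x):])) >= obs:
            count += 1
    return count / n_perm
```

A Monte Carlo power study of the sort described above would wrap this test (and its competitors) in a loop over simulated datasets and record the rejection rate at a chosen significance level.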
75

Toward Automatically Composed FPGA-Optimized Robotic Systems Using High-Level Synthesis

Lin, Szu-Wei 14 April 2023
Robotic systems are known to be computationally intensive. To improve performance, developers tend to implement custom robotic algorithms in hardware. However, a full robotic system typically consists of many interconnected algorithmic components that can easily max out FPGA resources, requiring the designer to adjust each algorithm for each new robotic system in order to meet specific system requirements and limited resources. Furthermore, manual development of digital circuitry using a hardware description language (HDL) such as Verilog or VHDL is error-prone and time-consuming, often taking months or years to develop and verify. Recent developments in high-level synthesis (HLS) enable automatic generation of digital circuit designs from high-level languages such as C or C++. In this thesis, we propose to develop a database of HLS-generated Pareto-optimal hardware designs for various robotic algorithms, so that a fully automated process can optimally compose a complete robotic system given a set of system requirements. In the first part of this thesis, we take a first step towards this goal by developing a system for automatic selection of an Occupancy Grid Mapping (OGM) implementation given specific system requirements and resource thresholds. We first generate hundreds of possible hardware designs via Vitis HLS, varying parameters to explore the design space. We then present results that evaluate and explore the trade-offs of these designs with respect to accuracy, latency, resource utilization, and power. Using these results, we create a software tool that automatically selects an optimal OGM implementation. After implementing selected designs on a PYNQ-Z2 FPGA board, our results show that the runtime of the algorithm improves by 35x over a C++-based implementation.
In the second part of this thesis, we extend these techniques to the Particle Filter (PF) algorithm by implementing 7 different resampling methods and varying parameters in hardware, again via HLS. In this case, we are able to explore and analyze thousands of PF designs. Our evaluation shows that the algorithm using the Local Selection Resampling method reaches the fastest runtime on an FPGA and can be as much as 10x faster than in C++. Finally, we build another design selection tool that automatically generates an optimal PF implementation from this design space for a given query set of requirements.
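A PF resampling method of the kind ported to HLS can first be sketched in software. The sketch below shows systematic resampling as a representative example — an assumption for illustration; the thesis's Local Selection Resampling and its HLS implementations are not reproduced here.

```python
def systematic_resample(weights, u0):
    """Return resampled particle indices for one particle-filter step.

    Systematic resampling places len(weights) evenly spaced pointers,
    offset by a single uniform draw u0 in [0, 1), over the cumulative
    normalized weights, so heavy particles are duplicated and light
    ones dropped with low variance.
    """
    n = len(weights)
    total = sum(weights)
    positions = [(u0 + i) / n for i in range(n)]   # evenly spaced pointers
    indices, cum, j = [], weights[0] / total, 0
    for p in positions:
        while p > cum:
            j += 1
            cum += weights[j] / total
        indices.append(j)
    return indices
```

Its single pass over the weights and lack of per-particle random draws are also what make this family of methods attractive for fixed-latency hardware pipelines.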
76

Distribution-Based Adversarial Multiple-Instance Learning

Chen, Sherry 27 January 2023
No description available.
77

Parameter Estimation from Retarding Potential Analyzers in the Presence of Realistic Noise

Debchoudhury, Shantanab 15 March 2019
Retarding Potential Analyzers (RPA) have a rich flight heritage. These instruments are largely popular since a single current-voltage (I-V) profile can provide in-situ measurements of ion temperature, velocity and composition. The estimation of parameters from an RPA I-V curve is affected by grid geometries and non-ideal biasing, which have been studied in the past. In this dissertation, we explore the uncertainties associated with estimated ion parameters from an RPA in the presence of instrument noise. Simulated noisy I-V curves, representative of those expected from a mid-inclination low Earth orbit, are fitted with standard curve fitting techniques to reveal the degree of uncertainty and the inter-dependence between expected errors at varying levels of additive noise. The main motive is to provide experimenters working with RPA data with a measure of error scalable to different geometries. In subsequent work, we develop a statistics-based bootstrap technique designed to mitigate the large inter-dependency between spacecraft potential and ion velocity errors, which were seen to be highly correlated when estimated using a standard algorithm. The new algorithm - BATFORD, an acronym for "Bootstrap-based Algorithm with Two-stage Fit for Orbital RPA Data analysis" - was applied to a simulated dataset treated with noise from a realistic, laboratory-calibration-based noise model, and was also tested on real in-flight data from the C/NOFS mission. BATFORD outperforms a traditional algorithm in simulation and also provides realistic in-situ estimates from a section of a C/NOFS orbit where the satellite passed through a plasma bubble. The low signal-to-noise ratios (SNR) of measured I-Vs in these bubbles make autonomous parameter estimation notoriously difficult. We thus propose a method for robust autonomous analysis of RPA data that is reliable in low-SNR environments and applicable to all RPA designs.
/ Doctor of Philosophy / The plasma environment in Earth’s upper atmosphere is dynamic and diverse. Of particular interest is the ionosphere, a region of dense ionized gases that directly affects the variability of weather in space and the propagation of radio wave signals across Earth. Retarding potential analyzers (RPA) are instruments that can directly measure the characteristics of this environment in flight. With the growing popularity of small satellites, these probes need to be studied in greater detail to exploit their ability to reveal how ions, the positively charged particles, behave in this region. In this dissertation, we aim to understand how RPA measurements, obtained as current-voltage relationships, are affected by electronic noise. We propose a methodology for understanding the associated uncertainties in the estimated parameters through a simulation study. The results show that a statistics-based algorithm can help interpret RPA data in the presence of noise, and can make autonomous, robust and more accurate measurements compared to a traditional non-linear curve-fitting routine. The dissertation presents the challenges in analyzing RPA data affected by noise and proposes a new method to better interpret measurements in the ionosphere, enabling further scientific progress in the space physics community.
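The bootstrap idea behind this approach — refit the model to residual-resampled curves and read parameter uncertainty from the spread of the refits — can be sketched with a toy straight-line model standing in for the RPA I-V physics. All names and the linear model are illustrative assumptions, not the BATFORD algorithm itself.

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def bootstrap_slopes(xs, ys, n_boot=200, seed=0):
    """Residual-bootstrap distribution of the fitted slope.

    Fit once, resample the residuals with replacement onto the fitted
    curve, refit each synthetic curve, and collect the refit slopes;
    their spread estimates the parameter uncertainty under noise.
    """
    rng = random.Random(seed)
    slope, icept = fit_line(xs, ys)
    resid = [y - (slope * x + icept) for x, y in zip(xs, ys)]
    slopes = []
    for _ in range(n_boot):
        ys_b = [slope * x + icept + rng.choice(resid) for x in xs]
        slopes.append(fit_line(xs, ys_b)[0])
    return slopes
```

In the RPA setting the linear fit would be replaced by the nonlinear I-V model, and correlated parameters (e.g. spacecraft potential and ion velocity) would be handled in separate stages, as the two-stage fit in the dissertation does.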
78

Statistical Estimators of the Finite Population Parameters in the Case of Sample Rotation

Chadyšas, Viktoras 03 March 2010
The dissertation analyzes how to incorporate auxiliary information into the estimation of the finite population total, distribution function and quantiles in the case of sample rotation. First, estimation of the finite population total under sample rotation is considered, focusing on the construction of a total estimator for a rotated sampling design. A successive sampling procedure using a multi-phase sampling design is developed, and a composite ratio-type estimator of the total using auxiliary information, together with its approximate variance, is constructed. A simulation study based on real population data is performed, and the proposed estimators are compared with a traditional estimator of the total. Composite estimators of the finite population distribution function, constructed under sampling on two occasions, are then considered. Composite regression- and ratio-type estimators are constructed, using values of the study variable obtained on the first occasion as auxiliary information. The optimal estimators, in the sense of minimal variance, are also obtained. A simulation study based on real population data compares the proposed estimators with a traditional estimator of the distribution function. Several quantile estimators are proposed by inverting the distribution function estimators constructed with the use of auxiliary information. Procedures for obtaining confidence interval estimates for quantiles in a finite population, based on resampling methods, are also constructed, and their accuracy and efficiency are compared in a simulation study with real data.
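The ratio-type building block used by these composite estimators can be sketched in a few lines. This is an illustrative Python sketch of the classical ratio estimator of a total; the rotation and multi-phase machinery of the dissertation is not reproduced, and the function name is an assumption.

```python
def ratio_estimator_total(y_sample, x_sample, x_pop_total):
    """Ratio-type estimate of the population total of y.

    Uses auxiliary information: the sample ratio sum(y)/sum(x) is scaled
    by the *known* population total of the auxiliary variable x, which
    improves on the plain expansion estimator when y and x are roughly
    proportional.
    """
    return sum(y_sample) / sum(x_sample) * x_pop_total
```

In the rotation setting, values of the study variable collected on an earlier occasion play the role of the auxiliary variable x, and two such estimators are combined with weights chosen to minimize the variance of the composite.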
79

Improved Methods for Pharmacometric Model-Based Decision-Making in Clinical Drug Development

Dosne, Anne-Gaëlle January 2016
Pharmacometric model-based analysis using nonlinear mixed-effects models (NLMEM) has to date mainly been applied to learning activities in drug development. However, such analyses can also serve as the primary analysis in confirmatory studies, which is expected to bring higher power than traditional analysis methods, among other advantages. Because of the high expertise in designing and interpreting confirmatory studies with other types of analyses and because of a number of unresolved uncertainties regarding the magnitude of potential gains and risks, pharmacometric analyses are traditionally not used as primary analysis in confirmatory trials. The aim of this thesis was to address current hurdles hampering the use of pharmacometric model-based analysis in confirmatory settings by developing strategies to increase model compliance to distributional assumptions regarding the residual error, to improve the quantification of parameter uncertainty and to enable model prespecification. A dynamic transform-both-sides approach capable of handling skewed and/or heteroscedastic residuals and a t-distribution approach allowing for symmetric heavy tails were developed and proved relevant tools to increase model compliance to distributional assumptions regarding the residual error. A diagnostic capable of assessing the appropriateness of parameter uncertainty distributions was developed, showing that currently used uncertainty methods such as bootstrap have limitations for NLMEM. A method based on sampling importance resampling (SIR) was thus proposed, which could provide parameter uncertainty in many situations where other methods fail such as with small datasets, highly nonlinear models or meta-analysis. SIR was successfully applied to predict the uncertainty in human plasma concentrations for the antibiotic colistin and its prodrug colistin methanesulfonate based on an interspecies whole-body physiologically based pharmacokinetic model. 
Lastly, strategies based on model-averaging were proposed to enable full model prespecification and proved to be valid alternatives to standard methodologies for studies assessing the QT prolongation potential of a drug and for phase III trials in rheumatoid arthritis. In conclusion, improved methods for handling residual error, parameter uncertainty and model uncertainty in NLMEM were successfully developed. As confirmatory trials are among the most demanding in terms of patient-participation, cost and time in drug development, allowing (some of) these trials to be analyzed with pharmacometric model-based methods will help improve the safety and efficiency of drug development.
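The sampling importance resampling (SIR) step mentioned above can be sketched generically: draw candidates from a proposal distribution, weight each by the ratio of target to proposal density, then resample proportionally to the weights. The Gaussian-style target and uniform proposal in the test are illustrative stand-ins, not the NLMEM parameter uncertainty distributions of the thesis.

```python
import math
import random

def sir(target_logpdf, proposal_logpdf, proposal_draws, m, seed=0):
    """Resample m values from proposal_draws with importance weights.

    Weights are target/proposal density ratios, computed on the log
    scale and shifted by their maximum for numerical stability before
    exponentiating.
    """
    rng = random.Random(seed)
    logw = [target_logpdf(x) - proposal_logpdf(x) for x in proposal_draws]
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]       # stabilized weights
    return rng.choices(proposal_draws, weights=w, k=m)
```

For parameter uncertainty, the proposal would typically be a wide approximation of the uncertainty distribution (e.g. from the estimated covariance matrix), and the resampled values form the corrected uncertainty sample.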
80

Analysis of the Dynamics of Potassium and Nitrate in Unsaturated Soil Columns Using Nonlinear and Multi-Response Models

Peixoto, Ana Patricia Bastos 02 August 2013
In recent years, a large number of computational models have been proposed to describe the movement of solutes in the soil profile; nevertheless, modeling these phenomena so that the model can predict the displacement and retention of solutes in nature remains difficult. The aim of this work was therefore to use a statistical model to describe the transport of solutes in the soil profile. An experiment was conducted in the laboratory, observing the levels of potassium and nitrate along the profile of two soils, an Oxisol (Haplustox) and a Hapludox. Two approaches were considered to make inferences about these variables. In the first, a nonlinear regression model was fitted to each variable, with model parameters that have a practical interpretation in soil science. For this model, nonlinearity measures were examined to verify the asymptotic properties of the parameter estimators, and both the least squares method and the bootstrap method were considered for estimation. In addition, a diagnostic analysis was performed to verify the adequacy of the model and to identify outliers. In the second approach, a multi-response model was used to analyze the joint behavior of nitrate and potassium along the soil profile, with parameters estimated by maximum likelihood. In both cases, the models adequately described the behavior of the solutes in the soils, offering an alternative for researchers who study soils. The four-parameter logistic model stood out for its better properties, such as low nonlinearity measures and good fit quality.
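The four-parameter logistic model highlighted above can be written down directly. This is a minimal Python sketch under assumptions — the parameterization shown is one common 4PL form, and the fitting, diagnostic, and bootstrap machinery of the thesis is not reproduced; in practice one would minimize `sse` with a numerical optimizer.

```python
import math

def logistic4(x, lower, upper, x50, slope):
    """Four-parameter logistic curve.

    `lower` and `upper` are the two asymptotes, `x50` the midpoint
    (e.g. depth at half-change in solute level), and `slope` the
    steepness of the transition.
    """
    return lower + (upper - lower) / (1.0 + math.exp(slope * (x - x50)))

def sse(params, xs, ys):
    """Sum of squared errors of the 4PL model for given parameters."""
    return sum((y - logistic4(x, *params)) ** 2 for x, y in zip(xs, ys))
```

Each parameter having a direct physical reading is what the abstract refers to as a "practical interpretation": the asymptotes bound the solute level, and x50 and slope describe where and how fast it changes along the profile.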
