11

Assessing Nonlinear Relationships through Rich Stimulus Sampling in Repeated-Measures Designs

Cole, James Jacob 01 August 2018 (has links)
Explaining a phenomenon often requires identifying an underlying relationship between two variables. However, it is common practice in psychological research to sample only a few values of an independent variable (IV). Young, Cole, and Sutherland (2012) showed that this practice can impair model selection in between-subjects designs. The current study extends that line of research to within-subjects designs. In two Monte Carlo simulations, model discrimination under systematic sampling of 2, 3, or 4 levels of the IV was compared with that under random uniform sampling and sampling from a Halton sequence. The number of subjects, the number of observations per subject, the effect size, and the between-subject parameter variance in the simulated experiments were also manipulated. Random sampling outperformed the other methods in model discrimination, with only small, function-specific costs to parameter estimation. Halton sampling also produced good results but was less consistent. The systematic sampling methods were generally rank-ordered by the number of levels they sampled.
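
As a rough illustration of the three ways of choosing predictor values compared in these simulations, the following Python sketch generates IV values under systematic, uniform-random, and Halton sampling. Function names and the unit range are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
from scipy.stats import qmc

def systematic_levels(n_trials, n_levels, lo=0.0, hi=1.0):
    """Repeat a small fixed set of equally spaced IV levels (e.g., 2, 3, or 4)."""
    levels = np.linspace(lo, hi, n_levels)
    return np.resize(levels, n_trials)          # cycle the levels across trials

def uniform_random(n_trials, lo=0.0, hi=1.0, seed=None):
    """Sample IV values uniformly at random over the whole range."""
    rng = np.random.default_rng(seed)
    return rng.uniform(lo, hi, size=n_trials)

def halton_sequence(n_trials, lo=0.0, hi=1.0, seed=None):
    """Draw quasi-random IV values from a one-dimensional Halton sequence."""
    sampler = qmc.Halton(d=1, seed=seed)
    return lo + (hi - lo) * sampler.random(n_trials).ravel()
```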
12

Local Part Model for Action Recognition in Realistic Videos

Shi, Feng January 2014 (has links)
This thesis presents a framework for automatic recognition of human actions in uncontrolled, realistic video data such as movies, internet and surveillance videos. The human action recognition problem is approached from the perspective of local spatio-temporal features and the bag-of-features representation. The bag-of-features model contains only statistics of unordered low-level primitives, so any information concerning temporal ordering and spatial structure is lost. To address this issue, we propose a novel multiscale local part model that maintains both structural information and the ordering of local events for action recognition. The model includes a coarse primitive-level root feature covering event-content statistics and higher-resolution overlapping part features incorporating local structure and temporal relationships. To extract the local spatio-temporal features, we investigate a random sampling strategy for efficient action recognition, and we introduce the idea of using a very high sampling density for efficient and accurate classification. We further explore the potential of the method by jointly optimizing two constraints: classification accuracy and efficiency. On the performance side, we propose a new local descriptor, called GBH, based on spatial and temporal gradients. It significantly improves on the purely spatial gradient-based HOG descriptor for action recognition while preserving high computational efficiency. We also show that the performance of the state-of-the-art MBH descriptor can be improved with a discontinuity-preserving optical flow algorithm. In addition, a new method based on the histogram intersection kernel is introduced to combine multiple channels of different descriptors; it improves recognition accuracy with multiple descriptors while speeding up the classification process. On the efficiency side, we apply PCA to reduce the feature dimension, which results in fast bag-of-features matching, and we evaluate the FLANN method for real-time action recognition. Extensive experiments on real-world videos from challenging public action datasets show that our methods achieve state-of-the-art results with real-time computational potential, highlighting both their effectiveness and their efficiency.
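
The abstract mentions combining multiple descriptor channels with a histogram intersection kernel; a minimal, generic sketch of that idea follows (the simple averaging of per-channel kernels is an assumption, not the thesis's exact weighting scheme).

```python
import numpy as np

def histogram_intersection(A, B):
    """Histogram intersection kernel: K[i, j] = sum_k min(A[i, k], B[j, k])."""
    # A: (n, d) and B: (m, d) bag-of-features histograms -> (n, m) kernel matrix
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

def combined_kernel(channels_a, channels_b):
    """Average the per-descriptor kernels into a single multi-channel kernel."""
    kernels = [histogram_intersection(a, b) for a, b in zip(channels_a, channels_b)]
    return sum(kernels) / len(kernels)
```

The resulting matrix can be fed to a classifier that accepts precomputed kernels, for example scikit-learn's SVC(kernel='precomputed').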
13

Random Sampling of Steel Scrap : A novel method of recycling

Sirén, Patrik, Nguyen, John January 2013 (has links)
Today, the alloy content of steel scrap deliveries in Sweden is determined by the waste management company through test melts. Random sampling analysis (RSA) is an alternative method, under development, for determining the alloy composition of steel scrap. The method evaluates the alloy composition of a delivery based on a number of randomly chosen steel scrap units. RSA is a surface analysis performed over a distributed area: a grid is used to mark the random steel scrap units for evaluation, which means that the surface fraction determines the odds of a piece of scrap being analyzed. In a previous study of RSA, 100 randomly chosen scrap units were evaluated for alloy composition with Optical Emission Spectroscopy (OES). Those scrap deliveries were thereafter sent to an electric arc furnace for melting, so that the RSA analysis could be compared with samples taken after scrap melting. That study, however, assumed that all scrap units have the same weight. In the present study, the weights of the scrap units in the RSA were instead assumed to vary. Using MATLAB® and the alloy composition data acquired from the earlier study, a simulation was made in which 100 pieces and 100 analyses were drawn to determine the margin of error relative to the earlier study. A further goal was to see whether the variance of the weights had any relation to the absolute deviation of each element in the alloy composition. The results showed no relation between the absolute deviation of each element and the weight distribution in the population, indicating that factors other than the weight distribution of the samples are involved. The average margin of error over all elements was calculated to be 5.94% for a weight distribution of 0.1:0.1:10 kg, indicating that RSA is accurate, or close to it, for old steel scrap deliveries even with that weight distribution. The highest margins of error were obtained for W, Ce and Ti, at 18.6%, 14.89% and 10.71% respectively; all other elements had a margin of error below 10%. This suggests that, for RSA on old steel scrap deliveries, a margin of error of 10% would be a good benchmark for the accuracy of the analysis.
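
The original simulation was written in MATLAB; as a hedged Python analogue of the core loop (the array layout, units, and error definition below are assumptions made for illustration, not the study's exact code), the idea of repeatedly drawing random scrap units and comparing compositions might look like this:

```python
import numpy as np

def simulate_rsa(compositions, weights, n_pieces=100, n_runs=100, seed=None):
    """Monte Carlo sketch of random sampling analysis: repeatedly draw n_pieces
    scrap units at random and compare the weight-averaged alloy composition of
    the sample with that of the whole delivery."""
    rng = np.random.default_rng(seed)
    compositions = np.asarray(compositions)   # (n_units, n_elements), e.g. in wt%
    weights = np.asarray(weights)              # (n_units,), piece weights in kg
    true_comp = np.average(compositions, axis=0, weights=weights)
    rel_errors = np.empty((n_runs, compositions.shape[1]))
    for run in range(n_runs):
        # assumes n_pieces <= number of scrap units in the delivery
        idx = rng.choice(len(weights), size=n_pieces, replace=False)
        sample_comp = np.average(compositions[idx], axis=0, weights=weights[idx])
        rel_errors[run] = np.abs(sample_comp - true_comp) / true_comp
    return rel_errors.mean(axis=0)             # mean relative error per element
```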
14

Apport de l'échantillonnage aléatoire à temps quantifié pour le traitement en bande de base dans un contexte radio logicielle restreinte / Contribution of the time-quantized random sampling technique applied to the base-band stage of software defined radio receivers

Maalej, Asma 23 May 2012 (has links)
This Ph.D. work deals with the design of optimized multistandard radio receivers able to process signals with heterogeneous specifications. The idea is to apply random sampling at the baseband stage of a software-defined radio receiver in order to take advantage of its anti-aliasing property. The novelty of this work is the analytic study of the alias attenuation achieved by time-quantized random sampling, the version of random sampling best suited to hardware implementation. A second contribution is the analytic study of time-quantized pseudo-random sampling (TQ-PRS), whose importance lies in its very simple hardware implementation. The theoretical formulations allow the alias attenuation to be estimated as a function of the time-quantization factor and the oversampling ratio. These alias-attenuation figures are used to dimension the baseband stage of the proposed multistandard receiver architecture, covering different configurations of the baseband stage governed by the performance of the analog-to-digital converter (ADC) used; the dimensioning also reveals that the choice of time-quantization factor differs from one standard to another. The work demonstrates that applying TQ-PRS at the ADC leads either to a reduction of the anti-aliasing filter order or to a reduction of the sampling frequency. An overall power-consumption budget shows a 30% reduction in the consumption of the analog baseband stage; when the TQ-PRS clock generator and the digital channel-selection stage are taken into account, the gain becomes 27.5%.
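
For readers unfamiliar with the sampling scheme, here is a minimal sketch of generating TQ-PRS sample instants. This is a textbook-style formulation under stated assumptions, not necessarily the exact scheme analysed in the thesis.

```python
import numpy as np

def tq_prs_instants(n_samples, T, q, seed=0):
    """Time-quantized pseudo-random sampling instants (simplified sketch):
    each nominal instant n*T is offset by a pseudo-random amount drawn from
    the quantized grid {0, T/q, ..., (q-1)*T/q}, where q is the
    time-quantization factor."""
    rng = np.random.default_rng(seed)          # pseudo-random, hence reproducible
    offsets = rng.integers(0, q, size=n_samples) * (T / q)
    return np.arange(n_samples) * T + offsets
```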
15

The Effects of Topography on Spatial Tornado Distribution

Cox, David Austin 12 May 2012 (has links)
The role of topography in the spatial distribution of tornadoes was assessed through geospatial and statistical techniques. A 100-m digital elevation model was used to create slope, aspect, and surface roughness maps, and tornado beginning and ending points and paths were used to extract terrain information. Tornado touchdowns, liftoffs, paths, and path-land angles were examined to determine whether tornado paths occur more frequently in or along certain terrain or slopes. Statistical analyses such as bootstrapping were used to analyze tornado touchdowns, liftoffs, paths, and path-relative terrain angles. Results show that tornado paths are more commonly associated with downhill movement; tornadoes are less likely to move uphill, as the 73.6 percent northeast path bias accounts for the highest frequencies of path angles. Tornado touchdowns and paths occur more often in smooth terrain than in rough terrain. Complex topographic variability does not appear to affect the spatial distribution of tornadoes.
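
Bootstrapping is named as one of the statistical techniques; a generic sketch of a bootstrap confidence interval follows (the variable names and the 95% interval are illustrative, not the study's actual analysis).

```python
import numpy as np

def bootstrap_mean(values, n_boot=10_000, ci=95, seed=None):
    """Bootstrap a confidence interval for the mean of a sample,
    e.g., path-relative terrain angles at tornado touchdown points."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    # resample with replacement and take the mean of each resample
    boot = rng.choice(values, size=(n_boot, values.size), replace=True).mean(axis=1)
    lo, hi = np.percentile(boot, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return values.mean(), (lo, hi)
```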
16

Coded Acquisition of High Speed Videos with Multiple Cameras

Pournaghi, Reza 10 April 2015 (has links)
High frame rate video (HFV) is an important investigational tool in science, engineering and the military. In ultra-high-speed imaging, the obtainable temporal, spatial and spectral resolutions are limited by the sustainable throughput of in-camera mass memory, the lower bound of exposure time, and illumination conditions. In order to break these bottlenecks, we propose a new coded video acquisition framework that employs K>1 cameras, each of which makes random measurements of the video signal in both the temporal and spatial domains. For each of the K cameras, this multi-camera strategy greatly relaxes the stringent requirements on memory speed, shutter speed, and illumination strength. The recovery of HFV from these random measurements is posed and solved as a large-scale l1 minimization problem by exploiting joint temporal and spatial sparsities of the 3D signal. Three coded video acquisition techniques with varied trade-offs between performance and hardware complexity are developed: frame-wise coded acquisition, pixel-wise coded acquisition, and column-row-wise coded acquisition. The performance of these techniques is analyzed in relation to the sparsity of the underlying video signal. To make coded-exposure ultra-high-speed cameras more practical and affordable, we develop a coded-exposure video/image acquisition system by innovatively assembling multiple rolling-shutter cameras. Each of the constituent rolling-shutter cameras adopts a random pixel read-out mechanism by simply changing the read-out order of pixel rows from sequential to random. Simulations of these new image/video coded acquisition techniques are carried out and experimental results are reported. / Dissertation / Doctor of Philosophy (PhD)
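
A toy sketch of the measurement side of frame-wise coded acquisition is given below: each camera integrates a random subset of the high-speed frames into one coded frame. The keep probability, array shapes, and function names are assumptions for illustration, and the l1-minimization recovery step is not shown.

```python
import numpy as np

def random_temporal_masks(n_cameras, n_frames, keep_prob=0.25, seed=None):
    """Per-camera random temporal measurement masks over a block of frames."""
    rng = np.random.default_rng(seed)
    return rng.random((n_cameras, n_frames)) < keep_prob

def coded_measurements(video, masks):
    """Collapse the T high-speed frames seen by each camera into one coded frame."""
    # video: (T, H, W) array; masks: (K, T) boolean array -> (K, H, W) measurements
    return np.stack([video[m].sum(axis=0) for m in masks])
```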
17

Amostragem e medidas de qualidade de shapelets / Shapelets sampling and quality measurements

Cavalcante, Lucas Schmidt 02 May 2016 (has links)
A time series is a time-ordered sequence of real values. Given the many everyday phenomena that can be described by time series, there is great interest in their data mining, especially in the task of classification. Recently a new time series primitive called the shapelet was introduced: a subsequence that allows time series to be classified by local patterns. In the shapelet transform, these subsequences become attributes in a distance matrix that measures the dissimilarity between the attributes and the time series. Obtaining the shapelet transform requires choosing some shapelets from among all possible ones, both to avoid overfitting and because computing all of them is too expensive. For this reason, shapelet quality measures have been devised. Traditionally, information gain has been the default measure; more recently the f-statistic was proposed, and in this work we propose a new measure called in-class transitions. Our experiments show that in-class transitions usually achieves the best accuracy, especially when few attributes are used. Moreover, we propose random sampling of shapelets as a way to reduce the search space and speed up the computation of the shapelet transform, and we contrast this approach with one that explores only shapelets of specific lengths. Our experiments show that random sampling is faster and requires fewer shapelets to be computed. In fact, the best results were obtained when sampling 5% of the shapelets, and even at a sampling rate of 0.05% no significant degradation in accuracy could be detected.
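
A minimal sketch of random shapelet sampling, the search-space reduction described above, is shown here; the function name, argument names, and length bounds are illustrative assumptions rather than the thesis's implementation.

```python
import numpy as np

def sample_shapelets(series_list, n_candidates, min_len, max_len, seed=None):
    """Randomly sample candidate shapelets (subsequences) from a set of time
    series, instead of enumerating every subsequence of every possible length."""
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(n_candidates):
        # pick a random series, a random length, and a random start position
        ts = np.asarray(series_list[rng.integers(len(series_list))])
        length = rng.integers(min_len, max_len + 1)   # assumes len(ts) >= max_len
        start = rng.integers(0, len(ts) - length + 1)
        candidates.append(ts[start:start + length])
    return candidates
```

Each candidate can then be scored with a quality measure (information gain, f-statistic, or in-class transitions) before building the shapelet transform.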
19

Processor design-space exploration through fast simulation.

Khan, Taj Muhammad 12 May 2011 (has links) (PDF)
Simulation is a vital tool used by architects to develop new architectures. However, because of the complexity of modern architectures and the length of recent benchmarks, detailed simulation of programs can take an extremely long time. This impedes the exploration of the processor design space that architects must carry out to find the optimal configuration of processor parameters. Sampling is one technique that reduces the simulation time without adversely affecting the accuracy of the results. Yet most sampling techniques either ignore the warm-up issue or require significant development effort on the part of the user. In this thesis we tackle the problem of reconciling state-of-the-art warm-up techniques with the latest sampling mechanisms, with the triple objective of keeping user effort to a minimum, achieving good accuracy, and remaining agnostic to software and hardware changes. We show that both representative and statistical sampling techniques can be adapted to use warm-up mechanisms that accommodate the underlying architecture's warm-up requirements on the fly. We present experimental results showing accuracy and speed comparable to the latest research. We also leverage statistical calculations to provide an estimate of the robustness of the final results.
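
Statistical sampling of this kind typically sizes the sample from the variance observed in a pilot run; a textbook sketch of that calculation is given below (the 3% relative error and 99% confidence targets, and the parameter names, are illustrative assumptions, not values from the thesis).

```python
import math

def required_intervals(std_dev, mean, rel_error=0.03, z=2.576):
    """Textbook statistical-sampling estimate: the number of sampled simulation
    intervals n such that the confidence half-width z*std_dev/sqrt(n) of the
    sampled mean (e.g., CPI) stays within rel_error of the true mean, at the
    confidence level implied by z (2.576 corresponds to roughly 99%)."""
    return math.ceil((z * std_dev / (rel_error * mean)) ** 2)
```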
20

Optimizing Sample Design for Approximate Query Processing

Rösch, Philipp, Lehner, Wolfgang 30 November 2020 (has links)
The rapid increase of data volumes makes sampling a crucial component of modern data management systems. Although there is a large body of work on database sampling, the problem of automatically determining the optimal sample for a given query has remained (almost) unaddressed. To tackle this problem, the authors propose a sample advisor based on a novel cost model. Primarily designed for advising samples for a few queries specified by an expert, the sample advisor is additionally extended in two ways. The first extension enhances applicability by utilizing recorded workload information and taking memory bounds into account. The second extension increases effectiveness by merging samples in the case of overlapping pieces of sample advice. For both extensions, the authors present exact and heuristic solutions. In their evaluation, the authors analyze the properties of the cost model and demonstrate the effectiveness and efficiency of the heuristic solutions with a variety of experiments.
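
A toy sketch of the overlap-merging idea mentioned above is shown here. It is not the paper's cost-based algorithm; the advice representation as (table, columns, rows) triples and the greedy merge rule are assumptions made purely for illustration.

```python
def merge_sample_advice(advice):
    """Greedy toy merge: advice items on the same table whose column sets overlap
    are merged into one sample over the union of the columns, keeping the larger
    row count."""
    merged = {}                                   # table -> list of [column_set, rows]
    for table, cols, rows in advice:
        cols = set(cols)
        entries = merged.setdefault(table, [])
        for entry in entries:
            if entry[0] & cols:                   # overlapping columns -> merge
                entry[0] |= cols
                entry[1] = max(entry[1], rows)
                break
        else:
            entries.append([cols, rows])
    return merged

# Example: two overlapping pieces of advice on 'orders' collapse into one sample.
# merge_sample_advice([('orders', {'price', 'qty'}, 10_000),
#                      ('orders', {'qty', 'date'}, 20_000),
#                      ('parts',  {'size'},         5_000)])
```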
