1 |
Modulated Imaging Polarimetry / LaCasse, Charles / January 2012
In this work, image processing algorithms are presented for an advanced class of sensors known collectively as modulated imaging polarimeters. The algorithms are novel in that they use frequency-domain approaches, in contrast to the data-domain approaches that all previous algorithms have employed. Under the conditions on the data and imaging device derived in this work, the frequency-domain demodulation algorithms optimally reduce reconstruction artifacts in a least-squares sense. This work provides a framework for objectively comparing polarimeters that modulate in different domains (i.e., time vs. space), referred to as the spectral density response function. The spectral density response function is created as an analog to the modulation transfer function (or the more general transfer function for temporal devices) employed in the design of conventional imaging devices. The framework considers the total bandwidth of the object to be measured, and can then account for estimation artifacts that arise in both time and space due to the chosen measurement modality. Using this framework, a method of developing a Wiener filter for multi-signal demodulation is derived, referred to as the polarimetric Wiener filter. This filter is then shown to be optimal for one extensive test case. This document provides one extensive example of implementing the algorithms and spectral density response calculations on a real system, the MSPI polarimeter. The MSPI polarimeter has been described extensively elsewhere, so only a basic system description is given here, as needed to explain how the presented methods can be implemented on this system.
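The polarimetric Wiener filter itself is not reproduced in this abstract, but the underlying frequency-domain idea can be sketched for the single-signal case. This is a minimal illustration, assuming a known modulation response `h` and a scalar signal-to-noise power ratio; the function name is an assumption, and the thesis's multi-signal polarimetric version is more involved.

```python
import numpy as np

def wiener_demodulate(y, h, snr):
    """Frequency-domain Wiener estimate of a signal observed through a
    known modulation/transfer function h, assuming additive noise.

    y   : measured samples (1-D array)
    h   : impulse/modulation response, zero-padded to len(y)
    snr : assumed signal-to-noise power ratio per frequency bin
    """
    Y = np.fft.fft(y)
    H = np.fft.fft(h, n=len(y))
    # Wiener filter conj(H) / (|H|^2 + 1/SNR) minimizes mean-squared error
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(W * Y))
```

At high SNR the filter approaches plain inverse filtering; the `1/snr` term regularizes frequencies where the modulation response is weak.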
|
2 |
An Examination of the Stability of Positive Psychological Capital Using Frequency-Based Measurement / McGee, Elizabeth Anne / 01 May 2011
The purpose of this study was to explore the utility of frequency-based measurement as an alternative method for examining the stability of psychological capital, a higher-order construct introduced by Luthans and colleagues (2007), consisting of self-efficacy, hope, resilience, and optimism. Frequency-based measurement is a new approach based on the distributional assessment model (Kane, 1986; 2000) that provides information on the relative frequency of occurrence for specific behaviors over a given period of time, and offers a distribution that depicts the scope of an individual’s behavior. One advantage of this approach is that it can provide information on a person’s behavior over time in a single administration, allowing researchers to examine the temporal stability of constructs without having to conduct longitudinal studies (e.g., personality, Edwards & Woehr, 2007).
To investigate the usefulness of this new approach, a series of studies was conducted using a sample of students from a large southeastern university. The first study compared a frequency-based measure of psychological capital to the more traditional Likert-type measure. Results indicated that the two are equivalent measures of the central tendency of psychological capital. The frequency-based measure was also compared to the Likert-type measure given across three contexts (family, school, and social settings) in a second study. Results indicated that the two approaches offered similar information in terms of consistency, with both approaches demonstrating some variability in responses over time or across contexts. Thus, this study provided further evidence that frequency-based measurement offers additional information not available in a single administration using a Likert-type measure. The last study investigated agreement between an individual’s self-reported psychological capital and ratings of their psychological capital given by an acquaintance. Contrary to my expectations, within-item consistency did not moderate self/other agreement. The implications of these findings are outlined, in addition to suggestions for future research.
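The distributional idea behind frequency-based measurement can be illustrated with a small sketch. The item format, scale points, and function names below are hypothetical; the sketch only shows how a single administration of a frequency-based item yields both a central tendency and a within-item variability estimate, which a single Likert response cannot.

```python
import numpy as np

# Hypothetical item: the respondent reports what fraction of the time each
# scale point (1-5) described their behavior, so one administration yields
# a distribution rather than a single rating.
def central_tendency(freqs, scale_points=(1, 2, 3, 4, 5)):
    """Weighted mean of the scale points, weights = reported frequencies."""
    f = np.asarray(freqs, dtype=float)
    f = f / f.sum()  # normalize to proportions
    return float(np.dot(f, scale_points))

def within_item_variability(freqs, scale_points=(1, 2, 3, 4, 5)):
    """Spread of the reported distribution: a stability index that is
    unavailable from a single Likert response."""
    f = np.asarray(freqs, dtype=float) / np.sum(freqs)
    mu = np.dot(f, scale_points)
    return float(np.sqrt(np.dot(f, (np.asarray(scale_points) - mu) ** 2)))
```

Two respondents can share the same central tendency (e.g., 3.0) while differing sharply in variability, which is the extra information the abstract refers to.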
|
3 |
A Proposed Frequency-Based Feature Selection Method for Cancer Classification / Pan, Yi / 01 April 2017
Feature selection is becoming an essential procedure in the data-preprocessing step. The choice of feature selection method can affect the efficiency and accuracy of classification models and, therefore, whether a model performs reliably. In this study, we compared an original feature selection method and a proposed frequency-based feature selection method, using four classification models and three filter-based ranking techniques on a cancer dataset. The proposed method was implemented in WEKA, an open-source software package. Performance is evaluated by two measures: recall and the Receiver Operating Characteristic (ROC). We found that the frequency-based feature selection method performed better than the original ranking method.
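The thesis's WEKA implementation is not reproduced here, but one plausible reading of a frequency-based selection scheme can be sketched: keep the features that appear most frequently in the top-k lists produced by several filter-based rankers. The function name and voting rule are illustrative assumptions, not the author's exact method.

```python
from collections import Counter

def frequency_based_selection(rankings, k, min_votes):
    """Keep features that appear in the top-k of at least `min_votes`
    of the supplied rankings (one ranking per filter technique).

    rankings : list of feature-name lists, best first
    """
    votes = Counter()
    for ranking in rankings:
        votes.update(ranking[:k])  # each ranker votes for its top-k features
    return sorted(f for f, v in votes.items() if v >= min_votes)
```

Aggregating across rankers this way reduces sensitivity to any single filter technique's biases, which is a common motivation for frequency-based schemes.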
|
4 |
Vibrations in MagAO: frequency-based analysis of on-sky data, resonance sources identification, and future challenges in vibrations mitigation / Zúñiga, Sebastián; Garcés, Javier; Close, Laird M.; Males, Jared R.; Morzinski, Katie M.; Escárate, Pedro; Castro, Mario; Marchioni, José; Zagals, Diego / 27 July 2016
A frequency-based analysis and comparison of tip-tilt on-sky data recorded with the 6.5 m Magellan Telescope Adaptive Optics (MagAO) system in April and October 2014 was performed. Twelve tests were conducted under different operating conditions in order to observe the influence of system instrumentation (such as fans, pumps, and louvers). Power spectral densities (PSDs) are presented to reveal the vibration peaks that can be detected. Instrumentation-induced resonances, closed-loop gain, and future challenges in vibration mitigation techniques are discussed.
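A PSD-based search for vibration peaks of the kind described can be sketched as follows. The windowing choice, threshold rule, and function name are assumptions for illustration, not the MagAO pipeline.

```python
import numpy as np

def vibration_peaks(x, fs, threshold_ratio=10.0):
    """Locate spectral lines in a tip-tilt record: frequencies whose PSD
    exceeds `threshold_ratio` times the median PSD level.

    x  : time series of tip or tilt residuals
    fs : sampling rate in Hz
    Returns (freqs, psd, peak_freqs).
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    window = np.hanning(len(x))           # taper to limit spectral leakage
    X = np.fft.rfft(x * window)
    psd = (np.abs(X) ** 2) / (fs * np.sum(window ** 2))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    peak_freqs = freqs[psd > threshold_ratio * np.median(psd)]
    return freqs, psd, peak_freqs
```

Comparing the peak lists across the twelve operating conditions (fans on/off, pumps on/off, etc.) is then a matter of set differences between runs.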
|
5 |
Entropy-based nonlinear analysis for electrophysiological recordings of brain activity in Alzheimer's disease / Azami, Hamed / January 2018
Alzheimer’s disease (AD) is a neurodegenerative disorder in which the death of brain cells causes memory loss and cognitive decline. As AD progresses, changes in the electrophysiological brain activity take place. Such changes can be recorded by the electroencephalography (EEG) and magnetoencephalography (MEG) techniques. These are the only two neurophysiologic approaches able to directly measure the activity of the brain cortex. Since EEGs and MEGs are considered as the outputs of a nonlinear system (i.e., the brain), there has been an interest in nonlinear methods for the analysis of EEGs and MEGs. One of the most powerful nonlinear metrics used to assess the dynamical characteristics of signals is entropy. The aim of this thesis is to develop entropy-based approaches for the characterization of EEGs and MEGs, paying close attention to AD. Recent developments in the field of entropy for the characterization of physiological signals have tried: 1) to improve the stability and reliability of entropy-based results for short and long signals; and 2) to extend the univariate entropy methods to their multivariate cases to be able to reveal the patterns across channels. To enhance the stability of entropy-based values for short univariate signals, refined composite multiscale fuzzy entropy (RCMFE) is developed. To decrease the running time and increase the stability of the existing multivariate multiscale fuzzy entropy (mvMFE) while keeping its benefits, the refined composite mvMFE (RCmvMFE) with a new fuzzy membership function is developed here as well. In spite of the interesting results obtained by these improvements, fuzzy entropy (FuzEn), RCMFE, and RCmvMFE may still lead to unreliable results for short signals and are not fast enough for real-time applications. To address these shortcomings, dispersion entropy (DispEn) and frequency-based DispEn (FDispEn), which are based on our introduced dispersion patterns and Shannon's definition of entropy, are developed.
The computational cost of DispEn and FDispEn is O(N), where N is the signal length, compared with O(N²) for the popular sample entropy (SampEn) and FuzEn. DispEn and FDispEn also overcome the problems of equal values for embedded vectors and of discarding some information regarding the signal amplitudes encountered in permutation entropy (PerEn). Moreover, unlike PerEn, DispEn and FDispEn are relatively insensitive to noise. As extensions of our developed DispEn, multiscale DispEn (MDE) and multivariate MDE (mvMDE) are introduced to quantify the complexity of univariate and multivariate signals, respectively. MDE and mvMDE have the following advantages over the existing univariate and multivariate multiscale methods: 1) they are noticeably faster; 2) they result in smaller coefficients of variation for synthetic and real signals, showing more stable profiles; 3) they better distinguish various states of biomedical signals; 4) they do not result in undefined values for short time series; and 5) mvMDE, compared with multivariate multiscale SampEn (mvMSE) and mvMFE, needs to store a considerably smaller number of elements. In this thesis, two resting-state electrophysiological datasets related to AD are analyzed: 1) 148-channel MEGs recorded from 62 subjects (36 AD patients vs. 26 age-matched controls); and 2) 16-channel EEGs recorded from 22 subjects (11 AD patients vs. 11 age-matched controls). The results obtained by MDE and mvMDE suggest that the controls' signals are more complex than the AD patients' recordings at short scale factors (scales 1 to 4) and less complex at longer ones (scales 5 to 12), for both the EEG and MEG datasets. The p-values based on the Mann-Whitney U-test for AD patients vs. controls show that MDE and mvMDE, compared with the existing complexity techniques, significantly discriminate the controls from subjects with AD at a larger number of scale factors for both the EEG and MEG datasets.
Moreover, the smallest p-values are achieved by MDE (e.g., 0.0010 and 0.0181 for respectively MDE and MFE using EEG dataset) and mvMDE (e.g., 0.0086 and 0.2372 for respectively mvMDE and mvMFE using EEG dataset) for both the EEG and MEG datasets, illustrating the superiority of these developed entropy-based techniques over the state-of-the-art univariate and multivariate entropy approaches. Overall, the introduced FDispEn, DispEn, MDE, and mvMDE methods are expected to be useful for the analysis of physiological signals due to their ability to distinguish different types of time series with a low computation time.
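As a concrete illustration, a minimal single-scale DispEn can be sketched from its published definition: map the samples to c classes through the normal cumulative distribution function (NCDF), count the length-m dispersion patterns, and normalise the Shannon entropy by log(c^m). This sketch omits the multiscale (MDE) and multivariate (mvMDE) extensions developed in the thesis.

```python
import math
import numpy as np

def dispersion_entropy(x, m=2, c=6, delay=1):
    """Single-scale dispersion entropy (DispEn) of a 1-D signal,
    normalised to [0, 1] by log(c**m)."""
    x = np.asarray(x, dtype=float)
    mu, sigma = np.mean(x), np.std(x)
    # NCDF mapping to (0, 1), then rounding into classes 1..c
    y = 0.5 * (1.0 + np.array([math.erf((v - mu) / (sigma * math.sqrt(2)))
                               for v in x]))
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    # count the length-m dispersion patterns
    n_patterns = len(z) - (m - 1) * delay
    counts = {}
    for i in range(n_patterns):
        pattern = tuple(z[i + j * delay] for j in range(m))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values())) / n_patterns
    return float(-np.sum(p * np.log(p)) / np.log(c ** m))
```

White noise visits the c^m patterns nearly uniformly and scores near 1, while a slowly varying signal concentrates on few patterns and scores lower, which is the discrimination the thesis exploits.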
|
6 |
Experimental Dynamic Substructuring of an Ampair 600 Wind Turbine Hub together with Two Blades: A Study of the Transmission Simulator Method / Johansson, Tim; Cwenarkiewicz, Magdalena / January 2016
In this work, the feasibility of performing the substructuring technique with experimental data is investigated. The investigation examines two structures with different additional mass-loads, i.e. transmission simulators (TSs): a single blade, and the hub together with two blades from an Ampair 600 wind turbine. Simulation data from finite element models of the TSs are numerically decoupled from each of the two structures, and the resulting two structures are then coupled to each other. The calculations are made exclusively in the frequency domain. A comparison between the predicted behavior of this assembled structure and measurements on the full hub with all three blades is carried out. The result is discouraging for the implemented method: it shows major problems, even though the measurements were performed in a laboratory environment.
|
7 |
An investigation into a generally applicable plant performance index / Eggberry, Ivan / 29 August 2008
It is important to develop methods that are capable of successfully determining plant performance. The method used should be based on the ability to determine the performance of each of the various unit operations within the plant; this in turn assists with the correct decision as to which unit in the plant should be improved first. The performance of the various units can be aggregated to give a representation of the performance of the entire plant. A plant-wide performance monitoring method has been developed to do just this. Originally developed for a specific unit operation, the method has now been verified to be applicable to different unit operations. The plant-wide performance is determined by evaluating how close the plant is to its inherent optimum. Where applicable, this inherent optimum can be replaced with a user-specified optimum. When an optimum is specified, oscillations around this "optimum" are possible, and their effects on the performance number are eliminated to give a more general plant-wide performance number for each unit operation. In addition to the "optimum" value selection, adding performance weights for specific focus areas (utility usage or product quality) to the performance calculation also improves the comparability of the plant-wide index across different unit operations. The scope of this investigation is limited to the experimental test rigs that were available in the Process Control Laboratory at the University of Pretoria.
The methods that were used to determine the single-loop performance of each of the different control loops are:
- Minimum variance
- Generalised minimum variance
- Integral of the Absolute Error (IAE)
- Integral of the Square Error (ISE)
The single-loop performance methods are required to determine how effectively the plant-wide performance index evaluates the plant, since these are existing means of determining how well a plant is operating, but they become impractical due to the excessive amount of information needing evaluation. / Dissertation (MEng)--University of Pretoria, 2008. / Chemical Engineering / unrestricted
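The two integral indices listed above are straightforward to compute from a recorded control-loop error signal. This is a minimal sketch using rectangle-rule integration; the function names and sampling convention are illustrative assumptions.

```python
import numpy as np

def iae(error, dt):
    """Integral of the Absolute Error: sum of |e(t)| times the sample period."""
    return float(np.sum(np.abs(error)) * dt)

def ise(error, dt):
    """Integral of the Square Error: penalizes large deviations more heavily."""
    return float(np.sum(np.asarray(error, dtype=float) ** 2) * dt)
```

ISE weights large transient errors more than IAE does, which is why the two indices can rank the same set of loops differently.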
|
8 |
Hyperpath and social welfare optimization considering non-additive public transport fare structures / Saeed, Maadi / 25 September 2018
Kyoto University / 0048 / Doctoral program (new system) / Doctor of Engineering / Degree No. Kō 21361 / Eng. Doc. No. 4520 / Kyoto University Graduate School of Engineering, Department of Urban Management / Examiners: Prof. Tadashi Yamada, Prof. Satoshi Fujii, Assoc. Prof. Jan-Dirk Schmoecker / Conferred under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
|
9 |
A Novel Method for Vibration Analysis of the Tire-Vehicle System via Frequency Based Substructuring / Clontz, Matthew Christopher / 07 June 2018
Noise and vibration transmitted through the tire and suspension system are strong indicators of overall vehicle ride quality. Often, during the tire design process, target specifications are used to achieve the desired ride performance. To validate the design, subjective evaluations are performed by expert drivers. These evaluations are usually done on a test track and are both quite expensive and time consuming due to the several experimental sets of tires that must be manufactured, installed, and then tested on the target vehicle. In order to evaluate the performance, expert drivers tune themselves to the frequency response of the tire/vehicle combination. Provided the right models exist, this evaluation can also be achieved in a laboratory.
The research presented here is a method which utilizes the principles of frequency based substructuring (FBS) to separate or combine frequency response data for the tire and suspension. This method allows for the possibility of combining high fidelity tire models with analytical or experimental suspension data in order to obtain an overall response of the combined system without requiring an experimental setup or comprehensive simulations. Though high fidelity models are not combined with experimental data in the present work, these coupling/decoupling techniques are applied independently to several quarter car models of varying complexity and to experimental data. These models range from a simplified spring-mass model to a generalized 3D model including rotation. Further, decoupling techniques were applied to simulations of a rigid ring tire model, which allows for inclusion of nonlinearities present in the tire subsystem and provides meaningful information for a loaded tire. By reducing the need for time consuming simulations and experiments, this research has the potential to significantly reduce the time and cost associated with tire design for ride performance.
In order to validate the process experimentally, a small-scale quarter car test rig was developed. This novel setup was specifically designed for the challenges associated with the testing necessary to apply FBS techniques to the tire and suspension systems. The small-scale quarter car system was then used to validate both the models and the testing processes unique to this application. By validating the coupling/decoupling process for the first time on the tire/vehicle system with experimental data, this research can potentially improve the current process of tire design for ride performance. / Ph. D. / Noise and vibration transmitted through the tire and suspension system of a vehicle strongly influence the comfort of passengers. Often, during the tire design process, target specifications are used to achieve the desired vibrational characteristics. Subjective evaluations are then performed by expert drivers in order to validate the tire design. These evaluations are usually done on a test track and are both quite expensive and time consuming due to the several experimental sets of tires that must be manufactured, installed, then tested on the target vehicle.
The research presented here utilizes techniques from the field of Dynamic Substructuring which allow frequency data for the tire and suspension systems to be separated or combined. This method allows for the possibility of combining high fidelity tire models with analytical or experimental suspension data in order to obtain an overall response of the combined system without requiring an experimental setup or comprehensive simulations. Several analytical tire and suspension models were developed for this work and the process of separating/combining the frequency data was performed. Then, a small-scale test system was developed and used to establish experimental procedures to collect the data necessary to carry out the Dynamic Substructuring techniques. Finally, the process was validated by repeating the process of separating/combining the frequency properties of the experimental data.
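The coupling step of frequency based substructuring can be sketched with the standard Lagrange-multiplier (LM-FBS) formulation, H_c = H - H Bᵀ (B H Bᵀ)⁻¹ B H, where H is the block-diagonal matrix of subsystem frequency response functions (FRFs) at one frequency and B is a signed Boolean matrix enforcing interface compatibility. This is the generic textbook form, not the dissertation's specific tire/suspension implementation.

```python
import numpy as np

def lm_fbs_couple(H_blocks, B):
    """LM-FBS coupling of subsystem FRF matrices at a single frequency.

    H_blocks : list of complex FRF matrices, one per subsystem
    B        : signed Boolean compatibility matrix over the stacked DOFs
    """
    # assemble the block-diagonal uncoupled FRF matrix
    n_total = sum(h.shape[0] for h in H_blocks)
    H = np.zeros((n_total, n_total), dtype=complex)
    i = 0
    for h in H_blocks:
        n = h.shape[0]
        H[i:i + n, i:i + n] = h
        i += n
    # enforce interface compatibility via interface forces (Lagrange multipliers)
    BH = B @ H
    return H - H @ B.T @ np.linalg.solve(B @ H @ B.T, BH)
```

For two grounded single-DOF systems rigidly joined at their DOFs, the coupled driving-point FRF reduces to H_A H_B / (H_A + H_B), i.e., the FRF of the combined mass on the combined stiffness, which is a convenient analytical check.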
|
10 |
ISAR Imaging and Motion Compensation / Kucukkilic, Talip / 01 December 2006
In Inverse Synthetic Aperture Radar (ISAR) systems, the motion of the target can be classified into two main categories: translational motion and rotational motion. A small degree of rotational motion is required in order to generate the synthetic aperture of the ISAR system. On the other hand, the remaining part of the target's motion, that is, any degree of translational motion and any large degree of rotational motion, degrades ISAR image quality. Motion compensation techniques focus on eliminating the effect of the target's motion on the ISAR images.
In this thesis, ISAR image generation is discussed using both conventional Fourier-based and time-frequency-based techniques. The standard translational motion compensation steps, range tracking and Doppler tracking, are examined. The cross-correlation method and the Dominant Scatterer Algorithm are employed for range and Doppler tracking, respectively. Finally, time-frequency-based motion compensation is studied and compared with the conventional techniques.
All of the motion compensation steps are examined using simulated data. Stepped-frequency waveforms are used to generate the data required for the simulations. Not only successful results but also worst-case examinations and the limitations of the algorithms are discussed through examples.
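The cross-correlation range tracking mentioned above can be sketched as follows: each range profile is circularly cross-correlated with a reference profile and shifted so the correlation peak sits at zero lag. The profile format and function name are illustrative assumptions, not the thesis code.

```python
import numpy as np

def range_align(profiles):
    """Coarse range alignment for translational motion compensation.

    profiles : 2-D array, one complex or real range profile per row;
               the first row is used as the reference.
    """
    ref = np.abs(profiles[0])
    aligned = np.empty_like(profiles)
    for i, p in enumerate(profiles):
        # circular cross-correlation of envelopes via FFT
        corr = np.fft.ifft(np.fft.fft(np.abs(p)) *
                           np.conj(np.fft.fft(ref))).real
        shift = int(np.argmax(corr))          # lag of best match
        aligned[i] = np.roll(p, -shift)       # undo the range walk
    return aligned
```

In practice this coarse integer-bin alignment is followed by fine (sub-bin) alignment and phase adjustment, which is where Doppler tracking methods such as the Dominant Scatterer Algorithm come in.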
|