331 |
Model Specification Searches in Latent Growth Modeling: A Monte Carlo Study. Kim, Min Jung. May 2012.
This dissertation investigated the optimal strategy for model specification searches in latent growth modeling. Although developing an initial model based on theory from prior research is preferred, researchers sometimes need to specify a starting model in the absence of theory. In this simulation study, the effectiveness of different start models in searching for the true population model was examined. The four start models adopted in this study were: the simplest mean and covariance structure model, the simplest mean with the most complex covariance structure model, the most complex mean with the simplest covariance structure model, and the most complex mean and covariance structure model. Six model selection criteria were used to determine recovery of the true model: the likelihood ratio test (LRT), DeltaCFI, DeltaRMSEA, DeltaSRMR, DeltaAIC, and DeltaBIC.
The results showed that specifying the most complex covariance structure (UN) together with the most complex mean structure recovered the true mean trajectory most successfully, with average hit rates above 90% using DeltaCFI, DeltaBIC, DeltaAIC, and DeltaSRMR. In searching for the true covariance structure, the LRT, DeltaCFI, DeltaAIC, and DeltaBIC performed well regardless of the start model used in the search.
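As a rough illustration of how the delta criteria compare competing growth models, the sketch below computes DeltaAIC, DeltaBIC, and the LRT statistic for two hypothetical nested fits; the log-likelihoods, parameter counts, and sample size are invented for illustration and are not values from the study.

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2*log-likelihood."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k*ln(n) - 2*log-likelihood."""
    return k * math.log(n) - 2 * loglik

# Hypothetical fits of two nested latent growth models to n = 500 cases:
# a linear-mean model and a more complex quadratic-mean model.
n = 500
linear = {"loglik": -2410.0, "k": 6}
quadratic = {"loglik": -2395.0, "k": 10}

delta_aic = aic(linear["loglik"], linear["k"]) - aic(quadratic["loglik"], quadratic["k"])
delta_bic = bic(linear["loglik"], linear["k"], n) - bic(quadratic["loglik"], quadratic["k"], n)

# Positive deltas favour the more complex model; the LRT statistic is
# 2 * (loglik_complex - loglik_simple) on k_complex - k_simple degrees of freedom.
lrt_stat = 2 * (quadratic["loglik"] - linear["loglik"])
print(delta_aic, round(delta_bic, 2), lrt_stat)
```

Note how BIC's ln(n) penalty shrinks the delta relative to AIC, which is one reason the criteria can disagree on the recovered structure.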
|
332 |
Stochastic Modeling and Analysis of Plant Microtubule System Characteristics. Eren, Ezgi. May 2012.
In this dissertation, we consider a complex biological system known as the cortical microtubule (CMT) system, in which the stochastic dynamics of the components (i.e., the CMTs) are defined in both space and time. CMTs have an inherent spatial dimension of their own, as their length changes over time in addition to their location. As a result of their dynamics in a confined space, they run into and interact with each other according to simple stochastic rules. Beginning from a completely disorganized state, CMTs acquire an ordered structure over time without any centralized control. It is also observed that this organization may be distorted when parameters of dynamicity and interaction change due to genetic mutation or environmental conditions. The main question of interest is the characteristics of this system and the drivers of its self-organization, which are not feasible to explore relying solely on biological experiments. For this, we replicate the system dynamics and interactions using computer simulations. As the simulations successfully mimic the organization seen in plant cells, we conduct an extensive analysis to discover the effects of dynamics and interactions on system characteristics by experimenting with different input parameters. To compare simulation results, we characterize system properties and quantify organization level using metrics based on entropy and on the average length and number of CMTs in the system. Based on our findings and conjectures from the simulations, we develop analytical models for more generalized conclusions and efficient computation of system metrics. As a first step, we formulate a mean-field model, which we use to derive sufficient conditions, in terms of the input parameters, for organization to occur.
Next, considering the parameter ranges that satisfy these conditions, we develop predictive methodologies for estimating the expected average length and number of CMTs over time, using a fluid model, transient analysis, and approximation algorithms tailored to our problem. Overall, we build a comprehensive framework for analysis and control of microtubule organization in plant cells, using a wide range of models and methodologies in conjunction. This research also has broader impacts in bio-energy, healthcare, and nanotechnology, in addition to its methodological contribution to stochastic modeling of systems with high spatial and temporal complexity.
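An entropy-based organization metric of the kind the abstract describes can be sketched as follows; the binning of CMT orientation angles and the sample data are illustrative assumptions, not the dissertation's actual metric.

```python
import math
import random

def orientation_entropy(angles, bins=18):
    """Shannon entropy of CMT orientation angles binned over [0, pi).
    Low entropy indicates an ordered (aligned) array; the maximum,
    log(bins), corresponds to complete disorder."""
    counts = [0] * bins
    for a in angles:
        counts[int((a % math.pi) / math.pi * bins)] += 1
    total = len(angles)
    h = 0.0
    for c in counts:
        if c:
            p = c / total
            h -= p * math.log(p)
    return h

random.seed(42)
# A disorganized array (uniform orientations) versus an organized one
# (orientations tightly clustered around a common angle).
disordered = [random.uniform(0, math.pi) for _ in range(2000)]
ordered = [random.gauss(1.0, 0.05) for _ in range(2000)]

h_dis = orientation_entropy(disordered)
h_ord = orientation_entropy(ordered)
print(h_dis > h_ord)  # self-organization lowers orientational entropy
```

Tracking such a metric over simulated time is one way to compare organization levels across input-parameter settings.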
|
333 |
Review of the effectiveness of vehicle activated signs. Jomaa, Diala; Yella, Siril; Dougherty, Mark. January 2013.
This paper reviews the effectiveness of vehicle activated signs. Vehicle activated signs have increasingly been used in recent years to display dynamic information to road users on an individual basis, in order to warn of or inform about a specific event. They are triggered individually by vehicles when a certain criterion is met; an example is triggering a speed limit sign when the driver exceeds a pre-set threshold speed. The pre-set threshold is usually a constant value, often equal or relative to the speed limit on a particular road segment. This review examines in detail the basis for the configuration of existing sign types in previous studies and explores the relation between a sign's configuration and its impact on driver behavior and sign efficiency. Most previous studies showed that these signs have a significant impact on driver behavior, traffic safety, and traffic efficiency. In most cases the deployed signs yielded reductions in mean speeds and in speed variation, and longer headways. However, most experiments reported in the area were performed with the signs set to a fixed, static configuration under applicable conditions. Since some of the aforementioned factors are dynamic in nature, the configurations of these signs appear not to have been carefully considered by previous researchers, and no previous study clearly describes the relationship between the trigger value and its consequences under different conditions. Bearing in mind that different designs of vehicle activated signs can have different impacts under different road, traffic, and weather conditions, the current work suggests that variable speed thresholds should be considered instead.
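The variable-threshold idea the review arrives at can be sketched as trigger logic with a condition-dependent threshold; the adjustment factors below are illustrative assumptions, not values drawn from the reviewed studies.

```python
def trigger_threshold(speed_limit, condition="dry"):
    """Speed threshold (km/h) at which a vehicle activated sign fires.
    The condition adjustment factors are illustrative assumptions."""
    adjustment = {"dry": 1.1, "wet": 1.0, "ice": 0.85}
    return speed_limit * adjustment[condition]

def sign_activates(measured_speed, speed_limit, condition="dry"):
    """True when the measured vehicle speed exceeds the current threshold."""
    return measured_speed > trigger_threshold(speed_limit, condition)

# A static sign set relative to a 50 km/h limit ignores a driver doing
# 53 km/h in the dry, but a condition-aware threshold fires on ice.
print(sign_activates(53, 50, "dry"))   # False: below the 55 km/h threshold
print(sign_activates(53, 50, "ice"))   # True: threshold drops to 42.5 km/h
```

The point of the sketch is that the same measured speed can warrant different sign behavior once road, traffic, or weather conditions enter the trigger rule.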
|
334 |
Driving Cycle Generation Using Statistical Analysis and Markov Chains. Torp, Emil; Önnegren, Patrik. January 2013.
A driving cycle is a velocity profile over time. Driving cycles can be used for environmental classification of cars and to evaluate vehicle performance. The benefit of using stochastic driving cycles instead of predefined ones, e.g. the New European Driving Cycle, is, for instance, that the risk of cycle beating is reduced. Different methods to generate stochastic driving cycles based on real-world data have been used around the world, but the representativeness of the generated driving cycles has been difficult to ensure. The possibility of generating stochastic driving cycles that capture specific features from a set of real-world driving cycles is studied. Data from more than 500 real-world trips have been processed and categorized. The driving cycles are merged into several transition probability matrices (TPMs), where each element corresponds to a specific state defined by its velocity and acceleration. The TPMs are used with Markov chain theory to generate stochastic driving cycles. The driving cycles are validated using percentile limits on a set of characteristic variables obtained from statistical analysis of real-world driving cycles. The distribution of the generated driving cycles is investigated and compared to that of the real-world driving cycles. The generated driving cycles prove to represent the original set of real-world driving cycles in terms of key variables determined through statistical analysis. Four different methods are used to determine which statistical variables describe the features of the provided driving cycles. Two of the methods use regression analysis. Hierarchical clustering of statistical variables is proposed as a third alternative, and the last method combines the cluster analysis with the regression analysis. The entire process is automated, and a graphical user interface is developed in Matlab to facilitate the use of the software.
/ A driving cycle describes how the velocity of a vehicle changes during a drive. Driving cycles are used, among other things, to environmentally classify cars and to evaluate vehicle performance. Different methods for generating stochastic driving cycles based on real-world data have been used around the world, but it has been difficult to make the generated cycles resemble natural ones. The possibility of generating stochastic driving cycles that represent a set of natural driving cycles is studied. Data from over 500 trips are processed and categorized. These are used to create transition probability matrices in which each element corresponds to a particular state, with velocity and acceleration as state variables. The matrices, together with Markov chain theory, are used to generate stochastic driving cycles. The generated cycles are validated using percentile limits on a number of characteristic variables computed for the natural driving cycles. The velocity and acceleration distributions of the generated cycles are compared with those of the natural cycles to ensure that they are representative. Statistical properties were compared, and the generated driving cycles proved to resemble the original set. Four different methods are used to determine which statistical variables describe the natural driving cycles. Two of the methods use regression analysis. Hierarchical clustering of statistical variables is proposed as a third alternative, and the last method combines the cluster analysis with the regression analysis. The entire process is automated, and a graphical user interface has been developed in Matlab to ease the use of the software.
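A minimal sketch of the TPM-and-Markov-chain generation step is given below, simplified to velocity-only states (the thesis uses joint velocity-acceleration states) and toy trip data standing in for the 500 real-world trips.

```python
import random

def build_tpm(cycles, n_states, v_max):
    """Estimate a transition probability matrix from recorded velocity
    traces, with velocity discretized into n_states equal bins."""
    counts = [[0] * n_states for _ in range(n_states)]
    state = lambda v: min(int(v / v_max * n_states), n_states - 1)
    for cycle in cycles:
        for v0, v1 in zip(cycle, cycle[1:]):
            counts[state(v0)][state(v1)] += 1
    tpm = []
    for row in counts:
        total = sum(row)
        tpm.append([c / total if total else 0.0 for c in row])
    return tpm

def generate_cycle(tpm, steps, start=0, seed=1):
    """Sample a synthetic driving cycle (state sequence) from the TPM."""
    rng = random.Random(seed)
    seq, s = [start], start
    for _ in range(steps):
        s = rng.choices(range(len(tpm)), weights=tpm[s])[0]
        seq.append(s)
    return seq

# Toy velocity traces (m/s) standing in for real-world trips.
trips = [[0, 2, 5, 8, 8, 6, 3, 0], [0, 3, 6, 9, 9, 7, 4, 1, 0]]
tpm = build_tpm(trips, n_states=4, v_max=10)
cycle = generate_cycle(tpm, steps=20)
print(len(cycle), all(0 <= s < 4 for s in cycle))
```

A generated state sequence would then be mapped back to a velocity profile and screened against the percentile limits on the characteristic variables, as the abstract describes.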
|
335 |
Channel estimation in a two-way relay network. Nwaekwe, Chinwe M. 01 August 2011.
In wireless communications, channel estimation is necessary for coherent symbol detection. This thesis considers a network consisting of two transceivers communicating with the help of a relay that applies the amplify-and-forward (AF) relaying scheme. A training-based channel estimation technique is applied to the proposed network, where the lengths of the training sequences transmitted by the two transceivers differ. All three terminals are equipped with a single antenna for signal transmission and reception. Communication between the transceivers is carried out in two phases. In the first phase, each transceiver sends a transmission block of data embedded with known training symbols to the relay. In the second phase, the relay retransmits an amplified version of the received signal to both transceivers. Estimates of the channel coefficients are obtained using the maximum likelihood (ML) estimator. The performance analysis of the derived estimates is carried out in terms of the mean squared error (MSE), and conditions required to increase the estimation accuracy are determined. / UOIT
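The training-based ML estimation principle can be sketched for the simplest point-to-point case; the thesis treats the cascaded channels of the two-way relay network, so the channel coefficient, pilot design, and noise level below are illustrative assumptions.

```python
import cmath
import random

def ml_channel_estimate(train, received):
    """ML / least-squares estimate of a flat-fading channel coefficient
    from known training symbols: h_hat = (s^H y) / (s^H s)."""
    num = sum(s.conjugate() * y for s, y in zip(train, received))
    den = sum(abs(s) ** 2 for s in train)
    return num / den

rng = random.Random(7)
h = 0.8 + 0.3j  # unknown channel coefficient (assumed for illustration)
# Unit-modulus pilot symbols on the 8th roots of unity.
train = [cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]
noise = [complex(rng.gauss(0, 0.01), rng.gauss(0, 0.01)) for _ in train]
received = [h * s + w for s, w in zip(train, noise)]

h_hat = ml_channel_estimate(train, received)
print(abs(h_hat - h) < 0.05)  # estimate lands close to the true coefficient
```

The MSE of such an estimate shrinks with the training energy, which is what motivates the thesis's analysis of how the training design affects estimation accuracy.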
|
336 |
Three Essays on Real Options Analysis of Forestry Investments Under Stochastic Timber Prices. Khajuria, Rajender. 19 January 2009.
This thesis applied the theory of real options to study forestry investment decision-making under stochastic timber prices. Suitable models were developed for the stochastic timber prices after addressing major issues in the characterisation of the price process. First, the assumed stochastic timber price process was based on detailed unit root tests incorporating structural breaks in the time-series analysis. The series was found to be stationary around a shifting mean, justifying the assumption of a mean reversion model. Because of the shift in the mean, the long-run level to which prices tended to revert could not be assumed constant; accordingly, it was varied in discrete steps according to the breaks identified in the tests. The timber price series failed the normality test, implying fat tails in the data. To account for these fat tails, ‘jumps’ were incorporated in the mean reversion model. The results showed that option values for the jump model were higher than for the plain mean reversion model, and the threshold levels for investment implied different optimal paths; ignoring jumps could thus give sub-optimal results leading to erroneous decisions. Second, the long-run mean to which prices reverted was assumed to shift continuously in a random manner. This was modeled by incorporating a stochastic level and slope in the trend of the prices. Since the stochastic level and slope are not observable in reality, a Kalman filter approach was used to estimate the model parameters. Price forecasts from the model were used to estimate option values for harvest investment decisions. Third, investment in a carbon sequestration project from managed forests was evaluated using real options, under timber price stochasticity. The option values and threshold levels for investment were estimated under baseline and mitigation scenarios.
Results indicated that carbon sequestration from managed forests might not be a viable investment alternative due to existing bottlenecks. Overall, the research stressed the need for market information and adaptive management, with a pro-active approach, for efficient investment decisions in forestry.
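A mean reversion model with jumps of the kind described can be sketched as a discretized stochastic process; all parameter values below are invented for illustration and are not the thesis's estimates.

```python
import random

def simulate_price(p0, mean, kappa, sigma, jump_prob, jump_scale, steps, rng):
    """Discretized mean-reverting (Ornstein-Uhlenbeck-style) price path
    with occasional jumps: each step pulls the price toward the long-run
    mean at rate kappa, adds Gaussian noise, and with probability
    jump_prob adds a larger jump shock (capturing the fat tails)."""
    path = [p0]
    p = p0
    for _ in range(steps):
        drift = kappa * (mean - p)
        shock = sigma * rng.gauss(0, 1)
        jump = jump_scale * rng.gauss(0, 1) if rng.random() < jump_prob else 0.0
        p = p + drift + shock + jump
        path.append(p)
    return path

rng = random.Random(3)
path = simulate_price(p0=40.0, mean=60.0, kappa=0.2, sigma=1.0,
                      jump_prob=0.05, jump_scale=8.0, steps=200, rng=rng)

# Mean reversion pulls the path from its start at 40 toward the
# long-run level of 60, jumps notwithstanding.
avg_tail = sum(path[-50:]) / 50
print(round(avg_tail))
```

Monte Carlo paths from such a model are the raw material for valuing the harvest option and locating the investment threshold.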
|
337 |
QoE-based Application Mapping for Resource Management. Leila, Shayanpour. 11 January 2011.
Mapping between many different applications and many different underlying technologies is very complicated. Moreover, since users need service continuity and want to receive the service at a satisfactory level, firstly, their perception of the service should be measured and, secondly, changes in the underlying technologies should be transparent to users. As a result, there should be a "virtualization layer" between the application layer and the underlying access technologies, whose job is to abstract user perception of the application in terms of network parameters and transfer these requirements to the underlying layers. In this thesis, we propose a generic mathematical expression that abstracts user perception in a unified way across different applications. Since today's applications are composite, a generalized expression that has the same form for various applications can ease resource management calculations. We use an application service map, based on quality of experience, for resource management.
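A hypothetical example of such a unified expression is sketched below as a weighted product of logistic terms, one per network parameter; this specific functional form, and all parameter values, are assumptions for illustration rather than the thesis's actual expression.

```python
import math

def qoe_score(metrics, weights, midpoints, slopes):
    """Hypothetical unified QoE expression: a weighted product of
    logistic terms, one per network parameter, so every application's
    perception model has the same algebraic form."""
    score = 1.0
    for name, w in weights.items():
        x = metrics[name]
        term = 1.0 / (1.0 + math.exp(slopes[name] * (x - midpoints[name])))
        score *= term ** w
    return score  # in (0, 1); higher means better perceived quality

# Illustrative parameters for a video-like application: quality degrades
# past ~150 ms delay and is highly sensitive to packet loss.
weights = {"delay_ms": 1.0, "loss_pct": 2.0}
midpoints = {"delay_ms": 150.0, "loss_pct": 2.0}
slopes = {"delay_ms": 0.05, "loss_pct": 2.0}

good = qoe_score({"delay_ms": 50.0, "loss_pct": 0.1}, weights, midpoints, slopes)
bad = qoe_score({"delay_ms": 400.0, "loss_pct": 5.0}, weights, midpoints, slopes)
print(good > bad)
```

Because every application shares the same form, a resource manager could compare or aggregate scores across composite applications by swapping in per-application parameters.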
|
339 |
Identification of linear periodically time-varying (LPTV) systems. Yin, Wutao. 10 September 2009.
A linear periodically time-varying (LPTV) system is a linear time-varying system whose coefficients change periodically; such systems are widely used in control, communications, signal processing, and even circuit modeling. This thesis concentrates on the identification of LPTV systems. To this end, representations of LPTV systems are thoroughly reviewed, and identification methods are developed accordingly. The usefulness of the proposed identification methods is verified by simulation results.
A periodic input signal is applied to a finite impulse response (FIR)-LPTV system, and the noise-contaminated output is measured. Using such periodic inputs, we show that the problem of identifying LPTV systems can be formulated in the frequency domain. With the help of the discrete Fourier transform (DFT), the identification method reduces to finding the least-squares (LS) solution of a set of linear equations. A sufficient condition for the identifiability of LPTV systems is given, which can be used to find appropriate inputs for the purpose of identification.
In the frequency domain, we show that the input and the output can be related using the discrete Fourier transform (DFT), and that a least-squares method can be used to identify the alias components. A lower bound on the mean square error (MSE) of the estimated alias components is given for FIR-LPTV systems, and the optimal training signal achieving this lower MSE bound is then designed. The algorithm is extended to the identification of infinite impulse response (IIR)-LPTV systems as well. Simulation results show the accuracy of the estimation and the efficiency of the optimal training signal design.
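The least-squares identification step can be sketched in the time domain for a small FIR-LPTV system; the thesis formulates the problem in the frequency domain via the DFT, so this noiseless per-phase formulation and the toy system are simplifications for illustration.

```python
import random

def identify_fir_lptv(x, y, period, taps):
    """Per-phase least-squares identification of an FIR-LPTV system
    y[n] = sum_k h[n % period][k] * x[n - k]: stack the input regressors
    for each phase and solve the normal equations."""
    h_hat = []
    for p in range(period):
        rows = [[x[n - k] for k in range(taps)]
                for n in range(taps, len(x)) if n % period == p]
        rhs = [y[n] for n in range(taps, len(x)) if n % period == p]
        # Normal equations A^T A h = A^T b, solved by Gaussian elimination.
        ata = [[sum(r[i] * r[j] for r in rows) for j in range(taps)]
               for i in range(taps)]
        atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(taps)]
        h_hat.append(gauss_solve(ata, atb))
    return h_hat

def gauss_solve(a, b):
    """Solve the small linear system a @ sol = b with partial pivoting."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[piv] = m[piv], m[i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n + 1):
                m[r][c] -= f * m[i][c]
    sol = [0.0] * n
    for i in reversed(range(n)):
        sol[i] = (m[i][n] - sum(m[i][c] * sol[c] for c in range(i + 1, n))) / m[i][i]
    return sol

rng = random.Random(5)
h_true = [[1.0, 0.5], [-0.3, 0.8]]  # two phases, two taps each (toy system)
x = [rng.gauss(0, 1) for _ in range(400)]
y = [sum(h_true[n % 2][k] * x[n - k] for k in range(2)) if n >= 2 else 0.0
     for n in range(400)]

h_hat = identify_fir_lptv(x, y, period=2, taps=2)
err = max(abs(a - b) for hp, hp0 in zip(h_hat, h_true) for a, b in zip(hp, hp0))
print(err < 1e-6)  # noiseless data: LS recovers the coefficients exactly
```

With noisy outputs the same solve yields the LS estimate whose MSE the thesis bounds, and the choice of training input controls how close the estimate gets to that bound.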
|
340 |
They Must Be Mediocre: Representations, Cognitive Complexity, and Problem Solving in Secondary Calculus Textbooks. Romero, Christopher, 1978-. 14 March 2013.
A small group of profit-seeking publishers dominates the American textbook market and guides the learning of the majority of our nation's calculus students. The College Board's AP Calculus curriculum is a de facto national standard for this gateway course, which is critically important to 21st-century STEM careers. A multi-representational understanding of calculus is a central pillar of the AP curriculum. This dissertation asks whether this multi-representational vision is manifest in popular calculus textbooks.
This dissertation began with a survey of all AP Calculus AB Examination free response items from 2002-2011, which found that students scored worse on items characterized by numerical anchors or verbal targets. Based on previously elucidated models, a new cognitive model of five levels and six principles was developed for the purpose of calculus textbook task analysis. This model explicates complexity as a function of representational input and output. Eight popular secondary calculus textbooks were selected for study based on Amazon sales rank data. All verbally anchored mathematical tasks (n=555) from the sections of those books concerning the mean value theorem, and all AP Calculus AB prompts (n=226), were analyzed for cognitive complexity and representational diversity using the model.
The textbook study found that calculus textbooks underrepresented the numerical anchor and the verbal target, and that the textbooks were both explicitly and implicitly less cognitively complex than the AP test. The study suggests that textbook tasks should be less dense, avoid cognitive attenuation, move away from stand-alone items, juxtapose anchor representations, scaffold student solutions, incorporate previously considered overarching concepts, and include more profound follow-up questions.
To date there have been no studies of calculus textbook content based on established research on cognitive learning. Given the critical role that this calculus course plays in the lives of hundreds of thousands of students annually, it is incumbent upon the College Board to establish a textbook review process, at the very least in the same vein as the teacher syllabus auditing process established in recent years.
|