31

The molecular genetics of schizophrenia : a linkage study

Kalsi, Gursharan January 2000 (has links)
No description available.
32

Neural and genetic modelling, control and real-time finite simulation of flexible manipulators

Shaheed, Mohammad Hasan January 2000 (has links)
No description available.
33

Linear Approximations For Factored Markov Decision Processes

Patrascu, Relu-Eugen January 2004 (has links)
A Markov Decision Process (MDP) is a model employed to describe problems in which a decision must be made at each of several stages while receiving feedback from the environment. This type of model has been extensively studied in the operations research community, and fundamental algorithms have been developed to solve associated problems. However, these algorithms are quite inefficient for very large problems, leading to a need for alternatives; since MDP problems are provably hard on compressed representations, one must be content with algorithms that perform well at least on specific classes of problems. The class of problems we deal with in this thesis allows succinct representations of the MDP as a dynamic Bayes network, and of its solution as a weighted combination of basis functions. We develop novel algorithms for producing, improving, and calculating the error of approximate solutions for MDPs using a compressed representation. Specifically, we develop an efficient branch-and-bound algorithm for computing the Bellman error of the compact approximate solution regardless of its provenance. We introduce an efficient direct linear programming algorithm which, using incremental constraint generation, achieves run times significantly smaller than existing approximate algorithms without much loss of accuracy. We also present a novel direct linear programming algorithm which, instead of employing constraint generation, transforms the exponentially many constraints into a compact form more amenable to tractable solution. In spite of its perceived importance, the efficient optimization of the Bellman error towards an approximate MDP solution has eluded current algorithms; to this end we propose a novel branch-and-bound approximate policy iteration algorithm which makes direct use of our branch-and-bound method for computing the Bellman error. We further investigate another procedure for obtaining an approximate solution, based on the dual of the direct, approximate linear programming formulation for solving MDPs. To address both the loss of accuracy resulting from the direct, approximate linear program solution and the question of where basis functions come from, we also develop a principled system that not only produces the initial set of basis functions, but also augments the set with automatically generated basis functions so that the approximation error decreases according to the user's requirements and time limitations.
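As a concrete illustration of the Bellman error central to this abstract, here is a minimal sketch assuming a small dense tabular MDP and a value function expressed as a weighted combination of basis functions; the thesis itself works with factored (dynamic Bayes network) representations, which this toy example does not attempt, and all names and numbers are illustrative.

```python
import numpy as np

# Hypothetical small dense MDP (not the thesis's factored representation).
n_states, n_actions = 6, 2
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, :] = P(s' | s, a)
R = rng.uniform(0.0, 1.0, size=(n_actions, n_states))             # R[a, s]
gamma = 0.9

# Approximate value function as a weighted combination of basis functions:
# V(s) = sum_j w_j * phi_j(s).
Phi = np.stack([np.ones(n_states), np.arange(n_states, dtype=float)], axis=1)
w = np.array([0.5, 0.1])
V = Phi @ w

# Bellman backup: (T V)(s) = max_a [ R(a, s) + gamma * sum_s' P(s'|s,a) V(s') ].
TV = np.max(R + gamma * np.einsum('asn,n->as', P, V), axis=0)

# Bellman error: max-norm distance between V and its one-step backup.
print("Bellman error:", np.max(np.abs(TV - V)))
```

The thesis's branch-and-bound method computes this quantity without enumerating states; the brute-force maximization above is only feasible because the toy state space is tiny.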
34

Structural changes in fed cattle basis and the implications on basis forecasting

Highfill, Brian James January 1900 (has links)
Master of Science / Department of Agricultural Economics / Glynn T. Tonsor / The past several years have marked one of the most heightened periods of fed cattle basis volatility since the introduction of live cattle futures contracts. Understanding basis, the difference between the local cash price and the futures contract price, is imperative when making marketing and procurement decisions. In the face of increased volatility, producing accurate basis expectations is no simple task. The purpose of these analyses was to develop econometric models identifying the most important determinants of fed cattle basis, to test for structural changes in those determinants, and to compare out-of-sample forecasting performance. This study estimated in-sample econometric models using monthly data from January 2003 through September 2016, then compared the results of the competing models. Using the same time period, we then identified the presence of structural breaks in the data. Furthermore, this study analyzed out-of-sample forecasting performance for January 2012 through September 2016, comparing the results to in-sample estimations and historical average basis models. The in-sample estimations indicated the important factors that influence fed cattle basis. The results indicate that there are multiple structural breaks present in the determinants of fed cattle basis examined during this study; we can robustly conclude that structural breaks occurred in the fourth quarter of 2013 and within the 2005-2006 period. The results also indicate that the out-of-sample regression estimations were outperformed by historical average models and did not improve our ability to accurately forecast basis. Overall, a 3- or 4-year historical average model should be preferred over econometric estimations when forecasting fed cattle basis.
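To make the two quantities in this abstract concrete, here is a minimal sketch, using synthetic monthly prices (the data and column names are illustrative assumptions, not the study's), of how basis and the 3-year historical average forecast the study found hard to beat might be computed.

```python
import numpy as np
import pandas as pd

# Synthetic monthly price series, for illustration only.
rng = np.random.default_rng(1)
dates = pd.date_range("2012-01-01", periods=60, freq="MS")
futures = 120 + np.cumsum(rng.normal(0, 2, len(dates)))  # nearby live cattle futures, $/cwt
cash = futures + rng.normal(1.5, 1.0, len(dates))        # local fed cattle cash price, $/cwt
df = pd.DataFrame({"cash": cash, "futures": futures}, index=dates)

# Basis: local cash price minus the futures contract price.
df["basis"] = df["cash"] - df["futures"]

# 3-year historical average forecast: for each calendar month, average the
# basis observed in that same month over the previous three years.
df["forecast"] = (
    df.groupby(df.index.month)["basis"]
      .transform(lambda s: s.shift(1).rolling(3).mean())
)
print(df[["basis", "forecast"]].tail())
```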
35

An analysis of rainfall weather index insurance: the case of forage crops in Canada

Simpson, Alexa 18 April 2016 (has links)
This study analyzes rainfall weather index insurance used for forage crops, in the Province of Ontario, Canada. The first objective of the study was to examine factors affecting the willingness of farmers to pay for forage rainfall index insurance, and a survey was undertaken. Some factors found to influence farmers' willingness to pay were knowledge and attitude regarding insurance, their risk profile, and socio-economic factors. A second objective of the study was to examine basis risk reduction approaches. Basis risk is the difference between the actual loss on a farm and the index measured loss payments that are determined by weather station data. The focus was to capture changing yield and weather relationships over crop growth stages. Using farm level forage yield and daily weather station data from Ontario, a multi-trigger index was designed using weighted crop cycle optimization, and results show that basis risk was substantially reduced. / May 2016
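The following is a minimal sketch of a stage-weighted, multi-trigger rainfall index in the spirit of the design this abstract describes; the stage weights, rainfall strikes, payout cap, and loss figure are all illustrative assumptions, not the study's estimates.

```python
import numpy as np

def stage_payout(rain_mm, strike_mm, max_payout):
    """Pay proportionally as stage rainfall falls below the strike level."""
    shortfall = max(strike_mm - rain_mm, 0.0)
    return max_payout * shortfall / strike_mm

weights = np.array([0.2, 0.5, 0.3])      # assumed yield sensitivity per growth stage
strikes = np.array([60.0, 120.0, 80.0])  # assumed rainfall strikes, mm per stage
max_payout = 200.0                       # assumed $/acre at total shortfall

observed_rain = np.array([55.0, 90.0, 85.0])  # station-measured mm per stage
payout = sum(
    w * stage_payout(r, k, max_payout)
    for w, r, k in zip(weights, observed_rain, strikes)
)

# Basis risk: gap between the loss realised on the farm and the index payout.
actual_loss = 35.0  # assumed $/acre farm loss
basis_risk = abs(actual_loss - payout)
print(f"payout ${payout:.2f}/acre, basis risk ${basis_risk:.2f}/acre")
```

Weighting payouts by growth stage is what lets a multi-trigger design track farm losses more closely than a single season-total rainfall trigger, which is the mechanism behind the basis risk reduction reported above.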
36

Gender Invariance of Behavior and Symptom Identification Scale Factor Structure

Idiculla, Thomaskutty B. January 2008 (has links)
Thesis advisor: Thomas O'Hare / The Behavior and Symptom Identification Scale 24 (BASIS-24) is a psychiatric outcome measure used for inpatient and outpatient populations. This 24-item measure comprises six subscales: depression/functioning; interpersonal relationships; self-harm; emotional lability; psychosis; and substance abuse. Earlier studies examined the reliability and validity of the BASIS-24, but none empirically examined its factor structure across gender. The purpose of this study was therefore to assess the construct validity of the BASIS-24 six-factor model and find evidence of configural, metric, strong, and strict factorial invariance across gender. The sample consisted of 1398 psychiatric inpatients who completed the BASIS-24 at admission and discharge at 11 facilities nationwide. Confirmatory factor analyses were used to test measurement invariance of the BASIS-24 six-factor model across males and females. The single-group confirmatory factor analysis showed the original six-factor model of the BASIS-24 provided an acceptable fit to the male sample at admission (RMSEA=0.058, SRMR=0.070, CFI=0.975, NNFI=0.971, and GFI=0.977) and at discharge (RMSEA=0.059, SRMR=0.078, CFI=0.977, NNFI=0.972, and GFI=0.969). The goodness-of-fit indices for the female group at admission (RMSEA=0.055, SRMR=0.067, CFI=0.980, NNFI=0.976, and GFI=0.983) and at discharge (RMSEA=0.055, SRMR=0.079, CFI=0.980, NNFI=0.977, and GFI=0.971) also revealed that the six-factor model fit the data reasonably well. The goodness-of-fit indices between the unconstrained and constrained models showed that all four multi-group models were equivalent for both male and female samples at admission and discharge in terms of goodness-of-fit as examined through the ΔCFI, and that all of them show an acceptable fit to the data. The decrease in CFI was <0.008 for the admission sample and <0.003 for the discharge sample, both falling below the 0.01 cut-off. This indicates that configural, metric, strong, and strict factorial invariance of the BASIS-24 exist across males and females. The two important contributions of the present study are: 1) the BASIS-24 can be used as a reliable and valid symptom measurement tool for assessing psychiatric inpatient populations, allowing quantitative differences in the magnitude of patient symptoms and functioning to be compared across genders; 2) the current study provides an example of useful statistical methodology for examining specific questions related to factorial invariance of the BASIS-24 instrument across gender. Implications for social work practice and research are discussed. / Thesis (PhD) — Boston College, 2008. / Submitted to: Boston College. Graduate School of Social Work. / Discipline: Social Work.
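A minimal sketch of the ΔCFI decision rule this abstract applies, assuming hypothetical fit indices for the nested invariance models (the numbers below are illustrative, not the study's): invariance at each level is retained when CFI drops by less than 0.01 relative to the less constrained model.

```python
# Hypothetical CFI values for the sequence of increasingly constrained
# multi-group models; real values would come from fitting each CFA.
fits = {
    "configural": 0.978,  # same factor structure in both groups
    "metric":     0.976,  # plus equal factor loadings
    "strong":     0.973,  # plus equal intercepts
    "strict":     0.971,  # plus equal residual variances
}

labels = list(fits)
for prev, curr in zip(labels, labels[1:]):
    delta_cfi = fits[prev] - fits[curr]
    verdict = "retained" if delta_cfi < 0.01 else "rejected"
    print(f"{prev} -> {curr}: dCFI = {delta_cfi:.3f} ({verdict})")
```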
37

Interactive Planning and Sensing for Aircraft in Uncertain Environments with Spatiotemporally Evolving Threats

Cooper, Benjamin S 30 November 2018 (has links)
Autonomous aerial, terrestrial, and marine vehicles provide a platform for several applications including cargo transport, information gathering, surveillance, reconnaissance, and search-and-rescue. To enable such applications, two main technical problems are commonly addressed. On the one hand, the motion-planning problem addresses optimal motion to a destination: an application example is the delivery of a package in the shortest time with least fuel. Solutions to this problem often assume that all relevant information about the environment is available, possibly with some uncertainty. On the other hand, the information gathering problem addresses the maximization of some metric of information about the environment: application examples include surveillance and environmental monitoring. Solutions to the motion-planning problem in vehicular autonomy assume that information about the environment is available from three sources: (1) the vehicle's own onboard sensors, (2) stationary sensor installations (e.g. ground radar stations), and (3) other information gathering vehicles, i.e., mobile sensors, especially with the recent emphasis on collaborative teams of autonomous vehicles with heterogeneous capabilities. Each source typically processes the raw sensor data via estimation algorithms. These estimates are then available to a decision-making system such as a motion-planning algorithm. The motion-planner may use some or all of the estimates provided. There is an underlying assumption of “separation” between the motion-planning algorithm and the information about the environment. This separation is common in linear feedback control systems, where estimation algorithms are designed independently of control laws, and control laws are designed with the assumption that the estimated state is the true state. In the case of motion-planning, there is no reason to believe that such a separation between the motion-planning algorithm and the sources of estimated environment information will lead to optimal motion plans, even if the motion planner and the estimators are themselves optimal. The goal of this dissertation is to investigate whether the removal of this separation, via interactive motion-planning and sensing, can significantly improve the optimality of motion-planning. The major contribution of this work is interactive planning and sensing. We consider the problem of planning the path of a vehicle, which we refer to as the actor, to traverse a threat field with minimum threat exposure. The threat field is an unknown, time-variant, and strictly positive scalar field defined on a compact 2D spatial domain – the actor's workspace. The threat field is estimated by a network of mobile sensors that can measure the threat field pointwise. All measurements are noisy. The objective is to determine a path for the actor to reach a desired goal with minimum risk, which is a measure sensitive not only to the threat exposure itself, but also to the uncertainty therein. A novelty of this problem setup is that the actor can communicate with the sensor network and request that the sensors position themselves, in a procedure we call sensor reconfiguration, such that the actor's risk is minimized. This work continues with a foundation in motion planning in time-varying fields where waiting is a control input. Waiting is examined in the context of finding an optimal path with considerations for the cost of exposure to a threat field, the cost of movement, and the cost of waiting.
For example, an application where waiting may be beneficial in motion-planning is the delivery of a package where adverse weather may pose a risk to the safety of a UAV and its cargo. In such scenarios, an optimal plan may include “waiting until the storm passes.” Results on the computational efficiency and optimality of considering waiting in path-planning algorithms are presented. In addition, the relationship of waiting in a time-varying field represented with varying levels of resolution, or multiresolution, is studied. Interactive planning and sensing is further developed for the case of time-varying environments. This proposed extension allows for the evaluation of different mission windows, finite sensor network reconfiguration durations, finite planning durations, and a varying number of available sensors. Finally, the proposed method considers the effect of waiting in the path planner under the interactive planning and sensing for time-varying fields framework. Future work considers various extensions of the proposed interactive planning and sensing framework, including: generalizing the environment using Gaussian processes, sensor reconfiguration costs, multiresolution implementations, nonlinear parameters, decentralized sensor networks, and an application to aerial payload delivery by parafoil.
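A minimal sketch of planning with waiting as a control input, assuming a known threat field on a small grid (the dissertation treats the field as unknown and estimated; the function name, cost constants, and toy data here are illustrative assumptions). Dijkstra's algorithm runs on the time-expanded graph of (position, time) states, where staying in place is one of the actions.

```python
import heapq

def plan_with_waiting(threat, start, goal, horizon, wait_cost=0.1, move_cost=0.2):
    """threat[t][y][x]: threat exposure at time t; returns minimal accumulated cost."""
    ny, nx = len(threat[0]), len(threat[0][0])
    dist = {(start, 0): 0.0}
    pq = [(0.0, start, 0)]
    while pq:
        cost, (x, y), t = heapq.heappop(pq)
        if (x, y) == goal:
            return cost
        if t + 1 >= horizon or cost > dist.get(((x, y), t), float("inf")):
            continue
        # Four moves plus the wait action (0, 0).
        for dx, dy in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
            nx_, ny_ = x + dx, y + dy
            if 0 <= nx_ < nx and 0 <= ny_ < ny:
                step = wait_cost if (dx, dy) == (0, 0) else move_cost
                new = cost + step + threat[t + 1][ny_][nx_]
                if new < dist.get(((nx_, ny_), t + 1), float("inf")):
                    dist[((nx_, ny_), t + 1)] = new
                    heapq.heappush(pq, (new, (nx_, ny_), t + 1))
    return float("inf")

# Toy field: 3x3 grid over 6 steps; threat spikes at t = 1, 2 then subsides,
# so the cheapest plan may include waiting out the "storm".
threat = [[[0.1 * (t in (1, 2)) + 0.01 for _ in range(3)] for _ in range(3)]
          for t in range(6)]
print(plan_with_waiting(threat, start=(0, 0), goal=(2, 2), horizon=6))
```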
38

17x bits elliptic curve scalar multiplication over GF(2^m) using optimal normal basis.

January 2001 (has links)
Tang Ko Cheung, Simon. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 89-91). / Abstracts in English and Chinese. / Chapter 1 --- Theory of Optimal Normal Bases --- p.3 / Chapter 1.1 --- Introduction --- p.3 / Chapter 1.2 --- The minimum number of terms --- p.6 / Chapter 1.3 --- Constructions for optimal normal bases --- p.7 / Chapter 1.4 --- Existence of optimal normal bases --- p.10 / Chapter 2 --- Implementing Multiplication in GF(2^m) --- p.13 / Chapter 2.1 --- Defining the Galois fields GF(2^m) --- p.13 / Chapter 2.2 --- Adding and squaring normal basis numbers in GF(2^m) --- p.14 / Chapter 2.3 --- Multiplication formula --- p.15 / Chapter 2.4 --- Construction of Lambda table for Type I ONB in GF(2^m) --- p.16 / Chapter 2.5 --- Constructing Lambda table for Type II ONB in GF(2^m) --- p.21 / Chapter 2.5.1 --- Equations of the Lambda matrix --- p.21 / Chapter 2.5.2 --- An example of Type IIa ONB --- p.23 / Chapter 2.5.3 --- An example of Type IIb ONB --- p.24 / Chapter 2.5.4 --- Creating the Lambda vectors for Type II ONB --- p.26 / Chapter 2.6 --- Multiplication in practice --- p.28 / Chapter 3 --- Inversion over optimal normal basis --- p.33 / Chapter 3.1 --- A straightforward method --- p.33 / Chapter 3.2 --- High-speed inversion for optimal normal basis --- p.34 / Chapter 3.2.1 --- Using the almost inverse algorithm --- p.34 / Chapter 3.2.2 --- "Faster inversion, preliminary subroutines" --- p.37 / Chapter 3.2.3 --- "Faster inversion, the code" --- p.41 / Chapter 4 --- Elliptic Curve Cryptography over GF(2^m) --- p.49 / Chapter 4.1 --- Mathematics of elliptic curves --- p.49 / Chapter 4.2 --- Elliptic Curve Cryptography --- p.52 / Chapter 4.3 --- Elliptic curve discrete log problem --- p.56 / Chapter 4.4 --- Finding good and secure curves --- p.58 / Chapter 4.4.1 --- Avoiding weak curves --- p.58 / Chapter 4.4.2 --- Finding curves of appropriate order --- p.59 / Chapter 5 --- The performance of 17x bit Elliptic Curve Scalar Multiplication --- p.63 / Chapter 5.1 --- Choosing finite fields --- p.63 / Chapter 5.2 --- 17x bit test vectors for ONB --- p.65 / Chapter 5.3 --- Testing methodology and sample runs --- p.68 / Chapter 5.4 --- Proposing an elliptic curve discrete log problem for a 178-bit curve --- p.72 / Chapter 5.5 --- Results and further explorations --- p.74 / Chapter 6 --- On matrix RSA --- p.77 / Chapter 6.1 --- Introduction --- p.77 / Chapter 6.2 --- 2 by 2 matrix RSA scheme 1 --- p.80 / Chapter 6.3 --- Theorems on matrix powers --- p.80 / Chapter 6.4 --- 2 by 2 matrix RSA scheme 2 --- p.83 / Chapter 6.5 --- 2 by 2 matrix RSA scheme 3 --- p.84 / Chapter 6.6 --- An example and conclusion --- p.85 / Bibliography --- p.91
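A small sketch (an educational illustration, not the thesis's code) of the key property that makes normal bases attractive for GF(2^m) arithmetic, as covered in Chapter 2.2 above: representing an element by its coordinates over the basis {b, b^2, b^4, ..., b^(2^(m-1))}, squaring is a cyclic shift of the coordinate vector, and addition is bitwise XOR.

```python
def nb_square(coords):
    """Square a GF(2^m) element given as a list of m normal-basis bits.

    If a = sum_i a_i * b^(2^i), then a^2 = sum_i a_i * b^(2^(i+1)), which
    cyclically rotates the coordinates one position (b^(2^m) wraps to b).
    """
    return coords[-1:] + coords[:-1]

def nb_add(a, b):
    """Addition in characteristic 2 is coordinate-wise XOR."""
    return [x ^ y for x, y in zip(a, b)]

x = [1, 0, 1, 1, 0]                # an element of GF(2^5), illustrative bits
print(nb_square(x))                # [0, 1, 0, 1, 1]: a free operation in hardware
print(nb_add(x, nb_square(x)))     # x + x^2
```

Multiplication is the expensive operation in this representation, which is why the thesis devotes Chapters 2.3-2.6 to constructing the Lambda tables that make it efficient for optimal normal bases.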
39

The effectiveness of the accrual-based trading strategy for loss firms. / 基於會計應計項目的交易策略對於虧損企業的有效性 / Ji yu hui ji ying ji xiang mu de jiao yi ce lüe dui yu kui sun qi ye de you xiao xing

January 2009 (has links)
Huang, Zheng. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (p. 30-32). / Abstract also in Chinese. / Abstract (English) --- p.i / Abstract (Chinese) --- p.ii / Acknowledgement --- p.iii / Table of Contents --- p.iv / List of Tables and Figures --- p.v / Chapters: / Chapter 1. --- Introduction --- p.1 / Chapter 2. --- Literature Review: Accrual Anomaly --- p.7 / Chapter 3. --- Data and Descriptive Statistics --- p.12 / Chapter 3.1 --- Sample Selection --- p.12 / Chapter 3.2 --- Variable Measurement --- p.12 / Chapter 3.3 --- Descriptive Statistics --- p.14 / Chapter 4. --- Basic Accrual Anomaly Evidence for Loss Firms --- p.15 / Chapter 5. --- Empirical Evidences of the Pseudo accrual Trading Strategy --- p.18 / Chapter 5.1 --- The Effectiveness of the Pseudo accrual Trading Strategy --- p.18 / Chapter 5.2 --- The Asset Growth Effect Tests of the Pseudo Accrual --- p.22 / Chapter 6. --- Sensitivity Tests --- p.25 / Chapter 7. --- Conclusion --- p.28 / References --- p.30
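For readers unfamiliar with the accrual-based strategy this thesis tests, here is a heavily simplified sketch of the classic hedge portfolio construction (in the style of the accrual anomaly literature the thesis reviews); the data, column names, and decile rule are illustrative assumptions, not the thesis's design for loss firms.

```python
import numpy as np
import pandas as pd

# Synthetic firm-year data, for illustration only.
rng = np.random.default_rng(2)
firms = pd.DataFrame({
    "accruals": rng.normal(0, 0.08, 500),          # total accruals scaled by assets
    "next_year_return": rng.normal(0.05, 0.25, 500),
})

# Rank firms into deciles by scaled accruals; the accrual anomaly hedge
# portfolio goes long the lowest-accrual decile and short the highest.
firms["decile"] = pd.qcut(firms["accruals"], 10, labels=False)
long_ret = firms.loc[firms["decile"] == 0, "next_year_return"].mean()
short_ret = firms.loc[firms["decile"] == 9, "next_year_return"].mean()
print(f"hedge portfolio return: {long_ret - short_ret:.3f}")
```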
40

Uncertainty quantification for spatial field data using expensive computer models : refocussed Bayesian calibration with optimal projection

Salter, James Martin January 2017 (has links)
In this thesis, we present novel methodology for emulating and calibrating computer models with high-dimensional output. Computer models for complex physical systems, such as climate, are typically expensive and time-consuming to run. Due to this inability to run computer models efficiently, statistical models ('emulators') are used as fast approximations of the computer model, fitted based on a small number of runs of the expensive model, allowing more of the input parameter space to be explored. Common choices for emulators are regressions and Gaussian processes. The input parameters of the computer model that lead to output most consistent with the observations of the real-world system are generally unknown, hence computer models require careful tuning. Bayesian calibration and history matching are two methods that can be combined with emulators to search for the best input parameter setting of the computer model (calibration), or remove regions of parameter space unlikely to give output consistent with the observations, if the computer model were to be run at these settings (history matching). When calibrating computer models, it has been argued that fitting regression emulators is sufficient, due to the large, sparsely-sampled input space. We examine this for a range of examples with different features and input dimensions, and find that fitting a correlated residual term in the emulator is beneficial, in terms of more accurately removing regions of the input space, and identifying parameter settings that give output consistent with the observations. We demonstrate and advocate for multi-wave history matching followed by calibration for tuning. In order to emulate computer models with large spatial output, projection onto a low-dimensional basis is commonly used. The standard accepted method for selecting a basis is to use n runs of the computer model to compute principal components via the singular value decomposition (the SVD basis), with the coefficients given by this projection emulated. We show that when the n runs used to define the basis do not contain important patterns found in the real-world observations of the spatial field, linear combinations of the SVD basis vectors will not generally be able to represent these observations. Therefore, the results of a calibration exercise are meaningless, as we converge to incorrect parameter settings, likely assigning zero posterior probability to the correct region of input space. We show that the inadequacy of the SVD basis is very common and present in every climate model field we looked at. We develop a method for combining important patterns from the observations with signal from the model runs, developing a calibration-optimal rotation of the SVD basis that allows a search of the output space for fields consistent with the observations. We illustrate this method by performing two iterations of history matching on a climate model, CanAM4. We develop a method for beginning to assess model discrepancy for climate models, where modellers would first like to see whether the model can achieve certain accuracy, before allowing specific model structural errors to be accounted for. We show that calibrating using the basis coefficients often leads to poor results, with fields consistent with the observations ruled out in history matching. 
We develop a method for adjusting for basis projection when history matching, so that an efficient and more accurate implausibility bound can be derived that is consistent with history matching using the computationally prohibitive spatial field.
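A minimal sketch with synthetic data of the SVD-basis workflow this abstract critiques: build a basis from n model runs via the singular value decomposition, project the observations onto the leading vectors, and check the reconstruction error. A large residual is precisely the failure mode described above, where no linear combination of the basis vectors can represent the observed field; the array sizes and truncation level here are illustrative assumptions.

```python
import numpy as np

# Synthetic "ensemble" of n model runs over a spatial grid (rows = fields).
rng = np.random.default_rng(3)
n_runs, n_grid = 20, 1000
ensemble = rng.normal(size=(n_runs, n_grid))
mean_field = ensemble.mean(axis=0)
centred = ensemble - mean_field

# SVD basis: rows of Vt are the principal patterns of the ensemble.
_, _, Vt = np.linalg.svd(centred, full_matrices=False)
k = 5
basis = Vt[:k]                                 # leading k basis vectors

# Project the observed field onto the basis and reconstruct it.
obs = rng.normal(size=n_grid)                  # observed spatial field
coeffs = basis @ (obs - mean_field)
recon = mean_field + coeffs @ basis
residual = np.linalg.norm(obs - recon) / np.linalg.norm(obs)
print(f"relative reconstruction error: {residual:.3f}")
```

When this residual is large, the thesis's remedy is to rotate the SVD basis toward patterns present in the observations before emulating the coefficients, rather than to calibrate with the inadequate basis.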
