  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
451

A count data model with endogenous covariates : formulation and application to roadway crash frequency at intersections

Born, Kathryn Mary 24 March 2014 (has links)
This thesis proposes an estimation approach for count data models with endogenous covariates. The maximum approximate composite marginal likelihood inference approach is used to estimate model parameters. The modeling framework is applied to predict crash frequency at urban intersections in Irving, Texas. The sample is drawn from the Texas Department of Transportation crash incident files for the year 2008. The results highlight the importance of accommodating endogeneity effects in count models. In addition, the results reveal the increased propensity for crashes at intersections with flashing lights, intersections with crest approaches, and intersections that are on frontage roads.
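The thesis's MACML estimator handles endogenous covariates jointly and is considerably more involved; as a much simpler point of reference, a plain Poisson count model (with no endogeneity correction) can be fit by gradient ascent on its log-likelihood. Everything below — the simulated data, the parameter values, and the `fit_poisson` helper — is an invented illustration, not material from the thesis.

```python
import numpy as np

def poisson_loglik(beta, X, y):
    """Log-likelihood of a Poisson count model with rate exp(X @ beta).
    The constant log(y!) term is dropped; it does not affect the argmax."""
    mu = np.exp(X @ beta)
    return float(np.sum(y * np.log(mu) - mu))

def fit_poisson(X, y, lr=0.01, steps=2000):
    """Gradient ascent on the Poisson log-likelihood (illustrative, not MACML)."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        mu = np.exp(X @ beta)
        beta += lr * X.T @ (y - mu) / len(y)   # score (gradient) of the Poisson model
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])  # intercept + one covariate
true_beta = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ true_beta))                     # simulated crash counts
beta_hat = fit_poisson(X, y)
```

With an endogenous covariate, this naive estimator would be biased — which is exactly the motivation for the joint estimation framework the thesis develops.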
452

Scheduling of Generalized Cambridge Rings

Bauer, Daniel Howard 14 October 2009 (has links)
A Generalized Cambridge Ring is a queueing system that can be used as an approximate model of some material handling systems used in modern factories. It consists of one or more vehicles that carry cargo from origins to destinations around a loop, with queues forming when cargo temporarily exceeds the capacity of the system. For some Generalized Cambridge Rings that satisfy the usual traffic conditions for stability, it is demonstrated that some nonidling scheduling policies are unstable. A good scheduling policy will increase the efficiency of these systems by reducing waiting times and thereby also reducing work in process (WIP). Simple heuristic policies are developed which provide substantial improvements over the commonly used first-in-first-out (FIFO) policy. Variances are incorporated into previously developed fluid models that used only means, producing a more accurate partially discrete fluid mean-variance model, which is used to further reduce waiting times. Optimal policies are obtained for some simple special cases, and simulations are used to compare policies in more general cases. The methods developed may be applicable to other queueing systems.
453

Solving the generalized assignment problem : a hybrid Tabu search/branch and bound algorithm

Woodcock, Andrew John January 2007 (has links)
The research reported in this thesis considers the classical combinatorial optimization problem known as the Generalized Assignment Problem (GAP). Since the mid-1970s, researchers have been developing solution approaches for this particular type of problem due to its importance in both practical and theoretical terms. Early attempts at solving GAP tended to use exact integer programming techniques such as Branch and Bound. Although these tended to be reasonably successful on small problem instances, they struggled to cope with the increase in computational effort required to solve larger instances. The increase in available computing power during the 1980s and 1990s coincided with the development of some highly efficient heuristic approaches such as Tabu Search (TS), Genetic Algorithms (GA) and Simulated Annealing (SA). Heuristic approaches were subsequently developed that were able to obtain high quality solutions to larger and more complex instances of GAP. Most of these heuristic approaches were able to outperform highly sophisticated commercial mathematical programming software, since heuristics tend to be tailored to the problem and therefore exploit its structure. A new approach for solving GAP has been developed during this research that combines the exact Branch and Bound approach and the heuristic strategy of Tabu Search to produce a hybrid algorithm for solving GAP. This approach utilizes the mathematical programming software Xpress-MP as a Branch and Bound solver in order to solve sub-problems that are generated by the Tabu Search guiding heuristic. Tabu Search makes use of memory structures that record information about attributes of solutions visited during the search. This information is used to guide the search and, in the case of the hybrid algorithm, to generate sub-problems to pass to the Branch and Bound solver.
The new algorithm has been developed, implemented and tested on benchmark test problems that are extremely challenging, and a comprehensive analysis of the experimentation is reported in this thesis.
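The Tabu Search component described above — memory structures recording attributes of visited solutions, used to forbid reversing recent moves — can be sketched on a tiny GAP instance. The instance, penalty weight, and tenure below are invented for illustration, and the Branch and Bound hybridization with Xpress-MP is omitted entirely.

```python
import random

# Toy GAP instance (invented): cost[i][j] / req[i][j] for agent i, job j.
cost = [[4, 1, 3], [2, 5, 2]]
req  = [[2, 1, 2], [1, 2, 2]]
cap  = [3, 3]

def total_cost(assign):
    """Assignment cost plus a heavy penalty for capacity violations."""
    c = sum(cost[a][j] for j, a in enumerate(assign))
    for i in range(len(cap)):
        load = sum(req[i][j] for j, a in enumerate(assign) if a == i)
        c += 100 * max(0, load - cap[i])
    return c

def tabu_search(n_jobs=3, n_agents=2, iters=50, tenure=3, seed=0):
    rng = random.Random(seed)
    current = [rng.randrange(n_agents) for _ in range(n_jobs)]
    best, best_cost = current[:], total_cost(current)
    tabu = {}                                # (job, agent) -> iteration when allowed again
    for it in range(iters):
        def score(move):
            j, a = move
            trial = current[:]
            trial[j] = a
            return total_cost(trial)
        moves = [(j, a) for j in range(n_jobs) for a in range(n_agents) if a != current[j]]
        # Aspiration criterion: a tabu move is allowed if it beats the best solution so far.
        allowed = [m for m in moves if tabu.get(m, 0) <= it or score(m) < best_cost]
        if not allowed:
            allowed = moves
        j, a = min(allowed, key=score)
        tabu[(j, current[j])] = it + tenure  # forbid moving job j straight back
        current[j] = a
        if total_cost(current) < best_cost:
            best, best_cost = current[:], total_cost(current)
    return best, best_cost

best, best_cost = tabu_search()
```

In the hybrid described by the thesis, the analogue of `min(allowed, key=score)` would instead hand a restricted sub-problem to the Branch and Bound solver.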
454

Higher-order generalized singular value decomposition : comparative mathematical framework with applications to genomic signal processing

Ponnapalli, Sri Priya 03 December 2010 (has links)
The number of high-dimensional datasets recording multiple aspects of a single phenomenon is ever increasing in many areas of science. This is accompanied by a fundamental need for mathematical frameworks that can compare data tabulated as multiple large-scale matrices of different numbers of rows. The only such framework to date, the generalized singular value decomposition (GSVD), is limited to two matrices. This thesis addresses this limitation and defines a higher-order GSVD (HO GSVD) of N > 2 datasets, providing a mathematical framework that can compare multiple high-dimensional datasets tabulated as large-scale matrices with different numbers of rows.
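In outline, the published HO GSVD construction obtains one shared right basis V from the eigenvectors of a mean of pairwise quotients of the datasets' Gram matrices; each dataset then factors as D_i = U_i Σ_i V^T. The NumPy sketch below follows that outline on random data and is an unverified illustration, not the thesis's implementation.

```python
import numpy as np

def ho_gsvd(mats):
    """Higher-order GSVD sketch: one shared right basis V for N matrices with
    the same column count but different row counts."""
    A = [D.T @ D for D in mats]              # Gram matrices
    n = len(A)
    # Mean of the symmetrized pairwise quotients; the overall scaling is
    # immaterial for the eigenvectors.
    S = sum((A[i] @ np.linalg.inv(A[j]) + A[j] @ np.linalg.inv(A[i])) / 2.0
            for i in range(n) for j in range(i + 1, n)) / (n * (n - 1) / 2.0)
    _, V = np.linalg.eig(S)
    V = np.real(V)                           # eigenvalues of S are real in theory
    factors = []
    for D in mats:
        B = D @ np.linalg.inv(V).T           # so that D = B @ V.T exactly
        sigma = np.linalg.norm(B, axis=0)    # per-dataset "generalized singular values"
        U = B / sigma                        # unit-norm left basis vectors
        factors.append((U, sigma))
    return factors, V

rng = np.random.default_rng(1)
mats = [rng.normal(size=(m, 4)) for m in (10, 12, 15)]   # N=3 datasets, shared columns
factors, V = ho_gsvd(mats)
```

By construction each dataset is reconstructed exactly as U_i diag(σ_i) V^T, whatever the data; what the thesis proves is which properties of the two-matrix GSVD this shared V preserves.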
455

An investigation of the optimal test design for multi-stage test using the generalized partial credit model

Chen, Ling-Yin 27 January 2011 (has links)
Although the design of multistage testing (MST) has received increasing attention, previous studies mostly focused on comparing the psychometric properties of MST with those of CAT and paper-and-pencil (P&P) tests. Few studies have systematically examined the number of items in the routing test, the number of subtests in a stage, or the number of stages in a test design needed to achieve accurate measurement in MST. Given that no study has identified an ideal MST design using polytomously-scored items, the current study conducted a simulation to investigate the optimal design for MST using the generalized partial credit model (GPCM). Eight different test designs were examined for ability estimation across two routing test lengths (short and long) and two total test lengths (short and long). The item pool and generated item responses were based on items calibrated from a national test consisting of 273 partial credit items. Across all test designs, the maximum information routing method was employed and maximum likelihood estimation was used for ability estimation. Ten samples of 1,000 simulees were used to assess each test design. The performance of each test design was evaluated in terms of the precision of ability estimates, item exposure rate, item pool utilization, and item overlap. The study found that all test designs produced very similar results. Although there was some variation among the eight test structures in the ability estimates, the results indicate that the overall performance of these eight test structures in achieving measurement precision did not substantially deviate from one another with regard to total test length and routing test length. However, results from the present study suggest that routing test length does have a significant effect on the number of non-convergent cases in MST tests.
Short routing tests tended to result in more non-convergent cases, and structures with fewer stages yielded more such cases than structures with more stages. Overall, unlike previous findings, the results of the present study indicate that the MST test structure is less likely to be a factor affecting ability estimation when polytomously-scored items are used, based on the GPCM.
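For reference, the GPCM that underlies this simulation gives category probabilities as a softmax over cumulative step scores: P(X = k | θ) ∝ exp(Σ_{v≤k} a(θ − b_v)) with b_0 fixed at 0. A minimal sketch (the parameter values are invented, not the study's calibrated items):

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Generalized partial credit model: P(X = k | theta) for k = 0..m,
    with discrimination a and step parameters b[1..m]; b_0 := 0 by convention."""
    steps = np.concatenate([[0.0], b])
    z = np.cumsum(a * (theta - steps))   # z_k = sum_{v<=k} a*(theta - b_v)
    z -= z.max()                         # stabilize the softmax numerically
    p = np.exp(z)
    return p / p.sum()

# A 4-category item: an examinee at theta = 0.5 most likely scores in category 2.
p = gpcm_probs(theta=0.5, a=1.2, b=np.array([-1.0, 0.0, 1.0]))
```

Maximum likelihood ability estimation, as used in the study, maximizes the product of such probabilities over the administered items; non-convergence arises when that likelihood has no interior maximum for a short response string.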
456

An evaluation of item difficulty and person ability estimation using the multilevel measurement model with short tests and small sample sizes

Brune, Kelly Diane 08 June 2011 (has links)
Recently, researchers have reformulated Item Response Theory (IRT) models as multilevel models to evaluate clustered data appropriately. Using a multilevel model to obtain item difficulty and person ability parameter estimates that correspond directly with IRT models' parameters is often referred to as multilevel measurement modeling. Unlike conventional IRT models, multilevel measurement models (MMM) can handle the addition of predictor variables, appropriately model clustered data, and can be estimated using non-specialized computer software, including SAS. For example, a three-level model can model the repeated measures (level one) of individuals (level two) who are clustered within schools (level three). Limitations in terms of the minimum sample size and number of test items that permit reasonable estimation of one-parameter logistic (1-PL) IRT model parameters have not been examined for either the two- or three-level MMM. Researchers (Wright and Stone, 1979; Lord, 1983; Hambleton and Cook, 1983) have found that sample sizes under 200 and fewer than 20 items per test result in poor model fit and poor parameter recovery for dichotomous 1-PL IRT models with data that meet model assumptions. This simulation study tested the performance of the two-level and three-level MMM under various conditions that included three sample sizes (100, 200, and 400), three test lengths (5, 10, and 20), three level-3 cluster sizes (10, 20, and 50), and two generated intraclass correlations (.05 and .15). The study demonstrated that use of the two- and three-level MMMs leads to somewhat divergent results for item difficulty and person-level ability estimates. The mean relative item difficulty bias was lower for the three-level model than the two-level model. The opposite was true for the person-level ability estimates, with a smaller mean relative parameter bias for the two-level model than the three-level model.
There was no difference between the two- and three-level MMMs in the school-level ability estimates. Modeling clustered data appropriately, having a minimum total sample size of 100 to accurately estimate level-2 residuals and a minimum total sample size of 400 to accurately estimate level-3 residuals, and having at least 20 items will help ensure valid statistical test results.
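The data-generating side of such a simulation is compact: under the 1-PL (Rasch) model that the MMM re-expresses as a logistic mixed model, P(correct) has logit θ − b. A hedged sketch of response generation at the study's largest condition (the parameter distributions are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative 1-PL (Rasch) data generation: 400 persons, 20 items.
n_persons, n_items = 400, 20
theta = rng.normal(0, 1, n_persons)   # person abilities
b = rng.normal(0, 1, n_items)         # item difficulties

def rasch_prob(theta, b):
    """P(correct) under the 1-PL model: logit = theta - b, for all pairs."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

responses = (rng.random((n_persons, n_items)) < rasch_prob(theta, b)).astype(int)
```

A multilevel analysis would add a school level by drawing cluster effects that shift θ, with the intraclass correlation controlling the cluster effects' share of the ability variance.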
457

Short circuit modeling of wind turbine generators

2013 August 1900 (has links)
Modeling of wind farms to determine their short circuit contribution in response to faults is a crucial part of system impact studies performed by power utilities. Short circuit calculations are necessary to determine protective relay settings and equipment ratings, and to provide data for protection coordination. The plethora of factors that influence the response of wind farms to short circuits makes short circuit modeling of wind farms an interesting, complex, and challenging task. Low voltage ride through (LVRT) requirements make it necessary for the latest generation of wind generators to be capable of providing reactive power support without disconnecting from the grid during and after voltage sags. If the wind generator must stay connected to the grid, a facility has to be provided to by-pass the high rotor current that occurs during voltage sags and prevent damage to the rotor-side power electronic circuits. This is done through crowbar circuits, which are of two types, active and passive, based on the power electronic device used in the crowbar triggering circuit. Power electronics-based converters and controls have become an integral part of wind generator systems such as the Type 3 doubly fed induction generator based wind generators. The proprietary nature of the design of these power electronics makes it difficult to obtain the necessary information from the manufacturer to model them accurately. Also, the use of power electronic controllers has led to phenomena such as sub-synchronous control interactions (SSCI) in series compensated Type 3 wind farms, which are characterized by non-fundamental frequency oscillations. SSCI affects fault current magnitude significantly and is a crucial factor that cannot be ignored while modeling series compensated Type 3 wind farms. These factors have led to disagreement and inconsistencies about which techniques are appropriate for short circuit modeling of wind farms.
Fundamental-frequency models such as the voltage-behind-transient-reactance model are incapable of representing the majority of critical wind generator fault characteristics, such as sub-synchronous interactions. Detailed time-domain models, though accurate, demand high levels of computation and modeling expertise. Voltage-dependent current source models based on look-up tables are not stand-alone models and provide only a black-box type of solution. The short circuit modeling methodology developed in this research work for representing a series compensated Type 3 wind farm is based on generalized averaging theory, in which the system variables are represented as time-varying Fourier coefficients known as dynamic phasors. The modeling technique is also known as dynamic phasor modeling. The Type 3 wind generator has become the most popular type of wind generator, making it an ideal candidate for such a modeling method. The essence of this scheme for modeling a periodically driven system, such as power converter circuits, is to retain only particular Fourier coefficients based on the behavior of interest of the system under study, making it computationally efficient and inclusive of the required frequency components, even if non-fundamental in nature. The capability to model non-fundamental frequency components is critical for representing sub-synchronous interactions. A 450 MW Type 3 wind farm consisting of 150 generator units was modeled using the proposed approach. The method is shown to be highly accurate for representing faults at the point of interconnection of the wind farm to the grid, for balanced and unbalanced faults, as well as for non-fundamental frequency components present in fault currents during sub-synchronous interactions. The model is further shown to be accurate for different degrees of transmission line compensation and different transformer configurations used in the test system.
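The core quantity in that scheme, the k-th dynamic phasor, is the k-th time-varying Fourier coefficient of a signal over a sliding window of one period: X_k = (1/T) ∫ x(τ) e^{−jkωτ} dτ. A minimal numerical sketch (a pure 60 Hz cosine, uniformly sampled over exactly one period, so the rectangle rule is exact up to floating point; the signal and sampling choices are illustrative):

```python
import numpy as np

def dynamic_phasor(x, t, k, T):
    """k-th dynamic phasor of one windowed period of x(t):
    (1/T) * integral of x(tau) * e^{-j*k*w*tau}, via the rectangle rule
    on a uniform grid covering exactly one period (endpoint excluded)."""
    w = 2 * np.pi / T
    return np.mean(x * np.exp(-1j * k * w * t))

T = 1 / 60.0                                   # 60 Hz fundamental period
t = np.arange(0, 2000) * (T / 2000)            # one period, endpoint excluded
x = 10 * np.cos(2 * np.pi * 60 * t + 0.3)
X1 = dynamic_phasor(x, t, 1, T)                # expect (A/2)*e^{j*phi} = 5*e^{j0.3}
```

For a slowly modulated waveform, sliding this window in time makes X_k itself a time-varying state, which is what lets the dynamic phasor model capture non-fundamental components such as SSCI oscillations without full electromagnetic-transient simulation.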
458

Bessel functions and orthogonal polynomials in more than one variable

Λόης, Αθανάσιος 13 September 2007 (has links)
The generalized Bessel functions (GBF) are multivariable extensions of the ordinary Bessel functions and their modified versions. Functions of this type are encountered in a large number of fields, especially in physics, where they serve as an important mathematical tool for simplifying complicated computations. They arise, for example, in scattering phenomena where the dipole approximation cannot be applied, in the interaction of intense laser beams with electrons, in the interaction of light with a weakly bound electron, and in ionization problems. These functions satisfy properties analogous to those of the ordinary Bessel functions of one real variable (with respect to the generating function and the recurrence relations), and the proofs of these relations rest on the definition of the generalized Bessel functions and on the properties of the ordinary Bessel functions. This survey presents the various generalizations, beginning with the two-variable, one-integer-index case, for which the generating function, recurrence relations, derivatives of every order with respect to both variables, Jacobi-Anger-type expansions, and relations useful for numerical computation are given. The same study is carried out for the various modified forms of the functions, for the generalized functions of three and, in general, M variables, and for Bessel functions with more than one index, in both the one-dimensional and the multidimensional case. The generalized two-dimensional Hermite polynomials and the Gould-Hopper polynomials are reviewed, together with their properties and the way they are connected to the generalized Bessel functions. Finally, some results concerning properties of two-variable Legendre and Laguerre polynomials are presented.
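As a concrete anchor for the two-variable case, the standard series definition expresses the GBF through ordinary Bessel functions: J_n(x, y) = Σ_l J_{n−2l}(x) J_l(y). A sketch with an ad hoc truncation limit (the function name and `lmax` are invented here):

```python
import numpy as np
from scipy.special import jv

def gbf2(n, x, y, lmax=40):
    """Two-variable generalized Bessel function via the series
    J_n(x, y) = sum over l of J_{n-2l}(x) * J_l(y), truncated at |l| <= lmax."""
    ls = np.arange(-lmax, lmax + 1)
    return float(np.sum(jv(n - 2 * ls, x) * jv(ls, y)))
```

Two quick checks follow from the theory: setting y = 0 recovers the ordinary J_n(x) (since J_l(0) = δ_{l0}), and evaluating the generating function at t = 1 gives Σ_n J_n(x, y) = 1, just as for ordinary Bessel functions.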
459

A study of cluster-size distributions for generalized Poisson and related asymptotic distributions

Κουσίδης, Σωκράτης 09 October 2008 (has links)
In the interpretation of biological data where the units under study appear in clusters of random size and number, generalized (compound) distributions play a central role. In particular, every univariate discrete distribution that is infinitely divisible can be represented as a generalized Poisson distribution. The case where the cluster-size distribution (csd) is a generalized (with a newly introduced parameter) size-biased (gsb) logarithmic-series distribution has been studied previously; taking limits of this parameter yields the NNBD and the Polya-Aeppli as limiting distributions. This thesis studies the distribution that arises when the csd is the gsb version of an arbitrary distribution. The probability generating function is given and the asymptotic distributions are determined in the general case. The properties of the distribution are studied, and estimators are derived by the methods of moments and maximum likelihood. In particular, the case of the truncated Poisson, which yields the Neyman and Thomas distributions as limiting cases, is presented and data are simulated. The results previously proved for the logarithmic-series distribution are also obtained as a special case of the general formulas. Corresponding generalized bivariate models of such distributions are then developed; their marginal and conditional distributions are given, and moments and useful relations for the bivariate models are computed. Finally, special cases such as the sum-symmetric power-series distributions are presented, together with applications of the bivariate distributions studied.
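The construction being generalized can be sketched concretely: a compound ("generalized") Poisson draw sums a Poisson number of cluster sizes, with pgf G(s) = exp(λ(g(s) − 1)) where g is the cluster-size pgf. Below, the csd is a zero-truncated Poisson, as in the special case the thesis examines; the parameter values and helper names are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def cluster_total(lam, mu):
    """One draw of a generalized (compound) Poisson variable: the number of
    clusters is Poisson(lam), and each cluster's size is a zero-truncated
    Poisson(mu), sampled here by rejecting zero draws."""
    n_clusters = rng.poisson(lam)
    sizes = []
    while len(sizes) < n_clusters:
        k = rng.poisson(mu)
        if k > 0:
            sizes.append(k)
    return sum(sizes)

lam, mu = 3.0, 1.5
draws = np.array([cluster_total(lam, mu) for _ in range(20000)])
# Theoretical mean: lam * E[K], with E[K] = mu / (1 - exp(-mu)) for the
# zero-truncated Poisson cluster size.
expected = lam * mu / (1 - np.exp(-mu))
```

Replacing the truncated Poisson sampler with any other cluster-size law gives the general compound family; the thesis's contribution is the size-biased, extra-parameter version of that csd and its limiting distributions.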
460

Motion planning of mobile robot in dynamic environment using potential field and roadmap based planner

Malik, Waqar Ahmad 30 September 2004 (has links)
Mobile robots are increasingly being used to perform tasks in unknown environments. The potential of robots to undertake such tasks lies in their ability to intelligently and efficiently locate and interact with objects in their environment. My research focuses on developing algorithms to plan paths for mobile robots in a partially known environment observed by an overhead camera. The environment consists of dynamic obstacles and targets. A new methodology, the Extrapolated Artificial Potential Field, is proposed for real-time robot path planning. An algorithm for probabilistic collision detection and avoidance is used to enhance the planner; the aim is for the robot to select avoidance maneuvers that keep it clear of the dynamic obstacles. The navigation of a mobile robot in a real-world dynamic environment is a complex and daunting task. Consider the case of a mobile robot working in an office environment: it has to avoid static obstacles such as desks, chairs and cupboards, and it also has to consider dynamic obstacles such as humans. In the presence of dynamic obstacles, the robot has to predict the motion of the obstacles; humans inherently apply an intuitive motion prediction scheme when planning a path through a crowded environment. A technique has been developed which predicts the possible future positions of obstacles. This technique, coupled with the generalized Voronoi diagram, enables the robot to navigate safely in a given environment.
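The attractive/repulsive mechanics behind such potential-field planners can be sketched in a few lines: a quadratic well pulls the robot toward the goal, and a barrier term pushes it away from obstacles within an influence radius. The gains, ranges, and scenario below are invented, and the thesis's extrapolation of the field to moving obstacles and its probabilistic collision machinery are not reproduced.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=30.0, d0=2.0, step=0.05):
    """One normalized gradient-descent step on an artificial potential field.
    Attractive force: -grad of 0.5*k_att*|pos-goal|^2.
    Repulsive force:  -grad of 0.5*k_rep*(1/d - 1/d0)^2, active only for d < d0."""
    force = k_att * (goal - pos)
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < d0:
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * (pos - obs)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 5.0])
obstacles = [np.array([2.0, 1.0])]      # one static obstacle off the direct path
for _ in range(600):
    pos = apf_step(pos, goal, obstacles)
```

The well-known weakness of the plain field — local minima when an obstacle sits directly between robot and goal — is one reason the thesis combines it with a roadmap (generalized Voronoi diagram) planner.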
