31

Ethnic differences in achievement growth: Longitudinal data analysis of math achievement in a hierarchical linear modeling framework

Xiang, Yun January 2009 (has links)
Thesis advisor: Henry Braun / Given the call for greater understanding of racial inequality in student achievement in K-12 education, this study contributes a comprehensive, quantitative, longitudinal examination of the achievement gap phenomenon, with particular attention to the organizational characteristics of schools and school districts. Employing data from a large number of districts in a single state, it examines trends in achievement and in achievement growth after the passage of NCLB, focusing on mathematics performance from grade 6 to grade 8. Both a traditional descriptive approach and one employing Hierarchical Linear Models (HLM) were applied and compared. The purpose was not to determine which methodology is superior but to provide complementary perspectives. The comparison between the two approaches revealed similar trends in achievement gaps, but the HLM approach offered a more nuanced description; nonetheless, the results suggest that it is useful to employ both approaches. As to the main question regarding ethnicity, it appears that even though student ethnicity is confounded with other indicators, such as initial score and socio-economic status, it remains an important predictor of both achievement gaps and achievement growth gaps. Moreover, demographic profiles at the school and district levels were also associated with these gaps. / Thesis (PhD) — Boston College, 2009. / Submitted to: Boston College. Lynch School of Education. / Discipline: Educational Research, Measurement, and Evaluation.
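For readers less familiar with the framework, a minimal two-level growth model of the kind described above might be written as follows. This is an illustrative sketch, not the study's own specification: the student-level predictors shown (an ethnicity indicator and SES) are assumptions, and the study itself also conditions on school- and district-level characteristics.

```latex
\begin{aligned}
\text{Level 1 (grades within student } i\text{):}\quad
  & Y_{ti} = \pi_{0i} + \pi_{1i}\,(\mathrm{grade}_{ti} - 6) + e_{ti},
    \qquad e_{ti} \sim N(0,\sigma^2) \\
\text{Level 2 (between students):}\quad
  & \pi_{0i} = \beta_{00} + \beta_{01}\,\mathrm{Ethnicity}_i + \beta_{02}\,\mathrm{SES}_i + r_{0i} \\
  & \pi_{1i} = \beta_{10} + \beta_{11}\,\mathrm{Ethnicity}_i + \beta_{12}\,\mathrm{SES}_i + r_{1i},
    \qquad (r_{0i}, r_{1i}) \sim N(\mathbf{0}, \mathbf{T})
\end{aligned}
```

In this reading, \(\beta_{01}\) is the adjusted gap in grade-6 status and \(\beta_{11}\) the gap in growth rate — the quantity the abstract calls the achievement growth gap.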
32

A Bayesian Test of Independence for Two-way Contingency Tables Under Cluster Sampling

Bhatta, Dilli 19 April 2013 (has links)
We consider a Bayesian approach to the study of independence in a two-way contingency table obtained from a two-stage cluster sampling design. We study the association between two categorical variables when (a) there are no covariates and (b) there are covariates at both unit and cluster levels. Our main idea for the Bayesian test of independence is to convert the cluster sample into an equivalent simple random sample, which provides a surrogate of the original sample. This surrogate sample is then used to compute the Bayes factor and make an inference about independence. For the test of independence without covariates, the Rao-Scott corrections to the standard chi-squared (or likelihood ratio) statistic were developed. They are "large sample" methods and provide appropriate inference when there are large cell counts, but they are less successful when there are small cell counts. We have developed methodology to overcome the limitations of the Rao-Scott correction. We use a hierarchical Bayesian model to convert the observed cluster samples to simple random samples. This provides the surrogate samples, which can be used to derive the distribution of the Bayes factor and make an inference about independence. We use a sampling-based method to fit the model. For the test of independence with covariates, we first convert the cluster sample with covariates to a cluster sample without covariates. We use a multinomial logistic regression model with random effects to accommodate the cluster effects. Our idea is to fit the cluster samples to the random-effect models and predict new samples adjusted for the covariates. This provides the cluster sample without covariates. We then use a hierarchical Bayesian model to convert this cluster sample to a simple random sample, which allows us to calculate the Bayes factor and make an inference about independence. We use Markov chain Monte Carlo methods to fit our models. We apply our first method to the Third International Mathematics and Science Study (1995) for third-grade U.S. students, in which we study the association between the communities the students come from and their mathematics test scores, and between the communities and their science test scores. We also provide a simulation study which establishes our methodology as a viable alternative to the Rao-Scott approximations for relatively small two-stage cluster samples. We apply our second method to data from the Trends in International Mathematics and Science Study (2007) for fourth-grade U.S. students to assess the association between mathematics and science scores represented as categorical variables, and we again provide a simulation study. The results show that when there is a strong association between the two categorical variables, there is no difference in the significance of the test between the models with and without covariates. However, in the simulation studies there is a noticeable difference in the significance of the test between the two models in borderline cases (i.e., situations where there is marginal significance).
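As a rough illustration of the final inferential step only — not of the thesis's cluster-sample machinery — the Bayes factor for association versus independence in a two-way table treated as a simple random sample, under symmetric Dirichlet priors, is a ratio of Dirichlet-multinomial marginal likelihoods. The surrogate sample described above would play the role of the input table; the prior parameter `a` below is an assumption.

```python
import numpy as np
from scipy.special import gammaln


def log_dirichlet_norm(alpha):
    """log of the multivariate Beta function B(alpha) = prod Gamma(a_k) / Gamma(sum a_k)."""
    return gammaln(alpha).sum() - gammaln(alpha.sum())


def log_bayes_factor_independence(table, a=1.0):
    """log BF_10 (association vs. independence) for a two-way table treated as a
    simple random sample, with symmetric Dirichlet(a) priors on cell and margin
    probabilities. The multinomial coefficient cancels between the two models."""
    n = np.asarray(table, dtype=float)
    alpha_cells = np.full(n.shape, a)
    alpha_rows = np.full(n.shape[0], a)
    alpha_cols = np.full(n.shape[1], a)
    log_m1 = log_dirichlet_norm(alpha_cells + n) - log_dirichlet_norm(alpha_cells)
    log_m0 = (log_dirichlet_norm(alpha_rows + n.sum(axis=1)) - log_dirichlet_norm(alpha_rows)
              + log_dirichlet_norm(alpha_cols + n.sum(axis=0)) - log_dirichlet_norm(alpha_cols))
    return log_m1 - log_m0


# Example: a 2x2 table with a visible association gives a clearly positive log BF_10.
print(log_bayes_factor_independence([[30, 10], [12, 28]]))
```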
33

Bayesian hierarchical models for linear networks

Al-Kaabawi, Zainab A. A. January 2018 (has links)
A motorway network is treated as a linear network. The purpose of this study is to highlight dangerous motorways by estimating the intensity of accidents and studying its pattern across the UK motorway network. Two mechanisms are adopted to achieve this aim. In the first, the motorway-specific intensity is estimated by modelling the point pattern of the accident data using a homogeneous Poisson process. The homogeneous Poisson process is used to model all intensities, but heterogeneity across motorways is incorporated using two-level hierarchical models. The data structure is multilevel, since each motorway consists of junctions that are joined by grouped segments. In the second mechanism, the segment-specific intensity is estimated by modelling the point pattern of the accident data: the homogeneous Poisson process is used to model accident data within segments, but heterogeneity across segments is incorporated using three-level hierarchical models. A Bayesian method based on Markov chain Monte Carlo simulation algorithms is used to estimate the unknown parameters in the models, and sensitivity to the choice of prior is assessed. The performance of the proposed models is checked through a simulation study and an application to traffic accidents on the UK motorway network in 2016. The performance of the three-level frequentist model was poor. The deviance information criterion (DIC) and the widely applicable information criterion (WAIC) are employed to choose between the two-level and three-level Bayesian hierarchical models; the results showed that the three-level Bayesian hierarchical model provided the best fit.
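One plausible rendering of the segment-level model described above — illustrative notation only, not taken from the thesis — is a Poisson count model with nested random effects:

```latex
\begin{aligned}
N_s \mid \lambda_s &\sim \mathrm{Poisson}(\lambda_s\,\ell_s), \\
\log \lambda_s &= \mu + u_{m(s)} + v_{g(s)}, \\
u_m \sim N(0,\sigma_u^2), &\qquad v_g \sim N(0,\sigma_v^2),
\end{aligned}
```

where \(N_s\) is the accident count on segment \(s\), \(\ell_s\) its length, \(m(s)\) the motorway containing it, and \(g(s)\) its group of segments. Dropping the segment-group effect \(v_{g(s)}\) gives the corresponding two-level model, and DIC/WAIC are used to choose between the two.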
34

Common Strategies for Regulating Emotions across the Hierarchical Taxonomy of Psychopathology (HiTOP) Model

Bennett, Charles B 08 1900 (has links)
The Hierarchical Taxonomy of Psychopathology (HiTOP) is a novel classification system that adopts both a dimensional and a hierarchical approach to psychopathology in order to address the shortcomings of traditional categorical classification. However, the HiTOP framework is descriptive in nature and requires additional research to consider potential mechanisms for the onset and maintenance of psychopathology, such as cognitive-behavioral emotion regulation strategies. To redress this gap, a sample of 341 adults who endorsed ongoing mental health concerns completed self-report measures of emotion regulation strategies and psychopathology. The data revealed a three-spectra HiTOP model consisting of internalizing, thought disorder, and antagonistic externalizing. Results showed that psychopathology was most strongly associated with avoidance, catastrophizing, expressive suppression, and self-blame. In contrast, adaptive strategies were generally unrelated to the HiTOP spectra. This pattern was strongest for internalizing, distress, and detachment. Fewer, yet noteworthy, unique relationships between the strategies and specific spectra/subfactors were also found. These findings suggest that psychopathology may be best conceptualized as an overutilization of maladaptive emotion regulation strategies. Furthermore, the results indicate that there is added benefit to considering these strategies within a hierarchical approach to psychopathology. These associations alert clinicians to potential treatment targets and contribute to an ongoing literature that seeks to identify underlying mechanisms of the structure of psychopathology.
35

Design, development and evaluation of an efficient hierarchical interconnection network.

Campbell, Stuart M. January 1999 (has links)
Parallel computing has long been an area of research interest because exploiting parallelism in difficult problems has promised to deliver orders of magnitude speedups. Processors are now both powerful and cheap, so that systems incorporating tens, hundreds or even thousands of powerful processors need not be prohibitively expensive. The weak link in exploiting parallelism is the means of communication between the processors. Shared memory systems are fundamentally limited in the number of processors they can utilise. To achieve high levels of parallelism it is still necessary to use distributed memory and some form of interconnection network. But interconnection networks can be costly, slow, difficult to build and expand, vulnerable to faults and limited in the range of problems they can be used to solve effectively. As a result there has been extensive research into developing interconnection networks which overcome some or all of these difficulties. In this thesis it is argued that a new interconnection network, Hierarchical Cliques (HiC), and a derivative, FatHiC, possess many desirable properties and are worthy of consideration for use in building parallel computers. A fundamental element of an interconnection network is its topology. After defining the topology of HiC, expressions are derived for the various parameters which define its underlying limits of performance and fault tolerance. A second element of an interconnection network is an addressing and routing scheme. The addressing scheme and routing algorithms of HiC are described. The flexibility of HiC is demonstrated by developing embeddings of popular, regular interconnection networks. Some embeddings into HiC suffer from high congestion; however, the FatHiC network is shown to have low congestion for those embeddings. The performance of some important, regular, data-parallel problems on HiC and FatHiC is determined by analysis and simulation, using the 2D-mesh as a means of comparison. But performance alone does not tell the whole story. Any parallel computer system must be cost effective. In order to analyse the cost effectiveness of HiC, an existing measure was expanded to provide a more realistic model and a more accurate means of comparison. One aim of this thesis is to demonstrate the suitability of HiC for parallel computing systems which execute irregular algorithms requiring dynamic load balancing. A new dynamic load balancing algorithm is proposed which takes advantage of the hierarchical structure of HiC to reduce the communication overheads incurred when distributing work. To demonstrate performance on an irregular problem, a novel parallel algorithm was developed to detect subgraph isomorphism from many model graphs to a single input graph. The use of the new load balancing algorithm in conjunction with the subgraph isomorphism algorithm is discussed.
36

Hierarchical modeling of multi-scale dynamical systems using adaptive radial basis function neural networks: application to synthetic jet actuator wing

Lee, Hee Eun 30 September 2004 (has links)
To obtain a suitable mathematical model of the input-output behavior of highly nonlinear, multi-scale, nonparametric phenomena, we introduce an adaptive radial basis function approximation approach. We use this approach to estimate the discrepancy between traditional models and the multi-scale physics of systems involving distributed sensing and technology. Radial Basis Function Networks offer a possible approach to nonparametric multi-scale modeling of dynamical systems such as an adaptive wing with a Synthetic Jet Actuator (SJA). We use the Regularized Orthogonal Least Squares method (Mark, 1996) and the RAN-EKF (Resource Allocating Network-Extended Kalman Filter) as reference approaches. The first part of the algorithm determines the locations of the centers one by one until the error goal is met and regularization is achieved. The second part comprises an algorithm for adapting all the parameters of the Radial Basis Function Network: centers, variances (shapes), and weights. To demonstrate the effectiveness of these algorithms, SJA wind tunnel data are modeled using this approach. Good performance is obtained compared with conventional approaches such as the multi-layer neural network and the least-squares algorithm. Following this work, we establish Model Reference Adaptive Control (MRAC) formulations using an off-line Radial Basis Function Network (RBFN) and introduce an adaptive control law based on the RBFN. A theory that combines the RBFN with adaptive control is demonstrated through a simple numerical simulation of the SJA wing. It is expected that these studies will provide a basis for achieving an intelligent control structure for future active-wing aircraft.
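To make the basic building block concrete, here is a minimal sketch of radial basis function approximation with fixed Gaussian centers and a shared width, fitted by ordinary least squares. The thesis's ROLS and RAN-EKF procedures instead select centers and adapt all parameters; the data below are synthetic stand-ins for the SJA wind-tunnel measurements.

```python
import numpy as np


def rbf_design(X, centers, widths):
    """Gaussian radial basis design matrix: Phi[n, k] = exp(-||x_n - c_k||^2 / (2 w_k^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))


# Hypothetical 1-D input/output data standing in for wind-tunnel measurements.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)

centers = np.linspace(-1, 1, 15)[:, None]    # fixed grid of centers (illustrative)
widths = np.full(15, 0.2)                    # shared shape parameter (illustrative)
Phi = rbf_design(X, centers, widths)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # least-squares weights
y_hat = Phi @ w
print("RMS error:", np.sqrt(np.mean((y - y_hat) ** 2)))
```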
37

A 3-d capacitance extraction algorithm based on kernel independent hierarchical method and geometric moments

Zhuang, Wei 17 September 2007 (has links)
A three-dimensional (3-D) capacitance extraction algorithm based on a kernel-independent hierarchical method and geometric moments is described. Several techniques are incorporated that lead to better overall performance for arbitrary interconnect systems. First, the new algorithm hierarchically partitions the bounding box of all interconnect panels to build the partition tree. Then it uses simple shapes to match the low-order moments of the geometry of each box in the partition tree. Finally, with the help of a fast matrix-vector product, GMRES is used to solve the linear system. Experimental results show that our algorithm greatly reduces the size of the linear system while maintaining satisfactory accuracy. Compared with FastCap, the running time of the new algorithm can be reduced by more than an order of magnitude and the memory usage by more than a factor of thirty.
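The iterative-solver step can be illustrated with a toy collocation problem: a single square plate at unit potential, a crudely regularized Coulomb kernel, and GMRES driven through a matrix-free operator. This is only a sketch under those assumptions; the thesis replaces the dense product with its hierarchical, kernel-independent fast matrix-vector product, and a real BEM code would integrate the self-term properly.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy collocation system: one square plate at unit potential, m x m flat panels
# (arbitrary units). A stand-in for the interconnect structures handled in the thesis.
m = 16
h = 1.0 / m
xs = (np.arange(m) + 0.5) * h
cx, cy = np.meshgrid(xs, xs, indexing="ij")
centers = np.column_stack([cx.ravel(), cy.ravel()])
N = centers.shape[0]

# Coulomb-like panel interaction matrix with a crude regularized self-term.
d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
P = (h * h) / (4.0 * np.pi * np.maximum(d, h / 2.0))

# The dense product P @ q stands in for the hierarchical fast matvec
# behind the same matrix-free LinearOperator interface.
A = LinearOperator((N, N), matvec=lambda q: P @ q)
v = np.ones(N)                    # unit potential prescribed on every panel
q, info = gmres(A, v)
print("converged:", info == 0, " rough total charge:", q.sum())
```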
38

Reification of network resource control in multi-agent systems

Liu, Chen 31 August 2006
In multi-agent systems [1], coordinated resource sharing is indispensable for a set of autonomous agents running in the same execution space to accomplish their computational objectives. This research presents a new approach to network resource control in multi-agent systems based on the CyberOrgs [2] model. The approach aims to offer a mechanism that reifies network resource control in multi-agent systems and to realize this mechanism in a prototype system.

In order to achieve these objectives, a uniform abstraction, vLink (Virtual Link), is introduced to represent network resources, and a coherent mechanism of vLink creation, allocation and consumption is developed on top of this abstraction. The mechanism is enforced in the network by applying a fine-grained, flow-based scheduling scheme. In addition, the concerns of computations are separated from those of the resources required to complete them, which simplifies the engineering of network resource control: application programmers can focus on application development while separately declaring resource requests and defining resource control policies for their applications. Furthermore, network resources are bound to computations and controlled in a hierarchy that coordinates network resource usage. A computation and its sub-computations are not allowed to consume resources beyond their resource boundary; however, resources can be traded between different boundaries.

This thesis also describes the design and implementation of a prototype system. The prototype is a middleware architecture that can be used to build systems supporting network resource control. The architecture has a layered structure and aims to achieve three goals: (1) providing an interface for programmers to express resource requests for applications and define their resource control policies; (2) specializing the CyberOrgs model to control network resources; and (3) providing carefully designed mechanisms for routing, link sharing and packet scheduling to enforce the required resource allocation in the network.
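The hierarchical-boundary idea can be sketched as a small data structure: sub-computations carve their allotment out of the parent's unused capacity, and boundaries can trade spare capacity. The names and API below are hypothetical illustrations in the spirit of the vLink abstraction, not the thesis's actual interface.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ResourceBoundary:
    """Illustrative hierarchical bandwidth boundary (hypothetical, not the thesis's API)."""
    capacity: float                                   # bandwidth allotted to this computation, e.g. kb/s
    parent: Optional["ResourceBoundary"] = None
    children: List["ResourceBoundary"] = field(default_factory=list)
    used: float = 0.0                                 # capacity already handed to sub-computations

    def spawn(self, capacity: float) -> "ResourceBoundary":
        """Create a sub-computation's boundary out of this boundary's unused capacity."""
        if capacity > self.capacity - self.used:
            raise ValueError("sub-computation cannot exceed its parent's boundary")
        child = ResourceBoundary(capacity, parent=self)
        self.children.append(child)
        self.used += capacity
        return child

    def trade(self, other: "ResourceBoundary", amount: float) -> None:
        """Move unused capacity from this boundary to another one."""
        if amount > self.capacity - self.used:
            raise ValueError("cannot trade capacity that is already in use")
        self.capacity -= amount
        other.capacity += amount


root = ResourceBoundary(capacity=1000.0)   # e.g. 1000 kb/s for the whole agent system
a = root.spawn(400.0)
b = root.spawn(300.0)
a.trade(b, 100.0)                          # boundaries may exchange spare capacity
print(a.capacity, b.capacity)              # 300.0 400.0
```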
39

Using Improved AHP Method in Maintenance Approach Selection

Rashidpour, Koorosh January 2013 (has links)
This research introduces a model for choosing the best maintenance strategy based on the conditions of the company in question. It is divided into three main parts. The first part is theoretical and deals with maintenance approaches, concepts, costs, software, and management. The second part explains the structure of maintenance strategy selection using an improved Analytic Hierarchy Process (AHP) method and describes the definitions and equations behind this method. In the third part, a hypothetical example demonstrates the accuracy of the method and the way it works.
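For context, the core AHP calculation on a single criterion looks like the following sketch: derive priorities from the principal eigenvector of a pairwise comparison matrix and check the consistency ratio. The three candidate strategies and the comparison values are illustrative assumptions, not taken from the thesis, and the thesis's improved AHP variant is not reproduced here.

```python
import numpy as np

# Hypothetical pairwise comparison of three maintenance strategies
# (corrective, preventive, condition-based) on one criterion, Saaty 1-9 scale.
A = np.array([[1.0, 1/3, 1/5],
              [3.0, 1.0, 1/2],
              [5.0, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                        # priority vector

n = A.shape[0]
lam_max = eigvals[k].real
CI = (lam_max - n) / (n - 1)        # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n] # random index for n = 3..5
CR = CI / RI                        # consistency ratio; < 0.10 is conventionally acceptable

print("priorities:", np.round(w, 3), " CR:", round(CR, 3))
```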
40

Capacity Scaling and Optimal Operation of Wireless Networks

Ghaderi Dehkordi, Javad 15 July 2008 (has links)
How much information can be transferred over a wireless network, and what is the optimal strategy for operating such a network? This thesis tries to answer some of these questions from an information-theoretic standpoint. A model of the wireless network is formulated to capture the main features of the wireless medium as well as the topology of the network. The performance metrics are throughput and transport capacity. The throughput is the sum of the reliable communication rates over all source-destination pairs in the network. The transport capacity is a sum of rates in which each rate is weighted by the distance over which it is transported. Based on the network model, we study the scaling laws of these performance measures as the number of users in the network grows. First, we analyze the performance of multihop wireless networks under different criteria for successful reception of packets at the receiver. Then, we consider the problem of information transfer without arbitrary assumptions on the operation of the network. We observe that there is a dichotomy between the cases of relatively high signal attenuation and low attenuation. Moreover, a fundamental relationship between the performance metrics and the total transmitted power of the users is discovered. As a result, the optimality of multihop is demonstrated for some scenarios in the high-attenuation regime, and strategies better than multihop are proposed for operation in the low-attenuation regime. We then study the performance of a special class of networks, random networks, in which the traffic is uniformly distributed across the network. For this special class, upper bounds on the throughput are presented for both the low- and high-attenuation cases. To achieve these upper bounds, a hierarchical cooperation scheme is analyzed and optimized by choosing the number of hierarchical stages and the corresponding cluster sizes that maximize the total throughput. In addition, to apply the hierarchical cooperation scheme to random networks, a clustering algorithm is developed which divides the whole network into quadrilateral clusters, each with exactly the number of nodes required.
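For orientation, the classical benchmarks in this literature — stated here for context, not as this thesis's own results — are the multihop aggregate-throughput scaling for random networks and the near-linear scaling achievable with hierarchical cooperation when attenuation is low:

```latex
T_{\mathrm{multihop}}(n) = \Theta\!\left(\sqrt{\tfrac{n}{\log n}}\right)
\qquad \text{vs.} \qquad
T_{\mathrm{hier.\,coop.}}(n) = \Omega\!\left(n^{1-\varepsilon}\right)
\quad \text{for any } \varepsilon > 0 .
```

The first applies to random networks operated by multihop relaying, which is favoured when signals attenuate quickly; the second is the kind of gain delivered by the hierarchical cooperation scheme optimized in this thesis in the low-attenuation regime.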
