31 |
Ethnic differences in achievement growth: Longitudinal data analysis of math achievement in a hierarchical linear modeling framework. Xiang, Yun. January 2009 (has links)
Thesis advisor: Henry Braun / Given the call for greater understanding of racial inequality in student achievement in K-12 education, this study contributes a comprehensive, quantitative, longitudinal examination of the achievement gap phenomenon, with particular attention to the organizational characteristics of schools and school districts. Employing data from a large number of districts in a single state, it examines trends in achievement and in achievement growth after the passage of NCLB, focusing on mathematics performance from grade 6 to grade 8. Both a traditional descriptive approach and one employing Hierarchical Linear Models (HLM) were applied and compared. The purpose was not to determine which methodology is superior but to provide complementary perspectives. The two approaches revealed similar trends in achievement gaps, but the HLM approach offered a more nuanced description; the results suggest that it is useful to employ both. As to the main question regarding ethnicity, it appears that even though student ethnicity is confounded with other indicators, such as initial score and socio-economic status, it remains an important predictor of both achievement gaps and achievement-growth gaps. Moreover, demographic profiles at the school and district levels were also associated with these gaps. / Thesis (PhD) — Boston College, 2009. / Submitted to: Boston College. Lynch School of Education. / Discipline: Educational Research, Measurement, and Evaluation.
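The contrast between baseline gaps and growth gaps can be sketched with synthetic data. Everything below (group names, score scale, growth rates, noise level) is invented for illustration and does not come from the study; per-student least-squares slopes stand in for the simplest descriptive treatment of growth:

```python
import numpy as np

rng = np.random.default_rng(0)
grades = np.array([6, 7, 8])

# Hypothetical group-level parameters (purely illustrative):
# (mean score at grade 6, mean yearly growth)
params = {"group_a": (250.0, 12.0), "group_b": (235.0, 9.0)}

slopes, intercepts = {}, {}
for group, (mu0, growth) in params.items():
    scores = (mu0
              + growth * (grades - 6)               # linear growth from grade 6
              + rng.normal(0, 8, size=(200, 3)))    # student-level noise
    # Descriptive approach: fit an OLS line per student, then average.
    X = np.vstack([np.ones(3), grades - 6]).T
    beta = np.linalg.lstsq(X, scores.T, rcond=None)[0]  # 2 x 200
    intercepts[group] = beta[0].mean()
    slopes[group] = beta[1].mean()

baseline_gap = intercepts["group_a"] - intercepts["group_b"]
growth_gap = slopes["group_a"] - slopes["group_b"]
```

An HLM treatment would instead model the student slopes and intercepts as random effects nested in schools and districts, which is what makes the more nuanced description possible.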
|
32 |
A Bayesian Test of Independence for Two-way Contingency Tables Under Cluster Sampling. Bhatta, Dilli. 19 April 2013 (has links)
We consider a Bayesian approach to the study of independence in a two-way contingency table obtained from a two-stage cluster sampling design. We study the association between two categorical variables when (a) there are no covariates and (b) there are covariates at both unit and cluster levels. Our main idea for the Bayesian test of independence is to convert the cluster sample into an equivalent simple random sample, which provides a surrogate of the original sample. This surrogate sample is then used to compute the Bayes factor for an inference about independence. For the test of independence without covariates, Rao-Scott corrections to the standard chi-squared (or likelihood ratio) statistic were developed previously. They are ``large sample'' methods and provide appropriate inference when cell counts are large, but are less successful when cell counts are small. We have developed a methodology that overcomes this limitation of the Rao-Scott corrections: a hierarchical Bayesian model converts the observed cluster samples into simple random samples, and these surrogate samples are used to derive the distribution of the Bayes factor for the inference about independence. We use a sampling-based method to fit the model. For the test of independence with covariates, we first convert the cluster sample with covariates into a cluster sample without covariates, using a multinomial logistic regression model with random effects to accommodate the cluster effects. Our idea is to fit the cluster samples to the random-effects models and predict new samples after adjusting for the covariates; this yields the cluster sample without covariates. We then use a hierarchical Bayesian model to convert this cluster sample into a simple random sample, which allows us to calculate the Bayes factor for an inference about independence. We use Markov chain Monte Carlo methods to fit our models.
We apply our first method to the Third International Mathematics and Science Study (1995) for third-grade U.S. students, studying the association between mathematics test scores and the communities the students come from, and between science test scores and those communities. We also provide a simulation study which establishes our methodology as a viable alternative to the Rao-Scott approximations for relatively small two-stage cluster samples. We apply our second method to data from the Trends in International Mathematics and Science Study (2007) for fourth-grade U.S. students to assess the association between mathematics and science scores represented as categorical variables, and again provide a simulation study. The results show that when there is strong association between the two categorical variables, there is no difference in the significance of the test between the model (a) with covariates and (b) without covariates. However, in the simulation studies there is a noticeable difference in significance between the two models in borderline cases (i.e., situations of marginal significance).
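Once a surrogate simple random sample is in hand, the core calculation is a Bayes factor comparing a saturated (dependent) model against independence. A minimal sketch under symmetric Dirichlet(1) priors, ignoring the cluster-to-SRS conversion step entirely:

```python
from math import lgamma

def log_dirichlet_multinomial(counts, alpha=1.0):
    """Log marginal likelihood of multinomial counts under a symmetric
    Dirichlet(alpha) prior, omitting the multinomial coefficient
    (it cancels in the Bayes factor below)."""
    k, n = len(counts), sum(counts)
    return (sum(lgamma(alpha + c) for c in counts) - lgamma(k * alpha + n)
            - (k * lgamma(alpha) - lgamma(k * alpha)))

def log_bf_dependence(table):
    """Log Bayes factor of a saturated model against independence for a
    two-way table treated as a simple random sample. Under independence
    the cell probabilities factor into row and column probabilities, so
    the marginal likelihood factors into row-margin and column-margin
    Dirichlet-multinomial terms."""
    cells = [c for row in table for c in row]
    rows = [sum(row) for row in table]
    cols = [sum(col) for col in zip(*table)]
    return (log_dirichlet_multinomial(cells)
            - log_dirichlet_multinomial(rows)
            - log_dirichlet_multinomial(cols))
```

A strongly associated table such as `[[30, 5], [5, 30]]` gives a large positive log Bayes factor, while a balanced table gives a negative one; the thesis derives the distribution of this quantity across surrogate samples rather than a single value.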
|
33 |
Bayesian hierarchical models for linear networks. Al-Kaabawi, Zainab A. A. January 2018 (has links)
A motorway network is treated as a linear network. The purpose of this study is to highlight dangerous motorways by estimating the intensity of accidents and studying its pattern across the UK motorway network. Two mechanisms are adopted to achieve this aim. In the first, the motorway-specific intensity is estimated by modelling the point pattern of the accident data with a homogeneous Poisson process; heterogeneity across motorways is incorporated using two-level hierarchical models. The data structure is multilevel, since each motorway consists of junctions joined by grouped segments. In the second mechanism, the segment-specific intensity is estimated: the homogeneous Poisson process models accident data within segments, and heterogeneity across segments is incorporated using three-level hierarchical models. A Bayesian approach via Markov chain Monte Carlo simulation algorithms is used to estimate the unknown parameters, and sensitivity to the choice of prior is assessed. The performance of the proposed models is checked through a simulation study and an application to traffic accidents on the UK motorway network in 2016. The performance of the three-level frequentist model was poor. The deviance information criterion (DIC) and the widely applicable information criterion (WAIC) are employed to choose between the two-level and three-level Bayesian hierarchical models; by both criteria the best-fitting model was the three-level Bayesian hierarchical model.
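At a single level of the hierarchy, the homogeneous Poisson intensity has a simple conjugate sketch: with a Gamma prior on accidents per kilometre, the posterior given a count observed over a known length is again Gamma. The counts, lengths, and prior below are invented; the thesis fits multilevel versions by MCMC rather than this closed form:

```python
# Invented accident counts and motorway lengths (km).
data = {"M1": (310, 310.0), "M6": (420, 370.0), "M50": (3, 35.0)}

a0, b0 = 2.0, 2.0   # assumed Gamma(shape, rate) prior on accidents per km

posterior = {}
for road, (count, length) in data.items():
    # Poisson likelihood with exposure `length` is conjugate to Gamma:
    # posterior shape = a0 + count, posterior rate = b0 + length.
    shape, rate = a0 + count, b0 + length
    posterior[road] = shape / rate    # posterior mean intensity (per km)
```

Note how the short, low-count motorway ("M50") is shrunk toward the prior mean, while the long motorways are dominated by their data; the hierarchical models in the thesis make this shrinkage data-driven by learning the prior across motorways or segments.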
|
34 |
Design, development and evaluation of an efficient hierarchical interconnection network. Campbell, Stuart M. January 1999 (has links)
Parallel computing has long been an area of research interest because exploiting parallelism in difficult problems has promised to deliver orders-of-magnitude speedups. Processors are now both powerful and cheap, so that systems incorporating tens, hundreds or even thousands of processors need not be prohibitively expensive. The weak link in exploiting parallelism is the means of communication between the processors. Shared memory systems are fundamentally limited in the number of processors they can utilise. To achieve high levels of parallelism it is still necessary to use distributed memory and some form of interconnection network. But interconnection networks can be costly, slow, difficult to build and expand, vulnerable to faults and limited in the range of problems they can be used to solve effectively. As a result there has been extensive research into developing interconnection networks which overcome some or all of these difficulties. In this thesis it is argued that a new interconnection network, Hierarchical Cliques (HiC), and a derivative, FatHiC, possess many desirable properties and are worthy of consideration for use in building parallel computers. A fundamental element of an interconnection network is its topology. After defining the topology of HiC, expressions are derived for the various parameters which define its underlying limits of performance and fault tolerance. A second element of an interconnection network is an addressing and routing scheme; the addressing scheme and routing algorithms of HiC are described. The flexibility of HiC is demonstrated by developing embeddings of popular, regular interconnection networks. Some embeddings into HiC suffer from high congestion; however, the FatHiC network is shown to have low congestion for those embeddings. The performance of some important, regular, data-parallel problems on HiC and FatHiC is determined by analysis and simulation, using the 2D-mesh as a means of comparison.
But performance alone does not tell the whole story: any parallel computer system must be cost effective. In order to analyse the cost effectiveness of HiC, an existing measure was expanded to provide a more realistic model and a more accurate means of comparison. One aim of this thesis is to demonstrate the suitability of HiC for parallel computing systems which execute irregular algorithms requiring dynamic load balancing. A new dynamic load balancing algorithm is proposed which takes advantage of the hierarchical structure of the HiC to reduce the communication overheads incurred when distributing work. To demonstrate performance on an irregular problem, a novel parallel algorithm was developed to detect subgraph isomorphism from many model graphs to a single input graph. The use of the new load balancing algorithm in conjunction with the subgraph isomorphism algorithm is discussed.
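As a loose intuition for why hierarchical structure keeps routes short, consider a hypothetical hop-count rule for nodes addressed by digit strings, where nodes sharing all but the last digit form one clique and traffic between cliques climbs to the level of the first differing digit. This rule is invented purely for illustration; the thesis derives the actual topology parameters, addressing scheme, and routing algorithms for HiC and FatHiC:

```python
def hops(addr_a, addr_b):
    """Hypothetical hop-count estimate between two nodes in a
    hierarchy of cliques. Nodes differing only in the last digit sit
    in one clique (1 hop); otherwise the route climbs to the level of
    the most significant differing digit, crosses, and descends."""
    if addr_a == addr_b:
        return 0
    first_diff = next(i for i, (x, y) in enumerate(zip(addr_a, addr_b))
                      if x != y)
    depth = len(addr_a) - first_diff   # levels the route must climb
    return 2 * depth - 1               # up, across the top clique, down
```

Under a rule like this the worst-case path length grows with the number of hierarchy levels (logarithmically in network size for fixed clique size), rather than with the node count as in a flat mesh.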
|
35 |
Hierarchical modeling of multi-scale dynamical systems using adaptive radial basis function neural networks: application to synthetic jet actuator wing. Lee, Hee Eun. 30 September 2004 (has links)
To obtain a suitable mathematical model of the input-output behavior of highly nonlinear, multi-scale, nonparametric phenomena, we introduce an adaptive radial basis function approximation approach. We use this approach to estimate the discrepancy between traditional models and the multi-scale physics of systems involving distributed sensing technology. Radial basis function networks offer a promising approach to nonparametric multi-scale modeling of dynamical systems such as an adaptive wing with a Synthetic Jet Actuator (SJA). We use the Regularized Orthogonal Least Squares method (Mark, 1996) and the RAN-EKF (Resource Allocating Network-Extended Kalman Filter) as reference approaches. The first part of the algorithm determines the locations of centers one by one until the error goal is met and regularization is achieved. The second part adapts all the parameters of the radial basis function network: centers, variances (shapes), and weights. To demonstrate the effectiveness of these algorithms, SJA wind-tunnel data are modeled using this approach; good performance is obtained compared with conventional methods such as the multilayer neural network and the least squares algorithm. Following this work, we establish a Model Reference Adaptive Control (MRAC) formulation using an off-line radial basis function network (RBFN) and introduce an adaptive control law based on the RBFN. The combination of RBFN and adaptive control is demonstrated through a simple numerical simulation of the SJA wing. It is expected that these studies will provide a basis for achieving an intelligent control structure for future active-wing aircraft.
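The weight-fitting stage, once centers are fixed, reduces to linear least squares. A sketch with Gaussian bases on synthetic data (the thesis additionally selects centers by regularized orthogonal least squares and adapts centers, widths, and weights online, which this omits; the target function and width are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.05 * rng.normal(size=x.size)   # noisy stand-in target

centers = np.linspace(-3, 3, 12)   # fixed centers for this sketch
width = 0.8                        # shared Gaussian width (assumed)

# Design matrix of Gaussian basis responses, one column per center.
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Output-layer weights by linear least squares.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
```

Because the output layer is linear in the weights, this step is cheap and well-conditioned; the hard part, which the ROLS and RAN-EKF machinery addresses, is choosing how many centers to place and where.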
|
36 |
A 3-d capacitance extraction algorithm based on kernel independent hierarchical method and geometric moments. Zhuang, Wei. 17 September 2007 (has links)
A three-dimensional (3-D) capacitance extraction algorithm based on a kernel-independent hierarchical method and geometric moments is described. Several techniques are incorporated, leading to better overall performance for arbitrary interconnect systems. First, the algorithm hierarchically partitions the bounding box of all interconnect panels to build a partition tree. Then it uses simple shapes to match the low-order moments of the geometry of each box in the partition tree. Finally, with the help of a fast matrix-vector product, GMRES is used to solve the linear system. Experimental results show that the algorithm greatly reduces the size of the linear system while maintaining satisfactory accuracy. Compared with FastCap, the running time can be reduced by more than an order of magnitude and the memory usage by more than a factor of thirty.
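The moment-matching step can be sketched in its simplest form: the order-0 and order-1 geometric moments of the panels in a box determine the total area and centroid that any equivalent simple shape must reproduce. The panel areas and centroids below are invented, and real implementations match higher-order moments as well:

```python
import numpy as np

# Invented panel data: per-panel area and centroid (x, y, z).
areas = np.array([1.0, 2.0, 1.5])
centroids = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.5, 1.0, 0.0]])

# Order-0 moment: total area. Order-1 moments: area-weighted positions.
m0 = areas.sum()
m1 = (areas[:, None] * centroids).sum(axis=0)

# A single equivalent simple shape matching these low-order moments
# carries the total area at the combined centroid.
equiv_area = m0
equiv_centroid = m1 / m0
```

Replacing the detailed geometry in each tree box with such a moment-matched surrogate is what makes the far-field matrix-vector products cheap while keeping the charge distribution's leading behavior intact.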
|
37 |
Reification of network resource control in multi-agent systems. Liu, Chen. 31 August 2006
In multi-agent systems [1], coordinated resource sharing is indispensable for a set of autonomous agents running in the same execution space to accomplish their computational objectives. This research presents a new approach to network resource control in multi-agent systems, based on the CyberOrgs [2] model. The approach aims to offer a mechanism that reifies network resource control in multi-agent systems and to realize this mechanism in a prototype system.

In order to achieve these objectives, a uniform abstraction, vLink (Virtual Link), is introduced to represent network resource, and on top of this abstraction a coherent mechanism for vLink creation, allocation, and consumption is developed. The mechanism is enforced in the network by a fine-grained, flow-based scheduling scheme. In addition, concerns of computations are separated from concerns about the resources required to complete them, which simplifies the engineering of network resource control: application programmers can focus on application development while separately declaring resource requests and defining resource control policies. Furthermore, network resource is bound to computations and controlled in a hierarchy that coordinates network resource usage. A computation and its sub-computations may not consume resources beyond their resource boundary; resources can, however, be traded between boundaries.

This thesis also describes the design and implementation of a prototype system: a middleware architecture that can be used to build systems supporting network resource control.
The architecture has a layered structure and aims to achieve three goals: (1) providing an interface for programmers to express resource requests for applications and to define resource control policies; (2) specializing the CyberOrgs model to control network resource; and (3) providing carefully designed mechanisms for routing, link sharing, and packet scheduling to enforce the required resource allocation in the network.
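The resource-boundary idea above can be sketched as a toy quota object: consumption may not exceed the boundary, but unconsumed quota can be traded to another boundary. The class and method names are invented for illustration and are not the CyberOrgs or prototype API:

```python
class ResourceBoundary:
    """Toy sketch of a hierarchical network-resource boundary: a
    computation may not consume bandwidth beyond its boundary, but
    boundaries can trade quota between themselves."""

    def __init__(self, quota_kbps):
        self.quota = quota_kbps
        self.used = 0

    def consume(self, kbps):
        """Admit a consumption request only if it fits the boundary."""
        if self.used + kbps > self.quota:
            return False
        self.used += kbps
        return True

    def trade_to(self, other, kbps):
        """Move unconsumed quota from this boundary to another."""
        if self.quota - self.used < kbps:
            return False
        self.quota -= kbps
        other.quota += kbps
        return True
```

In the real system the boundary check is enforced in the network by the flow-based scheduler rather than by a cooperative object like this, but the admission and trading semantics are the same in spirit.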
|
38 |
Using Improved AHP Method in Maintenance Approach Selection. Rashidpour, Koorosh. January 2013 (has links)
This research introduces a model for choosing the best maintenance strategy based on the circumstances of the company in question. It is divided into three main parts. The first part is theoretical and deals with maintenance approaches, concepts, costs, software, and management. The second part explains the structure of maintenance strategy selection using an improved Analytic Hierarchy Process (AHP) method and describes the definitions and equations involved. In the third part, a hypothetical example demonstrates the accuracy of the method and how it works.
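The standard AHP machinery such a model builds on can be sketched directly: priorities are the principal eigenvector of a pairwise-comparison matrix, and the consistency ratio checks whether the judgments are usable. The comparison matrix below is invented (three hypothetical maintenance strategies), and the thesis's improvements to AHP are not reproduced here:

```python
import numpy as np

# Invented pairwise-comparison matrix over three maintenance
# strategies; A[i, j] is how strongly strategy i is preferred to j.
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
priorities = w / w.sum()                    # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index
cr = ci / ri                                # consistency ratio (< 0.1 acceptable)
```

For this matrix the first strategy dominates with roughly 64% of the weight and the consistency ratio is well under the conventional 0.1 threshold, so the judgments would be accepted.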
|
39 |
Capacity Scaling and Optimal Operation of Wireless Networks. Ghaderi Dehkordi, Javad. 15 July 2008 (has links)
How much information can be transferred over a wireless network, and what is the optimal strategy for operating such a network? This thesis addresses these questions from an information-theoretic perspective. A model of a wireless network is formulated to capture the main features of the wireless medium as well as the topology of the network. The performance metrics are throughput and transport capacity. Throughput is the sum of the reliable communication rates over all source-destination pairs in the network. Transport capacity is a sum of rates in which each rate is weighted by the distance over which it is transported. Based on the network model, we study scaling laws for these performance measures as the number of users in the network grows.

First, we analyze the performance of multihop wireless networks under different criteria for successful reception of packets at the receiver. Then, we consider the problem of information transfer without arbitrary assumptions on the operation of the network. We observe a dichotomy between the cases of relatively high signal attenuation and low attenuation, and we discover a fundamental relationship between the performance metrics and the total transmitted power of the users. As a result, the optimality of multihop is demonstrated for some scenarios in the high-attenuation regime, and strategies better than multihop are proposed for operation in the low-attenuation regime. We then study the performance of a special class of networks, random networks, in which the traffic is uniformly distributed across the network. For this class, upper bounds on the throughput are presented for both the low- and high-attenuation cases. To achieve these upper bounds, a hierarchical cooperation scheme is analyzed and optimized by choosing the number of hierarchical stages and the corresponding cluster sizes that maximize the total throughput. In addition, to apply the hierarchical cooperation scheme to random networks, a clustering algorithm is developed which divides the whole network into quadrilateral clusters, each with exactly the number of nodes required.
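The two performance metrics have direct definitions that can be computed for any set of flows: throughput sums the rates, and transport capacity weights each rate by the distance it covers (units of bit-metres per second). A sketch with invented rates and distances:

```python
import numpy as np

# Invented source-destination flows: (reliable rate in bit/s, distance in m).
flows = np.array([[2.0, 100.0],
                  [1.0, 400.0],
                  [0.5,  50.0]])

throughput = flows[:, 0].sum()                          # sum of rates
transport_capacity = (flows[:, 0] * flows[:, 1]).sum()  # rate-distance sum
```

The scaling-law results concern how the best achievable values of these two quantities grow with the number of users, not the bookkeeping itself, which is all this sketch shows.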
|
40 |
A Latent Health Factor Model for Estimating Estuarine Ecosystem Health. Wu, Margaret. 05 1900 (has links)
Assessment of the “health” of an ecosystem is often of great interest to those involved in ecosystem monitoring and conservation. Traditionally, scientists have quantified the health of an ecosystem using multimetric indices that are semi-qualitative. Recently, a statistics-based index, the Latent Health Factor Index (LHFI), was devised to address many inadequacies of the conventional indices. Relying on standard modelling procedures accords the LHFI many advantages over conventional indices: it is less arbitrary, and it allows for straightforward model inference and for formal statistical prediction of health at a new site using only supplementary environmental covariates. In contrast, conventional indices offer no formal statistical prediction, so proper estimation of health at a new site requires benthic data, which are expensive and time-consuming to gather. As the LHFI modelling methodology is relatively new, it has so far been demonstrated (and validated) only on freshwater ecosystems. The goal of this thesis is to apply the LHFI methodology to estuarine ecosystems, particularly the previously unassessed system in Richibucto, New Brunswick. Specifically, the aims are threefold: first, to investigate whether the LHFI is applicable to estuarine systems at all, since estuarine and freshwater metrics, or indicators of health, are quite different; second, to determine the appropriate form of the LHFI model if the technique is applicable; and third, to assess the health of the Richibucto system. The second objective includes determining which covariates may have a significant impact on estuarine health. As scientists have previously used the AZTI Marine Biotic Index (AMBI) and the Infaunal Trophic Index (ITI) as measurements of estuarine ecosystem health, this thesis investigates LHFI models using metrics from these two indices simultaneously.
Two sets of models were considered in a Bayesian framework and implemented using Markov chain Monte Carlo techniques: the first using only metrics from AMBI, and the second using metrics from both AMBI and ITI. Both sets of LHFI models were successful, in that they were able to distinguish between health levels at different sites.
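A crude non-Bayesian stand-in for the latent factor idea is the first singular vector of the standardized metric matrix; the thesis instead fits a full Bayesian LHFI model by MCMC, with environmental covariates predicting health. All data below are simulated, and the loadings and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites, n_metrics = 200, 5

# Simulated truth: each site has a latent health score; each benthic
# metric loads on it with noise.
health = rng.normal(size=n_sites)
loadings = rng.uniform(0.5, 1.5, size=n_metrics)
metrics = (health[:, None] * loadings[None, :]
           + 0.3 * rng.normal(size=(n_sites, n_metrics)))

# Point estimate of the latent factor: first left singular vector of
# the standardized metric matrix (sign is arbitrary).
Z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
estimate = U[:, 0] * s[0]

corr = abs(np.corrcoef(estimate, health)[0, 1])
```

The latent scores recovered this way track the simulated truth closely when the metrics share one dominant factor; what the Bayesian LHFI adds is coherent uncertainty quantification and prediction at new sites from covariates alone.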
|