401 |
Exploring Virtualization Techniques for Branch Outcome Prediction / Sadooghi-Alvandi, Maryam / 20 December 2011
Modern processors use branch prediction to fetch ahead in the instruction stream, increasing concurrency and performance. Larger predictor tables can improve prediction accuracy, but come at the cost of larger area and longer access delay.
This work introduces a new branch predictor design that increases the perceived predictor capacity without increasing its delay, by using a large virtual second-level table allocated in the second-level caches. Virtualization is applied to a state-of-the-art multi-table branch predictor. We evaluate the design using instruction count as a proxy for timing on a set of commercial workloads. For a predictor whose size is determined by access delay constraints rather than area, accuracy can be improved by 8.7%. Alternatively, the design can be used to achieve the same accuracy as a non-virtualized design while using 25% less dedicated storage.
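As a rough illustration of the idea only (a simplified sketch, not the thesis's actual microarchitecture), the Python model below backs a small dedicated prediction table with a larger "virtual" table standing in for state kept in the second-level cache; the sizing, indexing and fallback policy are assumptions for illustration.

# Illustrative sketch of a two-level "virtualized" branch predictor (not the real design).
class VirtualizedPredictor:
    def __init__(self, dedicated_bits=10, virtual_bits=16):
        self.dedicated = [1] * (1 << dedicated_bits)   # 2-bit counters, start weakly not-taken
        self.virtual = [1] * (1 << virtual_bits)       # stands in for state spilled to the L2 cache
        self.d_mask = (1 << dedicated_bits) - 1
        self.v_mask = (1 << virtual_bits) - 1

    def predict(self, pc, history):
        ctr = self.dedicated[(pc ^ history) & self.d_mask]
        if ctr in (0, 3):                              # dedicated entry is confident: use it
            return ctr >= 2
        return self.virtual[(pc ^ history) & self.v_mask] >= 2   # fall back to the larger table

    def update(self, pc, history, taken):
        for table, mask in ((self.dedicated, self.d_mask), (self.virtual, self.v_mask)):
            i = (pc ^ history) & mask
            table[i] = min(3, table[i] + 1) if taken else max(0, table[i] - 1)

# Hypothetical usage on a synthetic branch stream:
p = VirtualizedPredictor()
correct = 0
for t in range(10000):
    pc, history, taken = 0x400 + (t % 7) * 4, t & 0xFFFF, (t % 3 != 0)
    correct += p.predict(pc, history) == taken
    p.update(pc, history, taken)
print("accuracy:", correct / 10000)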
|
402 |
Using cooperation to improve the experience of web services consumers / Luo, Yuting / 11 September 2009
Web Services (WS) are one of the most promising approaches for building loosely coupled systems. However, due to the heterogeneous and dynamic nature of the WS environment, ensuring good QoS is still non-trivial. While WS tend to scale better than tightly coupled systems, they introduce a larger communication overhead and are more susceptible to server/resource latency. Traditionally this problem has been addressed by relying on negotiated Service Level Agreements to ensure the required QoS, or on the development of elaborate compensation handlers to minimize the impact of undesirable latency.
This research focuses on the use of cooperation between consumers and providers as an effective means of optimizing resource utilization and consumer experiences. It introduces a novel approach to implementing such cooperation between consumers and providers.
|
403 |
Modeling academic performance change from high school to college / Brown, Wayne E. (Wayne Edward), 1943- / 04 June 1990
This research was undertaken to identify variables that accounted for major changes in academic performance between high school and college. Differences between predicted and actual college GPA were used to classify students as gainers or decliners among a group of sophomores and a group of seniors at a medium-sized research university.
A model composed of nine variables was developed to explain the change in performance. Each variable was classified as an Environmental Triggering Mechanism (environmental stimulus), an Internal Psychological State (a cognitive response to the stimulus), or an Academic Behavior. Seven of the variables were derived from the literature of academic achievement in college. Two of the variables were identified in the course of exploratory interviews with senior performance changers.
Two-way discriminant function analysis was performed to determine which of the variables contributed most to classifying students as gainers or decliners. Correlation analysis was performed to examine the relationships between variables.
Academic expectancies, the number of terms required to adjust to college academically, and the students' approach to study (consistency and priority of study) emerged as making the strongest contribution to the discriminant function for both sophomores and seniors.
Significant correlations were found between some, but not all, of the variables in each category, supporting the basic structure of the model. Variables categorized as Environmental Triggering Mechanisms played a secondary role with respect to those Internal Psychological States and Academic Behaviors that contributed most to academic performance change. / Graduation date: 1991
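A minimal sketch of the kind of discriminant analysis described, assuming a numeric matrix of the nine model variables and a gainer/decliner label derived from the GPA residual; the data and variable coding below are placeholders, not the study's actual measures.

# Illustrative sketch: classify gainers vs. decliners with linear discriminant analysis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data: rows are students, columns are the model's nine variables.
rng = np.random.default_rng(0)
X = rng.random((120, 9))                        # placeholder predictor matrix
gpa_residual = rng.normal(size=120)             # actual minus predicted college GPA
y = (gpa_residual > 0).astype(int)              # 1 = gainer, 0 = decliner

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print("discriminant coefficients:", lda.coef_)  # which variables weigh most in the function
print("classification accuracy:", lda.score(X, y))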
|
404 |
Computational approaches for RNA energy parameter estimation / Andronescu, Mirela Stefania / 05 1900
RNA molecules play important roles, including catalysis of chemical reactions and control of gene expression, and their functions largely depend on their folded structures. Since determining these structures by biochemical means is expensive, there is increased demand for computational prediction of RNA structures. One computational approach is to find the secondary structure (a set of base pairs) that minimizes a free energy function for a given RNA molecule. The forces driving RNA folding can be approximated by means of a free energy model, which associates a free energy parameter with each distinct feature considered.
The main goal of this thesis is to develop state-of-the-art computational approaches that can significantly increase the accuracy (i.e., maximize the number of correctly predicted base pairs) of RNA secondary structure prediction methods, by improving and refining the parameters of the underlying RNA free energy model.
We propose two general approaches to estimating RNA free energy parameters. The Constraint Generation (CG) approach is based on iteratively generating constraints that enforce known structures to have energies lower than other structures for the same molecule. The Boltzmann Likelihood (BL) approach infers a set of RNA free energy parameters that maximizes the conditional likelihood of a set of known RNA structures. We discuss several variants and extensions of these two approaches, including a linear Gaussian Bayesian network that defines relationships between features. Overall, BL gives slightly better results than CG, but it is over ten times more expensive to run. In addition, CG requires software that is much simpler to implement.
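As a rough illustration of the Constraint Generation idea (a simplified, single-round sketch rather than the thesis's iterative formulation), each known structure contributes linear constraints forcing its energy below that of competing structures, under the assumption that energies are linear in the parameter vector; in the full method, folding with the current parameters generates new competing structures and the problem is re-solved.

# Illustrative sketch of constraint-generation-style parameter fitting.
import numpy as np
from scipy.optimize import linprog

def fit_parameters(known_feats, alt_feats_list, n_params, margin=0.1):
    # known_feats[i]: feature counts of the i-th reference (known) structure;
    # alt_feats_list[i]: feature counts of competing structures generated so far for that molecule.
    A_ub, b_ub = [], []
    for kf, alts in zip(known_feats, alt_feats_list):
        for af in alts:
            # Require E(known) + margin <= E(alt), with E(s) = features(s) . theta,
            # i.e. (kf - af) . theta <= -margin.
            A_ub.append(np.asarray(kf, dtype=float) - np.asarray(af, dtype=float))
            b_ub.append(-margin)
    res = linprog(c=np.zeros(n_params), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(-10.0, 10.0)] * n_params, method="highs")
    return res.x if res.success else None

# Hypothetical usage: 4 parameters, one known structure against two competing structures.
theta = fit_parameters(known_feats=[[1, 0, 2, 1]],
                       alt_feats_list=[[[0, 1, 2, 1], [2, 0, 1, 0]]],
                       n_params=4)
print(theta)
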
We obtain significant improvements in the accuracy of RNA minimum free energy secondary structure prediction with and without pseudoknots (regions of non-nested base pairs), when measured on large sets of RNA molecules with known structures. For the Turner model, which has been the gold-standard model without pseudoknots for more than a decade, the average prediction accuracy of our new parameters increases from 60% to 71%. For two models with pseudoknots, we obtain an increase of 9% and 6%, respectively. To the best of our knowledge, our parameters are currently state-of-the-art for the three considered models.
|
405 |
Transferability of community-based macro-level collision prediction models for use in road safety planning applications / Khondaker, Bidoura / 11 1900
This thesis proposes a methodology and guidelines for community-based macro-level collision prediction model (CPM) transferability in road safety planning applications, so that models developed in one spatial-temporal region can be used in a different spatial-temporal region. To this end, the macro-level CPMs developed for the Greater Vancouver Regional District (GVRD) by Lovegrove and Sayed (2006, 2007) were used in a model transferability study. Using those GVRD models and data from the Central Okanagan Regional District (CORD) in the Province of British Columbia, Canada, a transferability test was conducted that involved recalibration of the 1996 GVRD models to Kelowna in a 2003 context. The case study was carried out in three parts. First, macro-level CPMs for the City of Kelowna were developed using 2003 data, following the approach of the GVRD CPM development and use research. Next, the 1996 GVRD models were recalibrated to see whether they could yield reliable safety estimates for Kelowna in a 2003 context. Finally, the results of Kelowna's own models and of the transferred models were compared to determine which yielded better results.
The results of the transferability study revealed that macro-level CPM transferability was possible and no more complicated than micro-level CPM transferability. To facilitate the development of reliable community-based macro-level collision prediction models, it was recommended that CPMs be transferred rather than developed from scratch whenever and wherever communities lack sufficient data of adequate quality. The transferability guidelines in this research, together with their application in the case studies, are therefore offered as a contribution towards model transferability for road safety planning applications, with models developed in one spatial-temporal region being capable of being used in a different one.
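A minimal sketch of the recalibration step described above, assuming a CPM of the common form collisions = a0 * exp(sum of b_i * x_i) and a single multiplicative calibration factor; the coefficients and data below are placeholders, not the thesis's actual models.

# Illustrative sketch: transfer a macro-level collision prediction model by
# recalibrating its leading coefficient to a new region's observed data.
import numpy as np

def predict(a0, betas, X):
    # CPM of the assumed form a0 * exp(X @ betas); X holds zone-level exposure variables.
    return a0 * np.exp(X @ betas)

def recalibrate(a0, betas, X_new, observed_new):
    # Scale the model so total predicted collisions match total observed collisions
    # in the new spatial-temporal region (simple calibration-factor approach).
    c = observed_new.sum() / predict(a0, betas, X_new).sum()
    return c * a0

# Hypothetical usage with placeholder GVRD-style coefficients and Kelowna-style data:
a0_gvrd, betas_gvrd = 0.8, np.array([0.4, 0.02, -0.1])
X_kelowna = np.random.rand(50, 3)             # placeholder zonal variables
obs_kelowna = np.random.poisson(5, size=50)   # placeholder observed collisions
print("transferred a0:", recalibrate(a0_gvrd, betas_gvrd, X_kelowna, obs_kelowna))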
|
406 |
Towards Objective Human Brain Tumours Classification using DNA microarrays / Castells Domingo, Xavier / 10 June 2009
Human brain tumours (HBTs) are among the most aggressive and intractable cancers. The current system for diagnosis and prognosis of HBTs is based on the histological examination of a biopsy slice, which is considered the 'gold standard'. Apart from being invasive, this technique is not accurate enough to differentiate the malignancy grades of some HBTs, and its correlation with the patient's response to therapy is variable. In this context, gene signatures from DNA microarray experiments can improve on the results of the 'gold standard'. In this thesis, I collected 333 biopsies from various types of HBTs. As 38% of the samples displayed degraded RNA, I evaluated whether the HBT type, the apparent blood content and the collection medium of the biopsy could play a role in this. As no relationship was found, I hypothesized that a variable ischaemia time at normal body temperature prior to removal of the biopsy may induce degradation of RNA. This was tested in a preclinical glial tumour model in mice, in which 30 minutes of ischaemia affected the integrity of the RNA in non-necrotic tumours, but not in necrotic ones. A crucial part of this thesis was a proof-of-principle demonstration of the ability of gene signatures to predict HBTs objectively.
This was shown by perfect prediction of glioblastoma multiforme (Gbm) and meningothelial meningioma (Mm) using cDNA and Affymetrix microarrays. Histopathologists can discriminate these two tumour types perfectly, but this work demonstrated perfect prediction using an objective mathematical formula. Once this was demonstrated, I went on to predict different malignancy grades and possible molecular subtypes of glial tumours. In this respect, a gene signature based on the expression of 59 transcripts, which distinguished two groups of glioblastomas, was described. Finally, a crude initial analysis of the associated clinical data suggests that this gene signature may correlate with primary and secondary glioblastomas.
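As a rough illustration of this kind of expression-based classification (not the thesis's actual analysis pipeline), the sketch below trains a generic classifier on a hypothetical expression matrix; the data, the 59-transcript dimension and the classifier choice are assumptions for illustration.

# Illustrative sketch: classify tumour types from microarray expression profiles.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical expression matrix: rows = biopsies, columns = transcripts.
X = np.random.randn(60, 59)           # e.g. a 59-transcript signature (placeholder values)
y = np.array([0] * 30 + [1] * 30)     # 0 = Gbm, 1 = Mm (placeholder labels)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)    # cross-validated prediction accuracy
print("mean accuracy:", scores.mean())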
|
407 |
A Methodology to Enhance the Prediction of Forest Fire Propagation / Abdalhap, Baker / 18 June 2004
Wildland fire is an important problem from the ecological, economic and social points of view. Fire propagation simulation is a challenging problem in the area of simulation, due to the complexity of the physical models involved, the need for a great amount of computation and the difficulty of providing accurate input parameters. There are certain parameters that cannot be measured directly, but must be estimated from indirect measures (for example, the moisture content of vegetation). Other parameters can be measured at some particular points, but their values must then be interpolated over the whole terrain (for example, wind speed and direction). In both cases it is extremely difficult to know the exact value of each parameter at run time.
All these factors imply that, in most cases, the results provided by simulation tools do not match the real propagation, so the simulation tools are not wholly useful, since the predictions are not reliable. Input parameters appear as one of the major sources of deviation between predicted results and real fire propagation. A way to overcome this problem consists of optimizing the input parameters, with the aim of finding an input set such that the predicted fire propagation matches the real fire propagation. Evolutionary algorithms have been used to optimize the input parameters. However, such optimization must be carried out in real time, and therefore some methods must be applied to accelerate the optimization process. For this purpose, we apply a sensitivity analysis to the input parameters in order to assess their impact on the output and, consequently, to determine which parameters are worth the effort of tuning and which are better kept at an estimated value. These methods take advantage of the computational power offered by distributed systems.
The thesis is organized as follows. Chapter 1 introduces computational science and the problem of forest fire propagation prediction. Chapter 2 describes a pragmatic approach intended to improve the prediction quality of forest fire simulators in the presence of the imperfections described in the introduction; since the enhanced prediction method is based on searching for input-parameter values that improve the simulator's prediction, search methods occupy an important part of the thesis and a theoretical discussion of them is included. Chapter 3 discusses how the method was parallelized to reduce execution time and make it possible to run the method in a reasonable time, together with a full description of the implementation. Chapter 4 reports an experimental study that tunes and compares several optimization techniques that could be used in the proposed methodology. Chapter 5 describes ways to accelerate the optimization methods so that the optimal solution can be reached in fewer iterations, and therefore in less time, together with the corresponding experimental study. Chapter 6 applies the methodology to real fire lines extracted from laboratory experiments specifically designed to test it. Finally, chapter 7 presents the main conclusions and proposes future directions that can extend and enhance this research.
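A minimal sketch of the evolutionary input-parameter calibration idea, under the assumption of a toy stand-in simulator and fitness; the operators, names and bounds are illustrative, not the thesis's implementation.

# Illustrative sketch: calibrate uncertain simulator inputs with a simple genetic algorithm.
import random

def run_simulator(params):
    # Stand-in for the real fire-spread simulator (hypothetical); returns a scalar
    # summary of the predicted spread so the sketch is self-contained.
    return sum(p * p for p in params)

def fitness(params, observed):
    # Smaller mismatch between predicted and observed spread = higher fitness.
    return -abs(run_simulator(params) - observed)

def calibrate(bounds, observed, pop_size=40, generations=20, mut_rate=0.2):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, observed), reverse=True)
        parents = pop[: pop_size // 2]                              # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(genes) for genes in zip(a, b)]   # uniform crossover
            if random.random() < mut_rate:                          # mutation
                i = random.randrange(len(child))
                child[i] = random.uniform(*bounds[i])
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, observed))

# Hypothetical usage: two uncertain inputs (e.g. a moisture and a wind factor).
best = calibrate(bounds=[(0.0, 1.0), (0.0, 10.0)], observed=12.0)
print("calibrated parameters:", best)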
|
408 |
Wind Farms Production: Control and Prediction / EL-Fouly, Tarek Hussein Mostafa / 11 October 2007
Wind energy resources, unlike dispatchable central station generation, produce power that depends on an irregular external source, the incident wind, which does not always blow when electricity is needed. This results in the variability, unpredictability, and uncertainty of wind resources. Therefore, integrating wind facilities into the utility electrical grid presents a major challenge to power system operators. Such integration has a significant impact on optimum power flow, transmission congestion, power quality, system stability, load dispatch, and economic analysis.
Due to the irregular nature of wind power production, accurate prediction represents a major challenge to power system operators. Therefore, in this thesis two novel models are proposed for wind speed and wind power prediction. One model is dedicated to short-term prediction (one hour ahead) and the other to medium-term prediction (one day ahead). The accuracy of the proposed models is assessed by comparing their results with the corresponding values of a reference prediction model referred to as the persistent model.
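A minimal sketch of how a forecast is typically judged against the persistent (persistence) baseline; the series, the stand-in model and the RMSE metric below are assumptions for illustration.

# Illustrative sketch: compare a wind-speed forecast against the persistence baseline.
import numpy as np

def persistence_forecast(series, horizon=1):
    # Persistence: the forecast for time t + horizon is simply the value observed at t.
    return series[:-horizon]

def rmse(pred, actual):
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

wind = np.abs(np.random.randn(200) * 3 + 8)    # placeholder hourly wind-speed series (m/s)
actual = wind[1:]                               # one-hour-ahead targets
baseline = persistence_forecast(wind, 1)
model_pred = baseline + np.random.randn(len(baseline)) * 0.3   # stand-in for a proposed model

print("persistence RMSE:", rmse(baseline, actual))
print("model RMSE:      ", rmse(model_pred, actual))
# Improvement over the persistence baseline is the usual way such models are reported.
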
Utility grid operation is impacted not only by the uncertainty of the future production of wind farms, but also by the variability of their current production and by how the active and reactive power exchange with the grid is controlled. To address this particular task, a control technique for wind turbines driven by doubly-fed induction generators (DFIGs) is developed to regulate the terminal voltage by equally sharing the generated/absorbed reactive power between the rotor-side and the grid-side converters. To highlight the impact of the newly developed technique in reducing the power loss in the generator set, an economic analysis is carried out. Moreover, a new aggregated model for wind farms is proposed that accounts for the irregularity of the incident wind distribution throughout the farm layout. Specifically, this model includes the wake effect and the time delay of the incident wind speed at the different turbines on the farm, in order to simulate the fluctuation in the generated power more accurately and closer to real-time operation.
Recently, wind farms with considerable output power ratings have been installed. Their integration into the utility grid will substantially affect electricity markets. This thesis investigates the possible impact of wind power variability, wind farm control strategy, wind energy penetration level, wind farm location, and wind power prediction accuracy on total generation costs and on close-to-real-time electricity market prices. These issues are addressed by developing a single-auction market model for determining real-time electricity market prices.
|
409 |
An Effort Prediction Framework for Software Defect Correction / Hassouna, Alaa / 27 August 2008
Developers apply changes and updates to software systems to adapt to emerging environments and address new requirements. In turn, these changes introduce additional software defects, usually caused by our inability to comprehend the full scope of the modified code. As a result, software practitioners have developed tools to aid in the detection and prediction of imminent software defects, in addition to the effort required to correct them. Although software development effort prediction has been in use for many years, research into defect-correction effort prediction is relatively new. The increasing complexity, integration and ubiquitous nature of current software systems has sparked renewed interest in this field. Effort prediction now plays a critical role in the planning activities of managers. Accurate predictions help corporations budget, plan and distribute available resources effectively and efficiently. In particular, early defect-correction effort predictions could be used by testers to set schedules, and by managers to plan costs and provide earlier feedback to customers about future releases.
In this work, we address the problem of predicting the effort needed to resolve a software defect. More specifically, our study is concerned with defects or issues that are reported on an Issue Tracking System or any other defect repository. Current approaches use one prediction method or technique to produce effort predictions. This approach usually suffers from the weaknesses of the chosen prediction method, and consequently the accuracy of the predictions is affected. To address this problem, we present a composite prediction framework. Rather than using one prediction approach for all defects, we propose the use of multiple integrated methods which complement the weaknesses of one another. Our framework is divided into two sub-categories, Similarity-Score Dependent and Similarity-Score Independent. The Similarity-Score Dependent method utilizes the power of Case-Based Reasoning, also known as Instance-Based Reasoning, to compute predictions. It relies on matching target issues to similar historical cases, then combines their known effort for an informed estimate. On the other hand, the Similarity-Score Independent method makes use of other defect-related information, with some statistical manipulation, to produce the required estimate. To measure similarity between defects, some method of distance calculation must be used. In some cases, this method might produce misleading results due to observed inconsistencies in history, and the fact that current similarity-scoring techniques cannot account for all the variability in the data. In this case, the Similarity-Score Independent method can be used to estimate the effort, where the effect of such inconsistencies can be reduced.
We have performed a number of experimental studies on the proposed framework to assess the effectiveness of the presented techniques. We extracted the data sets from an operational Issue Tracking System in order to test the validity of the model on real project data. These studies involved the development of multiple tools in both the Java programming language and PHP, each for a certain stage of data analysis and manipulation. The results show that our proposed approach produces significant improvements when compared to current methods.
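A minimal sketch of the Case-Based Reasoning side of such a framework, assuming numeric issue features and a Euclidean similarity score; the feature set, weighting scheme and data are illustrative, not the framework's actual design.

# Illustrative sketch: estimate defect-correction effort from the most similar historical issues.
import numpy as np

def knn_effort_estimate(target, history_feats, history_effort, k=3):
    # history_feats: rows of numeric issue features (e.g. severity, component, size of change);
    # history_effort: known correction effort (hours) for each historical issue.
    dists = np.linalg.norm(history_feats - target, axis=1)   # distance-based similarity score
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)                  # closer cases count more
    return float(np.average(history_effort[nearest], weights=weights))

# Hypothetical historical data: 100 issues, 4 numeric features each.
feats = np.random.rand(100, 4)
effort = np.random.gamma(shape=2.0, scale=5.0, size=100)     # placeholder effort in hours
new_issue = np.random.rand(4)
print("estimated effort (h):", knn_effort_estimate(new_issue, feats, effort))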
|
410 |
New Approaches to Protein Structure Prediction / Li, Shuai Cheng / 04 November 2009
Protein structure prediction is concerned with the prediction of a protein's three-dimensional structure from its amino acid sequence. Such predictions are commonly performed by searching the possible structures and evaluating each structure with some scoring function. If it is assumed that the target protein structure resembles the structure of a known protein, the search space can be significantly reduced; such an approach is referred to as comparative structure prediction. When no such assumption is made, the approach is known as ab initio structure prediction. There are several difficulties in devising efficient searches or in computing the scoring function. Many of these problems have ready solutions from known mathematical methods. However, the problems that remain unsolved have hindered structure prediction methods from more ideal predictions.
The objective of this study is to present a complete framework for ab initio protein structure prediction. To achieve this, a new search strategy is proposed, and better techniques are devised for computing the known scoring functions. Some of the remaining problems in protein structure prediction are revisited; several of them are shown to be intractable, and in many of these cases approximation methods are suggested as alternative solutions. The primary issues addressed in this thesis concern local structure prediction, structure assembly or sampling, side-chain packing, model comparison, and structural alignment. For brevity, we do not elaborate on these problems here; a concise introduction is given in the first section of this thesis.
Results from these studies prompted the development of several programs, forming a utility suite for ab initio protein structure prediction. Due to the general usefulness of these programs, some of them are released with open source licenses to benefit the community.
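As a generic illustration of score-guided structure sampling (not the search strategy proposed in this thesis), the sketch below perturbs backbone torsion angles and accepts moves with a Metropolis criterion against a placeholder scoring function.

# Illustrative sketch: score-guided conformational sampling with a Metropolis criterion.
import math, random

def score(torsions):
    # Placeholder scoring function over backbone torsion angles; a real predictor
    # would use a statistical or physics-based energy here (higher = better in this sketch).
    return sum(math.cos(t) for t in torsions)

def sample(n_residues=20, steps=5000, temperature=1.0):
    conf = [random.uniform(-math.pi, math.pi) for _ in range(2 * n_residues)]  # phi/psi angles
    current = score(conf)
    best, best_score = list(conf), current
    for _ in range(steps):
        i = random.randrange(len(conf))
        old = conf[i]
        conf[i] += random.gauss(0.0, 0.3)          # small torsion perturbation
        proposed = score(conf)
        # Metropolis criterion: always accept improvements, sometimes accept worse moves.
        if proposed >= current or random.random() < math.exp((proposed - current) / temperature):
            current = proposed
            if current > best_score:
                best, best_score = list(conf), current
        else:
            conf[i] = old                          # reject the move
    return best, best_score

conformation, s = sample()
print("best score found:", s)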
|