271.
Exploitation du contenu pour l'optimisation du stockage distribué / Leveraging content properties to optimize distributed storage systems
Kloudas, Konstantinos. 06 March 2013.
Cloud service providers, social networks and data-management companies are witnessing a tremendous increase in the amount of data they receive every day. All this data creates new opportunities to expand human knowledge in fields like healthcare, urban planning and human behavior, and to improve offered services like search, recommendation, and many others. It is not by accident that many academics, and the public media as well, refer to our era as the "Big Data" era. But these huge opportunities come with the requirement for better data-management systems that, on one hand, can safely accommodate this huge and constantly increasing volume of data and, on the other, serve them in a timely manner so that applications can benefit from processing them. This document focuses on these two challenges of "Big Data". In more detail, we study (i) backup storage systems as a means to safeguard data against a number of factors that may render them unavailable, and (ii) data placement strategies on geographically distributed storage systems, with the goal of minimizing user-perceived latencies while utilizing network and storage resources efficiently. Throughout our study, data are placed at the centre of our design choices, as we try to leverage content properties for both placement and efficient storage.
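One common way content properties are leveraged in backup storage is deduplication by content hash; below is a minimal sketch of the idea (fixed-size chunking and SHA-256 are illustrative choices for this sketch, not necessarily the thesis's design):

```python
import hashlib

def dedup_store(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and index them by content hash.

    Chunks with identical content map to the same key, so repeated
    content across backups is stored only once. (Fixed-size chunking
    is the simplest variant; production systems often use
    content-defined chunk boundaries instead.)
    """
    store = {}    # content hash -> chunk bytes
    recipe = []   # ordered list of hashes needed to rebuild the data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)   # store each unique chunk once
        recipe.append(key)
    return store, recipe

def restore(store, recipe):
    """Reassemble the original data from its chunk recipe."""
    return b"".join(store[key] for key in recipe)

backup = b"header" + b"A" * 8192 + b"B" * 4096 + b"A" * 8192
store, recipe = dedup_store(backup)
assert restore(store, recipe) == backup
print(f"{len(recipe)} chunks referenced, {len(store)} stored uniquely")
```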
272.
An Efficient Hybrid Heuristic and Probabilistic Model for the Gate Matrix Layout Problem in VLSI Design
Bagchi, Tanuj. 08 1900.
In this thesis, the gate matrix layout problem in VLSI design is considered, where the goal is to minimize the number of tracks required to lay out a given circuit, and a taxonomy of approaches to its solution is presented. An efficient hybrid heuristic is also proposed for this combinatorial optimization problem, based on the combination of a probabilistic hill-climbing technique and a greedy method. This heuristic is tested experimentally against four existing algorithms. As test cases, five benchmark problems from the literature as well as randomly generated problem instances are considered. The experimental results show that the proposed hybrid algorithm, on average, performs better than the other heuristics in terms of the required computation time and/or the quality of solution. Due to the computation-intensive nature of the problem, an exact solution within reasonable time limits is impossible, so it is difficult to judge the effectiveness of any heuristic in terms of solution quality (the number of tracks required). A probabilistic model of the gate matrix layout problem that computes the expected number of tracks from the given input parameters is useful in this respect. Such a probabilistic model is proposed in this thesis, and its performance is experimentally evaluated.
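For orientation, here is a toy sketch of a greedy-plus-probabilistic-hill-climbing hybrid of the kind the abstract describes (the cost model, move set, and acceptance rule are simplifications assumed for illustration, not the thesis's actual heuristic):

```python
import random

def tracks(order, nets):
    """Tracks needed = max number of nets 'open' over any column position."""
    pos = {g: i for i, g in enumerate(order)}
    count = [0] * len(order)
    for net in nets:
        lo = min(pos[g] for g in net)
        hi = max(pos[g] for g in net)
        for c in range(lo, hi + 1):   # net occupies a track over its whole span
            count[c] += 1
    return max(count)

def hybrid_layout(nets, gates, iters=20000, p_accept_worse=0.05, seed=0):
    """Greedy initial ordering refined by probabilistic hill-climbing."""
    rng = random.Random(seed)
    # Greedy start: order gates by how many nets touch them.
    cur = sorted(gates, key=lambda g: sum(g in n for n in nets), reverse=True)
    cur_cost = tracks(cur, nets)
    best, best_cost = cur[:], cur_cost
    for _ in range(iters):
        i, j = rng.sample(range(len(cur)), 2)
        cur[i], cur[j] = cur[j], cur[i]          # candidate move: swap two gates
        cost = tracks(cur, nets)
        if cost <= cur_cost or rng.random() < p_accept_worse:
            cur_cost = cost                      # accept (occasionally uphill,
            if cost < best_cost:                 # to escape local minima)
                best, best_cost = cur[:], cost
        else:
            cur[i], cur[j] = cur[j], cur[i]      # reject: undo the swap
    return best, best_cost

nets = [{0, 2}, {1, 3, 4}, {2, 4}, {0, 5}, {3, 5}]
order, cost = hybrid_layout(nets, gates=list(range(6)))
print(order, cost)
```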
273.
Optical-SZE scaling relations for DES optically selected clusters within the SPT-SZ Survey
Saro, A., Bocquet, S., Mohr, J., Rozo, E., Benson, B. A., Dodelson, S., Rykoff, E. S., Bleem, L., Abbott, T. M. C., Abdalla, F. B., Allen, S., Annis, J., Benoit-Levy, A., Brooks, D., Burke, D. L., Capasso, R., Carnero Rosell, A., Carrasco Kind, M., Carretero, J., Chiu, I., Crawford, T. M., Cunha, C. E., D'Andrea, C. B., da Costa, L. N., Desai, S., Dietrich, J. P., Evrard, A. E., Neto, A. Fausti, Flaugher, B., Fosalba, P., Frieman, J., Gangkofner, C., Gaztanaga, E., Gerdes, D. W., Giannantonio, T., Grandis, S., Gruen, D., Gruendl, R. A., Gupta, N., Gutierrez, G., Holzapfel, W. L., James, D. J., Kuehn, K., Kuropatkin, N., Lima, M., Marshall, J. L., McDonald, M., Melchior, P., Menanteau, F., Miquel, R., Ogando, R., Plazas, A. A., Rapetti, D., Reichardt, C. L., Reil, K., Romer, A. K., Sanchez, E., Scarpine, V., Schubnell, M., Sevilla-Noarbe, I., Smith, R. C., Soares-Santos, M., Soergel, B., Strazzullo, V., Suchyta, E., Swanson, M. E. C., Tarle, G., Thomas, D., Vikram, V., Walker, A. R., Zenteno, A. 07 1900.
We study the Sunyaev-Zel'dovich effect (SZE) signature in South Pole Telescope (SPT) data for an ensemble of 719 optically identified galaxy clusters selected from 124.6 deg² of the Dark Energy Survey (DES) science verification data, detecting a clear stacked SZE signal down to richness λ ∼ 20. The SZE signature is measured using matched-filtered maps of the 2500 deg² SPT-SZ survey at the positions of the DES clusters, and the degeneracy between the SZE observable and the matched-filter size is broken by adopting as priors SZE and optical mass-observable relations that are either calibrated using SPT-selected clusters or through the Arnaud et al. (A10) X-ray analysis. We measure the SPT signal-to-noise ζ–λ relation and two integrated Compton-y Y_500–λ relations for the DES-selected clusters and compare these to model expectations that account for the SZE-optical centre offset distribution. For clusters with λ > 80, the two SPT-calibrated scaling relations are consistent with the measurements, while for the A10-calibrated relation the measured SZE signal is smaller by a factor of 0.61 ± 0.12 compared to the prediction. For clusters at 20 < λ < 80, the measured SZE signal is smaller by a factor of ∼0.20–0.80 (at between 2.3σ and 10σ significance) compared to the prediction, with the SPT-calibrated scaling relations and larger-λ clusters showing generally better agreement. We quantify the corrections required to achieve consistency, showing that there is a richness-dependent bias that can be explained by some combination of (1) contamination of the observables and (2) biases in the estimated halo masses. We also discuss particular physical effects associated with these biases, such as contamination of λ from line-of-sight projections or of the SZE observables from point sources, larger offsets in the SZE-optical centring, or larger intrinsic scatter in the λ-mass relation at lower richnesses.
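For reference, ζ–λ and Y_500–λ relations of this kind are conventionally parametrized as power laws in richness and redshift; a generic form (amplitudes, slopes, and pivots below are placeholders, not the paper's fitted values) is:

```latex
\zeta = A_{\mathrm{SZ}}
        \left(\frac{\lambda}{\lambda_{\mathrm{piv}}}\right)^{B_{\mathrm{SZ}}}
        \left(\frac{E(z)}{E(z_{\mathrm{piv}})}\right)^{C_{\mathrm{SZ}}},
\qquad
Y_{500} = A_{Y}
        \left(\frac{\lambda}{\lambda_{\mathrm{piv}}}\right)^{B_{Y}}
        \left(\frac{E(z)}{E(z_{\mathrm{piv}})}\right)^{C_{Y}},
\qquad
E(z) \equiv H(z)/H_{0}.
```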
274.
Computational Models of Nuclear Proliferation
Frankenstein, William. 01 May 2016.
This thesis utilizes social influence theory and computational tools to examine the disparate impact of positive and negative ties in nuclear weapons proliferation. The thesis broadly comprises two sections: a simulation section, which focuses on government stakeholders, and a large-scale data analysis section, which focuses on the public and domestic actor stakeholders. The simulation section demonstrates that the nonproliferation norm is an emergent behavior of political alliance and hostility networks, and that alliances play a role in present-day nuclear proliferation. This model is robust and captures second-order effects of extended hostility and alliance relations. The large-scale data analysis section demonstrates the role that context plays in sentiment evaluation and highlights how Twitter collection can provide useful input to policy processes. It first presents the results of an on-campus study showing that context plays a role in sentiment assessment. Then, in an analysis of a Twitter dataset of over 7.5 million messages, it assesses the role of 'noise' and biases in online data collection. In a deep dive analyzing the Iranian nuclear agreement, we demonstrate that the Middle East is not facing a nuclear arms race, and show that there is a structural hole in online discussion surrounding nuclear proliferation. By combining both approaches, policy analysts gain a complete and generalizable set of computational tools to assess and analyze disparate stakeholder roles in nuclear proliferation.
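Purely as an illustration of the signed-network mechanism the simulation section builds on (the update rule, tie densities, and threshold below are invented for this sketch, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
allies = (rng.random((n, n)) < 0.10).astype(float)   # positive (alliance) ties
rivals = (rng.random((n, n)) < 0.10).astype(float)   # negative (hostility) ties
np.fill_diagonal(allies, 0)
np.fill_diagonal(rivals, 0)

pursuit = (rng.random(n) < 0.15).astype(float)       # initial proliferators

for _ in range(50):
    threat = rivals @ pursuit    # rivals with programs raise the incentive
    shelter = allies @ pursuit   # nuclear-armed allies extend deterrence
    pursuit = (threat - shelter > 0.5).astype(float)  # crude threshold rule

print(int(pursuit.sum()), "states pursuing after 50 rounds")
```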
275.
Paralelní implementace multireferenčních coupled cluster metod a výpočet na velkých systémech / Parallel Implementation of Multireference Coupled Cluster Methods and Calculations on Large Systems
Brabec, Jiří. January 2012.
First, we have developed a Tensor Contraction Engine-based implementation of the BW-MRCCSD approach. Scalability tests have been performed across thousands of cores. We have further developed a novel two-level parallel algorithm for Hilbert-space MRCC methods which uses processor groups. In this approach, references are distributed among processor groups (reference-level parallelism) and the tasks of each reference are distributed inside a given processor group (task-level parallelism). We have shown that our implementation scales across 24,000 cores. The usability of our code was demonstrated on larger systems (dodecane, polycarbenes and naphthyne isomers). Finally, we present novel universal state-selective (USS) corrections to state-specific MRCC methods. The USS-corrected MRCC results were compared with full configuration interaction (FCI) results.
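A schematic of the two-level parallelism described above, using MPI communicator splitting (the mpi4py calls are real; the contraction workload is a placeholder stub, and the script assumes launch under mpirun):

```python
from mpi4py import MPI

def compute_contraction(ref, task):
    """Stub standing in for one tensor-contraction task of a reference."""
    return 1.0e-3 * (ref + 1) * task

world = MPI.COMM_WORLD
n_refs = 4                                  # number of reference configurations

# Reference-level parallelism: carve the world communicator into one
# processor group per reference.
ref_id = world.rank % n_refs
group = world.Split(color=ref_id, key=world.rank)

# Task-level parallelism: each group distributes that reference's
# contraction tasks among its own ranks (round-robin here).
tasks = [t for t in range(100) if t % group.size == group.rank]
partial = sum(compute_contraction(ref_id, t) for t in tasks)

# Reduce within the group to obtain this reference's contribution.
ref_energy = group.allreduce(partial, op=MPI.SUM)
print(f"rank {world.rank}: reference {ref_id} contribution {ref_energy:.4f}")
```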
276.
Sezónní pravděpodobnostní hydrologické předpovědi / Seasonal probabilistic hydrological forecasting
Šípek, Václav. January 2014.
Seasonal hydrological forecasts are a very current topic, especially in the context of the extreme hydrological events that took place at the end of the 20th and beginning of the 21st century, namely large-scale floods and long-lasting periods of drought. This has led to a need for effective water-management strategies, which must be able to distribute water resources efficiently in both space and time. Seasonal hydrological forecasting systems constitute an essential part of such strategies, as they enable runoff to be estimated sufficiently far in advance. This thesis deals with a seasonal hydrological forecasting system with a one-month lead time. The aim of the study is to apply three forecasting methods to an experimental watershed in the Czech Republic. The first method is the reference climatology approach, the second the well-tested Ensemble Streamflow Prediction (ESP) system, and the third a newly proposed modification of it. This modification (modified ESP, mESP) restricts the input data based on their relations to large-scale climatological variables and patterns. The first part of the thesis focuses on the investigation of possible relations among hydrometeorological...
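The ESP idea tested here can be sketched compactly: the current basin state is run forward under each historical year's meteorology, and the spread of outcomes forms the probabilistic one-month forecast. A minimal sketch, assuming a toy linear-reservoir model and synthetic precipitation (both are illustrative, not the thesis's model):

```python
import numpy as np

def run_month(storage, precip, k=0.1):
    """Toy linear reservoir: returns total monthly runoff from daily precip."""
    runoff = 0.0
    for p in precip:          # daily loop over one month
        storage += p
        q = k * storage       # outflow proportional to current storage
        storage -= q
        runoff += q
    return runoff

rng = np.random.default_rng(42)
# 30 past years of daily precipitation (mm/day), one 30-day trace per year.
historical = [rng.gamma(2.0, 2.0, size=30) for _ in range(30)]

state_now = 120.0             # current basin storage from model spin-up
ensemble = [run_month(state_now, precip) for precip in historical]

print("forecast quantiles (mm):",
      np.percentile(ensemble, [10, 50, 90]).round(1))
```

The mESP variant described in the abstract would simply filter the historical traces to years whose large-scale climate indices resemble current conditions before forming the ensemble.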
277.
Interacting dark energy models in Cosmology and large-scale structure observational tests / Modelos de energia escura com interação em Cosmologia e testes observacionais com estruturas em grande escala
Marcondes, Rafael José França. 23 September 2016.
Modern Cosmology offers us a great understanding of the universe with striking precision, made possible by the modern technologies of the newest generations of telescopes. The standard cosmological model, however, is not free of theoretical problems and open questions. One possibility that has been put forward is the existence of a coupling between the dark sectors. The idea of an interaction between the dark components could help physicists understand why we live in an epoch of the universe where dark matter and dark energy are comparable in terms of energy density, which can be regarded as a strange coincidence given that their time evolutions are completely different. Dark matter and dark energy are generally treated as perfect fluids, and interaction is introduced when we allow for a non-zero term on the right-hand side of their individual energy-momentum tensor conservation equations. We proceed with a phenomenological approach to test models of interaction against observations of redshift-space distortions. In a flat universe composed only of these two fluids, we consider separately two forms of interaction, through terms proportional to the density of dark energy and to the density of dark matter, respectively. An analytic expression for the growth rate, approximated as f = Ω^γ, where Ω is the fractional contribution of dark matter to the energy content of the universe and γ is the growth index, is derived in terms of the interaction strength and other model parameters in the first case, while for the second model we show that a non-zero interaction cannot be accommodated by the growth-index approximation. The expressions obtained are then used to compare the model predictions with growth-of-structure observational data in a Markov chain Monte Carlo code, and we find that the current growth data alone cannot constrain the interaction strength due to their large uncertainties. We also employ observations of galaxy clusters to assess their virial state via the modified Layzer-Irvine equation in order to detect signs of an interaction. We obtain measurements of the observed virial ratios, interaction strength, rest virial ratio and departure from equilibrium for a set of clusters. A combined analysis indicates an interaction strength of 0.29^{+2.25}_{-0.40}, compatible with no interaction, but a combined rest virial ratio of 0.82^{+0.13}_{-0.14}, which amounts to a detection at the 2σ confidence level. Despite this tension, the method produces encouraging results while still leaving room for improvement, possibly by removing the assumption of a small departure from equilibrium.
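As an illustrative sketch of the setup (the notation below is conventional for coupled dark-sector models and is assumed rather than quoted from the thesis), the coupling enters as a source term Q in the background conservation equations, and the growth rate is approximated by a power of the matter density parameter:

```latex
\dot{\rho}_{\mathrm{dm}} + 3H\rho_{\mathrm{dm}} = Q,
\qquad
\dot{\rho}_{\mathrm{de}} + 3(1+w)H\rho_{\mathrm{de}} = -Q,
\qquad
f \equiv \frac{d\ln\delta_{\mathrm{dm}}}{d\ln a}
  \approx \Omega_{\mathrm{dm}}^{\,\gamma},
```

with the two cases in the abstract corresponding to Q ∝ Hρ_de and Q ∝ Hρ_dm, and Q = 0 recovering the uncoupled standard model.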
278.
Elaboração de itens para avaliações em larga escala / Elaboration of items for large-scale evaluations
Costa, Edson Ferreira. 17 May 2018.
This work aims to help teachers and other Basic Education professionals elaborate items for large-scale evaluations. It opens with a brief history of the situation of Basic Education in Brazil in the mid-1980s. Next, it describes some of the measures planned by the educational agencies in the search for improvements in the Brazilian educational scenario, such as the restructuring of the existing large-scale assessments in the 1990s and the creation of new examinations. The following chapter presents the documents consulted during the construction of these evaluations (curricular and reference matrices), with emphasis on the Exame Nacional do Ensino Médio (ENEM), since it has been the most comprehensive large-scale assessment at the federal level since 2009. The final chapters discuss the importance of the item in large-scale evaluations and present some models elaborated on the basis of the ENEM Reference Matrix.
279.
A qualitative case study of a self-initiated change in South Korea
Chung, Baul. January 2011.
Thesis advisor: Andy Hargreaves
After a decade of large-scale educational reform, there is now a growing interest in grass-roots, self-initiated change (Datnow et al., 2002; Hargreaves, 2009; Hargreaves & Shirley, 2009; Shirley, 2009). Yet self-initiated change (SIC) remains largely undertheorized in the literature on educational change. Even the advocates of self-initiated change do not clearly specify the underlying mechanisms and the multi-dimensional processes by which SIC occurs. Utilizing a qualitative case study approach and a conceptual framework that draws from incremental institutional change theory and the literature on social movements within institutions, this study explored the following research questions:
* What mechanisms do the change agents of SIC employ, how do they implement these mechanisms, and why do they employ them?
* What are the characteristics of the processes of SIC? What is the pacing and sequencing of the change?
* How does SIC unfold over time, and why?
In answering these three initial questions, a fourth research question emerged that sums up the other three:
* What implications does an investigation of self-initiated change in one school have for understanding existing theories of self-initiated and imposed educational change?
Findings from this study revealed that self-initiated change involved a recombination that embodied the ideal of "change without pain" by balancing change and stability (Abrahamson, 2004). The process of self-initiated change turned out to be slow-moving (Pierson, 2004; Thelen & Mahoney, 2010). Mindful juxtaposition (Huy, 2001) and a dialectical perspective (Hargrave & Van de Ven, 2009) were required to address the multiple and contradictory dimensions of change. Based on these analyses, I propose conceptualizing SIC as "change without pain", "slow-moving change", and "dialectical/cyclical change".
Thesis (PhD) — Boston College, 2011. Submitted to: Boston College, Lynch School of Education. Discipline: Educational Administration and Higher Education.
280.
Preserving 20 Years of TIMSS Trend Measurements: Early Stages in the Transition to the eTIMSS Assessment
Fishbein, Bethany. January 2018.
Thesis advisor: Ina V.S. Mullis
This dissertation describes the foundation for maintaining TIMSS' 20-year trend measurements through the introduction of a new computer- and tablet-based mode of assessment delivery, eTIMSS. Because of the potential for mode effects on the psychometric behavior of the trend items that TIMSS relies on to maintain comparable scores between subsequent assessment cycles, development efforts for TIMSS 2019 began over three years in advance. This dissertation documents the development of eTIMSS over this period and features the methodology and results of the eTIMSS Pilot/Item Equivalence Study. The study was conducted in 25 countries and employed a within-subjects, counterbalanced design to determine the effect of the mode of administration on the trend items. Further analysis examined score-level mode effects in relation to students' socioeconomic status, gender, and self-efficacy for using digital devices. Strategies are discussed for mitigating threats of construct-irrelevant variance on students' eTIMSS performance. The analysis by student subgroups, similar item discriminations, high cross-mode correlations, and equivalent rankings of country means support the equivalence of the mathematics and science constructs between paperTIMSS and eTIMSS. However, the results revealed an overall mode effect on the TIMSS trend items: items were more difficult for students in digital formats than on paper, and the effect was larger in mathematics than in science. An approach is therefore needed to account for the mode effects when carrying trend measurements forward from previous cycles to TIMSS 2019. Each eTIMSS 2019 trend country will administer the paper trend booklets to an additional nationally representative bridge sample of students, and a common-population equating approach will ensure the link between paperTIMSS and eTIMSS scores.
Thesis (PhD) — Boston College, 2018. Submitted to: Boston College, Lynch School of Education. Discipline: Educational Research, Measurement and Evaluation.
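Operational TIMSS linking is IRT-based; purely to illustrate the common-population bridging idea, here is a minimal mean-sigma linear equating sketch on simulated scores (all names and numbers below are invented for the sketch):

```python
import numpy as np

def mean_sigma_link(scores_e, scores_p):
    """Linear (mean-sigma) transform putting digital scores on the paper scale.

    Assumes both score vectors come from randomly equivalent samples of the
    same population, as in the bridge-sample design described above.
    """
    a = scores_p.std(ddof=1) / scores_e.std(ddof=1)
    b = scores_p.mean() - a * scores_e.mean()
    return lambda x: a * x + b

rng = np.random.default_rng(1)
paper = rng.normal(500, 100, size=4000)    # bridge sample, paper booklets
etimss = rng.normal(488, 97, size=4000)    # same population, digital mode

to_paper_scale = mean_sigma_link(etimss, paper)
print(round(to_paper_scale(488.0)))        # a mode-adjusted score near 500
```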