231 |
Large Scale Data Analysis with Application to Computational Epidemiology and Network Science. Irany, Fariba Afrin 12 1900 (has links)
This dissertation focuses on large-scale complex data analysis techniques for (i) computational epidemiology and (ii) multi-featured data arising in network science. This research contributes to improving SEIR-based mathematical models that enrich the understanding of disease transmission dynamics by considering both infected and hospitalized individuals. The study integrates three distinct interventions within the model and conducts a case study focused on Nigeria, a densely populated country in the Sub-Saharan region with underreported COVID-19 cases. This evaluation assesses the impact of COVID-19 in pre- and post-intervention phases, providing insights for other viral outbreaks as well. In network science, the study investigates complex datasets characterized by multi-featured attributes. The research addresses two challenges. The first challenge involves applying a decoupling-based community identification approach to uncover community structures within homogeneous multilayer networks (HoMLN). A novel module that generates multilayer networks with any desired number of layers through a single configuration file has also been introduced. This development simplifies network layer creation for a homogeneous multilayer network, eliminating repetitive and cumbersome script-writing tasks. The second challenge involves developing an edge-based attack model that distributes high-core vertices to disrupt network centrality, thereby enabling an assessment of network resilience. This model fills a gap in the existing literature by introducing an edge-based perturbation approach, and has significant implications for epidemiology research, such as optimizing vaccine distribution under resource constraints. This attack model's applicability extends to multilayer networks, enhancing its utility. Overall, this dissertation advances analytical methodologies in computational epidemiology and network science, providing valuable insights and tools for addressing complex problems in these interdisciplinary fields.
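To make the compartment structure concrete, the following is a minimal sketch of an SEIR-type model extended with a hospitalized compartment, in the spirit of the model described above. The compartment transitions, rate parameters, and population size are illustrative assumptions, not the calibrated model, data, or interventions from the dissertation.

```python
import numpy as np

# Compartments: S, E, I, H, R. All rates below are illustrative placeholders.
def seirh_step(state, dt, beta, sigma, gamma_i, eta, gamma_h, N):
    S, E, I, H, R = state
    new_exposed    = beta * S * I / N   # transmission from infectious individuals
    new_infectious = sigma * E          # end of incubation
    new_hospital   = eta * I            # infectious -> hospitalized
    rec_infectious = gamma_i * I        # recovery without hospitalization
    rec_hospital   = gamma_h * H        # recovery of hospitalized individuals
    dS = -new_exposed
    dE = new_exposed - new_infectious
    dI = new_infectious - new_hospital - rec_infectious
    dH = new_hospital - rec_hospital
    dR = rec_infectious + rec_hospital
    return state + dt * np.array([dS, dE, dI, dH, dR])

N = 2.0e8                                        # population of the order of Nigeria's (assumed)
state = np.array([N - 10.0, 0.0, 10.0, 0.0, 0.0])
for _ in range(int(180 / 0.1)):                  # 180 days, explicit Euler steps of 0.1 day
    state = seirh_step(state, 0.1, beta=0.4, sigma=1 / 5.2, gamma_i=1 / 10,
                       eta=0.05, gamma_h=1 / 12, N=N)
print("final S, E, I, H, R:", np.round(state, 1))
```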
|
232 |
NUMERICAL INTEGRATION OF DYNAMIC SYSTEMS VIA WAVEFORM RELAXATION TECHNIQUES; IMPLEMENTATION AND TESTING. Guarini, Marcello W. January 1983 (has links)
No description available.
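Although no abstract is available, the title names a well-established technique. The sketch below illustrates Gauss-Jacobi waveform relaxation on a toy coupled two-state linear system: each sweep integrates one subsystem over the whole time window using the other subsystem's waveform from the previous sweep. The test system, step size, and tolerance are assumptions made purely for illustration and are unrelated to the thesis implementation.

```python
import numpy as np

# Coupled system: x1' = -x1 + 0.5*x2,  x2' = -x2 + 0.5*x1,  x(0) = (1, 0).
t = np.linspace(0.0, 5.0, 501)
dt = t[1] - t[0]
x1 = np.ones_like(t)      # initial waveform guesses: constant extensions of x(0)
x2 = np.zeros_like(t)

for sweep in range(20):
    x1_old, x2_old = x1.copy(), x2.copy()
    # Integrate each decoupled subsystem with explicit Euler, treating the
    # frozen waveform of the other state as a known input signal.
    for k in range(len(t) - 1):
        x1[k + 1] = x1[k] + dt * (-x1[k] + 0.5 * x2_old[k])
        x2[k + 1] = x2[k] + dt * (-x2[k] + 0.5 * x1_old[k])
    if max(np.max(np.abs(x1 - x1_old)), np.max(np.abs(x2 - x2_old))) < 1e-10:
        break

print(f"x1(T) = {x1[-1]:.6f}, x2(T) = {x2[-1]:.6f} after {sweep + 1} relaxation sweeps")
```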
|
233 |
Design, implementation, and evaluation of node placement and data reduction algorithms for large scale wireless networks. Mehta, Hardik 01 December 2003 (has links)
No description available.
|
234 |
Analysis techniques for nanometer digital integrated circuits. Ramalingam, Anand, 1979- 29 August 2008 (has links)
As technology has scaled into the nanometer regime, manufacturing variations have emerged as a major limiter of performance (timing) in VLSI circuits. Issues related to timing are addressed in the first part of the dissertation. Statistical Static Timing Analysis (SSTA) has been proposed to perform full-chip analysis of timing under uncertainty such as manufacturing variations. In this dissertation, we propose an efficient sparse-matrix framework for a path-based SSTA. Beyond an efficient analysis framework, improving the accuracy of timing analysis requires addressing two aspects: waveform modeling and gate delay modeling. We propose a technique based on Singular Value Decomposition (SVD) that accurately models the waveform in a timing analyzer. To improve the gate delay modeling, we propose a closed-form expression based on the centroid of power dissipation. This new metric is inspired by our key observation that the Sakurai-Newton (SN) delay metric can be viewed as the centroid of current. In addition to accurately analyzing the timing of a chip, improving timing is another major concern. One way to improve timing is to scale down the threshold voltage (Vth), but doing so increases the subthreshold leakage current exponentially. Sleep transistors have been proposed to reduce leakage current while maintaining performance. We propose a path-based algorithm to size the sleep transistor to reduce leakage while maintaining the required performance. In the second part of the dissertation, we address power grid and thermal issues that arise due to the scaling of integrated circuits. In the case of power grid simulation, we propose fast and efficient techniques to analyze the power grid with accurate modeling of the transistor network. The transistor is modeled as a switch in series with an RC element, and the switch itself is modeled behaviorally. This model allows more accurate prediction of voltage drop compared to the current source model. In the case of thermal simulation, we address the common practice of ignoring the nonlinearity of thermal conductivity in silicon; we found that doing so may lead to a temperature profile that is off by 10 °C.
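The SVD-based waveform modeling mentioned above can be illustrated with a small sketch: a library of sampled transition waveforms is stacked into a matrix, and the leading left singular vectors are kept as a compact basis so that each waveform is represented by a few coefficients. The synthetic ramp waveforms and the chosen rank are assumptions for illustration, not the dissertation's actual waveform library or timing-analyzer integration.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
# Synthetic library: saturated-ramp transitions with varying slew (illustrative only).
slews = np.linspace(0.05, 0.4, 40)
W = np.column_stack([np.clip((t - 0.2) / s, 0.0, 1.0) for s in slews])

U, S, Vt = np.linalg.svd(W, full_matrices=False)
k = 3                              # rank of the reduced waveform model
basis = U[:, :k]                   # orthonormal basis spanning the waveform space
coeffs = basis.T @ W               # each waveform compressed to k coefficients
reconstruction = basis @ coeffs
err = np.max(np.abs(W - reconstruction))
print(f"max reconstruction error with k={k}: {err:.3e}")
```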
|
235 |
Properties and evolution of galaxy clustering at 2&lt;z&lt;5 based on the VIMOS Ultra Deep Survey. Durkalec, Anna 11 December 2014 (has links)
This thesis focuses on the study of the properties and evolution of galaxy clustering for galaxies in the redshift range 2&lt;z&lt;5 from the VIMOS Ultra Deep Survey (VUDS), which is the largest spectroscopic galaxy survey at z&gt;2. I was able to measure the spatial distribution of a general galaxy population at redshift z~3 for the first time with high accuracy. I quantified the galaxy clustering by estimating and modelling the projected (real-space) two-point correlation function for a general population of 3022 galaxies, and extended the clustering measurements to luminosity- and stellar mass-selected sub-samples. My results show that the clustering strength of the general galaxy population does not change significantly from redshift z~3.5 to z~2.5, but in both redshift ranges more luminous and more massive galaxies are more clustered than less luminous (massive) ones. Using the halo occupation distribution (HOD) formalism, I measured an average host halo mass at redshift z~3 significantly lower than the observed average halo masses at low redshift. I concluded that the observed star-forming population of galaxies at z~3 might have evolved into the massive and bright (Mr &lt; -21.5) galaxy population at redshift z=0. I also interpret the clustering measurements in terms of a linear large-scale galaxy bias, which I find to be significantly higher than the bias of intermediate- and low-redshift galaxies. Finally, I computed the stellar-to-halo mass ratio (SHMR) and the integrated star formation efficiency (ISFE) to study the efficiency of star formation and stellar mass assembly. I find that the integrated star formation efficiency is quite high, at ~16%, for average galaxies at z~3.
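As a minimal illustration of two-point clustering measurements of the kind described above, the sketch below computes an angle-averaged Landy-Szalay correlation function with KD-tree pair counts. It uses uniform mock positions, a random catalogue, and an arbitrary binning; it is not the projected wp(rp) statistic, the VUDS selection function, or the HOD modelling used in the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def landy_szalay(data, rand, edges):
    """Angle-averaged two-point correlation function xi(r), Landy-Szalay estimator."""
    td, tr = cKDTree(data), cKDTree(rand)
    nd, nr = len(data), len(rand)
    # Cumulative pair counts at each edge, differenced into per-bin counts,
    # normalized by the number of ordered pairs.
    dd = np.diff(td.count_neighbors(td, edges)) / (nd * (nd - 1))
    rr = np.diff(tr.count_neighbors(tr, edges)) / (nr * (nr - 1))
    dr = np.diff(td.count_neighbors(tr, edges)) / (nd * nr)
    return (dd - 2 * dr + rr) / rr

rng = np.random.default_rng(1)
box = 100.0                                   # Mpc/h, illustrative box size
data = rng.uniform(0, box, size=(3022, 3))    # stand-in for galaxy positions
rand = rng.uniform(0, box, size=(30000, 3))   # random (unclustered) catalogue
edges = np.logspace(-0.5, 1.3, 12)            # separation bins in Mpc/h
print(landy_szalay(data, rand, edges))        # ~0 everywhere for uniform mock data
```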
|
236 |
Challenges of Large-Scale Software Testing and the Role of Quality Characteristics: Empirical Study. Belay, Eyuel January 2020 (has links)
Currently, information technology influences every walk of life, and our lives increasingly depend on software and its functionality. Therefore, the development of high-quality software products is indispensable. In recent years, there has also been increasing demand for high-quality software products, yet delivering high-quality software products and services is not possible at no cost. Furthermore, software systems have become complex and challenging to develop, test, and maintain because of scalability. With increasing complexity in large-scale software development, testing has therefore become a crucial issue affecting the quality of software products. In this paper, large-scale software testing challenges concerning quality, and their respective mitigations, are reviewed using a systematic literature review and interviews. Existing literature on large-scale software development deals with issues such as requirement and security challenges, so research regarding large-scale software testing and its mitigations has not been dealt with in depth.

In this study, a total of 2710 articles were collected from 1995-2020: 1137 (42%) from IEEE, 733 (27%) from Scopus, and 840 (31%) from Web of Science. Sixty-four relevant articles were selected using a systematic literature review; to include missed but relevant articles, snowballing techniques were applied and 32 additional articles were added. From these 96 articles, a total of 81 challenges of large-scale software testing were identified, of which 32 (40%) concerned performance, 10 (12%) security, 10 (12%) maintainability, 7 (9%) reliability, 6 (8%) compatibility, 10 (12%) general issues, 3 (4%) functional suitability, 2 (2%) usability, and 1 (1%) portability. The author identified challenges relating mainly to the performance, security, reliability, maintainability, and compatibility quality attributes, but few challenges concerning functional suitability, portability, and usability. The results of the study can be used as a guideline in large-scale software testing projects to pinpoint potential challenges and act accordingly.
|
237 |
Adaptive Fault Tolerance Strategies for Large Scale Systems. George, Cijo January 2012 (has links) (PDF)
Exascale systems of the future are predicted to have a mean time between node failures (MTBF) of less than one hour. At such a low MTBF, the number of processors available for execution of a long-running application can vary widely throughout its execution. Employing traditional fault tolerance strategies like periodic checkpointing in these highly dynamic environments may not be effective because of the high number of application failures, resulting in a large amount of work lost due to rollbacks, in addition to the increased recovery overheads. In this context, it is essential to have fault tolerance strategies that can adapt to changing node availability and also help avoid a significant number of application failures. In this thesis, we present two adaptive fault tolerance strategies that make use of node failure prediction mechanisms to provide proactive fault tolerance for long-running parallel applications on large-scale systems.
The first part of the thesis deals with an adaptive fault tolerance strategy for malleable applications. We present ADFT, an adaptive fault tolerance framework for long-running malleable applications that maximizes application performance in the presence of failures. We first develop cost models that consider different factors, such as the accuracy of node failure predictions and application scalability, for evaluating the benefits of various fault tolerance actions including checkpointing, live migration, and rescheduling. Our adaptive framework then uses the cost models to make runtime decisions, dynamically selecting fault tolerance actions at different points of application execution to minimize application failures and maximize performance. Simulations with real and synthetic failure traces show that our approach outperforms existing fault tolerance mechanisms for malleable applications, yielding up to 23% improvement in work done by the application in the presence of failures, and is effective even for petascale and exascale systems.
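A minimal sketch of prediction-driven action selection of this general kind follows. The expected-cost expressions, parameter names, and numbers are simplified assumptions made for illustration; they are not the ADFT cost models from the thesis.

```python
def choose_action(p_fail, ckpt_cost, ckpt_interval, restart_cost,
                  migrate_cost, resched_cost):
    """Pick the fault tolerance action with the lowest expected overhead (seconds)."""
    costs = {
        # Keep checkpointing: pay the checkpoint, and with probability p_fail
        # lose on average half an interval of work plus the restart time.
        "checkpoint": ckpt_cost + p_fail * (0.5 * ckpt_interval + restart_cost),
        # Migrate off the predicted-to-fail node: pay the migration time up front.
        "migrate": migrate_cost,
        # Reschedule (shrink) the job to exclude the suspect node.
        "reschedule": resched_cost,
    }
    return min(costs, key=costs.get)

# Example: a node is predicted to fail with 70% probability before the next checkpoint.
print(choose_action(p_fail=0.7, ckpt_cost=60, ckpt_interval=3600,
                    restart_cost=300, migrate_cost=120, resched_cost=400))
```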
In the second part of the thesis, we present a fault tolerance strategy using adaptive process replication that can provide fault tolerance for applications using partial replication of a set of application processes. This fault tolerance framework adaptively changes the set of replicated processes (replicated set) periodically based on node failure predictions to avoid application failures. We have developed an MPI prototype implementation, PAREP-MPI that allows dynamically changing the replicated set of processes for MPI applications. Experiments with real scientific applications on real systems have shown that the overhead of PAREP-MPI is minimal. We have shown using simulations with real and synthetic failure traces that our strategy involving adaptive process replication significantly outperforms existing mechanisms providing up to 20% improvement in application efficiency even for exascale systems. Significant observations are also made which can drive future research efforts in fault tolerance for large and very large scale systems.
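The adaptive choice of which processes to replicate can likewise be sketched as a simple ranking by predicted failure probability; this is an assumed illustrative policy, not PAREP-MPI's actual selection logic or interface.

```python
def select_replicated_set(failure_prob, k):
    """Return the k ranks whose nodes are most likely to fail in the next interval;
    these ranks receive replicas (illustrative policy, not PAREP-MPI's)."""
    return set(sorted(range(len(failure_prob)), key=lambda r: -failure_prob[r])[:k])

# Example: 8 ranks, replicate the 3 most at-risk ones for the coming interval.
print(select_replicated_set([0.02, 0.40, 0.05, 0.75, 0.10, 0.01, 0.33, 0.08], k=3))
```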
|
238 |
The cosmic web unravelled: a study of filamentary structure in the Galaxy and Mass Assembly survey. Alpaslan, Mehmet January 2014 (has links)
I have investigated the properties of the large-scale structure of the nearby Universe using data from the Galaxy and Mass Assembly survey (GAMA). I generated complementary halo mass estimates for all groups in the GAMA Galaxy Group Catalogue (G³C) using a modified caustic mass estimation algorithm. On average, the caustic mass estimates agree with dynamical mass estimates within a factor of 2 in 90% of groups. A volume-limited sample of these groups and galaxies is used to generate the large-scale structure catalogue. An adapted minimal spanning tree algorithm is used to identify and classify structures, detecting 643 filaments that measure up to 200 Mpc/h, each containing 8 groups on average. A secondary population of smaller coherent structures, dubbed 'tendrils', that link filaments together or penetrate into voids is also detected. On average, tendrils measure around 10 Mpc/h and contain 6 galaxies. The so-called line correlation function is used to show that tendrils are real structures rather than accidental alignments. A population of isolated void galaxies is also identified. The properties of filaments and tendrils in observed and mock GAMA galaxy catalogues agree well. I go on to show that voids from other surveys that overlap with GAMA regions contain a large number of galaxies, primarily belonging to tendrils. This implies that void sizes are strongly dependent on the number density and sensitivity limits of the galaxies observed by a survey. Finally, I examine the properties of galaxies in different environments, finding that galaxies in filaments tend to be early-type, bright, spheroidal, and red, whilst those in voids are typically the opposite: blue, late-type, and fainter. I show that group mass does not correlate with the brightness and morphologies of galaxies and that the primary driver of galaxy evolution is stellar mass.
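A minimal sketch of minimal-spanning-tree structure finding in the spirit described above: build an MST over mock group positions, cut edges longer than a linking length, and group the surviving branches into candidate filaments. The 2D mock positions and the linking length are assumptions for illustration, not the GAMA group catalogue or the adapted algorithm used in the thesis.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial import distance_matrix

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 200.0, size=(300, 2))      # mock group positions, Mpc/h
mst = minimum_spanning_tree(distance_matrix(pos, pos)).toarray()

linking_length = 15.0                             # Mpc/h, assumed edge cut
pruned = np.where((mst > 0) & (mst <= linking_length), mst, 0.0)
n_comp, labels = connected_components(csr_matrix(pruned), directed=False)
sizes = np.bincount(labels)
print(f"{(sizes > 1).sum()} candidate filaments (structures with more than one group); "
      f"largest contains {sizes.max()} groups")
```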
|
239 |
AUTOMATED SYSTEM FOR IDENTIFYING USABLE SENSORS IN A LARGE SCALE SENSOR NETWORK FOR COMPUTER VISION. Aniesh Chawla (6630980) 11 June 2019 (has links)
Numerous organizations around the world deploy sensor networks, especially visual sensor networks, for applications such as monitoring traffic, security, and emergencies. With advances in computer vision technology, the potential applications of these sensor networks have expanded, leading to increased demand for the deployment of large-scale sensor networks.

Sensors in a large network differ in location, position, hardware, and other characteristics. These differences lead to varying usefulness, as the sensors provide different quality of information. As an example, consider the cameras deployed by the Department of Transportation (DOT): we want to know whether the same traffic cameras could be used to monitor the damage caused by a hurricane.

Presently, significant manual effort is required to identify useful sensors for different applications, and no automated system exists that determines the usefulness of sensors based on the application. Previous methods for visual sensor networks focus on finding the dependability of sensors based only on infrastructural and system issues such as network congestion, battery failures, and hardware failures; they do not consider the quality of information from the sensor network. In this paper, we present an automated system that identifies the most useful sensors in a network for a given application. We evaluate our system on 2,500 real-time live sensors from four cities for traffic monitoring and people counting applications, and we compare the result of our automated system with a manual score for each camera.

The results suggest that the proposed system reliably finds useful sensors and that its output matches the manual scoring system. They also show that a camera network deployed for one application can be useful for another application.
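The idea of scoring sensors automatically and checking agreement with manual scores can be sketched as follows. The detector confidences, manual ratings, and camera count below are synthetic placeholders standing in for the output of a real vision model and the 2,500-camera evaluation described above.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_cameras, n_frames = 20, 50
# Stand-in for per-frame detector confidences from a task-specific vision model.
detector_confidences = rng.uniform(0, 1, size=(n_cameras, n_frames))
auto_score = detector_confidences.mean(axis=1)               # automated usefulness score
manual_score = auto_score + rng.normal(0, 0.05, n_cameras)   # synthetic manual ratings

rho, _ = spearmanr(auto_score, manual_score)                 # rank agreement with manual scores
ranking = np.argsort(-auto_score)                            # most useful cameras first
print("top-5 cameras:", ranking[:5], f"| rank correlation with manual scores: {rho:.2f}")
```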
|
240 |
Complexity management and modelling of VLSI systems. Dickinson, Alex. January 1988 (has links) (PDF)
Bibliography: leaves 249-260.
|