101
An ecological comparison of two streams in Central Ohio / Phinney, George Jay. January 1967.
No description available.
102
Implementing Efficient Algorithms for Computing Runs / Weng, Chia-Chun. 10 1900.
In the first part of this thesis we present a C++ implementation of an improved O(n log n) algorithm to compute runs, the number of primitively rooted distinct squares, and maximal repetitions, based on Crochemore's partitioning algorithm. This is joint work with Mei Jiang and extends her work on the problem. In the second part we present a C++ implementation of a linear algorithm to compute runs based on Main's and on Kolpakov and Kucherov's algorithms, following this strategy:
1. Compute the suffix array and LCP array in linear time;
2. Using the suffix array and LCP array, compute the Lempel-Ziv factorization in linear time;
3. Using the Lempel-Ziv factorization, compute in linear time a subset of the runs that includes all leftmost runs, following Main's algorithm;
4. Using Kolpakov and Kucherov's approach, compute the remaining runs in linear time.
For our linear-time implementation we partially relied on Jonathan Fischer's Java implementation. / Master of Science (MSc)
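To make the object being computed concrete, below is a deliberately naive reference sketch (quadratic or worse) that enumerates runs directly from the definition. It is not the O(n log n) or linear-time implementation described above; the function names are illustrative only.

```cpp
// Naive reference for "runs": a run is a maximal substring s[i..j] whose
// minimal period p satisfies j - i + 1 >= 2p and which cannot be extended
// left or right while keeping period p. Brute force for illustration only;
// the implementations above use suffix arrays, LCP arrays and the
// Lempel-Ziv factorization to reach O(n log n) and O(n).
#include <iostream>
#include <string>
#include <tuple>
#include <vector>

// Smallest p such that s[k] == s[k + p] for all k with i <= k and k + p <= j.
static int minimalPeriod(const std::string& s, int i, int j) {
    int len = j - i + 1;
    for (int p = 1; p < len; ++p) {
        bool ok = true;
        for (int k = i; k + p <= j; ++k)
            if (s[k] != s[k + p]) { ok = false; break; }
        if (ok) return p;
    }
    return len;  // the substring has no period shorter than itself
}

// Returns (start, end, period) triples for all runs in s.
std::vector<std::tuple<int, int, int>> naiveRuns(const std::string& s) {
    const int n = static_cast<int>(s.size());
    std::vector<std::tuple<int, int, int>> runs;
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j) {
            int p = minimalPeriod(s, i, j);
            if (j - i + 1 < 2 * p) continue;                        // not a repetition
            bool leftMaximal  = (i == 0)     || s[i - 1] != s[i - 1 + p];
            bool rightMaximal = (j == n - 1) || s[j + 1] != s[j + 1 - p];
            if (leftMaximal && rightMaximal) runs.emplace_back(i, j, p);
        }
    return runs;
}

int main() {
    for (auto [i, j, p] : naiveRuns("mississippi"))
        std::cout << i << ".." << j << " period " << p << '\n';
}
```

On "mississippi" this reports the four runs (1,7,3), (2,3,1), (5,6,1) and (8,9,1), i.e. "ississi" with period 3 and the three squares "ss", "ss" and "pp".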
103
A Performance Analysis of the Minimax Multivariate Quality Control Chart / Rehmert, Ian Jon. 18 December 1997.
A performance analysis of three different Minimax control charts is performed with respect to their Chi-Square control chart counterparts under several different conditions. A unique control chart must be constructed for each process described by a unique combination of quality characteristic mean vector and associated covariance matrix. The three charts under consideration differ in the number of quality characteristic variables of concern. In each case, without loss of generality, the in-control quality characteristic mean vector is assumed to have zero entries and the associated covariance matrix is assumed to have non-negative entries. The performance of the Chi-Square and Minimax charts is compared under different values of the sample size, the probability of a Type I error, and selected shifts in the quality characteristic mean vector. Minimax and Chi-Square charts that are compared share identical in-control average run lengths (ARL), making the out-of-control ARL the appropriate performance measure. A combined Tausworthe pseudorandom number generator is used to generate the out-of-control mean vectors. Issues regarding multivariate uniform pseudorandom number generation are addressed. / Master of Science
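For reference, a standard formulation of the Chi-Square chart used as the benchmark above, together with the ARL relations that make the out-of-control ARL the natural performance measure (the Minimax chart's control limits are constructed differently and are not reproduced here):

```latex
% Chi-Square chart statistic for a sample of size n from a p-variate process
% with in-control mean \mu_0 and covariance \Sigma:
\[
  \chi^2_0 = n\,(\bar{\mathbf{x}}-\boldsymbol{\mu}_0)^{\mathsf T}
             \boldsymbol{\Sigma}^{-1}(\bar{\mathbf{x}}-\boldsymbol{\mu}_0),
  \qquad \text{signal if } \chi^2_0 > \chi^2_{\alpha,\,p}.
\]
% With Type I error probability \alpha, and \beta the probability that a
% sample fails to signal under a given mean shift:
\[
  \mathrm{ARL}_{\text{in}} = \frac{1}{\alpha},
  \qquad
  \mathrm{ARL}_{\text{out}} = \frac{1}{1-\beta}.
\]
```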
104
Optimizing Distributed Transactions: Speculative Client Execution, Certified Serializability, and High Performance Run-Time / Pandey, Utkarsh. 01 September 2016.
On-line services already form an important part of modern life, with immense potential for growth. Most of these services are supported by transactional systems, which in many cases are backed by database management systems (DBMSs). Many on-line services use replication to ensure high availability, fault tolerance and scalability. Replicated systems typically consist of different nodes running the service, coordinated by a distributed algorithm that aims to drive all the nodes through the same sequence of states by providing a total order over their operations. Thus, optimizing both the local DBMS operations (through concurrency control) and the distributed algorithm driving the replicated service can enhance the performance of on-line services.
Deferred Update Replication (DUR) is a well-known approach to designing scalable replicated systems. In this method, the database is fully replicated on each distributed node. User threads perform transactions locally and optimistically before a total order is reached. DUR-based systems work best when remote transactions rarely conflict. Even in such scenarios, transactions may abort due to local contention on nodes. A generally adopted method to alleviate local contention is to invoke a local certification phase that checks whether a transaction conflicts with other local transactions that have already completed. If so, the transaction is aborted locally without burdening the ordering layer. However, this approach still results in many local aborts, which significantly degrades performance.
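A minimal sketch of the local certification step described above, assuming a simple versioned key-value store; this illustrates the general DUR mechanism, not PXDUR's speculative-forwarding extension, and all names are assumptions.

```cpp
// Local certification in a DUR-style node: before a locally executed
// transaction is forwarded to the ordering layer, its read-set is checked
// against the versions installed by transactions that committed after it
// took its snapshot. A conflict causes a cheap local abort.
#include <cstdint>
#include <string>
#include <unordered_map>

struct Store {
    std::unordered_map<std::string, uint64_t> version;  // key -> commit timestamp of last write
    uint64_t clock = 0;                                  // timestamp of the last local commit
};

struct Transaction {
    uint64_t snapshot;                                    // store clock when the transaction started
    std::unordered_map<std::string, uint64_t> readSet;    // key -> version observed at read time
    std::unordered_map<std::string, std::string> writeSet;
};

// Returns true if the transaction may be forwarded for total ordering,
// false if it must be aborted locally (a read has become stale).
bool certifyLocally(const Store& store, const Transaction& tx) {
    for (const auto& [key, seenVersion] : tx.readSet) {
        auto it = store.version.find(key);
        uint64_t current = (it == store.version.end()) ? 0 : it->second;
        if (current != seenVersion) return false;         // conflict: local abort
    }
    return true;
}

int main() {
    Store store;
    store.version["a"] = 1;
    store.clock = 1;
    Transaction tx{/*snapshot=*/1, {{"a", 1}}, {{"a", "new value"}}};
    return certifyLocally(store, tx) ? 0 : 1;             // certifies: "a" unchanged since snapshot
}
```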
The first main contribution of this thesis is PXDUR, a DUR-based transactional system which enhances the performance of DUR-based systems by alleviating local contention and increasing the transaction commit rate. PXDUR alleviates local contention by allowing speculative forwarding of shared objects from locally committed transactions awaiting total order to running transactions. PXDUR allows transactions running in parallel to use speculative forwarding, thereby enabling the system to utilize highly parallel multi-core platforms. PXDUR also enhances performance by optimizing the transaction commit process: it allows committing transactions to skip read-set validation when it is safe to do so. PXDUR achieves performance gains of an order of magnitude over its closest competitors under favorable conditions.
Transactions also form an important part of centralized DBMSs, which tend to support multi-threaded access to utilize highly parallel hardware platforms. Applications can be wrapped in transactions, which then access the DBMS according to the rules of concurrency control. This allows users to develop applications that run on DBMSs without worrying about synchronization. Serializability is the de-facto standard form of isolation required by transactions for many applications. The existing methods employed by DBMSs to enforce serializability rely on explicit fine-grained locking. This eager-locking-based approach is pessimistic and can be too conservative for many applications.
The locking approach can severely limit the performance of DBMSs, especially in scenarios with moderate to high contention. This leads to the second major contribution of this thesis: TSAsR, an adaptive transaction processing framework that can be applied to DBMSs to improve performance. TSAsR allows the DBMS's internal synchronization to be more relaxed and enforces serializability through the processing of external meta-data in an optimistic manner. It does not require any changes to application code and achieves orders-of-magnitude performance improvements for high and moderate contention cases.
Replicated transaction processing systems require a distributed algorithm to keep the system consistent by ensuring that each node executes the same sequence of deterministic commands. These algorithms generally employ State Machine Replication (SMR). Enhancing the performance of such algorithms is a potential way to increase the performance of distributed systems. However, the development of new SMR algorithms is limited in production settings because of the huge verification cost involved in proving their correctness.
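As a hedged illustration of the SMR idea referenced above (not of Verified JPaxos or its HOL-generated automaton), the sketch below shows two replicas applying the same totally ordered command log to a deterministic state machine and therefore reaching the same state; all names are assumptions.

```cpp
// Minimal SMR illustration: every replica applies the same totally ordered
// sequence of deterministic commands, so all replicas move through the same
// sequence of states. The ordering layer (e.g. Multipaxos) that produces
// the log is out of scope here.
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct Command {                      // deterministic key-value update
    std::string key;
    int64_t     delta;
};

class Replica {
public:
    // Apply every not-yet-applied command, in log order.
    void apply(const std::vector<Command>& orderedLog) {
        for (; nextSlot_ < orderedLog.size(); ++nextSlot_) {
            const Command& c = orderedLog[nextSlot_];
            state_[c.key] += c.delta;
        }
    }
    int64_t read(const std::string& key) const {
        auto it = state_.find(key);
        return it == state_.end() ? 0 : it->second;
    }
private:
    std::size_t nextSlot_ = 0;                          // next log position to apply
    std::unordered_map<std::string, int64_t> state_;    // deterministic service state
};

int main() {
    std::vector<Command> log = {{"x", 5}, {"x", -2}, {"y", 7}};  // total order
    Replica r1, r2;
    r1.apply(log);
    r2.apply(log);
    // Identical logs + deterministic application => identical states.
    std::cout << r1.read("x") << ' ' << r2.read("x") << '\n';    // prints: 3 3
}
```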
There are frameworks that allow easy specification of SMR algorithms and subsequent verification. However, algorithms implemented in such frameworks give poor performance. This leads to the third major contribution of this thesis: Verified JPaxos, a JPaxos-based runtime system that can be integrated with an easy-to-verify I/O automaton based on the Multipaxos protocol. Multipaxos is specified in Higher Order Logic (HOL) for ease of verification, and the specification is used to generate executable code representing the Multipaxos state changes (an I/O automaton). The runtime drives the HOL-generated code and interacts with the service and the network to create a fully functional replicated Multipaxos system. The runtime inherits its design from JPaxos, along with some optimizations. It achieves a significant improvement over a state-of-the-art SMR verification framework while remaining comparable in performance to non-verified systems. / Master of Science
105
Predicting Performance Run-time Metrics in Fog Manufacturing using Multi-task Learning / Nallendran, Vignesh Raja. 26 February 2021.
The integration of Fog-Cloud computing in manufacturing has given rise to a new paradigm called Fog manufacturing. Fog manufacturing is a form of distributed computing platform that integrates a Fog-Cloud collaborative computing strategy to facilitate responsive, scalable, and reliable data analysis in manufacturing networks. The computation services provided by Fog-Cloud computing can effectively support quality prediction, process monitoring, and diagnosis efforts in a timely manner for manufacturing processes. However, the communication and computation resources for Fog-Cloud computing are limited in Fog manufacturing. Therefore, it is important to effectively utilize the computation services based on optimal computation task offloading, scheduling, and hardware autoscaling strategies to finish the computation tasks on time without compromising the quality of the computation service. A prerequisite for adopting such optimal strategies is to accurately predict the run-time metrics (e.g., time latency) of the Fog nodes by capturing their inherent stochastic nature in real time, because these run-time metrics are directly related to the performance of the computation service in Fog manufacturing. Specifically, since the computation flow and the data querying activities vary between the Fog nodes in practice, the run-time metrics that reflect performance in the Fog nodes are heterogeneous in nature, and this performance cannot be effectively modeled through traditional predictive analysis. In this thesis, a multi-task learning methodology is adopted to predict the run-time metrics that reflect performance in Fog manufacturing by addressing the heterogeneities among the Fog nodes. A Fog manufacturing testbed is employed to evaluate the prediction accuracies of the proposed model and benchmark models. The proposed model can be further extended to computation task offloading and architecture optimization in Fog manufacturing to minimize time latency and improve the robustness of the system. / Master of Science / Smart manufacturing aims at utilizing the Internet of Things (IoT), data analytics, cloud computing, etc. to handle varying market demand without compromising productivity or quality in a manufacturing plant. To support these efforts, Fog manufacturing has been identified as a suitable computing architecture to handle the surge of data generated from IoT devices. In Fog manufacturing, computational tasks are completed locally by means of interconnected computing devices called Fog nodes. However, the communication and computation resources in Fog manufacturing are limited. Therefore, their effective utilization requires optimal strategies to schedule the computational tasks and assign them to the Fog nodes. A prerequisite for adopting such strategies is to accurately predict the performance of the Fog nodes. In this thesis, a multi-task learning methodology is adopted to predict the performance in Fog manufacturing. Specifically, since the computation flow and the data querying activities vary between the Fog nodes in practice, the metrics that reflect performance in the Fog nodes are heterogeneous in nature and cannot be effectively modeled through conventional predictive analysis. A Fog manufacturing testbed is employed to evaluate the prediction accuracies of the proposed model and benchmark models.
The results show that the multi-task learning model has better prediction accuracy than the benchmarks and that it can model the heterogeneities among the Fog nodes. The proposed model can further be incorporated in scheduling and assignment strategies to effectively utilize Fog manufacturing's computational services.
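One common mean-regularized multi-task learning formulation, stated here as an assumption for illustration rather than the exact model of the thesis, that captures the idea of sharing statistical strength across heterogeneous Fog nodes while keeping node-specific parameters:

```latex
% One regression task per Fog node t = 1,...,T with run-time observations (x_{ti}, y_{ti}).
% Each node keeps its own weights w_t, but all w_t are pulled toward a shared
% mean \bar{w}, so nodes with little data borrow strength from the others.
\[
  \min_{w_1,\dots,w_T,\;\bar{w}}
  \sum_{t=1}^{T}\sum_{i=1}^{n_t}\bigl(y_{ti}-w_t^{\mathsf T}x_{ti}\bigr)^2
  \;+\;\lambda_1\sum_{t=1}^{T}\lVert w_t-\bar{w}\rVert_2^{2}
  \;+\;\lambda_2\lVert\bar{w}\rVert_2^{2}
\]
```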
106
A Genetic Algorithm-Based Place-and-Route Compiler For A Run-time Reconfigurable Computing System / Kahne, Brian C. 14 May 1997.
Configurable Computing is a technology which attempts to increase computational power by customizing the computational platform to the specific problem at hand. An experimental computing model known as wormhole run-time reconfiguration allows for partial reconfiguration and is highly scalable. In this approach, configuration information and data are grouped together in a computing unit called a stream, which can tunnel through the chip creating a series of interconnected pipelines.
The Colt/Stallion project at Virginia Tech implements this computing model in integrated circuits. In order to create applications for this platform, a compiler is needed that can convert a human-readable description of an algorithm into the sequences of configuration information understood by the chip itself. This thesis covers two compilers which perform this task. The first compiler, Tier1, requires a programmer to explicitly describe placement and routing inside of the chip. This could be considered equivalent to an assembler for a traditional microprocessor. The second compiler, Tier2, allows the user to express a problem as a dataflow graph. Actual placing and routing of this graph onto the physical hardware is taken care of through the use of a genetic algorithm.
A description of the two languages is presented, followed by example applications. In addition, experimental results are included which examine the behavior of the genetic algorithm and how alterations to various genetic operator probabilities affect performance. / Master of Science
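A hedged sketch of a genetic-algorithm placer in the spirit described above: a chromosome is a permutation assigning dataflow-graph nodes to sites, fitness is an assumed toy wire-length cost, and the mutation probability plays the role of the operator probabilities examined in the experiments. It uses truncation selection plus swap mutation only; a full placer such as Tier2 would also use a permutation-preserving crossover (e.g., PMX). All names and the cost function are assumptions, not the thesis's implementation.

```cpp
#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

using Placement = std::vector<int>;                     // gene i = physical site of node i

// Assumed toy objective: consecutively numbered nodes prefer nearby sites,
// so cost is the total distance between their assigned sites.
double cost(const Placement& p) {
    double c = 0.0;
    for (std::size_t i = 1; i < p.size(); ++i) c += std::abs(p[i] - p[i - 1]);
    return c;
}

Placement evolve(int nodes, int popSize, int generations,
                 double mutationProb, std::mt19937& rng) {
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::uniform_int_distribution<int> site(0, nodes - 1);
    std::vector<Placement> pop(popSize);
    for (auto& p : pop) {                               // random initial placements
        p.resize(nodes);
        std::iota(p.begin(), p.end(), 0);
        std::shuffle(p.begin(), p.end(), rng);
    }
    auto byCost = [](const Placement& a, const Placement& b) { return cost(a) < cost(b); };
    for (int g = 0; g < generations; ++g) {
        std::sort(pop.begin(), pop.end(), byCost);      // truncation selection: keep the best half
        std::uniform_int_distribution<int> parent(0, popSize / 2 - 1);
        for (int i = popSize / 2; i < popSize; ++i) {
            pop[i] = pop[parent(rng)];                  // clone a surviving parent
            if (coin(rng) < mutationProb)               // swap mutation keeps the permutation valid
                std::swap(pop[i][site(rng)], pop[i][site(rng)]);
        }
    }
    std::sort(pop.begin(), pop.end(), byCost);
    return pop.front();                                 // best placement found
}

int main() {
    std::mt19937 rng(42);
    Placement best = evolve(/*nodes=*/16, /*popSize=*/40, /*generations=*/200,
                            /*mutationProb=*/0.3, rng);
    std::cout << "best cost: " << cost(best) << '\n';
}
```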
107
Treasury bills: a comprehensive study of their temporal and cross-sectional behavior / Hughes, Michael P. 01 July 2003.
No description available.
108
Process monitoring and feedback control using multiresolution analysis and machine learning / Ganesan, Rajesh. 01 June 2005.
Online process monitoring and feedback control are two widely researched aspects that can impact the performance of a myriad of process applications. Semiconductor manufacturing is one such application that, due to the ever-increasing demands placed on its quality and speed, holds tremendous potential for further research and development in the areas of monitoring and control. One of the key areas of semiconductor manufacturing that has received significant attention among researchers and practitioners in recent years is the online, sensor-based monitoring and feedback control of its nanoscale wafer fabrication process. Monitoring and feedback control strategies for nanomanufacturing processes often require a combination of monitoring using nonstationary and multiscale signals and robust feedback control using complex process models. It is also essential for the monitoring and feedback control strategies to possess stringent properties such as high speed of execution, low cost of operation, ease of implementation, high accuracy, and capability for online implementation. Due to these requirements, there is a need to develop state-of-the-art sensor data processing algorithms that perform far better than those currently available, both in the literature and commercially in the form of software. The contributions of this dissertation are threefold. It first focuses on the development of an efficient online scheme for process monitoring. The scheme combines the potential of wavelet-based multiresolution analysis and the sequential probability ratio test to develop a very sensitive strategy to detect changes in nonstationary signals. Secondly, the dissertation presents a novel online feedback control scheme. The control problem is cast in the framework of probabilistic dynamic decision making, and the control scheme is built on the mathematical foundations of wavelet-based multiresolution analysis, dynamic programming, and machine learning. An analysis of the convergence of the control scheme is also presented. Finally, the monitoring and control schemes are tested on a nanoscale manufacturing process (chemical mechanical planarization, CMP) used in silicon wafer fabrication. The results obtained from experimental data clearly indicate that the approaches developed outperform the existing approaches. The novelty of the research in this dissertation stems from the fact that it furthers the science of sensor-based process monitoring and control by uniting sophisticated concepts from signal processing, statistics, stochastic processes, and artificial intelligence, while remaining versatile for many real-world process applications.
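For context, the sequential probability ratio test that the monitoring scheme builds on, stated in its textbook (Wald) form; the thesis applies it to wavelet coefficients from the multiresolution analysis, a coupling not shown here:

```latex
% Wald's SPRT for H_0: x_i \sim f_0 versus H_1: x_i \sim f_1, applied sequentially:
\[
  \Lambda_n = \sum_{i=1}^{n}\log\frac{f_1(x_i)}{f_0(x_i)},
  \qquad
  \Lambda_n \ge A \Rightarrow \text{signal a change (accept } H_1\text{)},
  \quad
  \Lambda_n \le B \Rightarrow \text{accept } H_0,
  \quad
  \text{otherwise keep sampling.}
\]
% Wald's approximate thresholds for error probabilities \alpha and \beta:
\[
  A \approx \log\frac{1-\beta}{\alpha},
  \qquad
  B \approx \log\frac{\beta}{1-\alpha}.
\]
```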
109
Efterfrågans priselasticitet på cigaretter på kort- och lång sikt : En studie av effekten på cigarettskatten och cigarettpriset i Sverige mellan år 1996-2012 / The price elasticity of demand for cigarettes in the short and long run: a study of the effect on the cigarette tax and the cigarette price in Sweden, 1996-2012 / Hamrén, Jesper; Viktorsson, Anna. January 2014.
The study examines the price elasticity of demand for cigarettes in the short and long run in Sweden. The time period for the study is 17 years and covers the years 1996-2012. The results of the study show that the price elasticity of demand for cigarettes in the long run is higher than in the short run for Swedish consumers, which is in line with previous studies in the area. The fact that the price elasticity of demand for cigarettes is higher in the long run indicates that the substitution effect has a significant impact on the price elasticity of demand for cigarettes in the long run. The study was conducted in two parts: the authors investigated the effect of the cigarette tax on the cigarette price and, in turn, the effect of the cigarette price on the demand for cigarettes in Sweden. The combined result of the study demonstrates that increasing the cigarette tax by 10 per cent reduces the demand for cigarettes by 5 per cent, while government revenues from the cigarette tax increase by 4.5 per cent. The result in this paper shows why the state's incentive to raise the cigarette tax is twofold: a tax increase generates health benefits through reduced consumption and generates increased revenues for the state. These incentives are also shown to have a greater impact in the long run.
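The quoted figures are mutually consistent, as a short check shows, assuming tax revenue scales as the tax rate times the quantity sold and reading the elasticity directly off the quoted percentage changes:

```latex
% Tax elasticity of demand implied by the quoted figures:
\[
  \varepsilon_{\text{tax}} \approx \frac{\%\Delta Q}{\%\Delta \text{tax}}
                           = \frac{-5\%}{+10\%} = -0.5 .
\]
% Revenue effect of the 10 per cent tax increase, with revenue R \propto \text{tax}\times Q:
\[
  \frac{R_{\text{new}}}{R_{\text{old}}} = 1.10 \times 0.95 = 1.045
  \quad\Longrightarrow\quad \text{revenue rises by } 4.5\ \text{per cent}.
\]
```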
110
Optimisation des critères d'identification des électrons et recherche de supersymétrie dans les canaux avec deux leptons de même charge à partir des données du détecteur ATLAS / Optimisation of electron identification criteria and search for supersymmetry in final states with two same-sign leptons with the ATLAS detector / Kahn, Sébastien. 13 October 2016.
The LHC collisions' centre-of-mass energy rose to 13 TeV in 2015, strongly increasing the production cross sections of hypothetical heavy particles (for example by a factor 50 for pair production of gluinos with a 1.5 TeV mass) and thus paving the way for new physics searches. An optimisation of the electron identification criteria and a search for Supersymmetry with the ATLAS detector data were performed in this context. The first part is dedicated to the definition and the expected performance of the electron identification used for the trigger and the analysis of the 2015 data. The methodology defined to adapt these criteria to the experimental constraints is detailed. The second part is dedicated to the search for strongly produced supersymmetric particles in events with two same-sign leptons (electrons or muons), jets and missing transverse energy, using the full 2015 dataset (3.2 fb⁻¹ at √s = 13 TeV). The main aspects of the analysis are described, paying particular attention to the estimation of the experimental background. As no significant excess over the Standard Model expectation is observed, the results are interpreted using several simplified models to set limits on the masses of the gluinos, the squarks and the neutralinos. For instance, gluino masses up to 1.1 TeV are excluded, which represents an improvement of about 150 GeV with respect to the previous limits for some models with compressed mass spectra.
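For readers outside the field, the two kinematic quantities mentioned above have standard definitions (stated as general conventions, not the thesis's exact reconstruction-level definitions):

```latex
% Centre-of-mass energy of the proton-proton collisions analysed here:
\[
  \sqrt{s} = 13~\mathrm{TeV}.
\]
% Missing transverse energy: the momentum imbalance in the plane transverse to the beam,
% from the negative vector sum of the transverse momenta of the reconstructed objects:
\[
  E_{\mathrm{T}}^{\mathrm{miss}} = \Bigl|\,-\sum_{i}\vec{p}_{\mathrm{T},i}\,\Bigr| .
\]
```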