11

Predicting Performance Run-time Metrics in Fog Manufacturing using Multi-task Learning

Nallendran, Vignesh Raja 26 February 2021 (has links)
The integration of Fog-Cloud computing in manufacturing has given rise to a new paradigm called Fog manufacturing. Fog manufacturing is a distributed computing platform that integrates a Fog-Cloud collaborative computing strategy to facilitate responsive, scalable, and reliable data analysis in manufacturing networks. The computation services provided by Fog-Cloud computing can effectively support quality prediction, process monitoring, and diagnosis efforts in a timely manner for manufacturing processes. However, the communication and computation resources for Fog-Cloud computing are limited in Fog manufacturing. Therefore, it is essential to utilize the computation services effectively, based on optimal computation task offloading, scheduling, and hardware autoscaling strategies, to finish the computation tasks on time without compromising the quality of the computation service. A prerequisite for adopting such optimal strategies is to accurately predict the run-time metrics (e.g., time-latency) of the Fog nodes by capturing their inherent stochastic nature in real time, because these run-time metrics are directly related to the performance of the computation service in Fog manufacturing. Specifically, since the computation flow and the data querying activities vary between the Fog nodes in practice, the run-time metrics that reflect the performance of the Fog nodes are heterogeneous in nature, and the performance cannot be effectively modeled through traditional predictive analysis. In this thesis, a multi-task learning methodology is adopted to predict the run-time metrics that reflect performance in Fog manufacturing by addressing the heterogeneities among the Fog nodes. A Fog manufacturing testbed is employed to evaluate the prediction accuracies of the proposed model and benchmark models. The proposed model can be further extended to computation task offloading and architecture optimization in Fog manufacturing to minimize the time-latency and improve the robustness of the system. / Master of Science / Smart manufacturing aims at utilizing the Internet of Things (IoT), data analytics, cloud computing, etc. to handle varying market demand without compromising the productivity or quality in a manufacturing plant. To support these efforts, Fog manufacturing has been identified as a suitable computing architecture to handle the surge of data generated from IoT devices. In Fog manufacturing, computational tasks are completed locally through interconnected computing devices called Fog nodes. However, the communication and computation resources in Fog manufacturing are limited. Therefore, their effective utilization requires optimal strategies to schedule the computational tasks and assign them to the Fog nodes. A prerequisite for adopting such strategies is to accurately predict the performance of the Fog nodes. In this thesis, a multi-task learning methodology is adopted to predict the performance in Fog manufacturing. Specifically, since the computation flow and the data querying activities vary between the Fog nodes in practice, the metrics that reflect the performance of the Fog nodes are heterogeneous in nature and cannot be effectively modeled through conventional predictive analysis. A Fog manufacturing testbed is employed to evaluate the prediction accuracies of the proposed model and benchmark models.
The results show that the multi-task learning model achieves better prediction accuracy than the benchmarks and that it can model the heterogeneities among the Fog nodes. The proposed model can further be incorporated into scheduling and assignment strategies to effectively utilize Fog manufacturing's computational services.
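A minimal sketch of the multi-task idea summarized above, assuming a linear model in which every Fog node shares a common coefficient vector plus a node-specific correction fitted by a simple alternating least-squares loop; the data shapes, the ridge penalty, and the fitting scheme are illustrative assumptions, not the thesis's actual model.

import numpy as np

def fit_multitask_linear(X_per_node, y_per_node, lam=1.0, n_iter=50):
    """Fit latency models for several Fog nodes (tasks) jointly.

    Each node k gets coefficients w_shared + w_node[k], where the node-specific
    part is shrunk toward zero so that nodes borrow strength from each other.
    X_per_node: list of (n_k, d) NumPy arrays; y_per_node: list of (n_k,) arrays.
    """
    d = X_per_node[0].shape[1]
    K = len(X_per_node)
    w_shared = np.zeros(d)
    w_node = [np.zeros(d) for _ in range(K)]

    for _ in range(n_iter):
        # Update the shared coefficients on the pooled residuals of all nodes.
        X_all = np.vstack(X_per_node)
        r_all = np.concatenate(
            [y - X @ w for X, y, w in zip(X_per_node, y_per_node, w_node)]
        )
        w_shared = np.linalg.lstsq(X_all, r_all, rcond=None)[0]

        # Update each node's correction with ridge shrinkage (heterogeneity term).
        for k, (X, y) in enumerate(zip(X_per_node, y_per_node)):
            r = y - X @ w_shared
            A = X.T @ X + lam * np.eye(d)
            w_node[k] = np.linalg.solve(A, X.T @ r)

    return w_shared, w_node

def predict(X, w_shared, w_k):
    """Predicted run-time metric (e.g., time-latency) for one Fog node."""
    return X @ (w_shared + w_k)

Here lam controls how strongly node-specific deviations are penalized: a large value drives all Fog nodes toward a single pooled model, while a small value lets heterogeneous nodes diverge.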
12

A Genetic Algorithm-Based Place-and-Route Compiler For A Run-time Reconfigurable Computing System

Kahne, Brian C. 14 May 1997 (has links)
Configurable Computing is a technology which attempts to increase computational power by customizing the computational platform to the specific problem at hand. An experimental computing model known as wormhole run-time reconfiguration allows for partial reconfiguration and is highly scalable. In this approach, configuration information and data are grouped together in a computing unit called a stream, which can tunnel through the chip, creating a series of interconnected pipelines. The Colt/Stallion project at Virginia Tech implements this computing model in integrated circuits. In order to create applications for this platform, a compiler is needed which can convert a human-readable description of an algorithm into the sequences of configuration information understood by the chip itself. This thesis covers two compilers which perform this task. The first compiler, Tier1, requires the programmer to explicitly describe placement and routing inside the chip; this could be considered equivalent to an assembler for a traditional microprocessor. The second compiler, Tier2, allows the user to express a problem as a dataflow graph. Actual placement and routing of this graph onto the physical hardware is handled through the use of a genetic algorithm. A description of the two languages is presented, followed by example applications. In addition, experimental results are included which examine the behavior of the genetic algorithm and how alterations to various genetic operator probabilities affect performance. / Master of Science
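As a rough illustration of the Tier2 approach, the sketch below evolves a placement of dataflow-graph nodes onto a small grid of cells while minimizing total Manhattan wire length; the chromosome encoding, fitness function, and operator probabilities are generic genetic-algorithm assumptions, not the Colt/Stallion specifics.

import random

def ga_place(edges, n_nodes, grid, pop_size=60, gens=200, p_cross=0.8, p_mut=0.05):
    """Toy GA placer: map each dataflow node to a distinct cell of `grid`.

    edges: list of (u, v) node-index pairs to be routed.
    grid:  list of (x, y) cells, with len(grid) >= n_nodes.
    A chromosome is a permutation of cell indices; node i sits at grid[chrom[i]].
    """
    def wirelength(chrom):
        # Total Manhattan distance over all edges (lower is better).
        return sum(abs(grid[chrom[u]][0] - grid[chrom[v]][0]) +
                   abs(grid[chrom[u]][1] - grid[chrom[v]][1]) for u, v in edges)

    def order_crossover(a, b):
        # Keep a slice of parent a in place, fill the rest in parent b's order.
        i, j = sorted(random.sample(range(len(a)), 2))
        middle = a[i:j]
        rest = [g for g in b if g not in middle]
        return rest[:i] + middle + rest[i:]

    def mutate(chrom):
        # Swap two randomly chosen positions with probability p_mut.
        if random.random() < p_mut:
            i, j = random.sample(range(len(chrom)), 2)
            chrom[i], chrom[j] = chrom[j], chrom[i]
        return chrom

    def select(pop, fits):
        # Binary tournament selection on wire length.
        i, j = random.randrange(len(pop)), random.randrange(len(pop))
        return pop[i] if fits[i] < fits[j] else pop[j]

    pop = [random.sample(range(len(grid)), len(grid)) for _ in range(pop_size)]
    for _ in range(gens):
        fits = [wirelength(c) for c in pop]
        new_pop = [min(pop, key=wirelength)]  # elitism: carry the best forward
        while len(new_pop) < pop_size:
            a, b = select(pop, fits), select(pop, fits)
            child = order_crossover(a, b) if random.random() < p_cross else a[:]
            new_pop.append(mutate(child))
        pop = new_pop

    best = min(pop, key=wirelength)
    return {node: grid[best[node]] for node in range(n_nodes)}, wirelength(best)

Sweeping p_cross and p_mut over repeated runs is the kind of experiment the abstract alludes to when it examines how operator probabilities affect performance.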
13

An Investigation of Run-time Operations in a Heterogeneous Desktop Grid Environment: The Texas Tech University Desktop Grid Case Study

Perez, Jerry Felix 01 January 2013 (has links)
The goal of the dissertation study was to evaluate the existing DG scheduling algorithm. The evaluation built on simulation analyses of DGs previously performed by researchers in the field of DG scheduling optimization, with the aim of improving the current RT framework of the DG at TTU. The author analyzed the RT of an actual DG, thereby enabling other investigators to compare theoretical results with the results of this dissertation case study. Two statistical methods were used to formulate and validate predictive models: multiple linear regression and graphical exploratory data analysis techniques. Using both statistical methods, the author determined that the theoretical model could predict the significance of four independent variables (resource fragmentation, computational volatility, resource management, and grid job scheduling) on the dependent variables, quality of service and job performance, affecting RT. After an experimental case-study analysis of the DG variables, the author identified the best DG resources for optimizing the run-time performance of the DG at TTU. The projected outcome of this investigation is improved job scheduling techniques for the DG at TTU.
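The first statistical method named above can be sketched as an ordinary multiple linear regression of a run-time response on the four independent variables; the argument names, data, and the use of NumPy's least-squares solver are illustrative assumptions, not the dissertation's actual dataset or fitted model.

import numpy as np

def fit_runtime_model(frag, volatility, res_mgmt, scheduling, runtime):
    """Multiple linear regression:
    runtime ~ b0 + b1*frag + b2*volatility + b3*res_mgmt + b4*scheduling.
    All arguments are 1-D NumPy arrays of equal length (one entry per observed grid job).
    """
    X = np.column_stack([np.ones_like(frag), frag, volatility, res_mgmt, scheduling])
    beta = np.linalg.lstsq(X, runtime, rcond=None)[0]
    fitted = X @ beta
    # R^2: how much run-time variation the four predictors jointly explain.
    ss_res = np.sum((runtime - fitted) ** 2)
    ss_tot = np.sum((runtime - runtime.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot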
14

A Top-Down Structured Programming Technique for Mini-Computers

Wu, Chin-yi Robert 05 1900 (has links)
This paper reviews numerous theoretical results on control structures and illustrates them with practical examples. The study deals with the design of run-time support routines using a top-down structured programming technique, and a number of examples are given to illustrate the method. In conclusion, structured programming has proved to be an important methodology for systematic program design and development.
15

Monitoring And Checking Of Discrete Event Simulations

Ulu, Buket 01 January 2003 (has links) (PDF)
Discrete event simulation is a widely used technique for decision support. The results of the simulation must be reliable for critical decision-making problems; therefore, much research has concentrated on the verification and validation of simulations. In this thesis, we apply a well-known dynamic verification technique, the assertion checking method, as a validation technique. Our aim is to validate particular runs of the simulation model, rather than the model itself. As a case study, the operations of a manufacturing cell have been simulated. The cell, the METUCIM Laboratory at the Mechanical Engineering Department of METU, has a robot and a conveyor to carry the materials, two machines to manufacture the items, and a quality control station to measure the correctness of the manufactured items. The simulation is monitored and checked using the Monitoring and Checking (MaC) tool, a prototype developed at the University of Pennsylvania. The separation of low-level implementation details (pertaining to the code) from the high-level requirement specifications (pertaining to the simuland) helps keep the monitoring and checking of simulations at an abstract level.
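A toy version of the assertion-checking idea applied to a single simulation run, assuming a trace of timestamped events from the simulated cell; the event names and the conveyor-capacity property are hypothetical, and the actual study used the MaC tool rather than hand-written checks like this.

def check_conveyor_capacity(trace, capacity=4):
    """Validate one simulation run: the conveyor must never hold more items
    than its physical capacity. `trace` is an iterable of (time, event) pairs,
    where event is 'load' when an item is placed on the conveyor and 'unload'
    when the robot removes one.
    """
    on_conveyor = 0
    for time, event in trace:
        if event == "load":
            on_conveyor += 1
        elif event == "unload":
            on_conveyor -= 1
        # Assertion over the high-level requirement, independent of the model's code.
        assert 0 <= on_conveyor <= capacity, (
            f"capacity violated at t={time}: {on_conveyor} items on conveyor"
        )
    return True

# Example: a short, well-behaved trace passes the check.
check_conveyor_capacity([(0.0, "load"), (1.2, "load"), (2.5, "unload"), (3.1, "unload")])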
16

Compiling an Interpreted Processing Language: Improving Performance in a Large Telecommunication System

Mejstad, Valdemar, Tångby, Karl-Johan January 2001 (has links)
In this report we evaluate different techniques for increasing the performance of an interpreted processing language in a telecommunication system called Billing Gateway R8. We have implemented a prototype in which we first translate the language into C++ code and then compile it using a C++ compiler. In our prototype we experienced a threefold increase in processing throughput, compared to the original system, when running on a Symmetric Multi-Processor with four CPUs under full load. The prototype also showed better scalability than Billing Gateway R8, due to less use of dynamic memory management.
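The translate-then-compile idea can be illustrated with a tiny sketch that turns a made-up processing rule into C++ source text for an ordinary compiler to optimize; the rule form and the emitted function are invented for illustration and have nothing to do with Billing Gateway R8's actual language.

def emit_cpp(rule_name, field, threshold, action):
    """Emit a C++ function for a rule of the form
    'if <field> > <threshold> then <action>'.
    An interpreter would walk such a rule for every record; the emitted
    version becomes a plain branch that the C++ compiler can optimize.
    """
    return (
        f"bool {rule_name}(const Record& r) {{\n"
        f"    if (r.{field} > {threshold}) {{\n"
        f"        {action}(r);\n"
        f"        return true;\n"
        f"    }}\n"
        f"    return false;\n"
        f"}}\n"
    )

print(emit_cpp("flag_long_calls", "duration_seconds", 3600, "route_to_fraud_queue"))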
17

Run time verification of hybrid systems

Alouffi, Bader January 2016 (has links)
The growing use of computers in modern control systems has led to the development of complex dynamic systems known as hybrid systems, which integrate both discrete and continuous behaviour. Given that hybrid systems operate in real time, allowing for continuous state changes over time periods and discrete state changes across zero time, their modelling, analysis and verification become very difficult. The formal verification of such systems against specifications that can guarantee their behaviour is very important, especially as it pertains to safety-critical applications. Accordingly, addressing such verification issues is the focus of this thesis. In order to actualise the specification and verification of hybrid systems, Interval Temporal Logic (ITL) was adopted as the underlying formalism, given its inherent characteristic of providing flexible methods for both propositional and first-order reasoning about periods found in descriptions of hardware and software systems. Given that an interval specifies the behaviour of a system, specifications of such systems are represented as a set of intervals that can be used to understand the possible behaviour of the system in terms of its composition, whether in sequential or parallel form. ITL is a powerful tool that can handle both forms of composition, as it offers very strong and extensive proof and specification techniques for establishing essential system properties including safety, liveness and time projection. However, a limitation of ITL is that the intervals within its framework are considered to be sequences of discrete states. Against this backdrop, the current research extends ITL with the view to deal with verification and other related issues centred on hybrid systems. The novelty of this proposition is a new logic termed SPLINE Interval Temporal Logic (SPITL), in which not only discrete behaviour can be expressed, but continuous behaviour can also be represented in the form of a spline; that is, the interval is considered to be a sequence of continuous phases instead of a sequence of discrete states. The syntax and semantics of the newly developed SPITL are provided in this thesis, and the new extension of interval temporal logic is demonstrated using a hybrid system as a case study. The overall framework of SPITL is based on three fundamental steps: the formal specification of hybrid systems is expressed in SPLINE Interval Temporal Logic; the executable subset of ITL, called Tempura, is used to develop and test a hybrid system specification written in SPITL; and finally a runtime verification tool for ITL, called AnaTempura, is linked with Matlab so that they can be used as an integrated tool for the verification of hybrid system specifications. Overall, the current work contributes to the growing body of knowledge on hybrid systems through three major milestones: i. the proposition of a new logic termed SPITL; ii. the executable subset, Tempura, integrated with SPITL specifications for hybrid systems; and iii. the development of the AnaTempura tool, integrated with Matlab, to ensure accurate runtime verification of results.
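For context, a minimal sketch of the core syntax of standard (discrete) ITL, on which SPITL builds; the grammar and derived operators below follow the usual ITL formulation rather than this thesis, and the spline-valued intervals of SPITL are not captured here.

% Standard ITL over finite sequences of discrete states (SPITL replaces the
% sequence of states with a sequence of continuous phases, not shown here).
% Requires amsmath and amssymb.
\begin{align*}
  f \;::=\;& p \;\mid\; \neg f \;\mid\; f_1 \wedge f_2 \;\mid\; \mathit{skip}
            \;\mid\; f_1 \,;\, f_2 \;\mid\; f^{*} \\[2pt]
  \bigcirc f \;\triangleq\;& \mathit{skip} \,;\, f
      && \text{(next: } f \text{ holds from the second state on)} \\
  \Diamond f \;\triangleq\;& \mathit{true} \,;\, f
      && \text{(} f \text{ holds on some suffix interval)} \\
  \Box f \;\triangleq\;& \neg \Diamond \neg f
      && \text{(} f \text{ holds on every suffix interval)}
\end{align*}

Here the chop operator ( ; ) splits an interval into a prefix and a suffix that share a state, which is what makes sequential composition a first-class notion in ITL.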
18

Adaptace programů ve Scale zaměřená na výkon / Performance based adaptation of Scala programs

Kubát, Petr January 2017 (has links)
Dynamic adaptivity of a computer system is its ability to modify its behavior according to the environment in which it is executed. It allows the system to achieve better performance, but usually requires a specialized architecture and brings more complexity. The thesis presents an analysis and design of a framework that allows simple and fluent performance-based adaptive development at the level of functions and methods. It closely examines the API requirements and the possibilities of integrating such a framework into the Scala programming language using its advanced syntactical constructs. On the theoretical level, it deals with the problem of selecting the most appropriate function to execute for a given input, based on measurements of previous executions. In the provided framework implementation, the main stress is laid on modularity and extensibility, and many possible future extensions are outlined. The solution is evaluated on a variety of development scenarios, ranging from input adaptation of algorithms to environment adaptation of complex distributed computations in Apache Spark.
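The selection problem described above — choosing among interchangeable implementations based on measurements of previous executions — can be sketched in a few lines. The thesis's framework is written in Scala and is considerably richer; the Python below, with its invented descriptor argument and mean-run-time policy, is only a conceptual illustration.

import time
from collections import defaultdict

class AdaptiveFunction:
    """Wrap interchangeable implementations of the same function and, for each
    input descriptor (e.g., an input-size bucket), run whichever implementation
    has the lowest average measured run time so far."""

    def __init__(self, *implementations, explore_first=3):
        self.impls = list(implementations)
        self.explore_first = explore_first  # forced trial runs per implementation
        # descriptor -> implementation name -> list of observed run times
        self.history = defaultdict(lambda: defaultdict(list))

    def __call__(self, descriptor, *args, **kwargs):
        stats = self.history[descriptor]
        untried = [f for f in self.impls if len(stats[f.__name__]) < self.explore_first]
        if untried:
            chosen = untried[0]  # exploration: gather measurements for everyone first
        else:
            chosen = min(        # exploitation: best average run time wins
                self.impls,
                key=lambda f: sum(stats[f.__name__]) / len(stats[f.__name__]),
            )
        start = time.perf_counter()
        result = chosen(*args, **kwargs)
        stats[chosen.__name__].append(time.perf_counter() - start)
        return result

def sort_builtin(xs):
    return sorted(xs)

def sort_insertion(xs):  # deliberately slow alternative for contrast
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

adaptive_sort = AdaptiveFunction(sort_builtin, sort_insertion)
data = list(range(2000, 0, -1))
result = adaptive_sort(len(data) // 1000, data)  # descriptor buckets inputs by size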
19

Run Time Assurance for Intelligent Aerospace Control Systems

Dunlap, Kyle 24 May 2022 (has links)
No description available.
20

INCORPORATING SECURITY IN SERVICE LEVEL AGREEMENTS

Asghar, Syed Usman January 2020 (has links)
No description available.
