21

On co-optimization of constrained satisfiability problems for hardware software applications

Ganeshpure, Kunal 01 January 2011
Manufacturing technology has permitted an exponential growth in transistor count and density. However, making efficient use of the available transistors has become exceedingly difficult. The standard design flow involves synthesis, verification, placement, and routing, followed by the final tape-out of the design. Due to the presence of various undesirable effects such as capacitive crosstalk, supply noise, and high temperatures, verification/validation of the design has become a challenging problem, so good design convergence may not be achievable within the target time because of the large number of design iterations needed. Capacitive crosstalk is one of the major causes of design convergence problems in the deep sub-micron era. With scaling, the number of crosstalk violations has been increasing because of reduced inter-wire distances; consequently, only the most severe crosstalk faults are fixed pre-silicon, while the rest are tested post-silicon. Testing for capacitive crosstalk involves generating input patterns that can be applied post-silicon to the integrated circuit, and comparing the output response. These patterns are generated at the gate/Register Transfer Level (RTL) of abstraction using Automatic Test Pattern Generation (ATPG) tools. In this dissertation, an Integer Linear Programming (ILP) based ATPG technique is presented for maximizing the crosstalk-induced delay increase at the victim net under multiple-aggressor crosstalk faults. Moreover, solutions for pattern generation under both zero- and unit-delay models are proposed.

With voltage scaling, power supply switching noise has become one of the leading causes of signal-integrity failures in deep sub-micron designs. Hence, during power supply network design and the analysis of power supply switching noise, computing the peak supply current is an essential step. Traditional peak current estimation approaches simply add up the peak currents of all the CMOS gates that switch in a combinational circuit, and therefore do not take the Boolean and temporal relationships of the circuit into account. This work presents an ILP-based technique for generating an input pattern pair that maximizes the switching supply current of a combinational circuit in the presence of integer gate delays. The pattern pair generated by this approach can be applied post-silicon for power droop testing.

With high levels of integration, Multi-Processor Systems-on-Chip (MPSoCs) feature multiple processor cores and accelerators on the same die, so as to exploit the task-level parallelism in the application. For hardware-software co-design, the application programming model is based on a task graph, which represents task dependencies and execution/transfer times for the various threads and processes within an application. Mapping an application to an MPSoC traditionally involves representing it as a task graph and employing static scheduling to minimize the schedule length. However, dynamic system behavior is not taken into consideration during static scheduling, while dynamic scheduling requires knowledge of the task graph at runtime. A run-time task graph extraction heuristic to facilitate dynamic scheduling is therefore also presented here, together with a novel game-theory-based approach that uses the extracted task graph to perform run-time scheduling that minimizes the total schedule length.

With the increase in transistor density, power density has gone up substantially. This has led to regions of very high temperature called hotspots, which cause reliability and performance issues and affect design convergence. In current-generation Integrated Circuits (ICs), temperature is controlled by reducing power dissipation using Dynamic Thermal Management (DTM) techniques such as frequency and/or voltage scaling. These techniques are reactive in nature and have detrimental effects on performance. Here, a look-ahead-based task migration technique is proposed that utilizes the multitude of cores available in an MPSoC to eliminate thermal emergencies. Our technique is based on temperature prediction, leveraging a novel wavelet-based thermal modeling approach. Hence, this work addresses several optimization problems that can be reduced to constrained max-satisfiability, involving integer as well as Boolean constraints in the hardware and software domains, and provides domain-specific heuristic solutions for each of them.
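To make the flavor of the ILP formulations concrete, here is a minimal sketch, not the dissertation's actual model: a zero-delay peak-switching formulation for a single AND gate using the PuLP solver, with a hypothetical per-gate current weight. The real formulations additionally handle integer gate delays, multiple aggressors, and full netlists.

```python
# Illustrative sketch only: a zero-delay ILP maximizing switching activity
# for a tiny hypothetical circuit (one AND gate, g = a & b), in the spirit
# of the ILP-based pattern-pair generation described above.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

prob = LpProblem("peak_switching_current", LpMaximize)

# Two input vectors applied back to back; suffix 1/2 = first/second vector.
a1, b1, g1 = (LpVariable(n, cat=LpBinary) for n in ("a1", "b1", "g1"))
a2, b2, g2 = (LpVariable(n, cat=LpBinary) for n in ("a2", "b2", "g2"))

# Linearized AND-gate consistency (g = a AND b), enforced for both vectors.
for a, b, g in ((a1, b1, g1), (a2, b2, g2)):
    prob += g <= a
    prob += g <= b
    prob += g >= a + b - 1

# s_g = 1 iff the gate output switches between the two vectors (XOR of g1, g2).
s_g = LpVariable("s_g", cat=LpBinary)
prob += s_g >= g1 - g2
prob += s_g >= g2 - g1
prob += s_g <= g1 + g2
prob += s_g <= 2 - g1 - g2

I_PEAK = 1.3  # hypothetical per-gate peak current weight (mA)
prob += lpSum([I_PEAK * s_g])  # maximize total switching supply current

prob.solve()
print(s_g.value())  # -> 1.0: the solver finds a pattern pair that switches the gate
```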
22

On data-path customization in next-generation networks

Shanbhag, Shashank 01 January 2012
The Internet is an example of a successful and scalable decentralized system capable of connecting millions of systems and transporting data seamlessly between them. It has been so successful that today it is impossible to imagine entertainment, education, communication, business, and other services without it; indeed, the Internet is widely considered to be just another utility. The diversity of end-systems, ranging from high-end servers to mobile phones and sensors, is only adding to the rate of its growth and value. This success is largely the result of the careful thought put into the design philosophy of the Internet (a globally deployed network layer, isolation of protocol layers), and it has produced a digital information explosion, with recent studies predicting a tenfold increase in the amount of digital content over the next five years. Factors such as information replication, increasingly affordable and heterogeneous end-systems, cheap storage, and numerous services are cited as a few of the reasons for this rapid growth. However, the fixed network layer also poses a barrier to innovations that would support increasingly diverse end-systems and new communication paradigms. Moreover, the inherent issues in security, mobility, performance, and reliability cannot be completely resolved by merely changing functionality in the end-systems; they require the addition of functionality in the core of the network as well.

Service-centric networking is a new paradigm that seeks to introduce functionality into the network by deploying customized in-network services on demand. Different compositions of services are used to customize connections to satisfy various user communication requirements. This work addresses four challenges in the context of service-centric networks: (1) automated service composition, (2) combined service composition and routing, (3) support for inter-domain data-plane policies in such networks, and (4) end-system support for services through abstractions. Automated service composition deals with the challenge of finding an optimal sequence of services to satisfy the communication requirements of a connection; this composed sequence of services is applied to the connection in the data path. A semantic tree is used to describe communication characteristics, and the problem is solved by reducing it to a planning problem. Service composition is typically followed by "service routing", where the connection is set up such that the services are applied in order; as we show through experiments, this is not always optimal. Combined service composition and routing solves both problems in a single stage, again by reduction to a planning problem. We further explore the issues of inter-domain data-plane policies in next-generation networks and discuss a system that uses the semantic tree to specify such policies; the system translates these policies into planning rules and determines the right way to set up the connection such that all policies are met. We also discuss the design and implementation of a novel "service socket" API that allows end-system applications to access services in a service-centric networking context.

Another key aspect of next-generation networking is virtualization of the physical network infrastructure. Network virtualization allows multiple networks with different protocol stacks to share the same physical infrastructure. A key problem for virtual network providers is the need to efficiently allocate their customers' virtual network requests to the underlying network infrastructure. This problem is known to be computationally intractable, and heuristic solutions continue to be developed. Most existing heuristics use a two-stage approach in which virtual nodes are first placed on physical nodes and virtual links are subsequently mapped. We present VHub, a novel single-stage approach that formulates this problem as a p-hub median problem. Our results show that VHub outperforms state-of-the-art algorithms, mapping 23% more virtual networks in less time (26% to 96%).

Overall, this dissertation discusses techniques through which data-path customization can be achieved in next-generation networks. To solve some of the technical challenges, this work follows a cross-disciplinary approach, drawing on ideas from computer networking, distributed systems, and algorithms to graph theory, mathematical optimization, and artificial intelligence. The solutions are validated through simulations using real and synthetically generated workloads.
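As a rough illustration of the planning reduction, service composition can be viewed as a shortest-path search over sets of connection properties. The property names and service catalog below are invented for the sketch, not taken from the dissertation:

```python
# Illustrative sketch only: automated service composition as a BFS planning
# problem over property sets. Hypothetical catalog: an encrypter adds
# "encrypted", a compressor adds "compressed", a transcoder turns "mpeg"
# into "h264".
from collections import deque

SERVICES = {
    "encrypt":   lambda p: p | {"encrypted"},
    "compress":  lambda p: p | {"compressed"},
    "transcode": lambda p: (p - {"mpeg"}) | {"h264"},
}

def compose(initial, required):
    """Breadth-first search for the shortest service sequence that
    transforms the initial property set into one meeting the requirements."""
    start = frozenset(initial)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        props, plan = queue.popleft()
        if required <= props:          # all required properties satisfied
            return plan
        for name, apply_svc in SERVICES.items():
            nxt = frozenset(apply_svc(props))
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, plan + [name]))
    return None                        # no composition satisfies the request

# Example: an MPEG flow that must arrive encrypted and in H.264.
print(compose({"mpeg"}, {"encrypted", "h264"}))  # e.g. ['encrypt', 'transcode']
```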
23

Principal Network Analysis

Mei, Jonathan B. 08 June 2018
Many applications collect a large number of time series, for example, temperature continuously monitored by weather stations across the US or neural activity recorded by an array of electrical probes. These data are often referred to as unstructured. A first task in their analysis is often to derive a low-dimensional representation (a graph or discrete manifold) that describes the interrelations among the time series and their intra-relations across time.

In general, the underlying graphs can be directed and weighted, possibly capturing the strengths of causal relations, not just the binary existence of reciprocal correlations. Furthermore, the processes generating the data may be non-linear and observed in the presence of unmodeled phenomena or unmeasured agents in a complex networked system. Finally, the networks describing the processes may themselves vary through time.

In many scenarios, there may be good reasons to believe that the graphs are only able to vary as linear combinations of a set of "principal graphs" that are fundamental to the system. We would then be able to characterize each principal network individually to make sense of the ensemble and analyze the behaviors of the interacting entities.

This thesis acts as a roadmap of computationally tractable approaches for learning graphs that provide structure to data. It culminates in a framework that addresses these challenges when estimating time-varying graphs from collections of time series. Analyses are carried out to justify the various models proposed along the way and to characterize their performance. Experiments are performed on synthetic and real datasets to highlight their effectiveness and to illustrate their limitations.
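In notation assumed here purely for illustration (the thesis's own symbols may differ), the principal-graph hypothesis constrains each time-indexed adjacency matrix to a low-dimensional span:

```latex
% Assumed notation: A_t is the (possibly directed, weighted) adjacency
% matrix of the graph at time t, the B_k are K "principal graphs", and
% the c_{t,k} are time-varying mixing coefficients.
\[
  A_t \;=\; \sum_{k=1}^{K} c_{t,k}\, B_k , \qquad K \ll T ,
\]
% Jointly estimating the B_k and c_{t,k} (e.g., by penalized least squares
% against the observed time series) then recovers a small set of fundamental
% networks whose mixtures describe the evolving dependence structure.
```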
24

Design Day Analysis - Forecasting Extreme Daily Natural Gas Demand

Kaftan, David 28 July 2018
This work provides a framework for Design Day analysis. First, we estimate the temperature conditions that are expected to be colder than all but one day in N years; this temperature is known as the Design Day condition. Then, we forecast an upper bound on natural gas demand when the temperature is at the Design Day condition.

Natural gas distribution companies (LDCs) need to meet demand during extremely cold days. Just as bridge builders design for a nominal load, natural gas distribution companies need to design for a nominal temperature: the Design Day condition, the temperature that is expected to be colder than every day except one in N years. Once Design Day conditions are estimated, LDCs need to prepare for the Design Day demand. We provide an upper bound on Design Day demand to ensure LDCs will be able to meet it.

Design Day conditions are determined in a variety of ways. First, we fit a kernel density function to surrogate temperatures; this method is referred to as the Surrogate Kernel Density Fit (SKDF). Second, we apply Extreme Value Theory, a field dedicated to characterizing the maxima or minima of a distribution; in particular, we apply Block-Maxima and Peaks-Over-Threshold (POT) techniques. The upper bound of Design Day demand is determined using a modified version of quantile regression.

Similar Design Day conditions are estimated by both the SKDF and POT methods, and both perform well. The theory supporting the POT method and the empirical performance of the SKDF method lend confidence to the Design Day condition estimates. The upper bound of demand under these conditions is well modeled by the modified quantile regression technique.
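A minimal sketch of the POT step follows, assuming synthetic data, a hypothetical threshold choice, and N = 30; the thesis's actual procedure may differ in detail:

```python
# Illustrative sketch only: a peaks-over-threshold estimate of an N-year
# cold-temperature return level. Cold extremes are negated so they become
# exceedances (maxima) above a threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
daily_temp_f = rng.normal(30, 15, size=365 * 40)  # 40 years of fake daily temps

N_YEARS = 30
u = np.quantile(daily_temp_f, 0.02)            # low threshold for cold extremes
exceed = u - daily_temp_f[daily_temp_f < u]    # exceedances below the threshold

# Fit a generalized Pareto distribution to the exceedances.
xi, _, sigma = stats.genpareto.fit(exceed, floc=0)

# m-observation return level (one exceedance in m days on average);
# formula assumes xi != 0, which holds for the fitted value here.
m = 365 * N_YEARS
zeta = exceed.size / daily_temp_f.size         # fraction of days below threshold
cold_extreme = u - (sigma / xi) * ((m * zeta) ** xi - 1)
print(f"~1-day-in-{N_YEARS}-years design temperature: {cold_extreme:.1f} F")
```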
25

Interpreting sensor information in large-scale distributed cyber-physical systems

Javed, Nauman 01 January 2014
Devices that sense some aspect of the environment or collect data about it, process the sensed data to produce useful information, and possibly take actions based on this information to effect desired changes in the environment are becoming ubiquitous. There are numerous examples of such "Cyber-Physical Systems": weather sensors distributed geographically to sense parameters like temperature, air pressure, and humidity; sensors used at different levels of the energy grid, from power generation to distribution to consumption, that monitor energy production and usage patterns; and sensors used in various military and civilian surveillance and tracking applications. This dissertation focuses on "Distributed Cyber-Physical Systems," those that have multiple sensors distributed geographically or spatially. The sensors comprising such systems may or may not be networked together, although their main purpose is to provide localized information to be ultimately fused into an overall picture of the whole geographical space covered by the sensors. This dissertation explores ways of interpreting information in such Distributed Cyber-Physical Systems. In this context, we look at three related problems.

The first is a multiple-target localization and tracking problem in a wireless sensor network comprising binary proximity sensors [38]. We analyze this problem using the geometry of sensing of the individual sensors and apply graph-theoretical concepts to develop a fully distributed localization and tracking algorithm for multiple interfering targets. Our distributed algorithm demonstrates the power of using localized information at the sensors to make decisions that contribute to inference about phenomena, in this case target movement, that are essentially global in nature. The distributed implementation of information interpretation also lends efficiency advantages, such as reduced energy consumption due to lower communication requirements, as shown in our simulations.

The second problem concerns sensor verification in a system of distributed sensors, all of which are sensing some global phenomenon of interest [37]. As a demonstrative application, we use a dataset collected from weather sensors distributed in the U.S. Northeast, each sensor sensing temperature, air pressure, dew point, and visibility over a period ranging from late May 2011 to mid-June 2011. Our approach is to first create a statistical model of the weather parameters and then identify outliers in the observed data; these outliers ultimately help verify whether the sensors' reports are erroneous.

While the first two problems deal with sensor information in a single domain, target tracking in one case and weather sensing in the other, the third problem we investigate is cross-domain [36]. Here, parameters of one domain affect parameters of another, but only the affected domain's parameters are measured and tracked, in order ultimately to control them. Specifically, we develop methods of network configuration based on distributed estimation and prediction of network performance degradation parameters, where this performance degradation is caused by external environmental parameters such as weather conditions. We take routing in wireless mesh networks in the face of adverse weather conditions as an example application to demonstrate our ideas of predictive network configuration. Through simulations driven by real-world weather data, we show that localized estimation and prediction of wireless link quality, as affected by extreme weather events, yields remarkable improvements in network routing performance, performing as well as, or even better than, routing that uses predictions of the affecting weather itself.
26

Racial inequalities in America: Examining socioeconomic statistics using the Semantic Web

Terrell, David 09 September 2016
The visibility of recent episodes of apparently unjustifiable deaths of minorities, caused by police and federal law enforcement agencies, has been amplified through today's social media and television networks. Such events may seem to imply that issues concerning racial inequalities in America are getting worse. However, we do not know whether such indications are factual: whether this is a recent phenomenon, whether racial inequality is escalating relative to earlier decades, or whether it is better in certain regions of the nation compared to others.

We have built a semantic engine for querying statistics on various metropolitan areas, based on a database of individual deaths. Separately, we have built a database of demographic data on poverty, income, educational attainment, and crime statistics for the top 25 most populous metropolitan areas. These data will ultimately be combined with government data to investigate these questions and provide a tool for predictive analytics. In this thesis, we provide preliminary results in that direction.

Our methodology consisted of multiple steps. We initially described our requirements and drew data from numerous datasets containing information on the 23 most populous Metropolitan Statistical Areas in the United States. After all of the required data was obtained, we decomposed the Metropolitan Statistical Area records into domain components and created an ontology/taxonomy in Protégé to determine a hierarchy of nouns for identifying significant keywords throughout the datasets to use as search queries. Next, we used a Semantic Web implementation, together with the Python programming language and FuXi, to build and instantiate a vocabulary. The ontology was then parsed for the entered search query and returned corresponding results, providing semantically organized and relevant output in RDF/XML format.
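A minimal sketch of such a semantic query follows, assuming a hypothetical namespace, predicates, and figures rather than the thesis's actual ontology or data:

```python
# Illustrative sketch only: querying an RDF graph of metropolitan-area
# statistics with SPARQL via rdflib, in the spirit of the semantic engine
# described above.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/msa#")
g = Graph()

# Two hypothetical Metropolitan Statistical Area records.
g.add((EX["NewYork"], EX.povertyRate, Literal(13.8)))
g.add((EX["Chicago"], EX.povertyRate, Literal(16.1)))

# Find MSAs whose poverty rate exceeds 15%.
results = g.query(
    """
    PREFIX ex: <http://example.org/msa#>
    SELECT ?msa ?rate
    WHERE { ?msa ex:povertyRate ?rate . FILTER (?rate > 15.0) }
    """
)
for msa, rate in results:
    print(msa, rate)  # -> http://example.org/msa#Chicago 16.1
```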
27

A Formal Framework for Modelling Component Extension and Layers in Distributed Embedded Systems

Förster, Stefan 14 May 2007
This volume of the scientific series Eingebettete Selbstorganisierende Systeme (Embedded Self-Organized Systems) is devoted to the design of distributed embedded systems. Fields of application for such systems include mission and control systems of airplanes (aerospace applications) and, with increasing networking, the automotive area, where the highest safety standards must be met and maximum availability guaranteed. Mr Förster addresses these problems early in the design process, namely in the specification phase. Implementation variants such as hardware and software are distinguished, as are system components such as computation components and communication components. For an overarching specification, he develops a formal framework based on the pi-calculus that supports a uniform modelling of subsystems across the different design phases. The main focus of Mr Förster's research is the extension of system specifications: it becomes possible to modify or substitute subcomponents and to check the overall specification automatically for correctness and consistency. Mr Förster proves the correctness of his approach through clearly defined extension relations and their formally verifiable embedding in the pi-calculus formalism, and a detailed example shows the practical relevance of this research. I am glad that Mr Förster is publishing his important research in this scientific series, and I hope you will enjoy reading it and benefit from it.
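As a purely illustrative sketch, with process and channel names assumed rather than taken from the book, a computation component and a layered extension of it might be modelled in pi-calculus style as follows; a conservative extension would then be certified by a behavioural equivalence such as weak bisimilarity:

```latex
% Hypothetical fragment: Comp serves requests on channel req; Comp' inserts
% a logging layer on a fresh internal channel log (the replicated logger
% absorbs every log message).
\begin{align*}
  \mathit{Comp}  &\;\stackrel{\mathrm{def}}{=}\; req(x).\,\overline{res}\langle x\rangle.\,\mathit{Comp} \\
  \mathit{Comp}' &\;\stackrel{\mathrm{def}}{=}\; (\nu\, log)\,\bigl( req(x).\,\overline{log}\langle x\rangle.\,\overline{res}\langle x\rangle.\,\mathit{Comp}' \;\mid\; !\,log(y).\,\mathbf{0} \bigr)
\end{align*}
% Because log is restricted, its interactions are internal (tau) steps, so
% checking that the extension is conservative amounts to showing weak
% bisimilarity: Comp' \approx Comp.
```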
28

A Formal Fault Model for Component-Based Models of Embedded Systems

Fischer, Marco 14 May 2007
The fourth volume of the scientific series Eingebettete Selbstorganisierende Systeme (Embedded Self-Organized Systems) addresses the development of fault models for embedded, distributed multi-processor systems. Such systems are connected into hierarchical networks to control airplanes (avionics) and are increasingly used in the automotive area, where the highest safety standards must be met and maximum availability guaranteed. Mr Fischer integrates the modelling of potential faults into the design process. Based on the pi-calculus, he develops a formal fault model that supports a standardised modelling of fault cases, establishing interesting connections to bisimulation as well as to model-checking methods. The theoretical results are illustrated with a complex example, so the reader can appreciate the power of the developed approach and is motivated to apply the methodology to further applications. I am glad that Mr Fischer is publishing his important research in this scientific series.
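Again purely as an assumed illustration (processes and channel names are not taken from the book), an omission fault can be modelled as a nondeterministic variant of a communication component, with fault tolerance phrased as an equivalence check:

```latex
% Hypothetical fragment: Link forwards messages faithfully; FaultyLink may
% instead silently drop a message (an omission fault), the choice being
% made by internal nondeterministic tau-steps.
\begin{align*}
  \mathit{Link}       &\;\stackrel{\mathrm{def}}{=}\; in(x).\,\overline{out}\langle x\rangle.\,\mathit{Link} \\
  \mathit{FaultyLink} &\;\stackrel{\mathrm{def}}{=}\; in(x).\,\bigl(\tau.\,\overline{out}\langle x\rangle.\,\mathit{FaultyLink} \;+\; \tau.\,\mathit{FaultyLink}\bigr)
\end{align*}
% A fault-tolerance claim then becomes checkable: a protocol P composed
% with FaultyLink should remain weakly bisimilar to P composed with Link.
```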
