291

Visualization of Feature Dependency Structures : A case study at Scania CV AB / Visualisering av regelverk för beroenden mellan produktegenskaper : En fallstudie på Scania CV AB

Bronge, Erica January 2017
As many automotive companies have moved towards a higher degree of variability in the product lines they offer their customers, a need has emerged for so-called feature dependency structures that describe product feature dependencies and verify order validity. In this study, the possibility of using a node-link graph representation to visualize such a feature dependency structure, together with the associated affordances and limitations, was investigated through a case study at the Swedish automotive company Scania CV AB. Qualitative data-gathering methods such as contextual inquiry and semi-structured interviews with employees were used to identify key tasks and issues involved in the maintenance and analysis of Scania's in-house feature dependency structure. These findings were combined with user-supported iterative prototyping to create a few visualization prototypes intended to support some of the identified tasks. User evaluation of the prototypes showed that a node-link graph representation was a viable solution for supporting users with structure maintenance, exhibiting the following affordances: structure exploration, overview and context. Furthermore, the major limitations of the tested representation were found to be lookup of specific information and access to detail. The findings of this study are expected to be of use to other automotive companies that employ a high degree of feature variability in their product lines through the use of complex feature dependency structures.
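To make the notion of a feature dependency structure concrete, the following is a minimal sketch of how such a structure can be modeled as a node-link graph and used to verify order validity. The feature names and rules are invented for illustration and are not Scania's actual structure.

```python
# Hypothetical feature dependency structure as a node-link graph.
# Feature names and rules are invented; they are not Scania's actual rules.
requires = {                          # directed edges: feature -> features it depends on
    "towing_package": {"reinforced_chassis"},
    "sleeper_cab": {"long_wheelbase"},
}
excludes = {                          # conflict pairs that may not appear together
    frozenset({"short_wheelbase", "long_wheelbase"}),
}

def order_is_valid(order: set[str]) -> bool:
    """An order is valid if every dependency is satisfied and no conflicts occur."""
    deps_ok = all(requires.get(f, set()) <= order for f in order)
    conflict_free = not any(pair <= order for pair in excludes)
    return deps_ok and conflict_free

print(order_is_valid({"sleeper_cab", "long_wheelbase"}))   # True
print(order_is_valid({"towing_package"}))                  # False: missing reinforced_chassis
```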
292

Performance evaluation of alternative network architectures for sensor-satellite integrated networks

Verma, Suraj, Pillai, Prashant, Hu, Yim Fun January 2013
The last decade has seen an exponential rise in the use of wireless sensor networks (WSNs) in various applications. While these have primarily been used on their own, researchers are now looking into ways of integrating WSNs with other existing communication technologies. One such network is the satellite network, which provides a significant advantage in giving communication access to remote locations due to its inherently large coverage area. Combining WSNs with satellite networks enables efficient remote monitoring in areas where terrestrial networks may not be present. However, in such a scenario, the placement of sensor nodes is crucial to ensure efficient routing and energy efficiency. This paper presents four network architectures for sensor-satellite hybrid networks: sensor-satellite direct communication, and connections via a gateway node employing a random node layout, a grid-based node layout, or a cluster-based node layout with data aggregation. These architectures were simulated using network simulator 2 (ns-2), and their packet loss rate, average end-to-end packet delay, and overall energy consumption were compared. The paper concludes by proposing a suitable network topology for environmental monitoring applications.
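For reference, the three metrics compared in the paper can be computed directly from per-packet send/receive records, as in the sketch below. The record format and numbers are assumptions for illustration, not the actual ns-2 trace format used in the study.

```python
# Hypothetical per-packet records: (packet_id, send_time_s, recv_time_s or None, tx_energy_J)
trace = [
    (1, 0.00, 0.12, 0.002),
    (2, 0.10, None, 0.002),   # dropped packet
    (3, 0.20, 0.29, 0.002),
]

sent = len(trace)
received = [(tx, rx) for _, tx, rx, _ in trace if rx is not None]

packet_loss_rate = 1 - len(received) / sent
avg_end_to_end_delay = sum(rx - tx for tx, rx in received) / len(received)
total_energy = sum(e for *_, e in trace)

print(f"loss={packet_loss_rate:.2%}, delay={avg_end_to_end_delay*1000:.1f} ms, energy={total_energy:.3f} J")
```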
293

Node Selection, Synchronization and Power Allocation in Cooperative Wireless Networks

Baidas, Mohammed Wael 23 April 2012
Recently, there has been an increasing demand for reliable, robust and high data rate communication systems that can counteract the limitations imposed by the scarcity of two fundamental resources for communications: bandwidth and power. In turn, cooperative communications has emerged as a new communication paradigm in which network nodes share their antennas and transmission resources for distributed data exchange and processing. Recent studies have shown that cooperative communications can achieve significant performance gains in terms of signal reliability, coverage area, and power savings when compared with conventional communication schemes. However, the merits of cooperative communications can only be exploited with efficient resource allocation in terms of bandwidth utilization and power control. Additionally, the limited network resources in wireless environments can lead rational network nodes to be selfish and aim at maximizing their own benefits. Therefore, assuming fully cooperative behaviors such as unconditional sharing of one's resources to relay for other nodes is unjustified. On the other hand, a particular network node may try to utilize resources from other nodes and also share its own resources so as to improve its own performance, which in turn may prompt other nodes to behave similarly and thus promote cooperation. This dissertation aims to answer the following three questions: "How can bandwidth-efficient multinode cooperative communications be achieved?", "How can optimal power allocation be achieved in a distributed fashion?", and finally, "How can network nodes dynamically interact with each other so as to promote cooperation?". In turn, this dissertation focuses on three main problems of cooperation in ad-hoc wireless networks: (i) optimal node selection in network-coded cooperative communications, (ii) auction-based distributed power allocation in single- and multi-relay cooperative networks, and finally (iii) coalitional game-theoretic analysis and modeling of the dynamic interactions among the network nodes and their coalition formations. Bi-directional relay networks are first studied in a scenario where two source nodes communicate with each other via a set of intermediate relay nodes. The symbol error rate performance and achievable cooperative diversity orders are studied. Additionally, the effect of timing synchronization errors on the symbol error rate performance is investigated. Moreover, a sum-of-rates maximizing optimal power allocation is proposed. Relay selection is also proposed to improve the total achievable rate and mitigate the effect of timing synchronization errors. Multinode cooperative communications are then studied through the novel concept of many-to-many space-time network coding. The symbol error rate performance under perfect and imperfect timing synchronization and channel state information is theoretically analyzed, and the optimal power allocation that maximizes the total network rate is derived. Optimal node selection is also proposed to fully exploit cooperative diversity and mitigate timing offsets and channel estimation errors. Further, this dissertation investigates distributed power allocation for single-relay cooperative networks. The distributed power allocation algorithm is conceived as an ascending-clock auction where multiple source nodes submit their power demands based on an announced relay price and are efficiently allocated cooperative transmit power.
It is analytically and numerically shown that the proposed ascending-clock auction-based distributed algorithm leads to efficient power allocation, enforces truth-telling, and maximizes the social welfare. A distributed ascending-clock auction-based power allocation algorithm is also proposed for multi-relay cooperative networks. The proposed algorithm is shown to converge to the unique Walrasian equilibrium allocation, which maximizes the social welfare when source nodes truthfully report their cooperative power demands. The proposed algorithm achieves the same performance as could be achieved by centralized control while eliminating the need for complete channel state information and signaling overheads. Finally, the last part of the dissertation studies altruistic coalition formation and stability in cooperative wireless networks. Specifically, the aim is to study the interaction between network nodes and design a distributed coalition formation algorithm so as to promote cooperation while accounting for cooperation costs. This involves an analysis of coalitions' merge-and-split processes as well as the impact of different cooperative power allocation criteria and mobility on coalition formation and stability. A comparison with centralized power allocation and coalition formation is also considered, where the proposed distributed algorithm is shown to provide a reasonable tradeoff between network sum-rate and computational complexity. / Ph. D.
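The ascending-clock auction at the heart of the distributed power allocation can be illustrated with a small sketch: the relay announces a per-unit power price, each source reports the demand that maximizes its own utility, and the price rises until the aggregate demand fits the relay's power budget. The logarithmic utilities, price step, and numbers below are assumptions for illustration, not the dissertation's actual formulation.

```python
def demand(a_i: float, price: float) -> float:
    # Source i's power demand that maximizes a_i*log(1+d) - price*d (assumed utility).
    return max(0.0, a_i / price - 1.0)

def ascending_clock_auction(a, power_budget, price=0.1, step=0.05):
    """Relay raises the per-unit power price until total demand fits its budget."""
    while True:
        demands = [demand(a_i, price) for a_i in a]
        if sum(demands) <= power_budget:
            return price, demands          # market clears at this price
        price += step

a = [2.0, 1.5, 1.0]                        # hypothetical channel-quality weights of three sources
price, alloc = ascending_clock_auction(a, power_budget=3.0)
print(round(price, 2), [round(d, 2) for d in alloc])
```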
294

Mobile Ad-hoc Network Routing Protocols: Methodologies and Applications

Lin, Tao 05 April 2004
A mobile ad hoc network (MANET) is a wireless network that uses multi-hop peer-to-peer routing instead of static network infrastructure to provide network connectivity. MANETs have applications in rapidly deployed and dynamic military and civilian systems. The network topology in a MANET usually changes with time. Therefore, there are new challenges for routing protocols in MANETs, since traditional routing protocols may not be suitable for MANETs. For example, some assumptions used by these protocols are not valid in MANETs, or some protocols cannot efficiently handle topology changes. Researchers are using simulations to design new MANET routing protocols and to compare and improve existing ones before any routing protocols are standardized. However, the simulation results from different research groups are not consistent with each other. This is because of a lack of consistency in MANET routing protocol models and application environments, including networking and user traffic profiles. Therefore, the simulation scenarios are not equitable for all protocols and conclusions cannot be generalized. Furthermore, it is difficult for one to choose a proper routing protocol for a given MANET application. Motivated by the aforementioned issues, my Ph.D. research focuses on MANET routing protocols. Specifically, my contributions include the characterization of different routing protocols using a novel systematic relay node set (RNS) framework, design of a new routing protocol for MANETs, a study of node mobility, including a quantitative study of link lifetime in a MANET and an adaptive interval scheme based on a novel neighbor stability criterion, improvements of a widely-used network simulator and corresponding protocol implementations, design and development of a novel emulation test bed, evaluation of MANET routing protocols through simulations, verification of our routing protocol using emulation, and development of guidelines for choosing proper MANET routing protocols for particular MANET applications. Our study shows that reactive protocols do not always have low control overhead, as people tend to think. The control overhead for reactive protocols is more sensitive to the traffic load, in terms of the number of traffic flows, and mobility, in terms of link connectivity change rates, than other protocols. Therefore, reactive protocols may only be suitable for MANETs with a small number of traffic flows and low link connectivity change rates. We also demonstrated that it is feasible to maintain full network topology in a MANET with low control overhead. This dissertation summarizes all the aforementioned methodologies and corresponding applications we developed concerning MANET routing protocols. / Ph. D.
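One of the quantitative elements mentioned above, link lifetime, has a closed-form estimate when two nodes move with constant velocities: the link breaks when their separation first exceeds the radio range. A small sketch under that assumption (positions, velocities, and range are made-up values):

```python
import math

def link_lifetime(p_i, v_i, p_j, v_j, radio_range):
    """Time until nodes i and j (constant velocities) drift out of radio range."""
    px, py = p_j[0] - p_i[0], p_j[1] - p_i[1]      # relative position
    vx, vy = v_j[0] - v_i[0], v_j[1] - v_i[1]      # relative velocity
    a = vx * vx + vy * vy
    b = px * vx + py * vy
    c = px * px + py * py - radio_range ** 2
    if a == 0:                                      # same velocity: link never breaks if in range
        return math.inf if c <= 0 else 0.0
    disc = b * b - a * c
    return (-b + math.sqrt(disc)) / a if disc >= 0 else 0.0

# Node j starts 50 m from node i and moves directly away at 10 m/s; 100 m radio range:
print(link_lifetime((0, 0), (0, 0), (50, 0), (10, 0), 100))   # 5.0 s
```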
295

Low-Power Wireless Sensor Node with Edge Computing for Pig Behavior Classifications

Xu, Yuezhong 25 April 2024
A wireless sensor node (WSN) system, capable of sensing animal motion and transmitting motion data wirelessly, is an effective and efficient way to monitor pigs' activity. However, raw sensor data sampling and transmission consume a lot of power, so the WSNs' batteries have to be frequently charged or replaced. The proposed work addresses this issue through a WSN edge-computing solution in which a Random Forest Classifier (RFC) is trained and implemented on the WSNs. The implementation of the RFC on the WSNs does not save power by itself, but the RFC predicts animal behavior so that the WSNs can adaptively adjust the data sampling frequency to reduce power consumption. In addition, the WSNs can transmit less data by sending RFC predictions instead of raw sensor data, saving further power. The proposed RFC classifies common animal activities (eating, drinking, laying, standing, and walking) with an F1 score of 93%. The WSN power consumption is reduced by 25% with edge-computing intelligence, compared to a WSN that samples and transmits raw sensor data periodically at 10 Hz. / Master of Science / A wireless sensor node (WSN) system that detects animal movement and wirelessly transmits this data is a valuable tool for monitoring pigs' activity. However, the process of sampling and transmitting raw sensor data consumes a significant amount of power, leading to frequent recharging or replacement of WSN batteries. To address this issue, our proposed solution integrates edge computing into WSNs, utilizing a Random Forest Classifier (RFC). The RFC is trained and deployed within the WSNs to predict animal behavior, allowing for adaptive adjustment of the data sampling frequency to reduce power consumption. Additionally, by transmitting RFC predictions instead of raw sensor data, WSNs can conserve power by transmitting less data. Our RFC can accurately classify common animal activities, such as eating, drinking, laying, standing, and walking, achieving an F1 score of 93%. With the integration of edge computing intelligence, WSN power consumption is reduced by 25% compared to traditional WSNs that periodically sample and transmit raw sensor data at 10 Hz.
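A minimal sketch of the edge-intelligence idea described above: a Random Forest is trained on simple windowed accelerometer features, and the predicted behavior drives the sampling rate. The placeholder features, labels, and rate schedule below are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Placeholder training data: per-window features (e.g. mean, std, magnitude) and behavior labels.
X = rng.normal(size=(500, 3))
y = rng.choice(["eating", "drinking", "laying", "standing", "walking"], size=500)

rfc = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Assumed sampling schedule: low-motion behaviors need fewer accelerometer samples.
sample_rate_hz = {"laying": 1, "standing": 2, "eating": 5, "drinking": 5, "walking": 10}

window = rng.normal(size=(1, 3))            # features from the latest sensor window
behavior = rfc.predict(window)[0]
print(behavior, "-> sample at", sample_rate_hz[behavior], "Hz")
```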
296

Performance Optimization of Public Key Cryptography on Embedded Platforms

Pabbuleti, Krishna Chaitanya 23 May 2014
Embedded systems are so ubiquitous that they account for almost 90% of all computing devices. They range from very small-scale devices with an 8-bit microcontroller and a few kilobytes of RAM to large-scale devices featuring PC-like performance with full-blown 32-bit or 64-bit processors, special-purpose acceleration hardware and several gigabytes of RAM. Each of these classes of embedded systems has a unique set of challenges in terms of hardware utilization, performance and power consumption. As network connectivity becomes a standard feature in these devices, security becomes an important concern. Public Key Cryptography (PKC) is an indispensable tool for implementing various security features necessary on these embedded platforms. In this thesis, we provide optimized PKC solutions on platforms belonging to two extreme classes of the embedded-system spectrum. First, we target the high-end embedded platforms Qualcomm Snapdragon and Intel Atom. Each of these platforms features a dual-core processor, a GPU and a gigabyte of RAM. We use the SIMD coprocessor built into these processors to accelerate the modular arithmetic, which accounts for the majority of execution time in Elliptic Curve Cryptography. We exploit the structure of NIST primes to perform the reduction step as we perform the multiplication. Our implementation runs over two times faster than the OpenSSL implementations on the respective platforms. The second platform we targeted is an energy-harvested wireless sensor node which has a 16-bit MSP430 microcontroller and a low-power RF interface. The system derives its power from a solar panel and is constrained in terms of available energy and computational power. We analyze the computation and communication energy requirements for different signature schemes, each with a different trade-off between computation and communication. We investigate the Elliptic Curve Digital Signature Algorithm (ECDSA), the Lamport-Diffie one-time hash-based signature scheme (LD-OTS) and the Winternitz one-time hash-based signature scheme (W-OTS). We demonstrate that there is a trade-off between energy needs, security level and algorithm selection. However, when we consider the energy needs for the overall system, we show that all schemes are within one order of magnitude of each other. / Master of Science
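The structure of NIST primes exploited above refers to their form as sums and differences of a few powers of two, which lets the reduction of a double-length product collapse into a handful of additions. A plain-integer sketch for the smallest NIST prime, P-192, using the standard Solinas reduction; the thesis's SIMD implementations interleave this with the multiplication and target other parameters, which is not shown here.

```python
import random

P192 = 2**192 - 2**64 - 1          # p = 2^192 - 2^64 - 1

def reduce_p192(c: int) -> int:
    """Reduce a 384-bit product modulo P-192 using only 192-bit additions (Solinas)."""
    w = [(c >> (64 * i)) & (2**64 - 1) for i in range(6)]      # 64-bit words c0..c5
    s1 = (w[2] << 128) | (w[1] << 64) | w[0]
    s2 =                 (w[3] << 64) | w[3]
    s3 = (w[4] << 128) | (w[4] << 64)
    s4 = (w[5] << 128) | (w[5] << 64) | w[5]
    r = s1 + s2 + s3 + s4
    while r >= P192:                # at most a few conditional subtractions
        r -= P192
    return r

x = random.getrandbits(384)
assert reduce_p192(x) == x % P192   # agrees with generic modular reduction
```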
297

Energy-harvested Lightweight Cryptosystems

Mane, Deepak Hanamant 21 May 2014
The Internet of Things will include many resource-constrained lightweight wireless sensing devices, hungry for energy, bandwidth and compute cycles. The sheer number of devices involved will require new solutions to handle issues such as identification and power provisioning. First, to simplify identity management, device identification is moving from symmetric-key solutions to public-key solutions. Second, to avoid the endless swapping of batteries, passively powered energy-harvesting solutions are preferred. In this contribution, we analyze some of the feasible solutions from this challenging design space. We have built an autonomous, energy-harvesting sensor node which includes a microcontroller, RF unit, and energy harvester. We use it to analyze the computation and communication energy requirements for the Elliptic Curve Digital Signature Algorithm (ECDSA) at different security levels. The implementation of Elliptic Curve Cryptography (ECC) on small microcontrollers is challenging. Most of the earlier literature has considered optimizing the performance of ECC (with respect to cycle count and software footprint) on a given architecture. This thesis addresses a different aspect of resource-constrained ECC implementation, wherein the most suitable architecture parameters are identified for any given application profile. At a high level, an application profile for an ECC-based lightweight device, such as a wireless sensor node or RFID tag, is defined by the required security level, signature generation latency and the available energy/power budget. The target architecture parameters of interest include core voltage, core frequency, and/or the need for hardware acceleration. We present a methodology to derive and optimize the architecture parameters starting from the application requirements. We demonstrate our methodology on an MSP430F5438A microcontroller, and present the energy/architecture design space for 80-bit and 128-bit security levels, for the prime-field curves secp160r1 and nistp256. Our results show that the energy cost per authentication is minimized if the microcontroller is operated at the maximum possible frequency. This is because the energy consumed by leakage (i.e., static power dissipation) becomes proportionally less important as the runtime of the application decreases. Hence, for a given energy-harvesting method, it is best to wait until sufficient energy is available and then complete the ECC computations at the highest possible frequency. / Master of Science
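The frequency conclusion above follows from a simple energy model: switching energy per signature is roughly independent of clock frequency, while leakage energy grows with runtime, so finishing faster wastes less. A back-of-the-envelope sketch with made-up MSP430-like numbers; the cycle count, currents, and voltage are illustrative assumptions, not measured values from the thesis.

```python
# Energy per ECDSA signature as a function of clock frequency (illustrative numbers only).
CYCLES = 5_000_000                 # assumed cycle count for one signature
ACTIVE_CURRENT_PER_MHZ = 0.3e-3    # A per MHz of clock frequency (dynamic)
LEAKAGE_CURRENT = 0.2e-3           # A, flows for the whole runtime (static)
VDD = 3.0                          # V

def energy_joules(freq_hz: float) -> float:
    runtime = CYCLES / freq_hz
    dynamic = VDD * ACTIVE_CURRENT_PER_MHZ * (freq_hz / 1e6) * runtime   # constant w.r.t. frequency
    static = VDD * LEAKAGE_CURRENT * runtime                             # shrinks as frequency rises
    return dynamic + static

for f in (1e6, 8e6, 25e6):
    print(f"{f/1e6:>4.0f} MHz: {energy_joules(f)*1000:.2f} mJ")   # energy drops toward max frequency
```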
298

Civic Space: An Architectural Framework for Urban Invention

Linnstaedt, Andrew John 02 November 2010
This project represents the search for an architecture within the physical, historical, and political situation that an existing city presents. Set within the physical bounds of Savannah, it builds upon an understanding of the city as a series of Utopian propositions existing subliminally and often incongruously. As such, the project concerns the making of public space--space to relieve the culturally disjointed condition of modern urban life by acting as a sort of stage for creative expression and collective improvisation. This also involves the making of characteristic places, which by energetically acknowledging, confronting, challenging, or amplifying the city's conceptions of itself, have the potential to generate both physical and metaphysical transformations. Furthermore, in response to urban development paradigms that are either senselessly uncoordinated or mechanistically authoritarian, the project proposes an alternative: the structured interweaving of a "civic layer" of these generative urban centers, each serving a different part of the city. The centers must function architecturally as the symbols and containers of civic life, providing space and programmatic flexibility to allow for open cultural engagement while aesthetically enlivening the urban fabric and serving collectively as an index to the city at large. / Master of Architecture
299

Transforming and Optimizing Irregular Applications for Parallel Architectures

Zhang, Jing 12 February 2018
Parallel architectures, including multi-core processors, many-core processors, and multi-node systems, have become commonplace, as it is no longer feasible to improve single-core performance through increasing its operating clock frequency. Furthermore, to keep up with the exponentially growing desire for more and more computational power, the number of cores/nodes in parallel architectures has continued to dramatically increase. On the other hand, many applications in well-established and emerging fields, such as bioinformatics, social network analysis, and graph processing, exhibit increasing irregularities in memory access, control flow, and communication patterns. While multiple techniques have been introduced into modern parallel architectures to tolerate these irregularities, many irregular applications still execute poorly on current parallel architectures, as their irregularities exceed the capabilities of these techniques. Therefore, it is critical to resolve irregularities in applications for parallel architectures. However, this is a very challenging task, as the irregularities are dynamic, and hence, unknown until runtime. To optimize irregular applications, many approaches have been proposed to improve data locality and reduce irregularities through computational and data transformations. However, there are two major drawbacks in these existing approaches that prevent them from achieving optimal performance. First, these approaches use local optimizations that exploit data locality and regularity locally within a loop or kernel. However, in many applications, there is hidden locality across loops or kernels. Second, these approaches use "one-size-fits-all" methods that treat all irregular patterns equally and resolve them with a single method. However, many irregular applications have complex irregularities, which are mixtures of different types of irregularities and need differentiated optimizations. To overcome these two drawbacks, we propose a general methodology that includes a taxonomy of irregularities to help us analyze the irregular patterns in an application, and a set of adaptive transformations to reorder data and computation based on the characteristics of the application and architecture. By extending our adaptive data-reordering transformation on a single node, we propose a data-partitioning framework to resolve the load imbalance problem of irregular applications on multi-node systems. Unlike existing frameworks, which use "one-size-fits-all" methods to partition the input data by a single property, our framework provides a set of operations to transform the input data by multiple properties and generates the desired data-partitioning codes by composing these operations into a workflow. / Ph. D.
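As a toy illustration of the composable-partitioning idea, small transform operations can be chained into a workflow and the result split across nodes. The operations and the degree-based example below are assumptions for illustration, not the framework's actual interface.

```python
from functools import reduce

# Hypothetical input: a graph as an adjacency list (vertex -> neighbors).
graph = {0: [1, 2, 3], 1: [0], 2: [0, 3], 3: [0, 2], 4: []}

# Composable operations over a list of (vertex, neighbors) items.
by_degree = lambda items: sorted(items, key=lambda kv: len(kv[1]), reverse=True)
drop_isolated = lambda items: [kv for kv in items if kv[1]]

def workflow(ops, items):
    """Apply the operations in order, like composing steps of a partitioning workflow."""
    return reduce(lambda acc, op: op(acc), ops, items)

def round_robin(items, n_nodes):
    parts = [[] for _ in range(n_nodes)]
    for i, kv in enumerate(items):
        parts[i % n_nodes].append(kv)      # heaviest vertices are spread across nodes first
    return parts

items = workflow([drop_isolated, by_degree], list(graph.items()))
for node, part in enumerate(round_robin(items, 2)):
    print(f"node {node}: {[v for v, _ in part]}")
```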
300

Fast Static Learning and Inductive Reasoning with Applications to ATPG Problems

Dsouza, Michael Dylan 03 March 2015
Relations among various nodes in a circuit, as captured by static and inductive invariants, have been shown to have a positive impact on a wide range of EDA applications. Techniques such as Boolean constraint propagation for static learning and the assume-then-verify approach for reasoning about inductive invariants have been made possible by efficient SAT solvers. Although a significant amount of research effort has been dedicated to the development of effective invariant learning techniques over the years, the computation time for deriving powerful multi-node invariants is still a bottleneck for large circuits. Fast computation of static and inductive invariants is the primary focus of this thesis. We present a novel technique to reduce the cost of static learning by intelligently identifying redundant computations that may not yield new invariants, thereby achieving significant speedup. The process of inductive invariant reasoning relies on the assume-then-verify framework, which requires multiple iterations to complete, making it infeasible for cases with a large set of multi-node invariants. We present filtering techniques that can be applied to a diverse set of multi-node invariants to achieve a significant boost in the performance of the invariant checker. Mining and reasoning about all possible potential multi-node invariants is simply infeasible. To alleviate this problem, strategies that narrow the focus to specific types of powerful multi-node invariants are also presented. Experimental results reflect the promise of these techniques. As a measure of quality, the invariants are utilized for untestable fault identification and to constrain ATPG for path delay fault testing, with positive results. / Master of Science
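A toy sketch of the mining step described above: candidate two-node implications are collected from random simulation of a small circuit, after which they would still have to be verified formally (for example with a SAT-based assume-then-verify check, not shown here). The circuit and the candidate form are invented for illustration.

```python
import itertools
import random

# Tiny combinational example: nodes are named signals computed from inputs a, b, c.
def simulate(a, b, c):
    g1 = a and b
    g2 = not c
    g3 = g1 or g2
    return {"a": a, "b": b, "c": c, "g1": g1, "g2": g2, "g3": g3}

# Collect signal values over random input patterns.
traces = [simulate(*(random.random() < 0.5 for _ in range(3))) for _ in range(256)]

# Candidate implications "x=1 -> y=1" that hold on every simulated pattern.
names = list(traces[0])
candidates = [(x, y) for x, y in itertools.permutations(names, 2)
              if all((not t[x]) or t[y] for t in traces)]
print(candidates)   # e.g. ('g1', 'g3') survives; candidates still need formal checking
```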
