71

Automatic Tuning of Scientific Applications

Qasem, Apan January 2007 (has links)
Over the last several decades we have witnessed tremendous change in the landscape of computer architecture. New architectures have emerged at a rapid pace with computing capabilities that have often exceeded our expectations. However, the rapid rate of architectural innovations has also been a source of major concern for the high-performance computing community. Each new architecture or even a new model of a given architecture has brought with it new features that have added to the complexity of the target platform. As a result, it has become increasingly difficult to exploit the full potential of modern architectures for complex scientific applications. The gap between the theoretical peak and the actual achievable performance has increased with every step of architectural innovation. As multi-core platforms become more pervasive, this performance gap is likely to increase. To deal with the changing nature of computer architecture and its ever-increasing complexity, application developers laboriously retarget code, by hand, which often costs many person-months even for a single application. To address this problem, we developed a software-based strategy that can automatically tune applications to different architectures to deliver portable high performance. This dissertation describes our automatic tuning strategy. Our strategy combines architecture-aware cost models with heuristic search to find the most suitable optimization parameters for the target platform. The key contribution of this work is a novel strategy for pruning the search space of transformation parameters. By focusing on architecture-dependent model parameters instead of transformation parameters themselves, we show that we can dramatically reduce the size of the search space and yet still achieve most of the benefits of the best tuning possible with exhaustive search. We present an evaluation of our strategy on a set of scientific applications and kernels on several different platforms. The experimental results presented in this dissertation suggest that our approach can produce significant performance improvement on a range of architectures at a cost that is not overly demanding.
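To make the pruning idea concrete, here is a minimal Python sketch of searching over architecture-dependent model parameters (a cache-occupancy fraction) instead of raw transformation parameters (tile sizes). The cache size, working-set model, and all numbers are illustrative assumptions, not the dissertation's actual tuner.

```python
# Minimal sketch of model-parameter pruning: derive candidate tile sizes
# from a handful of architecture-dependent cache-occupancy fractions
# instead of timing every tile size directly. All constants are assumed.

CACHE_BYTES = 32 * 1024   # assumed L1 data cache capacity
ELEM_BYTES = 8            # double-precision elements

def tile_size(occupancy):
    """Square tile for a 3-array working set filling `occupancy` of cache."""
    budget = CACHE_BYTES * occupancy
    return max(int((budget / (3 * ELEM_BYTES)) ** 0.5), 4)

naive_space = list(range(4, 513))       # ~500 tile sizes to benchmark
model_space = [0.25, 0.5, 0.75, 1.0]    # 4 model parameters to benchmark
pruned = sorted({tile_size(f) for f in model_space})

print(len(naive_space), "naive candidates vs", pruned)
# 509 naive candidates vs [18, 26, 32, 36]
```

Each surviving candidate would still be timed on the target machine; the saving is that the model collapses hundreds of transformation settings into a few architecture-meaningful points.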
72

Optomechanical System Development of the AWARE Gigapixel Scale Camera

Son, Hui January 2013 (has links)
Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post-acquisition offers much more freedom of design at the system level and opens up many interesting possibilities for the next generation of computational imaging systems.

The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large-aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post-acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off-the-shelf (OTS) flat sensor technology.

This dissertation details the development and physical implementation of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.
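As a rough back-of-envelope illustration of the multiscale scaling argument (the camera count, sensor resolution, and stitching overlap below are assumptions for illustration, not AWARE specifications):

```python
# Approximate stitched resolution of a microcamera array. Each flat,
# off-the-shelf sensor contributes its pixel count minus the field-of-view
# overlap consumed by stitching. All numbers are illustrative.

def composite_megapixels(n_cameras, sensor_mp, overlap=0.25):
    effective = sensor_mp * (1.0 - overlap)   # usable pixels per camera
    return n_cameras * effective

# e.g. ~100 relay cameras with 14 MP sensors and 25% overlap:
print(composite_megapixels(100, 14))   # 1050.0 MP, i.e. ~1 gigapixel
```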
73

Energy aware techniques for certain problems in Wireless Sensor Networks

Islam, Md Kamrul 27 April 2010 (has links)
Recent years have witnessed a tremendous amount of research in the field of wireless sensor networks (WSNs) due to their numerous real-world applications in environmental and habitat monitoring, fire detection, object tracking, traffic control, industrial and machine-health control and monitoring, enemy intrusion detection on military battlefields, and so on. However, reducing the energy consumption of individual sensors in such networks while obtaining the expected standard of quality in the solutions they provide is a major challenge. In this thesis, we investigate several problems in WSNs, particularly in the areas of broadcasting, routing, target monitoring, self-protecting networks, and topology control, with an emphasis on minimizing and balancing energy consumption among the sensors in such networks. Several interesting theoretical results and bounds have been obtained for these problems, which are further corroborated by extensive simulations of most of the algorithms. These empirical results lead us to believe that the algorithms may be applied in real-world situations where we can achieve a guarantee on the quality of solutions with a certain degree of balanced energy consumption among the sensors.
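As one illustration of the energy-balancing theme (a generic textbook-style rule, not an algorithm taken from the thesis), a next-hop choice that spreads load by residual battery might look like the following; node names, energies, and the radio cost model are invented:

```python
# Sketch: forward through the feasible neighbour with the most energy left
# after transmitting, so relaying load spreads instead of draining one node.

def tx_cost(distance, k=1e-3, alpha=2):
    """Simplified radio model: transmit energy grows with distance**alpha."""
    return k * distance ** alpha

def next_hop(neighbours):
    """neighbours: list of (node_id, residual_energy, distance) tuples."""
    feasible = [n for n in neighbours if n[1] > tx_cost(n[2])]
    if not feasible:
        return None   # no neighbour can afford the transmission
    return max(feasible, key=lambda n: n[1] - tx_cost(n[2]))

print(next_hop([("a", 0.9, 10), ("b", 0.4, 5), ("c", 0.05, 3)]))
# ('a', 0.9, 10): the most charged neighbour wins despite its distance
```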
74

Using Community Authored Content to Identify Place-specific Activities

Dearman, David A. 21 August 2012 (has links)
Understanding the context of a person's interaction with a place is important to enabling ubiquitous computing applications. The ability for mobile computing to provide information and services that are relevant to a user's current location (an ability central to the vision of ubiquitous computing) requires that the technologies be able to characterize the activities that a person may potentially perform in a place, whatever this place may be. To support the user as she goes about her day, this ability to characterize the potential activities for a place must work at a city scale. In this dissertation, we present a method to process place-specific community-authored content (e.g., Yelp.com reviews) to identify a set of the potential activities (articulated as verb-noun pairs) that a person can perform at a specific place, and we apply this method to places on a city scale. We validate the method by processing the place-specific reviews authored by community members of Yelp.com and show that the majority of the 40 most common verb-noun pairs are true activities that can be performed at the respective place, achieving a mean precision of up to 79.3% and recall of up to 55.9%. We applied this method by developing a web service (the Activity Service) that automatically processes all the places reviewed for a city and provides structured access to the activity data that can be identified for the respective places. To validate that the place and activity data is useful and usable, we developed and evaluated two applications that are supported by the Activity Service: Opportunities Exist and Vocabulary Wallpaper. In addition to these applications, we conducted a design contest to identify other types of applications that can be supported by the Activity Service. Finally, we discuss limitations of the activity data and the Activity Service, and highlight future considerations.
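A minimal sketch of the verb-noun extraction step, using spaCy's dependency parse as a stand-in (the abstract does not specify the dissertation's actual NLP pipeline):

```python
# Sketch: mine (verb, direct object) pairs from review text as candidate
# activities, e.g. "drink coffee". spaCy is used here only as a stand-in.

from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model

def activity_pairs(reviews):
    pairs = Counter()
    for doc in nlp.pipe(reviews):
        for tok in doc:
            if tok.dep_ == "dobj" and tok.head.pos_ == "VERB":
                pairs[(tok.head.lemma_, tok.lemma_)] += 1
    return pairs

reviews = ["I ordered a pizza and drank some coffee on the patio."]
print(activity_pairs(reviews).most_common(5))
# [(('order', 'pizza'), 1), (('drink', 'coffee'), 1)]
```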
76

Location privacy in automotive telematics

Iqbal, Muhammad Usman, Surveying & Spatial Information Systems, Faculty of Engineering, UNSW January 2009 (has links)
The convergence of transport, communication, computing and positioning technologies has enabled a smart car revolution. As a result, pricing of roads based on telematics technologies has gained significant attention. While there are promised benefits, systematic disclosure of precise location has the ability to impinge on privacy of a special kind, known as location privacy. The aim of this thesis is to provide technical designs that enhance the location privacy of motorists without compromising the benefits of accurate pricing. However, this research looks beyond a solely technology-based solution. For example, the ethical implications of the use of GPS data in pricing models have not been fully understood. Likewise, minimal research exists to evaluate the technical vulnerabilities that could be exploited to avoid criminal or financial penalties. To design a privacy-aware system, it is important to understand the needs of the stakeholders, most importantly the motorists. Knowledge about the anticipated privacy preferences of motorists is important in order to make reasonable predictions about their future willingness to adopt these systems. There is limited research so far on user perceptions regarding specific payment options in the uptake of privacy-aware systems. This thesis provides a critical privacy assessment of two mobility pricing systems, namely electronic tolls and mobility-priced insurance. As a result of this assessment, policy recommendations are developed which could support a common approach in facilitating privacy-aware mobility-pricing strategies. This thesis also evaluates the existing and potential inferential threats and vulnerabilities to develop security and privacy recommendations for privacy-aware pricing designs for tolls and insurance. Utilising these policy recommendations and analysing user perceptions with regard to the feasibility of sustaining privacy and willingness to pay for privacy, two privacy-aware mobility pricing designs are presented which bridge the entire array of privacy interests and bring them together into a unified approach capable of sustaining legal protection as well as satisfying the privacy requirements of motorists. It is maintained that it is only by social and technical analysis working in tandem that critical privacy issues in relation to location can be addressed.
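One recurring privacy-aware pricing pattern in this literature keeps the raw trace on the vehicle and discloses only an aggregate fee. The sketch below illustrates that pattern with invented zones and tariffs; it is not one of the thesis's two designs.

```python
# Sketch: the on-board unit prices the trip locally; only the total fee
# leaves the vehicle, never the GPS trace. Zones and tariffs are invented.

TARIFF = {"cbd": 0.40, "arterial": 0.15, "suburban": 0.05}   # $ per km

def zone_of(lat, lon):
    ...   # on-device map-matching of a fix to a pricing zone; stubbed here

def trip_fee(gps_trace):
    """gps_trace: [(lat, lon, km_since_last_fix), ...], kept on-device."""
    return round(sum(TARIFF[zone_of(lat, lon)] * km
                     for lat, lon, km in gps_trace), 2)

# Only this scalar is reported to the toll or insurance operator:
# report = {"vehicle": pseudonym, "period": "2009-03", "fee": trip_fee(trace)}
```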
77

Multi-Objective Resource Provisioning in Network Function Virtualization Infrastructures

Oliveira, Diogo 09 April 2018 (has links)
Network function virtualization (NFV) and software-defined networking (SDN) are two recent networking paradigms that strive to increase manageability, scalability, programmability and dynamism. The former decouples network functions from hosting devices, while the latter decouples the data and control planes. As more and more service providers adopt these new paradigms, there is a growing need to address multi-failure conditions, particularly those arising from large-scale disaster events. Overall, addressing the virtual network function (VNF) placement and routing problem is crucial to deploying survivable NFV. In particular, many studies have inspected non-survivable VNF provisioning, but no known work has proposed survivable/resilient solutions for multi-failure scenarios. In light of the above, this work proposes and deploys a survivable multi-objective provisioning solution for NFV infrastructures. This study initially proposes multi-objective solutions to efficiently solve the VNF mapping/placement and routing problem. In particular, an integer linear programming (ILP) optimization and a greedy heuristic method try to maximize the request acceptance rate while minimizing costs and implementing traffic engineering (TE) load-balancing. Next, these schemes are expanded to perform "risk-aware" virtual function mapping and traffic routing in order to improve the reliability of user services. Furthermore, in addition to the ILP optimization and greedy heuristic schemes, a metaheuristic genetic algorithm (GA) is also introduced, which is more suitable for large-scale networks. Overall, these solutions are then tested in idealistic and realistic stressor scenarios in order to evaluate their performance, accuracy and reliability.
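A minimal sketch of the greedy placement idea; capacities, chains, and the least-loaded tie-breaking rule below are illustrative assumptions, and the dissertation's ILP and heuristic are far richer than this:

```python
# Sketch: admit a service chain by mapping each VNF to the feasible node
# with the most spare CPU, which raises acceptance and load-balances.

def place_chain(chain_demands, node_capacity):
    """chain_demands: CPU demand per VNF; node_capacity: node -> free CPU.
    Returns a VNF->node mapping, or None if the request is rejected."""
    tentative = dict(node_capacity)
    mapping = {}
    for i, demand in enumerate(chain_demands):
        feasible = [n for n, free in tentative.items() if free >= demand]
        if not feasible:
            return None              # reject the whole request
        node = max(feasible, key=tentative.get)   # least-loaded node
        mapping[i] = node
        tentative[node] -= demand
    node_capacity.update(tentative)  # commit resources on acceptance
    return mapping

nodes = {"dc1": 8, "dc2": 5, "dc3": 3}
print(place_chain([4, 4, 2], nodes))   # {0: 'dc1', 1: 'dc2', 2: 'dc1'}
```

A risk-aware variant would additionally weight candidate nodes by failure probability, and the GA explores many such mappings at once for large networks.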
78

Estimation of QoE aware sustainable throughput in relation to TCP throughput to evaluate the user experience

Routhu, Venkata Sai Kalyan January 2018 (has links)
In recent years, research focus has turned to "Quality of Experience" (QoE), which addresses user satisfaction and the improvement of service. The notion of sustainable throughput, sometimes also called reliable throughput, ensures the user satisfaction level while requiring only optimal resources to provide the service. In the context of communication, it becomes important to analyze user behavior with respect to network performance. Since the user is closer to the transport layer than the network layer, this opens a new domain relating "QoE aware sustainable throughput" and "TCP throughput". Further investigation of "QoE aware sustainable throughput" is needed, as it is the throughput that sufficiently supports QoE, while "TCP throughput" is the result of a control process on one layer. Moreover, estimating the QoE aware sustainable throughput based on HTTP streaming between server and client applications may result in a closer understanding of the nature of TCP in terms of user expectations. In this study, we evaluated the performance of video streaming, considering the TCP throughput in the presence of the network disturbances packet loss and delay. The TCP packet behavior is observed in the experimental test setup. The quality level at which QoE problems can still be kept at the desired level is determined. Mean opinion scores for the preferred use cases for the DASH and non-DASH servers are used to estimate the relationship factor between "TCP throughput" and "QoE aware sustainable throughput".
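A hedged sketch of the estimation idea: map measured TCP throughput to an estimated MOS and read off the lowest rate that still keeps QoE at the desired level. The logistic form and all constants are assumptions for illustration, not fitted values from this thesis.

```python
# Sketch: a saturating throughput-to-MOS curve and the "sustainable
# throughput" read off it. Constants are illustrative, not measured.

import math

def mos_estimate(tcp_mbps, midpoint=2.5, steepness=1.8):
    """Map TCP throughput (Mbit/s) to a 1..5 MOS via a logistic curve."""
    return 1 + 4 / (1 + math.exp(-steepness * (tcp_mbps - midpoint)))

def sustainable_throughput(target_mos=3.5, step=0.1):
    """Lowest throughput whose estimated MOS meets the target."""
    rate = step
    while mos_estimate(rate) < target_mos and rate < 100:
        rate += step
    return round(rate, 1)

print(sustainable_throughput())   # ~2.8 Mbit/s under these constants
```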
79

Modeling the power consumption of computing systems and applications through machine learning techniques

Fontoura Cupertino, Leandro 17 July 2015 (has links)
The number of computing systems has increased continuously over recent years. The popularity of data centers has turned them into some of the most power-demanding facilities. The use of data centers is divided between high performance computing (HPC) and Internet services, or Clouds. Computing speed is crucial in HPC environments, while on Cloud systems it may vary according to service-level agreements. Some data centers even propose hybrid environments; all of them are energy hungry. The present work is a study of power models for computing systems. These models allow a better understanding of the energy consumption of computers, and can be used as a first step towards better monitoring and management policies for such systems, either to enhance their energy savings or to bill end-users for the energy they consume. Energy management and control policies are subject to many limitations. Most energy-aware scheduling algorithms use restricted power models which have a number of open problems. Previous works in power modeling of computing systems proposed the use of system information to monitor the power consumption of applications. However, these models are either too specific to a given kind of application, or they lack accuracy. This work presents techniques to enhance the accuracy of power models by tackling issues from the acquisition of measurements to the definition of a generic workload that enables the creation of a generic model, i.e. a model that can be used for heterogeneous workloads. To achieve such models, the use of machine learning techniques is proposed. Machine learning models are architecture-adaptive and are the core of this research. More specifically, this work evaluates the use of artificial neural networks (ANN) and linear regression (LR) as machine learning techniques to perform non-linear statistical modeling. Such models are created through a data-driven approach, enabling adaptation of their parameters based on information collected while running synthetic workloads. The use of machine learning techniques aims to achieve highly accurate application- and system-level estimators. The proposed methodology is architecture-independent and can easily be reproduced in new environments. The results show that the use of artificial neural networks enables the creation of highly accurate estimators. However, they cannot be applied at the process level due to modeling constraints; for that case, predefined models can be calibrated to achieve fair results. The use of process-level models enables the estimation of virtual machines' power consumption, which can be used for Cloud provisioning.
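A minimal sketch of the modeling approach with scikit-learn stand-ins for the LR and ANN estimators; the synthetic counter data is invented for illustration and does not reflect real measurements:

```python
# Sketch: learn power from runtime counters. An ANN captures the
# non-linearity that a plain linear model misses. Data is synthetic.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# features: CPU utilisation, memory bandwidth, retired instructions (scaled)
X = rng.uniform(0, 1, size=(500, 3))
# hypothetical ground truth: idle power + linear terms + a non-linear term
y = 60 + 45 * X[:, 0] + 12 * X[:, 1] + 8 * X[:, 2] ** 2 + rng.normal(0, 1, 500)

lr = LinearRegression().fit(X[:400], y[:400])
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X[:400], y[:400])

print("LR  R^2:", round(lr.score(X[400:], y[400:]), 3))
print("ANN R^2:", round(ann.score(X[400:], y[400:]), 3))
```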
80

Implementation of Context-Aware Aspects in Reflex and Evaluation in a Context-Aware Application

Herrera Ordenes, Alexis January 2007 (has links)
The general objective of this work is to extend the Reflex framework to support context-aware aspects. This new framework will make it possible to integrate context sensors implemented with WildCAT for the development of a model application that requires context-aware aspects.
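Reflex is a Java AOP framework, so the following is only a language-neutral Python sketch of the concept: advice that fires solely when a context condition holds, with the condition standing in for a WildCAT-style sensor feed. All names are invented for illustration, not Reflex's actual API.

```python
# Sketch of a context-aware aspect: the advice runs before the join point
# only while the context condition is true.

import functools

context = {"battery_low": False}   # stand-in for a sensor-backed context

def context_aware_aspect(condition, advice):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if condition():          # context gate on the pointcut
                advice(fn.__name__)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@context_aware_aspect(lambda: context["battery_low"],
                      lambda name: print(f"[aspect] degrading {name}"))
def sync_photos():
    print("syncing...")

sync_photos()                       # no advice: battery is fine
context["battery_low"] = True
sync_photos()                       # advice fires before the call
```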
