281

Infrastructure pour la gestion générique et optimisée des traces d’exécution pour les systèmes embarqués / Infrastructure for generic and optimized management of execution traces for embedded systems

Martin, Alexis 13 January 2017 (has links)
La validation des systèmes est un des aspects critiques dans les phases de développement. Cette validation est d'autant plus importante pour les systèmes embarqués, dont le fonctionnement doit être autonome, mais aussi contraint par des limitations physiques et techniques. Avec la complexification des systèmes embarqués ces dernières années, l'application de méthodes de validation durant le développement devient trop coûteuse, et la mise en place de mécanismes de vérification post-conception est nécessaire. L'utilisation de traces d'exécution, permettant de capturer le comportement du système lors de son exécution, se révèle efficace pour la compréhension et la validation des systèmes observés. Cependant, les outils d'exploitation de traces actuels sont confrontés à deux défis majeurs, à savoir la gestion de traces pouvant atteindre des tailles considérables, et l'extraction de mesures pertinentes à partir des informations bas niveau contenues dans ces traces. Dans cette thèse, faite dans le cadre du projet FUI SoC-TRACE, nous présentons trois contributions. La première concerne la définition d'un format générique pour la représentation des traces d'exécution, enrichi en sémantique. La seconde concerne une infrastructure d'analyse utilisant des mécanismes de workflow permettant l'analyse générique et automatique de traces d'exécution. Cette infrastructure répond au problème de gestion des traces de tailles considérables via des mécanismes de streaming, permet la création d'analyses modulaires et configurables, ainsi qu'un enchaînement automatique des traitements. Notre troisième contribution propose une méthode générique pour l'analyse de performances de systèmes Linux. Cette contribution propose à la fois la méthode et les outils de collecte de traces, mais aussi le workflow permettant d'obtenir des profils unifiés pour les traces capturées.
La validation de nos propositions a été faite d'une part sur des traces issues de cas d'usage proposés par STMicroelectronics, partenaire du projet, et d'autre part sur des traces issues de programmes de benchmarks. L'utilisation d'un format enrichi en sémantique a permis de mettre en évidence des anomalies d'exécution, et ce de manière semi-automatique. L'utilisation de mécanismes de streaming au sein de notre infrastructure nous a permis de traiter des traces de plusieurs centaines de gigaoctets. Enfin, notre méthode d'analyse générique nous a permis de mettre en évidence, de manière automatique et sans connaissance a priori des programmes, le fonctionnement interne de ces différents benchmarks. La généricité de nos solutions a permis d'observer le comportement de programmes similaires sur des plates-formes et des architectures différentes, et d'en montrer l'impact sur les exécutions. / The validation process is a critical aspect of systems development. It is a major concern for embedded systems, whose behavior must be autonomous while constrained by technical and physical limitations. The growth of embedded systems' complexity in recent years prevents the use of complex and costly development-time validation processes such as formal methods, so post-conception validation must be applied. Execution traces are effective for validation and understanding, as they capture a system's behavior during its execution. However, trace analysis tools face two major challenges: first, the management of huge execution traces; second, the extraction of relevant metrics from the low-level information the traces contain. This thesis was done as part of the FUI SoC-TRACE project and presents three contributions. Our first contribution is the definition of a generic, semantically enriched execution trace format. Our second contribution is a workflow-based infrastructure for generic and automatic trace analysis.
This infrastructure addresses the problem of managing huge traces through streaming mechanisms; it allows modular and configurable analyses, as well as automatic chaining of analysis steps. Our third contribution is a generic performance analysis method for Linux systems, which provides both the methods and tools for trace recording and an analysis workflow that produces unified performance profiles. We validate our contributions on traces from use cases provided by STMicroelectronics, a partner of the project, as well as on traces recorded from benchmark executions. Our semantically enriched trace format allowed us to bring out execution problems semi-automatically. Using streaming mechanisms, we were able to analyze traces of several hundred gigabytes. Our generic analysis method allowed us to highlight, automatically and without any prior knowledge of the programs, the internal behavior of the benchmarks. The genericity of our solutions made it possible to observe the behavior of similar programs on different platforms and architectures, and to show the impact of these differences on the executions.
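The streaming idea in the summary above — processing a trace event by event through chained, modular analysis stages, without ever holding the whole trace in memory — can be sketched with Python generators. This is an illustrative reconstruction, not the project's actual framework; the function names and the toy comma-separated trace format are invented for the example.

```python
from collections import Counter

def read_events(lines):
    """Parse raw trace lines into (timestamp, event_type) pairs, one at a time."""
    for line in lines:
        ts, ev = line.split(",", 1)
        yield float(ts), ev.strip()

def filter_type(events, wanted):
    """Intermediate analysis stage: keep only events of the given type."""
    for ts, ev in events:
        if ev == wanted:
            yield ts, ev

def count_events(events):
    """Terminal stage: aggregate while streaming, never buffering the trace."""
    counts = Counter()
    for _, ev in events:
        counts[ev] += 1
    return counts

# Stages are chained like a workflow; each event flows through exactly once,
# so the memory footprint is independent of trace size.
trace = ["0.1,irq", "0.2,sched", "0.3,irq", "0.4,io"]
print(count_events(filter_type(read_events(trace), "irq")))  # Counter({'irq': 2})
```

Because each stage is a generator, new analyses can be composed by inserting stages into the chain without touching the others, which mirrors the modular, configurable workflow described above.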
282

Análise do desempenho técnico-construtivo: edifícios Forenses do Estado de São Paulo / Analysis of the technical-constructive performance: Forensic buildings of the State of São Paulo

Iakowsky Netto, Alexandre Paulo 23 April 2009 (has links)
Esta dissertação tem como base de estudo a análise do desempenho técnico-construtivo dos edifícios forenses designados pela Secretaria da Justiça e da Cidadania do Estado de São Paulo como sendo o tipo F1, que foram projetados e construídos nos anos de 1970, para comportarem apenas uma Vara Judicial e seu respectivo Cartório, e área construída de 1.121,40 m². Os edifícios objeto desta dissertação de mestrado (tipo F1) estão situados em comarcas distantes até 200 (duzentos) quilômetros da Capital, quais sejam: Cotia, Mairiporã, Franco da Rocha, Salto, Itanhaém e Angatuba. Dentro de um raciocínio crítico, foi aplicada a metodologia da análise do desempenho técnico-construtivo de edifício em função das suas patologias construtivas originadas pelas deficiências do projeto, da execução da obra, dos materiais utilizados na época de sua implantação e sua situação atual de manutenção, considerando seus reflexos e influências nos itens do desempenho dos materiais e técnicas construtivas utilizadas em cada órgão/elemento dos edifícios, que serão analisados segundo os requisitos dos usuários da ISO 6241. / This master's thesis analyzes the technical-constructive performance of the forensic buildings designated by the Secretariat of Justice and Citizenship of the State of São Paulo as type F1, which were designed and built in the 1970s to house a single Judicial Court and its respective Notary's office, with a constructed area of 1,121.40 m². The buildings studied in this thesis (type F1) are located in judicial districts up to 200 (two hundred) kilometers from the capital, namely: Cotia, Mairiporã, Franco da Rocha, Salto, Itanhaém and Angatuba.
Within a critical framework, the methodology of technical-constructive performance analysis was applied to each building as a function of its construction pathologies, which result from deficiencies in design, construction, and the materials used at the time of implantation, as well as the current state of maintenance, considering their consequences for and influence on the performance of the materials and construction techniques used in each organ/element of the buildings, analyzed according to the user requirements of ISO 6241.
283

Desempenho de sistemas com dados georeplicados com consistência em momento indeterminado e na linha do tempo / Performance of systems with geo-replicated data with eventual consistency and timeline consistency

Diana, Mauricio José de Oliveira de 21 March 2013 (has links)
Sistemas web de larga escala são distribuídos em milhares de servidores em múltiplos centros de processamento de dados em diferentes localizações geográficas, operando sobre redes de longa distância (WANs). Várias técnicas são usadas para atingir os altos níveis de escalabilidade requeridos por esses sistemas. Replicação de dados está entre as principais delas, e tem por objetivo diminuir a latência, aumentar a vazão e/ou aumentar a disponibilidade do sistema. O principal problema do uso de replicação em sistemas georeplicados é a dificuldade de garantir consistência entre as réplicas sem prejudicar consideravelmente o desempenho e a disponibilidade do sistema. O desempenho do sistema é afetado pelas latências da ordem de centenas de milissegundos da WAN, enquanto a disponibilidade é afetada por falhas que impedem a comunicação entre as réplicas. Quanto mais rígido o modelo de consistência de um sistema de armazenamento, mais simples é o desenvolvimento do sistema que o usa, mas menores são seu desempenho e disponibilidade. Entre os modelos de consistência mais relaxados e mais difundidos em sistemas web georeplicados está a consistência em momento indeterminado (eventual consistency). Esse modelo de consistência garante que em algum momento as réplicas convergem após as escritas terem cessado. Um modelo mais rígido e menos difundido é a consistência na linha do tempo. Esse modelo de consistência usa uma réplica mestre para garantir que não ocorram conflitos na escrita. Nas leituras, os clientes podem ler os valores mais recentes a partir da cópia mestre, ou optar explicitamente por ler valores possivelmente desatualizados para obter maior desempenho ou disponibilidade. A consistência na linha do tempo apresenta disponibilidade menor que a consistência em momento indeterminado em determinadas situações, mas não há dados comparando o desempenho de ambas. 
O objetivo principal deste trabalho foi a comparação do desempenho de sistemas de armazenamento georeplicados usando esses dois modelos de consistência. Para cada modelo de consistência, foram realizados experimentos que mediram o tempo de resposta do sistema sob diferentes cargas de trabalho e diferentes condições de rede entre centros de processamento de dados. O estudo mostra que um sistema usando consistência na linha do tempo apresenta desempenho semelhante ao mesmo sistema usando consistência em momento indeterminado em uma WAN quando a localidade dos acessos é alta. Esse comparativo pode auxiliar desenvolvedores e administradores de sistemas no planejamento de capacidade e de desenvolvimento de sistemas georeplicados. / Large-scale web systems are distributed among thousands of servers spread over multiple data centers in geographically different locations, operating over wide area networks (WANs). Several techniques are employed to achieve the high levels of scalability required by such systems. One of the main techniques is data replication, which aims to reduce latency, increase throughput and/or increase availability. The main drawback of replication in geo-replicated systems is that it is hard to guarantee consistency between replicas without considerably impacting system performance and availability. System performance is affected by WAN latencies, typically of hundreds of milliseconds, while system availability is affected by failures cutting off communication between replicas. The more rigid the consistency model provided by a storage system, the simpler the development of the system using it, but the lower its performance and availability. Eventual consistency is one of the more relaxed and most widespread consistency models among geo-replicated systems. This consistency model guarantees that all replicas converge at some unspecified time after writes have stopped. A model that is more rigid and less widespread is timeline consistency.
This consistency model uses a master replica to guarantee that no write conflicts occur. Clients can read the most up-to-date values from the master replica, or they can explicitly choose to read stale values to obtain greater performance or availability. Timeline consistency has lower availability than eventual consistency in particular situations, but there are no data comparing their performance. The main goal of this work was to compare the performance of a geo-replicated storage system using these two consistency models. For each consistency model, experiments were conducted to measure system response time under different workloads and different network conditions between data centers. The study shows that a system using timeline consistency has performance similar to that of the same system using eventual consistency over a WAN when access locality is high. This comparison can help developers and system administrators with capacity and development planning for geo-replicated systems.
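The timeline-consistency contract described above — a single master orders all writes, and clients explicitly opt in to possibly stale reads from a replica — can be illustrated with a small in-memory model. This is a sketch of the semantics only, not the storage system evaluated in the thesis; the class and method names are invented.

```python
class TimelineStore:
    """Toy model of timeline consistency: one master orders all writes;
    replicas catch up asynchronously and may lag behind."""

    def __init__(self, n_replicas):
        self.master = {}
        self.replicas = [{} for _ in range(n_replicas)]

    def write(self, key, value):
        # All writes go through the master, so write conflicts cannot occur.
        self.master[key] = value

    def sync(self, i):
        # Asynchronous replication: replica i catches up to the master state.
        self.replicas[i] = dict(self.master)

    def read(self, key, allow_stale=False, replica=0):
        # Clients choose: latest value (master) or possibly stale value
        # from a nearby replica, trading freshness for latency/availability.
        src = self.replicas[replica] if allow_stale else self.master
        return src.get(key)

store = TimelineStore(n_replicas=2)
store.write("x", 1)
print(store.read("x"))                    # 1 (latest, read from the master)
print(store.read("x", allow_stale=True))  # None (replica 0 not yet synced)
store.sync(0)
print(store.read("x", allow_stale=True))  # 1 (replica 0 has caught up)
```

Under eventual consistency, by contrast, any replica could accept the write, and conflict resolution would be needed when replicas converge.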
284

Analysis and Optimisation of Real-Time Systems with Stochastic Behaviour

Manolache, Sorin January 2005 (has links)
Embedded systems have become indispensable in our lives: household appliances, cars, airplanes, power plant control systems, medical equipment, telecommunication systems, space technology — they all contain digital computing systems with dedicated functionality. Most of them, if not all, are real-time systems, i.e. their responses to stimuli have timeliness constraints. The timeliness requirement has to be met despite some unpredictable, stochastic behaviour of the system. In this thesis, we address two causes of such stochastic behaviour: the application- and platform-dependent stochastic task execution times, and the platform-dependent occurrence of transient faults on network links in networks-on-chip. We present three approaches to the analysis of the deadline miss ratio of applications with stochastic task execution times. Each of the three approaches fits best in a different context. The first approach is exact and is efficiently applicable to monoprocessor systems. The second approach is approximate, allowing a designer-controlled trade-off between analysis accuracy and analysis speed; it is efficiently applicable to multiprocessor systems. The third approach is less accurate but fast enough to be placed inside optimisation loops. Based on the last approach, we propose a heuristic for task mapping and priority assignment that minimises the deadline miss ratio. We make several contributions in the area of buffer- and time-constrained communication along unreliable on-chip links. First, we introduce the concept of communication supports, an intelligent combination of spatially and temporally redundant communication. We provide a method for constructing a sufficiently varied pool of alternative communication supports for each message. Second, we propose a heuristic for exploring the space of communication support candidates such that the task response times are minimised.
The resulting time slack can be exploited by means of voltage and/or frequency scaling for communication energy reduction. Third, we introduce an algorithm for the worst-case analysis of the buffer space demand of applications implemented on networks-on-chip. Last, we propose an algorithm for communication mapping and packet timing for buffer space demand minimisation. All our contributions are supported by sets of experimental results obtained from both synthetic and real-world applications of industrial size.
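The deadline miss ratio analysed above can be approximated for a simple case by Monte Carlo simulation: sample the stochastic execution time repeatedly and count how often it exceeds the deadline. This is an illustration of the quantity being analysed, not any of the thesis's three analytic approaches; the single-task, no-preemption setting and the exponential execution-time distribution are assumptions made for the example.

```python
import random

def miss_ratio(exec_time_sampler, deadline, n=200_000, seed=1):
    """Monte Carlo estimate of the deadline miss ratio for a single task
    with a stochastic execution time (no preemption, no interference)."""
    rng = random.Random(seed)
    misses = sum(1 for _ in range(n) if exec_time_sampler(rng) > deadline)
    return misses / n

# Hypothetical task: execution time ~ Exponential(mean = 3 ms), deadline 9 ms.
# Analytically the miss ratio is P(X > 9) = exp(-9/3) ≈ 0.0498.
ratio = miss_ratio(lambda rng: rng.expovariate(1 / 3.0), deadline=9.0)
print(round(ratio, 3))
```

The thesis's exact and approximate analyses replace this sampling with analytic computation, which is what makes the third, fastest approach usable inside an optimisation loop.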
285

Statistical Methods for Computational Markets : Proportional Share Market Prediction and Admission Control

Sandholm, Thomas January 2008 (has links)
We design, implement and evaluate statistical methods for managing uncertainty when consuming and provisioning resources in a federated computational market. To enable efficient allocation of resources in this environment, providers need to know consumers' risk preferences and the expected future demand. The guarantee levels to offer thus depend on techniques to forecast future usage and to accurately capture and model uncertainties. Our main contribution in this thesis is threefold: first, we evaluate a set of techniques to forecast demand in computational markets; second, we design a scalable method which captures a succinct summary of usage statistics and allows consumers to express risk preferences; and finally, we propose a method for providers to set resource prices and determine the guarantee levels to offer. The methods employed are based on fundamental concepts in probability theory, and are thus easy to implement, analyze and evaluate. The key component of our solution is a predictor that dynamically constructs approximations of the price probability density and quantile functions for arbitrary resources in a computational market. Because highly fluctuating and skewed demand is common in these markets, it is difficult to accurately and automatically construct representations of arbitrary demand distributions. We found that a technique based on the Chebyshev inequality and empirical prediction bounds, which estimates worst-case bounds on deviations from the mean given a variance, provided the most reliable forecasts for a set of representative high-performance and shared cluster workload traces. We further show how these forecasts can help consumers determine how much to spend given a risk preference, and how providers can offer admission control services with different guarantee levels given a recent history of resource prices. / QC 20100909
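The Chebyshev-based bound mentioned above works for arbitrary (including heavily skewed) distributions because it needs only a mean and a variance: P(X ≥ μ + kσ) ≤ 1/k², so choosing k = 1/√(1 − q) yields a level exceeded with probability at most 1 − q. A minimal sketch of that idea, with invented prices and function names (the thesis's predictor also estimates density and quantile functions, which is not shown here):

```python
import math
from statistics import mean, stdev

def chebyshev_upper_bound(samples, guarantee):
    """Distribution-free price bound: by the Chebyshev inequality,
    P(X >= mu + k*sigma) <= 1/k**2, so k = 1/sqrt(1 - guarantee)
    gives a level exceeded with probability at most (1 - guarantee),
    whatever the (possibly skewed) demand distribution is."""
    mu, sigma = mean(samples), stdev(samples)
    k = 1.0 / math.sqrt(1.0 - guarantee)
    return mu + k * sigma

# Hypothetical recent resource prices with one demand spike; 95% guarantee.
prices = [1.0, 1.2, 0.9, 4.0, 1.1, 1.3, 0.8, 1.0]
print(round(chebyshev_upper_bound(prices, 0.95), 2))
```

The bound is deliberately conservative (worst case over all distributions with that mean and variance), which is what makes it robust for the fluctuating workload traces described above.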
286

Image versus Position: Canada as a Potential Destination for Mainland Chinese

Zou, Pengbo January 2007 (has links)
The potential of the Chinese outbound tourism market is substantial; however, research on this market with respect to Canada is limited. This may be due, in part, to the lack of Approved Destination Status (ADS). This study examined the perceived image of Canada held by potential Chinese tourists and compared it to the marketing position of Canada promoted by the CTC China Division — in effect, conducting a product-market match between the two concepts. Content analysis and an importance-performance analysis were used in the study. A questionnaire distributed at the Beijing Capital International Airport solicited perceptions of tourism in Canada, the importance of selected attributes in travel decision making, the performance of Canada on those attributes, and trip preferences. The marketing position of Canada was examined through a content analysis of the promotional materials circulated by the CTC China Division in Beijing, China. The coherences and gaps between the perceived image of Canada and its marketing position suggest several marketing implications. This study concludes that the general tourism image of Canada is vague but positive, probably derived from the historically favorable image of Canada in China. Potential Chinese tourists had little knowledge of specific tourism sights; however, they recognized the star attractions of Vancouver, Niagara Falls, and Toronto. Potential Chinese tourists prefer slow-paced group tours, two weeks in length, in the fall season, featuring mid-budget accommodation (preferably bed-and-breakfasts), travelling by motor coach, mainly visiting nature-based sights, and providing foods of various cultures. The current marketing position of Canada, as reflected in the promotional materials of the CTC and its partners, is coherent with the image of Canada in promoting Canadian tourism attractions. Gaps exist in the promotion of travel logistics and unconventional attractions, which inspires the marketing implications.
Promotional resources should be allocated deliberately to unconventional tourism attractions, rather than to the presence of Chinese- and Mandarin-speaking environments in Canada, because of Chinese tourists' demand for cultural diversity. Promotion should also include more information about travel expenses and visas to establish reasonable consumer expectations.
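The importance-performance analysis (IPA) used in the study classifies attributes into four quadrants by splitting at the means of the importance and performance ratings. A minimal sketch of that classification — the attribute names and scores below are invented for illustration and are not the study's data:

```python
def ipa_quadrants(attributes):
    """Classify (name, importance, performance) triples into the four
    classic IPA quadrants, splitting at the grand means of each axis."""
    imp_mean = sum(i for _, i, _ in attributes) / len(attributes)
    perf_mean = sum(p for _, _, p in attributes) / len(attributes)
    labels = {
        (True, False): "concentrate here",      # important, underperforming
        (True, True): "keep up the good work",  # important, performing well
        (False, False): "low priority",
        (False, True): "possible overkill",
    }
    return {name: labels[(i > imp_mean, p > perf_mean)]
            for name, i, p in attributes}

# Hypothetical destination attributes rated on a 5-point scale.
scores = [("scenery", 4.8, 4.5), ("visa process", 4.5, 2.9),
          ("nightlife", 2.8, 3.9), ("budget hotels", 3.0, 2.5)]
for name, verdict in ipa_quadrants(scores).items():
    print(name, "->", verdict)
```

Attributes landing in the "concentrate here" quadrant are where marketing or service effort pays off most, which is how such an analysis yields the kinds of recommendations the abstract draws.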
287

Throughput-oriented analytical models for performance estimation on programmable hardware accelerators

Lai, Junjie 15 February 2013 (has links) (PDF)
This thesis addresses two topics in GPU performance analysis. First, we developed an analytical method and a timing estimation tool (TEG) to predict CUDA applications' performance on GT200-generation GPUs. TEG predicts GPU applications' performance at a cycle-approximate level. Second, we developed an approach to estimate a GPU application's performance upper bound based on application analysis and assembly-level benchmarking. With the performance upper bound of an application, we know how much optimization headroom is left and can decide how much optimization effort is worthwhile. The analysis also shows which parameters are critical to performance.
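The upper-bound idea above can be illustrated with a much simpler, roofline-style estimate: a kernel can run no faster than either its compute limit or its memory-bandwidth limit. This is only a stand-in for intuition — the thesis's method, based on assembly-level benchmarking, is considerably more detailed — and the kernel and hardware numbers below are invented.

```python
def perf_upper_bound_gflops(flops, bytes_moved, peak_gflops, peak_gbps):
    """Roofline-style upper bound: achievable GFLOP/s is capped by the
    lower of the compute peak and (arithmetic intensity * bandwidth)."""
    intensity = flops / bytes_moved  # FLOPs per byte of DRAM traffic
    return min(peak_gflops, intensity * peak_gbps)

# Hypothetical kernel doing 2 FLOPs per 8 bytes of traffic, on a GPU with
# 1000 GFLOP/s peak compute and 150 GB/s peak memory bandwidth.
print(perf_upper_bound_gflops(2.0, 8.0, 1000.0, 150.0))  # 37.5 (bandwidth-bound)
```

If a kernel already achieves close to such a bound, further optimization effort is unlikely to pay off, which is exactly the decision the thesis's upper-bound estimate is meant to inform.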
288

Performance Analysis of Distributed Virtual Environments

Kwok, Kin Fai Michael January 2006 (has links)
A distributed virtual environment (DVE) is a shared virtual environment where multiple users at their workstations interact with each other. Some of these systems may support a large number of users, e.g., massive multi-player online games, and these users may be geographically distributed. An important performance measure in a DVE system is the delay for an update of a user's state (e.g., his position in the virtual environment) to arrive at the workstations of those users who are affected by the update. This update delay often has a stringent requirement (e.g., less than 100 ms) in order to ensure interactivity among users.

In designing a DVE system, an important issue is how well the system scales as the number of users increases. In terms of scalability, a promising system architecture is a two-level hierarchical architecture. At the lower level, multiple service facilities (or basic systems) are deployed; each basic system interacts with its assigned users. At the higher level, the various basic systems ensure that their copies of the virtual environment are as consistent as possible. Although this architecture is believed to have good properties with respect to scalability, not much is known about its performance characteristics.

This thesis is concerned with the performance characteristics of the two-level hierarchical architecture. We first investigate the issue of scalability. We obtain analytic results on the workload experienced by the various basic systems as a function of the number of users. Our results provide valuable insights into the scalability of the architecture. We also propose a novel technique to achieve weak consistency among copies of the virtual environment at the various basic systems. Simulation results on the consistency/scalability tradeoff are presented.

We next study the update delay in the two-level hierarchical architecture. The update delay has two main components, namely the delay at the basic system (or server delay) and the network delay. For the server delay, we use a network-of-queues model where each basic system may have one or more processors. We develop an approximation method to obtain results for the distribution of the server delay. Comparisons with simulation show that our approximation method yields accurate results. We also measure the time to process an update on an existing online game server. Our approximate results are then used to characterize the 95th percentile of the server delay, using the measurement data as input.

As to the network delay, we develop a general network model and obtain analytic results for the network delay distribution. Numerical examples are presented to show the conditions under which geographical distribution of basic systems will lead to an improvement in the network delay. We also develop an efficient heuristic algorithm that can be used to determine the best locations for the basic systems in a network.
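For the simplest member of the queueing-model family used above — a single-server M/M/1 queue — the server-delay percentile has a closed form: the sojourn time is exponential with rate (μ − λ), so its p-quantile is −ln(1 − p)/(μ − λ). This is a toy stand-in for the thesis's multi-processor network-of-queues approximation, with invented arrival and service rates.

```python
import math

def mm1_delay_percentile(arrival_rate, service_rate, p):
    """Sojourn-time percentile for an M/M/1 queue: the response time is
    exponentially distributed with rate (mu - lambda), so the p-quantile
    is -ln(1 - p) / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable (lambda < mu)"
    return -math.log(1.0 - p) / (service_rate - arrival_rate)

# Hypothetical basic system: 500 updates/s arriving, capacity 625 updates/s.
d95 = mm1_delay_percentile(500.0, 625.0, 0.95)
print(round(d95 * 1000, 1), "ms")  # 24.0 ms -- within the 100 ms budget
```

Checking such a percentile against the 100 ms interactivity requirement is exactly the kind of question the thesis's approximation, fed with measured per-update processing times, is designed to answer for the more realistic multi-processor case.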
