  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
341

Proposta de algoritmo de cacheamento para proxies VoD e sua avaliação usando um novo conjunto de métricas / Proposal of caching algorithm for VoD proxy implementation and its evaluation including a new set of metrics for efficiency analysis

Neves, Bruno Silveira January 2015 (has links)
Today, Video on Demand (VoD) is a digital service on the rise that typically requires a significant amount of physical resources for its implementation. To reduce the cost of running this service, a common alternative is to use proxies that cache the most important portions of the collection in order to serve the demand for that content in place of the VoD system's primary server. In this context, to improve the efficiency of the proxy, we propose a novel caching algorithm that exploits the positions of the active clients to determine the client density inside a time window in front of each video chunk. By caching the video chunks with the greatest density in front of them, the algorithm achieves high performance, in terms of hit rate for the requests received by the proxy, during periods of high workload. To evaluate this approach, the new algorithm was compared with others of a similar nature, using both traditional metrics, such as hit rate, and physical metrics, such as the use of processing resources. The results show that the new algorithm makes better use of the processing bandwidth available in the proxy's underlying architecture, obtaining a higher hit rate than the other algorithms in the comparative analysis. Finally, to provide the tools needed to carry out this analysis, this work makes another important contribution: a VoD proxy simulator that, to the best of our knowledge, is the first to enable evaluation of the hardware used to implement this application.
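The density idea in the abstract above can be sketched in a few lines (function names and the chunk/client-position encoding here are hypothetical illustrations, not the thesis's actual implementation):

```python
# Sketch of density-based proxy caching: score each video chunk by the number
# of active clients inside a fixed-size time window "behind" it (clients that
# will request that chunk soon), then cache the highest-scoring chunks.
def chunk_densities(client_positions, num_chunks, window):
    """For each chunk i, count active clients whose playback position lies
    within `window` chunks before i."""
    density = [0] * num_chunks
    for pos in client_positions:
        # a client at position `pos` contributes to chunks pos+1 .. pos+window
        for i in range(pos + 1, min(pos + window + 1, num_chunks)):
            density[i] += 1
    return density

def choose_cached_chunks(client_positions, num_chunks, window, cache_slots):
    """Return the set of chunk indices the proxy should keep cached."""
    density = chunk_densities(client_positions, num_chunks, window)
    ranked = sorted(range(num_chunks), key=lambda i: density[i], reverse=True)
    return set(ranked[:cache_slots])
```

With two clients at chunk 0 and one at chunk 1, the chunks just ahead of the larger client group score highest and win the cache slots.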
343

Model checking CSPZ: Techniques to overcome state explosion

MOTA, Alexandre Cabral January 2001 (has links)
Cabral Mota, Alexandre; Cezar Alves Sampaio, Augusto. Model checking CSPZ: Techniques to overcome state explosion. 2001. Tese (Doutorado). Programa de Pós-Graduação em Ciência da Computação, Universidade Federal de Pernambuco, Recife, 2001. / Conselho Nacional de Desenvolvimento Científico e Tecnológico
344

[en] ADAPTIVE QUANTIZATION IN DPCM SYSTEMS / [pt] QUANTIZAÇÃO ADAPTIVA EM SISTEMAS DPCM

ABRAHAM ALCAIM 07 May 2007 (has links)
[en] In some applications, such as data transmission, the signal variance may be unknown but constant. In such cases, adaptive quantizers that rely on local variance estimation algorithms are not appropriate for quantizing the signal. More suitable algorithms for this situation are those that learn the variance of the input signal. This work examines four variance-learning algorithms for use in adaptive quantization. One of them, proposed by A. Gersho and D. J. Goodman, is a stochastic approximation algorithm that converges with probability one. Another stochastic approximation algorithm is shown to converge with probability one when applied to an adaptive quantizer with independent inputs. The remaining two algorithms are modifications of the first two, introduced to obtain a higher convergence speed. Finally, the performance of these four adaptive quantizers, when used in DPCM systems, is analyzed through computer simulations.
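The variance-learning idea above can be illustrated with a generic Robbins-Monro recursion (a textbook stochastic-approximation sketch, not necessarily the Gersho-Goodman algorithm itself):

```python
def learn_variance(samples):
    """Stochastic approximation of an unknown constant variance:
    v_{n+1} = v_n + (1/(n+1)) * (x_n^2 - v_n).
    With step sizes 1/(n+1) this reduces to the running mean of x^2,
    which converges to the variance for zero-mean inputs regardless
    of the initial guess."""
    v = 1.0  # arbitrary initial guess; overwritten by the first update
    for n, x in enumerate(samples):
        v += (x * x - v) / (n + 1)
    return v
```

For a zero-mean input alternating between +2 and -2, the estimate settles at the true variance of 4.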
345

Measurement Quantization in Compressive Imaging

Lin, Yuzhang January 2016 (has links)
In compressive imaging, measurement quantization and its impact on overall system performance is an important problem. This work considers several challenges that derive from the quantization of compressive measurements. We investigate the design of scalar quantizers (SQ), vector quantizers (VQ), and tree-structured vector quantizers (TSVQ) for information-optimal compressive imaging. The performance of these quantizer designs is quantified for a variety of compression rates and measurement signal-to-noise ratios (SNR) using simulation studies. Our simulation results show that in the low-SNR regime a low bit-depth (3 bits per measurement) SQ is sufficient to minimize the degradation due to measurement quantization. However, in the mid-to-high-SNR regime, quantizer design requires a higher bit-depth to preserve the information in the measurements. Simulation results also confirm the superior performance of VQ over SQ. As expected, TSVQ provides a good tradeoff between complexity and performance, bounded by the VQ and SQ designs on either side of the performance/complexity limits. In compressive imaging the size of the final measurement data (in bits) is also an important system design metric. In this work, we also optimize the compressive imaging system using this metric and investigate how to optimally allocate the number of measurements and bits per measurement, i.e. the rate allocation problem. This problem is solved using both an empirical data-driven approach and a model-based approach. As a function of compression rate (bits per pixel), our simulation results show that compressive imaging can outperform traditional (non-compressive) imaging followed by image compression (JPEG 2000) in the low-to-mid-SNR regime. However, in the high-SNR regime traditional imaging (with image compression) offers higher image fidelity compared to compressive imaging for a given data rate.
Compressive imaging using blockwise measurements is partly limited by its inability to perform global rate allocation. We also develop an optimal minimum mean-square error (MMSE) reconstruction algorithm for quantized compressed measurements. The algorithm employs Markov Chain Monte Carlo (MCMC) sampling to estimate the posterior mean. Simulation results show significant improvement over approximate MMSE algorithms.
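The low bit-depth SQ mentioned above can be illustrated with a plain mid-rise uniform scalar quantizer (a generic sketch, not the information-optimal design the dissertation derives; the interface is hypothetical):

```python
def uniform_sq(x, bit_depth, lo, hi):
    """Mid-rise uniform scalar quantizer over [lo, hi]:
    clip the input, map it to one of 2**bit_depth cells, and
    return (cell index, cell-midpoint reconstruction value)."""
    levels = 2 ** bit_depth
    step = (hi - lo) / levels
    x = min(max(x, lo), hi - 1e-12)  # clip; keep hi inside the top cell
    idx = int((x - lo) / step)
    return idx, lo + (idx + 0.5) * step
```

At 3 bits over [-1, 1] the step is 0.25, so an input of 0.0 lands in cell 4 and reconstructs to 0.125; out-of-range inputs saturate at the extreme cells.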
346

Nåbarhetstestning i en baneditor / Reachability testing in a level editor: A study of how reachability tests can be implemented in a level editor, and the feature's potential to replace manual testing

Sehovic, Mirsad, Carlsson, Markus January 2014 (has links)
This study examines whether it is possible to implement reachability testing in a level editor designed for 2D platform games. The purpose of reachability testing is to replace manual testing, that is, the level designer having to play through the map to verify that the player can reach all supposedly reachable positions. A simple level editor is created as a test platform, after which a comparative study of several candidate algorithms determines which is best suited for implementing the reachability test. The comparison shows that A* (A star) was the most suitable algorithm for the feature. Whether automatic testing can fully replace manual testing is open for debate, but the results point to an increase in time efficiency when it comes to level design.
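The kind of reachability test described above can be sketched with a flood fill that enumerates every tile reachable from the start position (the study selects A* for its editor; this BFS sketch computes the full reachable set on a hypothetical grid encoding, which is the set a "can the player reach everything?" check needs):

```python
from collections import deque

def reachable_tiles(grid, start):
    """BFS flood fill over a grid of '.' (walkable) and '#' (blocked);
    returns the set of walkable (row, col) cells reachable from `start`
    via 4-directional moves."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen
```

An editor would compare this set against all tiles the designer marked as reachable and flag any that are missing.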
347

Evolutionary Computation in Continuous Optimization and Machine Learning

Dahlberg, Leslie January 2017 (has links)
Evolutionary computation is a field which uses natural computational processes to optimize mathematical and industrial problems. Differential Evolution, Particle Swarm Optimization and Estimation of Distribution Algorithms are some of the newer emerging varieties which have attracted great interest among researchers. This work compares these three algorithms on a set of mathematical and machine learning benchmarks, and also synthesizes a new algorithm from the three and compares it to them. The benchmark results show which algorithm is best suited to various machine learning problems and present the advantages of the new algorithm. The new algorithm, called DEDA (Differential Estimation of Distribution Algorithm), has shown promising results on both machine learning and mathematical optimization tasks.
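Of the three compared algorithms, Differential Evolution is the simplest to sketch; below is a minimal DE/rand/1/bin variant (parameter defaults and the interface are illustrative, not those used in the thesis):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=100, seed=0):
    """Minimize f over the box `bounds` with DE/rand/1/bin:
    mutate with the scaled difference of two random members added to a
    third, binomially cross over with the current member, and keep the
    trial if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

On a small sphere function the population collapses quickly onto the optimum at the origin.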
348

Analyzing and adapting graph algorithms for large persistent graphs

Larsson, Patrik January 2008 (has links)
In this work, the graph database Neo4j, developed by Neo Technology, is presented together with some of its functionality for accessing data as a graph. This type of data access makes it possible to implement common graph algorithms on top of Neo4j. Examples of such algorithms are presented together with their theoretical backgrounds: mainly algorithms for finding shortest paths and algorithms for graph measures such as centrality. The implementations that have been made are presented, as well as complexity analysis and the performance measurements carried out on them. The conclusions include that Neo4j is well suited for these types of implementations.
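As a minimal example of the shortest-path algorithms discussed above, here is Dijkstra's algorithm over a plain in-memory adjacency dict (a generic stand-in for illustration; this is not Neo4j API code):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths over an adjacency dict of the form
    {node: [(neighbor, weight), ...]}; returns {node: distance} for all
    nodes reachable from `source`."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

On a persistent graph the same loop would pull neighbors through the database's traversal API instead of a dict lookup, which is exactly where the performance measurements in such a study become interesting.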
349

Content Is President: The Influence of Netflix on Taste, Politics and The Future of Television

Esack, Alanna 14 December 2017 (has links)
The evolving television industry relies heavily on a corresponding shift in the audiences it addresses. New practices for consumption and production, particularly the "disruptive" force of streaming services like Netflix, are evidenced not only in the methods of the companies themselves but also in the content they have begun to offer. A milestone in the television industry, Netflix's first original series House of Cards provides an innovative and meaningful installment in the genre of political melodrama, which has its own cultural significance and heritage of mapping audience relations to the media. Analyzing the text, this paper reveals how industrial strategies relate to taste cultures and produce cynical political television drama.
350

FULL-VIEW COVERAGE PROBLEMS IN CAMERA SENSOR NETWORKS

Li, Chaoyang 08 August 2017 (has links)
Camera Sensor Networks (CSNs) have emerged as an information-rich sensing modality with many potential applications and have received much research attention over the past few years. One of the major challenges in CSN research is that camera sensors differ from traditional scalar sensors: different cameras at different positions form distinct views of the object in question. As a result, simply combining the sensing ranges of the cameras across the field does not necessarily yield effective camera coverage, since the face image (or the targeted aspect) of the object may be missed. Instead, the angle between the object's facing direction and the camera's viewing direction is used to measure the quality of sensing in CSNs. This distinction makes the coverage-verification and deployment methodology developed for conventional sensor networks unsuitable. A new coverage model called full-view coverage precisely characterizes coverage in CSNs: an object is full-view covered if, no matter which direction it faces, there is always a camera covering it whose viewing direction is sufficiently close to the object's facing direction. In this dissertation, we consider three areas of research for CSNs: 1. an analytical theory for full-view coverage; 2. energy-efficiency issues in full-view coverage CSNs; 3. multi-dimensional full-view coverage theory. For the first topic, we propose a novel analytical full-view coverage theory in which the set of full-view covered points is produced by numerical methods. Based on this theory, we solve the following problems. First, we address the detection of full-view coverage holes and provide healing solutions. Second, we propose k-full-view-coverage algorithms in camera sensor networks. Finally, we address the camera-sensor density minimization problem for triangular-lattice-based deployment in full-view covered camera sensor networks, where we argue that there is a flaw in the previous literature and present our corresponding solution. For the second topic, we discuss lifetime and full-view coverage guarantees through distributed algorithms in camera sensor networks, as well as object-tracking problems in full-view coverage camera sensor networks. The third topic addresses the multi-dimensional full-view coverage problem, where we propose a novel 3D full-view coverage model, tackle the full-view coverage optimization problem of minimizing the number of camera sensors, and demonstrate a valid solution. This research is important due to the numerous applications of CSNs; especially since some deployments are in remote locations, it is critical to obtain accurate, meaningful data efficiently.
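Under the simplifying assumption that every camera is aimed straight at the object, the full-view condition above reduces to a check on the angular gaps between cameras around the object; a hypothetical sketch (the geometry simplification and names are ours, not the dissertation's model):

```python
import math

def is_full_view_covered(obj, cameras, sensing_range, theta_eff):
    """Simplified full-view coverage test. A facing direction is covered if
    some in-range camera lies within angle theta_eff of it (as seen from the
    object). With cameras sorted by bearing, the object is full-view covered
    iff every angular gap between consecutive in-range cameras is at most
    2 * theta_eff."""
    bearings = []
    for cx, cy in cameras:
        dx, dy = cx - obj[0], cy - obj[1]
        if math.hypot(dx, dy) <= sensing_range:
            bearings.append(math.atan2(dy, dx))
    if not bearings:
        return False
    bearings.sort()
    gaps = [b2 - b1 for b1, b2 in zip(bearings, bearings[1:])]
    gaps.append(2 * math.pi - (bearings[-1] - bearings[0]))  # wrap-around gap
    return max(gaps) <= 2 * theta_eff
```

Eight evenly spaced cameras on a unit circle leave gaps of pi/4, so they full-view cover the center for an effective angle of pi/4 but not for pi/16.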
