
Identifying Method Memoization Opportunities in Java Programs

Chugh, Pallavi January 2016 (has links) (PDF)
Memoization of a method is a commonly used refactoring wherein a developer modifies the code of a method to save return values for some or all incoming parameter values. Whenever a parameter tuple is received for the second or subsequent time, the method's execution can be elided and the corresponding saved value can be returned. It is quite challenging for developers to identify suitable methods for memoization, as these may not necessarily be the methods that account for a high fraction of the running time in the program. What are really sought are the methods that cumulatively incur significant execution time in invocations that receive repeat parameter values. Our primary contribution is a novel dynamic analysis approach that emits a report containing, for each method in an application, an estimate of the execution time savings to be expected from memoizing that method. The key technical novelty of our approach is a set of design elements that allow it to target real-world programs and to compute the estimates in a fine-grained manner. We describe our approach in detail, and evaluate an implementation of it on several real-world programs. Our evaluation reveals that there do exist many methods with good estimated savings, that the approach is reasonably efficient, and that it has good precision (relative to actual savings).
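
As a concrete illustration of the refactoring this thesis targets, below is a minimal sketch of method memoization in Java. The class, method, and cache names are illustrative rather than taken from the thesis, and the pattern assumes the memoized method is deterministic and side-effect free for a given parameter tuple.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PricingService {
    // Cache of previously computed results, keyed by the incoming parameter value.
    private final Map<Integer, Double> memo = new ConcurrentHashMap<>();

    // Memoized wrapper: on a repeat parameter value the stored result is
    // returned and the expensive computation below is elided.
    public double discountFor(int customerId) {
        return memo.computeIfAbsent(customerId, this::computeDiscount);
    }

    // Original method body, assumed deterministic and side-effect free.
    private double computeDiscount(int customerId) {
        double result = 0.0;
        for (int i = 0; i < 1_000_000; i++) {
            result += Math.sqrt((double) customerId + i);
        }
        return result;
    }
}
```

Whether such a rewrite actually pays off depends on how often the method sees repeat parameter values and how expensive each invocation is, which is exactly the per-method estimate the proposed dynamic analysis reports.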

Profiling Memory in Lua

PABLO MARTINS MUSA 16 July 2020 (has links)
Memory bloat is a software problem that happens when the memory consumption of a program exceeds the programmer's expectations. In many cases, memory bloat hurts performance or even crashes applications. Detecting and fixing memory bloat is a difficult task for programmers, and thus they usually need tools to identify and fix these problems. The past two decades produced extensive research and many tools to help programmers tackle memory bloat, including memory profilers. Although memory profilers have been studied extensively in recent years, there is a gap regarding scripting languages. In this thesis, we study memory profilers for scripting languages. First, we propose a classification that divides memory profilers into manual and automatic, based on how the programmer uses them. Then, after reviewing memory profilers available in three different scripting languages, we experiment with some of the studied techniques by implementing two automatic memory profilers to help Lua programmers deal with memory bloat. Finally, we evaluate our tools with respect to how easy they are to incorporate into a program, how useful their reports are for understanding an unknown program and tracking memory bloat, and how much overhead they impose.

The evolving American research university and non-faculty professional work

Lee, Elida Teresa 27 February 2013 (has links)
This exploratory study was a response to claims that non-faculty professionals at universities were the cause of administrative bloat. The purpose of the study was to build on the work of Rhoades (1998) and Kane (2007) to determine whether non-faculty professional employees at the University of Texas at Austin (UT Austin) performed the core university work of research, teaching, and/or public service. In the spring of 2012, a survey was sent to 1,036 UT Austin non-faculty professional employees. The survey results showed that a sizable number of non-faculty professional employees at UT Austin were performing or directly contributing to research, teaching, and/or public service. In addition to the three areas of core work, the results showed that these employees held advanced degrees, published in peer-reviewed journals, had specialized skills and bodies of knowledge, applied for grants, and engaged in entrepreneurial activities.

Network and end-host support for HTTP adaptive video streaming

Mansy, Ahmed 04 April 2014 (has links)
Video streaming is widely recognized as the next Internet killer application. It was not one of the Internet's original target applications, and its protocols (TCP in particular) were tuned mainly for efficient bulk file transfer. As a result, a significant effort has focused on the development of UDP-based special protocols for streaming multimedia on the Internet. Recently, there has been a shift in video streaming from UDP to TCP, and specifically to HTTP. HTTP streaming provides a very attractive platform for video distribution on the Internet, mainly because it can utilize all the current Internet infrastructure. In this thesis we make the argument that the marriage between HTTP streaming and the current Internet infrastructure can create many problems and inefficiencies. In order to solve these issues, we provide a set of techniques and protocols that can help both the network and end-hosts make better decisions to improve video streaming quality. The thesis makes the following contributions:
- We conduct a characterization study of popular commercial streaming services on mobile platforms. Our study shows that streaming services make different design decisions when implementing video players on different mobile platforms. We show that this can lead to several inefficiencies and undesirable behaviors, especially when several clients compete for bandwidth on a shared bottleneck link.
- Fairness between traffic flows has been preserved on the Internet through the use of TCP. However, due to the dynamics of adaptive video players and the lack of standard client adaptation techniques, fairness between multiple competing video flows is still an open research issue. Our work extends the standard definition of bitrate fairness to utility fairness, where utility is the Quality of Experience (QoE) of a video stream. We define QoE max-min fairness for a set of adaptive video flows competing for bandwidth in a network and develop an algorithm that computes the set of bitrates that should be assigned to each stream to achieve fairness (a simplified sketch follows this abstract). We design and implement a system that can apply QoE fairness in home networks and evaluate the system on a real home router.
- A well-known problem associated with TCP traffic is buffer bloat. We use an experimental setup to show that adaptive video flows can cause buffer bloat, which can significantly harm time-sensitive applications sharing the same bottleneck link with video traffic. In addition, we develop a technique that video players can use to mitigate this problem. We implement our technique in a real video player and evaluate it on our testbed.
- With the increasing popularity of video streaming on the Internet, the amounts of traffic on the peering links between video streaming providers and Internet Service Providers (ISPs) have become the source of many disputes. Hybrid CDN/P2P streaming systems can be used to reduce the amount of traffic on the peering links by leveraging users' upload bandwidth to redistribute some of the load to other peers. We develop an analysis of hybrid CDN/P2P systems that broadcast live adaptive video streams. The analysis helps the CDN make better decisions to optimize video quality for its users.
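
The fairness contribution above assigns bitrates under a QoE max-min objective. A common way to approximate a max-min fair allocation is progressive filling: repeatedly upgrade the stream whose current utility is lowest, as long as the bottleneck capacity allows it. The sketch below applies that idea to discrete bitrate ladders; the QoE numbers, class names, and the greedy strategy are illustrative assumptions, not the thesis's actual algorithm.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** One adaptive stream: its bitrate ladder (kbps) and an assumed QoE value per rung. */
class Stream {
    final String name;
    final double[] bitratesKbps;   // ascending
    final double[] qoePerBitrate;  // same length, ascending
    int level = 0;                 // currently assigned rung (start at the lowest)

    Stream(String name, double[] bitratesKbps, double[] qoePerBitrate) {
        this.name = name;
        this.bitratesKbps = bitratesKbps;
        this.qoePerBitrate = qoePerBitrate;
    }
    double bitrate() { return bitratesKbps[level]; }
    double qoe()     { return qoePerBitrate[level]; }
}

public class QoeMaxMinAllocator {
    /** Progressive filling: always try to upgrade the stream with the lowest QoE. */
    static void allocate(List<Stream> streams, double capacityKbps) {
        double used = streams.stream().mapToDouble(Stream::bitrate).sum();
        List<Stream> candidates = new ArrayList<>(streams);
        while (!candidates.isEmpty()) {
            Stream worst = candidates.stream()
                    .min(Comparator.comparingDouble(Stream::qoe)).get();
            if (worst.level + 1 >= worst.bitratesKbps.length) {
                candidates.remove(worst);   // already at its top rung
                continue;
            }
            double extra = worst.bitratesKbps[worst.level + 1] - worst.bitrate();
            if (used + extra > capacityKbps) {
                candidates.remove(worst);   // upgrade does not fit the bottleneck
            } else {
                worst.level++;              // raise the worst-off stream first
                used += extra;
            }
        }
    }

    public static void main(String[] args) {
        List<Stream> streams = List.of(
            new Stream("tablet", new double[]{300, 800, 2000}, new double[]{2.0, 3.5, 4.5}),
            new Stream("phone",  new double[]{300, 600, 1200}, new double[]{2.5, 4.0, 4.8}));
        allocate(streams, 2500);
        streams.forEach(s -> System.out.printf("%s -> %.0f kbps (QoE %.1f)%n",
                s.name, s.bitrate(), s.qoe()));
    }
}
```

Starting every stream at its lowest rung and always improving the worst-off stream approximates the max-min objective: the minimum QoE is raised as far as capacity allows before better-off streams are upgraded further.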

Evaluating the efficiency of general purpose and specialized game engines for 2D games

Thomas Michael Brogan III (18429519) 24 April 2024 (has links)
In the ever-changing landscape of game development, the choice of game engine plays a critical role in determining the efficiency and performance of a game. This research paper presents a comparative analysis of performance benchmarks for large general-purpose game engines, specifically Unreal Engine 5, Unity, and Godot, versus small genre-specific engines, in the context of a simple 2D projectile-dodging game. The study focuses on two-dimensional games, which are particularly popular with small studios and indie developers. All three general-purpose engines evaluated claim to support building both 2D and 3D applications; however, since 2D game logic tends to be smaller in scope and more compact, such games are affected more by any overhead and bloat the engine introduces, which this research paper intends to evaluate. A series of controlled experiments is conducted to assess each engine's performance in processor utilization, power consumption, memory usage, and storage space requirements.
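
As a rough illustration of the kind of controlled measurement such a comparison relies on, the hypothetical Java harness below records average frame time and heap usage for a fixed synthetic 2D workload. It is only a sketch: the actual study instruments the engines themselves and also measures power consumption and storage footprint, which a small JVM program cannot capture.

```java
public class EngineBenchmarkSketch {
    // Stand-in for one frame of 2D game logic (movement, collision checks, etc.).
    static double simulateFrame(int entities) {
        double acc = 0;
        for (int i = 0; i < entities; i++) {
            acc += Math.sin(i) * Math.cos(i);
        }
        return acc;
    }

    public static void main(String[] args) {
        final int frames = 1_000, entities = 10_000;
        double sink = 0;                       // keep the JIT from eliding the work
        long start = System.nanoTime();
        for (int f = 0; f < frames; f++) {
            sink += simulateFrame(entities);
        }
        long elapsedNs = System.nanoTime() - start;

        Runtime rt = Runtime.getRuntime();
        long heapUsedBytes = rt.totalMemory() - rt.freeMemory();
        System.out.printf("avg frame time: %.3f ms, heap used: %.1f MB (sink=%.1f)%n",
                elapsedNs / 1e6 / frames, heapUsedBytes / 1e6, sink);
    }
}
```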
