21 |
Generalization ability in genetic programming (Generaliseringsförmåga vid genetisk programmering). Svensson, Daniel. January 2003 (has links)
This thesis investigates how penalty methods that punish the size of GP programs affect generalization ability. The work builds on a study by Cavaretta and Chellapilla, who examined the difference in generalization ability between the penalty method "complexity penalty functions" and no penalty method at all. In this thesis, new experiments were carried out with "complexity penalty functions" and with "adaptive parsimony pressure", another penalty method. These penalty methods were evaluated both in the same domain that Cavaretta and Chellapilla used and in an additional domain, to give a fuller picture of how they generalize. The experiments show that using either "complexity penalty functions" or "adaptive parsimony pressure" usually yields better generalization in GP programs, which contradicts the conclusion Cavaretta and Chellapilla reach in their work. "Adaptive parsimony pressure" also appears to generalize better than "complexity penalty functions".
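As a rough illustration of the two penalty schemes the thesis compares, here is a minimal Python sketch. The function names, the linear penalty form, and the covariance-based weight are assumptions made for illustration, not the thesis's exact formulas.

```python
# Illustrative sketch of two ways to penalize program size in GP fitness
# (lower fitness is better). Names and formulas are assumptions.

def complexity_penalty_fitness(raw_error, size, weight=0.01):
    """Static complexity penalty: fitness worsens linearly with program size."""
    return raw_error + weight * size

def adaptive_parsimony_fitness(raw_error, size, cov_error_size, var_size):
    """Adaptive parsimony pressure: the size weight is re-estimated each
    generation from population statistics (here, a covariance-based estimate),
    so the pressure adapts as the population evolves."""
    weight = cov_error_size / var_size if var_size else 0.0
    return raw_error + weight * size
```

With a static penalty the trade-off between error and size is fixed for the whole run; the adaptive variant lets the pressure grow or shrink with the population, which is the property the thesis credits for its better generalization.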
|
22 |
Identifying Method Memoization Opportunities in Java Programs. Chugh, Pallavi. January 2016 (has links) (PDF)
Memoization of a method is a commonly used refactoring wherein the developer modifies the code of a method to save return values for some or all incoming parameter values. Whenever a parameter tuple is received for the second or a subsequent time, the method's execution can be elided and the corresponding saved value returned. It is quite challenging for developers to identify suitable methods for memoization, as these are not necessarily the methods that account for a high fraction of the program's running time. What are really sought are the methods that cumulatively incur significant execution time in invocations that receive repeat parameter values. Our primary contribution is a novel dynamic analysis approach that emits a report containing, for each method in an application, an estimate of the execution-time savings to be expected from memoizing that method. The key technical novelty of our approach is a set of design elements that allow it to target real-world programs and to compute the estimates in a fine-grained manner. We describe our approach in detail and evaluate an implementation of it on several real-world programs. Our evaluation reveals that there do exist many methods with good estimated savings, that the approach is reasonably efficient, and that it has good precision (relative to actual savings).
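To make the refactoring concrete, here is a minimal sketch in Python (the thesis targets Java, but the idea is identical): the return value is cached per parameter tuple, so repeat invocations skip the method body. The `Pricing` class and its method are hypothetical names, not from the thesis.

```python
import functools

# Sketch of the memoization refactoring: cache one return value per
# parameter tuple so that repeat invocations are elided.
class Pricing:
    def __init__(self):
        self.executions = 0  # counts real (non-cached) executions

    @functools.lru_cache(maxsize=None)  # keys the cache on (self, price, rate)
    def discounted(self, price, rate):
        self.executions += 1
        return price * (1 - rate)
```

The profitability question the thesis asks is exactly whether a method like `discounted` receives enough repeat parameter tuples, at high enough per-call cost, for this cache to pay for itself.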
|
23 |
[pt] ANALIZANDO O USO DE MEMORIA EM LUA / [en] PROFILING MEMORY IN LUA. Pablo Martins Musa. 16 July 2020 (has links)
[en] Memory bloat is a software problem that happens when the memory consumption of a program exceeds the programmer's expectations. In many cases, memory bloat hurts performance or even crashes applications. Detecting and fixing memory bloat is a difficult task for programmers, who therefore usually need tools to identify and fix these problems. The past two decades have produced extensive research and many tools to help programmers tackle memory bloat, including memory profilers. Although memory profilers have been widely studied in recent years, there is a gap regarding scripting languages. In this thesis, we study memory profilers for scripting languages. First, we propose a classification that divides memory profilers into manual and automatic, based on how the programmer uses them. Then, after reviewing memory profilers available in three different scripting languages, we experiment with some of the studied techniques by implementing two automatic memory profilers to help Lua programmers deal with memory bloat. Finally, we evaluate our tools regarding how easy they are to incorporate into a program, how useful their reports are for understanding an unknown program and tracking memory bloat, and how much overhead they impose.
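A tiny example of the "manual" profiler category described above, sketched with Python's standard `tracemalloc` module (the thesis's own tools target Lua; this only illustrates the workflow): the programmer instruments the program, takes a snapshot, and inspects the top allocation sites. The `leaky` list is a stand-in for real bloat.

```python
import tracemalloc

# Manual-style memory profiling: instrument, snapshot, inspect.
tracemalloc.start()

leaky = [bytes(1024) for _ in range(1000)]  # stand-in for a memory bloat

snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics("lineno")[:3]     # top allocation sites by line
for stat in top:
    print(stat)

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
```

An "automatic" profiler in the thesis's classification would gather equivalent data behind the scenes and surface suspicious growth on its own, instead of relying on the programmer to place the snapshot calls.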
|
24 |
The evolving American research university and non-faculty professional work. Lee, Elida Teresa. 27 February 2013 (has links)
This exploratory study was a response to claims that non-faculty professionals at universities were the cause of administrative bloat. The purpose of the study was to build on the work of Rhoades (1998) and Kane (2007) to determine whether non-faculty professional employees at the University of Texas at Austin (UT Austin) performed the core university work of research, teaching, and/or public service. In the spring of 2012, a survey was sent to 1036 UT Austin non-faculty professional employees. The survey results determined that a sizable number of non-faculty professional employees at UT Austin were performing or directly contributing to research, teaching, and/or public service. In addition to the three areas of core work, it was determined that non-faculty professional employees at UT Austin had advanced degrees, published in peer-reviewed journals, had specialized skills and bodies of knowledge, applied for grants, and engaged in entrepreneurial activities.
|
25 |
Network and end-host support for HTTP adaptive video streaming. Mansy, Ahmed. 04 April 2014 (has links)
Video streaming is widely recognized as the next Internet killer application. It was not one of the Internet's original target applications, and its protocols (TCP in particular) were tuned mainly for efficient bulk file transfer. As a result, a significant effort has focused on the development of UDP-based special-purpose protocols for streaming multimedia on the Internet. Recently, there has been a shift in video streaming from UDP to TCP, and specifically to HTTP. HTTP streaming provides a very attractive platform for video distribution on the Internet, mainly because it can utilize all of the current Internet infrastructure. In this thesis we make the argument that the marriage between HTTP streaming and the current Internet infrastructure can create many problems and inefficiencies. In order to solve these issues, we provide a set of techniques and protocols that can help both the network and end-hosts make better decisions to improve video streaming quality. The thesis makes the following contributions:
- We conduct a characterization study of popular commercial streaming services on mobile platforms. Our study shows that streaming services make different design decisions when implementing video players on different mobile platforms. We show that this can lead to several inefficiencies and undesirable behaviors, especially when several clients compete for bandwidth on a shared bottleneck link.
- Fairness between traffic flows has been preserved on the Internet through the use of TCP. However, due to the dynamics of adaptive video players and the lack of standard client adaptation techniques, fairness between multiple competing video flows is still an open research issue. Our work extends the definition of standard bitrate fairness to utility fairness, where utility is the Quality of Experience (QoE) of a video stream. We define QoE max-min fairness for a set of adaptive video flows competing for bandwidth in a network, and we develop an algorithm that computes the set of bitrates that should be assigned to each stream to achieve fairness. We design and implement a system that can apply QoE fairness in home networks and evaluate the system on a real home router.
- A well-known problem associated with TCP traffic is bufferbloat. We use an experimental setup to show that adaptive video flows can cause bufferbloat, which can significantly harm time-sensitive applications sharing the same bottleneck link with video traffic. In addition, we develop a technique that video players can use to mitigate this problem. We implement our technique in a real video player and evaluate it on our testbed.
- With the increasing popularity of video streaming on the Internet, the amounts of traffic on the peering links between video streaming providers and Internet Service Providers (ISPs) have become the source of many disputes. Hybrid CDN/P2P streaming systems can reduce the traffic on these peering links by leveraging users' upload bandwidth to redistribute some of the load to other peers. We develop an analysis of hybrid CDN/P2P systems that broadcast live adaptive video streams. The analysis helps the CDN make better decisions to optimize video quality for its users.
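The QoE max-min fairness idea from the second contribution can be sketched as a greedy progressive-filling allocation: every stream starts at its lowest bitrate rung, and the stream with the lowest current QoE is repeatedly upgraded while the shared capacity allows it. The bitrate ladders, utility scores, and greedy rule below are illustrative assumptions, not the thesis's actual algorithm.

```python
def qoe_maxmin(ladders, utils, capacity):
    """Greedy progressive-filling sketch of QoE max-min fair allocation.

    ladders[i]: ascending available bitrates for stream i (kbps)
    utils[i]:   QoE score of each rung, same shape as ladders[i]
    capacity:   shared bottleneck bandwidth (kbps)
    Returns one chosen rung index per stream.
    """
    idx = [0] * len(ladders)                      # everyone starts at the lowest rung
    used = sum(l[0] for l in ladders)
    assert used <= capacity, "cannot fit even the minimum bitrates"
    while True:
        # streams that still have a higher rung whose upgrade fits in capacity
        cands = [i for i in range(len(ladders))
                 if idx[i] + 1 < len(ladders[i])
                 and used - ladders[i][idx[i]] + ladders[i][idx[i] + 1] <= capacity]
        if not cands:
            return idx
        # upgrade the stream with the lowest current QoE (the max-min step)
        i = min(cands, key=lambda j: utils[j][idx[j]])
        used += ladders[i][idx[i] + 1] - ladders[i][idx[i]]
        idx[i] += 1
```

Allocating by utility rather than by bitrate is the point of the extension: two streams at the same bitrate can have very different QoE (e.g. different content complexity), so equalizing the lowest QoE, not the lowest bitrate, is what this greedy loop pursues.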
|
26 |
Evaluating the efficiency of general purpose and specialized game engines for 2D games. Thomas Michael Brogan III (18429519). 24 April 2024
<p dir="ltr">In the ever-changing landscape of game development, the choice of game engine plays a critical role in determining the efficiency and performance of a game. This research paper presents a comparative analysis of performance benchmarks for large general-purpose game engines, specifically Unreal Engine 5, Unity, and Godot, versus small genre-specific engines, in the context of a simple 2D projectile-dodging game. The study focuses on two-dimensional games, which are particularly popular with small studios and indie developers. All three general-purpose engines evaluated claim to support building both 2D and 3D applications; however, since 2D game logic tends to be smaller in scope and more compact, such games are affected more by any overhead and bloat the engine introduces, which this research paper intends to evaluate. A series of controlled experiments is conducted to assess each engine's performance in processor utilization, power consumption, memory usage, and storage space requirements.</p>
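The processor-utilization side of such a comparison boils down to measuring per-frame update cost. A toy harness of that kind might look like the following; `frame_time_stats` and the loop structure are illustrative, not the paper's actual benchmark.

```python
import time

def frame_time_stats(update, frames=500):
    """Run one tick of game logic repeatedly and report (mean, worst)
    frame time in milliseconds. `update` stands in for the engine's
    per-frame work for the 2D game under test."""
    samples = []
    for _ in range(frames):
        t0 = time.perf_counter()
        update()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return sum(samples) / len(samples), max(samples)
```

Comparing the same game logic across engines this way isolates the overhead the engine itself adds per frame, which is exactly where a heavyweight general-purpose engine would be expected to lose to a small genre-specific one on compact 2D workloads.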
|