81
Combining structural and reduced-form models for macroeconomic analysis and policy forecasting. Monti, Francesca (08 February 2011)
Can we fruitfully use the same macroeconomic model to forecast and to perform policy analysis? There is a tension between a model's ability to forecast accurately and its ability to tell a theoretically consistent story. The aim of this dissertation is to propose ways to ease this tension by combining structural and reduced-form models, so that a single model can do both effectively.
82
Vector occluders: an empirical approximation for rendering global illumination effects in real-time. Sherif, William (01 February 2013)
Precomputation has previously been used to obtain global illumination effects in real time on the consumer hardware of the day. Our work takes Sloan's 2002 precomputed radiance transfer (PRT) method as a starting point and builds on it with two new ideas.
We first explore an alternative representation for PRT data. “Cpherical harmonics” (CH) are introduced as an alternative to spherical harmonics (SH), substituting Chebyshev polynomials for Legendre polynomials as the orthogonal polynomials in the spherical-harmonics definition. We show that CH can be used in place of SH for PRT with near-equivalent performance.
“Vector occluders” (VO) are introduced as a novel, precomputed, real-time, empirical technique for adding global illumination effects, including shadows, caustics and interreflections, to a locally illuminated scene with static geometry. VO encodes PRT data as simple vectors instead of using SH. VO can handle point lights, whereas a standard SH implementation cannot.
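As a purely illustrative toy, and only a guess at the general flavor of such a vector encoding rather than the thesis's actual formulation, one could imagine each vertex storing a precomputed occluder direction and coverage value, and attenuating a point light when it lines up with that occluder:

```python
# Toy sketch (my own guess at the flavor of a "vector occluder", not the
# thesis's actual encoding): each vertex stores a precomputed occluder
# direction and a coverage scalar; at run time, shading from a point light
# is attenuated when the light direction lines up with the stored occluder.
import numpy as np

def shade(normal, light_dir, occluder_dir, coverage, albedo=1.0):
    """Lambertian term scaled by a simple directional-occlusion factor."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    diffuse = max(np.dot(n, l), 0.0)
    # alignment in [0, 1]: 1 when the light sits exactly behind the occluder
    alignment = max(np.dot(l, occluder_dir / np.linalg.norm(occluder_dir)), 0.0)
    shadow = 1.0 - coverage * alignment          # crude empirical attenuation
    return albedo * diffuse * shadow

# Example: an occluder roughly above the vertex blocks an overhead point light.
print(shade(normal=np.array([0.0, 1.0, 0.0]),
            light_dir=np.array([0.2, 1.0, 0.0]),
            occluder_dir=np.array([0.0, 1.0, 0.0]),
            coverage=0.7))
```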
83
Exploiting Coherence and Data-driven Models for Real-time Global Illumination. Nowrouzezahrai, Derek (17 February 2011)
Realistic computer-generated images are computed by combining geometric effects, reflectance models for captured and phenomenological materials, and real-world lighting according to mathematical models of physical light transport. Several important lighting phenomena should be considered when targeting realistic image simulation.
A combination of soft and hard shadows, which arise from the interaction of surface and light geometries, provides necessary shape-perception cues for a viewer. A wide variety of realistic materials, from physically captured reflectance datasets to empirically designed mathematical models, modulate virtual surface appearance in a way that makes it even harder for a viewer to suspect computational image synthesis rather than reality. Lastly, in many important cases, light reflects off many different surfaces before entering the eye. These secondary effects can be critical in grounding the viewer in a virtual world, since the human visual system is adapted to the physical world, where such effects are constantly in play.
Simulating each of these effects is challenging due to their individual underlying complexity. The net complexity is compounded when several effects are combined. This thesis will investigate real-time approaches for simulating these effects under stringent performance and memory constraints, and with varying degrees of interactivity.
In order to make these computations tractable given these added constraints, I will use data and signal analysis techniques to identify predictable patterns in the different spatial and angular signals used during image synthesis. The results of this analysis will be exploited with several analytic and data-driven mathematical models that are both efficient, and yield accurate approximations with predictable and controllable error.
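One common instance of this kind of data-driven modeling, offered here only as a generic stand-in rather than the specific models developed in the thesis, is a truncated-SVD (low-rank) approximation of a sampled transport signal, which trades a small, measurable reconstruction error for large savings in storage and per-frame work:

```python
# Illustrative stand-in for "data-driven" compression of a rendering signal:
# approximate a matrix of sampled transport values (rows = surface points,
# columns = light directions) with a truncated SVD and check the error.
import numpy as np

rng = np.random.default_rng(0)
points, dirs = 512, 128
# Synthetic smooth transport matrix: a few smooth modes plus mild noise.
basis = rng.standard_normal((points, 6))
modes = rng.standard_normal((6, dirs))
transport = basis @ modes + 0.01 * rng.standard_normal((points, dirs))

U, s, Vt = np.linalg.svd(transport, full_matrices=False)
rank = 6
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]

rel_err = np.linalg.norm(transport - approx) / np.linalg.norm(transport)
compression = (points * dirs) / (rank * (points + dirs))
print(f"rank-{rank} relative error: {rel_err:.3%}, compression: {compression:.1f}x")
```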
84
The Influence of Offsets on Real-time Performance in Switched Multihop Networks. Ramachandran, Ajit; Roy, Proshanta Kumar (January 2012)
High-performance real-time applications have become, and will continue to be, an integral part of today's world. With this comes the requirement to provide reliable communication networks for applications requiring real-time guarantees. The requirements vary with the specific application, and adapting to all of them is important. Ethernet is a commonly used communication medium in these real-time application networks because of its simplicity, which brings lower cost and higher bit rates. However, since Ethernet was not specifically designed for real-time applications, it has been under constant study in order to provide the QoS (Quality of Service) required by the application. In this thesis our aim is to provide a less pessimistic real-time analysis of packet-switched networks by using knowledge of the offsets introduced to the packets travelling in the network. We therefore consider a specific application with high real-time requirements, namely a radar application, and use the available data to simulate and analyze the network's performance under the use of offsets. The analysis is based on commonly used QoS metrics such as end-to-end delay, deadline miss ratio and link utilization.
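As a rough sketch of how such QoS metrics might be computed from a packet-level trace (the trace format and field names below are assumptions for illustration, not the simulator used in the thesis):

```python
# Minimal sketch: computing end-to-end delay statistics, deadline miss ratio,
# and link utilization from a hypothetical packet trace (field names and the
# trace itself are assumptions for illustration only).
from dataclasses import dataclass

@dataclass
class Packet:
    send_time: float      # time the packet enters the network (ms)
    recv_time: float      # time it is delivered (ms)
    deadline: float       # relative deadline (ms)
    bits: int             # packet size in bits

def qos_metrics(packets, link_capacity_bps, observation_ms):
    delays = [p.recv_time - p.send_time for p in packets]
    misses = sum(1 for p, d in zip(packets, delays) if d > p.deadline)
    total_bits = sum(p.bits for p in packets)
    return {
        "max_end_to_end_delay_ms": max(delays),
        "mean_end_to_end_delay_ms": sum(delays) / len(delays),
        "deadline_miss_ratio": misses / len(packets),
        "link_utilization": total_bits / (link_capacity_bps * observation_ms / 1000.0),
    }

trace = [Packet(0.0, 1.2, 2.0, 12000), Packet(0.5, 2.9, 2.0, 12000),
         Packet(1.0, 2.1, 2.0, 12000)]
print(qos_metrics(trace, link_capacity_bps=100e6, observation_ms=3.0))
```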
85
Tracking a tennis ball using image processing techniques. Mao, Jinzi (30 August 2006)
In this thesis we explore several algorithms for automatic real-time tracking of a tennis ball. We first investigate the use of background subtraction with color/shape recognition for fast tracking of the tennis ball. We then compare our solution with a cascade of boosted Haar classifiers [68] in a simulated environment to estimate the accuracy and ideal processing speeds. The results show that background subtraction techniques were not only faster but also more accurate than Haar classifiers. Following these promising results, we extend the background subtraction approach and develop three further improved techniques. These techniques use more accurate background models and more reliable, stringent criteria, allowing us to track the tennis ball in a real tennis environment with cameras of higher resolution and frame rate.

We tested our techniques with a large number of real tennis videos. In the indoor environment, we achieved a true positive rate of about 90%, a false alarm rate of less than 2%, and a tracking speed of about 20 fps. In the outdoor environment, performance was not as good as indoors due to the complexity and instability of the outdoor setting. The problem can be mitigated by setting up the system so that the camera focuses mainly on the tennis ball, minimizing the influence of external factors.

Despite the existing limitations, our techniques track a tennis ball with an accuracy and speed that most currently available tracking techniques cannot achieve. We are confident that the motion information generated by our techniques is reliable and accurate. Given these promising results, we believe real-world applications can be built on them.
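A minimal sketch of the kind of background-subtraction-plus-color pipeline described above is shown below; the HSV range, thresholds and video path are placeholder assumptions, not the tuned values from the thesis.

```python
# Minimal sketch of background subtraction combined with a color gate for
# ball tracking (HSV range, thresholds, and the video path are placeholders).
# Assumes OpenCV 4's two-value findContours return.
import cv2
import numpy as np

cap = cv2.VideoCapture("tennis.mp4")                    # placeholder path
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
lower, upper = (25, 80, 80), (45, 255, 255)             # rough tennis-ball yellow in HSV

while True:
    ok, frame = cap.read()
    if not ok:
        break
    motion = bg.apply(frame)                            # moving pixels
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    color = cv2.inRange(hsv, lower, upper)              # ball-colored pixels
    mask = cv2.bitwise_and(motion, color)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        (x, y), r = cv2.minEnclosingCircle(c)
        if 2 < r < 30:                                  # crude size/shape check
            cv2.circle(frame, (int(x), int(y)), int(r), (0, 0, 255), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```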
87
Global illumination and approximating reflectance in real-time. Nowicki, Tyler B. (10 April 2007)
Global illumination techniques are used to improve the realism of 3D scenes. Calculating accurate global illumination requires a method for solving the rendering equation; however, the integral form of this equation cannot be evaluated analytically in general. This thesis presents research in non-real-time illumination techniques that are evaluated with a finite number of light rays, including a new technique that improves the realism of the scene over traditional techniques.
All computer rendering requires distortion-free texture mapping to appear plausible to the eye. Inverse texture mapping, however, can be numerically unstable and computationally expensive. Alternative techniques for texture mapping and texture-coordinate generation were developed to simplify rendering.
Real-time rendering is improved by pre-calculating non-real-time reflections. The results of this research demonstrate that a polynomial approximation of reflected light can be more accurate than a constant approximation. The solution improves realism and makes use of new features in graphics hardware.
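As a small numerical illustration of the claim that a polynomial fit can outperform a constant one (the reflected-light curve below is synthetic, not data from the thesis):

```python
# Synthetic illustration: fit a constant and a low-order polynomial to a
# precomputed "reflected light vs. view angle" curve and compare the error.
import numpy as np

theta = np.linspace(0.0, np.pi / 2, 100)                 # view angle
reflected = 0.3 + 0.5 * np.cos(theta) ** 3               # made-up reflectance curve

const_fit = np.full_like(reflected, reflected.mean())    # constant approximation
poly_fit = np.polyval(np.polyfit(theta, reflected, deg=3), theta)

print("constant  max error:", np.max(np.abs(reflected - const_fit)))
print("degree-3  max error:", np.max(np.abs(reflected - poly_fit)))
```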
88
Optimization of Component Connections for an Embedded Component System. Azumi, Takuya; Takada, Hiroaki; Oyama, Hiroshi (29 August 2009)
No description available.
89
Planificación de Diferentes Clases de Aplicaciones en Entornos No Dedicados Considerando Procesadores Multicore (Scheduling Different Classes of Applications in Non-Dedicated Environments Considering Multicore Processors). García Gutiérrez, José Ramón (19 July 2010)
Today it is virtually impossible to find a large institution that does not have a considerable number of computers, given the high level of computerization of today's society. The enormous potential of these thousands of machines attracts a great deal of attention in science and industry, generating viable options for exploiting them. Universities, institutions that have historically remained at the forefront of research and scientific innovation, are particularly well positioned both to generate computing resources and to need them. The computing power available in university laboratories and study rooms, natural groupings of computing resources, creates great opportunities for parallel computing and encourages us to seek viable ways to exploit it. As a result of this interest, our group has created a scheduling environment focused on non-dedicated clusters.
The constant and rapid evolution of the components involved, at the level of the CPU architecture, the operating system and the applications being executed, forces us to adapt our proposals. Our proposal is to create a virtual machine with dual functionality: it runs the local user's workload and exploits idle periods of the nodes to execute parallel workloads. Both the types of applications and the hardware characteristics of the target scenario have evolved. New kinds of parallel applications with periodic CPU requirements are increasingly common in science and industry; such applications may require a specific turnaround time or a given Quality of Service (QoS). Because our environment is designed to work on non-dedicated clusters, knowledge of the local users' behaviour is particularly important: a local user may, for instance, be watching a video stored on their computer, which implies periodic CPU demands and increased memory use. New application types such as video on demand or virtual reality are characterized by the need to meet deadlines, presenting periodic resource requirements; applications for which missing a deadline is not considered a severe failure are known in the literature as periodic soft real-time (SRT) applications.
This evolution of user needs is not the only development worthy of attention. The growth in processor computing power has slowed in recent years because of the physical barriers of space and signal speed, forcing processor manufacturers to explore other avenues of growth. For some time now, application parallelism has been one of the main bets: dual-core processors are today the minimum configuration found in a computer, and the number of cores is expected to keep growing in the coming years. Non-dedicated clusters offer great potential for use, because the hardware resources are already available and the parallel computation is carried out alongside that of the local user.
Given this current scenario for non-dedicated clusters, with new desktop and parallel applications as well as more powerful and complex hardware platforms, investigating the problem and making proposals for scheduling the different kinds of applications on non-dedicated clusters, taking multicore platforms into account, is a new challenge for researchers and forms the core of this work.
90
A Link-Level Communication Analysis for Real-Time NoCs. Gholamian, Sina (January 2012)
This thesis presents a link-level latency analysis for real-time network-on-chip interconnects that use priority-based wormhole switching. The analysis incorporates both direct and indirect interference from other traffic flows, and it leverages pipelining and parallel transmission of data across the links. The resulting link-level analysis provides a tighter worst-case upper bound than existing techniques, which we verify with our analysis and simulation experiments. Our experiments show that, on average, link-level analysis reduces the worst-case latency by 28.8% and improves the number of schedulable flows by 13.2% compared to previous work.
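For context, the kind of flow-level worst-case latency analysis that such link-level work typically tightens is a fixed-point response-time iteration over directly interfering higher-priority flows; the sketch below is that classic baseline with made-up parameters, not the link-level analysis from the thesis.

```python
# Sketch of the classic flow-level worst-case latency iteration for
# priority-preemptive wormhole switching (the kind of baseline that a
# link-level analysis tightens). Flow parameters are made up; release
# jitter and indirect-interference terms are omitted for brevity.
import math
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    basic_latency: float      # C_i: no-contention traversal time
    period: float             # T_i: minimum inter-arrival time (deadline = period here)

def worst_case_latency(flow, direct_interferers, max_iters=100):
    """Fixed-point iteration: R = C + sum_j ceil(R / T_j) * C_j."""
    r = flow.basic_latency
    for _ in range(max_iters):
        r_next = flow.basic_latency + sum(
            math.ceil(r / j.period) * j.basic_latency for j in direct_interferers)
        if r_next == r:
            return r                      # converged: worst-case bound found
        if r_next > flow.period:
            return None                   # exceeds its period: deemed unschedulable
        r = r_next
    return None

f_low = Flow("low", basic_latency=2.0, period=20.0)
higher = [Flow("hi1", 1.0, 5.0), Flow("hi2", 0.5, 8.0)]
print(worst_case_latency(f_low, higher))  # prints 3.5 for these parameters
```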