71

Module 13: Tracing and Textures

Craig, Leendert 01 January 2022 (has links)
https://dc.etsu.edu/entc-2170-oer/1013/thumbnail.jpg
72

Functional Labeling of Individualized Post-Synaptic Neurons using Optogenetics and trans-Tango

Castaneda, Allison Nicole 11 July 2023 (has links)
Neural circuitry, or how neurons connect across brain regions to form functional units, is the fundamental basis of all brain processing and behavior. There are several neural circuit analysis tools available across different model organisms, but currently the field lacks a comprehensive method that can 1) target post-synaptic neurons using a pre-synaptic driver line, 2) assess post-synaptic neuron morphology, and 3) test behavioral response of the post-synaptic neurons in an isolated manner. This work presents FLIPSOT, or Functional Labeling of Individualized Post-Synaptic Neurons using Optogenetics and trans-Tango, a method developed to fulfill all three of these conditions. FLIPSOT uses a pre-synaptic driver line to drive trans-Tango, triggering heat-shock-dependent expression of post-synaptic optogenetic receptors. When heat-shocked for a suitable duration, optogenetic activation or inhibition is made possible in a randomized selection of post-synaptic cells, allowing testing and comparison of function. Finally, imaging of each brain confirms which neurons were targeted per animal, and analysis across trials can reveal which post-synaptic neurons are necessary and/or sufficient for the relevant behavior. FLIPSOT is then tested within Drosophila melanogaster to evaluate the necessity and sufficiency of post-synaptic neurons in the Drosophila Heating Cell circuit, a circuit that drives warmth avoidance behavior. FLIPSOT presents a new combinatory tool for evaluating the behavioral necessity and sufficiency of post-synaptic cells. The tool can easily be used to test many different behaviors and circuits through modification of the pre-synaptic driver line. Lastly, the success of this tool within flies paves the way for possible future adaptation in other model organisms, including mammals. / Doctor of Philosophy / The human brain is made up of billions of neurons, each of which is interconnected in various ways to allow communication. When a group of connected neurons works together to carry out a specific function, that group is known as a neural circuit. Neural circuits are the physical basis of brain activity, and different circuits are necessary for all bodily functions, including breathing, movement, regulation of sleep, memory, and all senses. Disruptions in neural circuits can be found in many brain-related diseases and disorders such as depression, anxiety, and Alzheimer's disease. One example of a neural circuit is that of temperature sensation. When someone holds a cube of ice, temperature-sensing neurons in the hand pass signals along neurons in the spine until they reach the brain. There, the signals are carried to various brain regions to be processed and recognized as cold, and eventually, pain. When the sensory signals of cold and pain grow too prominent to ignore, the person may move to avoid the feeling. In this case, the brain will send signals back down to neurons responsible for movement in the arm, allowing the person to drop the ice cube. Avoidance of temperatures that are too warm or too cold is an evolutionary trait that is important in protecting the body from harm. Even in a relatively simple system like temperature sensation, neural circuits can be complex and difficult to study, especially in higher-order organisms such as mammals. For this reason, it can be beneficial to use simpler animals such as Drosophila melanogaster, or the common fruit fly.
Flies have far fewer neurons than humans, meaning their neuronal connections are also significantly less complicated, and there are many genetic tools available in flies that aren't available in mammalian models such as mice. Additionally, flies are inexpensive, easy to raise, and grow quickly, making them ideal for troubleshooting new tools and replicating experiments. Though fly anatomy differs somewhat, fly brain function is similar enough to that of humans and other mammals that findings can often be applied across species. Studies in flies can also be applied in other insects, such as mosquitoes, which are notorious for carrying deadly diseases. Though there are several available tools in flies to study neural circuits, many are better suited for use in sensory neurons themselves than in the neurons that carry signals in the brain afterward. This work presents a new tool, abbreviated as FLIPSOT, that modifies and combines several existing genetic methods in order to help examine those higher-order neurons. FLIPSOT allows users to determine which higher-order neurons are important in leading to behavioral responses, as opposed to carrying the signal to other brain regions, such as those associated with memory. Then, FLIPSOT is implemented in a warmth-sensing neural circuit known as the Heating Cell (HC) circuit and used to identify the higher-order neurons needed for fly warmth avoidance. Development of tools such as FLIPSOT helps to expand our knowledge in the fields of neural circuits and behavior. Genetic tools can also be more easily tested in flies prior to attempting to implement them in other organisms, such as mice. Finally, studying temperature in flies can help create a deeper understanding of how temperature sensation works in all animals, including humans.
73

The Throw: An Introduction to Diagrammatics

Johnson, Ryan J. 21 April 2008 (has links)
No description available.
74

Practical Feedback and Instrumentation Enhancements for Performant Security Testing of Closed-source Executables

Nagy, Stefan 25 May 2022 (has links)
The Department of Homeland Security reports that over 90% of cyberattacks stem from security vulnerabilities in software, costing the U.S. $109 billion in damages in 2016 alone according to The White House. As NIST estimates that today's software contains 25 bugs for every 1,000 lines of code, the prompt discovery of security flaws is now vital to mitigating the next major cyberattack. Over the last decade, the software industry has overwhelmingly turned to a lightweight defect discovery approach known as fuzzing: automated testing that uncovers program bugs through repeated injection of randomly-mutated test cases. Academic and industry efforts have long exploited the semantic richness of open-source software to enhance fuzzing with fast and fine-grained code coverage feedback, as well as fuzzing-enhancing code transformations facilitated through lightweight compiler-based instrumentation. However, the world's increasing reliance on closed-source software (i.e., commercial, proprietary, and legacy software) demands analogous advances in automated security vetting beyond open-source contexts. Unfortunately, the semantic gaps between source code and opaque binary code leave fuzzing nowhere near as effective on closed-source targets. The difficulty of balancing coverage feedback speed and precision in binary executables leaves fuzzers frequently bottlenecked and orders-of-magnitude slower at uncovering security vulnerabilities in closed-source software. Moreover, the challenges of analyzing and modifying binary executables at scale leave closed-source software fuzzing unable to fully leverage the sophisticated enhancements that have long accelerated open-source software vulnerability discovery. As the U.S. Cybersecurity and Infrastructure Security Agency reports that closed-source software makes up over 80% of the top routinely exploited software today, combating the ever-growing threat of cyberattacks demands new practical, precise, and performant fuzzing techniques unrestricted by the availability of source code. This thesis answers the following research questions toward enabling fast, effective fuzzing of closed-source software: 1. Can common-case fuzzing insights be exploited to achieve low-overhead, fine-grained code coverage feedback irrespective of access to source code? 2. What properties of binary instrumentation are needed to extend performant fuzzing-enhancing program transformation to closed-source software fuzzing? In answering these questions, this thesis produces the following key innovations: A. The first code coverage techniques to enable fuzzing speed and code coverage greater than that of source-level fuzzing for closed-source software targets. (chapter 3) B. The first instrumentation platform to extend both compiler-quality code transformation and compiler-level speed to closed-source fuzzing contexts. (chapter 4) / Doctor of Philosophy / The Department of Homeland Security reports that over 90% of cyberattacks stem from security vulnerabilities in software, costing the U.S. $109 billion in damages in 2016 alone according to The White House. As NIST estimates that today's software contains 25 bugs for every 1,000 lines of code, the prompt discovery of security flaws is now vital to mitigating the next major cyberattack. Over the last decade, the software industry has overwhelmingly turned to lightweight defect discovery through automated testing, uncovering program bugs through the repeated injection of randomly-mutated test cases.
Academic and industry efforts have long exploited the semantic richness of open-source software (i.e., software whose full internals are publicly available, interpretable, and changeable) to enhance testing with fast and fine-grained exploration feedback, as well as testing-enhancing program transformations facilitated during the process by which program executables are generated. However, the world's increasing reliance on closed-source software (i.e., software whose internals are opaque to anyone but its original developer) like commercial, proprietary, and legacy programs demands analogous advances in automated security vetting beyond open-source contexts. Unfortunately, the challenges of understanding programs without their full source information leave testing nowhere near as effective on closed-source programs. The difficulty of balancing exploration feedback speed and precision in program executables leaves testing frequently bottlenecked and orders-of-magnitude slower at uncovering security vulnerabilities in closed-source software. Moreover, the challenges of analyzing and modifying program executables at scale leave closed-source software testing unable to fully leverage the sophisticated enhancements that have long accelerated open-source software vulnerability discovery. As the U.S. Cybersecurity and Infrastructure Security Agency reports that closed-source software makes up over 80% of the top routinely exploited software today, combating the ever-growing threat of cyberattacks demands new practical, precise, and performant software testing techniques unrestricted by the availability of programs' source code. This thesis answers the following research questions toward enabling fast, effective fuzzing of closed-source software: 1. Can common-case testing insights be exploited to achieve low-overhead, fine-grained exploration feedback irrespective of access to programs' source code? 2. What properties of program modification techniques are needed to extend performant testing-enhancing program transformations to closed-source programs? In answering these questions, this thesis produces the following key innovations: A. The first techniques enabling testing of closed-source programs with speed and exploration higher than on open-source programs. (chapter 3) B. The first platform to extend high-speed program transformations from open-source programs to closed-source ones. (chapter 4)
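As a rough illustration of the coverage-guided, mutational fuzzing loop both abstracts describe, the sketch below shows the core idea in Python. It is not code from the thesis: the target, its "coverage" signal, and the mutation operators are hypothetical placeholders standing in for real instrumentation feedback.

    import random

    def get_coverage(target, data):
        """Hypothetical stand-in for instrumentation-based coverage feedback.
        A real fuzzer collects edge coverage from compiler- or binary-level
        instrumentation; here the target itself reports the branches it took."""
        return frozenset(target(data))

    def mutate(data: bytes) -> bytes:
        """Toy mutation stage: randomly flip a bit, insert a byte, or delete one."""
        buf = bytearray(data) or bytearray(b"\x00")
        op = random.choice(("flip", "insert", "delete"))
        i = random.randrange(len(buf))
        if op == "flip":
            buf[i] ^= 1 << random.randrange(8)
        elif op == "insert":
            buf.insert(i, random.randrange(256))
        elif len(buf) > 1:
            del buf[i]
        return bytes(buf)

    def fuzz(target, seeds, iterations=10_000):
        """Coverage-guided loop: keep only mutants that reach new coverage."""
        corpus = list(seeds)
        seen = set()
        for inp in corpus:
            seen |= get_coverage(target, inp)
        for _ in range(iterations):
            candidate = mutate(random.choice(corpus))
            try:
                cov = get_coverage(target, candidate)
            except Exception:
                print("crashing input found:", candidate)  # a bug was uncovered
                continue
            if not cov <= seen:       # new coverage: add the input to the corpus
                seen |= cov
                corpus.append(candidate)
        return corpus

The thesis's contribution concerns making the coverage signal and program transformations cheap and precise for binary-only targets; the loop itself is the standard one sketched here.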
75

Optical and Thermal Radiative Simulation of an Earth Radiation Budget Instrument

Fronk, Joel Seth 08 June 2021 (has links)
Researchers at the NASA Langley Research Center (LaRC) are developing a next-generation instrument for monitoring the Earth radiation budget (ERB) from low Earth orbit. This instrument is called the DEMonstrating the Emerging Technology for measuring the Earth's Radiation (DEMETER) instrument. DEMETER is a candidate to replace the Clouds and Earth's Radiant Energy System (CERES) instruments, which currently monitor the ERB. LaRC has partnered with the Thermal Radiation Group at Virginia Tech to model and evaluate the thermal and optical design of the DEMETER instrument. The effort reported here deals with the numerical modeling of the optical and thermal radiative performance of the DEMETER instrument. The numerical model is based on the Monte Carlo Ray-Trace (MCRT) method. The major optical components of the instrument are incorporated into the ray-trace model using 3-D surface equations. A CAD model of the instrument baffle is imported directly into the ray-trace environment using an STL triangular mesh. The instrument uses a single freeform mirror to focus radiation on the detector. A method for incorporating freeform surfaces into a ray-trace model is described. The development and capabilities of the model are reported. The model is used to run several ray-traces to compare two different quasi-black surface coatings for the DEMETER telescope baffle. Included is a list of future tests the Thermal Radiation Group will use the model to accomplish. / Master of Science / For decades NASA has used satellite-mounted scientific instruments to monitor the Earth radiation budget (ERB). The ERB is the energy balance of the planet Earth with its surroundings. Radiation from the sun is absorbed and reflected by the Earth. The Earth also emits radiation. The balance between these heat transfer components drives the planetary climate. Researchers at the NASA Langley Research Center (LaRC) are developing a new instrument for monitoring the ERB from low Earth orbit. This Earth observing instrument is called the DEMonstrating the Emerging Technology for measuring the Earth's Radiation (DEMETER) instrument. NASA has partnered with the Thermal Radiation Group at Virginia Tech to model and evaluate the thermal and optical design of the DEMETER instrument. The effort reported here deals with the numerical modeling of radiation heat transfer in the DEMETER instrument. The numerical model uses the Monte Carlo Ray-Trace (MCRT) method to evaluate the thermal and optical behavior of the DEMETER instrument. The development and capabilities of the model are reported. The model is used to run a series of simulations to compare the performance of two different quasi-black surface coatings for the DEMETER telescope baffle. Included is a list of future tasks the Thermal Radiation Group will accomplish using the model.
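For readers unfamiliar with the Monte Carlo Ray-Trace (MCRT) method the abstract refers to, the sketch below shows the general idea: emit many diffuse rays in random directions and tally where they end up to estimate radiative exchange and baffle absorption. The geometry and coating values here are invented for illustration; the actual DEMETER model traces rays against CAD-derived baffle surfaces and freeform optics.

    import math
    import random

    def mcrt_tally(coating_absorptivity=0.96, n_rays=200_000, seed=0):
        """Toy MCRT tally: rays leave a flat source at the origin (facing +z)
        in cosine-weighted random directions; rays crossing a small detector
        disk at height h count as detected, and the rest strike the 'baffle'
        and are absorbed with the coating's absorptivity."""
        rng = random.Random(seed)
        r_det, h = 0.05, 0.5            # detector radius and height (made up, in m)
        detected = baffle_absorbed = 0
        for _ in range(n_rays):
            # Cosine-weighted hemisphere sampling models diffuse emission.
            u1, u2 = rng.random(), rng.random()
            sin_t, cos_t = math.sqrt(u1), math.sqrt(1.0 - u1)
            phi = 2.0 * math.pi * u2
            dx, dy, dz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
            t = h / dz                  # where the ray crosses the plane z = h
            x, y = dx * t, dy * t
            if x * x + y * y <= r_det * r_det:
                detected += 1           # ray reaches the detector aperture
            elif rng.random() < coating_absorptivity:
                baffle_absorbed += 1    # ray absorbed by the quasi-black coating
            # A full MCRT model would reflect or re-emit the remaining rays.
        return detected / n_rays, baffle_absorbed / n_rays

    # Comparing two candidate coatings by the fraction of stray rays they absorb:
    print(mcrt_tally(coating_absorptivity=0.96))
    print(mcrt_tally(coating_absorptivity=0.85))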
76

Novel Site-Specific Techniques for Predicting Radio Wave Propagation

Sheethalnath, Praveen T. 22 May 2001 (has links)
This thesis addresses various aspects related to site-specific propagation prediction using ray tracing techniques. Propagation prediction based on ray tracing techniques requires that all the different physical objects, which affect the propagation of radio waves, be modeled. The first part of the thesis concentrates on modeling the buildings and the terrain for the above-mentioned application. A survey of the various geographic products that are available to model the environment is presented. The different methods used to model the terrain are analyzed and the most suitable method for a ray-based application is suggested. A method to model the buildings in an environment from commercially available data is described. A novel method to combine the building information with the terrain information is presented. An in-depth discussion of deterministic propagation prediction using ray tracing is presented in the latter half of the thesis. An overview of the various ray-based algorithms that exist in the literature is presented, and the limitations and the computational complexity of ray-based methods are discussed. All ray-based algorithms model the receivers as point objects and predict the propagation characteristics at a particular point in space. However, to optimize the design of a wireless broadcast or a point-to-multipoint system such as a Wireless LAN (WLAN) or a cellular system, propagation characteristics at multiple points in space need to be known. The standard ray tracing algorithms can be notoriously time-consuming when used to predict the characteristics of multiple receivers. A new, computationally less intensive algorithm to predict the propagation characteristics of multiple receivers is described. This algorithm significantly reduces the computation time by using "grid mode" predictions for broadcast channels. / Master of Science
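To give a flavor of multi-receiver prediction, here is a simplified sketch that evaluates a textbook two-ray (direct plus ground-reflected) model over a grid of receiver points, loosely in the spirit of the "grid mode" idea mentioned above. It is not the thesis's site-specific ray-tracing engine, and the antenna heights, frequency, and grid spacing are arbitrary assumptions.

    import cmath
    import math

    def two_ray_gain(d, h_tx=10.0, h_rx=1.5, freq_hz=2.4e9):
        """Linear power gain at ground distance d (m) from a direct ray plus
        one ground-reflected ray with reflection coefficient -1."""
        lam = 3.0e8 / freq_hz
        d_los = math.hypot(d, h_tx - h_rx)   # direct (line-of-sight) path length
        d_ref = math.hypot(d, h_tx + h_rx)   # reflected path length (image method)
        k = 2.0 * math.pi / lam
        field = (cmath.exp(-1j * k * d_los) / d_los
                 - cmath.exp(-1j * k * d_ref) / d_ref)
        return (lam / (4.0 * math.pi)) ** 2 * abs(field) ** 2

    def coverage_grid(nx=50, ny=50, spacing=2.0):
        """Predict received power (dB relative to transmit power) over a whole
        grid of receiver points at once, rather than one point at a time."""
        grid = []
        for ix in range(1, nx + 1):
            row = []
            for iy in range(ny):
                d = max(math.hypot(ix * spacing, iy * spacing), 1.0)
                row.append(10.0 * math.log10(max(two_ray_gain(d), 1e-30)))
            grid.append(row)
        return grid

A site-specific tool replaces the closed-form two-ray sum with rays traced against the building and terrain models described in the first part of the thesis, but the grid-of-receivers structure is the same.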
77

Avaliação do algoritmo de "ray tracing" em multicomputadores. / Evaluation of the ray tracing algorithm in multicomputers.

Santos, Eduardo Toledo 29 June 1994 (has links)
A Computação Gráfica, área em franco desenvolvimento, tem caminhado em busca da geração, cada vez mais rápida, de imagens mais realísticas. Os algoritmos que permitem a síntese de imagens realísticas demandam alto poder computacional, fazendo com que a geração deste tipo de imagem, de forma rápida, requeira o uso de computadores paralelos. Hoje, a técnica que permite gerar as imagens mais realísticas é o "ray tracing". Os multicomputadores, por sua vez, são a arquitetura de computadores paralelos mais promissora na busca do desempenho computacional necessário às aplicações modernas. Esta dissertação aborda o problema da implementação do algoritmo de "ray tracing" em multicomputadores. A paralelização desta técnica para uso em computadores paralelos de memória distribuída pode ser feita de muitas formas diferentes, sempre envolvendo um compromisso entre a velocidade de processamento e a memória utilizada. Neste trabalho conceitua-se este problema e introduzem-se ferramentas para a avaliação de soluções que levam em consideração a eficiência de processamento e a redundância no uso de memória. Também é apresentada uma nova taxonomia que, além de permitir a classificação de propostas para implementações de "ray tracing" paralelo, orienta a procura de novas soluções para este problema. O desempenho das soluções em cada classe desta taxonomia é avaliado qualitativamente. Por fim, são sugeridas novas alternativas de paralelização do algoritmo de "ray tracing" em multicomputadores. / Computer Graphics is headed today towards the synthesis of more realistic images in less time. The algorithms used for realistic image synthesis demand high computer power, so that the synthesis of this kind of image, in short periods of time, requires the use of parallel computers. Nowadays, the technique that yields the most realistic images is ray tracing. In turn, multicomputers are the most promising parallel architecture for reaching the performance needed in modern applications. This dissertation is on the problem of implementing the ray tracing algorithm on multicomputers. The parallelization of this technique on distributed memory parallel computers can take several forms, always involving a compromise between speed and memory. In this work, this problem is conceptualized and tools for the evaluation of solutions that account for processing efficiency and memory-usage redundancy are introduced. A new taxonomy is also presented that can be used both for classifying parallel ray tracing proposals and for guiding the search for new solutions to this problem. The performance of the solutions in each class of the taxonomy is assessed qualitatively. Finally, new alternatives for parallelizing the ray tracing algorithm on multicomputers are suggested.
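One corner of the design space this dissertation maps out is a pure image-space decomposition: every node holds a full copy of the scene (maximum memory redundancy, minimal communication) and the pixels are divided among the nodes. Below is a rough sketch using mpi4py (an assumed dependency, with a dummy shading function standing in for a real ray tracer), not code from the dissertation.

    from mpi4py import MPI   # assumed MPI environment, one process per node

    def render_pixel(x, y, scene):
        """Placeholder for a full per-pixel ray trace of the replicated scene."""
        return (x * 31 + y * 17) % 256          # dummy shade value

    def render_image(width, height, scene=None):
        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        # Image-space decomposition: interleave scanlines across nodes.  Every
        # node keeps the whole scene in memory (high redundancy, no remote
        # object fetches), one extreme of the speed/memory trade-off above.
        my_rows = {y: [render_pixel(x, y, scene) for x in range(width)]
                   for y in range(rank, height, size)}
        gathered = comm.gather(my_rows, root=0)
        if rank == 0:
            image = {}
            for part in gathered:
                image.update(part)
            return [image[y] for y in range(height)]
        return None

    if __name__ == "__main__":
        # Run with, e.g., `mpiexec -n 4 python render_sketch.py`
        render_image(320, 240)

Object-space decompositions sit at the other extreme: less memory per node, but rays must be forwarded between nodes, which is exactly the trade-off the taxonomy classifies.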
78

Extensões ao algoritmo de 'RAY TRACING' parametrizado. / Extensions on the parameterized ray tracing algorithm.

Santos, Eduardo Toledo 01 July 1998 (has links)
Ray tracing é um algoritmo para a síntese de imagens por computador. Suas características principais são a alta qualidade das imagens que proporciona (incorporando sombras, reflexões e transparências entre outros efeitos) e, por outro lado, a grande demanda em termos de processamento. O ray tracing parametrizado é um algoritmo baseado no ray tracing, que permite a obtenção de imagens com a mesma qualidade a um custo computacional dezenas de vezes menor, porém com restrições. Estas restrições são a necessidade de geração de um arquivo de dados inicial, cujo tempo de processamento é pouco maior que o do ray tracing convencional, e a impossibilidade de alteração de qualquer parâmetro geométrico da cena. Por outro lado, a geração de versões da mesma cena com mudanças nos parâmetros ópticos (cores, intensidades de luz, texturas, reflexões, transparências, etc.) é extremamente rápida. Esta tese propõe extensões ao algoritmo de ray tracing parametrizado, procurando aliviar algumas de suas restrições. Estas extensões permitem alterar alguns parâmetros geométricos como a posição das fontes de luz, parâmetros de fontes de luz spot e mapeamento de relevo, entre outros, mantendo o bom desempenho do algoritmo original. Também é estudada a paralelização do algoritmo e outras formas de aceleração do processamento. As extensões propostas permitem ampliar o campo de aplicação do algoritmo original, incentivando sua adoção mais generalizada. / Ray tracing is a computer algorithm for image synthesis. Its main features are the high quality of the generated images (which incorporate shadows, reflections and transparency, among other effects) and, on the other hand, a high processing demand. Parameterized ray tracing is an algorithm based on ray tracing which allows the synthesis of images with the same quality but tens of times faster than ray tracing, although with some restrictions. These restrictions are the requirement of generating a data file (which takes a little longer than standard ray tracing to create) and the fact that no geometric modifications are allowed. On the other hand, the processing time for creating new versions of the image with changes only to optical parameters (colors, light intensities, textures, reflections, transparencies, etc.) is extremely short. This Ph.D. dissertation proposes extensions to the parameterized ray tracing algorithm that ease some of its restrictions. These extensions allow changing some geometric parameters, such as light source positions, spotlight parameters, and bump mapping, among others, while keeping the processing performance of the original algorithm. The parallelization of the algorithm is also studied, as well as other forms of performance enhancement. The proposed extensions enlarge the field of application of the original algorithm, encouraging more general adoption.
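A rough way to picture the parameterized approach (a toy sketch with invented data structures, not the algorithm from the thesis): an expensive first pass stores, per pixel, which surface was hit and the geometric shading terms, and later passes re-shade from that cache with new optical parameters. This is why changing colors or light intensities is cheap, while moving geometry would invalidate the cached hits.

    from dataclasses import dataclass

    @dataclass
    class HitRecord:
        """Per-pixel cache entry: the object a ray hit plus a geometric shading
        term (a single Lambertian n.l factor here) that does not depend on the
        optical parameters."""
        object_id: int
        cos_factor: float

    def bake_pass(width, height, trace_pixel):
        """Expensive pass (ordinary ray tracing): store hit records, not colors."""
        return [[trace_pixel(x, y) for x in range(width)] for y in range(height)]

    def reshade(cache, materials, light_intensity):
        """Cheap pass: recompute colors from the cached hits for new optical
        parameters (material colors, light intensity) without tracing any rays."""
        return [[tuple(light_intensity * hit.cos_factor * c
                       for c in materials[hit.object_id])
                 for hit in row]
                for row in cache]

    # Example: one baked cache, two different "looks" without re-tracing.
    cache = bake_pass(4, 3, lambda x, y: HitRecord(object_id=0, cos_factor=0.8))
    red_scene = reshade(cache, materials={0: (1.0, 0.2, 0.2)}, light_intensity=1.0)
    dim_scene = reshade(cache, materials={0: (1.0, 0.2, 0.2)}, light_intensity=0.3)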
79

The Study of Energy Consumption of Acceleration Structures for Dynamic CPU and GPU Ray Tracing

Chang, Chen Hao Jason 08 January 2007 (has links)
Battery life has been the slowest-growing resource on mobile systems for several decades. Although much work has been done on designing new chips and peripherals that use less energy, there has not been much work on reducing energy consumption by removing energy-intensive tasks from graphics algorithms. In our work, we focus on energy consumption of the ray tracing task because it is a resource-intensive, global-illumination algorithm. We focus our effort on ray tracing dynamic scenes; thus, we concentrate on identifying the major elements determining the energy consumption of acceleration structures. We believe acceleration structures are critical in reducing energy consumption because they need to be built inexpensively, but must also be complex enough to boost rendering speed. We conducted tests on a 1.6 GHz Pentium laptop with a GeForce Go 6800 GPU. In our experiments, we investigated various elements that modify the acceleration structure build algorithm, and we compared the energy usage of CPU and GPU rendering with different acceleration structures. Furthermore, the energy per frame when ray tracing dynamic scenes was gathered and compared to identify the best acceleration structure that provides a good balance between building energy consumption and rendering energy consumption. We found the bounding volume hierarchy to be the best acceleration structure when rendering dynamic scenes with the GPU on our test system. A bounding volume hierarchy is not the cheapest structure to build, but it can be rendered cheaply on the GPU while introducing acceptable energy overhead when rebuilding. In addition, we found the fastest algorithm was also the least expensive in terms of energy consumption. We propose an energy model based on this finding.
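The trade-off measured in this study can be summarized with a small accounting model (a sketch of the kind of per-frame energy bookkeeping described, with invented numbers rather than the thesis's measurements): for dynamic scenes the acceleration structure is rebuilt every frame, so the per-frame energy is the build cost plus the render cost.

    def energy_per_frame(build_joules, render_joules):
        """For dynamic scenes the acceleration structure is rebuilt each frame,
        so per-frame energy is simply build + render."""
        return build_joules + render_joules

    def pick_structure(candidates, n_frames=1):
        """Choose the structure minimizing total energy over n_frames.
        `candidates` maps a name to (build_joules, render_joules) per frame;
        the numbers below are made up purely for illustration."""
        return min(candidates,
                   key=lambda name: n_frames * energy_per_frame(*candidates[name]))

    example = {
        "uniform grid":              (0.4, 3.0),
        "kd-tree":                   (2.5, 1.2),
        "bounding volume hierarchy": (1.0, 1.5),
    }
    print(pick_structure(example))   # -> "bounding volume hierarchy" for these numbers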
