About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
411

Designing In-Headset Authoring Tools for Virtual Reality Video

Nguyen, Cuong 07 December 2017 (has links)
Virtual Reality (VR) video is emerging as a new art form. Viewing VR video requires wearing a VR headset to fully experience the immersive surroundings of the content. However, the novel viewing experience of VR video creates new challenges and requirements for conventional video authoring tools, which were designed mainly for working with normal video on a desktop display. Designing effective authoring tools for VR video requires intuitive video interfaces specific to VR. This dissertation develops new workflows and systems that enable filmmakers to create and improve VR video while fully immersed in a VR headset. We introduce a series of authoring tools that enable filmmakers to work with video in VR: 1) Vremiere, an in-headset video editing application that enables editors to edit VR video entirely in the headset; 2) CollaVR, a networked system that enables multiple users to collaborate and review video together in VR; and 3) a set of techniques to assist filmmakers in managing and accessing interfaces in stereoscopic VR video without suffering depth conflicts. The design of these applications is grounded in existing practices and principles learned in interviews with VR professionals. A series of studies is conducted to evaluate these systems, and the results demonstrate the potential of in-headset video authoring.
412

Modeling the performance of many-core programs on GPUs with advanced features

Pei, Mo Mo January 2012 (has links)
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science
413

Designing low power SRAM system using energy compression

Nair, Prashant 10 April 2013 (has links)
The power consumption of commercial processors and application-specific integrated circuits increases with decreasing technology nodes. Power-saving techniques have become a first-class design point for current and future VLSI systems. These systems employ large on-chip SRAM memories. Reducing memory leakage power while maintaining data integrity is a key criterion for modern systems. Unfortunately, state-of-the-art techniques like power gating can only be applied to logic, as they would destroy the contents of the memory if applied to an SRAM system. Fortunately, previous works have noted large temporal and spatial locality in the data patterns of commercial processors as well as application-specific ICs that work on image, audio and video data. This thesis presents a novel column-based Energy Compression technique that saves SRAM power by selectively turning off cells based on a data pattern. The technique is applied to study the power savings in application-specific integrated circuit SRAM memories and can also be applied to commercial processors. The thesis also evaluates the effects of processing images before storage and of data cluster patterns on the achievable power savings.
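The column-based idea can be illustrated with a small software sketch (an analogy only, not the thesis's circuit design; the function name and the all-cells-match criterion are assumptions): bit columns whose cells all hold a known background value could in principle be power-gated, with the background value re-synthesized on read.

```python
def compressible_columns(words, background_bit=0):
    """Return indices of bit columns in which every stored word holds the
    background bit; such columns are candidates for being turned off."""
    if not words:
        return []
    width = max(w.bit_length() for w in words)
    cols = []
    for b in range(width):
        if all(((w >> b) & 1) == background_bit for w in words):
            cols.append(b)
    return cols
```

For a memory holding the words `0b101`, `0b001`, `0b100`, only bit column 1 is all-zero, so only that column qualifies.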
414

Statistical methods for coupling expert knowledge and automatic image segmentation and registration

Kolesov, Ivan A. 20 December 2012 (has links)
The objective of the proposed research is to develop methods that couple an expert user's guidance with automatic image segmentation and registration algorithms. Often, complex processes such as fire, anatomical changes/variations in human bodies, or unpredictable human behavior produce the target images; in these cases, creating a model that precisely describes the process is not feasible. A common solution is to make simplifying assumptions when performing detection, segmentation, or registration tasks automatically. However, when these assumptions are not satisfied, the results are unsatisfactory. Hence, removing these often stringent assumptions at the cost of minimal user input is considered an acceptable trade-off. Three milestones towards reaching this goal have been achieved. First, an interactive image segmentation approach was created in which the user is coupled in a closed-loop control system with a level set segmentation algorithm. The user's expert knowledge is combined with the speed of automatic segmentation. Second, a stochastic point set registration algorithm is presented. The point sets can be derived from simple user input (e.g., a thresholding operation), and time-consuming correspondence labeling is not required. Furthermore, common smoothness assumptions on the non-rigid deformation field are removed. Third, a stochastic image registration algorithm is designed to capture large misalignments. For future research, several improvements to the registration are proposed, and an iterative, landmark-based segmentation approach that couples segmentation and registration is envisioned.
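For contrast with the correspondence-free stochastic approach the abstract describes, the classical baseline it improves on is easy to sketch: when correspondences *are* known, the least-squares translation between two point sets is just the difference of their centroids (a minimal illustrative example, not the thesis's algorithm).

```python
def centroid(points):
    """Mean of a list of 2D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def estimate_translation(source, target):
    """Least-squares translation aligning two point sets with known
    one-to-one correspondences: the difference of their centroids."""
    cs, ct = centroid(source), centroid(target)
    return (ct[0] - cs[0], ct[1] - cs[1])
```

The thesis's contribution is precisely that this kind of correspondence labeling, and smoothness assumptions on non-rigid deformations, are not required.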
415

Image Segmentation and Shape Analysis of Blood Vessels with Applications to Coronary Atherosclerosis

Yang, Yan 22 March 2007 (has links)
Atherosclerosis is a systemic disease of the vessel wall that occurs in the aorta, carotid, coronary and peripheral arteries. Atherosclerotic plaques in coronary arteries may cause the narrowing (stenosis) or complete occlusion of the arteries and lead to serious results such as heart attacks and strokes. Medical imaging techniques such as X-ray angiography and computed tomography angiography (CTA) have greatly assisted the diagnosis of atherosclerosis in living patients. Analyzing and quantifying vessels in these images, however, is an extremely laborious and time-consuming task if done manually. A novel image segmentation approach and a quantitative shape analysis approach are proposed to automatically isolate the coronary arteries and measure important parameters along the vessels. The segmentation method is based on the active contour model using the level set formulation. Regional statistical information is incorporated in the framework through Bayesian pixel classification. A new conformal factor and an adaptive speed term are proposed to counter the problems of contour leakage and narrowed vessels resulting from conventional geometric active contours. The proposed segmentation framework is tested and evaluated on a large set of 2D and 3D data, including synthetic and real 2D vessels, 2D non-vessel objects, and eighteen 3D clinical CTA datasets of coronary arteries. The vessel centerlines are extracted using a harmonic skeletonization technique based on the level contour sets of the harmonic function, which is the solution of the Laplace equation on the triangulated surface of the segmented vessels. The cross-sectional areas along the vessels can be measured while the centerline is being extracted. Local cross-sectional areas can be used as a direct indicator of stenosis for diagnosis. A comprehensive validation is performed using digital phantoms and real CTA datasets.
This study provides the possibility of fully automatic analysis of coronary atherosclerosis from CTA images, and has the potential to be used in a real clinical setting along with a friendly user interface. Compared to manual segmentation, which takes approximately an hour for a single dataset, the automatic approach takes on average less than five minutes and gives more consistent results across datasets.
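As a worked example of using local cross-sectional areas as a stenosis indicator, the common percent-stenosis convention compares the narrowest measured area against a reference area (this is a standard clinical definition used for illustration, not necessarily the thesis's exact formula):

```python
def stenosis_percent(areas, reference=None):
    """Percent stenosis from cross-sectional areas sampled along a vessel:
    100 * (1 - A_min / A_ref). If no reference area is given, the largest
    sampled area stands in for the healthy lumen."""
    ref = reference if reference is not None else max(areas)
    return 100.0 * (1.0 - min(areas) / ref)
```

A vessel whose lumen narrows from 4.0 mm² to 2.0 mm² would thus be reported as 50% stenosed.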
416

Mathematical approaches to digital color image denoising

Deng, Hao 14 September 2009 (has links)
Many mathematical models have been designed to remove noise from images. Most of them focus on grey-value images with additive artificial noise; only very few specifically target natural color photos taken by a digital camera with real noise. Noise in natural color photos has special characteristics that are substantially different from noise that has been added artificially. In this thesis, previous denoising models are reviewed, and the strengths and weaknesses of existing models are analyzed by showing where they perform well and where they do not. Special focus is put on two models: the steering kernel regression model and the non-local model. For the kernel regression model, an adaptive bilateral filter is introduced as a complement to enhance it. A non-local bilateral filter is also proposed as an application of the idea of the non-local means filter. The idea of cross-channel denoising is then proposed in this thesis. It is effective in denoising monochromatic images by understanding the characteristics of digital noise in natural color images. A non-traditional color space is also introduced specifically for this purpose. The cross-channel paradigm can be applied to most of the existing models to greatly improve their performance in denoising natural color images.
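A minimal sketch of the bilateral idea that the adaptive filter builds on (1D and grayscale for brevity; parameter names and defaults are illustrative, and this is not the thesis's adaptive or non-local variant): each sample is replaced by a weighted average of its neighbors, where the weight decays with both spatial distance and intensity difference, so edges are preserved while flat regions are smoothed.

```python
import math

def bilateral_1d(signal, sigma_s=1.0, sigma_r=0.5, radius=2):
    """Toy 1D bilateral filter: Gaussian weights on spatial closeness
    times Gaussian weights on intensity similarity."""
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

With a small range sigma, a step edge survives filtering almost untouched, which is exactly the behavior that distinguishes bilateral filtering from plain Gaussian smoothing.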
417

Delay sensitive delivery of rich images over WLAN in telemedicine applications

Sankara Krishnan, Shivaranjani. January 2009 (has links)
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Jayant, Nikil; Committee Member: Altunbasak, Yucel; Committee Member: Sivakumar, Raghupathy. Part of the SMARTech Electronic Thesis and Dissertation Collection.
418

Visual intention detection algorithm for wheelchair motion.

Luhandjula, Thierry Kalonda. January 2012 (has links)
D. Tech. Electrical Engineering. / Proposes a vision-based solution for intention recognition of a person from the motions of the head and the hand. This solution is intended to be applied in the context of wheelchair-bound individuals, whose intentions of interest are the wheelchair's direction and speed variation, indicated by a rotation and a vertical motion respectively. Both head-based and hand-based solutions are proposed as an alternative to solutions using joysticks, pneumatic switches, etc.
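The mapping the abstract describes (rotation indicates direction, vertical motion indicates speed) can be caricatured as a decision rule on a tracked motion vector. This is a toy sketch only: the threshold, axis conventions, and label names are assumptions, and the thesis's actual vision pipeline is far richer.

```python
def classify_intent(dx, dy, threshold=0.5):
    """Map a tracked head/hand displacement (dx = horizontal rotation
    component, dy = vertical component) to a coarse wheelchair command."""
    if abs(dx) < threshold and abs(dy) < threshold:
        return "none"
    if abs(dx) >= abs(dy):
        return "turn_right" if dx > 0 else "turn_left"
    return "speed_up" if dy > 0 else "slow_down"
```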
419

A high resolution digital system for automated aerial surveying.

Coleman, Andrew Stuart. January 2000 (has links)
Resource managers frequently require moderate to high resolution imagery within short turnaround periods for use in a GIS-based management system. These spatial data can greatly enhance their ability to make timely, cost-saving decisions and recommendations. MBB Consulting Engineers, Inc., of Pietermaritzburg, South Africa, had for many years made use of airborne videography to provide the imagery for several resource-based applications. Applications included detailed land use mapping in various South African river catchments and the identification, density classification and mapping of alien vegetation. While the system was low cost and easy to operate, MBB had found that their system was inherently limited, particularly by its lack of automation and poor spatial resolution. This project was started because of a need to address these limitations and provide an airborne remote sensing system that was more automated and could produce higher resolution imagery than the existing system. In addition, the overall cost and time required to produce a map of the resource of interest needed to be reduced. The system developed in this project aimed to improve upon the pre-flight planning and in-flight image acquisition aspects of the existing system. No new post-flight image processing procedures were developed, but possible future refinement of the post-flight image processing routine was considered throughout the development of the system. A pre-flight planning software package was developed that could quickly and efficiently calculate the positions of flight lines and photographs or images with a minimum of user input. The in-flight image acquisition setup developed involved the integration of a high resolution digital still camera, a Global Positioning System (GPS), and camera control software.
The use of the rapidly developing and improving technology of the digital still camera was considered to be a better alternative than a videographic or traditional film camera system for a number of reasons. In particular, digital still cameras produce digital imagery without the need for development and scanning of aerial photographs or frame grabbing of video images. Furthermore, the resolution of current digital still cameras is already significantly better than that of video cameras and is rivalling the resolution of 35mm film. The system developed was tested by capturing imagery of an urban test area. The images obtained were then rectified using photogrammetric techniques. Results were promising, with planimetric accuracies of 5 to 10 m being obtained. From this test it was concluded that for high accuracy applications involving numerous images, use would be made of softcopy photogrammetric software to semi-automatically position and rectify images, while for applications requiring fewer images and lower accuracy, images could be rectified using the simpler technique of assigning GCPs for each image from scanned orthophotos. / Thesis (M.Sc.)--University of Natal, Pietermaritzburg, 2000.
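The core arithmetic behind such pre-flight planning can be sketched with the standard photogrammetric relations (the parameter names are assumptions; the thesis's actual software is not reproduced here): ground sample distance follows from similar triangles through the lens, and flight-line spacing follows from the image footprint and the desired sidelap.

```python
def ground_sample_distance(pixel_size_m, focal_length_m, altitude_m):
    """Ground distance covered by one pixel, by similar triangles:
    GSD = pixel_size * altitude / focal_length."""
    return pixel_size_m * altitude_m / focal_length_m

def flight_line_spacing(footprint_width_m, sidelap=0.3):
    """Distance between adjacent flight lines for a given sidelap
    fraction: spacing = footprint * (1 - sidelap)."""
    return footprint_width_m * (1.0 - sidelap)
```

For example, a 9 µm pixel behind a 90 mm lens flown at 1000 m yields a 10 cm GSD, and a 1 km image footprint with 30% sidelap gives flight lines 700 m apart.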
420

Visualising the invisible :articulating the inherent features of the digital image

McQuade, Patrick John, Art, College of Fine Arts, UNSW January 2007 (has links)
Contemporary digital imaging practice has largely adopted the visual characteristics of its closest mediatic relative, the analogue photograph. In this regard, new media theorist Lev Manovich observes that "Computer software does not produce such images by default. The paradox of digital visual culture is that although all imaging is becoming computer-based, the dominance of photographic and cinematic imagery is becoming even stronger. But rather than being a direct, "natural" result of photo and film technology, these images are constructed on computers" (Manovich 2001: 179). Manovich articulates the disjuncture between the technical processes involved in the digital image creation process and the visual characteristics of the final digital image, with its replication of the visual qualities of the analogue photograph. This research addresses this notion further by exploring the following questions. What are the defining technical features of these computer-based imaging processes? Could these technical features be used as a basis for developing an alternative aesthetic for the digital image? Why is there a reticence to visually acknowledge these technical features in contemporary digital imaging practice? Are there historic mediated precedents where the inherent technical features of the medium are visually acknowledged in the production of imagery? If these defining technical features of the digital imaging process were visually acknowledged in the image creation process, what would be the outcome?
The studio practice component of the research served as a foundation for the author's artistic and aesthetic development, where the intent was to investigate and highlight four technical qualities of the digital image identified through case studies of three digital artists and other secondary sources. These technical qualities include: the composite RGB colour system of the digital image as it appears on screen; the pixellated microstructure of the digital image; the luminosity of the digital image as it appears on a computer monitor; and the underlying numeric and (ASCII-based) alphanumeric codes of the image file, which enable that most defining feature of the image file, its programmability. Based on research in the visualization of these numeric and alphanumeric codes, digital images of bacteria produced with a scanning electron microscope were chosen as image content for an experimental body of work, to draw the conceptual link between the numeric and alphanumeric codes of the image file and the coded genetic sequence of an individual bacterial entity.
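The layered encodings the research foregrounds (composite RGB channel values, the underlying numeric codes, and their alphanumeric text form) can be demonstrated for a single pixel. This is an illustrative sketch; the function name is an assumption and no particular image file format is implied.

```python
def pixel_as_codes(r, g, b):
    """Expose the encodings behind one RGB pixel: the three channel
    values, the packed 24-bit integer, and the hexadecimal text form."""
    packed = (r << 16) | (g << 8) | b          # one number per pixel
    return {"channels": (r, g, b),
            "packed": packed,
            "hex": f"#{packed:06x}"}           # the alphanumeric code
```

It is this numeric substrate, rather than the photographic surface, that makes the digital image programmable.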
