1081

Internet usage for improvement of learning

Chanboualapha, Sonexay, Islam, Md. Rofiqul January 2012 (has links)
The use of the internet is increasing rapidly, especially for learning in the fields of education and informatics. Identifying and analysing internet usage for learning improvement therefore needs to be investigated with students, and developed and developing countries need to be compared in order to establish whether use of the internet improves students' learning. Our research examined students in one developed country (Sweden) and one developing country (Laos) to identify and analyse the relationship between internet usage and students' learning. We collected data through a quantitative questionnaire survey and analysed the relationship using correlation analysis. The findings indicated that internet usage has a positive relationship with students' learning: more internet use was associated with higher grades. However, both the use of modern technology and students' learning are markedly higher in the developed country than in the developing country. We therefore conclude that internet usage has a positive relationship with students' learning. / Program: Masterutbildning i Informatik
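The correlation analysis this abstract describes can be sketched as follows; the survey numbers and the helper function are illustrative assumptions, not the study's actual data or code.

```python
# Hypothetical sketch of the correlation analysis described above:
# Pearson's r between self-reported internet-usage scores and grades.
# All numbers below are made up for illustration, not the study's data.
import math

usage = [2, 4, 5, 3, 6, 7, 5, 8]            # hours of study-related internet use per week
grades = [55, 62, 70, 58, 75, 80, 68, 85]   # course grade (0-100)

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(usage, grades)
print(round(r, 3))  # a value near +1 indicates a strong positive relationship
```

A value of r close to +1 on such data is what "higher use of the internet, higher grade" means in correlational terms; it does not by itself establish causation.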
1082

Information Technology For E-learning in Developing Countries

Bukhari, Rabia Arfin January 2011 (has links)
E-learning is a rapidly emerging concept facilitating learners in the field of education. Continuous advancements in information technology are enhancing the possibilities for its growth. Developed countries have realised its strength and adopted it warmly, but in developing countries it is still a new concept, and there are many limitations to its implementation and growth there. In my research I have identified the core limitations associated with the growth of E-learning in developing countries and found some possible solutions. I selected different subject areas that can support answering my research questions. In the textual analysis I found that various cultural, technological and awareness problems create obstacles to its implementation. In the empirical survey these problems were verified with students and teachers who are associated with E-learning and would like to see it implemented in developing countries. In my research findings I show how information technology can help enhance the possibilities of E-learning, and identify how the sub-systems of E-learning can support its growth.
1083

A GRAPHICAL USER INTERFACE FOR LARGE-SCALE GENE EXPRESSION ANALYSIS

Thi Cam Thach, Doan January 2011 (has links)
Whole-genome expression analysis, which analyses most or all of the genes in a biological system and is a rich and powerful way to discover gene pathways, has recently become increasingly affordable because of the growing amount of microarray data available in public databases. However, given the enormous amount of information in these repositories, researchers have to spend a large amount of time deciding which data to proceed with. An application is needed to assist biological researchers in reducing the time spent finding good data sets to analyse. In this project, a thorough study of HCI, information visualization, interaction design and development methodologies is carried out in order to build a web-based user interface that enables searching and browsing gene expression data and their correlations. Findings from the literature review are applied to create a web-based user interface for large-scale gene expression analysis. A survey is then carried out to collect and analyse pilot users' feedback. The questionnaire shows that the users are very interested in using the system and would like to spend more time interacting with it. They give positive feedback that the interactive data visualization on the website helps them save time viewing, navigating and interpreting complicated data, and that the system is easy to navigate and learn in order to reach interesting findings in biology. The questionnaire thus shows that the author succeeded in applying findings from the literature review to build the website. The results also suggest improvements, such as greater flexibility through automatically recognizing alias gene names from different databases, filling in gene symbols from the first few characters, and narrowing a search down to a particular species such as human or rat. / Program: Masterutbildning i Informatik
1084

Eye Tracking’s Impact on Player Performance and Experience in a 2D Space Shooter Video Game.

Arredal, Martin January 2018 (has links)
Background. Although a growing market, most commercially available games today that feature eye tracking support are rendered in a 3D perspective. Games rendered in 2D have seen little support for eye trackers from developers. By comparing the differences in player performance and experience between an eye tracker and a computer mouse when playing a classic 2D genre, the space shooter, this thesis aims to make an argument for the implementation of eye tracking in 2D video games. Objectives. Create a 2D space shooter video game where movement is handled through a keyboard while the input method for aiming alternates between a computer mouse and an eye tracker. Methods. Using a Tobii EyeX eye tracker, an experiment was conducted with fifteen participants. To measure their performance, three variables were used: accuracy, completion time and collisions. The participants played two modes of a 2D space shooter video game in a controlled environment. Depending on which mode was played, the input method for aiming was either an eye tracker or a computer mouse. Movement was handled using a keyboard in both modes. When the modes had been completed, a questionnaire was presented in which the participants rated their experience playing the game with each input method. Results. The computer mouse performed better in two of the three performance variables: on average it had better accuracy and completion time, but more collisions. However, the data gathered from the questionnaire show that the participants had, on average, a better experience when playing with the eye tracker. Conclusions. The results from the experiment show better performance for participants using the computer mouse, but participants felt more immersed with the eye tracker, giving it a better score in all experience categories. With these results, this study hopes to encourage developers to implement eye tracking as an interaction method for 2D video games. However, future work is necessary to determine whether the experience and performance increase or decrease as playtime gets longer.
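The three performance variables named in the Methods section (accuracy, completion time, collisions) could be tallied per session along these lines; the `Session` class and all numbers here are hypothetical, chosen only to mirror the trend the abstract reports.

```python
# Illustrative sketch (not the thesis's code) of the three performance
# metrics used in the experiment: accuracy, completion time, collisions.
from dataclasses import dataclass

@dataclass
class Session:
    input_method: str       # "mouse" or "eye_tracker"
    shots_fired: int
    shots_hit: int
    completion_time_s: float
    collisions: int

    @property
    def accuracy(self) -> float:
        """Fraction of shots that hit a target."""
        return self.shots_hit / self.shots_fired if self.shots_fired else 0.0

# Hypothetical numbers mirroring the reported trend: the mouse wins on
# accuracy and completion time, the eye tracker has fewer collisions.
mouse = Session("mouse", shots_fired=120, shots_hit=96, completion_time_s=95.0, collisions=7)
gaze = Session("eye_tracker", shots_fired=120, shots_hit=78, completion_time_s=110.0, collisions=4)

print(f"{mouse.input_method}: acc={mouse.accuracy:.2f}")
print(f"{gaze.input_method}: acc={gaze.accuracy:.2f}")
```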
1085

Flutes, Pianos, and Machines: Compositions for Instruments and Electronic Sounds

Jacobs, Bryan Charles January 2015 (has links)
This dissertation comprises three recent original compositions: Dis Un Il Im Ir, In sin fin bin din bin fin sin in, and Percussion+Guitar. Each work takes a unique approach to integrating instrumental performance by humans and computers. The essay component details unique computer-performer interactions I have developed to overcome complications in the concert presentation of previous acousmatic and mixed-media works. The three works discussed here are related in their instrumentation and compositional style. Dis Un Il Im Ir (2013), for flute, piano, and MIDI keyboard, experiments with the limits of human virtuosity and attempts to extend its affect via sound synthesis and digital samples. In sin fin bin din bin fin sin in (2014), for four computer-controlled pianos with electronic sounds, focuses on repeated melodic and harmonic patterns explored in previous works, contrasted with unruly mechanical spasms. Percussion+Guitar (2015), for two computer-controlled flutes (contrary to the title), features a specially designed instrument built at Columbia University's Computer Music Center. This composition is a duet whose structure is defined by heightened rhythmic angularity and blazingly fast speeds demonstrating the computer's special skills as a performer. The essay part of this dissertation includes an analysis of the pitches, rhythms, and gestures where appropriate. I provide details about the artistic uses of software and hardware for each project. I trace my artistic inspirations for composing with and for computers and robots to my experiences in acousmatic music, pop production, and hands-on music making. I describe my process of organizing contrasting sounds into form-bearing elements, an approach inspired by Pierre Schaeffer's typomorphology of sound objects, later revisited by Lasse Thoresen. The paper concludes with a brief discussion of future works.
1086

Unmediated Interaction: Communicating with Computers and Embedded Devices as If They Are Not There

Smith, Brian Anthony January 2018 (has links)
Although computers are smaller and more readily accessible today than they have ever been, I believe that we have barely scratched the surface of what computers can become. When we use computing devices today, we end up spending a lot of our time navigating to particular functions or commands to use devices their way rather than executing those commands immediately. In this dissertation, I explore what I call unmediated interaction: the notion of people using computers as if the computers are not there, and as if the people are using their own abilities or powers instead. I argue that facilitating unmediated interaction via personalization, new input modalities, and improved text entry can reduce both input overhead and output overhead, which are the burden of providing inputs to and receiving outputs from the intermediate device, respectively. I introduce three computational methods for reducing input overhead and one for reducing output overhead. First, I show how input data mining can eliminate the need for user inputs altogether. Specifically, I develop a method for mining controller inputs to gain deep insights about a player's playing style, their preferences, and the nature of the video games that they are playing, all of which can be used to personalize their experience without any explicit input on their part. Next, I introduce gaze locking, a method for sensing eye contact from an image that allows people to interact with computers, devices, and other objects just by looking at them. Third, I introduce computationally optimized keyboard designs for touchscreen manual input that allow people to type on smartphones faster and with far fewer errors than currently possible. Last, I introduce the racing auditory display (RAD), an audio system that makes it possible for people who are blind to play the same types of racing games that sighted players can play, with a similar speed and sense of control.
The RAD shows how we can reduce output overhead to provide user interface parity between people with and without disabilities. Together, I hope that these systems open the door to even more efforts in unmediated interaction, with the goal of making computers less like devices that we use and more like abilities or powers that we have.
1087

Deep learning based facial expression recognition and its applications

Jan, Asim January 2017 (has links)
Facial expression recognition (FER) is a research area concerned with classifying human emotions from the expressions on the face. It can be used in applications such as biometric security, intelligent human-computer interaction, robotics, and clinical medicine for autism, depression, pain and mental health problems. This dissertation investigates advanced technologies for facial expression analysis and develops artificial intelligence systems for practical applications. The first part of this work applies geometric and texture-domain feature extractors along with various machine learning techniques to improve FER. Advanced 2D and 3D facial processing techniques such as Edge Oriented Histograms (EOH) and Facial Mesh Distances (FMD) are then fused together using a framework designed to investigate their individual and combined domain performances. Following these tests, the face is broken down into facial parts using advanced facial alignment and localisation techniques. Deep learning in the form of Convolutional Neural Networks (CNNs) is also explored for FER. A novel approach is used for the deep network architecture design: learning the facial parts jointly, which shows an improvement over using the whole face. Joint Bayesian is also adapted, in the form of metric learning, to work with deep feature representations of the facial parts, providing a further improvement over using the deep network alone. Dynamic emotion content is explored as a source of richer information than still images. The motion occurring across the content is initially captured using the Motion History Histogram (MHH) descriptor and critically evaluated. Based on this evaluation, several improvements are proposed through extensions such as the Average Spatial Pooling Multi-scale Motion History Histogram (ASMMHH).
This extension adds two modifications: the first is to view the content in different spatial dimensions through spatial pooling, influenced by the structure of CNNs; the other is to capture motion at different speeds. Combined, they provide better performance than MHH and other popular techniques such as Local Binary Patterns on Three Orthogonal Planes (LBP-TOP). Finally, the dynamic emotion content is observed in the feature space, with sequences of images represented as sequences of extracted features. A novel technique called Facial Dynamic History Histogram (FDHH) is developed to capture patterns of variation within the sequence of features, an approach not seen before. FDHH is applied in an end-to-end framework for applications in depression analysis and for evaluating induced emotions through a large set of video clips from various movies. With the combination of deep learning techniques and FDHH, state-of-the-art results are achieved for depression analysis.
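A minimal sketch of the run-counting idea behind a dynamic history histogram such as FDHH, under a simplified formulation assumed here (per-component runs of frame-to-frame changes in a feature sequence); the threshold and histogram size are made-up parameters, not the thesis's.

```python
# Simplified, illustrative dynamic-history-histogram sketch (not the
# thesis's FDHH implementation): for each feature component, count runs
# of exactly m consecutive frame-to-frame changes across the sequence.
import numpy as np

def dynamic_history_histogram(features: np.ndarray,
                              num_patterns: int = 3,
                              thresh: float = 0.1) -> np.ndarray:
    """features: (T, C) array of T consecutive feature vectors.
    Returns a (num_patterns, C) histogram whose row m-1 counts runs of
    exactly m consecutive changes (longer runs clip into the last row)."""
    change = np.abs(np.diff(features, axis=0)) > thresh   # (T-1, C) binary change map
    hist = np.zeros((num_patterns, change.shape[1]), dtype=int)
    for c in range(change.shape[1]):
        run = 0
        for t in range(change.shape[0]):
            if change[t, c]:
                run += 1
            elif run:
                hist[min(run, num_patterns) - 1, c] += 1
                run = 0
        if run:  # close a run that reaches the end of the sequence
            hist[min(run, num_patterns) - 1, c] += 1
    return hist

# One feature component that changes twice, stays still, then changes once:
seq = np.array([[0.0], [1.0], [0.0], [0.0], [1.0]])
print(dynamic_history_histogram(seq))  # one run of length 2, one of length 1
```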
1088

Facilitating individual learning, collaborative learning and behaviour change in citizen science through interface design

Sharma, Nirwan January 2018 (has links)
Citizen science is a collaboration between members of the public and scientific experts. Within the environmental realm, where citizen science is particularly well expressed, this collaboration often involves members of the public in large-scale scientific data gathering and processing, generating data that scientists can subsequently use to improve scientific knowledge, understanding and theories. As these collaborations are increasingly mediated by digital technologies, the overall aim of this thesis was to explore the potential of user interface design for citizen science, within the context of the environmental sciences, using an established citizen science platform, BeeWatch. Particular attention was paid to the potential of such interface development to foster a move from 'expert-novice' situations towards progressive forms of collaboration and participation in citizen science. The overall conclusion of this thesis is that interactive technologies can develop expertise in biological recording, thus narrowing the gap between expert and novice, as well as raising the level of participation and fostering behaviour change for conservation action.
1089

Use of projector-camera system for human-computer interaction.

January 2012 (has links)
用投影機替代傳統的顯示器可在較小尺寸的設備上得到較大尺寸的顯示,從而彌補了傳統顯示器移動性差的不足。投影機照相機系統通過不可感知的結構光,在顯示視頻內容的同時具備了三維傳感能力,從而可為自然人機交互提供良好的平臺。投影機照相機系統在人機交互中的應用主要包括以下四個核心內容: (1)同時顯示和傳感,即如何在最低限度的影響原始投影的前提下,使得普通視頻投影機既是顯示設備又是三維傳感器;(2) 三維信息的理解:即如何通過利用額外的信息來彌補稀疏點云的不足,從而改善系統性能; (3) 分割:即如何在不斷變化投影內容的影響下得到準確的分割(4) 姿態識別:即如何從單張圖像中得到三維姿態。本文將針對上述四個方面進行深入的研究和探討,並提出改造方案。 / 首先,為了解決嵌入編碼不可見性與編碼恢復魯棒性之間的矛盾,本文提出一種在編解碼兩端同時具備抗噪能力的方法。我們使用特殊設計的幾何圖元和較大的海明距離來編碼,從而增強了抗噪聲干擾能力。同時在解碼端,我們使用事先通過訓練得到的幾何圖元檢測器來檢測和識別嵌入圖像的編碼,從而解決了因噪聲干擾使用傳統結構光中的分割方法很難提取嵌入編碼的困難。 / 其次在三維信息的理解方面,我們提出了一個通過不可感知結構光來實現六自由度頭部姿態估計的方法。首先,通過精心設計的投影策略和照相機-投影機的同步,在不可感知結構光的照射下,我們得到了模式圖和與之相對應的紋理圖。然後,在紋理圖中使用主動表觀模型定位二維面部特徵,在模式圖中通用結構光方法計算出點雲坐標,結合上述兩種信息來計算面部特征點的三維坐標。最后,通過不同幀中對應特征點三維坐標間的相關矩陣的奇異值分解來估計頭部的朝向和位移。 / 在分割方面,我們提出一種在投影機-照相機系統下由粗到精的手部分割方法。首先手部區域先通過對比度顯著性檢測的方法粗略分割出來,然後通過保護邊界的平滑方法保證分割區域的一致性,最后精確的分割結果自置信度分析得到。 / 最後,我們又探討如何僅使用投影機和照相機將在普通桌面上的投影區域轉化成觸摸屏的方案。我們將一種經過統計分析得到的隨機二元編碼嶽入到普通投影內容中,從而在用戶沒有感知的情況下,使得投影機-照相機系統具備三維感知的能力。最終手指是否觸及桌面是通過投影機-照相機-桌面系統的標定信息,精准的手部區域分割和手指尖定位,投影機投影平面勻照相機圖像平面的單應映射以及最入投影的編碼來確定。 / The use of a projector in place of traditional display device would dissociate display size from device size, making portability much less an issue. Associated with camera, the projector-camera system allows simultaneous video display and 3D acquisition through imperceptible structured light sensing, providing a vivid and immersed platform for natural human-computer interaction. 
Key issues involved in the approach include: (1) Simultaneous Display and Acquisition: how to make a normal video projector not only a display device but also a 3D sensor, with the prerequisite of incurring minimum disturbance to the original projection; (2) 3D Information Interpretation: how to interpret the sparse depth information with the assistance of additional cues to enhance system performance; (3) Segmentation: how to acquire accurate segmentation in the presence of the incessant variation of the projected video content; (4) Posture Recognition: how to infer 3D posture from a single image. This thesis aims at providing improved solutions to each of these issues. / To address the conflict between the imperceptibility of the embedded codes and the robustness of code retrieval, noise-tolerant schemes are introduced at both the coding and decoding stages. At the coding end, specifically designed primitive shapes and a large Hamming distance are employed to enhance tolerance to noise. At the decoding end, pre-trained primitive-shape detectors are used to detect and identify the embedded codes, a task difficult to achieve by the segmentation used in general structured light methods, because the weakly embedded information is heavily corrupted by noise. / On 3D information interpretation, a system that estimates 6-DOF head pose by imperceptible structured light sensing is proposed. First, through an elaborate pattern-projection strategy and camera-projector synchronization, pattern-illuminated images and the corresponding scene-texture image are captured under imperceptible patterned illumination. Then, 3D positions of the key facial feature points are derived by combining the 2D facial feature points localized by AAM in the scene-texture image with the point cloud generated by structured light sensing.
Eventually, the head orientation and translation are estimated by SVD of a correlation matrix generated from the corresponding 3D feature-point pairs over different frames. / On the segmentation issue, we describe a coarse-to-fine hand segmentation method for projector-camera systems. After rough segmentation by contrast saliency detection and mean-shift-based discontinuity-preserving smoothing, the refined result is confirmed through confidence evaluation. / Finally, we address how an HCI (Human-Computer Interface) with small device size, large display, and touch input facility can be made possible by a mere projector and camera. The realization is through the use of a properly embedded structured light sensing scheme that enables a regular light-colored table surface to serve the dual roles of both a projection screen and a touch-sensitive display surface. A random binary pattern is employed to code structured light in pixel accuracy, embedded into the regular projection display in such a way that the user perceives only the regular display and not the structured pattern hidden in it. With the projection display on the table surface being imaged by a camera, the observed image data, plus the known projection content, can work together to probe the 3D world immediately above the table surface: deciding whether a finger is present, whether the finger touches the table surface, and, if so, at what position on the table surface the fingertip makes contact. All these decisions hinge upon a careful calibration of the projector-camera-table-surface system, intelligent segmentation of the hand in the image data, and exploitation of the homography mapping between the projector's display panel and the camera's image plane.
/ Dai, Jingwen. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 155-182). / Abstract also in Chinese. / Abstract --- p.i / 摘要 --- p.iv / Acknowledgement --- p.vi / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation --- p.1 / Chapter 1.2 --- Challenges --- p.2 / Chapter 1.2.1 --- Simultaneous Display and Acquisition --- p.2 / Chapter 1.2.2 --- 3D Information Interpretation --- p.3 / Chapter 1.2.3 --- Segmentation --- p.4 / Chapter 1.2.4 --- Posture Recognition --- p.4 / Chapter 1.3 --- Objective --- p.5 / Chapter 1.4 --- Organization of the Thesis --- p.5 / Chapter 2 --- Background --- p.9 / Chapter 2.1 --- Projector-Camera System --- p.9 / Chapter 2.1.1 --- Projection Technologies --- p.10 / Chapter 2.1.2 --- Researches in ProCams --- p.16 / Chapter 2.2 --- Natural Human-Computer Interaction --- p.24 / Chapter 2.2.1 --- Head Pose --- p.25 / Chapter 2.2.2 --- Hand Gesture --- p.33 / Chapter 3 --- Head Pose Estimation by ISL --- p.41 / Chapter 3.1 --- Introduction --- p.42 / Chapter 3.2 --- Previous Works --- p.44 / Chapter 3.2.1 --- Head Pose Estimation --- p.44 / Chapter 3.2.2 --- Imperceptible Structured Light --- p.46 / Chapter 3.3 --- Method --- p.47 / Chapter 3.3.1 --- Pattern Projection Strategy for Imperceptible Structured Light Sensing --- p.47 / Chapter 3.3.2 --- Facial Feature Localization --- p.48 / Chapter 3.3.3 --- 6 DOF Head Pose Estimation --- p.54 / Chapter 3.4 --- Experiments --- p.57 / Chapter 3.4.1 --- Overview of Experiment Setup --- p.57 / Chapter 3.4.2 --- Test Dataset Collection --- p.58 / Chapter 3.4.3 --- Results --- p.59 / Chapter 3.5 --- Summary --- p.63 / Chapter 4 --- Embedding Codes into Normal Projection --- p.65 / Chapter 4.1 --- Introduction --- p.66 / Chapter 4.2 --- Previous Works --- p.68 / Chapter 4.3 --- Method --- p.70 / Chapter 4.3.1 --- Principle of Embedding Imperceptible Codes --- p.70 / Chapter 4.3.2 --- Design of Embedded Pattern --- p.73 / Chapter 4.3.3 
--- Primitive Shape Identification and Decoding --- p.76 / Chapter 4.3.4 --- Codeword Retrieval --- p.77 / Chapter 4.4 --- Experiments --- p.79 / Chapter 4.4.1 --- Overview of Experiment Setup --- p.79 / Chapter 4.4.2 --- Embedded Code Imperceptibility Evaluation --- p.81 / Chapter 4.4.3 --- Primitive Shape Detection Accuracy Evaluation --- p.82 / Chapter 4.5 --- Sensitivity Evaluation --- p.84 / Chapter 4.5.1 --- Working Distance --- p.85 / Chapter 4.5.2 --- Projection Surface Orientation --- p.87 / Chapter 4.5.3 --- Projection Surface Shape --- p.88 / Chapter 4.5.4 --- Projection Surface Texture --- p.91 / Chapter 4.5.5 --- Projector-Camera System --- p.91 / Chapter 4.6 --- Applications --- p.95 / Chapter 4.6.1 --- 3D Reconstruction with Normal Video Projection --- p.95 / Chapter 4.6.2 --- Sensing Surrounding Environment on Mobile Robot Platform --- p.97 / Chapter 4.6.3 --- Natural Human-Computer Interaction --- p.99 / Chapter 4.7 --- Summary --- p.99 / Chapter 5 --- Hand Segmentation in PROCAMS --- p.102 / Chapter 5.1 --- Previous Works --- p.103 / Chapter 5.2 --- Method --- p.106 / Chapter 5.2.1 --- Rough Segmentation by Contrast Saliency --- p.106 / Chapter 5.2.2 --- Mean-Shift Region Smoothing --- p.108 / Chapter 5.2.3 --- Precise Segmentation by Fusing --- p.110 / Chapter 5.3 --- Experiments --- p.111 / Chapter 5.4 --- Summary --- p.115 / Chapter 6 --- Surface Touch-Sensitive Display --- p.116 / Chapter 6.1 --- Introduction --- p.117 / Chapter 6.2 --- Previous Works --- p.119 / Chapter 6.3 --- Priors in Pro-Cam System --- p.122 / Chapter 6.3.1 --- Homography Estimation --- p.123 / Chapter 6.3.2 --- Radiometric Prediction --- p.124 / Chapter 6.4 --- Embedding Codes into Video Projection --- p.125 / Chapter 6.4.1 --- Imperceptible Structured Light --- p.125 / Chapter 6.4.2 --- Embedded Pattern Design Strategy and Statistical Analysis --- p.126 / Chapter 6.5 --- Touch Detection using Homography and Embedded Code --- p.129 / Chapter 6.5.1 --- Hand Segmentation 
--- p.130 / Chapter 6.5.2 --- Fingertip Detection --- p.130 / Chapter 6.5.3 --- Touch Detection Through Homography --- p.131 / Chapter 6.5.4 --- From Resistive Touching to Capacitive Touching --- p.133 / Chapter 6.6 --- Experiments --- p.135 / Chapter 6.6.1 --- System Initialization --- p.137 / Chapter 6.6.2 --- Display Quality Evaluation --- p.139 / Chapter 6.6.3 --- Touch Accuracy Evaluation --- p.141 / Chapter 6.6.4 --- Trajectory Tracking Evaluation --- p.145 / Chapter 6.6.5 --- Multiple-Touch Evaluation --- p.145 / Chapter 6.6.6 --- Efficiency Evaluation --- p.147 / Chapter 6.7 --- Summary --- p.149 / Chapter 7 --- Conclusion and Future Work --- p.150 / Chapter 7.1 --- Conclusion and Contributions --- p.150 / Chapter 7.2 --- Related Publications --- p.152 / Chapter 7.3 --- Future Work --- p.153 / Bibliography --- p.155
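The homography mapping this abstract relies on (from the camera's image plane to the projector's display panel) can be illustrated as follows; the matrix `H` here is a made-up stand-in for a real calibrated homography, not one derived from the thesis's system.

```python
# Hedged sketch of the homography step described above: mapping a detected
# fingertip from camera-image coordinates into projector/display coordinates.
# The 3x3 matrix H is an illustrative example, not a real calibration result.
import numpy as np

def apply_homography(H: np.ndarray, pt):
    """Map a 2D point through the projective transform H (homogeneous coords)."""
    x, y = pt
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]   # divide out the homogeneous scale

# Pure-translation homography as a stand-in for real projector-camera calibration.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0, 1.0]])
print(apply_homography(H, (100.0, 50.0)))  # -> (110.0, 70.0)
```

In practice the calibration would estimate H from corresponding point pairs on the table surface, after which every segmented fingertip position can be mapped into display coordinates to decide what on-screen element was touched.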
1090

Tele-immersive display with live-streamed video.

January 2001 (has links)
Tang Wai-Kwan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 88-95). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Applications --- p.3 / Chapter 1.2 --- Motivation and Goal --- p.6 / Chapter 1.3 --- Thesis Outline --- p.7 / Chapter 2 --- Background and Related Work --- p.8 / Chapter 2.1 --- Panoramic Image Navigation --- p.8 / Chapter 2.2 --- Image Mosaicing --- p.9 / Chapter 2.2.1 --- Image Registration --- p.10 / Chapter 2.2.2 --- Image Composition --- p.12 / Chapter 2.3 --- Immersive Display --- p.13 / Chapter 2.4 --- Video Streaming --- p.14 / Chapter 2.4.1 --- Video Coding --- p.15 / Chapter 2.4.2 --- Transport Protocol --- p.18 / Chapter 3 --- System Design --- p.19 / Chapter 3.1 --- System Architecture --- p.19 / Chapter 3.1.1 --- Video Capture Module --- p.19 / Chapter 3.1.2 --- Video Streaming Module --- p.23 / Chapter 3.1.3 --- Stitching and Rendering Module --- p.24 / Chapter 3.1.4 --- Display Module --- p.24 / Chapter 3.2 --- Design Issues --- p.25 / Chapter 3.2.1 --- Modular Design --- p.25 / Chapter 3.2.2 --- Scalability --- p.26 / Chapter 3.2.3 --- Workload distribution --- p.26 / Chapter 4 --- Panoramic Video Mosaic --- p.28 / Chapter 4.1 --- Video Mosaic to Image Mosaic --- p.28 / Chapter 4.1.1 --- Assumptions --- p.29 / Chapter 4.1.2 --- Processing Pipeline --- p.30 / Chapter 4.2 --- Camera Calibration --- p.33 / Chapter 4.2.1 --- Perspective Projection --- p.33 / Chapter 4.2.2 --- Distortion --- p.36 / Chapter 4.2.3 --- Calibration Procedure --- p.37 / Chapter 4.3 --- Panorama Generation --- p.39 / Chapter 4.3.1 --- Cylindrical and Spherical Panoramas --- p.39 / Chapter 4.3.2 --- Homography --- p.41 / Chapter 4.3.3 --- Homography Computation --- p.42 / Chapter 4.3.4 --- Error Minimization --- p.44 / Chapter 4.3.5 --- Stitching Multiple Images --- p.46 / Chapter 4.3.6 --- Seamless Composition --- 
p.47 / Chapter 4.4 --- Image Mosaic to Video Mosaic --- p.49 / Chapter 4.4.1 --- Varying Intensity --- p.49 / Chapter 4.4.2 --- Video Frame Management --- p.50 / Chapter 5 --- Immersive Display --- p.52 / Chapter 5.1 --- Human Perception System --- p.52 / Chapter 5.2 --- Creating Virtual Scene --- p.53 / Chapter 5.3 --- VisionStation --- p.54 / Chapter 5.3.1 --- F-Theta Lens --- p.55 / Chapter 5.3.2 --- VisionStation Geometry --- p.56 / Chapter 5.3.3 --- Sweet Spot Relocation and Projection --- p.57 / Chapter 5.3.4 --- Sweet Spot Relocation in Vector Representation --- p.61 / Chapter 6 --- Video Streaming --- p.65 / Chapter 6.1 --- Video Compression --- p.66 / Chapter 6.2 --- Transport Protocol --- p.66 / Chapter 6.3 --- Latency and Jitter Control --- p.67 / Chapter 6.4 --- Synchronization --- p.70 / Chapter 7 --- Implementation and Results --- p.71 / Chapter 7.1 --- Video Capture --- p.71 / Chapter 7.2 --- Video Streaming --- p.73 / Chapter 7.2.1 --- Video Encoding --- p.73 / Chapter 7.2.2 --- Streaming Protocol --- p.75 / Chapter 7.3 --- Implementation Results --- p.76 / Chapter 7.3.1 --- Indoor Scene --- p.76 / Chapter 7.3.2 --- Outdoor Scene --- p.78 / Chapter 7.4 --- Evaluation --- p.78 / Chapter 8 --- Conclusion --- p.83 / Chapter 8.1 --- Summary --- p.83 / Chapter 8.2 --- Future Directions --- p.84 / Chapter A --- Parallax --- p.86
