
Object Detection for Contactless Vital Signs Estimation

Yang, Fan 15 June 2021 (has links)
This thesis explores the contactless estimation of people’s vital signs. We designed two camera-based systems and applied object detection algorithms to locate the regions of interest (RoIs) where vital signs are estimated. With the development of deep learning, convolutional neural network (CNN) models now have many real-world applications. We applied CNN-based frameworks to different types of camera-based systems to improve the efficiency of contactless vital signs estimation. In the field of medical healthcare, contactless monitoring has drawn a lot of attention in recent years because of the wide availability of different sensors. However, most methods are still in the experimental phase and have never been used in real applications. We were interested in monitoring the vital signs of patients lying in bed or sitting around the bed at a hospital, which required sensors with a range of 2 to 5 meters. We developed a system based on a depth camera for detecting a person’s chest area and a radar for estimating the respiration signal. We applied a CNN-based object detection method to locate the position of a subject lying in bed covered with a blanket, and a respiratory-like signal is estimated from the radar device based on the detected subject’s location. We also created a manually annotated dataset containing 1,320 depth images. In each depth image the silhouette of the subject’s upper body is annotated, along with its class; in addition, a small subset of the depth images is also labeled with four keypoints for positioning the person’s chest area. This dataset is built on data collected from anonymous patients at the hospital, which makes it substantial. Another problem in the field of human vital signs monitoring is that systems seldom monitor multiple vital signs at the same time. Though a few works have recently attempted to address this problem, they are all still prototypes and have many limitations, such as short operating distance. In this application, we focused on contactless estimation of subjects’ temperature, breathing rate, and heart rate at different distances, with and without a mask. We developed a system based on a thermal and an RGB camera and explored the feasibility of CNN-based object detection algorithms for detecting vital signs from human faces using specifically defined RoIs in our thermal camera system. We proposed methods to estimate respiratory rate and heart rate from the thermal and RGB videos. The mean absolute error (MAE) between the estimated HR using the proposed method and the baseline HR for all subjects at different distances is 4.24 ± 2.47 beats per minute; the MAE between the estimated RR and the reference RR for all subjects at different distances is 1.55 ± 0.78 breaths per minute.
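The MAE figures reported above follow the standard definition: the average absolute difference between estimated and reference readings across all subjects and distances. A minimal sketch, with illustrative (hypothetical) heart-rate values, not the thesis's actual data:

```python
# Sketch: mean absolute error between estimated and reference heart rates,
# the quantity behind the 4.24 bpm figure. Values below are illustrative
# assumptions, not measurements from the thesis.

def mean_absolute_error(estimates, references):
    """MAE = average of |estimate - reference| over all readings."""
    assert len(estimates) == len(references)
    return sum(abs(e - r) for e, r in zip(estimates, references)) / len(estimates)

# Hypothetical per-subject heart-rate readings (beats per minute).
estimated_hr = [72.0, 80.5, 65.0, 90.0]
reference_hr = [70.0, 78.0, 68.0, 88.0]
print(mean_absolute_error(estimated_hr, reference_hr))  # 2.375
```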

Forward Leading Vehicle Detection for Driver Assistant System

Wen, Wen 14 May 2021 (has links)
Keeping a safe distance from the forward-leading vehicle is an essential feature of modern Advanced Driver Assistant Systems (ADAS), especially for transportation companies with a fleet of trucks. We propose in this thesis a Forward Collision Warning (FCW) system, which collects visual information using smartphones attached, for instance, to the windshield of a vehicle. The basic idea is to detect the forward-leading vehicle and estimate its distance from the host vehicle. Given the limited computation and memory resources of mobile devices, the main challenge of this work is running CNN-based object detectors in real time without hurting performance. In this thesis, we analyze the distribution of vehicle bounding boxes, then propose an efficient, customized deep neural network for forward-leading vehicle detection. We apply a detection-tracking scheme to increase the frame rate of vehicle detection while maintaining good performance. We then propose a simple leading-vehicle distance estimation approach for monocular cameras. With the techniques above, we build an FCW system with low computation and memory requirements suitable for mobile devices. Our FCW system has 49% less allocated memory, a 7.5% higher frame rate, and a 21% lower rate of battery consumption than popular deep object detectors. A sample video is available at https://youtu.be/-ptvfabBZWA.
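The abstract mentions a simple monocular distance estimation approach but does not specify it; a common heuristic of this kind uses the pinhole-camera relation: distance ≈ f · W / w, where f is the focal length in pixels, W an assumed real vehicle width, and w the detected bounding-box width in pixels. A sketch under those assumptions (all numbers illustrative, not from the thesis):

```python
# Hedged sketch of a typical monocular distance heuristic, NOT necessarily
# the thesis's method: pinhole model, distance = focal_px * real_width / bbox_width.
# The focal length, vehicle width, and box width below are assumed values.

def estimate_distance(focal_px: float, real_width_m: float, bbox_width_px: float) -> float:
    """Approximate distance to a vehicle of known width from its image width."""
    return focal_px * real_width_m / bbox_width_px

# e.g. 1000 px focal length, 1.8 m car width, 120 px detected box
print(estimate_distance(1000.0, 1.8, 120.0))  # 15.0 (meters)
```

The appeal of this design on a phone is that it needs only the detector's bounding box, no stereo pair or depth sensor.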

Camouflaged Object Segmentation in Images

Yan, Jinnan January 2019 (has links)
No description available.

Object Detection with Swin Vision Transformers from Raw ADC Radar Signals

Giroux, James 15 August 2023 (has links)
Object detection utilizing frequency modulated continuous wave radar is becoming increasingly popular in the field of autonomous vehicles. Radar does not share the drawbacks of other emission-based sensors such as LiDAR, primarily the degradation or loss of return signals due to weather conditions such as rain or snow. Thus, there is a necessity for fully autonomous systems to utilize radar sensing in downstream decision-making tasks, generally handled by deep learning algorithms. Commonly, three transformations have been used to form range-azimuth-Doppler cubes on which deep learning algorithms can perform object detection. This method has drawbacks, specifically the pre-processing costs associated with performing multiple Fourier transforms and normalization. We develop a network utilizing raw radar analog-to-digital converter (ADC) output, capable of operating in near real time given the removal of all pre-processing. We obtain inference times one-fifth of the traditional range-Doppler pipeline, decreasing from 156 ms to 30 ms, with similar decreases in comparison to the full range-azimuth-Doppler cube. Moreover, we introduce hierarchical Swin Vision Transformers to the field of radar object detection and show their capability to operate on inputs varying in pre-processing, along with different radar configurations, i.e., relatively low and high numbers of transmitters and receivers. Our network increases both average recall and mean intersection over union by ~6-7%, obtaining state-of-the-art F1 scores on high-definition radar as a result. On low-definition radar, we note an increase in mean average precision of ~2.5% over state-of-the-art range-Doppler networks when raw ADC data is used, and a ~5% increase over networks using the full range-azimuth-Doppler cube.
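The pre-processing cost this thesis removes comes from the conventional pipeline: FFTs over the ADC samples along the fast-time (range) and slow-time (Doppler) axes, followed by magnitude and normalization. A minimal sketch of that conventional range-Doppler step, with illustrative shapes (not the thesis's radar configuration):

```python
# Sketch of the conventional range-Doppler pre-processing the thesis bypasses:
# a complex ADC frame (samples x chirps) -> FFT over range, FFT over Doppler,
# magnitude, normalization. Frame size and data are illustrative assumptions.
import numpy as np

def range_doppler_map(adc_frame: np.ndarray) -> np.ndarray:
    """adc_frame: complex array of shape (num_samples, num_chirps)."""
    range_fft = np.fft.fft(adc_frame, axis=0)                     # fast time -> range
    rd = np.fft.fftshift(np.fft.fft(range_fft, axis=1), axes=1)   # slow time -> Doppler
    mag = np.abs(rd)
    return mag / mag.max()                                        # simple normalization

# Synthetic frame: 256 range samples x 64 chirps of complex noise.
frame = np.random.randn(256, 64) + 1j * np.random.randn(256, 64)
rd_map = range_doppler_map(frame)
print(rd_map.shape)  # (256, 64)
```

Skipping these transforms (and a third, azimuth FFT, for the full cube) for every frame is where the reported 156 ms to 30 ms reduction comes from.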

Scalable Multi-Task Learning R-CNN for Classification and Localization in Autonomous Vehicle Technology

Rinchen, Sonam 28 April 2023 (has links)
Multi-task learning (MTL) is a rapidly growing field in the world of autonomous vehicles, particularly in the area of computer vision. Autonomous vehicles are heavily reliant on computer vision technology for tasks such as object detection, object segmentation, and object tracking. The complexity of sensor data and the multiple tasks involved in autonomous driving can make it challenging to design effective systems. MTL addresses these challenges by training a single model to perform multiple tasks simultaneously, utilizing shared representations to learn common concepts between a group of related tasks and improving data efficiency. In this thesis, we proposed a scalable MTL system for object detection that can be used to construct any MTL network with different scales and shapes. The proposed system is an extension of the state-of-the-art algorithm Mask R-CNN and is designed to overcome the limitations of learning multiple objects in multi-label learning. To demonstrate the effectiveness of the proposed system, we built three different networks with it and evaluated their performance on the state-of-the-art BDD100k dataset. Our experimental results demonstrate that the proposed MTL networks outperform the base single-task network, Mask R-CNN, in terms of mean average precision at an IoU threshold of 0.5 (mAP50). Specifically, the proposed MTL networks achieved a mAP50 of 66%, while the base network achieved only 53%. Furthermore, we also conducted comparisons between the proposed MTL networks to determine the most efficient way to group tasks together in order to create an optimal MTL network for object detection on the BDD100k dataset.
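The shared-representation idea at the core of MTL is hard parameter sharing: one backbone feeds several task-specific heads. A toy numerical illustration of that structure (shapes, weights, and head names are arbitrary assumptions, not the proposed R-CNN-based network):

```python
# Toy illustration of hard parameter sharing, the MTL structure described
# above: a single shared backbone feeds multiple task heads. All shapes and
# weights are illustrative; this is not the thesis's Mask R-CNN extension.
import numpy as np

rng = np.random.default_rng(0)
W_shared = rng.standard_normal((8, 4))    # shared backbone weights
W_detect = rng.standard_normal((4, 2))    # hypothetical detection head
W_segment = rng.standard_normal((4, 3))   # hypothetical segmentation head

def forward(x):
    h = np.maximum(x @ W_shared, 0.0)     # shared features (ReLU)
    return h @ W_detect, h @ W_segment    # one output per task

x = rng.standard_normal((5, 8))           # batch of 5 feature vectors
det_out, seg_out = forward(x)
print(det_out.shape, seg_out.shape)       # (5, 2) (5, 3)
```

Because both heads backpropagate through `W_shared`, gradients from each task shape the common representation, which is the data-efficiency argument made above.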

Incident Response Enhancements using Streamlined UAV Mission Planning, Imaging, and Object Detection

Link, Eric Matthew 29 June 2023 (has links)
Systems composed of simple, reliable tools are needed to facilitate adoption of Uncrewed Aerial Vehicles (UAVs) into incident response teams. Existing systems require operators to have a high level of knowledge of UAV operations, including mission planning, low-level system operation, and data analysis. In this paper, a system is introduced to reduce the required operator knowledge via streamlined mission planning, in-flight object detection, and data presentation. For mission planning, two software programs are introduced that utilize geographic data to: (1) update existing missions to a constant above-ground-level altitude; and (2) auto-generate missions along waterways. To test system performance, a UAV platform based on the Tarot 960 was equipped with an Nvidia Jetson TX2 computing device and a FLIR GigE camera. For demonstration of on-board object detection, the You Only Look Once v8 model was trained on mock propane tanks. A Robot Operating System package was developed to manage communication between the flight controller, camera, and object detection model. Finally, software was developed to present the collected data in easy-to-understand interactive maps containing both detected object locations and surveyed-area imagery. Several flight demonstrations were conducted to validate both the performance and usability of the system. The mission planning programs accurately adjust altitude and generate missions along waterways. While in flight, the system demonstrated the capability to take images, perform object detection, and return estimated object locations with an average accuracy of 3.5 meters. The calculated object location data was successfully formatted into interactive maps, providing incident responders with a simple visualization of target locations and the surrounding environment. Overall, the system presented meets the specified objectives by reducing the required operator skill level for successful deployment of UAVs into incident response scenarios. / Master of Science / Systems composed of simple, reliable tools are needed to facilitate adoption of Uncrewed Aerial Vehicles (UAVs) into incident response teams. Existing systems require operators to have a high level of knowledge of UAV operations. In this paper, a new system is introduced that reduces required operator knowledge via streamlined mission planning, in-flight object detection, and data presentation. Two mission planning computer programs are introduced that allow users to: (1) update existing missions to maintain a constant above-ground-level altitude; and (2) autonomously generate missions along waterways. For demonstration of in-flight object detection, a computer vision model was trained on mock propane tanks. Software for capturing images and running the computer vision model was written and deployed onto a UAV equipped with a computer and camera. For post-flight data analysis, software was written to create image mosaics of the surveyed area as well as to plot detected objects on maps. The mission planning software was shown to appropriately adjust altitude in existing missions and to generate new missions along waterways. Through several flight demonstrations, the system appropriately captured images and identified detected target locations with an average accuracy of 3.5 meters. Post-flight, the collected images were successfully combined into single-image mosaics with detected objects marked as points of interest. Overall, the system presented meets the specified objectives by reducing the required operator skill level for successful deployment of UAVs into incident response scenarios.
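The abstract reports object locations estimated from aerial images to within 3.5 meters but does not describe the geolocation math; one common approach for a nadir-pointing camera over flat ground scales the detection's pixel offset from the image center by the ground sampling distance (altitude / focal length). A sketch under that flat-ground assumption (all numbers illustrative, not the thesis's method):

```python
# Hedged sketch of one common way to map a detected pixel to a ground offset
# from a nadir-pointing UAV camera. This is a generic flat-ground model, not
# necessarily the system's actual geolocation method; all values are assumed.

def pixel_to_ground_offset(px, py, cx, cy, focal_px, altitude_m):
    """Return (east, north) offset in meters from the point directly below the UAV."""
    gsd = altitude_m / focal_px                 # ground sampling distance, m/pixel
    return ((px - cx) * gsd, (cy - py) * gsd)   # image y axis grows downward

# e.g. detection 100 px right of center, 50 px above it, from 40 m altitude
print(pixel_to_ground_offset(740, 310, 640, 360, 800.0, 40.0))  # ≈ (5.0, 2.5)
```

Adding the offset to the UAV's GPS position and heading would then give the detected object's map coordinates.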

An Investigation into the Motion Cues Eliciting a Perception of Animacy

Szego, Paul 07 1900 (has links)
The perception of animacy - judging an object as appearing alive - is a fundamental social perception dating back to Piaget. The present research investigates motion to examine how and when people will perceive an ambiguous moving object as appearing alive.

Chapter 1 uses a number of methods to illustrate that people will judge a relatively faster-moving object as appearing alive more often than an identical but relatively slower-moving object. Chapter 2 demonstrates that people are more likely to perceive an object moving at a constant speed as alive if it appears to move relatively faster than a similar object. Further, people will make this judgement even if the differences in speed are not real, but merely illusory.

Chapter 3 describes a specific case where the association of greater speed and animacy is not perceptually maintained. By showing people objects that appear to fall or rise - thereby obeying or violating gravity - it is shown that our perceptions of animacy are not fixed, but rather are functionally adapted to at least one regular and predictable feature of the visual environment; namely gravity. This suggests that some aspects of our perceptions of animacy have been shaped over evolutionary time.

The following chapter examines whether our perceptions of animacy are structured - like our perceptions of colours - categorically, such that there is an identifiable boundary between the velocities that elicit a perception of animacy and the velocities that do not. Results suggest that people do not perceive animacy categorically.

The final empirical chapter illustrates that experience over the lifespan also influences our perceptions of animacy and of speed. Monolingual readers of a language read from left-to-right (viz., English) were biased to judge an object moving in that direction as appearing faster and more alive than an object moving at the same speed in the opposite direction. However, bilingual readers of both English and a language read from right-to-left did not exhibit this bias. / Thesis / Doctor of Philosophy (PhD)

USING DYNAMIC MIXINS FOR SOFTWARE DEVELOPMENT

Burton, Ronald January 2018 (has links)
Object-oriented programming has gained significant traction in the software development community and is now the common approach for developing large, commercial applications. Many of these applications require the behaviour of objects to be modified at run-time. Contemporary class-based, statically-typed languages such as C++ and Java require collaboration with external objects to modify an object’s behaviour; furthermore, such an object must be designed in order to support such collaborations. Dynamic languages such as Python, which natively support object extension, do not guarantee type safety. In this work, using dynamic mixins with static typing is proposed as a means of providing type-safe object extension. A new language called mix is introduced that allows a compiler to syntactically check the type-safety of an object extension. A model to support object-oriented development is extended to support dynamic mixins. The utility of the approach is illustrated using sample use cases. Finally, a compiler was implemented to validate the practicality of the proposed model. / Thesis / Doctor of Philosophy (PhD)
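The kind of native, run-time object extension the abstract attributes to dynamic languages can be illustrated in Python: a mixin class can be spliced into a single instance's type at run time, with no compile-time type check. This is a generic sketch of that contrast, not an example of the proposed mix language:

```python
# Illustration of dynamic, run-time object extension in Python - the kind of
# mixin application that works natively but without static type safety.
# Class names here are arbitrary; this is not code from the thesis.

class Logger:
    """Mixin adding logging behaviour."""
    def log(self):
        return f"log: {self.__class__.__name__}"

class Service:
    pass

svc = Service()
# Extend this one object at run time by rebasing its class on the mixin.
svc.__class__ = type("LoggedService", (Logger, Service), {})
print(svc.log())  # log: LoggedService
```

Nothing stops a caller from invoking `svc.log()` before the extension is applied; the call simply fails at run time, which is precisely the gap a statically checked mixin language aims to close.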

Perceived Size Modulates Cortical Processing of Objects

Brown, James Michael 28 January 2016 (has links)
Empirical object recognition research indicates that objects are represented and perceived as hierarchical part-whole arrangements that vary according to bottom-up and top-down biases. An ongoing debate within object recognition research concerns whether local or global image properties are more fundamental for the perception of objects. Similarly, there is also disagreement about whether the visual system is guided by holistic or analytical processes. Neuroimaging findings have revealed functional distinctions between low and higher-level visual processes across lateral occipital-temporal cortex (LOC), primary visual cortices (V1/V2), and ventral occipital-temporal cortex. Recent studies suggest activations in these object recognition areas and others, such as the fusiform face area (FFA) and extra-striate body area (EBA), are collinear with activations associated with the perception of scenes and buildings. Together, this information warrants the focus of the proposed study: to investigate the neural correlates of object recognition and perceived size. During the experiment, subjects tracked a fixation stimulus while simultaneously being presented with images of shape contours and faces. Contour and face stimuli subtended small, medium, and large visual angles in order to evaluate variance in neural activation across perceived size. In the present study, visual areas were hypothesized to modulate as a function of visual angle, meaning that the part-whole relationships of objects vary with their perceived size. / Master of Science

Object Proposals in Computer Vision

Chavali, Neelima 09 September 2015 (has links)
Object recognition is a central problem in computer vision which deals with both localizing and identifying objects in images. Object proposals have recently become an important part of the object recognition process. Object proposals are algorithms used for localizing objects in images. This thesis is a study in object proposals and is composed of three parts. First, we present a new data-driven approach for generating object proposals. Second, we release a MATLAB library which can be used to generate object proposals using all the existing algorithms. The library can also be used for evaluating object proposals using the three most commonly used metrics. Finally, we identify previously unnoticed bias in the existing protocol for evaluating object proposals and propose ways to alleviate this bias. / Master of Science
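The three commonly used proposal-evaluation metrics mentioned above (e.g. recall at an overlap threshold) all reduce to intersection over union (IoU) between a proposal box and a ground-truth box. A minimal sketch of that core computation, with illustrative boxes:

```python
# Intersection over union (IoU), the overlap measure underlying the common
# object-proposal metrics (recall at an IoU threshold, average recall, etc.).
# Boxes are (x1, y1, x2, y2) corner coordinates; the example boxes are illustrative.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x10 strip: intersection 50, union 150.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ≈ 0.333
```

A proposal is typically counted as covering an object when its IoU with the ground-truth box exceeds a threshold such as 0.5, which is exactly where threshold choice can introduce the kind of evaluation bias the thesis examines.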
