71. On The Development Of In-Flight Autonomous Integrity Monitoring Of Stored Geo-Spatial Data Using Forward-Looking Remote Sensing Technology. Young, Steven D. 21 April 2005 (has links)
No description available.

72. Implementation of a 3D Imaging Sensor Aided Inertial Measurement Unit Navigation System. Venable, Donald T. 03 October 2008 (has links)
No description available.

73. A study of the generalized eigenvalue decomposition in discriminant analysis. Zhu, Manli 12 September 2006 (has links)
No description available.

74. Improving the Quality of LiDAR Point Cloud Data for Greenhouse Crop Monitoring. Si, Gaoshoutong 09 August 2022 (has links)
No description available.

75. Feature Extraction and Feasibility Study on CT Image Guided Colonoscopy. Shen, Yuan 14 May 2010 (has links)
Computed tomographic colonography (CTC), also called virtual colonoscopy, uses CT scanning and computer post-processing to create two-dimensional images and three-dimensional virtual views of the inside of the colon. Computer-aided polyp detection (CAPD) automatically detects colonic polyps and presents them to the user in either a first- or second-reader paradigm, with the goal of reducing examination time while increasing detection sensitivity. During colonoscopy, the endoscopist uses the colonoscope inside the patient's colon to target potential polyps and to validate those found by CAPD. However, there is no direct information link between the CT images and the real-time optical colonoscopy (OC) video provided during the procedure, so endoscopists must rely largely on past experience to locate and remove polyps. The goal of this research project is to study the feasibility of developing an image-guided colonoscopy (IGC) system that combines CTC images, real-time colonoscope position measurements, and the video stream to validate and guide the removal of polyps found by CAPD. Such a system would ease polyp-level validation of CTC and improve the accuracy and efficiency of guiding the endoscopist to the target polyps. In this project, a centerline-based matching algorithm was designed to estimate, in real time, the relative location of the colonoscope within the virtual colonoscopy environment. Furthermore, the feasibility of applying online simultaneous localization and mapping (SLAM) to CT image-guided colonoscopy was evaluated to further improve the localization and removal of pre-defined target polyps. A colon phantom provides a testing setup for assessing the performance of the proposed algorithms. / Master of Science
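
To make the centerline-based matching idea in the abstract above concrete, here is a minimal Python sketch of one assumed nearest-point formulation (not the thesis's actual algorithm): given a colon centerline extracted from the CTC data and a tracked colonoscope tip position, the scope's relative location is reported as an arc length along the centerline. All names are illustrative.

    # Hypothetical centerline-matching step: estimate the colonoscope's
    # location as an arc length along the CTC-derived colon centerline.
    import numpy as np

    def locate_on_centerline(centerline_xyz, tip_xyz):
        """Return (index, arc_length) of the centerline sample nearest the tip."""
        centerline_xyz = np.asarray(centerline_xyz, dtype=float)   # shape (N, 3)
        tip_xyz = np.asarray(tip_xyz, dtype=float)                 # shape (3,)
        # Squared distance from the tip to every centerline sample.
        d2 = np.sum((centerline_xyz - tip_xyz) ** 2, axis=1)
        idx = int(np.argmin(d2))
        # Cumulative arc length along the centerline up to the matched sample.
        seg = np.linalg.norm(np.diff(centerline_xyz, axis=0), axis=1)
        arc_length = float(np.sum(seg[:idx]))
        return idx, arc_length

    # Example: a straight synthetic centerline and a nearby tip measurement.
    centerline = np.stack([np.linspace(0, 100, 101), np.zeros(101), np.zeros(101)], axis=1)
    print(locate_on_centerline(centerline, [42.3, 1.0, -0.5]))  # -> (42, 42.0)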

76. Supervoxel Based Object Detection and Seafloor Segmentation Using Novel 3D Side-Scan Sonar. Patel, Kushal Girishkumar 12 November 2021 (has links)
Object detection and seafloor segmentation for conventional 2D side-scan sonar imagery is a well-investigated problem. However, due to recent advances in sensing technology, side-scan sonar can now produce a true 3D point cloud representation of the seafloor embedded with echo intensity. This creates a need for algorithms that process the incoming 3D data for applications such as object detection and segmentation, and an opportunity to leverage advances in 3D point cloud processing developed for terrestrial applications using optical sensors (e.g., LiDAR). A bottleneck in deploying 3D side-scan sonar sensors for online applications is the complexity of handling large amounts of data, which requires more memory for storage and processing on embedded computers. The present research aims to improve data processing capabilities on board autonomous underwater vehicles (AUVs). A supervoxel-based framework for over-segmentation and object detection is proposed which reduces a dense point cloud into clusters of similar neighboring points. Supervoxels extracted from the point cloud are then described by feature vectors computed from the geometry, echo intensity, and depth attributes of the constituent points. Unsupervised density-based clustering is applied to the feature space to detect objects, which appear as outliers. / Master of Science / Acoustic imaging using side-scan sonar sensors has proven useful for tasks such as seafloor mapping, mine countermeasures, and habitat mapping. Owing to advances in sensing technology, a novel type of side-scan sonar sensor has been developed which provides a true 3D representation of the seafloor along with the echo intensity image. To improve the usability of these novel sensors on board the carrying vehicles, efficient algorithms need to be developed. Underwater robots have limited computational and data storage capabilities, which poses additional challenges for online perception applications such as object detection and segmentation. In this project, I investigate a clustering-based approach followed by an unsupervised machine learning method to detect objects on the seafloor using the novel side-scan sonar. I also show the usability of the approach for segmenting the seafloor.
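
As a hedged illustration of the supervoxel-feature and density-based outlier-detection steps described above, a short Python sketch follows; the feature set, normalization, and DBSCAN parameters are assumptions for illustration, not the thesis's implementation.

    # Describe each supervoxel with a small feature vector (geometry, echo
    # intensity, depth) and flag low-density outliers as object candidates.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def supervoxel_features(points_xyz, intensities):
        """Geometry, intensity, and depth features for one supervoxel."""
        pts = np.asarray(points_xyz, dtype=float)
        cov = np.cov(pts.T)
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1] + 1e-12
        planarity = (evals[1] - evals[2]) / evals[0]   # flat seafloor patches score high
        linearity = (evals[0] - evals[1]) / evals[0]
        return np.array([
            float(np.mean(intensities)),               # mean echo intensity
            float(np.mean(pts[:, 2])),                 # mean depth
            planarity,
            linearity,
        ])

    def detect_object_candidates(feature_matrix, eps=0.5, min_samples=5):
        """Supervoxels labelled -1 (density outliers) are object candidates."""
        X = (feature_matrix - feature_matrix.mean(0)) / (feature_matrix.std(0) + 1e-9)
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
        return np.where(labels == -1)[0]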

77. 3D face recognition based on machine learning. Qatawneh, S., Ipson, Stanley S., Qahwaji, Rami S.R., Ugail, Hassan January 2008 (has links)
3D facial data has great potential for overcoming the problems of illumination and pose variation in face recognition. In this paper, we present a 3D face recognition system based on machine learning. We use landmarks for feature extraction and a Cascade Correlation neural network (CCNN) to make the final decision. Experiments are presented using 3D face images from the Face Recognition Grand Challenge database version 2.0. For the CCNN with jack-knife evaluation, an accuracy of 100% was achieved for 7 faces with different expressions, with 100% for both specificity and sensitivity.
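
The jack-knife (leave-one-out) evaluation protocol mentioned above can be sketched in Python as follows; scikit-learn has no Cascade Correlation network, so a small MLP stands in for the CCNN purely for illustration, and the landmark feature matrix and labels are hypothetical inputs.

    # Leave-one-out ("jack-knife") accuracy estimate with a stand-in classifier.
    import numpy as np
    from sklearn.model_selection import LeaveOneOut
    from sklearn.neural_network import MLPClassifier

    def jackknife_accuracy(landmark_features, labels):
        X, y = np.asarray(landmark_features, float), np.asarray(labels)
        correct = 0
        for train_idx, test_idx in LeaveOneOut().split(X):
            # Train on all samples except one, test on the held-out sample.
            clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
            clf.fit(X[train_idx], y[train_idx])
            correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
        return correct / len(y)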

78. Exercise Classification with Machine Learning. Ekstrand, Joel January 2023 (has links)
Innowearable AB has developed a product called Inno-X™ that calculates muscle fatigue during three exercises: squat jumps, wall sit, and leg extension. Inno-X uses an accelerometer and a surface electromyography sensor. The goal of this project was to create the signal processing part of a machine-learning (ML) pipeline that classifies the exercises in real time. Data was collected from the sensors to create a training environment that could later be translated to a real-time environment using a sliding-window technique. A Savitzky-Golay (SG) filter, lowpass filters, and highpass filters were tested to remove noise from the signal; the SG filter proved to be the best. Both time- and frequency-domain features were used in feature extraction, and the finished product used 24 features combined from both domains. These methods, together with the ML algorithms created in a collaborative project, led to a classification accuracy of 98.62% in the training environment and 90% in the real-time environment. By collecting a larger and more diverse dataset, and by addressing the issue that the leg extension and wall sit exercises are too similar, real-time classification can be further improved, which would make the ML pipeline usable for Innowearable's customers.
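
A rough sketch of the signal-processing stage described above, under assumed parameters (filter window, window and step sizes, feature set); this is illustrative only, not Innowearable's pipeline.

    # Smooth a sensor stream with a Savitzky-Golay filter, then compute a few
    # time- and frequency-domain features over each sliding window.
    import numpy as np
    from scipy.signal import savgol_filter

    def window_features(raw_window, fs=100.0):
        """Feature vector for one sliding window of a single sensor channel."""
        x = savgol_filter(raw_window, window_length=11, polyorder=3)  # denoise
        # Time-domain features.
        feats = [x.mean(), x.std(), np.ptp(x), np.mean(np.abs(np.diff(x)))]
        # Frequency-domain features from the magnitude spectrum.
        spec = np.abs(np.fft.rfft(x - x.mean()))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        feats += [freqs[np.argmax(spec)],                               # dominant frequency
                  float(np.sum(freqs * spec) / (np.sum(spec) + 1e-12))] # spectral centroid
        return np.array(feats)

    def sliding_windows(signal, width=200, step=50):
        """Yield overlapping windows, mimicking real-time buffering."""
        for start in range(0, len(signal) - width + 1, step):
            yield signal[start:start + width]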

79. Deep Visual Inertial-Aided Feature Extraction Network for Visual Odometry: Deep Neural Network training scheme to fuse visual and inertial information for feature extraction. Serra, Franco January 2022 (has links)
Feature extraction is an essential part of the visual odometry problem. In recent years, with the rise of neural networks, the problem has shifted from classical methods to deep learning approaches. This thesis presents a fine-tuned feature extraction network trained on pose estimation as a proxy task. The architecture aims to integrate inertial information from IMU sensor data into the deep local feature extraction paradigm. Specifically, visual and inertial features are extracted using neural networks, then fused and further processed to regress the pose of a moving agent. The visual feature extraction network is effectively fine-tuned and is used stand-alone for inference. The approach is validated through a qualitative analysis of the extracted keypoints and a quantitative evaluation: the feature extraction network is used to perform visual odometry on the KITTI dataset, and the absolute trajectory error (ATE) for various sequences is reported. For comparison, the proposed method, the proposed method without IMU input, and the original pre-trained feature extraction network are each used to extract features for the visual odometry task. Their ATE results and relative trajectories show that in sequences with large changes in orientation the proposed system outperforms the original one, while on mostly straight sequences the original system performs slightly better.
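
For reference, the absolute trajectory error (ATE) reported above is commonly computed by rigidly aligning the estimated positions to ground truth and taking the RMSE; the Python sketch below is a generic formulation of that metric, not the thesis's exact evaluation code.

    # ATE RMSE after a rigid (rotation + translation) alignment of the
    # estimated trajectory to ground truth via the Kabsch/Horn method.
    import numpy as np

    def ate_rmse(est_xyz, gt_xyz):
        est, gt = np.asarray(est_xyz, float), np.asarray(gt_xyz, float)  # (N, 3) each
        mu_e, mu_g = est.mean(0), gt.mean(0)
        # Best-fit rotation from the centred point sets via SVD.
        H = (est - mu_e).T @ (gt - mu_g)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = mu_g - R @ mu_e
        aligned = est @ R.T + t
        return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))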

80. An Explorative Parameter Sweep: Spatial-temporal Data Mining in Stochastic Reaction-diffusion Simulations. Wrede, Fredrik January 2016 (has links)
Stochastic reaction-diffusion simulation has become an efficient approach for modelling spatial aspects of intracellular biochemical reaction networks. By accounting for the intrinsic noise due to low copy numbers of chemical species, stochastic reaction-diffusion simulations can more accurately predict and model biological systems. As with much simulation software, exploration of the parameters associated with a model may be needed to yield new knowledge about the underlying system. This exploration can be conducted by executing parameter sweeps for the model. However, with little or no prior knowledge about the modelled system, the effort required of practitioners to explore the parameter space can become overwhelming. To address this problem, we perform a feasibility study on explorative behavioural analysis of stochastic reaction-diffusion simulations by applying spatial-temporal data mining to large parameter sweeps. By reducing individual simulation outputs to a feature space based on simple time-series and distribution statistics, we were able to find similarly behaving simulations through agglomerative hierarchical clustering.
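
A minimal Python sketch of the workflow described above, with assumed feature choices: each simulation output (here a single copy-number time series) is reduced to a small feature vector, and the sweep is grouped with agglomerative hierarchical clustering. The statistics and cluster count are illustrative, not the thesis's exact feature space.

    # Reduce each stochastic simulation trajectory to summary features and
    # cluster the parameter sweep hierarchically (Ward linkage).
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def trajectory_features(copy_numbers):
        """Simple time-series and distribution statistics for one simulation."""
        x = np.asarray(copy_numbers, float)
        return np.array([x.mean(), x.std(), x.min(), x.max(),
                         x[-1] - x[0],                          # net drift
                         float(np.mean(np.abs(np.diff(x))))])   # average step size

    def cluster_sweep(all_trajectories, n_clusters=4):
        feats = np.array([trajectory_features(tr) for tr in all_trajectories])
        feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-12)
        Z = linkage(feats, method="ward")                       # agglomerative clustering
        return fcluster(Z, t=n_clusters, criterion="maxclust")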