11

Visual Hierarchical Dimension Reduction

Yang, Jing 09 January 2002 (has links)
Traditional visualization techniques for multidimensional data sets, such as parallel coordinates, star glyphs, and scatterplot matrices, do not scale well to high dimensional data sets. A common approach to this problem is dimensionality reduction. Existing dimensionality reduction techniques, such as Principal Component Analysis, Multidimensional Scaling, and Self-Organizing Maps, have a serious drawback: the generated low dimensional subspace has no intuitive meaning to users. In addition, these highly automatic processes allow little user interaction. In this thesis, we propose a new methodology for dimensionality reduction, the visual hierarchical dimension reduction (VHDR) framework, which combines automation and user interaction to generate meaningful subspaces. First, VHDR groups all dimensions of a data set into a dimension hierarchy. This hierarchy is then visualized using a radial space-filling hierarchy visualization tool called Sunburst, which lets users interactively explore and modify the dimension hierarchy and select clusters at different levels of detail for the data display. VHDR then assigns a representative dimension to each dimension cluster selected by the users. Finally, VHDR maps the high-dimensional data set into the subspace composed of these representative dimensions and displays the projected subspace. To accomplish the latter, we have designed several extensions to existing popular multidimensional display techniques, such as parallel coordinates, star glyphs, and scatterplot matrices. These displays have been enhanced to express semantics of the selected subspace, such as the context of the dimensions and the dissimilarity among individual dimensions in a cluster. We have implemented all of these features and incorporated them into the XmdvTool software package, which will be released as XmdvTool Version 6.0. Lastly, we present two case studies showing how VHDR is applied to visualize and interactively explore a high dimensional data set.
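As a rough illustration of the dimension-grouping and representative-dimension steps described above, the sketch below clusters correlated dimensions hierarchically and keeps one representative per cluster, using NumPy and SciPy on synthetic data. The distance measure, cluster count, and representative choice are assumptions for demonstration, not the thesis's actual algorithm or its Sunburst interaction.

```python
# A minimal sketch, assuming dimensions are grouped by correlation.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def representative_subspace(X, n_clusters=3):
    """Group correlated dimensions and keep one representative per cluster."""
    # Distance between dimensions: 1 - |Pearson correlation|.
    corr = np.corrcoef(X, rowvar=False)
    dist = np.clip(1.0 - np.abs(corr), 0.0, None)
    # Condensed upper-triangle distances for SciPy's linkage.
    iu = np.triu_indices_from(dist, k=1)
    Z = linkage(dist[iu], method="average")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    # Representative: the dimension closest on average to its cluster mates.
    reps = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        mean_dist = dist[np.ix_(members, members)].mean(axis=1)
        reps.append(members[np.argmin(mean_dist)])
    return X[:, sorted(reps)], sorted(reps)

X = np.random.default_rng(0).normal(size=(100, 8))
X[:, 1] = X[:, 0] + 0.05 * X[:, 1]          # make dims 0 and 1 correlated
subspace, kept = representative_subspace(X)
print("representative dimensions:", kept)
```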
12

Numerické metody pro rekonstrukci chybějící obrazové informace / Numerical methods for missing image processing data reconstruction

Bah, Ebrima M. January 2019 (has links)
The diploma thesis deals with the reconstruction of missing data in an image, using appropriate mathematical theory and a numerical algorithm to reconstruct the missing information. The result of this implementation is the reconstruction of the missing image information. The thesis also compares different numerical methods to see which of them performs best in terms of efficiency and accuracy on the given problem; that method is then used for the reconstruction of the missing data.
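As an illustration of the kind of reconstruction the abstract describes, the sketch below fills missing pixels by repeated local averaging, a simple diffusion scheme. The grayscale test image, the random mask, and the diffusion method itself are assumptions chosen for brevity; the thesis's actual methods and their comparison are not reproduced here.

```python
# A minimal diffusion-inpainting sketch, assuming a grayscale image and a
# boolean mask of missing pixels.
import numpy as np

def diffuse_inpaint(img, missing, iters=500):
    """Iteratively replace missing pixels with the mean of their 4-neighbours."""
    out = img.copy()
    out[missing] = out[~missing].mean()       # neutral initial guess
    for _ in range(iters):
        # 4-neighbour average via shifted copies (wrap-around edges, for brevity).
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[missing] = avg[missing]           # known pixels stay fixed
    return out

rng = np.random.default_rng(1)
image = np.linspace(0, 1, 64 * 64).reshape(64, 64)
mask = rng.random(image.shape) < 0.3          # 30% of pixels missing
restored = diffuse_inpaint(image, mask)
print("max error on missing pixels:", np.abs(restored - image)[mask].max())
```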
13

Symbolic Semantic Memory in Transformer Language Models

Morain, Robert Kenneth 16 March 2022 (has links)
This paper demonstrates how transformer language models can be improved by giving them access to relevant structured data extracted from a knowledge base. The knowledge base preparation process and the modifications to the transformer models are explained. We evaluate these methods on language modeling and question answering tasks. The results show that even simple knowledge augmentation reduces validation loss by 73%. These methods also significantly outperform common ways of improving language models, such as increasing the model size or adding more data.
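As a toy illustration of one simple form of knowledge augmentation, the sketch below verbalises knowledge base triples for entities mentioned in the input and prepends them to the prompt before it reaches the model. The tiny knowledge base, the substring-based entity matching, and the verbalisation format are all assumptions; the paper's actual model modifications are more involved and are not reproduced here.

```python
# A minimal sketch, assuming a toy triple store and naive entity matching.
knowledge_base = {
    "Paris": [("Paris", "capital_of", "France")],
    "Ada Lovelace": [("Ada Lovelace", "occupation", "mathematician")],
}

def augment(text, kb):
    """Prepend verbalised triples for every entity mentioned in the text."""
    facts = []
    for entity, triples in kb.items():
        if entity in text:
            facts += [f"{s} {r.replace('_', ' ')} {o}." for s, r, o in triples]
    return " ".join(facts) + " " + text if facts else text

prompt = augment("Ada Lovelace lived for a time in Paris.", knowledge_base)
print(prompt)  # the augmented prompt is then fed to the transformer as usual
```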
14

MACHINE LEARNING ALGORITHMS and THEIR APPLICATIONS in CLASSIFYING CYBER-ATTACKS on a SMART GRID NETWORK

Aribisala, Adedayo, Khan, Mohammad S., Husari, Ghaith 01 January 2021 (has links)
Smart grid architecture and Software-Defined Networking (SDN) have evolved into a centrally controlled infrastructure that captures and extracts data in real time through sensors, smart meters, and virtual machines. These advances increase the vulnerability of these infrastructures to sophisticated cyberattacks such as distributed denial of service (DDoS), false data injection attacks (FDIA), and data replay. Integrating machine learning with a network intrusion detection system (NIDS) can improve the system's accuracy and precision when detecting suspicious signatures and network anomalies. Analyzing data in real time using hyperparameters trained and tested on a network traffic dataset applies to most network infrastructures. The NSL-KDD dataset used here holds various classes, attack types, and protocols such as TCP, HTTP, and POP, which are critical to packet transmission on a smart grid network. In this paper, we leveraged existing machine learning (ML) algorithms, Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Random Forest (RF), Naïve Bayes (NB), and Bagging, to perform a detailed performance comparison of selected classifiers. We propose a multi-level hybrid model of SVM integrated with RF for improved accuracy and precision during network filtering. The hybrid SVM-RF model returned an average accuracy of 94% in 10-fold cross-validation and 92.75% in an 80/20 train-test split during class classification.
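The abstract does not spell out the multi-level hybrid design, so the sketch below stands in with a scikit-learn stacking ensemble that combines an SVM and a Random Forest, evaluated with 10-fold cross-validation. Synthetic data takes the place of NSL-KDD, and stacking is an assumed interpretation of "SVM integrated with RF", not the paper's confirmed architecture.

```python
# A minimal sketch, assuming a stacking ensemble stands in for the hybrid.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for NSL-KDD network traffic features.
X, y = make_classification(n_samples=1000, n_features=20, n_classes=2,
                           random_state=0)

hybrid = StackingClassifier(
    estimators=[("svm", make_pipeline(StandardScaler(), SVC())),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=RandomForestClassifier(random_state=0),
)
scores = cross_val_score(hybrid, X, y, cv=10)   # 10-fold CV as in the paper
print(f"mean accuracy: {scores.mean():.3f}")
```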
15

A proposed minimum data set for international primary care optometry: a modified Delphi study

Davey, Christopher J., Slade, S.V., Shickle, D. 04 May 2017 (has links)
Purpose: To identify a minimum list of metrics of international relevance to public health, research and service development which can be extracted from practice management systems and electronic patient records in primary optometric practice. Methods: A two-stage modified Delphi technique was used. Stage 1 categorised metrics that may be recorded as part of a primary eye examination by their importance to research, using the results of a previous survey of 40 vision science and public health academics. Delphi stage 2 then gauged the opinion of a panel of seven vision science academics and achieved consensus on contentious metrics and on methods of grading/classification. Results: Consensus regarding inclusion and response categories was achieved for nearly all metrics, and 53 metrics were recommended as appropriate for a minimum data set. Conclusions: This minimum data set should be easily integrated into clinical practice yet allow vital data to be collected internationally from primary care optometry. It should not be mistaken for a clinical guideline and should not add to the optometrist's workload. A pilot study incorporating an additional Delphi stage prior to implementation is advisable to refine some response categories. This work was supported by the College of Optometrists.
16

Implementation and Evaluation of Monocular SLAM

Martinsson, Jesper January 2022 (has links)
This thesis describes the research, implementation, and testing of a monocular SLAM system in Oden, an application developed by Voysys AB, as well as the creation and investigation of a new data set used to test the SLAM system. The system uses CUDASIFT to find and match feature points, OpenCV to compute the initial guess, and the Ceres Solver to optimize the results. The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
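The pipeline above (feature matching, initial guess, optimization) can be loosely illustrated with OpenCV's two-view geometry routines. In this sketch, synthetic point correspondences stand in for CUDASIFT matches, the camera intrinsics K are assumed, and the Ceres bundle-adjustment step is omitted.

```python
# A minimal sketch of the "initial guess" step: relative pose from matches.
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

rng = np.random.default_rng(2)
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))     # scene points
R_true, _ = cv2.Rodrigues(np.array([[0.0, 0.1, 0.0]]))        # small yaw
t_true = np.array([[0.5, 0.0, 0.0]]).T

def project(P, R, t):
    """Pinhole projection of 3-D points into pixel coordinates."""
    cam = (R @ P.T + t).T
    uv = (K @ cam.T).T
    return (uv[:, :2] / uv[:, 2:]).astype(np.float64)

pts1 = project(pts3d, np.eye(3), np.zeros((3, 1)))            # first view
pts2 = project(pts3d, R_true, t_true)                         # second view

E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("recovered rotation:\n", R)   # pose up to scale; Ceres would refine it
```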
17

Semi-supervised Ensemble Learning Methods for Enhanced Prognostics and Health Management

Shi, Zhe 15 May 2018 (has links)
No description available.
18

A Benchmark Data Set and Comparative Study for Protein Structural Alignment Tools

Mittal, Dipti January 2008 (has links)
No description available.
19

Efficient number similarity check

Simonsson, David January 2024 (has links)
Efficiency in algorithms is important, especially execution time, since it directly impacts user experience. For example, when a customer visits a website, even a one-second delay can significantly reduce their patience and increase the likelihood that they abandon the site. The same principle applies to search algorithms. This project implements a time-efficient tree-based search algorithm that finds similarities between the search input and stored data. The objective is an execution time as close to O(1) as possible, regardless of the data size. The implemented algorithm is compared with a linear search algorithm, whose execution time grows with the data size. By measuring the execution times of both search methods, the project aims to demonstrate the superiority of the tree-based search algorithm in terms of time efficiency.
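As a rough sketch of why a tree-based lookup's cost can depend on key length rather than collection size, the digit trie below matches a query number by longest shared prefix. The DigitTrie class and its prefix-based notion of similarity are illustrative assumptions, not the project's actual algorithm or scoring.

```python
# A minimal digit-trie sketch, assuming longest-common-prefix similarity.
class DigitTrie:
    def __init__(self):
        self.root = {}

    def insert(self, number: str):
        node = self.root
        for d in number:
            node = node.setdefault(d, {})
        node["$"] = number                    # mark a complete stored number

    def closest_by_prefix(self, query: str):
        """Walk the trie as far as the query matches, then return a stored
        number under the deepest node reached."""
        node, depth = self.root, 0
        for d in query:
            if d not in node:
                break
            node, depth = node[d], depth + 1
        while "$" not in node:                # descend to some completion
            node = next(v for k, v in node.items() if k != "$")
        return node["$"], depth               # match and shared-prefix length

trie = DigitTrie()
for n in ["0701234567", "0709876543", "0761112223"]:
    trie.insert(n)
print(trie.closest_by_prefix("0701239999"))   # ('0701234567', 6)
```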
20

Feasibility of Event-Based Sensors to Detect and Track Unresolved, Fast-Moving, and Short-Lived Objects

Tinch, Jonathan Luc 13 July 2022 (has links)
No description available.
