1

Recommender System for Audio Recordings

Lee, Jong Seo 01 January 2010 (has links) (PDF)
Nowadays the largest e-commerce and e-service websites offer millions of products for sale. A recommender system is software used by such websites to recommend commercial or noncommercial items to users according to their tastes. In this project, we develop a recommender system for a private multimedia web service company. In particular, we devise three recommendation engines using different data filtering methods (weighted-average, K-nearest neighbors, and item-based), all grounded in collaborative filtering, which records user preferences on items and anticipates users' future likes and dislikes by comparing those records. To acquire proper input data for the three engines, we retrieve data from the database using three data collection techniques: active filtering, passive filtering, and item-based filtering. For experimental purposes we compare the prediction accuracy of the three recommendation engines, and we additionally evaluate the performance of the weighted-average method using an empirical analysis approach, a methodology devised for verifying predictive accuracy.
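For orientation, here is a minimal sketch of the item-based, weighted-average style of collaborative filtering the abstract names; the thesis's actual engines and data are not reproduced, and the function names and toy rating matrix are illustrative:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two item rating vectors (0 where unrated)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict_item_based(ratings: np.ndarray, user: int, item: int) -> float:
    """Predict ratings[user, item] as a similarity-weighted average of the
    user's ratings on other items (item-based collaborative filtering)."""
    sims, rated = [], []
    for j in range(ratings.shape[1]):
        if j != item and ratings[user, j] > 0:
            sims.append(cosine_sim(ratings[:, item], ratings[:, j]))
            rated.append(ratings[user, j])
    sims, rated = np.array(sims), np.array(rated)
    if sims.sum() == 0:
        return float(rated.mean()) if rated.size else 0.0  # fallback
    return float(sims @ rated / sims.sum())

# toy user-item matrix: rows = users, cols = audio recordings, 0 = unrated
R = np.array([[5, 3, 0, 1],
              [4, 0, 4, 1],
              [1, 1, 0, 5],
              [1, 0, 5, 4]], dtype=float)
print(predict_item_based(R, user=0, item=2))
```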
2

IRIG-106 CHAPTER 10 RECORDER WITH BUILT-IN DATA FILTERING MECHANISM

Berdugo, Albert, Natale, Louis 10 1900 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Sixteen years ago, RCC added Chapter 8 to the IRIG-106 standard for the acquisition of 100% MIL-STD-1553 data from up to eight buses for recording and/or transmission. In the past 5 years, the RCC recording committee added Chapter 10 to the IRIG-106 standard for acquisition of 100% data from PCM, MIL-STD-1553 buses, Video, ARINC-429, Ethernet, IEEE-1394, and others. IRIG-106 Chapter 10 recorder suppliers have further developed customer-specific interfaces to meet additional customer needs. These needs have included unique radar and avionic bus interfaces such as F-16 Fibre Channel, F-35 Fibre Channel, F-22 FOTR, and others. IRIG-106 Chapter 8 and Chapter 10 have posed major challenges to the user community when the acquired avionics bus data included data that must be filtered and never leave the test platform via TM or recording media. The preferred method of filtering data to ensure that it is never recorded or transmitted is to do so at the interface level with the avionic buses. This paper describes the data filtering used on the F-22 Program for the MIL-STD-1553 buses and the FOTR bus as part of the IRIG-106 Chapter 10 Multiplexer/Recorder System. This filtering method blocks selected data at the interface level before it is transferred over the system bus to the recording media. Additionally, the paper describes the configuration method for defining the data to be blocked and the report generated to allow a second party to verify proper programming of the system.
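The blocking described above happens in recorder hardware at the bus interface; the Python sketch below only illustrates the logic of a filter table consulted before data can reach the recording media. The message fields and filter keys are simplified placeholders, not the actual Chapter 10 or MIL-STD-1553 formats:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Msg1553:
    bus: int          # bus number the message arrived on
    rt: int           # remote terminal address
    subaddress: int
    words: tuple      # data words

# filter table: (bus, rt, subaddress) triples that must never reach the media
BLOCKED = {(1, 5, 3), (2, 12, 1)}

def forward_to_recorder(msg: Msg1553, record) -> bool:
    """Drop blocked traffic at the interface, before the system bus."""
    if (msg.bus, msg.rt, msg.subaddress) in BLOCKED:
        return False      # filtered: never recorded or transmitted
    record(msg)
    return True
```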
3

Data Filtering and Modeling for Smart Manufacturing Network

Li, Yifu 13 August 2020 (has links)
A smart manufacturing network connects machines via sensing, communication, and actuation networks. The data generated from the networks are used in data-driven modeling and decision-making to improve quality, productivity, and flexibility while reducing cost. This dissertation focuses on improving the data-driven modeling of the quality-process relationship in smart manufacturing networks. Understanding the relationships between process variables and quality is important for guiding quality improvement by optimizing the process variables. However, several challenges emerge. First, the big data sets generated from the manufacturing network may be information-poor for modeling, which may lead to high data transmission and computational loads and redundant data storage. Second, the data generated from connected machines often contain inexplicit similarities due to similar product designs and manufacturing processes, and modeling such inexplicit similarities remains challenging. Third, it is unclear how to select representative data sets for modeling in a manufacturing network setting while accounting for these inexplicit similarities. In this dissertation, a data filtering method is proposed to select a relatively small and informative data subset. Multi-task learning is combined with latent variable decomposition to model multiple connected manufacturing processes that are similar but non-identical. A data filtering and modeling framework is also proposed to adaptively filter the manufacturing data for network modeling. The proposed methodologies have been validated through simulations and applications to real manufacturing case studies. / Doctor of Philosophy / The advancement of the Internet-of-Things (IoT) integrates manufacturing processes and equipment into a network. Practitioners analyze and apply the data generated from the network to model the manufacturing network and improve product quality. Data quality directly affects modeling performance and decision effectiveness, yet it is not well controlled in a manufacturing network setting. In this dissertation, we propose a data quality assurance method, referred to as data filtering. The proposed method selects a data subset from the raw data collected from the manufacturing network, reducing the complexity of modeling while supporting decision effectiveness. To model the data from multiple similar-but-non-identical manufacturing processes, we propose a latent variable decomposition-based multi-task learning model to study the relationships between the process variables and the product quality variable. Lastly, to adaptively determine the appropriate data subset for modeling each process in the manufacturing network, we further propose an integrated data filtering and modeling framework. The integrated framework improved modeling performance on data from baby care manufacturing and semiconductor manufacturing.
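As a hedged illustration of the data filtering idea (selecting a small, informative subset before modeling), the sketch below keeps the samples with the highest statistical leverage, one common proxy for informativeness; the dissertation's own selection criteria are not reproduced here:

```python
import numpy as np

def leverage_filter(X: np.ndarray, k: int) -> np.ndarray:
    """Keep the k rows of X with the highest statistical leverage,
    a simple proxy for 'informative' samples in a regression model."""
    Q, _ = np.linalg.qr(X)                 # economy-size QR decomposition
    leverage = (Q ** 2).sum(axis=1)        # leverage of row i = ||Q[i, :]||^2
    return np.argsort(leverage)[-k:]       # indices of retained samples

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))             # process variables
keep = leverage_filter(X, k=100)
X_small = X[keep]                          # information-rich subset for modeling
```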
4

Data Filtering Unit (DFU): Dealing With Cryptovariable Keys in Data Recorded Using the IRIG 106 Chapter 10 Format

Manning, Dennis, Williams, Rick, Ferrill, Paul 10 1900 (has links)
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / Recent advancements in IRIG 106 Chapter 10 recording systems allow the recording of all on-board 1553 bus and PCM traffic to a single medium. These advancements have also raised the issue of extracting data with different levels of classification that were written to a single location. Carrying GPS "smart" weapons further complicates this issue, since the recording of GPS keys adds another level of classification to the mix. The ability to separate and/or remove higher-level data from a data product is now required. This paper describes the design of a hardware device that filters specified data from IRIG 106 Chapter 10 recorder memory modules (RMMs) to prevent the storage device or computer from becoming classified at the level of the specified data.
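A rough software analogue of the DFU's role, assuming a simplified packet representation with a channel_id field (the real device operates on Chapter 10 RMM contents in hardware):

```python
def sanitize(packets, classified_channels):
    """Split a recorded packet stream into a releasable stream and the
    packets whose channel carries data above the target classification
    (e.g. GPS crypto-key traffic), so the latter never leave the device."""
    releasable, withheld = [], []
    for pkt in packets:
        dest = withheld if pkt["channel_id"] in classified_channels else releasable
        dest.append(pkt)
    return releasable, withheld

clean, secret = sanitize(
    [{"channel_id": 1, "data": b"pcm"}, {"channel_id": 9, "data": b"gps-key"}],
    classified_channels={9},
)
```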
5

Mapování skalních útvarů pomocí geoinformačních metod / Topographic mapping of rock formations using GIS methods

Bashir, Faraz Ahmed January 2021 (has links)
This thesis deals with creating 3D models of rock formations from terrestrial laser scanning, close-range photogrammetry, and UAV photogrammetry data. The theoretical part explains how those methods work and where they are used, and also covers the problems of 3D point cloud filtering. The practical part describes the data collection and processing procedure. A filtering process is then proposed that removes noise points from the point clouds and removes vegetation by combining the ExG vegetation index, the DBSCAN clustering algorithm, and the Hough transform. The proposed method is tested on a selected rock formation in Bohemian Switzerland National Park and is evaluated by comparing models filtered with the proposed method against manually filtered reference models. Finally, the achieved accuracy of the models is assessed using geodetic measurements. Keywords: laser scanning, photogrammetry, UAV, point cloud, data filtering
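A sketch of the vegetation-removal step as described (ExG index plus DBSCAN clustering); the threshold and parameter values are illustrative, not the thesis's tuned settings:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def vegetation_mask(xyz: np.ndarray, rgb: np.ndarray,
                    exg_thresh: float = 0.05, eps: float = 0.3,
                    min_samples: int = 10) -> np.ndarray:
    """Flag probable vegetation points in a colored point cloud.

    ExG (excess green) = 2g - r - b on chromaticity-normalized colors;
    points above the threshold are clustered with DBSCAN so that isolated
    green-ish false positives are not removed along with real vegetation.
    """
    chroma = rgb / np.clip(rgb.sum(axis=1, keepdims=True), 1e-9, None)
    exg = 2 * chroma[:, 1] - chroma[:, 0] - chroma[:, 2]
    green = exg > exg_thresh
    veg = np.zeros(len(xyz), dtype=bool)
    if green.any():
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xyz[green])
        veg[np.flatnonzero(green)[labels >= 0]] = True  # keep clustered points only
    return veg   # boolean mask: True = vegetation to remove
```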
6

3D visualization of dynamic drive test data / 3D-visualisering av dynamiska körprovsdata

Lindhe, Alexander, Szalontai, Julia January 2015 (has links)
The modular product system of Scania CV AB provides the possibility of complete truck customization while using a limited number of interchangeable components. The high product modularity sets high demands on quality assurance of the delivered products. Geometry and layout assurance is a key factor of the quality control. Dynamic geometry assurance of trucks is accomplished by performing physical tests while measuring the movement of certain components. The results are then analysed to ensure that unwanted collisions do not occur during operation of the vehicle. Test results are presented in test reports containing 2D plots of the delta movements that occur at certain measurement points. These test reports are considered difficult to interpret, and design mistakes have occurred due to misinterpretations. The purpose of this master thesis was to develop a 3D visualization method that can complement the test reports and facilitate the understanding of test results. Several visualization methods were identified and evaluated against requirements derived from interviews held at Scania, and one method was chosen for further development. The thesis project focused on cabin movement visualization; however, the aim was to create a general method applicable to all main components, e.g. chassis and engine. The result of the development was a visualization method comprising a MATLAB script and a CATIA macro. The MATLAB script filters raw test data for extreme positions of the cabin. These positions are then recalculated as transformation matrices and exported as an Excel sheet. The Excel sheet is imported by the CATIA macro, which instantiates and positions user-selected components into the previously found extreme positions. The developed visualization method was verified and confirmed to provide reliable results. Furthermore, the benefits and drawbacks of the visualization method are discussed, and the method is evaluated against the previously set requirements, showing that these are fulfilled. Even though further verification of the visualization method is suggested, it is concluded that the method can and should be implemented into the current workflow.
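A Python stand-in for the extreme-position step of the MATLAB script (the thesis's actual code is not public; the array shapes and names are assumptions):

```python
import numpy as np

def extreme_poses(t, pos, rot):
    """Find the frames where the cabin reaches its extreme positions along
    each axis and return them as 4x4 homogeneous transformation matrices.

    t: (n,) timestamps, pos: (n, 3) translations, rot: (n, 3, 3) rotations.
    """
    frames = sorted({int(np.argmin(pos[:, k])) for k in range(3)} |
                    {int(np.argmax(pos[:, k])) for k in range(3)})
    out = []
    for i in frames:
        T = np.eye(4)
        T[:3, :3] = rot[i]   # orientation at the extreme frame
        T[:3, 3] = pos[i]    # translation at the extreme frame
        out.append((t[i], T))
    return out  # export these, e.g. to a spreadsheet, for the CAD macro
```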
7

Data Driven High Performance Data Access

Ramljak, Dusan January 2018 (has links)
Low-latency, high-throughput mechanisms to retrieve data become increasingly crucial as cyber and cyber-physical systems pour out increasing amounts of data that often must be analyzed in an online manner. Generally, as the data volume increases, the marginal utility of an "average" data item tends to decline, which requires greater effort in identifying the most valuable data items and making them available with minimal overhead. We believe that data-analytics-driven mechanisms have a big role to play in solving this needle-in-the-haystack problem. We rely on the claim that efficient pattern discovery and description, coupled with the observed predictability of complex patterns within many applications, offers significant potential to enable many I/O optimizations. Our research covers exploitation of the storage hierarchy for data-driven caching and tiering, reduction of the distance between data and computations, removal of redundancy in data, use of sparse representations of data, the impact of data access mechanisms on resilience, energy consumption, and storage usage, and the enablement of new classes of data-driven applications. For caching and prefetching, we offer a powerful model that separates the process of access prediction from the data retrieval mechanism. Predictions are made on a per-data-entity basis and use the notion of "context" and its aspects, such as "belief", to uncover and leverage future data needs. This approach allows truly opportunistic utilization of predictive information. We elaborate on which aspects of context we use in areas other than caching and prefetching, and why each is appropriate in its situation. We present in more detail the methods we have developed: BeliefCache for data-driven caching and prefetching, and AVSC for pattern-mining-based compression of data. In BeliefCache, using a belief, an aspect of context representing an estimate of the probability that a storage element will be needed, we developed a modular framework to make unified, informed decisions about that element or a group of elements. For the workloads we examined, we were able to capture complex non-sequential access patterns better than a state-of-the-art framework for optimizing cloud storage gateways. Moreover, our framework adjusts to variations in the workload faster, and it does not require a static workload to be effective, since its modular design allows discovering and adapting to changes in the workload. In AVSC, using an aspect of context to gauge the similarity of events, we compress by keeping relevant events intact and approximating other events. We do this in two stages: we first generate a summarization of the data, then approximately match the remaining events with existing patterns where possible, or add the patterns to the summary otherwise. We show gains over plain lossless compression for a specified amount of accuracy for the purpose of identifying the state of the system, and a clear tradeoff between compressibility and fidelity. In the other research areas mentioned, we present challenges and opportunities in the hope that they will spur researchers to further examine those issues in the space of rapidly emerging data-intensive applications. We also discuss how ideas from our research could be applied in other domains to provide high-performance data access. / Computer and Information Science
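BeliefCache itself is not reproduced here; this toy sketch only illustrates the stated design idea of separating access prediction (a "belief" supplied by an external predictor) from the retrieval mechanism:

```python
class BeliefDrivenCache:
    """Toy cache that evicts the item with the lowest 'belief', an
    externally supplied estimate of the probability the item will be
    needed again. Illustrates decoupling prediction from retrieval;
    not the BeliefCache implementation."""

    def __init__(self, capacity, predictor):
        self.capacity = capacity
        self.predictor = predictor   # key -> probability of future access
        self.store = {}

    def get(self, key, fetch):
        if key in self.store:
            return self.store[key]   # hit
        value = fetch(key)           # miss: retrieve from the slow tier
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=self.predictor)
            del self.store[victim]   # evict the least-believed-in entry
        self.store[key] = value
        return value
```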
8

A Knowledge-Based Approach to Urban-feature Classification Using Aerial Imagery with Airborne LiDAR Data

Huang, Ming-Jer 11 June 2007 (has links)
Among remotely sensed data from airborne and spaceborne platforms, multi-spectral satellite imagery containing NIR-band information is the major source for land-cover classification. Aerial imagery has mainly been used for thematic land-use/land-cover mapping and rarely for land-cover classification. Recently, newly developed digital aerial cameras offering an NIR band at up to 10 cm ultra-high resolution have made land-cover classification from aerial imagery possible. However, because urban ground objects are so complex, multi-spectral imagery alone is still not sufficient for urban classification. Problems include the difficulty of discriminating between trees and grass, the misclassification of buildings due to diverse roof compositions and shadow effects, and the misclassification of cars on roads. Recently, aerial LiDAR (Light Detection And Ranging) data have been integrated with remotely sensed data to obtain better classification results. LiDAR-derived normalized digital surface models (nDSMs), calculated by subtracting digital elevation models (DEMs) from digital surface models (DSMs), have become an important factor for urban classification. This study proposes an adaptive, raw-data-based, surface-based LiDAR data-filtering algorithm to generate the DEMs that underlie the nDSMs. According to the experimental results, the proposed adaptive LiDAR data-filtering algorithm not only successfully filters out ground objects in urban, forest, and mixed land-cover areas but also derives DEMs within the measuring accuracy of the LiDAR data, based on absolute and relative accuracy evaluations. For urban classification of aerial imagery, this study first conducted maximum likelihood classification (MLC) experiments to identify features suitable for urban classification using LiDAR data and aerial imagery. The addition of LiDAR height data improved the overall accuracy by up to 28% and 18%, respectively, compared to cases with only red-green-blue (RGB) and multi-spectral imagery. We conclude that urban classification depends strongly on LiDAR height rather than on NIR imagery. To further improve classification, this study proposes a knowledge-based classification system (KBCS) that includes a three-level height, "asphalt road, vegetation, and non-vegetation" (A-V-N) classification model, a rule-based scheme, and knowledge-based correction (KBC). The proposed KBCS improved overall accuracy by 12% and 7% compared to maximum likelihood and object-based classification (OBC), respectively. The classification results have superior visual interpretability compared to the MLC-classified image, and the visual details of the KBCS are superior to those of the OBC without requiring a selection procedure for optimal segmentation parameters.
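The nDSM computation is stated explicitly in the abstract; the sketch below implements it, plus an illustrative three-level height map in the spirit of the KBCS (the 0.5 m and 3.0 m breakpoints are assumptions, not the paper's values):

```python
import numpy as np

def normalized_dsm(dsm: np.ndarray, dem: np.ndarray) -> np.ndarray:
    """Object height above ground: nDSM = DSM - DEM."""
    return np.clip(dsm - dem, 0.0, None)   # clamp negative residuals to 0

def height_level(ndsm: np.ndarray, low=0.5, high=3.0) -> np.ndarray:
    """Three-level height map: 0 = ground level, 1 = low objects
    (cars, shrubs), 2 = tall objects (buildings, trees)."""
    return np.digitize(ndsm, [low, high])
```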
9

A Generalized Adaptive Mathematical Morphological Filter for LIDAR Data

Cui, Zheng 14 November 2013 (has links)
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preservation of the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often removes ground measurements incorrectly in topographically high areas, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes in topographic slope and the characteristics of non-terrain objects. A comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points removed incorrectly by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that it reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for complex terrain in a large LIDAR dataset. The GAPM filter is highly automatic and requires little human input; therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
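For reference, a compact sketch of the standard progressive morphological filter on a gridded elevation raster; the constant slope parameter is exactly what the GAPM filter replaces with adaptive, trend-based thresholds (window sizes and constants here are illustrative):

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(z: np.ndarray, cell=1.0, slope=0.3,
                                     windows=(3, 5, 9, 17), dh_max=3.0):
    """Label ground cells in a gridded elevation raster z.

    At each growing window size, a morphological opening estimates the
    ground surface; cells rising above it by more than a slope-dependent
    threshold are flagged as non-ground. The constant `slope` is what
    causes the 'cut-off' errors that the GAPM filter addresses."""
    ground = np.ones(z.shape, dtype=bool)
    surface = z.copy()
    prev_w = 1
    for w in windows:
        opened = grey_opening(surface, size=(w, w))
        dh = min(slope * (w - prev_w) * cell + 0.5, dh_max)  # height threshold
        ground &= (surface - opened) <= dh
        surface = opened
        prev_w = w
    return ground   # boolean grid: True = ground cell
```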
10

Log data filtering in embedded sensor devices

Olsson, Jakob, Yberg, Viktor January 2015 (has links)
Data filtering is the disposal of unnecessary data in a data set, done to save resources such as server capacity and bandwidth. The method is used to reduce the amount of stored data and thereby prevent valuable resources from processing insignificant information. The purpose of this thesis is to find algorithms for data filtering and to determine which algorithm gives the best results in embedded devices with resource limitations. This means that the algorithm needs to be resource-efficient in terms of memory usage and performance, while saving enough data points to avoid modification or loss of information. After a suitable algorithm was found, it was also implemented to fit the Exqbe system. The study was carried out by reviewing previous work on line simplification algorithms and their applications, and by comparing several well-known and well-studied algorithms to find which best suits the problem of this thesis. The comparison of the different line simplification algorithms resulted in an implementation of an extended version of the Ramer-Douglas-Peucker algorithm. The algorithm has been optimized, and a new filter has been implemented in addition to the algorithm.
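The Ramer-Douglas-Peucker algorithm itself is standard; a plain (non-extended) version is sketched below, with the thesis's optimizations and added filter omitted:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker line simplification: keep a point only if it
    lies farther than epsilon from the segment joining the endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    seg = math.hypot(x2 - x1, y2 - y1)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        if seg == 0:                                  # degenerate segment
            d = math.hypot(x0 - x1, y0 - y1)
        else:                                         # perpendicular distance
            d = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)) / seg
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]                # drop interior points
    left = rdp(points[:idx + 1], epsilon)
    right = rdp(points[idx:], epsilon)
    return left[:-1] + right                          # splice, no duplicate pivot

samples = [(t, math.sin(t / 3)) for t in range(60)]   # e.g. sensor log (time, value)
print(len(rdp(samples, epsilon=0.05)))                # far fewer retained points
```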
