  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Spam Filter Improvement Through Measurement

Lynam, Thomas Richard January 2009 (has links)
This work supports the thesis that sound quantitative evaluation of spam filters leads to substantial improvement in the classification of email. To this end, new laboratory testing methods and datasets are introduced, and evidence is presented that their adoption at the Text REtrieval Conference (TREC) and elsewhere has led to improvements in state-of-the-art spam filtering. While many of these improvements have been discovered by others, the best-performing method known at this time -- spam filter fusion -- was demonstrated by the author. This work describes four principal dimensions of spam filter evaluation methodology and spam filter improvement. An initial study investigates the application of twelve open-source filter configurations in a laboratory environment, using a stream of 50,000 messages captured from a single recipient over eight months. The study measures the impact of user feedback and on-line learning on filter performance, using methodology and measures which were released to the research community as the TREC Spam Filter Evaluation Toolkit. The toolkit was used as the basis of the TREC Spam Track, which the author co-founded with Cormack. The Spam Track, in addition to evaluating a new application (email spam), addressed the issue of testing systems on both private and public data. While streams of private messages are most realistic, they are not easy to come by and cannot be shared with the research community as archival benchmarks. Using the toolkit, participant filters were evaluated on both, and the differences were found not to substantially confound evaluation; as a result, public corpora were validated as research tools. Over the course of TREC and similar evaluation efforts, a dozen or more archival benchmarks -- some private and some public -- have become available. The toolkit and methodology have spawned improvements in the state of the art every year since their deployment in 2005.
In 2005, 2006, and 2007, the Spam Track yielded new best-performing systems based on sequential compression models, orthogonal sparse bigram features, logistic regression, and support vector machines. Using the TREC participant filters, we develop and demonstrate methods for on-line filter fusion that outperform all other reported on-line personal spam filters.
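The filter-fusion idea described above can be sketched as a simple on-line combiner: each base filter emits a spamminess score, and a logistic model learns fusion weights from user feedback revealed after each prediction. This is an illustrative sketch only, not Lynam's actual method; all names and parameters are hypothetical.

```python
import math

def online_fusion(score_streams, labels, lr=0.1):
    """On-line logistic-regression fusion of per-filter spam scores.

    score_streams: per-message lists of scores (one per base filter, in [0, 1]).
    labels: 1 for spam, 0 for ham, revealed after each prediction.
    Returns the fused spam probabilities predicted before each label was seen.
    """
    n_filters = len(score_streams[0])
    w = [0.0] * n_filters   # one fusion weight per base filter
    b = 0.0                 # bias term
    preds = []
    for scores, y in zip(score_streams, labels):
        z = b + sum(wi * si for wi, si in zip(w, scores))
        p = 1.0 / (1.0 + math.exp(-z))   # fused spam probability
        preds.append(p)
        g = p - y                        # gradient of the log-loss
        b -= lr * g
        w = [wi - lr * g * si for wi, si in zip(w, scores)]
    return preds
```

With user feedback after every message, the combiner gradually learns to trust filters whose scores agree with the labels.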

Log Event Filtering Using Clustering Techniques

Wasfy, Ahmed January 2009 (has links)
Large software systems are composed of various run-time components, partner applications, and processes. When such systems operate, they are monitored so that audits can be performed once a failure occurs or when maintenance operations are performed. However, log files are usually sizeable and require filtering and reduction to be processed efficiently. Furthermore, there is no apparent correspondence between logged events and the particular use cases the system may be performing. In this thesis, we have developed a framework based on heuristic clustering algorithms to achieve log filtering, log reduction, and log interpretation. More specifically, we define the concept of the Event Dependency Graph, and we present event filtering and use case identification techniques that are based on event clustering. The clustering process groups together all events that relate to a collection of initial significant events associated with a use case. We refer to these significant events as beacon events. Beacon events can be identified automatically or semi-automatically by examining log event types or event names against those in the corresponding specification of the use case being considered (e.g. events in sequence diagrams). Furthermore, the user can select other or additional initial clustering conditions based on his or her domain knowledge of the system. The clustering technique can be used in two possible ways. The first is for large logs to be reduced or sliced with respect to a particular use case, so that operators can better focus their attention on events that relate to specific operations.
The second is for the determination of active use cases: operators select particular seed events of interest and then examine the resulting reduced logs against events or event types stemming from different alternative known use cases, in order to identify the best match and consequently gain insight into which of these use cases may be running at any given time. The approach has shown very promising results in identifying the executing use case among various alternatives in several runs of the Session Initiation Protocol.
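The beacon-event clustering step can be sketched as a reachability computation over a simple event dependency graph: starting from the beacon events, the cluster grows along dependency edges, and the log is sliced down to the cluster. This is an illustrative sketch under assumed data structures, not the thesis framework itself; all names are hypothetical.

```python
from collections import defaultdict, deque

def slice_log(events, depends_on, beacons):
    """Reduce a log to the cluster of events connected to beacon events.

    events: the log, an ordered list of event ids.
    depends_on: dict mapping event id -> set of event ids it depends on
                (the edges of a simple event dependency graph).
    beacons: seed events taken from a use case's specification.
    Returns the events, in log order, that belong to the beacon cluster.
    """
    # Build an undirected adjacency so the cluster grows in both directions.
    adj = defaultdict(set)
    for e, deps in depends_on.items():
        for d in deps:
            adj[e].add(d)
            adj[d].add(e)
    cluster = set(beacons)
    frontier = deque(beacons)
    while frontier:                      # breadth-first growth of the cluster
        e = frontier.popleft()
        for nb in adj[e]:
            if nb not in cluster:
                cluster.add(nb)
                frontier.append(nb)
    return [e for e in events if e in cluster]
```

Comparing the sliced log against the event sets of alternative use cases would then identify the best-matching (active) use case.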

A Localisation and Navigation System for an Autonomous Wheel Loader

Lilja, Robin January 2011 (has links)
Autonomous vehicles are an emerging trend in robotics, seen in a vast range of applications and environments. Consequently, Volvo Construction Equipment endeavours to apply the concept of autonomous vehicles to one of its main products. In the company's Autonomous Machine project, an autonomous wheel loader is being developed. As an objective given by the company, a demonstration proving the possibility of conducting a fully autonomous load and haul cycle was to be performed. Conducting such a cycle requires the vehicle to be able to localise itself in its task space and navigate accordingly. In this Master's Thesis, methods of solving those requirements are proposed and evaluated on a real wheel loader. The approach taken to localisation is to apply sensor fusion, by extended Kalman filtering, to the sensors mounted on the vehicle, including odometric sensors, a Global Positioning System receiver, and an Inertial Measurement Unit. Navigational control is provided through a developed interface that allows high-level software to command the vehicle by specifying drive paths. A path following controller is implemented and evaluated. The main objective was successfully accomplished by integrating the developed localisation and navigation system with the system that existed prior to this thesis. A discussion of how to continue the development concludes the report; the addition of continuous vision feedback is proposed as the next logical advancement.
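The odometry/GPS fusion by extended Kalman filtering can be sketched as one predict-correct cycle, assuming a planar unicycle motion model and a GPS fix that observes position directly. This is a generic EKF sketch, not the thesis implementation; all names and noise values are hypothetical.

```python
import math

def mat_mul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_T(A):
    return [list(r) for r in zip(*A)]

def inv2(M):
    """Inverse of a 2x2 matrix (the innovation covariance here is 2x2)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def ekf_step(x, P, u, z, q, r, dt=0.1):
    """One EKF cycle: predict with odometry, correct with a GPS fix.

    State x = [px, py, heading]; control u = (speed, yaw_rate);
    measurement z = (gps_x, gps_y); q, r are scalar noise variances.
    """
    px, py, th = x
    v, w = u
    # Predict through the nonlinear unicycle motion model.
    x_pred = [px + v * math.cos(th) * dt,
              py + v * math.sin(th) * dt,
              th + w * dt]
    # Jacobian of the motion model, evaluated at the current state.
    F = [[1.0, 0.0, -v * math.sin(th) * dt],
         [0.0, 1.0,  v * math.cos(th) * dt],
         [0.0, 0.0, 1.0]]
    Q = [[q if i == j else 0.0 for j in range(3)] for i in range(3)]
    P_pred = mat_add(mat_mul(mat_mul(F, P), mat_T(F)), Q)
    # GPS observes position directly: H picks out px and py.
    H = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
    y = [z[0] - x_pred[0], z[1] - x_pred[1]]            # innovation
    S = mat_add(mat_mul(mat_mul(H, P_pred), mat_T(H)), [[r, 0.0], [0.0, r]])
    K = mat_mul(mat_mul(P_pred, mat_T(H)), inv2(S))     # Kalman gain (3x2)
    x_new = [x_pred[i] + K[i][0] * y[0] + K[i][1] * y[1] for i in range(3)]
    KH = mat_mul(K, H)
    I_KH = [[(1.0 if i == j else 0.0) - KH[i][j] for j in range(3)] for i in range(3)]
    P_new = mat_mul(I_KH, P_pred)
    return x_new, P_new
```

A real system would add the IMU as a further measurement (or as an input to the prediction step), but the predict/correct structure stays the same.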

UKF and EKF with time dependent measurement and model uncertainties for state estimation in heavy duty diesel engines

Berggren, Henrik, Melin, Martin January 2011 (has links)
This thesis addresses the continuous challenge of decreasing emissions, sensor costs, and fuel consumption in diesel engines. To reach higher goals in engine efficiency and environmental sustainability, the prediction of engine states is essential, owing to their importance in engine control and diagnosis. Model output is improved with the help of sensors, advanced mathematics, and nonlinear Kalman filtering. The task consists of constructing nonlinear Kalman filters and adaptively weighting measurements against model output to increase estimation accuracy. The thesis shows how to improve estimates by nonlinear Kalman filtering and how to obtain additional information that can be used to achieve better accuracy when a sensor fails, or to replace existing sensors. The best-performing Kalman filter shows a 75% decrease in root mean square error compared with raw model output.
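The idea of time-dependent measurement uncertainty can be sketched with a scalar Kalman filter whose measurement variance is inflated whenever the sensor is deemed faulty, so that the model-based prediction dominates the estimate. This is a minimal illustrative sketch, not the thesis filters; all names and values are hypothetical.

```python
def adaptive_kalman(zs, rs, x0=0.0, q=0.01):
    """Scalar Kalman filter with a time-dependent measurement variance.

    zs: measurements; rs: per-sample measurement variances. A sample from
    a sensor judged unreliable gets a very large variance, so the update
    effectively ignores it and the model prediction carries the estimate.
    """
    x, p = x0, 1.0        # state estimate and its variance
    out = []
    for z, r in zip(zs, rs):
        p += q            # predict: state modelled as a slow random walk
        k = p / (p + r)   # gain shrinks toward 0 as r grows
        x += k * (z - x)  # correct with the (possibly distrusted) measurement
        p *= 1.0 - k
        out.append(x)
    return out
```

The same weighting logic carries over to the vector-valued EKF/UKF case, where the measurement covariance matrix is inflated per sensor channel.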

Investigations in Tracking and Colour Classification / Undersökningar inom följning och färgklassificering

Moe, Anders January 1998 (has links)
In this report, three main problems are considered. The first is how to filter position data of vehicles; to do so, the vehicles have to be tracked, which is done with Kalman filters. The second is how to control a camera to keep a vehicle in the centre of the image under three different conditions; this is mainly solved with a Kalman filter. The last is how to use the colour of the vehicles to make their classification more robust. Some suggestions on how this might be done are given, but no fully satisfactory method has been found.
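The position-filtering step can be sketched with an alpha-beta tracker, the fixed-gain special case of the constant-velocity Kalman filter. This is a generic sketch, not the report's actual filter; the gains and names are hypothetical.

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.005):
    """Smooth noisy position measurements with an alpha-beta tracker.

    The tracker keeps a position/velocity estimate, predicts one step
    ahead, and corrects both states with fixed fractions of the residual.
    """
    x, v = measurements[0], 0.0      # initial position and velocity estimates
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt          # predict with the constant-velocity model
        r = z - x_pred               # residual between measurement and prediction
        x = x_pred + alpha * r       # correct position
        v = v + (beta / dt) * r      # correct velocity
        estimates.append(x)
    return estimates
```

A full Kalman filter additionally adapts the gains from the noise covariances, which matters when measurement quality varies over time.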

Design and implementation of temporal filtering and other data fusion algorithms to enhance the accuracy of a real time radio location tracking system

Malik, Zohaib Mansoor January 2012 (has links)
A general automotive navigation system is a satellite navigation system designed for use in automobiles. It typically uses GPS to acquire position data and locate the user on a road in the unit's map database. Recent improvements in the performance of small, lightweight micro-machined electromechanical systems (MEMS) inertial sensors have made the application of inertial techniques to such problems possible, which has led to increased interest in inertial navigation. In location tracking systems, sensors are used either individually or in conjunction, as in data fusion. They nevertheless remain noisy, so there is a need to gather as much data as possible and then build an efficient system that removes the noise and provides a better estimate. The task of this thesis work was to take data from two sensors and use an estimation technique to provide an accurate estimate of the true location. The proposed sensors were an accelerometer and a GPS device; this thesis, however, deals with the accelerometer and with one estimation scheme, the Kalman filter. The report presents an insight into both proposed sensors and different estimation techniques. Within the scope of the work, the task was performed using the simulation software MATLAB, and the Kalman filter's efficiency was examined under different noise levels.
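Examining a Kalman filter under different noise levels can be sketched with a scalar filter applied to a noisy constant signal; rerunning with other values of the measurement variance `r` probes different noise assumptions. This is a generic sketch, not the thesis's MATLAB code; all names and values are hypothetical.

```python
import random

def kalman_1d(zs, q=1e-4, r=1.0):
    """Scalar Kalman filter for a nearly constant state observed in noise."""
    x, p = zs[0], 1.0
    estimates = []
    for z in zs:
        p += q            # predict: state modelled as a slow random walk
        k = p / (p + r)   # Kalman gain: trust placed in the new measurement
        x += k * (z - x)  # correct with the innovation
        p *= 1.0 - k
        estimates.append(x)
    return estimates

def rmse(xs, truth):
    """Root mean square error of a sequence against a constant truth."""
    return (sum((x - truth) ** 2 for x in xs) / len(xs)) ** 0.5
```

Comparing the RMSE of the filtered sequence against that of the raw measurements quantifies the filter's benefit at a given noise level.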

A Framework for Nonlinear Filtering in MATLAB

Rosén, Jakob January 2005 (has links)
The objective of this thesis is to provide a MATLAB framework for nonlinear filtering in general, and particle filtering in particular. This is done using the object-oriented programming paradigm, resulting in truly expandable code. Three types of discrete nonlinear state-space models are supported by default, as well as three filter algorithms: the Extended Kalman Filter and the SIS and SIR particle filters. Symbolic expressions are differentiated automatically, which allows for convenient EKF filtering. A graphical user interface is also provided to make the process of filtering even more convenient. By implementing a specified interface, programming new classes for use within the framework is easy, and guidelines for this are presented.
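The SIR (bootstrap) particle filter named above can be sketched for a one-dimensional random-walk model observed in Gaussian noise. This is a generic Python sketch of the algorithm, not the MATLAB framework itself; all names and values are hypothetical.

```python
import math
import random

def sir_filter(zs, n=500, q=0.5, r=0.5):
    """Bootstrap (SIR) particle filter for a 1-D random-walk state
    observed in additive Gaussian noise with standard deviation r."""
    particles = [random.gauss(zs[0], 1.0) for _ in range(n)]
    estimates = []
    for z in zs:
        # Importance sampling: propagate each particle through the process model.
        particles = [p + random.gauss(0.0, q) for p in particles]
        # Weight each particle by the measurement likelihood.
        weights = [math.exp(-0.5 * ((z - p) / r) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Posterior-mean estimate before resampling.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Resampling: draw a new, equally weighted particle set.
        cdf, acc = [], 0.0
        for w in weights:
            acc += w
            cdf.append(acc)
        positions = [(i + random.random()) / n for i in range(n)]
        new_particles, i = [], 0
        for pos in positions:           # positions are sorted, so one pass suffices
            while i < n - 1 and cdf[i] < pos:
                i += 1
            new_particles.append(particles[i])
        particles = new_particles
    return estimates
```

The SIS variant omits the resampling step and instead carries the weights forward, which eventually degenerates; resampling is what keeps SIR stable over long runs.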
