112

Automated Router and Switch Backup

Bjurdelius, Andreas, Bjurdelius, Pierre, Blomqvist, Alexander January 2014 (has links)
Today's companies are growing at a steady pace, and as more and more network devices are added to the network, it becomes very important to keep track of and monitor the status of those devices. Even with the rise of wireless networking, everything still depends on wired connections to supply a continuous connection to the rest of the world.

This thesis explores, tests, and documents the creation of a functional system that automatically backs up the configuration files of network devices, and discusses how to troubleshoot networking problems and maintain a network to keep it in good shape.

Even though many companies take manual backups of router and switch configurations, automating this task should be desirable for most of them. It frees administrators to spend more time helping employees who are experiencing problems, while the automated system eliminates the errors a human can introduce. One could object that it takes away manual work from the employees, but configuration backup is only a small part of the job, and it is important enough that automating it is a good choice for a company. Integrity is ensured by the backups themselves and by the option to view the differences between previous backups and the most recent one.

The three of us worked as a group to perform all tests and write the documentation. After working with a couple of companies, it became clear that well-functioning backup systems for network devices are not as common as they should be; companies that do back up their network devices often do so manually. Given this, it makes sense to use a reliable system with revision handling, so that recent changes to the devices are easy to see.

The results ended up in a working automated backup system for routers and switches. The system runs Debian and connects to all the routers and switches in the network to collect their configuration files with the help of RANCID. The thesis also explains concepts such as disaster recovery and different maintenance models.
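The system the thesis describes builds on Debian and RANCID; purely as an illustration of the same idea, the sketch below shows a minimal Python equivalent that pulls running configurations over SSH and keeps them under revision control with git. The device list, credentials, and repository path are hypothetical placeholders, the `netmiko` library is assumed to be installed, and in the actual setup RANCID's own scheduler would replace all of this.

```python
import subprocess
from pathlib import Path

from netmiko import ConnectHandler  # third-party SSH client for network gear

# Hypothetical inventory; a real deployment would load this from a file.
DEVICES = [
    {"device_type": "cisco_ios", "host": "10.0.0.1", "name": "core-router"},
    {"device_type": "cisco_ios", "host": "10.0.0.2", "name": "access-switch"},
]
REPO = Path("/srv/netbackup")  # existing git repository, one file per device


def backup_device(device: dict) -> None:
    """Fetch the running configuration and write it into the repository."""
    conn = ConnectHandler(
        device_type=device["device_type"],
        host=device["host"],
        username="backup",  # placeholder credentials
        password="secret",
    )
    config = conn.send_command("show running-config")
    conn.disconnect()
    (REPO / f"{device['name']}.cfg").write_text(config)


def commit_changes() -> None:
    """Record a new revision so 'git diff' shows what changed on each device."""
    subprocess.run(["git", "add", "-A"], cwd=REPO, check=True)
    # Commit only if something was actually staged.
    if subprocess.run(["git", "diff", "--cached", "--quiet"], cwd=REPO).returncode:
        subprocess.run(["git", "commit", "-m", "scheduled config backup"],
                       cwd=REPO, check=True)


if __name__ == "__main__":
    for dev in DEVICES:
        backup_device(dev)
    commit_changes()
```

Run nightly from cron, the git history provides exactly the revision handling the thesis argues for: `git diff HEAD~1` shows the most recent configuration change on each device.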
113

Advanced dead reckoning navigation for mobile robots

Banta, Larry Eugene 05 1900 (has links)
No description available.
114

A learning model adaptive estimator for an automated guided vehicle

Lapin, Brett Denton 08 1900 (has links)
No description available.
115

Automated Resolution Selection for Image Segmentation

Al-Qunaieer, Fares January 2014 (has links)
It is well known in image processing in general, and hence in image segmentation in particular, that computational cost increases rapidly with the number and dimensions of the images to be processed. Several fields, such as astronomy, remote sensing, and medical imaging, use very large images, which might also be 3D and/or captured at several frequency bands, all adding to the computational expense. Multiresolution analysis is one method of increasing the efficiency of the segmentation process. One multiresolution approach is the coarse-to-fine segmentation strategy, whereby segmentation starts at a coarse resolution and is then fine-tuned during subsequent steps. Until now, the starting resolution for segmentation has been selected arbitrarily, with no clear selection criteria. The research conducted for this thesis showed that starting image segmentation from different resolutions yields different accuracies and speeds, even for images from the same dataset. An automated method for selecting the resolution for an input image would thus be beneficial. This thesis introduces a framework for selecting the best resolution for image segmentation. First proposed is a measure for defining the best resolution based on user/system criteria, which offers a trade-off between accuracy and time. A learning approach is then described for the selection of the resolution, whereby extracted image features are mapped to the previously determined best resolution. In the learning process, the class (i.e., resolution) distribution is imbalanced, making effective learning from the data difficult. A variant of AdaBoost called RAMOBoost, designed specifically for learning from imbalanced data, is therefore used for the learning-based selection of the best resolution. Two sets of features are used: Local Binary Patterns (LBP) and statistical features. Experiments conducted on four datasets using three different segmentation algorithms show that the resolutions selected through learning enable much faster segmentation than the original resolutions, while retaining at least the original accuracy. For three of the four datasets, the segmentation results obtained with the proposed framework were significantly better than those at the original resolution with respect to both accuracy and time.
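The abstract does not spell out the trade-off measure, so the following sketch only illustrates the shape such a criterion could take: each candidate resolution is scored by a weighted combination of its segmentation accuracy and normalized runtime, with a user-chosen weight `alpha`. The function, weight, and data are assumptions, not the thesis's actual measure.

```python
def resolution_score(accuracy: float, runtime: float, max_runtime: float,
                     alpha: float = 0.7) -> float:
    """Hypothetical accuracy/time trade-off: alpha weights accuracy against
    normalized runtime (alpha=1 means accuracy only, alpha=0 time only)."""
    return alpha * accuracy - (1.0 - alpha) * (runtime / max_runtime)


# Example candidates: (resolution level, measured accuracy, runtime in seconds)
candidates = [(1.00, 0.91, 40.0), (0.50, 0.90, 11.0), (0.25, 0.78, 3.0)]
slowest = max(t for _, _, t in candidates)

best = max(candidates, key=lambda c: resolution_score(c[1], c[2], slowest))
print(f"best starting resolution: {best[0]:.2f}")  # favours 0.50 here
```

In the thesis, this scoring step only defines the training target; the learned model (RAMOBoost over LBP and statistical features) then predicts the best resolution for unseen images without running the segmentation at every resolution.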
116

SHARP: Sustainable Hardware Acceleration for Rapidly-evolving Pre-existing systems.

Beeston, Julie 13 September 2012 (has links)
The goal of this research is to present a framework to accelerate the execution of legacy software systems without having to redesign them or limit future changes. The speedup is accomplished through hardware acceleration, based on a semi-automatic infrastructure that supports design decisions and simulates their impact. Many programs are available for translating code written in C into VHDL (Very High Speed Integrated Circuit Hardware Description Language). What is missing are simpler and more direct strategies for incorporating encapsulatable portions of the code, translating them to VHDL, and allowing the VHDL code and the C code to communicate through a flexible interface. SHARP is a streamlined, easily understood infrastructure that facilitates this process in two phases. In the first phase, the SHARP GUI (an interactive graphical user interface) is used to load a program written in a high-level general-purpose programming language, to scan the code for SHARP POINTs (Portions Only Including Non-interscoping Types) based on user-defined constraints, and then to automatically translate such POINTs to an HDL. Finally, the infrastructure needed to co-execute the updated program is generated. SHARP POINTs have a clearly defined interface and can be used by the SHARP scheduler. In the second phase, the SHARP scheduler allows the SHARP POINTs to run on the chosen reconfigurable hardware, here an FPGA (Field Programmable Gate Array), and to communicate cleanly with the original processor (for the software). The resulting system will be a good (though not necessarily optimal) acceleration of the original software application that is easily maintained as the code continues to develop and evolve. / Graduate
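The abstract names the POINT criterion only informally. As a loose illustration of what a scan for "portions only including non-interscoping types" might look for, the toy Python below flags C functions whose signatures use only fixed-width scalar types (no pointers); that criterion is an assumption for illustration, not SHARP's actual rule.

```python
import re

# Toy stand-in for the POINT scan: flag C functions whose parameter lists use
# only fixed-width scalar types, a rough proxy for data that could cross a
# software/hardware boundary without pointer aliasing.
HW_FRIENDLY = {"int8_t", "int16_t", "int32_t", "uint8_t", "uint16_t",
               "uint32_t", "float", "double"}
FUNC_RE = re.compile(r"(\w+)\s+(\w+)\s*\(([^)]*)\)\s*\{")


def point_candidates(c_source: str) -> list[str]:
    """Return names of functions whose return and parameter types are all
    hardware-friendly scalars."""
    names = []
    for ret, name, params in FUNC_RE.findall(c_source):
        types = [p.strip().split()[0] for p in params.split(",") if p.strip()]
        if ret in HW_FRIENDLY and all(t in HW_FRIENDLY for t in types):
            names.append(name)
    return names


sample = """
int32_t saturate(int32_t x, int32_t lo, int32_t hi) { return x; }
void log_event(char *msg) { }
"""
print(point_candidates(sample))  # ['saturate']
```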
117

Multi-person tracking system for complex outdoor environments

Tanase, Cristina-Madalina January 2015 (has links)
This thesis presents research in the domain of modern video tracking systems and the details of the implementation of such a system. Video surveillance is an area of high interest, and it relies on robust systems that interconnect several critical modules: data acquisition, data processing, background modeling, foreground detection, and multiple object tracking. The present work analyzes different state-of-the-art methods suitable for each module. The emphasis of the thesis is on the background subtraction stage, as the final accuracy and performance of the person tracking depend dramatically on it. The experimental results show the performance of four different foreground detection algorithms, including two variations of self-organizing feature maps for background modeling, a machine learning technique. The undertaken work provides a comprehensive view of the current state of research in foreground detection and multiple object tracking, and offers solutions for common problems that occur when tracking in complex scenes. The chosen data set covers extremely varied and complex outdoor scenes, allowing a detailed study of the appropriate approaches and emphasizing the weaknesses and strengths of each algorithm. The proposed system handles problems such as dynamic backgrounds, illumination changes, camouflage, cast shadows, frequent occlusions, and crowded scenes. The tracker achieves a maximum Multiple Object Tracking Accuracy (MOTA) of 92.5% for the standard video sequence MWT and a minimum of 32.3% for an extremely difficult sequence that challenges every method.
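For reference, the MOTA figures quoted above follow the standard CLEAR-MOT definition, which penalizes misses, false positives, and identity switches against the total number of ground-truth objects across all frames. A direct transcription of that formula:

```python
def mota(misses: list[int], false_positives: list[int],
         id_switches: list[int], gt_objects: list[int]) -> float:
    """CLEAR-MOT accuracy: 1 - (sum of errors) / (sum of ground-truth
    objects), with one entry per frame in each list."""
    errors = sum(misses) + sum(false_positives) + sum(id_switches)
    return 1.0 - errors / sum(gt_objects)


# Tiny worked example over three frames (hypothetical counts):
print(mota(misses=[1, 0, 0], false_positives=[0, 1, 0],
           id_switches=[0, 0, 1], gt_objects=[10, 10, 10]))  # 0.9
```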
118

Monitoring the Generation and Execution of Optimal Plans

Fritz, Christian Wilhelm 24 September 2009 (has links)
In dynamic domains, the state of the world may change in unexpected ways during the generation or execution of plans. Regardless of the cause of such changes, they raise the question of whether they interfere with ongoing planning efforts. Unexpected changes during plan generation may invalidate the current planning effort, while discrepancies between the expected and actual state of the world during execution may render the executing plan invalid or sub-optimal with respect to previously identified planning objectives. In this thesis we develop a general monitoring technique that can be used during both plan generation and plan execution to determine the relevance of unexpected changes, and which supports recovery. This way, time-intensive replanning from scratch in the new and unexpected state can often be avoided. The technique can be applied to a variety of objectives, including monitoring the optimality of plans rather than just their validity. Intuitively, the technique operates in two steps: during planning, the plan is annotated with additional information that is relevant to the achievement of the objective; then, when an unexpected change occurs, this information is used to determine the relevance of the discrepancy with respect to the objective. We substantiate the claim of broad applicability of this relevance-based technique by developing four concrete applications: generating optimal plans despite frequent, unexpected changes to the initial state of the world, monitoring plan optimality during execution, monitoring the execution of near-optimal policies in stochastic domains, and monitoring the generation and execution of plans with procedural hard constraints. In all cases, we use the formal notion of regression to identify what is relevant for achieving the objective. We prove the soundness of these concrete approaches and present empirical results demonstrating that in some contexts our technique gains orders-of-magnitude speed-ups over replanning from scratch.
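As a rough sketch of the two-step idea, assuming a simple state-variable representation (the thesis works with formal regression, which this does not reproduce): each plan step carries the set of fluents its annotated condition depends on, and an unexpected change triggers re-evaluation only when it touches one of those fluents. All names and the example plan are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

State = dict[str, object]


@dataclass
class AnnotatedStep:
    action: str
    # Condition that must hold for the remaining plan to stay valid/optimal,
    # plus the fluents that condition mentions (the planning-time annotation).
    condition: Callable[[State], bool]
    depends_on: frozenset[str]


def change_is_relevant(plan: list[AnnotatedStep], changed: str) -> bool:
    """Step 2 of the technique: a discrepancy matters only if the changed
    fluent appears in some remaining step's annotated condition."""
    return any(changed in step.depends_on for step in plan)


def still_ok(plan: list[AnnotatedStep], state: State, changed: str) -> bool:
    if not change_is_relevant(plan, changed):
        return True  # irrelevant change: no replanning needed
    return all(step.condition(state)
               for step in plan if changed in step.depends_on)


# Hypothetical two-step delivery plan:
plan = [
    AnnotatedStep("drive(depot,client)",
                  lambda s: s["fuel"] >= 5, frozenset({"fuel"})),
    AnnotatedStep("deliver(pkg)",
                  lambda s: s["holding"] == "pkg", frozenset({"holding"})),
]
print(still_ok(plan, {"fuel": 7, "holding": "pkg"}, changed="fuel"))     # True
print(still_ok(plan, {"fuel": 7, "holding": "pkg"}, changed="weather"))  # True: irrelevant
```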
119

Surface Profiling the Sanding Process of Dry Wall on Construction

Alex, Dony Cherian 06 1900 (has links)
The growing interest in the industrialization of the construction process promotes opportunities for automation. Automation improves quality and productivity while reducing workers' exposure to hazardous work environments. The integration of robotics into interior finishing work, such as the sanding and painting of drywall, is a relatively new concept. Progressing to a stage where fully autonomous robots perform interior finishing requires intermediate steps, namely surface profiling. This thesis describes a theoretical concept of shadow profilometry for profiling the surface of an installed drywall. A shadow is cast over the area under consideration, and the shadow profile is captured as a 2D image by a camera. Digital image processing techniques are used to identify regions that deviate from a flat surface. The methodology discussed in this research was tested on a virtual system, and the results were found to be encouraging. / Construction Engineering and Management
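A minimal sketch of the image-processing side, assuming OpenCV and an input image in which the cast shadow is darker than the lit drywall (the file name and tolerance are placeholders, not the thesis's virtual test setup): the shadow boundary is traced per column, and columns that stray from a fitted straight line are flagged as deviations from flatness.

```python
import cv2
import numpy as np

# Placeholder input: grayscale image of a drywall section with a cast shadow.
gray = cv2.imread("drywall_shadow.png", cv2.IMREAD_GRAYSCALE)

# Separate shadow from lit surface (Otsu picks the threshold automatically).
_, shadow = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# For each column, take the first shadow pixel from the top as the boundary.
boundary = shadow.argmax(axis=0).astype(float)

# On a flat wall the boundary is a straight line; fit one and inspect residuals.
cols = np.arange(boundary.size)
slope, intercept = np.polyfit(cols, boundary, deg=1)
residual = boundary - (slope * cols + intercept)

# Columns whose boundary deviates by more than a (hypothetical) 3-pixel
# tolerance indicate bumps or depressions needing more sanding.
defect_cols = np.flatnonzero(np.abs(residual) > 3.0)
print(f"{defect_cols.size} columns deviate from a flat profile")
```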
120

Automated spatial information retrieval and visualisation of spatial data

Walker, Arron R. January 2007 (has links)
An increasing amount of freely available Geographic Information System (GIS) data on the Internet has stimulated recent research into Spatial Information Retrieval (SIR). Typically, SIR treats the problem of retrieving spatial data on a dataset-by-dataset basis. In practice, however, GIS datasets are generally not analysed in isolation: more often than not, multiple datasets are required to create a map for a particular analysis task. With current SIR techniques, each dataset is retrieved one by one using traditional retrieval methods and manually added to the map. To automate map creation, the traditional SIR paradigm of matching a query to a single dataset type must be extended to include discovering relationships between different dataset types. This thesis presents a Bayesian inference retrieval framework that incorporates expert knowledge in order to retrieve all relevant datasets and automatically create a map from an initial user query. The framework consists of a Bayesian network that utilises causal relationships between GIS datasets. A series of Bayesian learning algorithms is presented that automatically discovers these causal linkages from historic expert knowledge about GIS datasets. The new retrieval model improves support for complex and vague queries through the discovered dataset relationships. In addition, the framework learns which datasets are best suited to particular query inputs through feedback supplied by the user. The framework is evaluated using a test set of queries and responses, measuring the performance of the new algorithms against conventional ones. This contribution will increase the performance and efficiency of knowledge extraction from GIS by allowing users to focus on interpreting data instead of on finding which data are relevant to their analysis, and it will help GIS reach non-technical users.
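As an illustration only (the probabilities and dataset names are invented, and the thesis's learned Bayesian network is far richer than this threshold expansion): given learned conditional probabilities that one dataset type is needed when another has matched the query, a retrieval pass can expand the initially matched dataset into the full set of layers the map needs.

```python
# Hypothetical learned linkages: P(also need B | query matched A).
CO_RETRIEVAL = {
    "flood_extent": {"elevation": 0.9, "river_network": 0.8, "land_use": 0.4},
    "river_network": {"elevation": 0.7},
}


def expand(query_match: str, threshold: float = 0.5) -> set[str]:
    """Pull in every dataset whose conditional probability given an already
    selected dataset exceeds the threshold, transitively."""
    selected, frontier = {query_match}, [query_match]
    while frontier:
        current = frontier.pop()
        for related, prob in CO_RETRIEVAL.get(current, {}).items():
            if prob >= threshold and related not in selected:
                selected.add(related)
                frontier.append(related)
    return selected


# A query matching "flood_extent" drags in the layers a flood map needs:
print(sorted(expand("flood_extent")))
# ['elevation', 'flood_extent', 'river_network']
```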
