161

Automating condition monitoring of vegetation on railway trackbeds and embankments

Nyberg, Roger Gote January 2015 (has links)
Vegetation growing on railway trackbeds and embankments presents potential problems. The presence of vegetation threatens the safety of personnel inspecting the railway infrastructure. In addition, vegetation growth clogs the ballast and results in inadequate track drainage, which in turn could lead to the collapse of the railway embankment. Assessing vegetation within the realm of railway maintenance is mainly carried out manually by making visual inspections along the track. This is done either on-site or by watching videos recorded by maintenance vehicles mainly operated by the national railway administrative body. A need for the automated detection and characterisation of vegetation on railways (a subset of vegetation control/management) has been identified in collaboration with local railway maintenance subcontractors and Trafikverket, the Swedish Transport Administration (STA). The latter is responsible for long-term planning of the transport system for all types of traffic, as well as for the building, operation and maintenance of public roads and railways. The purpose of this research project was to investigate how vegetation can be measured and quantified by human raters and how machine vision can automate the same process. Data were acquired at railway trackbeds and embankments during field measurement experiments. All field data (such as images) in this thesis work were acquired on operational, lightly trafficked railway tracks, mostly trafficked by goods trains. Data were also generated by letting (human) raters conduct visual estimates of plant cover and/or count the number of plants, either on-site or in-house by making visual estimates of the images acquired from the field experiments. Later, the degree of reliability of the (human) raters' visual estimates was investigated and compared against machine vision algorithms. The overall results of the investigations involving human raters showed that their estimates were inconsistent and therefore unreliable. As a result of the exploration of machine vision, computational methods and algorithms enabling automatic detection and characterisation of vegetation along railways were developed. The results achieved in the current work have shown that the use of image data for detecting vegetation is indeed possible and that such results could form the base for decisions regarding vegetation control. The machine vision algorithm which quantifies the vegetation cover was able to process 98% of the image data. Investigations of classifying plants from images were conducted in order to recognise the species. The classification accuracy was 95%. Objective measurements such as the ones proposed in this thesis offer easy access to the measurements for all the involved parties and make the subcontracting process easier, i.e., both the subcontractors and the national railway administration are given the same reference framework concerning vegetation before signing a contract, which can then be cross-checked post maintenance. A very important issue which comes with an increasing ability to recognise species is the maintenance of biological diversity. Biological diversity along the trackbeds and embankments can be mapped, and maintained, through better and more robust monitoring procedures. Continuously monitoring the state of vegetation along railways is highly recommended in order to identify the need for maintenance actions and, in addition, to keep track of biodiversity.
The computational methods and algorithms developed form the foundation of an automatic inspection system capable of objectively supporting, or replacing, manual inspections.
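As a rough illustration of the kind of cover quantification described above (not the thesis' actual algorithm, which is not reproduced here), the sketch below estimates vegetation cover in an RGB trackbed image using a simple excess-green index threshold; the cut-off value and the synthetic image are assumptions.

```python
import numpy as np

def vegetation_cover(rgb, threshold=0.05):
    """Return the fraction of pixels classified as vegetation.

    rgb: H x W x 3 array of floats in [0, 1]; threshold is a hypothetical
    excess-green cut-off separating plants from ballast and sleepers.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-8                         # avoid division by zero
    exg = 2 * g / total - r / total - b / total      # excess-green index per pixel
    return float((exg > threshold).mean())           # proportion of 'plant' pixels

# Synthetic example: half green vegetation, half grey ballast -> cover of about 50%
img = np.zeros((100, 100, 3))
img[:, :50] = [0.2, 0.6, 0.2]
img[:, 50:] = [0.5, 0.5, 0.5]
print(f"Estimated vegetation cover: {vegetation_cover(img):.0%}")
```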
162

Wiki-Mediated Collaborative Writing (WMCW) : an investigation of learners' perceptions and the impact of WMCW on preparatory year medical students studying English language in a university in Saudi Arabia

Al Khateeb, Ahmed January 2014 (has links)
Many learners of English as a second or foreign language at university, especially preparatory year students, in Saudi Arabia and elsewhere struggle to achieve a satisfactory level of English language writing. Writing in English with control of accurate mechanics of writing, vocabulary and syntax, a logical flow of ideas and a clear structure of organisation and coherence is a condition for students' academic success and vital for effective written communication. Despite its importance, the majority of such learners fail to meet these requirements and have difficulties in composing texts with a logical sequence of ideas and persuasive content (Roberts and Cimasko, 2008). Part of this problem is said to occur because many writing instructors still follow traditional teaching methodologies such as the grammar-translation method and the use of repetitive exercises. Such practices may seem demotivating to many learners, particularly the young generation of learner writers. However, there are a number of emerging technologies such as social networking tools (e.g. wikis) which, if included in normal classes, can help and are therefore relevant. Many such tools utilise writing and written messages. There is now a mismatch between what learners do in the traditional class and what they actually spend most of their time on outside class (web 2.0 technologies). A compromise between the two environments, formal (in class) and informal (outside class), could offer solutions. The current study aimed to fill a gap in the research by addressing the specific problems related to learning writing. It suggests that a process-oriented wiki-mediated collaborative writing (PWMCW) approach can assist learners in practising writing in a second/foreign language. The research also aimed to provide a formal learning setting for writing outside the classroom, to train ESL/EFL learner writers to target a new audience other than their instructor. In this way, they will learn to develop their abilities to share knowledge and to respond to peers' and their own feedback. The study addressed three main questions (eight sub-questions): to explore how the students perceive the PWMCW, how the learner writers process it and how it impacts on their collaborative and individual texts. The study takes a quasi-experimental case study design (a single pre- and post-experimental group) in order to contribute to the continuity of development of learner writers regardless of place-related restrictions (Green et al., 2011). It was carried out with a mixed-methods research design. The quantitative analysis provided robust statistical operations to identify the significance level for certain issues, e.g. e-feedback, authentic tasks and peer interaction. The qualitative analysis showed how collaborative planning and revision are achieved during the PWMCW. The data were collected from pre- and post-questionnaires, initial and follow-up focus groups, delayed interviews, wiki-based contributions and samples of written texts. Purposive sampling was applied, and a group of university-level, preparatory year language learners were chosen in one of the universities in Saudi Arabia. This procedure was adopted to ensure that writing can be socially processed in an online learning environment. The findings revealed significant and insignificant changes in the perceptions of the learners, along with emerging specific themes which contributed to understanding the topic of the PWMCW.
The findings also explored the nature of how the collaborative writers worked together to establish a good start for better written texts, by emphasising collaborative planning and collaborative revision. Finally, the findings showed the impact of the PWMCW on the texts produced collaboratively (those that used collaborative planning and collaborative revision) and individually (those texts produced by the individual learners before and after the course).
163

Robust, scalable, and practical algorithms for recommender systems

Ghazanfar, Mustansar Ali January 2012 (has links)
The purpose of recommender systems is to filter information unseen by a user to predict whether a user would like a given item. Making effective recommendations from a domain consisting of millions of ratings is a major research challenge in the application of machine learning and data mining. A number of approaches have been proposed to solve the recommendation problem, where the main motivation is to increase the accuracy of the recommendations while ignoring other design objectives such as scalability, sparsity and imbalanced dataset problems, cold-start problems, and long-tail problems. The aim of this thesis is to develop recommendation algorithms that satisfy the aforementioned design objectives, making the recommendation generation techniques applicable to a wider range of practical situations and real-world scenarios. With this in mind, in the first half of the thesis, we propose novel hybrid recommendation algorithms that give accurate results and eliminate some of the known problems with recommender systems. More specifically, we propose a novel switching hybrid recommendation framework that combines Collaborative Filtering (CF) with a content-based filtering algorithm. Our experiments show that the performance of our algorithm is better than (or comparable to) the other hybrid recommendation approaches available in the literature. While reducing the dimensions of the dataset by Singular Value Decomposition (SVD), prior to applying CF, we discover that the SVD-based CF fails to produce reliable recommendations for some datasets. After further investigation, we find out that the SVD-based recommendations depend on the imputation methods used to approximate the missing values in the user-item rating matrix. We propose various missing value imputation methods, which exhibit much superior accuracy and performance compared to the traditional missing value imputation method (item average). Furthermore, we show how the gray-sheep users problem associated with a recommender system can effectively be solved using the K-means clustering algorithm. After analysing the effect of different centroid selection approaches and distance measures in the K-means clustering algorithm, we demonstrate how the gray-sheep users in a recommender system can be identified by treating them as an outlier problem. We demonstrate that the performance (accuracy and coverage) of the CF-based algorithms suffers in the case of gray-sheep users. We propose a hybrid recommendation algorithm to solve the gray-sheep users problem. In the second half of the thesis, we propose a new class of kernel mapping recommender system methods that we call KMR for solving the recommendation problem. The proposed methods find the multi-linear mapping between two vector spaces based on the structure-learning technique. We propose the user- and item-based versions of the KMR algorithms and offer various ways to combine them. We report results of an extensive evaluation conducted on five different datasets under various recommendation conditions. Our empirical study shows that the proposed algorithms offer state-of-the-art performance and provide robust performance under all conditions. Furthermore, our algorithms are quite flexible as they can easily incorporate more information (ratings, demographics, features, and contextual information) in the form of kernels; moreover, these kernels can be added/multiplied. We then adapt the KMR algorithm to incorporate new data incrementally.
We offer a new heuristic, namely KMRincr, that can build the model without retraining the whole model from scratch when new data are added to the recommender system, providing significant computation savings. Our final contribution involves adapting the KMR algorithms to build the model on-line. More specifically, we propose a perceptron-type algorithm, namely KMR percept, which is a novel, fast, on-line algorithm for building the model that maintains good accuracy and scales well with the data. We provide the temporal analysis of the KMR percept algorithm. The empirical results reveal that the performance of the KMR percept is comparable to that of the KMR and, furthermore, it overcomes some of the conventional problems with recommender systems.
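To make the imputation point above concrete, here is a small sketch (with invented ratings, not the thesis' experiments) of item-average imputation of a user-item rating matrix followed by a truncated SVD; the number of latent factors k is an assumption.

```python
import numpy as np

R = np.array([            # rows = users, columns = items, 0 = missing rating
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

observed = R > 0
counts = observed.sum(axis=0)
item_means = R.sum(axis=0) / np.maximum(counts, 1)   # item average over observed ratings
R_filled = np.where(observed, R, item_means)         # impute every missing entry

U, s, Vt = np.linalg.svd(R_filled, full_matrices=False)
k = 2                                                # hypothetical number of latent factors
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]           # low-rank approximation = predictions

print(np.round(R_hat, 2))                            # predicted ratings, including former gaps
```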
164

The influence of real-world factors on threat detection performance in airport X-ray screening

Godwin, Hayward James January 2008 (has links)
The visual search task carried out by X-ray screening personnel has begun to be investigated in a number of recent experiments. The goal of the present thesis was, therefore, to extend previous examinations of the factors that may be detrimental to screener performance, to understand those factors in more detail, and to bring those factors to bear upon current models of visual search. It has been argued that screener performance is impaired by searching for infrequent targets (the prevalence effect), by searching for several targets simultaneously (the dual-target cost), and by the tumultuous environment in which screeners work. Over the course of six experiments, these factors, and, in some cases, the interaction between these factors, were examined. Experiments 1, 2 and 3 explored the role that the prevalence effect and the dual-target cost have upon the performance of untrained participants. Experiment 4 revealed that airport screeners are, in fact, vulnerable to both the prevalence effect and the dual-target cost, highlighting the relevance of the present work to those working in an applied environment. Experiment 5 tested the impact of ambient noise upon search performance and the dual-target cost, and found that ambient noise has no deleterious impact. Experiment 6 set the foundation for future research involving the impact of external distractions upon search performance, with the results showing that observers are slowed substantially when conducting even a simple mental arithmetic task in conjunction with a search task. Based on the results from the experiments, it appears that actual screener performance could be improved by increasing the prevalence of ‘dummy’ items, as well as by tasking screeners with searching for only a single target at any one time. Efforts could also be made to reduce sources of external distraction.
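For readers unfamiliar with how detection performance is typically summarised in experiments like these, the sketch below computes the signal-detection sensitivity measure d' from hit and false-alarm counts; the trial counts are invented and the thesis' own analyses are not reproduced here.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' with a log-linear correction so extreme rates stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Invented low-prevalence block: targets present in only 20 of 400 bags
print(round(d_prime(hits=12, misses=8, false_alarms=10, correct_rejections=370), 2))
```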
165

Modelling of quayside logistics problems at container terminals

Luo, Jiabin January 2013 (has links)
Container terminals serve as an interface between marine and land transportation. Since the introduction of containerisation in the 1960s, the number of containers handled worldwide has grown dramatically every year. With increasing containerisation, container terminals are nowadays working at maximum capacity. Therefore, the efficiency of stacking and transporting large numbers of containers to and from the quayside is critical to any container terminal. We have investigated the integration of container-handling equipment (such as quay cranes, yard cranes, automated guided vehicles and straddle carriers) scheduling and container storage allocation problems in two types of container-handling system: one is the automated container terminal, which represents current container terminal development, and the other is the straddle-carrier system, which has been used by most European container terminals. For each type of container terminal, we have studied three integrated problems, respectively considering the container unloading process (during which containers are unloaded from a ship and delivered to the storage yard), the container loading process (during which containers are picked up from the yard and delivered to the quayside to be loaded onto a ship) and the dual-cycle process (unloading and loading of containers simultaneously). Our aims are to determine the optimal schedules of container-handling equipment and to assign optimal yard locations for containers. The objective is to minimise the berth time of the ship, which is the most important factor in evaluating the efficiency of container terminals. We have developed six models for the above problems. Optimal solutions can be obtained for small instances of the problems under investigation; however, large instances are hard to solve optimally in a reasonable time. Therefore, genetic algorithms are designed for each model to solve large problem instances. The computational results show the effectiveness of the proposed models and heuristic approaches in dealing with problems in container terminals.
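As a toy illustration of the heuristic approach (not one of the six models in the thesis), the sketch below uses a small permutation-based genetic algorithm to order container jobs across two quay cranes so that a simple berth-time proxy (the makespan) is minimised; the job times, crane count and GA settings are all assumptions.

```python
import random
random.seed(0)

JOB_TIMES = [4, 2, 7, 3, 5, 6, 1, 8]    # hypothetical handling times per container move
NUM_CRANES = 2                          # hypothetical number of quay cranes

def makespan(order):
    """Berth-time proxy: each job in 'order' goes to whichever crane is free first."""
    cranes = [0.0] * NUM_CRANES
    for job in order:
        i = cranes.index(min(cranes))
        cranes[i] += JOB_TIMES[job]
    return max(cranes)

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the remaining genes from b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    rest = [g for g in b if g not in a[i:j]]
    return rest[:i] + a[i:j] + rest[i:]

def mutate(order, p=0.2):
    if random.random() < p:                       # occasional swap of two positions
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

pop = [random.sample(range(len(JOB_TIMES)), len(JOB_TIMES)) for _ in range(30)]
for _ in range(100):                              # generations
    pop.sort(key=makespan)
    parents = pop[:10]                            # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    pop = parents + children

best = min(pop, key=makespan)
print("best order:", best, "makespan:", makespan(best))
```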
166

Managing a metro rail project to avoid cost overruns

Thomas, John Heulyn January 2009 (has links)
While technical failures remain the most common triggers for overruns in metro projects, the causes have not typically been deficiencies in the underlying engineering principles but in project management. This work involves the complementary use of requirements and risk management processes and real options theory. The Crossrail project provides a case study with a scheme design for an underground station at Farringdon being considered in detail. The requirements process documented in this research is capable of providing an interactive format for managing project requirements and importantly, any changes that are made to them. This is achieved using commercial software (Telelogic DOORS®) and it is shown that this process is effective when working on multidisciplinary metro projects. This process is then expanded to consider the interaction between risks on a project. This is identified as being crucial given the impacts that technical, project and external risks can have on each other. The developed risk process therefore allows the interactions between all risks to be recorded and provides a holistic view of all risks for management purposes. The requirements and risk processes are complemented by a fuzzy logic methodology to evaluate global and elemental risks (such as political or client risks). Over 50 external risk factors which are known to have caused overruns on previous projects are identified and the performance of Crossrail is evaluated against each risk factor by way of a questionnaire circulated to industry professionals. An approach to avoiding cost overruns is demonstrated by the application of real options theory where the chosen design for Farringdon station is developed alongside an alternative design. Real options theory is used to value the cost of implementing the design alternative should it be needed during the project construction cycle due to cost increases and the potential occurrence of major risks. This implementation cost is presented as a fixed cost agreed prior to construction rather than being an added cost to the agreed budget once construction has started. It is proposed that using real options in this context can avoid significant cost overruns by predetermining the value of payments to be made for changing from one design to another. This thesis will show how additions and adjustments to existing processes and the inclusion of real options valuation in the procurement of metro projects can help practitioners avoid cost overruns in a metro rail project.
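To illustrate the real-options idea in the abstract above, the sketch below prices the right to implement the alternative station design at a pre-agreed fixed cost using a textbook one-period binomial (risk-neutral) valuation; every figure is hypothetical and this is not the model developed in the thesis.

```python
def switch_option_value(s0, up, down, strike, risk_free):
    """Value today of the right (not the obligation) to switch designs in one period.

    s0:        today's estimated saving from having the alternative design available
    up, down:  multiplicative factors for that saving if costs rise / fall
    strike:    pre-agreed fixed cost of implementing the alternative design
    risk_free: one-period risk-free rate
    """
    payoff_up = max(s0 * up - strike, 0.0)
    payoff_down = max(s0 * down - strike, 0.0)
    q = (1 + risk_free - down) / (up - down)          # risk-neutral probability of 'costs rise'
    return (q * payoff_up + (1 - q) * payoff_down) / (1 + risk_free)

# Hypothetical figures (in GBP millions): saving 40, agreed switch cost 35, 5% rate
print(round(switch_option_value(s0=40, up=1.3, down=0.8, strike=35, risk_free=0.05), 2))
```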
167

An investigation into information reuse for cost prediction : from needs to a data reuse framework

Jeeson Daniel, Joshua January 2014 (has links)
The need to be able to reuse a wide variety of data that an organisation has created constitutes a part of the challenge known as ‘Broad Data’. The aim of this research was to create a framework that would enable the reuse of broad data while complying with the corporate requirements of data security and privacy. A key requirement in enabling the reuse of broad data is to ensure maximum interoperability among datasets, which in Linked Data depends on the URIs (Uniform Resource Identifiers) that the datasets have in common (i.e. reused). The URIs in linked data can be dereferenced to obtain more information about them from their owners, and hence dereferencing can have a profound impact on making someone reuse a URI. However, the wide variety of vocabulary in broad data means that the provenance and ownership of URIs could be key in promoting their reuse by the data creators. The full potential offered by linked data cannot be realised due to the fundamental way URIs are currently constructed. In part, this is because the World Wide Web (Web) was designed for an open web of documents, not a secure web of data. By making subtle but essential changes to its building blocks, one can change the way data is handled on the Web, thereby creating what has been proposed in this thesis as the World Wide Information web (WWI). The WWI is based on a framework of people and things that are active contributors to the web of data (hereinafter referred to as ‘active things’), identified by URIs. The URI for an active thing is constructed from its path in the organisational stakeholder hierarchy to represent the provenance of ownership. As a result, it becomes easier to reference data held in sparse and heterogeneous resources, to navigate complex organisational structures, and to automatically include the provenance of the data to support trust-based data reuse and an organic growth of linked data. Consequently, a new data retrieval technique referred to as a ‘domino request’ was demonstrated, whereby sparsely located linked data can be reused as though it were from a single source. With the use of a domino request on the WWI web, there is no longer a need to include the name of the organisation itself or to maintain a catalogue of all the data sources to be queried, thus making ‘security by privacy’ on the Web a reality. At the same time, the WWI allows the data owner or its stakeholders to maintain their privacy not only over the source of the data, but also over the provenance of the individual URIs that describe the data. The thesis concludes that the WWI is a suitable framework for broad data reuse and, in addition, demonstrates its application in managing data in the air travel industry, where security by privacy could play a significant role in controlling the flow of data among its ‘internet of things’ that have multiple stakeholders.
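As a purely illustrative sketch of the idea that an active thing's URI can encode its path through the stakeholder hierarchy, the snippet below builds such a URI from a hierarchy path; the namespace, segment names and slug convention are hypothetical and not the thesis' specification.

```python
from urllib.parse import quote

def active_thing_uri(base, hierarchy_path, thing):
    """Build a URI whose path mirrors the stakeholder hierarchy that owns 'thing'.

    The base namespace, segment names and lower-case slug convention are assumptions.
    """
    segments = [quote(seg.lower().replace(" ", "-")) for seg in list(hierarchy_path) + [thing]]
    return base.rstrip("/") + "/" + "/".join(segments)

uri = active_thing_uri(
    "https://example.org",                                   # hypothetical namespace
    ["Airline Group", "Engineering Division", "Cost Team"],  # stakeholder hierarchy (provenance)
    "engine-overhaul-cost-model",
)
print(uri)
# https://example.org/airline-group/engineering-division/cost-team/engine-overhaul-cost-model
```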
168

Enabling security and risk-based operation of container line supply chains under high uncertainties

Riahi, Ramin January 2010 (has links)
Container supply chains are vulnerable to many risks. Vulnerability can be defined as an exposure to serious disturbances arising from the risks within the supply chain as well as the risks external to the supply chain. Vulnerability can also be defined as exposure to serious disturbances arising from a hazard or a threat. Containers are one of the major sources of security concern and have been used, for example, to smuggle illegal immigrants, weapons, and drugs. The consequences of the use of a weapon of mass destruction, or the discovery of such a device in a container, are serious. Estimates suggest that a weapon of mass destruction explosion and the resulting port closure could cost billions of dollars. The cost of container losses as a consequence of serious disturbances arising from hazards is estimated at $500 million per year. The literature review, historical failure data, and statistical analysis of containership accidents from a safety point of view clearly indicate that container cargo damage, machinery failure, collision, grounding, fire/explosion, and contact are the most significant accident categories, with high percentages of occurrence. Another important finding from the literature review is that the most significant basic event contributing to the supply chains' vulnerability is human error. Therefore, firstly, this research makes full use of the advantages of Evidential Reasoning (ER) and further develops and extends Fuzzy Evidential Reasoning (FER) by exploiting a conceptual and sound methodology for the assessment of a seafarer's reliability. Accordingly, control options to enhance seafarers' reliability are suggested. The proposed methodology enables decision makers to measure the reliability of a seafarer before his/her designation to any activities and during his/her seafaring period. Secondly, this research makes full use of the advantages of Bayesian Networks (BNs) and further develops and extends Fuzzy Bayesian Networks (FBNs) and a "symmetric method" by exploiting a conceptual and sound methodology for the assessment of human reliability. Furthermore, an FBN model (i.e. a dependency network), which is capable of illustrating the dependency among the variables, is constructed. By exploiting the proposed FBN model, a general equation for the reduction of human reliability attributable to a person's continuous hours of wakefulness, acute sleep loss and cumulative sleep debt is formulated and tested. A container supply chain includes dozens of stakeholders who can physically come into contact with containers and their contents and who are potentially involved in container trade and transportation. Security-based disruptions can occur at various points along the supply chain. Experience has shown that a limited percentage of inspections, coupled with a targeted approach based on risk analysis, can provide an acceptable security level. Thus, in order not to hamper the logistics process in an intolerable manner, the number of physical checks should be chosen cautiously. Thirdly, a conceptual and sound methodology (i.e. an FBN model) for evaluating a container's security score, based on the importer security filing, shipping documents, ocean or sea carriers' reliability, and the security scores of various commercial operators and premises, is developed. Accordingly, control options to avoid unnecessary delays and security scanning are suggested.
Finally, a decision making model for assessing the security level of a port associated with the ship/port interface, based on the security scores of the ship's cargo containers, is developed. It is further suggested that, rather than scanning all import cargo containers, one realistic way to secure the supply chain, given the lack of information and the number of variables, is to enhance the ocean or sea carriers' reliability through enhancing their ship staff's reliability. Accordingly, a decision making model to analyse the costs and benefits (i.e. a CBA) is developed.
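By way of illustration of evidence combination (the thesis' fuzzy evidential-reasoning model is considerably richer than this), the sketch below applies Dempster's rule to two hypothetical assessments of a seafarer's reliability over the frame {reliable, unreliable}; all mass values are invented.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule for two basic probability assignments (keys: frozensets of hypotheses)."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                     # mass assigned to contradictory evidence
    return {k: v / (1 - conflict) for k, v in combined.items()}

R, U = frozenset({"reliable"}), frozenset({"unreliable"})
THETA = R | U                                     # ignorance: could be either

assessor_1 = {R: 0.6, THETA: 0.4}                 # invented expert judgements
assessor_2 = {R: 0.7, U: 0.1, THETA: 0.2}
print(combine(assessor_1, assessor_2))            # combined belief masses
```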
169

Advanced risk management in offshore terminals and marine ports

Mokhtari, Kambiz January 2011 (has links)
This research aims to propose a Risk Management (RM) framework and develop a generic risk-based model for dealing with potential hazards and risk factors associated with offshore terminals' and marine ports' operations and management. Hazard identification was conducted through an appropriate literature review of the major risk factors of these logistic infrastructures. As a result, in the first phase of this research a Fuzzy Analytical Hierarchy Process (FAHP) method was used for determining the relative weights of the risk factors identified via the literature review. This led to the development of a generic risk-based model which can help related industrial professionals and risk managers assess the risk factors and develop appropriate strategies to take preventive/corrective actions for mitigation purposes, with a view to maintaining efficient offshore terminals' and marine ports' operations and management. In the second phase of the research, the developed risk-based model, incorporating Fuzzy Set Theory (FST), an Evidential Reasoning (ER) approach and the IDS software, was used to evaluate the risk levels of different ports in real situations using a case study. The IDS software, based on an ER approach, was used to aggregate the previously determined relative weights of the risk factors with the new evaluation results of risk levels for the real ports. The third phase of the research made use of Cause and Consequence Analysis (CCA), including Fault Tree Analysis (FTA) and Event Tree Analysis (ETA) under a fuzzy environment, to analyse in detail the most significant risk factors determined from the first phase of the research, using appropriate case studies. In the fourth phase of the research, an individual RM strategy was tailored and implemented for the most significant risk factor identified previously. In the last phase of the research, and in order to complete the RM cycle, the best mitigation strategies were introduced and evaluated in the form of ideal solutions for mitigating the identified risk factors. All methods used in this research are of both a quantitative and qualitative nature. Expert judgements carried out to gather the required information accounted for the majority of the data collected. The proposed RM framework can be a useful method for managers and auditors when conducting their RM programmes in the offshore and marine industries. The novelty of this research can help Quality, Health, Safety, Environment and Security (QHSES) managers, insurers and risk managers in the offshore and marine industries investigate potential hazards more appropriately where there is uncertainty in the data sources. In this research, by considering strategic management approaches to RM development, the proposed RM framework and risk-based model contribute to knowledge by developing and evaluating an effective methodology for future use by RM professionals.
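For orientation, the sketch below derives crisp AHP weights from a single pairwise comparison matrix using the geometric-mean method; the risk factors and judgements are made up, and the thesis' fuzzy AHP extends this with fuzzy pairwise judgements.

```python
import numpy as np

factors = ["security threats", "equipment failure", "human error"]   # invented risk factors
# Pairwise comparison matrix: entry [i][j] = how much more important factor i is than j.
A = np.array([
    [1.0, 3.0, 0.5],
    [1/3, 1.0, 0.25],
    [2.0, 4.0, 1.0],
])

geo_means = A.prod(axis=1) ** (1.0 / A.shape[1])   # row geometric means
weights = geo_means / geo_means.sum()              # normalise so the weights sum to 1

for f, w in zip(factors, weights):
    print(f"{f}: {w:.3f}")
```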
170

A risk based appraisal of maritime regulations in the shipping industry

Karahalios, Hristos January 2009 (has links)
No description available.
