241

Efficient algorithms for optimal matching problems under preferences

Kwanashie, Augustine January 2015 (has links)
In this thesis we consider efficient algorithms for matching problems involving preferences, i.e., problems where agents may be required to list other agents that they find acceptable in order of preference. In particular we mainly study the Stable Marriage problem (SM), the Hospitals / Residents problem (HR) and the Student / Project Allocation problem (SPA), and some of their variants. In some of these problems the aim is to find a stable matching, that is, one that admits no blocking pair. A blocking pair with respect to a matching is a pair of agents who prefer to be matched to each other rather than to their assigned partners in the matching, if any. We present an Integer Programming (IP) model for the Hospitals / Residents problem with Ties (HRT) and use it to find a maximum cardinality stable matching. We also present results from an empirical evaluation of our model which show it to be scalable with respect to real-world HRT instance sizes. Motivated by the observation that not all blocking pairs that exist in theory will lead to a matching being undermined in practice, we investigate a relaxed stability criterion called social stability, where only pairs of agents with a social relationship have the ability to undermine a matching. This stability concept is studied in instances of the Stable Marriage problem with Incomplete lists (SMI) and in instances of HR. We show that, in the SMI and HR contexts, socially stable matchings can be of varying sizes and the problem of finding a maximum socially stable matching (MAX SMISS and MAX HRSS respectively) is NP-hard, though approximable within 3/2. Furthermore we give polynomial-time algorithms for three special cases of the problem arising from restrictions on the social network graph and the lengths of agents' preference lists. We also consider other optimality criteria with respect to social stability and establish inapproximability bounds for the problems of finding an egalitarian, minimum regret and sex-equal socially stable matching in the SM context. We extend our study of social stability by considering other variants and restrictions of MAX SMISS and MAX HRSS. We present NP-hardness results for MAX SMISS even under certain restrictions on the degree and structure of the social network graph, as well as in the presence of master lists. Other NP-hardness results presented relate to the problem of determining whether a given man-woman pair belongs to a socially stable matching and the problem of determining whether a given man (or woman) is part of at least one socially stable matching. We also consider the Stable Roommates problem with Incomplete lists under Social Stability (a non-bipartite generalisation of SMI under social stability). We observe that the problem of finding a maximum socially stable matching in this context is also NP-hard. We present efficient algorithms for three special cases of the problem arising from restrictions on the social network graph and the lengths of agents' preference lists. These are the cases where (i) there exists a constant number of acquainted pairs, (ii) there exists a constant number of unacquainted pairs, or (iii) each preference list is of length at most 2. We also present algorithmic results for finding matchings in the SPA context that are optimal with respect to profile, which is the vector whose ith component is the number of students assigned to their ith-choice project.
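The stability concept described above can be made concrete with a short check. The following is a minimal, illustrative Python sketch of finding the blocking pairs of a matching in an SMI-style instance; the data layout and names are assumptions made for the example, not code from the thesis.

```python
def prefers(pref_list, a, b):
    """True if `a` appears before `b` on pref_list; an unmatched partner (None)
    is treated as worse than any acceptable partner."""
    if b is None:
        return a in pref_list
    if a not in pref_list:
        return False
    return pref_list.index(a) < pref_list.index(b)

def blocking_pairs(men_prefs, women_prefs, matching):
    """Return all blocking pairs of `matching`, given as {man: woman or None}."""
    partner_of_woman = {w: m for m, w in matching.items() if w is not None}
    blockers = []
    for m, m_prefs in men_prefs.items():
        for w in m_prefs:                       # only mutually acceptable pairs can block
            if matching.get(m) == w:
                continue
            if prefers(m_prefs, w, matching.get(m)) and \
               prefers(women_prefs[w], m, partner_of_woman.get(w)):
                blockers.append((m, w))
    return blockers

# Example: m1 and w1 rank each other first but are not matched together.
men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m1", "m2"], "w2": ["m1", "m2"]}
print(blocking_pairs(men, women, {"m1": "w2", "m2": "w1"}))   # -> [('m1', 'w1')]
```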
We present an efficient algorithm for finding a greedy maximum matching in the SPA context, that is, a maximum matching whose profile is lexicographically maximum. We then show how to adapt this algorithm to find a generous maximum matching, that is, a matching whose reverse profile is lexicographically minimum. We demonstrate how this approach can allow additional constraints, such as lecturer lower quotas, to be handled flexibly. We also present results of empirical evaluations carried out on both real-world and randomly generated datasets. These results demonstrate the scalability of our algorithms as well as some interesting properties of these profile-based optimality criteria. Practical applications of SPA motivate the investigation of certain special cases of the problem. For instance, it is often desirable that the workload on lecturers is evenly distributed (i.e. load balanced). We enforce this by either adding lower quota constraints on the lecturers (which leads to the potential for infeasible problem instances) or adding a load balancing optimisation criterion. We present efficient algorithms in both cases. Another consideration is the fact that certain projects may require a minimum number of students to become viable. This can be handled by enforcing lower quota constraints on the projects (which also leads to the possibility of infeasible problem instances). One technique for handling this infeasibility is the idea of closing projects that do not meet their lower quotas (i.e. leaving such projects completely unassigned). We show that the problem of finding a maximum matching subject to project lower quotas where projects can be closed is NP-hard even under severe restrictions on preference list lengths and project upper and lower quotas. To offset this hardness, we present polynomial-time heuristics that find large feasible matchings in practice. We also present IP models for the SPA variants discussed and show results obtained from an empirical evaluation carried out on both real and randomly generated datasets. These results show that our algorithms and heuristics are scalable and provide good matchings with respect to profile-based optimality.
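To make the profile-based criteria concrete, here is a small illustrative Python sketch (an assumption for exposition, not the thesis algorithm) that computes a matching's profile and compares two profiles in the greedy and generous senses described above.

```python
def profile(matching, prefs, max_rank):
    """matching: {student: project or None}; prefs: {student: ranked list of projects}.
    Returns the vector whose ith entry counts students assigned to their (i+1)th choice."""
    counts = [0] * max_rank
    for student, project in matching.items():
        if project is not None:
            counts[prefs[student].index(project)] += 1
    return counts

def greedier(p1, p2):
    """True if p1 is lexicographically greater than p2 (more first choices, then second, ...)."""
    return p1 > p2                     # Python list comparison is lexicographic

def more_generous(p1, p2):
    """True if p1's reverse profile is lexicographically smaller (fewer worst choices)."""
    return p1[::-1] < p2[::-1]

prefs = {"s1": ["p1", "p2"], "s2": ["p2", "p1"]}
m_a = {"s1": "p1", "s2": "p2"}         # everyone gets a first choice: profile [2, 0]
m_b = {"s1": "p2", "s2": "p1"}         # everyone gets a second choice: profile [0, 2]
print(profile(m_a, prefs, 2), greedier(profile(m_a, prefs, 2), profile(m_b, prefs, 2)))
# -> [2, 0] True
```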
242

The application of multiple modalities to improve home care and reminder systems

Warnock, David January 2014 (has links)
Existing home care technology tends to consist of pre-programmed systems limited to one or two interaction modalities. This can make such systems inaccessible to people with sensory impairments and unable to cope with a dynamic and heterogeneous environment such as the home. This thesis presents research that considers how home care technology can be improved by employing multiple visual, aural, tactile and even olfactory interaction methods. A wide range of modalities was tested to gain a better insight into their properties and merits. That information was used to design and construct Dyna-Cue, a prototype multimodal reminder system. Dyna-Cue was designed to use multiple modalities and to switch between them in real time in order to maintain higher levels of effectiveness and acceptability. The Dyna-Cue prototype was evaluated against other models of reminder delivery and was shown to be an effective and appropriate tool that can help people to manage their time and activities.
243

Profiling a parallel domain specific language using off-the-shelf tools

Al-Saeed, Majed Mohammed Abdullah January 2015 (has links)
Profiling tools are essential for understanding and tuning the performance of both parallel programs and parallel language implementations. Assessing the performance of a program in a language with high-level parallel coordination is often complicated by the layers of abstraction present in the language and its implementation. This thesis investigates whether it is possible to profile parallel Domain Specific Languages (DSLs) using existing host language profiling tools. The key challenge is that the host language tools report the performance of the DSL runtime system (RTS) executing the application rather than the performance of the DSL application itself. The key questions are whether a correct, effective and efficient profiler can be constructed using host language profiling tools, whether it is possible to effectively profile the DSL implementation, and what capabilities are required of the host language profiling tools. The main contribution of this thesis is the development of an execution profiler for the parallel DSL Haskell Distributed Parallel Haskell (HdpH) using host language profiling tools. We show that it is possible to construct a profiler (HdpHProf) to support performance analysis of both DSL applications and the DSL implementation. The implementation uses several new GHC features, including the GHC-Events library and ThreadScope, and develops two new performance analysis tools for HdpH internals: Spark Pool Contention Analysis and Registry Contention Analysis. We present a critical comparative evaluation of the host language profiling tools that we used (GHC-PPS and ThreadScope) against another recent functional profiler, EdenTV, alongside four important imperative profilers. This is the first report on the performance of functional profilers in comparison with well-established, industry-standard imperative profiling technologies. We systematically compare the profilers for usability and data presentation. We found that GHC-PPS performs well in terms of overheads and usability, so using it to profile the DSL is feasible and would not have a significant impact on DSL performance. We validate HdpHProf for functional correctness and measure its performance using six benchmarks. HdpHProf works correctly and can scale to profile HdpH programs running on up to 192 cores of a 32-node Beowulf cluster. We characterise the performance of HdpHProf in terms of profiling data size and profiling execution runtime overhead. The results show that HdpHProf does not alter the behaviour of GHC-PPS and retains low tracing overheads, close to those of the studied functional profilers: 18% on average. They also show a low ratio of HdpH trace events in the GHC-PPS eventlog, less than 3% on average. We show that HdpHProf is effective and efficient to use for performance analysis and tuning of DSL applications. We use HdpHProf to identify performance issues and to tune the thread granularity of six HdpH benchmarks with different parallel paradigms, e.g. divide and conquer, flat data parallel, and nested data parallel. This includes identifying problems such as thread granularity that is too small or too large, a problem size that is too small for the parallel architecture, and synchronisation bottlenecks. We show that HdpHProf is effective and efficient for tuning the parallel DSL implementation. We use the Spark Pool Contention Analysis tool to examine how the spark pool implementation performs when accessed concurrently. We found that appropriate thread granularity can significantly reduce both conflict ratios and conflict durations, by more than 90%. We use the Registry Contention Analysis tool to evaluate three alternative registry implementations. We found that the tools can give a better understanding of how different implementations of the HdpH RTS perform.
244

Ship design with the human factor : evacuation and normal operations modelling in the ship design process

Deere, Steven John January 2012 (has links)
This thesis addresses the modelling of human factors and how they can impact ship design. Two different but related application areas are considered: passenger ship evacuation analysis, and naval vessel normal operations and evacuation analysis. In the first instance, this thesis investigates the impact of the passenger response time distributions specified in current regulations upon evacuation analysis and then recommends a more realistic passenger response time distribution that should be used when performing an evacuation analysis of a passenger RO-RO vessel. This realistic passenger response time distribution is based upon the results of sea trials. The results of this analysis have been adopted by the IMO and form part of the new guideline document, IMO MSC.1/Circ.1238. In addition, this thesis addresses the analysis of the human factors performance of a naval vessel. Naval vessels are built primarily for undertaking assigned missions in times of war and conflict. While the safety of those on board is important, the ability of the vessel to function and complete its assigned mission is of paramount importance. This thesis utilises an evacuation model, maritimeEXODUS, which was extended to incorporate the functionality of modelling non-evacuation scenarios, to assess the human factors performance of a naval vessel during both normal operations and evacuation scenarios. This thesis develops a methodology for simultaneously assessing the human factors performance of both a range of normal operation scenarios and an evacuation scenario on board a naval vessel. The methodology, called the Human Performance Metric (HPM), is discriminating, diagnostic, systematic, transparent and reproducible in nature. This thesis then implements the HPM methodology in the early stages of the design cycle for a new naval vessel. The thesis presents the software modifications required to implement the methodology into the design cycle as well as a demonstration of the new system.
245

Improving patient confidentiality systems in Libya using UK experience

Ajaj, Shaban Al-Furgani January 2012 (has links)
Patient confidentiality has received much attention in recent years because of the rise in the number of confidentiality breach incidents and the need to improve the provision of health services in general. Patient confidentiality is defined as the patients' right to the protection of their personal medical information within health institutions under normal conditions. While literature on the protection of patient confidentiality exists, there has been little or no attempt to use a theoretical model to represent it, and hence with which to appraise the practice of patient confidentiality in health care systems. The main aim of this research study is to contribute to the development of a model for the protection of patient confidentiality in Libya, using experience and evidence from elsewhere, and also to suggest means to improve confidentiality through the application of lessons from the UK health service. The standpoint taken is a pragmatic one, as the focus is on the utility of the proposed model. There are two principal strands to the research: the first concerns the views of experts on the factors that influence patient confidentiality; the second is the development of a System Dynamics Model to represent the flow of patient data and the places where breaches of confidentiality are likely to occur. These two strands are then considered jointly to provide a basis for conclusions and recommendations of particular relevance to Libya (and perhaps more generally). The data used to identify the main factors that affect the practice of patient confidentiality were collected in two stages: a literature review and expert surveys. The first iteration requesting views was sent to experts from Libya, Europe and elsewhere in the field of patient confidentiality, to establish a set of factors that might influence the practice of patient confidentiality. A second iteration followed with selected respondents to rank the relative importance of elements contributing to two factors, trust and ethics, identified in the first expert letter survey. The results from the expert letters indicated that the main factors influencing the practice of patient confidentiality, especially in Libya, were trust, ethics, regulation and technology. The results from the interviews and the focus group showed that the findings of the current research had ecological validity. This is based on the Libyan participants' views, which strongly supported the research results as having the potential to improve Libyan patient confidentiality systems by learning from the UK experience. The responses were used to inform the insights obtained from the 2003 UK NHS model of patient confidentiality, which was developed into an innovative simulation using System Dynamics Modelling (SDM). Quantitative data to populate the model were drawn from NHS statistics. The model was 'validated' through personal interviews and a focus group with individuals who had experience of the practice of patient confidentiality in the Libyan health service. The results of running the SDM model were also compared with known data to provide a check on validity. The proposed SDM model of patient confidentiality was shown to have ecological validity through the views of medical staff and medical records managers in two major UK general hospitals.
The premise was that breaches of patient confidentiality could occur either (i) from human error when patient medical data are handled within the national health services by staff such as frontline medical staff, doctors and nurses, or (ii) at locations where patient notes are kept, where medical records managers and others store patient medical data on IT systems, with varying dynamics and volume. The results obtained from the developed model of patient confidentiality are encouraging; they may assist health service managers to minimise the occurrence of patient confidentiality breaches. The current study therefore proposes a framework and recommendations that can help to improve the protection of patient confidentiality in the Libyan health service and assist in delivering a good quality of health care.
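To illustrate how a stock-and-flow System Dynamics simulation of this kind can be stepped numerically, here is a minimal Python sketch; the structure, rates and variable names are illustrative assumptions and are not drawn from the thesis or from NHS data.

```python
def simulate(weeks, new_records_per_week=1000.0,
             filing_rate=0.8, human_error_rate=0.002, storage_breach_rate=0.0005):
    """Step a toy stock-and-flow model of record handling: records flow from
    frontline handling (stock 1) into IT storage (stock 2); each stage adds
    breach incidents at an assumed per-week rate. All parameters are illustrative."""
    in_handling, stored, breaches = 0.0, 0.0, 0.0
    history = []
    for _ in range(weeks):
        breaches += in_handling * human_error_rate + stored * storage_breach_rate
        filed = in_handling * filing_rate          # flow from handling into storage
        in_handling += new_records_per_week - filed
        stored += filed
        history.append((in_handling, stored, breaches))
    return history

final = simulate(52)[-1]
print(f"after 52 weeks: {final[2]:.1f} cumulative breach incidents (illustrative)")
```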
246

Quantitative studies of the structure and chemistry of materials at the nano- and atomic-scale

Bigatti, Marco January 2015 (has links)
In this thesis electron microscopy was employed to characterise the nanoscale and atomic-scale structure and chemistry of organic and inorganic materials. In chapter 4, the thin film morphology of the organic blend of poly(9,9-dioctylfluorene-co-benzothiadiazole) (commonly referred to as F8BT) and poly[9,9-dioctylfluorene-co-N-(4-butylphenyl)-diphenylamine] (abbreviated as TFB) was investigated, mainly by bright field transmission electron microscopy (BF-TEM). F8BT and TFB are conjugated polymers, which are candidates to replace inorganic semiconductors in many applications because of their simple preparation and processing procedures. The phase separation of the F8BT:TFB blend was investigated at different compositions. Polymer domains were found in the thin film, with sub-micrometer sizes that vary with concentration. The 1:1 weight ratio sample showed sub-micrometer TFB-rich areas in an F8BT matrix, while the 1:4 weight ratio thin film presented F8BT phases, whose areas are mostly below 0.02 μm², in a TFB layer. Since some electronic applications, especially in optoelectronics, show increased efficiency after the addition of quantum dots to the polymer blend, the effect of CdSe quantum dots on the phase separation of the organic blend was investigated together with their effect on the nanoscale morphology. The CdSe quantum dots were found to aggregate in clusters with limited dispersion within the polymer domains, which did not present significant morphology changes as a consequence of quantum dot (QD) addition. The atomic structure and chemistry of the inorganic Ba6−3xNd8+2xTi18O54 microwave ceramic was quantitatively investigated using high resolution scanning transmission electron microscopy (HR-STEM) and electron energy loss spectroscopy (EELS). These materials are an essential part of telecommunication systems; they can be found in components such as resonators and antennas on account of their high permittivity, temperature stability and very low dielectric loss at microwave frequencies. The unit cell was refined with sub-Å precision based on extensive data analysis of HR-STEM images, and the unit cell structure showed no significant changes as a consequence of changes in composition or cooling rate after annealing. Ba was found to substitute preferentially at specific Nd atomic columns in the structure, and these trends apply across the whole composition range. These results were confirmed by comparisons with image simulations and provide a starting point for improved refinements of X-ray data.
247

An evaluation of load sharing algorithms for heterogeneous distributed systems

Leslie, Robert January 1997 (has links)
Distributed systems offer the ability to execute a job at nodes other than the originating one. Load sharing algorithms use this ability to distribute work around the system in order to achieve greater efficiency, which is reflected in substantially reduced response times. In the majority of studies the systems on which load sharing has been evaluated have been homogeneous in nature. This thesis considers load sharing in heterogeneous systems, in which the heterogeneity is exhibited in the processing power of the constituent nodes. Existing algorithms are evaluated and improved ones proposed. Most of the performance analysis is done through simulation. A model of diskless workstations communicating and transferring jobs by Remote Procedure Call is used. All assumptions about the overheads of inter-node communication are based upon measurements made on the university networks. The comparison of algorithms identifies those characteristics that offer improved performance in heterogeneous systems. The level of system information required for transfer decisions is investigated and an optimum found. Judicious use of the collected information via algorithm design is shown to account for much of the improvement. However, detailed examination of algorithm behaviour compared with that of an 'optimum' load sharing scenario reveals that there are occasions when full use of all the available information is not beneficial. Investigations are carried out on the most promising algorithms to assess their adaptability, scalability and stability under a variety of differing conditions. The standard definitions of load balancing and load sharing are shown not to apply when considering heterogeneous systems. To validate the assumptions in the simulation model, a load sharing scenario was implemented on a network of Sun workstations at the University. While the scope of the implementation was somewhat limited by a lack of resources, it does demonstrate the relative ease with which the algorithms can be implemented without alteration of the operating system code or modification at the kernel level.
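As an illustration of the kind of transfer decision such algorithms make, here is a minimal Python sketch of a sender-initiated, threshold-based policy that accounts for node heterogeneity by normalising queue length by processing power. It is an assumed example for exposition, not one of the algorithms evaluated in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    speed: float        # relative processing power
    queue_len: int      # jobs currently queued

    def relative_load(self) -> float:
        # In a heterogeneous system raw queue length is misleading,
        # so normalise by the node's processing power.
        return self.queue_len / self.speed

def choose_target(origin: Node, others: list, threshold: float = 1.5) -> Node:
    """Return the node on which to run the next job (possibly the origin itself)."""
    if origin.relative_load() < threshold:
        return origin                       # run locally, avoiding transfer overhead
    best = min(others, key=Node.relative_load, default=origin)
    return best if best.relative_load() < origin.relative_load() else origin

nodes = [Node("fast", 2.0, 4), Node("slow", 0.5, 1)]
origin = Node("origin", 1.0, 3)
print(choose_target(origin, nodes).name)    # -> "fast" (relative load 2.0 vs 3.0)
```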
248

A new method for generic three dimensional human face modelling for emotional bio-robots

Zhang, Xu January 2012 (has links)
Existing 3D human face modelling methods are confronted with difficulties in applying flexible control over all facial features and in generating a great number of different face models. The gap between existing methods and the requirements of emotional bio-robot applications motivates the creation of a generic 3D human face model. This thesis focuses on proposing and developing two new methods involved in the research of emotional bio-robots: face detection in complex background images based on a skin colour model, and establishment of a generic 3D human face model based on NURBS. The contributions of this thesis are as follows. A new skin colour based face detection method has been proposed and developed. The new method consists of a skin colour model for skin region detection and geometric rules for distinguishing faces from detected regions. Compared to previous methods, the new method achieved a better detection rate of 86.15% and a detection speed of 0.4-1.2 seconds without any training datasets. A generic 3D human face modelling method is proposed and developed. This generic parametric face model provides flexible control over all facial features and can generate various face models for different applications. It includes: the segmentation of the human face into 21 surface features bounded by 34 boundary curves, a feature-based segmentation that enables independent manipulation of different geometrical regions of the face; the NURBS curve face model and NURBS surface face model, both built up based on cubic NURBS reverse computation, whose elements can be manipulated to change the appearance of the models through parameters obtained by NURBS reverse computation; and a new 3D human face modelling method, proposed and implemented based on bi-cubic NURBS through analysing the characteristic features and boundary conditions of NURBS techniques. This model can be manipulated through control points on the NURBS facial features to build specific face models for any kind of appearance and to simulate dynamic facial expressions for applications such as emotional bio-robots, aesthetic surgery, films and games, and crime investigation and prevention.
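To make the NURBS ingredients concrete, here is a minimal, self-contained Python sketch of evaluating a point on a cubic NURBS curve using the Cox-de Boor recursion for the B-spline basis functions. The knot vector, control points and weights are illustrative assumptions; this is not the reverse-computation procedure developed in the thesis.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl_pts, weights, knots, degree=3):
    """Rational combination: sum_i N_{i,p}(u) w_i P_i / sum_i N_{i,p}(u) w_i."""
    num = np.zeros(len(ctrl_pts[0]))
    den = 0.0
    for i, (P, w) in enumerate(zip(ctrl_pts, weights)):
        N = bspline_basis(i, degree, u, knots)
        num += N * w * np.asarray(P, dtype=float)
        den += N * w
    return num / den

# Cubic curve with a clamped knot vector (a Bezier segment when all weights are 1).
ctrl = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
print(nurbs_point(0.5, ctrl, [1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1]))  # -> [0.5  0.75]
```

Moving a control point or changing a weight reshapes only the portion of the curve influenced by that basis function, which is what allows localised control over individual facial features.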
249

Computer simulation of dislocation interaction with radiation-induced obstacles in iron

Grammatikopoulos, Panagiotis January 2009 (has links)
Assessment of candidate materials for fusion power plants provides one of the major structural materials challenges of the coming decades. Computer simulation provides a useful alternative to experiments on real-life irradiated materials. Within the framework of a multi-scale modelling approach, atomic-scale studies by molecular dynamics (MD) and statics (MS) are important, since they enable understanding of atomic interaction mechanisms invisible at coarser scales. Nano-scale defect clusters, such as voids, solute-atom precipitates and dislocation loops, can form in metals irradiated by high-energy atomic particles. Since they are obstacles to dislocation glide, they can affect plasticity, substantially changing the yield and flow stresses and the ductility. In this study, a model for α-Fe developed by Osetsky and Bacon [26] has been used that enables dislocation motion under applied shear strain at various temperatures and strain rates. Three main results were obtained. First, the two interatomic potentials used (A97 [79] and A04 [31]) were assessed with respect to reproducing dislocation properties. Both were in good agreement except for one feature: an unexpected and previously unreported displacement of core atoms along the direction of the dislocation line of a 1/2[111](1-10) edge dislocation was observed for the A97 potential. A connection between this phenomenon and differences in the Peierls stress values for the two potentials was proposed. Second, the interaction of a 1/2[111](1-10) edge dislocation with a number of different configurations of spherical voids and Cu precipitates 2 and 4 nm in diameter was investigated. The defects were centred on, above and below the dislocation glide plane. The mechanisms governing the interactions were analysed. For the first time it was observed that, by interacting with a void, the dislocation can undergo both positive and negative climb, depending on the void position. A bcc to fcc phase transition was observed for the larger precipitates, in agreement with literature findings. Third, the obstacle strength of 1/2‹111› and ‹100› loops was obtained under various conditions and geometries for both potentials. The interactions are sometimes complex, but could be described in terms of conventional dislocation reactions in which the Burgers vector is conserved. The critical resolved shear stress for dislocation breakaway and the fraction of interstitials left behind vary widely. Finally, a mapping of all obstacle strengths was created for the purpose of comparison. ‹100› loops with Burgers vector parallel to the dislocation glide plane and 1/2‹111› loops proved to be strong obstacles. Small voids are stronger obstacles than Cu precipitates of the same size. The complexity of some reactions and the variety of obstacle strengths pose a challenge for the development of continuum models of dislocation behaviour in irradiated iron.
250

PC-based visual simulation of high pressure arc plasma

Zhan, Yiyi January 2011 (has links)
This includes the mass spectroscopic measurement of gases generated in the arc-in-water case, more accurate arc voltage measurement just before current zero, and post-arc current measurement.
