391.
Interactive volume deformation based on model fitting lattices. Xu, Qian. January 2012.
Volume visualization, a relatively new branch of scientific visualization, not only displays the surface features of a model but also enables an intuitive presentation of the internal information of the object. The comprehensive visualization algorithms developed in the last decade have brought challenges such as complex data processing, real-time operation, and application-specific system performance. These challenges are addressed through the research objectives of this thesis. By devising a novel volume deformation pipeline, this thesis explores volume-model operations for complex applications and illustrates the feasibility of the designed system, which was verified by experimental results. The contribution of the programme is demonstrated by verifying the effectiveness of four system design characteristics. Firstly, clustering-based segmentation methods were adopted in the volumetric data processing module of the proposed volume deformation system to manage the complicated structures that often exist in large volume data sets. Secondly, a novel mesh construction method was formulated to optimize the control lattices for the subsequent deformation process. Thirdly, the volume deformation approach devised in this research takes advantage of parameterizing the entire shape-change process. Finally, a GPU-based parallel processing architecture was used to accelerate the calculation of Gaussian sampling in the lattice construction process, the progressive locations of the removed points in the simplification scheme, and the integration of kinetic energy for determining the deformation behaviours.
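The clustering-based segmentation step can be illustrated with a minimal sketch: k-means clustering of voxel intensities. This is a generic illustration of the technique named above, not the thesis's exact implementation; all parameter values are assumed.

```python
import numpy as np

def kmeans_segment(volume, k=3, iters=20):
    """Cluster voxel intensities into k classes: a generic sketch of
    clustering-based volume segmentation (not the thesis's exact method)."""
    vals = volume.reshape(-1, 1).astype(float)
    # Spread initial centroids evenly over the intensity range.
    centroids = np.linspace(vals.min(), vals.max(), k).reshape(-1, 1)
    for _ in range(iters):
        # Assign each voxel to its nearest centroid.
        labels = np.argmin(np.abs(vals - centroids.T), axis=1)
        # Move each centroid to the mean of its assigned voxels.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = vals[labels == j].mean()
    return labels.reshape(volume.shape)

# A tiny synthetic "volume" with three intensity bands plus mild noise.
rng = np.random.default_rng(1)
vol = np.concatenate([np.zeros(100), np.full(100, 0.5), np.ones(100)])
vol = (vol + 0.01 * rng.standard_normal(vol.size)).reshape(10, 30)
seg = kmeans_segment(vol, k=3)
print(len(np.unique(seg)))  # three segments recovered
```

Real volume data would of course be three-dimensional and far larger, which is where the GPU acceleration discussed above becomes relevant.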
392.
Looking at the video/computer games industry: what implications does gender socialisation have for women in counter-stereotypical careers? Carr, Marian. January 2015.
Figures show that the games industry remains a male-dominated occupation. Anecdotally, a link has been proposed between higher male consumption of platform-based games and career choices, though figures also show a higher female-to-male ratio in social networking games. This suggests that the type of gameplay may influence career choice. This study therefore interviews women who have chosen a counter-stereotypical gendered career pathway and who are students in Further Education and Higher Education in the process of making career choices, in order to better understand a potential link between gameplay and career choice. The key findings relate to the positive emotive language used by the participants, a suggested link between gameplay, the concept of a 'gamer' and the choice of the games industry as a career, and serious concerns about abusive online gaming behaviour experienced by females. Adopting a subtle realist approach, this study found that a strong emotive view of games and of games development courses created a quantifiable link to a career. A high level of identifiable traits, traditionally considered both masculine and feminine, suggested that an outmoded view of gender stereotypes also appeared to negatively affect career choice. From the findings, further research is suggested in relation to the experiences of females both within the industry and at secondary school. At a wider level, this study suggests that both games courses and the industry would benefit from incorporating both traditionally masculine and feminine traits as part of developing more effective and inclusive recruitment strategies. Finally, further research is proposed regarding how games and the gaming community relate to females both as characters and as players.
393.
Modelling of the thermal chemical damage caused to carbon fibre composites. Chippendale, Richard. January 2013.
Previous investigations relating to lightning strike damage of Carbon Fibre Composites (CFCs) have assumed that the energy input from a lightning strike is caused by the resistive (Joule) heating due to the current injection and the thermal heat flux from the plasma channel. Inherent within this statement is the assumption that CFCs can be regarded as a perfect resistor. The validity of this assumption has been experimentally investigated within this thesis. This experimental study has concluded that a typical quasi-isotropic CFC panel can be treated as a perfect resistor up to a frequency of at least 10 kHz. By considering the frequency components within a lightning strike current impulse, it is evident that the current impulse leads predominantly to Joule heating. This thesis has experimentally investigated the damage caused to samples of CFC by the different current impulse components which make up a lightning strike. The results from this experiment have shown that the observed damage on the surface is different for each type of current impulse. Furthermore, the damage caused to each sample indicates that, despite exposing only the area of interest, the wandering arc on the surface still plays an important role in distributing the energy input into the CFC and hence in the observed damage. Regardless of the different surface damage caused by the different current impulses, the resultant damage from each component current impulse shows polymer degradation with fracturing and lifting of the carbon fibres. This thesis has then attempted to numerically investigate the physical processes which lead to this lightning strike damage. Within the current state of the art there is no proposed method to numerically represent the lightning strike arc attachment and the subsequent arc wandering.
Therefore, as arc wandering plays an important role in causing the observed damage, it is not possible to numerically model the lightning strike damage directly. An analogous damage mechanism is therefore needed so that the lightning strike damage processes can be numerically investigated. This thesis has demonstrated that damage caused by laser ablation represents a similar set of physical processes to those which cause the lightning strike current impulse damage, albeit without any additional electrical processes. Within the numerical model, the CFC is represented through a homogenisation approach, and so the relevance and accuracy of a series of analytical methods for predicting the bulk thermal and electrical conductivity of CFCs have been investigated. This study has shown that the electrical conductivity is dominated by percolation effects due to fibre-to-fibre contacts. Owing to the more comparable thermal conductivities of the polymer and the fibres, the bulk thermal conductivity is accurately predicted by an extension of the Eshelby method. This extension allows the bulk conductivity of a composite system with more than two components to be calculated. Having developed a bespoke thermo-chemical degradation model, a series of validation studies have been conducted. First, the homogenisation approach is validated by numerically investigating the electrical conduction through a two-layer panel of CFC. These numerical predictions showed initially unexpected current flow patterns. These predictions have been validated through an experimental study, which in turn validates the application of the homogenisation approach. The novelty of the proposed model is the inclusion of the transport of produced gases through the decomposing material. The thermo-chemical degradation model predicts that the internal gas pressure inside the decomposing material can reach three orders of magnitude greater than atmospheric pressure.
This explains the delaminations and fibre cracking observed within the laser-ablated damage samples. The numerical predictions show that the inclusion of thermal gas transport has minimal impact on the predicted thermal chemical damage. The numerical predictions have been further validated against the previously obtained laser ablation results. The predicted polymer degradation shows reasonable agreement with the experimentally observed ablation damage. This, along with the previous discussions, validates the physical processes implemented within the thermo-chemical degradation model to investigate thermal chemical lightning strike damage.
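Thermo-chemical degradation of the kind described above is conventionally modelled with a single-step Arrhenius decomposition law; a minimal sketch follows. The rate constants below are illustrative placeholders, not values from the thesis.

```python
import numpy as np

# Single-step Arrhenius polymer decomposition: d(alpha)/dt = A*exp(-E/(R*T))*(1-alpha),
# where alpha is the degree of decomposition (0 = virgin, 1 = fully charred).
A = 1.0e10   # pre-exponential factor, 1/s (assumed)
E = 1.5e5    # activation energy, J/mol (assumed)
R = 8.314    # universal gas constant, J/(mol K)

def decompose(T, t_end=10.0, dt=1e-3):
    """Explicitly integrate the decomposition ODE at a fixed temperature T (K)."""
    alpha = 0.0
    for _ in range(int(t_end / dt)):
        rate = A * np.exp(-E / (R * T)) * (1.0 - alpha)
        alpha = min(1.0, alpha + rate * dt)
    return alpha

# Decomposition is strongly temperature-sensitive:
print(decompose(500.0))   # negligible degradation at 500 K
print(decompose(800.0))   # near-complete decomposition at 800 K
```

A full model like the one described in the abstract would couple this reaction term to heat conduction and to the transport of the gases produced, which is where the predicted internal pressure rise comes from.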
394.
Mathematical modelling of evaporation mechanisms and instabilities in cryogenic liquids. Thomas, Angeli Elizabeth. January 1999.
In this thesis we propose a model for laminar natural convection within a mixture of two cryogenic fluids with preferential evaporation. This full model was developed after a number of smaller models of the behaviour of the fluid surface had been examined. Throughout, we make careful comparison between our analytical and computational work and existing experimental and theoretical results. The coupled differential equations for the main model were solved using an explicit upwind scheme for the vorticity-transport, temperature and concentration equations, and the multigrid method for the Poisson equation. From plots of the evolution of the system, it is found that convection becomes stronger when preferential evaporation is included. This new model demonstrates how to include preferential evaporation and can be applied to other fluid systems.
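The explicit upwind scheme mentioned above can be sketched for the simplest case, 1-D linear advection; grid size, speed and time step here are illustrative, not the thesis's parameters.

```python
import numpy as np

def upwind_step(u, c, dx, dt):
    """One explicit first-order upwind step for u_t + c*u_x = 0, with c > 0."""
    un = u.copy()
    # Backward difference in space, since information travels in the +x direction.
    u[1:] = un[1:] - c * dt / dx * (un[1:] - un[:-1])
    return u

nx, c, dx = 200, 1.0, 0.01
dt = 0.5 * dx / c                                # CFL number 0.5: stable
x = np.arange(nx) * dx
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)    # square pulse centred at x = 0.5
for _ in range(100):
    u = upwind_step(u, c, dx, dt)
# After t = 0.5 the pulse centre has advected to roughly x = 1.0,
# smeared somewhat by the scheme's numerical diffusion.
print(x[np.argmax(u)])
```

In the thesis's model the same idea is applied term-by-term to the vorticity-transport, temperature and concentration equations, with the Poisson equation for the streamfunction handled separately by multigrid.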
395.
Design optimization of flexible space structures for passive vibration suppression. Nair, Prasanth B. January 2000.
This research is concerned with the development of a computational framework for the design of large flexible space structures with non-periodic geometries to achieve passive vibration suppression. The present system combines an approximation model management framework (AMMF), developed for evolutionary optimization algorithms (EAs), with reduced basis approximate dynamic reanalysis methods. Formulations based on reduced basis representations are presented for approximating the eigenvalues and eigenvectors, which are then used to compute the frequency response. The second method involves direct approximation of the frequency response via a dynamic stiffness matrix formulation. Both reduced basis methods use the results of a single exact analysis to approximate the dynamic response. An AMMF is then developed to make use of the computationally cheap approximate analysis techniques in lieu of exact analysis to arrive at better designs on a limited computational budget. A coevolutionary genetic search strategy is developed to ensure that design changes during the optimization iterations lead to low-rank perturbations of the structural system matrices. This ensures that the reduced basis methods developed here give good-quality approximations for moderate changes in the geometrical design variables. The k-means algorithm is employed for cluster analysis of the population of designs to determine the design points at which exact analysis should be carried out. The fitness of the designs in an EA generation is then approximated using reduced basis models constructed around the points where exact analysis is carried out. Results are presented for the optimal design of a two-dimensional space structure to achieve passive vibration suppression. It is shown that significant vibration isolation, of the order of 50 dB over a 100 Hz bandwidth, can be achieved.
Further, it is demonstrated that the coevolutionary search strategy can arrive at a better design than conventional approaches when a constraint is imposed on the computational budget available for optimization. Detailed computational studies are presented to gain insights into the mechanisms employed by the optimal design to achieve this performance. It is also shown that the final design is robust to parametric uncertainties.
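The reduced basis reanalysis idea, using a single exact analysis to approximate the dynamics of perturbed designs, can be sketched as a Rayleigh-Ritz projection onto the baseline eigenvectors. The matrices and the low-rank perturbation below are synthetic illustrations, not the thesis's structural models.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 8                          # full model size, reduced-basis size

A = rng.standard_normal((n, n))
K0 = A @ A.T + n * np.eye(n)          # baseline stiffness (SPD); unit mass assumed

# One exact analysis of the baseline system.
w0, V0 = np.linalg.eigh(K0)
basis = V0[:, :m]                     # the m lowest baseline modes as a Ritz basis

# A low-rank perturbation of the stiffness (e.g. a moderate design change).
d = rng.standard_normal((n, 2))
K = K0 + 0.1 * (d @ d.T)

# Rayleigh-Ritz: solve the small m x m projected problem instead of the full one
# (the mass matrix projects to the identity because the modes are orthonormal).
Kr = basis.T @ K @ basis
w_approx = np.linalg.eigvalsh(Kr)

w_exact = np.linalg.eigvalsh(K)[:m]
print(np.max(np.abs(w_approx - w_exact) / w_exact))  # relative error of the reduced model
```

For low-rank perturbations the projected eigenvalues stay close to the exact ones, which is precisely why the coevolutionary search described above constrains design changes to produce low-rank perturbations of the system matrices.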
396.
Real-time video scene analysis with heterogeneous processors. Blair, Calum Grahame. January 2014.
Field-Programmable Gate Arrays (FPGAs) and General Purpose Graphics Processing Units (GPUs) allow acceleration and real-time processing of computationally intensive computer vision algorithms. The decision to use either architecture in any application is determined by task-specific priorities such as processing latency, power consumption and algorithm accuracy. This choice is normally made at design time on a heuristic or fixed algorithmic basis; here we propose an alternative method for automatic runtime selection. In this thesis, we describe our PC-based system architecture containing both platforms; this provides greater flexibility and allows dynamic selection of processing platforms to suit changing scene priorities. Using the Histograms of Oriented Gradients (HOG) algorithm for pedestrian detection, we comprehensively explore algorithm implementation on FPGA, GPU and a combination of both, and show that the effect of data transfer time on overall processing performance is significant. We also characterise the performance of each implementation and quantify the tradeoffs between power, time and accuracy when moving processing between architectures, then specify the optimal architecture to use when prioritising each of these. We apply this new knowledge to a real-time surveillance application representative of anomaly detection problems: detecting parked vehicles in videos. Using motion detection and car and pedestrian HOG detectors implemented across multiple architectures to generate detections, we use trajectory clustering and a Bayesian contextual motion algorithm to generate an overall scene anomaly level. This is in turn used to select the architectures on which to run the compute-intensive detectors for the next frame, with higher anomaly levels selecting faster, higher-power implementations.
Comparing dynamic context-driven prioritisation of system performance against a fixed mapping of algorithms to architectures shows that our dynamic mapping method is 10% more accurate at detecting events than the power-optimised version, at the cost of 12 W higher power consumption.
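The dynamic runtime selection can be sketched as a lookup over per-implementation (latency, power, accuracy) characterisations driven by the scene anomaly level. The figures below are invented placeholders, not measurements from the thesis.

```python
# Each implementation of the detector is characterised off-line.
IMPLEMENTATIONS = [
    # name        latency_ms  power_w  accuracy
    ("fpga",      60.0,        5.0,    0.90),
    ("gpu",       25.0,       95.0,    0.93),
    ("fpga+gpu",  18.0,      100.0,    0.95),
]

def select_implementation(anomaly, power_cap=50.0):
    """Map the current anomaly level in [0, 1] to an implementation.

    Quiet scenes prioritise power; anomalous scenes prioritise speed,
    accepting a higher power draw.
    """
    if anomaly < 0.5:
        # Quiet scene: fastest implementation that fits the power cap.
        feasible = [i for i in IMPLEMENTATIONS if i[2] <= power_cap]
        return min(feasible, key=lambda i: i[1])
    # Anomalous scene: fastest implementation regardless of power.
    return min(IMPLEMENTATIONS, key=lambda i: i[1])

print(select_implementation(0.1)[0])  # -> fpga
print(select_implementation(0.9)[0])  # -> fpga+gpu
```

The thesis's actual mapping is driven per-frame by the Bayesian anomaly estimate; this sketch only shows the shape of the decision.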
397.
Generic security templates for information system security arguments: mapping security arguments within healthcare systems. He, Ying. January 2014.
Industry reports indicate that the number of security incidents occurring in healthcare organisations is increasing. Lessons learned (i.e. the causes of a security incident and the recommendations intended to avoid any recurrence) from those security incidents should ideally inform information security management systems (ISMS). The sharing of lessons learned is an essential activity in the “follow-up” phase of the security incident response lifecycle, but it has not been given enough attention in academia or industry. This dissertation proposes a novel approach, the Generic Security Template (GST), which aims to feed the lessons learned from real-world security incidents back into the ISMS. It adapts the graphical Goal Structuring Notation (GSN) to present the lessons learned in a structured manner by mapping them to the security requirements of the ISMS. The suitability of the GST has been confirmed by demonstrating that instances of the GST can be produced from real-world security incidents in different countries, based on in-depth analysis of case studies. The usability of the GST has been evaluated using a series of empirical studies. The GST is empirically evaluated in terms of its effectiveness in assisting the communication of lessons learned from security incidents, as compared to a traditional text-based approach alone. The results show that the GST can improve accuracy and reduce the mental effort involved in identifying the lessons learned from security incidents, and that these results are statistically significant. The GST is further evaluated to determine whether users can apply it to structure insights derived from a specific security incident. The results show that students with a computer science background can create an instance of the GST. The acceptability of the GST is assessed in a healthcare organisation.
Strengths and weaknesses are identified, and the GST has been adjusted to fit organisational needs. The GST is then further tested to examine its capability to feed the security lessons back into the ISMS. The results show that, by using the GST, lessons identified from security incidents in one healthcare organisation in a specific country can be transferred to another and can indeed inform improvements to the ISMS. In summary, the GST provides a unified way to feed the lessons learned back into the ISMS. It fosters an environment in which different stakeholders can speak the same language while exchanging the lessons learned from security incidents around the world.
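The mapping of lessons learned onto a GSN-style goal structure can be sketched as a small goal tree. The incident details and field names below are invented for illustration, not drawn from the thesis's case studies.

```python
# A toy GST instance: a top-level goal, a strategy, and subgoals whose
# GSN "solutions" carry the lessons learned mapped to ISMS requirements.
gst = {
    "goal": "Security lessons are fed back into the ISMS",
    "strategy": "Argue over each affected ISMS security requirement",
    "subgoals": [
        {
            "goal": "Access control requirement is addressed",
            "solution": "Lesson: revoke credentials of departed staff promptly",
        },
        {
            "goal": "Audit logging requirement is addressed",
            "solution": "Lesson: retain access logs for forensic review",
        },
    ],
}

def lessons(node):
    """Collect the lessons (GSN solution nodes) mapped into the template."""
    found = [node["solution"]] if "solution" in node else []
    for child in node.get("subgoals", []):
        found.extend(lessons(child))
    return found

print(len(lessons(gst)))  # -> 2
```

The point of the structure is traceability: each lesson hangs off the specific security requirement it supports, which is what the text-based approach alone lacks.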
398.
Artificial societies and information theory: modelling of sub-system formation based on Luhmann's autopoietic theory. Di Prodi, Paolo. January 2012.
This thesis develops a theoretical framework for the generation of artificial societies. In particular, it shows how sub-systems emerge when the agents are able to learn and have the ability to communicate. This novel theoretical framework integrates the autopoietic hypothesis of human societies, formulated originally by the German sociologist Luhmann, with concepts from Shannon's information theory applied to adaptive learning agents. Simulations were executed using Agent-Based Modelling (ABM), a relatively new computational modelling paradigm in which phenomena are modelled as dynamical systems of interacting agents. The thesis investigates, in particular, the functions and properties necessary to reproduce the paradigm of society using the ABM approach. Luhmann has proposed that subsystems form in society to reduce uncertainty. Subsystems can then be composed of agents with a reduced behavioural complexity: for example, in society there are people who produce goods and others who distribute them. Both the behaviour and the communication are learned by the agents rather than imposed. The simulated task is to collect food, keep it and eat it until sated, and every agent communicates its energy state to the neighbouring agents. This results in two subsystems: agents in the first collect food, while agents in the second steal food from others. The ratio between the number of agents belonging to the first system and to the second depends on the number of food resources. The simulations are in accordance with Luhmann, who suggested that adaptive agents self-organise by reducing the amount of sensory information or, equivalently, reducing the complexity of the perceived environment from the agent's perspective. Shannon's information theory is used to assess the performance of the simulated learning agents.
A practical measure, based on the concept of Shannon information flow, is developed and applied to adaptive controllers which use Hebbian learning, input correlation learning (ICO/ISO) and temporal difference learning. The behavioural complexity is measured with a novel information measure, called Predictive Performance, which is able to measure at a subjective level how well an agent is performing a task. This is then used to quantify the social division of tasks in a social group of honest, cooperative, food-foraging, communicating agents.
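A Shannon-style measure of behavioural complexity can be sketched as the entropy of an agent's discretised action stream: specialised agents with a reduced behavioural repertoire give lower entropy, in the spirit of the complexity-reduction argument above. This is a generic illustration, not the thesis's exact Predictive Performance metric.

```python
import math
from collections import Counter

def stream_entropy(symbols):
    """Shannon entropy (in bits) of a sequence of discrete behaviour symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

generalist = ["seek", "eat", "steal", "flee"] * 25   # varied behaviour
specialist = ["seek", "eat"] * 50                    # reduced repertoire

print(stream_entropy(generalist))  # -> 2.0 bits
print(stream_entropy(specialist))  # -> 1.0 bits
```

Under this view, the formation of subsystems shows up as a drop in per-agent entropy even though the society as a whole still performs the full task.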
399.
An investigation of eyes-free spatial auditory interfaces for mobile devices: supporting multitasking and location-based information. Vazquez-Alvarez, Yolanda. January 2013.
Auditory interfaces offer a solution to the problem of effective eyes-free mobile interaction. However, a problem with audio, as opposed to visual displays, is dealing with multiple simultaneous information streams. Spatial audio can be used to differentiate between different streams by locating them in separate spatial auditory streams. In this thesis, we consider which spatial audio designs might be the most effective for supporting multiple auditory streams and the impact such spatialisation might have on the user's cognitive load. An investigation is carried out to explore the extent to which 3D audio can be effectively incorporated into mobile auditory interfaces to offer users eyes-free interaction for both multitasking and accessing location-based information. Following a successful calibration of the 3D audio controls on the mobile device of choice for this work (the Nokia N95 8GB), a systematic evaluation of 3D audio techniques is reported in the experimental chapters of this thesis, considering the effects of multitasking and multi-level displays, as well as differences between egocentric and exocentric designs. One experiment investigates the implementation and evaluation of a number of different spatial (egocentric) and non-spatial audio techniques for supporting eyes-free mobile multitasking, including spatial minimisation. The efficiency and usability of these techniques were evaluated under varying cognitive load. This evaluation showed an important interaction between cognitive load and the method used to present multiple auditory streams. The spatial minimisation technique offered an effective means of presenting and interacting with multiple auditory streams simultaneously in a selective-attention task (low cognitive load), but it was not as effective in a divided-attention task (high cognitive load), in which the interaction benefited significantly from the interruption of one of the streams.
Two further experiments examine a location-based approach to supporting multiple information streams in a realistic eyes-free mobile environment. An initial case study was conducted in an outdoor mobile audio-augmented exploratory environment, allowing for the analysis and description of user behaviour in a purely exploratory setting. 3D audio was found to be an effective technique for disambiguating multiple sound sources in a mobile exploratory environment and for providing a more engaging and immersive experience, as well as encouraging exploratory behaviour. A second study extended this work by evaluating a number of complex multi-level spatial auditory displays that enabled interaction with multiple streams of location-based information in an indoor mobile audio-augmented exploratory environment. It was found that a consistent exocentric design across levels failed to reduce workload or increase user satisfaction, and this design was widely rejected by users. However, the remaining spatial auditory displays tested in this study encouraged an exploratory behaviour similar to that described in the first case study, further characterised by increased user satisfaction and low perceived workload.
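The egocentric rendering underlying such displays can be sketched as computing a source's azimuth relative to the user's heading, so that world-anchored sounds stay fixed in space as the listener turns. The coordinate conventions below are assumptions for illustration.

```python
import math

def egocentric_azimuth(user_xy, heading_deg, source_xy):
    """Azimuth of the source in degrees relative to the user's heading.

    0 = straight ahead, positive = to the right, range (-180, 180].
    North is +y and headings are compass-style (assumed conventions).
    """
    dx = source_xy[0] - user_xy[0]
    dy = source_xy[1] - user_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))      # compass bearing of source
    rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return rel if rel != -180.0 else 180.0

# A source due north of the user:
print(egocentric_azimuth((0, 0), 0.0, (0, 10)))    # -> 0.0  (straight ahead)
print(egocentric_azimuth((0, 0), 90.0, (0, 10)))   # -> -90.0 (to the left)
```

Feeding this relative azimuth to a binaural or HRTF panner each time the heading updates is what keeps the auditory scene anchored to the world rather than to the listener's head.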
400.
Mutually reinforcing systems. Ferguson, John Urquhart. January 2011.
Human computation can be described as outsourcing part of a computational process to humans. This technique might be used when a problem can be solved better by humans than computers or it may require a level of adaptation that computers are not yet capable of handling. This can be particularly important in changeable settings which require a greater level of adaptation to the surrounding environment. In most cases, human computation has been used to gather data that computers struggle to create. Games with by-products can provide an incentive for people to carry out such tasks by rewarding them with entertainment. These are games which are designed to create a by-product during the course of regular play. However, such games have traditionally been unable to deal with requests for specific data, relying instead on a broad capture of data in the hope that it will cover specific needs. A new method is needed to focus the efforts of human computation and produce specifically requested results. This would make human computation a more valuable and versatile technique. Mutually reinforcing systems are a new approach to human computation that tries to attain this focus. Ordinary human computation systems tend to work in isolation and do not work directly with each other. Mutually reinforcing systems are an attempt to allow multiple human computation systems to work together so that each can benefit from the other's strengths. For example, a non-game system can request specific data from a game. The game can then tailor its game-play to deliver the required by-products from the players. This is also beneficial to the game because the requests become game content, creating variety in the game-play which helps to prevent players getting bored of the game. Mobile systems provide a particularly good test of human computation because they allow users to react to their environment. Real world environments are changeable and require higher levels of adaptation from the users. 
This means that, in addition to the human computation required by other systems, mobile systems can also take advantage of a user's ability to apply environmental context to the computational task. This research explores the effects of mutually reinforcing systems on mobile games with by-products. These effects will be explored by building and testing mutually reinforcing systems, including mobile games. A review of existing literature, human computation systems and games with by-products will set out the problems involved in outsourcing parts of a computational process to humans. Mutually reinforcing systems are presented as one approach to addressing some of these problems. Example systems have been created to demonstrate the successes and failures of this approach, and their evolving designs have been documented. The evaluation of these systems will be presented along with a discussion of the outcomes and possible future work. A conclusion will summarise the findings of the work carried out. This dissertation shows that human computation techniques can be extended to allow the collection and classification of useful contextual information in mobile environments, and that the by-products can be made to match the specific needs of another system.
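The request-driven loop described above (a non-game system posting tasks for specific data, which the game turns into content whose play yields the by-product) can be sketched as a simple broker. All names and task details here are illustrative, not from the dissertation's systems.

```python
import collections

class RequestBroker:
    """Minimal sketch of a mutually reinforcing exchange between systems."""

    def __init__(self):
        self.pending = collections.deque()
        self.results = {}

    def request(self, task_id, description):
        """A non-game system asks for specific data."""
        self.pending.append((task_id, description))

    def next_game_content(self):
        """The game pulls a pending request to present as a challenge."""
        return self.pending.popleft() if self.pending else None

    def submit(self, task_id, by_product):
        """Play produces the by-product, which flows back as the result."""
        self.results[task_id] = by_product

broker = RequestBroker()
broker.request("photo-42", "photograph the north campus entrance")
task = broker.next_game_content()       # becomes in-game content
broker.submit(task[0], "entrance.jpg")  # play yields the requested data
print(broker.results["photo-42"])       # -> entrance.jpg
```

The reinforcement is two-way: the requesting system gets targeted data instead of a broad capture, and the game gets a stream of fresh challenges that keeps play varied.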