471. Modelling and testing the definitions of teleworking within a local council environment. Haq, Khawaja Al-M., January 2014.
Teleworking was defined in terms of comprehension: a root definition, a conceptual definition and an abstraction definition. The definitions were subsequently modelled in terms of four theories: socio-factors of teleworking (model 1 of 4), a maturity model of teleworking (model 2 of 4), technical factors of teleworking (model 3 of 4) and a taxonomy of teleworking (model 4 of 4). The modelling of the definitions of teleworking adds further comprehension to the concept of teleworking. Teleworking is a socio-technical working practice, and so the research study turned first to the socio aspect: a number of socio-factors (minor and major) were identified from the existing literature. Subsequently, the major socio-factors were mapped to a teleworking maturity model in terms of three layers: resource, policy and connectivity. The technical aspect of the research study identified and divided factors into dimensions, attributes and organisational roles. The three models (socio, technical and maturity) were brought together in a taxonomy of teleworking: an amalgamation of the socio and technical factors of teleworking together with the three layers of the maturity model. The research methodology followed a positivist viewpoint, with socio-factors measured using 7-point Likert scales. There were a large number of measures for socio-teleworking, so two research methods were adopted to reduce the number to a manageable level, namely an initial questionnaire design and a Q-sort study. Following exclusions, a web-based survey was created from the remaining socio-measures of teleworking. The web-based survey was conducted as a pilot study (at councils in the north of England) before surveying 264 employees at Council-Z (the primary study). Data collected from Council-Z were analysed using confirmatory factor analysis. Theoretical models (factor structures) were created for resource, policy and connectivity, and the factor structures of each layer were tested for consistency with the data. Four factor structures of resource were identified (A, B, C and D); factor structure D showed the highest level of convergence between theory and observed data, that is, it was the best-fitting model. Six factor structures of policy were identified, with factor structure C2 the most favourable in terms of exclusion of ambiguities and model-fit statistics. Three factor structures of connectivity were identified; for each of the absolute and incremental fit statistics, factor structure B was consistently within the cut-off values for good model fit and was also the best-fitting model. In terms of the utility of the study, the definitions of teleworking and the modelling of those definitions have improved understanding of the research area. The extensive number of factors of teleworking identified through the theoretical modelling process, and the measurement of these, have demonstrated improved measurement techniques. The best-fitting models from the confirmatory factor analyses have broad applicability to other similar organisations, and the data from the three best-fitting models can be used by Council-Z to introduce informed teleworking initiatives. In terms of limitations and future work, technical factors were out of scope in this research study; hence, types of teleworking practices linked to technical factors of teleworking would be future work, as would studies of the linkage between the socio- and technical factors.
In terms of the taxonomical model, empirical validation would be sought for each of the seven major socio-factors in terms of factor structures. This study empirically tested each of the three layers of the maturity model, as opposed to each of the major socio-factors within the three layers. Furthermore, additional factors may be identifiable through future work, adding to the taxonomy; in turn, the comprehension of teleworking would be enhanced, alongside further standardisation of teleworking definitions and measurements.
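As an illustration of the kind of model-fit comparison described above, the sketch below computes two common fit indices used in confirmatory factor analysis, RMSEA (absolute) and CFI (incremental), from chi-square statistics. The chi-square figures and null-model values are hypothetical; the thesis's actual factor structures, software and cut-off values are not reproduced here.

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (an absolute fit index)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_model, df_model, chi2_null, df_null):
    """Comparative fit index (an incremental fit index against the null model)."""
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    return 1.0 - d_model / d_null if d_null > 0 else 1.0

# Hypothetical chi-square results for two competing factor structures (n = 264 respondents).
candidates = {"structure A": (312.4, 104), "structure D": (118.7, 101)}
chi2_null, df_null = 1650.0, 120   # hypothetical independence (null) model
for name, (chi2, df) in candidates.items():
    print(name, "RMSEA =", round(rmsea(chi2, df, 264), 3),
          "CFI =", round(cfi(chi2, df, chi2_null, df_null), 3))
```

A lower RMSEA and a higher CFI indicate a closer fit of the hypothesised factor structure to the observed data, which is the sense in which one structure is described above as best-fitting.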

472. Dimensional reduction and design optimization of gas turbine engine casings for tip clearance studies. Stanley, Felix, January 2010.
The objective of this research is to develop a design process that can optimize an engine casing assembly to reduce tip clearance losses. Performing design optimization on the casings that form a gas turbine engine's external structure is a very tedious and cumbersome process. The design process involves the conceptual, preliminary and detailed design stages, and redesign costs are high when changes are made to the design of a part in the detailed design stage. Normally a 2D configuration is envisaged by the design team in the conceptual design stage. Engine thrust, mass flow, operating temperature, the materials and manufacturing processes available at the time of design, engine mass, loads and assembly conditions are a few of the many important variables taken into consideration when designing an aerospace component. Linking this information together into the design process to achieve an optimal design using a quick, robust method is still a daunting task. In this thesis, we present techniques to extract midsurfaces of complex 3D axisymmetric and non-axisymmetric geometries based on medial axis transforms. We use the proposed FE modeling technique to optimize the geometry by designing a sequential workflow consisting of CAD, FE analysis and optimization algorithms within an integrated system. An existing commercial code was first used to create a midsurface shell model, and the results showed that such models could replace 3D models for deflection studies. These software packages, being black-box codes, could not be customized; such limitations restrict their use in batch mode and their development for research purposes. We therefore recognized an immediate need to develop a bespoke code to extract midsurfaces for FE modeling. Two codes, Mantle-2D and Mantle-3D, have been developed using Matlab to handle 3D axisymmetric and non-axisymmetric geometries respectively. Mantle-2D works with a 2D cross-section geometry as input, while Mantle-3D deals with complex 3D geometries. The Pareto front (PF) of 2000 designs of the shell-based optimization problem, when superimposed on the PF of the solid-based optimization, provided promising results. A DoE study consisting of 200 designs was also conducted, and the results showed that the shell model differs from the solid model in mass and deflection by <1% and <5.0% respectively. The time taken to build and solve a solid model varied between 45 and 75 minutes, while the equivalent midsurface-based shell model built using Mantle-2D required only 3-4 minutes. The Mantle-3D based dimensional reduction process for a complex non-axisymmetric solid model has also been demonstrated with encouraging results. This code has been used to extract and mesh the midsurface of a non-axisymmetric geometry with shell elements for use in finite element analysis. 101 design points were studied and the results compared with the corresponding solid model. The first 10 natural frequencies of the resulting shell model deviate from those of the solid model by <4.0% for the baseline design, while the mass and deflection errors were <3.5% and <9.0% for all 101 design points.
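To make the medial-axis idea concrete, the illustrative sketch below extracts an approximate midline and local thickness from a binary mask of a 2D cross-section using scikit-image. It is a toy example under simplified assumptions (a pixelised L-shaped section) and is not the Mantle-2D or Mantle-3D code described above.

```python
import numpy as np
from skimage.morphology import medial_axis

# Illustrative cross-section: a thin-walled L-shaped solid on a pixel grid.
mask = np.zeros((120, 120), dtype=bool)
mask[10:110, 10:25] = True     # vertical limb, 15 px thick
mask[95:110, 10:110] = True    # horizontal limb, 15 px thick

# Medial axis (approximate midline) plus the distance to the nearest boundary.
skeleton, distance = medial_axis(mask, return_distance=True)

midline_points = np.argwhere(skeleton)      # candidate midline/midsurface nodes
half_thickness = distance[skeleton]         # local half-thickness in pixels
print(len(midline_points), "midline points; mean local thickness",
      round(2 * float(half_thickness.mean()), 2), "px")
```

In a shell idealisation, each midline (or midsurface) point carries the local thickness recovered from the distance transform, which is what allows a 3D solid to be replaced by a much cheaper shell model for deflection studies.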

473. Dynamic analysis of arch dams subjected to seismic disturbances. Ermutlu, H. E., January 1968.
No description available.

474. An object-based analysis of cloud motion from sequences of METEOSAT satellite data. Newland, Franz Thomas, January 1999.
The need for wind and atmospheric dynamics data for weather modelling and forecasting is well founded. Current texture-based techniques for tracking clouds in sequences of satellite imagery are robust at generating global cloud motion winds, but their use as wind data makes many simplifying assumptions on the causal relationships between cloud dynamics and the underlying windfield. These can be summarised under the single assumption that clouds must act as passive tracers for the wind. The errors thus introduced are now significant in light of the improvements made to weather models and forecasting techniques since the first introduction of satellite-derived wind information in the late 1970s. In that time, the algorithms used to track cloud in satellite imagery have not changed fundamentally. There is therefore a need to address the simplifying assumptions and to adapt the nature of the analyses applied accordingly. A new approach to cloud motion analysis from satellite data is introduced in this thesis which tracks the motion of clouds at different scales, making it possible to identify and understand some of the different transport mechanisms present in clouds and remove or reduce the dependence on the simplifying assumptions. Initial work in this thesis examines the suitability of different motion analysis tools for determining the motion of the cloud content in the imagery using a fuzzy system. It then proposes tracking clouds as flexible structures to analyse the motion of the clouds themselves, and using the nature of cloud edges to identify the atmospheric flow around the structures. To produce stable structural analyses, the cloud data are initially smoothed. A novel approach using morphological operators is presented that maintains cloud edge gradients whilst maximising coherence in the smoothed data. Clouds are analysed as whole structures, providing a new measure of synoptic-scale motion. Internal dynamics of the cloud structures are analysed using medial axis transforms of the smoothed data. Tracks of medial axes provide a new measure of cloud motion at a mesoscale. The sharpness in edge gradient is used as a new measure to identify regions of atmospheric flow parallel to a cloud edge (jet flows, which cause significant underestimation in atmospheric motion under the present approach) and regions where the flow crosses the cloud boundary. The different motion characteristics displayed by the medial axis tracks and edge information provide an indication of the atmospheric flow at different scales. In addition to generating new parameters for measuring cloud and atmospheric dynamics, the approach enables weather modellers and forecasters to identify the scale of flow captured by the currently used cloud tracers (both satellite-derived and from other sources). This would allow them to select the most suitable tracers for describing the atmospheric dynamics at the scale of their model or forecast. This technique would also be suitable for any other fluid flow analyses where coherent and stable gradients persist in the flow, and where it is useful to analyse the flow dynamics at more than one scale.
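As a rough illustration of the kind of morphological pre-smoothing mentioned above, the sketch below applies a grey-scale opening followed by a closing to a synthetic brightness field: small-scale texture is suppressed while larger edge structure is largely retained. The field, structuring-element size and operator sequence are hypothetical and do not reproduce the specific operators developed in the thesis.

```python
import numpy as np
from scipy import ndimage

# Synthetic "cloud brightness" field: a smooth blob plus fine-scale texture.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:256, 0:256]
field = np.exp(-((x - 128) ** 2 + (y - 110) ** 2) / (2 * 45.0 ** 2))
field += 0.15 * rng.standard_normal(field.shape)

# Grey-scale opening then closing with a small flat structuring element.
size = (5, 5)
smoothed = ndimage.grey_closing(ndimage.grey_opening(field, size=size), size=size)

# Standard deviation of the removed component indicates how much texture was suppressed.
print("residual texture std:", round(float((field - smoothed).std()), 3))
```

A smoothed field of this kind is the sort of input from which stable cloud structures and their medial axes can then be extracted and tracked between successive images.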

475. Grid approaches to data-driven scientific and engineering workflows. Paventhan, Arumugam, January 2007.
Enabling the full life cycle of scientific and engineering workflows requires robust middleware and services that support near-real-time data movement, high-performance processing and effective data management. In this context, we consider two related technology areas: Grid computing, which is fast emerging as an accepted way forward for large-scale, distributed, multi-institutional resource sharing, and database systems, whose capabilities are undergoing continuous change, providing new possibilities for scientific data management on the Grid. In this thesis, we look into the challenging requirements of integrating data-driven scientific and engineering experiment workflows onto the Grid. We consider wind tunnels, which house multiple experiments with differing characteristics, as an application exemplar. This thesis contributes two approaches while attempting to tackle some of the following questions: How can domain-specific workflow activity development be enabled by hiding the underlying complexity? Can new experiments be added to the system easily? How can the overall turnaround time be reduced by end-to-end experimental workflow support? In the first approach, we show how experiment-specific workflows can help accelerate application development using Grid services. This has been realized with the development of MyCoG, the first Commodity Grid toolkit for .NET supporting multi-language programmability. In the second, we present an alternative approach based on federated database services to realize an end-to-end experimental workflow. We show, with the help of a real-world example, how database services can be building blocks for scientific and engineering workflows.

476. Encouraging collaboration through a new data management approach. Johnston, Steven, January 2006.
The ability to store large volumes of data is increasing faster than processing power. Existing data management methods often result in data loss, inaccessibility or the repetition of simulations. We propose a framework which promotes collaboration and simplifies data management. In particular, we have demonstrated the proposed framework in the scenario of handling large-scale data generated from biomolecular simulations in a multi-institutional global collaboration. The framework has extended the ability of the Python problem solving environment to manage data files and the metadata associated with simulations. We provide a transparent and seamless environment for user-submitted code to analyse and post-process data stored in the framework. Based on this scenario, we have further enhanced and extended the framework to deal with the more generic case of enabling any existing data file to be post-processed from any .NET-enabled programming language.
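A minimal sketch of the kind of file-plus-metadata bookkeeping described above, assuming a simple local JSON catalogue keyed by content hash; the function name, fields and catalogue format here are hypothetical and do not reproduce the framework's actual API.

```python
import hashlib
import json
from pathlib import Path

CATALOGUE = Path("catalogue.json")   # hypothetical local metadata store

def register(data_file: str, **metadata) -> str:
    """Record a simulation output file, keyed by its content hash, with user metadata."""
    digest = hashlib.sha256(Path(data_file).read_bytes()).hexdigest()
    catalogue = json.loads(CATALOGUE.read_text()) if CATALOGUE.exists() else {}
    catalogue[digest] = {"path": str(Path(data_file).resolve()), **metadata}
    CATALOGUE.write_text(json.dumps(catalogue, indent=2))
    return digest

# Hypothetical usage: tag a trajectory file so collaborators can locate and reuse it.
# key = register("run042/trajectory.dat", project="biomolecular", temperature_K=310)
```

Keying entries by content hash means renamed or copied files are still recognised as the same data, which is one simple way to avoid the repeated simulations mentioned above.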

477. Enhanced pre-clinical assessment of total knee replacement using computational modelling with experimental corroboration & probabilistic applications. Strickland, Anthony Michael, January 2009.
Demand for Total Knee Replacement (TKR) surgery is high and rising, not just in the number of procedures but also in the diversity of patient demographics and the level of patient expectations. Accordingly, greater efforts are being invested in the pre-clinical analysis of TKR designs, to improve their performance in-vivo. A wide range of experimental and computational methods are used to analyse TKR performance pre-clinically. However, direct validation of these methods and models is invariably limited by the restrictions and challenges of clinical assessment, and confounded by the high variability of results seen in-vivo. Consequently, the need exists to achieve greater synergy between different pre-clinical analysis methods. By demonstrating robust corroboration between in-silico and in-vitro testing, and identifying and quantifying the key sources of uncertainty, greater confidence can be placed in these assessment tools. This thesis charts the development of a new generation of fast computational models for TKR test platforms, with closer collaboration with in-vitro test experts (and consequently more rigorous corroboration with experimental methods) than previously. Beginning with basic tibiofemoral simulations, the complexity of the models was progressively increased to include in-silico wear prediction, patellofemoral and full lower-limb models, rig controller emulation, and accurate system dynamics. At each stage, the models were compared extensively with data from the literature and with experimental test results generated specifically for corroboration purposes. It is demonstrated that, when used in conjunction with and complementary to the corresponding experimental work, these higher-integrity in-silico platforms can greatly enrich the range and quality of pre-clinical data available for decision-making in the design process, as well as the understanding of experimental platform dynamics. Further, these models are employed within a probabilistic framework to provide a statistically quantified assessment of the input factors most influential on variability in the mechanical outcomes of TKR testing. This gives designers a much richer, holistic visibility of the true system behaviour than extant 'deterministic' simulation approaches (both computational and experimental). By demonstrating the value of better corroboration and the benefit of stochastic approaches, the methods used here lay the groundwork for future advances in pre-clinical assessment of TKR. These fast, inexpensive models can complement existing approaches and augment the information available for making better design decisions prior to clinical trials, accelerating the design process and ultimately leading to improved TKR delivery in-vivo to meet future demands.
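The probabilistic idea can be sketched with a generic Monte Carlo sensitivity study: sample the uncertain inputs, run an outcome model, and rank the inputs by their correlation with the output. The input names, distributions and placeholder outcome model below are hypothetical and stand in for, rather than reproduce, the thesis's TKR test-platform models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical uncertain inputs for a knee-rig style simulation (arbitrary units).
inputs = {
    "implant_alignment_deg": rng.normal(0.0, 1.5, n),
    "ligament_stiffness":    rng.normal(1.0, 0.15, n),
    "friction_coefficient":  rng.uniform(0.02, 0.08, n),
}

# Placeholder outcome standing in for a mechanical output of the in-silico platform.
outcome = (0.8 * inputs["implant_alignment_deg"]
           - 2.5 * (inputs["ligament_stiffness"] - 1.0)
           + 10.0 * inputs["friction_coefficient"]
           + rng.normal(0.0, 0.2, n))

# Rank the input factors by the magnitude of their correlation with the outcome.
ranked = sorted(inputs.items(),
                key=lambda kv: -abs(np.corrcoef(kv[1], outcome)[0, 1]))
for name, samples in ranked:
    print(f"{name:>22s}  r = {np.corrcoef(samples, outcome)[0, 1]:+.2f}")
```

A ranking of this kind (here by simple correlation; variance-based measures are another option) is one way to identify which input factors drive most of the variability in the mechanical outcomes.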

478. Time-domain and harmonic balance turbulent Navier-Stokes analysis of oscillating foil aerodynamics. Piskopakis, Andreas, January 2014.
The underlying thread of the research work presented in this thesis is the development of a robust, accurate and computationally efficient general-purpose Reynolds-Averaged Navier-Stokes code for the analysis of complex unsteady turbulent flow aerodynamics, ranging from low-speed applications such as hydrokinetic and wind turbine flows to high-speed applications such as vibrating transonic wings. The main novel algorithmic contribution of this work is the successful development of a fully coupled multigrid solution method for the Reynolds-Averaged Navier-Stokes equations and the two-equation Shear Stress Transport turbulence model of Menter. The new approach, which also includes the implementation of a high-order restriction operator and an effective limiter of the prolonged corrections, is implemented and successfully demonstrated in the existing steady, time-domain and harmonic balance solvers of a compressible Navier-Stokes research code. The harmonic balance solution of the Navier-Stokes equations is a fairly new technology which can substantially reduce the run-time required to compute nonlinear periodic flow fields with respect to the conventional time-domain approach. The thesis also features the investigation of one modelling aspect and one numerical aspect often overlooked, or not comprehensively analysed, in turbulent computational fluid dynamics simulations of the type discussed in the thesis. The modelling aspect is the sensitivity of the turbulent flow solution to the somewhat arbitrary value of the scaling factor appearing in the solid-wall boundary condition of the second turbulent variable of the Shear Stress Transport turbulence model. The results reported herein highlight that the solution variability associated with typical choices of this scaling factor can be similar to, or higher than, the solution variability caused by the choice of different turbulence models. The numerical aspect is the sensitivity of the turbulent flow solution to the order of the discretisation of the turbulence model equations. The results reported herein highlight that the significance of solution differences between first- and second-order space discretisation of the turbulence equations varies with the flow regime (e.g. fully subsonic or transonic), the operating conditions, which may or may not result in flow separation (e.g. angle of attack), and the grid refinement. The newly developed turbulent flow capabilities are validated by considering a wide range of test cases with flow regimes varying from low-speed subsonic to transonic. The solutions of the research code are compared with experimental data, theoretical solutions and numerical solutions obtained with a state-of-the-art time-domain commercial code. The main computational results of this research concern a low-speed renewable energy application and an aeronautical engineering application. The former is a thorough comparative analysis of a hydrokinetic turbine working in a low-speed laminar regime and a high-Reynolds-number turbulent regime. The time-domain results obtained with the newly developed turbulent code are used to analyse and discuss in great detail the unsteady aerodynamic phenomena occurring in both regimes. The main motivation for analysing this problem is both to highlight the predictive capabilities and numerical robustness of the developed turbulent time-domain flow solver for complex realistic problems, and to shed more light on the complex physics of this emerging renewable energy device.
The latter application is the time-domain and harmonic balance turbulent flow analysis of a transonic wing section undergoing pitching motion. The main motivation of these analyses is to assess the computational benefits achievable by using the harmonic balance solution of the Reynolds-Averaged Navier-Stokes and Shear Stress Transport equations rather than the conventional time-domain solution, and to further demonstrate the predictive capabilities of the developed Computational Fluid Dynamics system. To this aim, the numerical solutions of the research code are compared with both the available experimental data and the time-domain results computed by a state-of-the-art commercial package regularly used by industry and academia worldwide.
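The computational appeal of harmonic balance can be illustrated on a scalar model problem: the periodic solution is sought directly at a small number of time samples by replacing the time derivative with a spectral operator, instead of time-marching through many periods until the transient decays. The sketch below is a toy linear example assuming a single known fundamental frequency; it bears no relation in scale or complexity to the RANS/SST harmonic balance solver described above.

```python
import numpy as np

omega = 2.0 * np.pi      # fundamental angular frequency of the periodic forcing
N = 7                    # odd number of time samples (resolves 3 harmonics)
T = 2.0 * np.pi / omega
t = np.arange(N) * T / N

# Spectral time-derivative matrix: differentiate each column of the identity
# in the frequency domain (integer harmonics k, derivative factor i*k*omega).
k = np.fft.fftfreq(N, d=1.0 / N)
D = np.real(np.fft.ifft(1j * k[:, None] * omega * np.fft.fft(np.eye(N), axis=0), axis=0))

# Model problem du/dt + u = cos(omega*t): harmonic balance solves (D + I) u = f
# for the periodic state directly, with no time marching.
f = np.cos(omega * t)
u = np.linalg.solve(D + np.eye(N), f)

u_exact = (np.cos(omega * t) + omega * np.sin(omega * t)) / (1.0 + omega ** 2)
print("max error vs exact periodic solution:", float(np.max(np.abs(u - u_exact))))
```

Because the periodic state is obtained from one coupled solve at N time instants, the cost no longer scales with the number of periods needed for the transient to die out, which is the source of the run-time savings mentioned above; for nonlinear equations the coupled system is solved iteratively rather than in a single linear solve.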

479. Machine learning techniques for high dimensional data. Chi, Yuan, January 2015.
This thesis presents data processing techniques for three different but related application areas: embedding learning for classification, fusion of low bit depth images, and 3D reconstruction from 2D images. For embedding learning for classification, a novel manifold embedding method is proposed for the automated processing of large, varied data sets. The method is based on binary classification, where the embeddings are constructed so as to determine one or more unique features for each class individually from a given dataset. The proposed method is applied to examples of multiclass classification that are relevant for large-scale data processing for surveillance (e.g. face recognition), where the aim is to augment decision making by reducing extremely large sets of data to a manageable level before displaying the selected subset to a human operator. In addition, an indicator for a weighted pairwise constraint is proposed to balance the contributions from different classes to the final optimisation, in order to better control the relative positions between the important data samples from either the same class (intraclass) or different classes (interclass). The effectiveness of the proposed method is evaluated through comparison with seven existing techniques for embedding learning, using four established databases of faces, covering various poses, lighting conditions and facial expressions, as well as two standard text datasets. The proposed method performs better than these existing techniques, especially for cases with small sets of training data samples. For fusion of low bit depth images, using low bit depth images instead of full images offers a number of advantages for aerial imaging with UAVs, where the transmission rate/bandwidth is limited: for example, reducing the need for data transmission, removing superfluous details, and reducing the computational load of on-board platforms (especially for small or micro-scale UAVs). The main drawback of using low bit depth imagery is that image details of the scene are discarded. Fortunately, these can be reconstructed by fusing a sequence of related low bit depth images which have been properly aligned. To reduce computational complexity and obtain a less distorted result, a similarity transformation is used to approximate the geometric alignment between two images of the same scene. The transformation is estimated using a phase correlation technique. It is shown that the phase correlation method is capable of registering low bit depth images without any modification or any pre- and/or post-processing. For 3D reconstruction from 2D images, a method is proposed to deal with dense reconstruction after a sparse reconstruction (i.e. a sparse 3D point cloud) has been created using the structure-from-motion technique. Instead of generating a dense 3D point cloud, the proposed method forms a triangle from three points in the sparse point cloud, and then maps the corresponding components in the 2D images back to the point cloud. Compared to existing methods that use a similar approach, this method reduces the computational cost: instead of utilising every triangle in the 3D space to do the mapping from 2D to 3D, it uses a large triangle to replace a number of small triangles for flat and almost-flat areas. Compared to the reconstruction results obtained by existing techniques that aim to generate a dense point cloud, the proposed method achieves a better result while the computational cost remains comparable.
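As a minimal sketch of the registration step, the code below estimates a pure translation by phase correlation using NumPy, with low bit depth mimicked by quantising a synthetic scene to 2 bits. It is illustrative only: it assumes integer shifts and omits the rotation/scale part of the similarity transformation described above.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) shift d such that a is approximately np.roll(b, d)."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12              # normalised cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices above the midpoint round to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Synthetic scene, a shifted copy, and crude 2-bit (4 grey level) quantisation.
rng = np.random.default_rng(2)
scene = rng.random((128, 128))
shifted = np.roll(scene, shift=(7, -12), axis=(0, 1))
quantise = lambda img: np.floor(img * 4).clip(0, 3)

print(phase_correlation_shift(quantise(shifted), quantise(scene)))  # expect (7, -12)
```

The normalisation of the cross-power spectrum is what makes the peak location depend on the phase (i.e. the displacement) rather than the image intensities, which is one reason the approach tolerates heavy quantisation.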

480. Modelling of catalytic aftertreatment of NOx emissions using hydrocarbon as a reductant. Sawatmongkhon, Boonlue, January 2012.
Hydrocarbon selective catalytic reduction (HC-SCR) is emerging as one of the most practical methods for the removal of nitrogen oxides (NOx) from light-duty diesel engine exhaust gas. In order to further promote the chemical reactions of NOx-SCR by hydrocarbons, an understanding of the HC-SCR process at the molecular level is necessary. In the present work, a novel surface-reaction mechanism for HC-SCR is set up, with emphasis on microkinetic analysis, aiming to investigate the chemical behaviour of the process via detailed elementary reaction steps. Propane (C3H8) is chosen as the reductant for HC-SCR. The simulation is designed for a single channel of a monolith, typical of automotive catalytic converters, coated with a silver alumina catalyst (Ag/Al2O3). The complicated physical and chemical processes occurring in the catalytic converter are investigated using computational fluid dynamics (CFD) coupled with the mechanism. The C3H8-SCR reaction mechanism consists of 94 elementary reactions, 24 gas-phase species and 24 adsorbed surface species. The mechanism is optimised by tuning some important reaction parameters against measured experimental data, and the optimised mechanism is then validated against another set of experimental data. The numerical simulation shows good agreement between the model and the experimental data. Finally, the numerical modelling also provides information that is difficult to measure, for example gas-phase concentration distributions, temperature profiles, wall temperatures and the coverage of adsorbed species on the catalyst surface. Consequently, computational modelling can be used as an effective tool to design and/or optimise the catalytic exhaust aftertreatment system.
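At its core, a microkinetic surface mechanism of this kind reduces, for each adsorbed species, to an ODE for its fractional surface coverage built from the elementary adsorption, desorption and surface-reaction steps. The toy sketch below integrates a three-step Langmuir-Hinshelwood cycle with SciPy; the species, steps and rate constants are hypothetical and are far simpler than the 94-step Ag/Al2O3 mechanism described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rate constants for a toy Langmuir-Hinshelwood surface cycle:
#   A(g) + *  <-> A*          (adsorption / desorption)
#   B(g) + *  <-> B*          (adsorption / desorption)
#   A* + B*   ->  C(g) + 2*   (surface reaction; product desorbs immediately)
k_ads_A, k_des_A = 5.0, 1.0        # 1/(s*bar), 1/s
k_ads_B, k_des_B = 3.0, 0.5
k_rxn = 2.0                        # 1/s
p_A, p_B = 0.10, 0.10              # constant gas-phase partial pressures (bar)

def coverage_odes(t, theta):
    """Site-balance ODEs for the fractional coverages of A* and B*."""
    th_A, th_B = theta
    th_free = 1.0 - th_A - th_B                       # vacant-site fraction
    r = k_rxn * th_A * th_B                           # surface reaction rate
    dth_A = k_ads_A * p_A * th_free - k_des_A * th_A - r
    dth_B = k_ads_B * p_B * th_free - k_des_B * th_B - r
    return [dth_A, dth_B]

sol = solve_ivp(coverage_odes, (0.0, 100.0), [0.0, 0.0], method="LSODA", rtol=1e-8)
th_A, th_B = sol.y[:, -1]
print("steady-state coverages:", round(float(th_A), 3), round(float(th_B), 3),
      "  turnover rate:", round(k_rxn * float(th_A) * float(th_B), 4))
```

In a full model, coverage equations like these are coupled to the CFD solution of the channel flow, which is how quantities such as wall temperatures and surface coverages become available from the simulation even though they are hard to measure directly.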