About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
471

An investigation of the optical, visual and economic performance of the pseudophakic eye

Afsar, Asfa Jubeen January 2000 (has links)
No description available.
472

Reducing the risks of telehealthcare expansion through the automation of efficiency evaluation

Alexandru, Cristina Adriana January 2015 (has links)
Several European countries, including the UK, are investing in large-scale telehealthcare pilots to thoroughly evaluate the benefits of telehealthcare. Due to the high level of risk associated with such projects, it becomes desirable to be able to predict the success of telehealthcare systems in potential deployments, in order to inform investment and help save resources. An important factor for the success of any telehealthcare deployment is usability, as it helps to achieve the benefits of the technology through increased productivity, decreased error rates, and better acceptance. In particular, efficiency, one of the characteristics of usability, should be seen as a central measure of success, as the timely care of a high number of patients is one of the important claims of telehealthcare. Despite the recognized importance of usability, it is treated as secondary in the design of telehealthcare systems. The resulting problems are difficult to predict due to the heterogeneity of deployment contexts. This thesis proposes the automation of usability evaluation through the use of modelling and simulation techniques. It describes a generic methodology which can guide a modeller in reusing models for predicting characteristics of usability within different deployment sites. It also describes a modelling approach which can be used together with the methodology to run in parallel a user model, inspired by a cognitive architecture, and a system model, represented as a basic labelled transition system. The approach simulates a user working with a telehealthcare system, and within her environment, to predict the efficiency of the system and the work process surrounding it. The modeller can experiment with different inputs to the models in terms of user profile, workload, ways of working, and system design, to model different potential (real or hypothetical) deployments, and obtain efficiency predictions for each.
A comparison of the predictions helps analyse the effects of deployment changes on efficiency. The work is presented as an experimental investigation, but emphasises the great potential of modelling and simulation for helping to inform investment, reduce costs, mitigate risks and suggest changes necessary for improving the usability, and therefore the success, of telehealthcare deployments. My vision is that, if used commercially, the approaches presented in this thesis could help reduce the risks of scaling up telehealthcare deployments.
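The core idea of this abstract, running a user model in lockstep with a system model (a labelled transition system) to predict efficiency, can be sketched roughly as follows. Everything here (states, actions, timings, latencies) is hypothetical and far simpler than the thesis's cognitive-architecture-inspired models:

```python
import random

# Hypothetical system model: a basic labelled transition system,
# mapping (state, action-label) -> next state.
SYSTEM = {
    ("login", "authenticate"): "menu",
    ("menu", "open_patient"): "record",
    ("record", "save"): "menu",
}

# Hypothetical user model: a fixed action script with per-action
# think times in seconds -- a crude stand-in for a cognitive model.
SCRIPT = [("authenticate", 5.0), ("open_patient", 3.0), ("save", 2.0)]

def simulate(script, system, system_latency, n_patients=10, seed=0):
    """Run the user model against the system model in lockstep and
    return the total time to process n_patients (an efficiency proxy)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_patients):
        state = "login"
        for action, think in script:
            nxt = system.get((state, action))
            if nxt is None:
                raise ValueError(f"no transition for {action} in {state}")
            # user think time plus a stochastic system response time
            total += think + rng.expovariate(1.0 / system_latency)
            state = nxt
    return total

# Compare two hypothetical system designs differing only in latency.
fast = simulate(SCRIPT, SYSTEM, system_latency=0.5)
slow = simulate(SCRIPT, SYSTEM, system_latency=4.0)
print(f"fast design: {fast:.0f}s, slow design: {slow:.0f}s")
```

Comparing runs under different inputs (user profile, workload, system design) mirrors, in miniature, the deployment comparisons the thesis describes.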
473

Modelling generic access network components

Miklos, Zoltan 13 March 2006 (has links)
One file has been withheld at the author's request. Master of Science in Engineering - Engineering / Modelling of telecommunications access networks which concentrate traffic is essential for architectural studies, design and operational efficiency. This work develops the concept of an Intermediate Services Access Network (ISAN), an enhanced narrowband synchronous transfer mode access network that provides an evolutionary step from the existing POTS and N-ISDN access networks to Fibre to the x (FTTx) networks. Models of the ISAN are developed to support architectural and traffic studies. Generic components are identified from a study of several typical ISAN network architectures; they include intelligent nodes, transmission links and exchange interfaces. The modelling methodology first identifies resources in the access network and then models them as object classes. Entity-Relationship diagram techniques, defined by the International Telecommunication Union, are used to identify, decompose and represent components in an access network. Recurring components are termed generic components and have attributes that make them reusable. The classes developed consist of generic classes, and technology- or application-specific classes. Software classes are developed to represent traffic sources with selectable parameters including Poisson arrivals, negative exponential or lognormal holding times, and asymmetric originating and terminating models. The identified object classes are implemented in the object-oriented simulation language MODSIM III. An existing unidirectional ring network is simulated to quantify the traffic performance of this type of network under telephone traffic conditions. The ring network is further developed to enhance traffic capacity and performance under link failure conditions.
As an economic consideration, this hypothetical ring network uses a single backup link in the event of link failure. The network is simulated with different types of traffic (telephone, payphone and Internet dial-up traffic) and under link failure conditions to establish the grade of service.
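For a lost-calls-cleared system with Poisson arrivals and exponential holding times, grade of service can also be estimated analytically with the Erlang B recursion. The thesis uses MODSIM III simulation instead, so this is only an illustrative back-of-envelope with invented channel counts and traffic load:

```python
def erlang_b(channels, offered_erlangs):
    """Blocking probability B(N, A) via the numerically stable
    Erlang B recursion: B(0) = 1; B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

# Hypothetical ring dimensioning: 30 channels normally, only 15
# available when a link fails and traffic shares the backup link.
normal = erlang_b(30, offered_erlangs=20.0)
failure = erlang_b(15, offered_erlangs=20.0)
print(f"GoS normal: {normal:.4f}, under link failure: {failure:.4f}")
```

The analytic model captures the qualitative effect (blocking rises sharply when failure halves capacity), while simulation remains necessary for the mixed telephone/payphone/dial-up traffic the thesis studies.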
474

Bioartificial livers : theoretical methods to improve and optimize design

Davidson, Adam J. January 2011 (has links)
In this work, a mathematical modelling approach is taken to improve and optimize the designs of bioartificial liver (BAL) systems. BALs are an alternative therapy for the extremely serious condition of liver failure, for which liver transplant is currently the only viable option. As yet, large-scale clinical trials have not been successful enough for BALs to gain regulatory approval. Through the work in this report, it is envisaged that BAL design can be improved to the point where BALs can gain clinical acceptance. One of the main issues in BAL design is the provision of adequate oxygen to the cell mass. To this end, a mathematical model describing oxygen mass transport is developed based on the principle of Krogh cylinders. The results of this model are subsequently interpreted and presented in Operating Region charts, an image of a parameter space that corresponds to viable BAL designs. These charts allow several important design trends to be identified, e.g. numerous short, thin hollow fibres are favourable over fewer thicker, longer fibres. In addition, it is shown that a physiologically relevant cell number of more than 10% of the native liver cell mass can be supported in these devices under the right conditions. Subsequently the concept of the Operating Region is expanded to include zonation, a metabolic phenomenon where local oxygen tension is a primary modulator of liver cell function. It is found that zonation profiles can be well controlled and that, under standard conditions, a plasma flow rate of 185 ml/min to the BAL would distribute the three metabolic zones evenly. Finally, the principles of the Operating Region charts and zonation are applied to three existing commercial BAL designs: the HepaMate, BLSS and ELAD systems. In each case it could be seen that the default design of each system did not present an ideal environment for liver cells.
Through consideration of zonation profiles, each device design and operating parameters could be optimized to produce in vivo-like environments. In the case of the ELAD, reducing the plasma flow rate from 500 to 90 ml/min resulted in a balanced zonation profile. Overall, the work in this report has developed and detailed a series of tools that will assist a BAL designer in making judicious choices over bioreactor design and operating parameters. As a result, it is hoped that BALs can take a step forward towards clinical practice and ultimately saving lives.
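The Krogh-cylinder oxygen model this abstract builds on has a closed-form steady-state solution (the Krogh-Erlang equation) for zero-order consumption in a tissue annulus around a fibre. A minimal sketch with illustrative parameter values that are not taken from the thesis:

```python
import math

def krogh_po2(r, p_lumen, m, d_alpha, r_c, r_t):
    """Steady-state O2 partial pressure at radius r in a Krogh
    cylinder (Krogh-Erlang solution): zero-order consumption rate m,
    effective diffusivity d_alpha (diffusivity x solubility), fibre
    lumen radius r_c, outer tissue cylinder radius r_t."""
    return (p_lumen
            + m / (4.0 * d_alpha) * (r * r - r_c * r_c)
            - m * r_t * r_t / (2.0 * d_alpha) * math.log(r / r_c))

# Illustrative values only (units folded into the m/d_alpha ratio):
P_LUMEN = 100.0    # mmHg at the fibre wall
R_C = 100e-4       # 100 um fibre radius, in cm
M_OVER_D = 2.0e5   # consumption / effective diffusivity, mmHg/cm^2

# pO2 at the outer edge of a thin (100 um) vs thick (200 um) annulus:
p_thin = krogh_po2(200e-4, P_LUMEN, M_OVER_D, 1.0, R_C, r_t=200e-4)
p_thick = krogh_po2(300e-4, P_LUMEN, M_OVER_D, 1.0, R_C, r_t=300e-4)
```

The worst-supplied point is the outer edge; scanning geometry for the region where that pO2 stays above a viability threshold is, in spirit, how an Operating Region chart is constructed.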
475

Large amplitude forced roll motion in two dimensions : experiments and theory

Tang, Alan Shung-tse January 1991 (has links)
No description available.
476

Modelagem de workflow utilizando um modelo de dados temporal orientado a objetos com papéis

Nicolao, Mariano January 1998 (has links)
One of the greatest problems in workflow modelling is the use of conceptual modelling techniques specific to each workflow system; there is no consensually accepted model. This situation, a consequence of the strongly competitive environment in this market, leads to the omission of many conceptually important characteristics from the techniques generally used, which remain closely tied to implementation models. An important aspect to be considered, and the central subject of this work, is formal workflow modelling. This dissertation presents a workflow modelling technique using TF-ORM (Temporal Functionality in Objects with Roles Model) as the reference data model. The technique develops a rigorous specification of workflow at the conceptual level, formalising in one model its internal behaviour (the cooperation and interaction among tasks) and its relationship with the environment (the assignment of work tasks to performers). In this model, constructions were developed to represent, in an efficient form, the modularity and parallelism of the activities. A textual workflow definition language is presented. Additionally, formal workflow descriptions are used to generate the workflow data schema and the set of rules for its management. The rules paradigm offers a convenient formalism to express reactive computations influenced by external events generated outside the Workflow Management System (WFMS). Finally, an analysis of some commercial tools is carried out to validate the practicality of the conceptual models developed. The main concepts involved in workflow are described and classified so as to enable validation of both the concepts and the modelling, through a case study and the use of a commercial system.
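The event-condition-action (ECA) flavour of the rules paradigm mentioned in this abstract can be sketched minimally; the rule vocabulary and workflow states below are invented for illustration, not taken from TF-ORM:

```python
# Minimal event-condition-action (ECA) sketch: each rule is a
# (triggering event, condition predicate, action) triple.
def make_rules():
    return [
        ("task_done", lambda wf: wf["task"] == "review",
         lambda wf: wf.update(task="approve")),
        ("task_done", lambda wf: wf["task"] == "approve",
         lambda wf: wf.update(task="archive", closed=True)),
        # Reactive rule for an external event from outside the WFMS:
        ("external_cancel", lambda wf: not wf.get("closed", False),
         lambda wf: wf.update(task="cancelled", closed=True)),
    ]

def dispatch(workflow, event, rules):
    """Fire the first rule matching the event whose condition holds."""
    for ev, cond, act in rules:
        if ev == event and cond(workflow):
            act(workflow)
            break  # first matching rule wins in this sketch
    return workflow

wf = {"task": "review", "closed": False}
dispatch(wf, "task_done", make_rules())        # review -> approve
dispatch(wf, "external_cancel", make_rules())  # external event cancels
```

The point of the formalism is visible even at this scale: the reaction to an externally generated event is declared as data, not buried in control flow.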
477

Global analysis of predicted and observed dynamic topography

Richards, Frederick David January 2019 (has links)
While the bulk of topography on Earth is generated and maintained by variations in the thickness and density of crust and lithosphere, a significant time-variable contribution is expected as a result of convective flow in the underlying mantle. For over three decades, this dynamic topography has been calculated numerically from inferred density structure and radial viscosity profiles. Resulting models predict ±2 km of long wavelength (i.e., ~ 20,000 km) dynamic topography with minor contributions at wavelengths shorter than ~ 5,000 km. Recently, observational studies have revealed that, at the longest wavelengths, dynamic topography variation is ~ 30% of that predicted, with ±1 km amplitudes recovered at shorter wavelengths. Here, the existing database of water-loaded basement depths is streamlined, revised and augmented. By fitting increasingly sophisticated thermal models to a combined database of these oceanic basement depths and corrected heat flow measurements, the average thermal structure of oceanic lithosphere is constrained. Significantly, optimal models are consistent with independent geochemical and seismological constraints whilst yielding similar values of mantle potential temperature and plate thickness, irrespective of whether heat flow, subsidence or both are fit. After recalculating residual depth anomalies relative to optimal age-depth subsidence and combining them with continental constraints from gravity anomalies, a global spherical harmonic representation is generated. Although long wavelength dynamic topography increases by ~ 40% in the revised observation-based model, spectral analysis confirms that a fundamental discrepancy between observations and predictions remains. Significantly, residual depth anomalies reveal a ~ 4,000 km-scale eastward tilt across the Indian Peninsula. This asymmetry extends onshore from the high-elevation Western Ghats in the west to the Krishna-Godavari floodplains in the east.
Calibrated inverse modelling of drainage networks suggests that the tilt of the peninsula grew principally in Neogene times, with vertical motions linked to asthenospheric temperature anomalies. Uplift rates of up to 0.1 mm a⁻¹ place important constraints on the spatio-temporal evolution of dynamic topography and suggest that rates of transient vertical motion exceed those predicted by many modelling studies. Most numerical models excise the upper ~ 300 km of Earth's mantle and are unable to reconstruct the wavelength and rate of uplift observed across Peninsular India. By contrast, through conversion of upper mantle shear wave velocities to density using a calibrated anelastic parameterisation, it is shown that shorter wavelength (i.e., ≤ 5,000 km) dynamic topography can mostly be explained by ±150°C asthenospheric temperature anomalies. Inclusion of anelastically corrected density structure in whole-mantle instantaneous flow models also serves to reduce the discrepancy between predictions and observations of dynamic topography at long wavelengths. Residual mismatch between observations and predictions is further improved if the basal 300-600 km of large low shear wave velocity regions in the deep mantle are geochemically distinct and negatively buoyant. Finally, inverse modelling of geoid, dynamic topography, gravity and core-mantle boundary topography observations using adapted density structure suggests that geodynamic constraints can be acceptably fit using plausible radial viscosity profiles, contradicting a long-standing assertion that modest long wavelength dynamic topography is incompatible with geoid observations.
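The residual depth anomalies central to this abstract are deviations of observed basement depth from an age-depth subsidence model. A rough sketch using the textbook half-space cooling law with standard coefficients, not the optimal plate model actually fitted in the thesis:

```python
import math

def halfspace_depth(age_myr, ridge_depth_m=2600.0, coeff=350.0):
    """Water-loaded basement depth (m) predicted by half-space
    cooling, d = d_ridge + c*sqrt(age). Textbook coefficients
    (~2600 m ridge depth, c ~ 350 m/Myr^0.5), for illustration."""
    return ridge_depth_m + coeff * math.sqrt(age_myr)

def residual_depth_anomaly(observed_depth_m, age_myr):
    """Positive residual = seafloor shallower than the cooling
    prediction, a candidate signal of dynamic support."""
    return halfspace_depth(age_myr) - observed_depth_m

# Hypothetical example: 50 Myr old seafloor observed at 4500 m depth.
anomaly = residual_depth_anomaly(observed_depth_m=4500.0, age_myr=50.0)
```

Mapping such residuals globally and expanding them in spherical harmonics is, schematically, how the observation-based dynamic topography model is built.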
478

The Useful Elements of Pre-principalship Preparation

Roberts, Barry Llewellyn January 2007 (has links)
The importance of the role of the principal in good schools is acknowledged by many sources. The preparation of new principals is therefore an important factor in ensuring children are educated in good schools. New Zealand does not have a formal system of principal preparation. The purpose of this study was to examine experiences of pre-principalship preparation with the aim of discovering those activities and developments that were useful in assisting teachers to make the transition to successful principalship. The research question addressed in this study is: What are the elements of pre-principalship preparation that are most useful for potential and aspiring principals in furthering their career aims? Using a qualitative methodology, a group of people who had attended the Aspiring and Potential Principals' Pilot run by the School of Education at the University of Waikato were questioned, using semi-structured interviews, about their experiences. Five of the six were holding principal positions; the sixth was in a deputy principal's position and had some relieving principal experience. The results indicated that while potential principals had varying needs because of their varied backgrounds, six experiences were useful for all. These included attendance at some form of targeted principal preparation programme, a background of ongoing professional learning, developing networks, developing successful mentoring, experience of models of principalship, and the support of 'family'. Different people benefited from these experiences to different degrees, but the experiences were common to all. It is hoped that this research will help guide professional development for the potential and aspiring principals of tomorrow.
479

Modelling of phosphorus-donor based silicon qubit and nanoelectronic devices

Escott, Christopher Colin, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2008 (has links)
Modelling of phosphorus donor-based silicon (Si:P) qubit devices and mesoscopic single-electron devices is presented in this thesis. This theoretical analysis is motivated by the use of Si:P devices for scalable quantum computing. Modelling of Si:P single-electron devices (SEDs) using readily available simulation tools is presented. The mesoscopic properties of single and double island devices with source-drain leads are investigated through ion implantation simulation (using Crystal-TRIM), 3D capacitance extraction (FastCap) and single-electron circuit simulation (SIMON). Results from modelling two generations of single and double island Si:P devices are given, which are shown to accurately capture their charging behaviour. The trends extracted are used to forecast limits to the reduction in size of this Si:P architecture. Theoretical analysis of P2+:Si charge qubits is then presented. Calculations show that large ranges for the SET measurement signal, Δq, and geometric ratio factor, α, are possible given the 'top-down' fabrication procedure. The charge qubit energy levels are calculated using the atomistic simulator NEMO 3-D coupled to TCAD calculations of the electrostatic potential distribution, further demonstrating the precise control required over the position of the donors. Theory has also been developed to simulate the microwave spectroscopy of P2+:Si charge qubits in a decohering environment using Floquet theory. This theory uses TCAD finite-volume modelling to incorporate realistic fields from actual device gate geometries. The theory is applied to a specific P2+:Si charge qubit device design to study the effects of fabrication variations on the measurement signal. The signal is shown to be a sensitive function of donor position. Design and analysis of two different spin qubit architectures concludes this thesis. The first uses a high-barrier Schottky contact, an SET and an implanted P donor to create a double-well suitable for implementation as a qubit.
The second architecture is a MOS device that combines an electron reservoir and SET into a single structure, formed from a locally depleted accumulation layer. The design parameters of both architectures are explored through capacitance modelling, TCAD simulation, tunnel barrier transmission and NEMO 3-D calculations. The results presented strengthen the viability of each architecture, and show a large Δq (> 0.1e) can be expected.
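The charging behaviour these capacitance models capture is governed by the single-electron charging energy, E_C = e²/2C_Σ, which must dominate thermal energy for Coulomb blockade to be observable. A quick sketch with a hypothetical 10 aF total island capacitance (not a value from the thesis):

```python
E = 1.602176634e-19   # elementary charge, C
KB = 1.380649e-23     # Boltzmann constant, J/K

def charging_energy(c_total):
    """Island charging energy E_C = e^2 / (2 * C_total), in joules."""
    return E * E / (2.0 * c_total)

# Hypothetical 10 aF total island capacitance.
e_c = charging_energy(10e-18)

# Compare E_C to k_B*T at liquid-helium and dilution-fridge temperatures.
ratio_4k = e_c / (KB * 4.2)
ratio_100mk = e_c / (KB * 0.1)
```

Shrinking the island reduces C_Σ and raises E_C, which is why the size-reduction limits forecast from the capacitance trends matter for device operation temperature.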
480

A form based meta-schema for information and knowledge elicitation

Wijesekera, Dhammika Harindra, n/a January 2006 (has links)
Knowledge is considered important for the survival and growth of an enterprise. Currently, knowledge is stored in various places, including the bottom drawers of employees. The human being is considered to be the most important knowledge provider. Over the years, knowledge-based systems (KBS) have been developed to capture and nurture the knowledge of domain experts. However, such systems were considered to be separate and different from traditional information systems development. Many KBS development projects have failed. The main causes of such failures have been recognised as the difficulties associated with the process of knowledge elicitation, in particular the techniques and methods employed. On the other hand, the main emphasis of information systems development has been in the areas of data and information capture relating to transaction-based systems. For knowledge to be effectively captured and nurtured, it needs to be part of the information systems development activity. This thesis reports on a process of investigation and analysis conducted into the areas of information, knowledge and their overlap. This research advocates a hybrid approach, where knowledge and information capture are considered as one in a unified environment. A meta-schema design based on Formal Object Role Modelling (FORM), independent of implementation details, is introduced for this purpose. This is considered to be a key contribution of this research activity. Both information and knowledge are expected to be captured through this approach. Metadata types are provided for the capture of business rules, and they form part of the knowledge base of an organisation. The integration of knowledge with data and information is also described. XML is recognised by many as the preferred data interchange language, and it is investigated here for the purpose of rule interchange.
This approach is expected to enable organisations to interchange business rules and their meta-data, in addition to data and their schema. During interchange, rules can be interpreted and applied by receiving systems, thus providing a basis for intelligent behaviour. With the emergence of new technologies such as the Internet, the modelling of an enterprise as a series of business processes has gained prominence. Enterprises are moving towards integration, establishing well-described business processes within and across enterprises, to include their customers and suppliers. The purpose is to derive a common set of objectives and benefit from potential economic efficiencies. The suggested meta-schema design can be used in the early phases of requirements elicitation to specify, communicate, comprehend and refine various artefacts. This is expected to encourage domain experts and knowledge analysts to work towards describing each business process and their interactions. Existing business processes can be documented and business efficiencies can be achieved through a process of refinement. The meta-schema design allows for a 'systems view' and the sharing of such views, thus enabling domain experts to focus on their area of specialisation whilst having an understanding of other business areas and their facts. The design also allows for synchronisation of the mental models of experts and the knowledge analyst. This has been a major issue with KBS development and one of the main reasons for the failure of such projects. The intention of this research is to provide a facility to overcome this issue. The natural-language-based FORM encourages verbalisation of the domain, hence increasing the understanding and comprehension of available business facts.
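Business-rule interchange in XML, as this abstract investigates, might look like the following minimal sketch. The element vocabulary (`rule`, `if`, `then` and their attributes) is invented for illustration and is not the thesis's schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical rule markup a sending system might emit alongside
# its data and schema.
RULE_XML = """<rule id="R1">
  <if fact="Order.total" op="gt" value="1000"/>
  <then action="require" target="ManagerApproval"/>
</rule>"""

def parse_rule(xml_text):
    """Read a rule document into a plain dict that a receiving
    system could interpret and apply."""
    root = ET.fromstring(xml_text)
    return {
        "id": root.get("id"),
        "if": dict(root.find("if").attrib),
        "then": dict(root.find("then").attrib),
    }

rule = parse_rule(RULE_XML)
```

Because the rule travels as structured markup rather than code, the receiving system is free to map it onto its own rule engine, which is the basis-for-intelligent-behaviour claim above.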
