601

Towards ontology design patterns to model multiple classification criteria of domain concepts in the Semantic Web

Rodriguez Castro, Benedicto January 2012 (has links)
This thesis explores a recurring modeling scenario in ontology design: the notion of real-world concepts that can be classified according to multiple criteria. Current ontology modeling guidelines do not explicitly consider this aspect in the representation of such concepts. This void leaves ample room for ad-hoc practices that can lead to unexpected or undesired results in ontology artifacts. The aim is to identify best practices and design patterns to represent such concepts in OWL DL ontologies suitable for deployment in the Web of Data and the Semantic Web. To assist with these issues, an initial set of basic design guidelines is put forward that mitigates the opportunity for ad-hoc modeling decisions in the development of ontologies for the problem scenario described. These guidelines rely on an existing simplified methodology for facet analysis from the field of Library and Information Science. This facet analysis produces a Faceted Classification Scheme (FCS) for the concept in question, in which, in most cases, a facet corresponds to a classification criterion. The Value Partition, the Class As Property Value and the Normalisation Ontology Design Patterns (ODPs) are revisited to produce an ontology representation of a FCS. A comparative analysis between a FCS and the Normalisation ODP in particular revealed key similarities between the elements in the generic structure of both knowledge representation paradigms. These similarities make it possible to establish a series of mappings to transform a FCS into an OWL DL ontology that contains a valid representation of the classification criteria involved in the characterization of the domain concept. An existing FCS example in the domain of "Dishwasher Detergent" and existing ontology examples in the domains of "Pizza", "Wine" and "Fault" (in the context of a computer system) are used to illustrate the outcome of this research.
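
A minimal sketch of the kind of facet-to-ontology mapping described above, written with the rdflib Python library. The namespace, facet names and class names are hypothetical illustrations, not taken from the thesis; the sketch only shows the general idea of keeping each classification criterion as its own primitive class tree, loosely in the spirit of the Normalisation ODP.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL

# Hypothetical namespace and class names, for illustration only.
EX = Namespace("http://example.org/detergent#")

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# Root domain concept from the faceted classification scheme (FCS).
g.add((EX.DishwasherDetergent, RDF.type, OWL.Class))

# Each facet (classification criterion) becomes an independent class
# hierarchy, kept separate from the domain concept rather than tangled
# into a single subclass tree.
facets = {
    "Form": ["Powder", "Gel", "Tablet"],
    "SkinEffect": ["Hypoallergenic", "Standard"],
}
for facet, values in facets.items():
    facet_cls = EX[facet]
    g.add((facet_cls, RDF.type, OWL.Class))
    for value in values:
        value_cls = EX[value]
        g.add((value_cls, RDF.type, OWL.Class))
        g.add((value_cls, RDFS.subClassOf, facet_cls))

print(g.serialize(format="turtle"))
```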
602

Methods for increased energy and flux in high harmonic generation

Butcher, Thomas J. January 2012 (has links)
High harmonic generation (HHG) is a nonlinear light-matter interaction that results in the generation of high-order harmonics of a driving optical field. It is routinely used to generate coherent short-wavelength radiation in the soft x-ray and extreme ultraviolet (XUV) regimes. HHG-based XUV sources require a highly intense driving pulse to be focused into a target gas, typically within a gas cell, gas jet or hollow capillary. They can be used for a variety of applications, one of which is nanoscale imaging. The work presented in this thesis focuses on the development of two high-flux HHG sources for use in tabletop nanoscale imaging: a capillary-based HHG system using a Ti:Sapphire laser and a gas-cell-based HHG system using an Yb-doped fibre laser. The manufacture and use of a 7 cm hollow-core capillary in HHG is described. The propagation of the pump pulse is modelled using a new nonlinear propagation model and compared to experimental results. The pulse is found to undergo self-compression in a new regime of high-ionisation pulse compression, reducing in length from 53 fs to 28 fs, with post-compression reducing this further to 15 fs. The XUV spectrum from the 7 cm capillary is measured and its dependence on gas pressure discussed using calculations of the XUV transmission within the capillary. Using the observations made on the 7 cm capillary, a new, more efficient 4.5 cm capillary is designed and manufactured. Comparison between the two capillaries shows an increase in flux for the new capillary design of more than an order of magnitude, with a calculated value of 5.3x10^12 ph harm^-1 s^-1 cm^-2, one of the highest in the world. A gas cell is used in the Yb-doped fibre laser based HHG source and the XUV signal is measured using an XUV photodiode. The XUV signal is characterised by measuring its dependence on focal position, gas pressure and pump laser power. A novel method of increasing the flux by twisting a second lens outside the vacuum chamber is discovered and found to double the measured signal. The maximum flux for this fibre laser based HHG source is calculated to be 2.2x10^12 ph s^-1, the highest measured for a fibre-based HHG source.
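
A rough illustration of the pressure dependence mentioned above, assuming a simple Beer-Lambert model of XUV reabsorption in a uniform gas column. The cross-section, column length and temperature below are placeholder values for a generic absorber, not the values or methods used in the thesis.

```python
import numpy as np

K_B = 1.380649e-23        # Boltzmann constant, J/K

def xuv_transmission(pressure_mbar, length_m, sigma_m2, temperature_k=300.0):
    """Fraction of XUV light transmitted through a uniform absorbing gas column."""
    pressure_pa = pressure_mbar * 100.0                    # 1 mbar = 100 Pa
    number_density = pressure_pa / (K_B * temperature_k)   # ideal gas, m^-3
    return np.exp(-number_density * sigma_m2 * length_m)   # Beer-Lambert law

# Example: placeholder cross-section over a 7 cm column at a few backing pressures.
for p in (10, 30, 100):
    t = xuv_transmission(p, length_m=0.07, sigma_m2=1e-22)
    print(f"{p:4d} mbar -> transmission {t:.2e}")
```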
603

An analogue approach for the processing of biomedical signals

Mangieri, Eduardo January 2012 (has links)
Constant device scaling has significantly boosted electronic systems design in the digital domain, enabling the incorporation of more functionality within a small silicon area while allowing high-speed computation. This trend has been exploited to develop high-performance miniaturised systems in a number of application areas such as communication, sensor networks, mainframe computers and biomedical information processing. Although successful, the associated costs come in the form of high leakage power dissipation and reduced system reliability. With increasing customer demand for smarter and faster technologies and with the advent of pervasive information processing, these issues may prove to be limiting factors for the application of traditional digital design techniques. Furthermore, as the limit of device scaling nears, performance enhancement for the conventional digital design methodology cannot be achieved any further unless innovations in new materials and new transistor design are made. To this end, an alternative design methodology that may enable performance enhancement without depending on device scaling is much sought after today. Analogue design is one such alternative that has recently gained considerable interest. Although it is well understood that several roadblocks still have to be overcome before analogue-based system design for information processing becomes a mainstream design technique (e.g., the lack of automated design tools, noise performance, and the efficient implementation of passive components on silicon), it may offer a faster way of realising a system with very few components and may therefore have positive implications for system performance. The main aim of this thesis is to explore possible ways of processing information using analogue design techniques, particularly in the field of biomedical systems.
604

Simulating the colour of port wine stain skin

Lister, Thomas January 2013 (has links)
Laser treatment is currently considered the therapy of choice for Port Wine Stain (PWS) lesions, but the response is poor or treatment is ineffective for around half of patients. It is proposed in this thesis that improvements to the effectiveness of laser treatment can be achieved through the acquisition of estimated PWS vessel number density, depth and diameter for each individual lesion. Information regarding PWS vessel architecture is found to be contained within the colour of the lesion. Presented in this thesis is a method of extracting this information through colour measurements and the inverse application of a skin model. Colour measurements are performed on 14 participants using a Konica-Minolta CM2600d spectrophotometer employing a xenon flashlamp illumination source and an integrating sphere. Light transport is simulated through an 8-layer mathematical skin model that includes horizontal, pseudo-cylindrical PWS blood vessels, using a new Monte Carlo programme. Within the programme, model parameters were adjusted in an iterative process, and skin colour was reproduced with a mean discrepancy of 1.9% reflection for clinically normal skin (24 datasets) and 2.4% for PWS skin (25 datasets). The programme estimated anatomical properties of the measured regions of skin, yielding epidermal melanin volume fractions from 0.4% to 3.3% and mean melanosome diameters from 41 nm to 384 nm across the participant group. The response to laser treatment was assessed for 10 participants through colour measurements taken immediately before and at least 6 weeks after treatment, and through expert analysis of photographs taken at these times for 9 participants. Treatment response was not found to correlate directly with the pre-treatment melanin parameters estimated by the programme. Mean depths, diameters and number densities of PWS vessels were also estimated by the programme before and after treatment. These parameters were compared to data obtained from Optical Coherence Tomography (OCT) images for 5 participants. Number densities and diameters predicted by the simulation varied by no more than 10% from the values determined by OCT for 4 and 5 out of 7 regions respectively. However, mean depths predicted by the simulation did not correspond with those determined by OCT. This may be a result of the limited contribution of deeper vessels to the colour of PWS skin. Predicted PWS parameters were compared to treatment response assessed by colour measurement for 10 participants and by photographic analysis for 9 of these. Predicted vessel number densities were not found to correspond with treatment response. Vessel diameters predicted by the simulation correlated with treatment response when compared with the pulse lengths selected for treatment. Optical coefficients derived from the skin model were used to estimate appropriate laser treatment radiant exposures at the predicted mean vessel depths, and these radiant exposures corresponded strongly with the treatment response. Suggestions for improving the prediction of melanosome diameters through changes in the adjacent-skin minimisation procedure within the programme are discussed. The apparent underestimation of PWS blood vessel number densities and mean depths (compared to biopsy studies) may be a result of the reduced influence of deeper PWS vessels upon skin colour. Further investigation, including modifications to the PWS vessel minimisation procedure within the programme, would be necessary to determine whether improvements in these predictions are achievable. The results of the study show that the new Monte Carlo programme is capable of extracting, from measurements of skin colour, realistic estimates of PWS skin characteristics, which can be used to predict treatment response and therefore inform treatment parameters for an individual PWS.
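
For orientation, a heavily simplified sketch of the kind of Monte Carlo weighting scheme such a programme builds on, reduced here to a single homogeneous layer with isotropic scattering. The thesis's model uses 8 layers, explicit PWS vessel geometry and far more realistic optics; the coefficients below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder optical properties of a single slab of tissue.
MU_A, MU_S = 0.3, 10.0           # absorption / scattering coefficients, mm^-1
MU_T = MU_A + MU_S
THICKNESS = 2.0                   # slab thickness, mm

def run_photon():
    """Return the photon weight escaping the top surface (diffuse reflectance)."""
    z, uz, weight = 0.0, 1.0, 1.0             # start at surface, heading downward
    while weight > 1e-4:
        step = -np.log(rng.random()) / MU_T    # sample free path length, mm
        z += uz * step
        if z < 0.0:                            # escaped upwards: counts as reflectance
            return weight
        if z > THICKNESS:                      # transmitted out of the slab
            return 0.0
        weight *= MU_S / MU_T                  # remove the absorbed fraction of weight
        uz = 2.0 * rng.random() - 1.0          # isotropic re-scattering (simplification)
    return 0.0

n_photons = 20_000
reflectance = sum(run_photon() for _ in range(n_photons)) / n_photons
print(f"Estimated diffuse reflectance: {reflectance:.3f}")
```

Repeating such a simulation per wavelength and adjusting the layer parameters until the modelled reflectance matches the measured spectrum is, in outline, what an inverse application of a skin model entails.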
605

Parallel discrete event simulation on the SpiNNaker engine

Bai, Chuan January 2013 (has links)
The SpiNNaker engine is a multiprocessor system designed with a scalable interconnection system to perform real-time neural network simulation. This scalability gives the system the potential to provide high computational power, making it suitable for simulating certain large-scale systems, such as neural networks. In addition, biological neural systems are intrinsically non-deterministic, and a number of design axioms of SpiNNaker make it ideally suited to the simulation of systems with such properties. Interesting though they are, the non-deterministic attributes of SpiNNaker-based simulation are not the focus of this thesis. The high computational power available, coupled with the extremely low inter-chip communication cost, makes SpiNNaker an attractive platform for other application areas in addition to its principal goal. One such problem is parallel discrete event simulation (PDES), which is the focus of this work. Discrete event simulation is a simple yet powerful algorithmic technique. Parallel discrete event simulation, on the other hand, is much more complicated owing to the need to keep simulation data synchronised in a distributed environment. This property of PDES makes it a suitable candidate for evaluating generic simulation capability. Based on this insight, this thesis evaluates the generic simulation capability of the SpiNNaker platform using a specially built framework running on a conventional parallel processing cluster to model the actual SpiNNaker system. In addition, a novel load balancing technique is introduced and evaluated.
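
A minimal sketch of the event loop that a single logical process in a discrete event simulation runs; the difficulty PDES introduces, and that this thesis addresses, is coordinating many such queues distributed across cores so that simulated time stays consistent. The event names below are illustrative only.

```python
import heapq

class Simulator:
    """Sequential discrete event simulation core for one logical process."""

    def __init__(self):
        self.now = 0.0
        self.queue = []        # heap of (timestamp, seq, handler, payload)
        self._seq = 0          # tie-breaker so equal timestamps pop deterministically

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self.queue, (self.now + delay, self._seq, handler, payload))
        self._seq += 1

    def run(self, until):
        # Process events in non-decreasing timestamp order up to the horizon.
        while self.queue and self.queue[0][0] <= until:
            self.now, _, handler, payload = heapq.heappop(self.queue)
            handler(self, payload)

def spike_arrival(sim, neuron_id):
    print(f"t={sim.now:5.2f}  spike arrives at neuron {neuron_id}")
    if sim.now < 3.0:                  # toy behaviour: re-fire after 1 time unit
        sim.schedule(1.0, spike_arrival, neuron_id)

sim = Simulator()
sim.schedule(0.5, spike_arrival, 7)
sim.run(until=10.0)
```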
606

Dielectrics for high temperature superconducting applications

Truong, L. H. January 2013 (has links)
This thesis is concerned with the development of condition monitoring for future designs of high temperature superconducting (HTS) power apparatus. In particular, the use of UHF sensing for detecting partial discharge (PD) activity within HTS apparatus has been investigated. The results obtained indicate that fast current pulses during PD in liquid nitrogen (LN2) radiate electromagnetic waves which can be captured by the UHF sensor. PD during a negative streamer in LN2 appears in the form of a series of pulses less than 1 μs apart. This sequence cannot be observed using conventional detection methods due to their bandwidth limitations; instead, a slowly damped pulse is recorded which shows the total amount of charge transferred during this period. A study of PD streamer development within LN2 has been undertaken that reveals the characteristics of pre-breakdown phenomena in LN2. For negative streamers, when the electric field exceeds a threshold value, field emission from the electrode becomes effective, leading to the formation of initial cavities. Breakdown occurs within these gaseous bubbles and results in the development of negative streamers. For positive streamers, the process is much less well understood due to the lack of initial electrons. However, from the recorded current pulses and shadowgraphs, the physical mechanism behind positive streamer development is likely to be a more direct process, such as field ionisation, compared with the step-wise expansion in the case of negative streamers. The mechanisms that cause damage to solid dielectrics immersed in LN2 have also been investigated. The results obtained indicate that pre-breakdown streamers can cause significant damage to the solid insulation barrier. Damage is the result of charge bombardment and mechanical forces rather than thermal effects. Inhomogeneous materials, such as glass-fibre-reinforced plastic (GRP), tend to have surface defects which can create local trapping sites. The trapped charges, when combined with those from streamers, can create much larger PD events. Consequently, the damage observed on GRP barriers is much more severe than that on PTFE barriers under similar experimental conditions. The design of future HTS power apparatus must therefore consider this degradation phenomenon in order to improve the reliability of the insulation system.
607

Spectroscopic analysis of nanodielectric interfaces

Yeung, C. January 2013 (has links)
Polymeric nanocomposites have received an exceptional amount of attention in recent years owing to their potential for enhanced properties. The use of nanosized phases in composite materials, as opposed to their microsized counterparts, delivers characteristics which allow nanodielectric systems to operate with increased performance and improved efficiency. The requirements of the polymeric system can easily be tailored to suit specific applications with as little as 2 wt.% filler loading, whilst maintaining the typical weight of the virgin material. With the transition from micrometric to nanometric phases, the volume of the interfacial region increases dramatically, and this is where the mechanisms behind nanocomposite behaviour are believed to operate. Given the potential of nanodielectrics, in-depth study of the filler-matrix interface is fundamental. Many studies have already used organosilanes as coupling agents; however, few have treated the quantity of organosilane as a variable parameter, or compared the use of hydrous and anhydrous functionalisation methods. This study investigates the consequences of introducing differently functionalised nanosilicas into epoxy systems. A number of spectroscopic techniques (Raman spectroscopy, Fourier transform infrared spectroscopy and combustion analysis) were employed to quantify the level of modification on the surface of silica nanoparticles, before mixing methods were developed in an attempt to reach nanoparticle homogeneity in an epoxy matrix. Scanning electron microscopy was employed to investigate the dispersion state of the filler with respect to the degree of functionalisation, whilst data from AC breakdown studies, differential scanning calorimetry and dielectric spectroscopy were analysed to determine the effects of differently functionalised nanosilica in a dielectric system. The investigation shows how condensation reactions within the interphase influence dielectric behaviour, and highlights how changes in the stoichiometry of the epoxy system alter the polymer architecture and thereby affect the electrical properties of the nanocomposites. Further studies explore the use of confocal Raman spectroscopy as a tool for probing the nanofiller-matrix interface. A simulation based on the scattering of incident photons was compared with empirical data from a range of dielectric films; modifications to the scattering-photon approach relate physically obtained values for bulk attenuation directly to those observed in confocal Raman depth profiles. Although the revised model was able to produce confocal Raman depth profiles that closely match experimental data from the nanocomposite films, the nature of nanoparticle agglomeration during functionalisation and the typical resolution of confocal Raman systems do not allow the detection of chemical changes on the filler.
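
A small illustration of the general idea of relating bulk attenuation to a confocal Raman depth profile, assuming simple exponential round-trip attenuation of the signal blurred by a Gaussian axial instrument response. This is not the model developed in the thesis; all parameter values are placeholders.

```python
import numpy as np

MU = 0.05                     # effective attenuation coefficient, 1/um (placeholder)
AXIAL_FWHM = 4.0              # confocal axial resolution, um (placeholder)
FILM_THICKNESS = 50.0         # film thickness, um (placeholder)

z = np.linspace(-20, 100, 600)                      # nominal focus depth, um
sigma = AXIAL_FWHM / 2.355                          # FWHM -> Gaussian sigma
inside = (z >= 0) & (z <= FILM_THICKNESS)

# Ideal signal from depth z: unity inside the film, weighted by round-trip attenuation.
ideal = np.where(inside, np.exp(-2 * MU * np.clip(z, 0, None)), 0.0)

# Blur with the axial response to approximate what the confocal measurement sees.
kernel_z = np.arange(-5 * sigma, 5 * sigma, z[1] - z[0])
kernel = np.exp(-0.5 * (kernel_z / sigma) ** 2)
kernel /= kernel.sum()
measured = np.convolve(ideal, kernel, mode="same")

for depth in (0, 10, 25, 50):
    idx = np.argmin(np.abs(z - depth))
    print(f"z = {depth:3d} um  ideal {ideal[idx]:.3f}  blurred {measured[idx]:.3f}")
```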
608

Variation and reliability in digital CMOS circuit design

Ghahroodi, Massoud January 2014 (has links)
The silicon chip industry continues to provide devices with feature sizes at Ultra-Deep-Sub-Micron (UDSM) dimensions. This results in higher device density and lower power and cost per function. While this trend is positive, there are a number of negative side effects, including increased device parameter variation, increased sensitivity to soft errors, and lower device yields. The lifetime of next-generation devices is also decreasing due to lower reliability margins and shorter product life cycles. This thesis presents an investigation into the challenges of UDSM CMOS circuit design, with a review of the research conducted in this field. This investigation has led to the development of a methodology for determining the timing vulnerability factors of UDSM CMOS, leading to a more realistic definition of the Window of Vulnerability (WOV) for Soft Error Rate (SER) computation. We present an implementation of a radiation-hardened 32-bit pipelined processor as well as two novel gate-level radiation hardening techniques. We present a Single Event Upset (SEU) tolerant flip-flop design with 38% less power overhead and 25% less area overhead at 65 nm technology, compared to the conventional Triple Modular Redundancy (TMR) technique for flip-flop design. We also propose an approach for in-field repair (IFR) by trading area for reliability: in the case of permanent faults, spare logic blocks replace the faulty blocks on the fly. The simulation results show that by tolerating approximately 70% area overhead and less than 18% power overhead, reliability is increased by a factor of 10 to 100 for various component failure rates.
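
A back-of-the-envelope sketch of why TMR-style redundancy helps: with independent upsets of probability p per flip-flop per cycle, a majority-voted triplet only fails when at least two copies are upset, giving roughly 3p^2. The probability used is illustrative, not a measurement from the thesis, and the sketch says nothing about the thesis's lower-overhead SEU-tolerant design.

```python
import random

def tmr_failure_prob(p, trials=1_000_000, rng=random.Random(1)):
    """Monte Carlo estimate of a TMR cell failing (>= 2 of 3 copies upset)."""
    failures = 0
    for _ in range(trials):
        upsets = sum(rng.random() < p for _ in range(3))
        if upsets >= 2:                 # majority voter is outvoted
            failures += 1
    return failures / trials

p = 1e-2                                # illustrative per-flip-flop upset probability
print(f"single flip-flop upset probability : {p:.2e}")
print(f"TMR failure probability (simulated): {tmr_failure_prob(p):.2e}")
print(f"TMR failure probability (analytic) : {3*p**2 - 2*p**3:.2e}")
```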
609

Towards a systematic process for modelling complex systems in event-B

Alkhammash, Eman January 2014 (has links)
Formal methods are mathematical techniques used for developing large systems. The complexity of growing systems poses an increasing challenge to formal development and requires significant improvement of formal techniques and tool support. Event-B is a formal method for modelling and reasoning about systems, and the Rodin platform is an open tool that supports Event-B specification and verification. This research aims to address some of the challenges in modelling complex systems. The main challenges addressed in this thesis cover three aspects: managing the complexity of large systems, bridging the gap between the requirements and the formal models, and supporting the reuse of models and their proofs. To address the first challenge, we attempt to simplify the formal development of large systems using a compositional technique, which divides the system into smaller parts starting from the requirements, constructs the specification of each part in isolation, and finally composes these parts together to model the overall behaviour of the system. We classify the requirements into two categories: the first consists of sets of requirements, each of which describes a particular component of the system; the second describes the composition requirements that show how components interact with each other. The first category is used to construct the Event-B specification of each component separately from the other components. The second category is used to capture the interaction of the separated models using the composition technique. To address the second and third challenges, we propose two techniques. The first supports the construction of a formal model from informal requirements, with the aim of retaining traceability to requirements in models. This approach makes use of the UML-B and atomicity decomposition (AD) approaches: UML-B provides a UML graphical notation that enables the development of an Event-B formal model, while the AD approach provides a graphical notation to illustrate the refinement structures and assists in the organisation of refinement levels. The second technique supports the reusability of Event-B formal models and their respective proof obligations. This approach adopts generic instantiation and composition to form a new methodology for reusing existing Event-B models in the development of other models: generic instantiation is used to create an instance of a pattern consisting of a refinement chain in a way that preserves proofs, while composition enables the integration of several sub-models into a large model. FreeRTOS (a real-time operating system) was selected as a case study to identify and address the general problems mentioned above in the formal development of complex systems.
610

Understanding institutional collaboration networks : effects of collaboration on research impact and productivity

Yao, Jiadi January 2014 (has links)
There is substantial competition among academic institutions. They compete for students, researchers, reputation, and funding. To succeed, they need not only to excel in teaching; their research profile is also considered an important factor. Institutions accordingly take actions to improve their research profiles. They encourage researchers to publish frequently and regularly ("publish or perish") on the assumption that this generates both more and better research. Collaboration has also been encouraged by institutions and even required by some funding calls. This thesis examines the empirical evidence on the interrelations among institutional research productivity, impact and collaborativity. It studies article publication data from the ACM and Web of Science covering five disciplines: Computer Science, Pharmacology, Materials Science, Psychology and Law. Institutions that publish less seek to publish collaboratively with other institutions. Collaboration boosts productivity for all the disciplines investigated except Law; however, the productivity increase resulting from institutions' attempts to collaborate more is small. The world's most productive institutions publish at least 50% of their papers on their own. Nor is the amount of collaborative work an institution does found to correlate strongly with its impact: the correlation between collaborativity and individual paper impact or institutional impact is small once productivity has been partialled out, and in Computer Science, Pharmacology and Materials Science no correlation is found. The decisive factor appears to be productivity; partialling it out results in the largest reductions in the remaining correlations. It may be that only better-equipped and well-funded institutions can publish without having to rely on external collaborators. These institutions have been publishing most of their output non-collaboratively, and are also of high quality and highly reputable, which may have equipped and funded them in the first place.
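
A small sketch, on synthetic data, of the "partialling out" step described above: regress both collaborativity and impact on productivity and correlate the residuals. The variable definitions and numbers are invented for illustration; the point is only that a raw correlation driven by a shared dependence on productivity largely disappears once productivity is controlled for.

```python
import numpy as np

rng = np.random.default_rng(42)

def partial_corr(x, y, control):
    """Correlation between x and y after removing the linear effect of control."""
    def residuals(v):
        slope, intercept = np.polyfit(control, v, 1)
        return v - (slope * control + intercept)
    rx, ry = residuals(x), residuals(y)
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic institutions: collaborativity and impact both depend on productivity.
n = 500
productivity = rng.lognormal(mean=3.0, sigma=1.0, size=n)      # papers per year (proxy)
collaborativity = 0.3 * productivity + rng.normal(0, 5, n)     # co-authored share (proxy)
impact = 0.5 * productivity + rng.normal(0, 10, n)             # citations (proxy)

raw = np.corrcoef(collaborativity, impact)[0, 1]
partial = partial_corr(collaborativity, impact, productivity)
print(f"raw correlation     : {raw:.2f}")
print(f"partial correlation : {partial:.2f}")
```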
