71 | Performance of reduced-scale vortex amplifiers used to control glovebox dust. Zhang, Guobin (January 2005)
Ventilation systems for a nuclear plant must have very high reliability and effectiveness. In this application, fluidic devices have advantages that electro-mechanical and pneumatic devices lack: they do not easily wear out, they have a relatively fast response, and in some cases they may be cheaper than an equivalent conventional device. Most importantly, they have fewer moving parts (usually none) and so are inherently reliable, provided the fluidic design is effective. Vortex amplifiers (VXA) are therefore ideal for active ventilation systems where access for maintenance is problematic. From 1995 to 2000, space limitations at Sellafield drove the desire to minimise VXA size and also glovebox size. Recently completed plant expansions use a smaller version of the VXA, produced by geometrically scaling the existing standard model; it is called the mini-VXA. Subsequent performance of the mini-VXA has been disappointing, with high oxygen levels noted in the inerted gloveboxes; this required an expensive increase in the inert gas supply rate to the gloveboxes to mitigate the fire risk. Experiments using a mini-VXA and a typical glovebox confirmed the high O2 levels. The O2 distribution in the glovebox indicates that oxygen is entering the glovebox through the VXA supply ports, against the general direction of flow. The ultimate source of this back leakage is the control port (which is open to atmosphere), and smoke visualisation studies on the mock VXA indicate a mechanism: separated flow patterns caused by excessive control port momentum. A temporary solution using an orifice plate and spacing chamber has been shown to reduce the essential nitrogen supply to one quarter of that without the modification. Addition of the orifice plates enables further reduction in nitrogen use, and the smallest orifice tested performs best, with no discernible cost in pressure drop and therefore fan power. The author also found the following. The ratio of control port area to supply port area is a critical parameter affecting mixing of the two airstreams, whereas the exit port area is unimportant. The ratio of supply port area to exit port area has no influence on discharge coefficient (at least within the scope of the current work). The ratio of chamber height to exit port radius affects neither the discharge coefficient nor the two angle parameters. Doubling chamber height, supply port area and control port area at the same time has a slight effect on the discharge coefficient (attributed partly to a viscous effect), but no effect on the two angle parameters. The chamber height has little effect on Reynolds number, and if the supply port area is not too small relative to the exit port, the supply port area will not significantly affect Reynolds number either. The use of the discharge coefficient and the two angle parameters to characterise VXA performance breaks with the traditional form of dimensionless characteristics used for the purpose. Testing these alternative characteristics has enabled the momentum (which dominates control of VXA performance) to be expressed more explicitly in updated design rules.
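As a hedged illustration of the discharge-coefficient characterisation discussed above, a minimal sketch using the standard orifice relation; the port dimensions, flow rate, pressure drop and air density below are hypothetical values, not measurements from the thesis:

```python
import math

def discharge_coefficient(q_actual, area, delta_p, rho=1.2):
    """Ratio of measured volumetric flow to the ideal (lossless) flow
    through a port of the given area, using the standard orifice relation
    Q_ideal = A * sqrt(2 * dP / rho). Air density rho in kg/m^3."""
    q_ideal = area * math.sqrt(2.0 * delta_p / rho)
    return q_actual / q_ideal

# Hypothetical exit port: 10 mm radius, 50 Pa drop, 2.0 L/s measured flow
area = math.pi * 0.010**2                      # port area in m^2
cd = discharge_coefficient(2.0e-3, area, 50.0)
print(f"discharge coefficient Cd = {cd:.2f}")  # ~0.70 for these values
```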
72 | Low-noise frequency synthesis and picosecond timing for satellite laser ranging systems. Kolbl, Josef Karl (January 2001)
The main aims of the research are to develop various high-speed hardware circuits based on the latest electronic devices and integrated-circuit technologies to provide time measurement with one picosecond accuracy, thereby enabling the development of a satellite laser ranging (SLR) system with submillimeter precision. Different types of oscillators and frequency multipliers (RF and microwave) have been developed to provide a synchronous, low phase noise clock signal to the SLR timing system, which is phase-locked to Coordinated Universal Time (UTC). A technique to quantify phase noise in signal sources is presented and verified. The development of the ranging system encompasses the analog timing verniers, the digital timing system, acquisition and processing of the ranging data, and the control of peripherals such as the laser. The mixed analog/digital timing system architecture provides time interval determination between two events with picosecond accuracy. Optical calibration techniques and an electronic timing calibration technique were developed to calibrate the timing system down to one picosecond accuracy and femtoseconds of resolution, traceable to the international standard (speed of light, metric standard). The work has led to several electronic modules for precisely measuring laser pulse flight times to artificial satellites and to the Moon, which are now in successful and permanent operation in five SLR stations around Tokyo, one SLR station in Australia, and one SLR station in Germany. Furthermore, the work has produced three papers and two patents and won first prize in the Innovation Awards of the Deggendorf government. The research and development work pushed picosecond timing technology to an extent where the SLR stations in Australia, Tokyo and Germany now show a significant improvement in ranging data accuracy compared with their previous timing equipment, thereby enabling more precise environmental monitoring.
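A hedged sketch of how single-sideband phase noise relates to timing jitter, using the standard conversion rather than any circuit from the thesis; the reference frequency and noise floor below are hypothetical:

```python
import numpy as np

def rms_jitter(f, L_dbc_hz, f0):
    """Convert single-sideband phase noise L(f) in dBc/Hz, tabulated on the
    offset-frequency grid f (Hz), to RMS timing jitter in seconds:
    sigma_t = sqrt(2 * integral of 10^(L/10) df) / (2*pi*f0)."""
    s = 10.0 ** (np.asarray(L_dbc_hz) / 10.0)               # linear SSB noise
    integral = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(f))  # trapezoid rule
    return np.sqrt(2.0 * integral) / (2.0 * np.pi * f0)

# Hypothetical 100 MHz reference with a flat -150 dBc/Hz floor, 100 Hz to 1 MHz
f = np.logspace(2, 6, 500)
sigma = rms_jitter(f, np.full_like(f, -150.0), 100e6)
print(f"RMS jitter ~ {sigma * 1e15:.0f} fs")  # ~71 fs for these values
```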
73 | Mathematical solutions to problems in radiological protection involving air sampling and biokinetic modelling. Birchall, Alan (January 1998)
Intakes of radionuclides are estimated with the personal air sampler (PAS) and by biological monitoring techniques; in the case of plutonium, there are problems with both methods. The statistical variation in activity collected when sampling radioactive aerosols with low number concentrations was investigated. By treating man as an ideal sampler, an analytical expression was developed for the probability distribution of intake following a single measurement on a PAS, and the dependence on aerosol size, specific activity and density was investigated. The methods were extended to apply to routine monitoring procedures for plutonium. Simple algebraic approximations were developed to give the probability of exceeding estimated intakes and doses by given factors, and the conditions were defined under which PAS monitoring meets the ICRP definition of adequacy. It was shown that the PAS is barely adequate for monitoring plutonium at ALI (annual limit on intake) levels in typical workplace conditions. Two algorithms were developed, enabling non-recycling and recycling compartmental models to be solved. Their accuracy and speed were investigated, and methods of dealing with partitioning, continuous intake, and radioactive progeny were discussed. Analytical, rather than numerical, methods were used; these are faster, and thus ideally suited for implementation on microcomputers. The algorithms enable non-specialists to solve quickly and easily any first-order compartmental model, including all the ICRP metabolic models. Non-recycling models with up to 50 compartments can be solved in seconds; recycling models take a little longer. A biokinetic model for plutonium in man following systemic uptake was developed. The proposed ICRP lung model (1989) was represented by a first-order compartmental model. These two models were combined, and the recycling algorithm was used to calculate urinary and faecal excretion of plutonium following acute or chronic intake by inhalation. The results indicate much lower urinary excretion than predicted by ICRP Publication 54.
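A hedged sketch of solving a small first-order compartmental model analytically. The thesis's own algorithms are described only as analytical, so the eigendecomposition below is an illustrative stand-in, and the transfer rates are hypothetical:

```python
import numpy as np

def solve_compartments(A, q0, t):
    """Analytical solution q(t) = V exp(L t) V^-1 q0 of the linear
    first-order system dq/dt = A q, with transfer-rate matrix A (per day)
    and initial contents q0. Returns compartment contents at each time t."""
    lam, V = np.linalg.eig(A)              # eigenvalues and eigenvectors
    c = np.linalg.solve(V, q0)             # expansion coefficients of q0
    return (V @ (c[:, None] * np.exp(np.outer(lam, t)))).real

# Hypothetical 3-compartment model with recycling back to compartment 0;
# column sums < 0 represent excretion out of the system
A = np.array([[-0.10,  0.00,  0.01],
              [ 0.10, -0.05,  0.00],
              [ 0.00,  0.05, -0.03]])
t = np.array([0.0, 10.0, 100.0, 1000.0])   # days
print(solve_compartments(A, np.array([1.0, 0.0, 0.0]), t))
```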
74 | The micro-optical ring electrode: a new and novel electrode system for photoelectrochemistry. Pennarun, Gaelle (January 1999)
The design of a novel photoelectrochemical sensor, the micro-optical ring electrode (MORE), is described. Based on a thin-ring microelectrode and using a fibre-optic light guide as the insulating material interior to the ring, the MORE has been designed, constructed and developed to permit electrochemical investigation of photochemically generated solution species. Initial characterisation of the electrode behaviour in the dark has been accomplished by the use of ferricyanide in conjunction with predictive mathematical models of the time dependence of the current at a microring electrode. The photocharacterisation of the MORE has been achieved by examining the photochemical response of tris(2,2'-bipyridine)ruthenium(II) in the presence of the quenching agent Fe3+. Subsequent application of the MORE has been in the electrochemical investigation of photoactive drugs employed in cancer therapy. In the following study, the microelectrochemistry of methylene blue, a dye commonly employed in photodynamic therapy (PDT), has been characterised in the dark using, in the first instance, gold disc microelectrodes. The electrochemical behaviour of MB+ on gold disc microelectrodes has then been compared to the results obtained when using the MORE. Exploration of the photoelectrochemical response of the MORE is reported, achieved via interrogation of the photoelectrochemistry of MB+. Photocurrent signals obtained during cyclic voltammetric and chronoamperometric studies of MB+, conducted with the MORE under illuminated conditions and in the absence of any deliberately added reducing agent, are attributed to the formation and subsequent detection of 3MB+ within the diffusion layer of the microring electrode. The data demonstrate that the use of the MORE for direct electrochemical detection of photogenerated species with lifetimes of < 5 x 10^-9 s is possible. The electrochemistry of 3MB+ over the applied potential range from -0.4 to +1.0 V versus SCE is elucidated and discussed in the context of the behaviour of photoexcited MB+ in the presence of deliberately added reducing agent. In order to investigate the production of singlet oxygen associated with cancer treatment, an attempt was made to study the MB+/O2 system; this part of the project was not completed, but a preliminary study of the electrochemistry of the MB+/O2 system is reported.
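For orientation, a hedged sketch of the steady-state diffusion-limited current at a microdisc electrode, using the classic i = 4nFDca relation; the thin-ring MORE geometry has a different, geometry-dependent prefactor, and the concentration, diffusion coefficient and radius below are hypothetical:

```python
F = 96485.0  # Faraday constant, C/mol

def microdisc_limiting_current(n, D, c, a):
    """Steady-state diffusion-limited current at an inlaid microdisc,
    i = 4*n*F*D*c*a, with n electrons transferred, diffusion coefficient D
    (m^2/s), bulk concentration c (mol/m^3) and disc radius a (m).
    Order-of-magnitude guide only for ring geometries such as the MORE."""
    return 4.0 * n * F * D * c * a

# Hypothetical ferricyanide characterisation: 1 mM (= 1 mol/m^3),
# D = 7.6e-10 m^2/s, one-electron reduction, 5 um electrode radius
i = microdisc_limiting_current(1, 7.6e-10, 1.0, 5e-6)
print(f"limiting current ~ {i * 1e9:.1f} nA")  # ~1.5 nA for these values
```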
75 | A problem solving strategy based on knowledge-based systems. Gillies, Alan Cameron (January 1992)
The historical development of knowledge-based systems (KBS) from artificial intelligence (AI) has led to a number of characteristics which isolate knowledge-based systems from the rest of software development. In particular, it has led to the growth of 'stand-alone' systems. This thesis argues that this has restricted the use of KBS to a narrow range of problems and has reduced the effectiveness of the consequent solutions. By first considering a specific problem in some depth, the thesis seeks to develop an alternative approach, where KBS is considered as simply another software technology to be used within an integrated solution. The problem considered is the automatic analysis of photoelastic fringe patterns, and KBS methods are employed alongside conventional image processing techniques to produce an integrated solution. The conventional algorithmic solution is first constructed and evaluated. This solution, having proved partially successful, is then enhanced by the use of KBS techniques to provide a full solution. From this specific example, a framework for integration is derived. This framework is tested in an unrelated application, to consider whether the approach adopted has more general utility than one specific class of problem. This second problem was the provision of decision support for business planning based upon market research. The resulting strategy and design are described, together with details of how the system was implemented under the supervision of the author. The thesis concludes with an evaluation of the work and its contribution to knowledge in the twin areas of the specific solutions and the underlying methods.
76 | Lossless image compression for aerospace non-destructive testing applications. Lin, Xin-Yu (January 2004)
This thesis studies areas of image compression and relevant image processing techniques with application to Non-Destructive Testing (NDT) images of aircraft components. The research project includes investigation of current data compression techniques and the design of efficient compression methods for NDT images. A literature review was undertaken initially to investigate the fundamental principles of data compression and existing lossless and lossy image compression techniques; this investigation provides not only the theoretical background but also comparative benchmarks for the research project. Chapter 2 provides general knowledge of image compression. The basic predictive coding strategy is introduced at the beginning of Chapter 3. Fundamental theories of the Integer Wavelet Transform (IWT) can be found in Chapter 4. The research project proposes three main innovative methods for lossless compression of NDT images: the region-based method, which employs region-oriented adaptation; the texture-based method, which employs a mixed model for the prediction of image regions with strong texture patterns; and a hybrid method, which utilises the advantages of both predictive coding and IWT coding. The main philosophy of lossless image compression is to de-correlate the original image data as much as possible, by mapping from spatial domain to spatial domain in the predictive coding strategy or from spatial domain to transform domain in the IWT coding strategy. The proposed region-based method aims to achieve the best mapping by adapting the de-correlation to the statistical properties of decomposed regions using the component's CAD model. With the aid of component CAD models to divide the NDT images of aircraft components into different regions based on the material structures, the design of the predictors and the choice of the IWT are optimised according to the specific image features contained in each region having the same material structure. The texture-based method achieves the best de-correlation by using a mixed data model in regions possessing strong texture patterns. A hybrid scheme for lossless compression of the NDT images of aircraft components is presented, combining predictive coding and the IWT: after region-based predictive coding, the IWT is applied to the error images produced for each decomposed region to achieve further de-correlation, preserving the information contained in the error images with fewer transform coefficients. The main advantages of using the IWT are its multi-resolution nature and its lossless property, with integer grey-level values in images mapped to integer wavelet coefficients. The proposed methods are shown to offer a significantly higher compression ratio than other compression methods. The high compression efficiency is achieved not only by the combination of predictive coding and the IWT, but also by optimisation of the predictor design and the choice of transform according to the specific image features contained in each region having similar material structures.
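A hedged sketch of the lossless-mapping idea: one level of the reversible integer Haar (S) transform, the simplest member of the IWT family (not necessarily the transform chosen in the thesis), showing integer pixels mapping to integer coefficients with exact reconstruction:

```python
import numpy as np

def s_transform(x):
    """One level of the reversible integer Haar (S) transform: integer
    inputs map to integer averages s and details d with no information loss,
    the property exploited for lossless coding. Expects even length."""
    x = np.asarray(x, dtype=np.int64)
    a, b = x[0::2], x[1::2]
    d = a - b                  # detail (high-pass) coefficients
    s = b + (d >> 1)           # average; arithmetic shift = floor division
    return s, d

def inverse_s_transform(s, d):
    """Exact inverse of s_transform, recovering the original integers."""
    b = s - (d >> 1)
    a = d + b
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

row = np.array([100, 102, 101, 99, 250, 251, 12, 12])  # hypothetical pixels
s, d = s_transform(row)
assert np.array_equal(inverse_s_transform(s, d), row)  # perfectly lossless
```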
77 | Development and analysis of hybrid adaptive neuro-fuzzy inference systems for the recognition of weak signals preceding earthquakes. Konstantaras, Anthony J. (January 2004)
Prior to an earthquake, there is energy storage in the seismogenic area, the release of which results in a number of micro-cracks, which in effect produce a weak electric signal. Initially, there is a rapid rise in the number of propagating cracks, which creates a transient electric field. The whole process lasts on the order of several tens of minutes, and the resulting electric signal is considered an electric earthquake precursor (EEP). Recognition of electric earthquake precursors is hindered mainly by the very nature of the signal itself: according to the theory of propagating cracks, it is usually a very weak electric potential anomaly appearing in the Earth's electric field prior to an earthquake, often unobservable within the much stronger, noise-embedded electric background. Furthermore, EEP signals vary in duration and size, making reliable recognition even more difficult. The work described in this thesis incorporates neuro-fuzzy technology for the reliable recognition of EEP signals within the electric field. Neuro-fuzzy networks are neural networks with intrinsic fuzzy logic abilities, i.e. the weights of the neurons in the network define the premise and consequent parameters of a fuzzy inference system. In particular, the adaptive neuro-fuzzy inference system (ANFIS) is used, which has been shown to be effective as a universal approximator that can match any input/output data set, provided the system is adequately trained. An average model for EEP signals has been identified, based on a time function describing the evolution of the number of propagating cracks. Pattern recognition is performed by the neural network to identify the average EEP model within the electric field. The fuzzy nature of the neuro-fuzzy model, though, enables the network to classify as EEPs signals that are not exactly the same as, but do approximate, the average EEP model. On the other hand, signals that look like EEPs but do not sufficiently approximate the average model are suppressed, preventing false classification. The effectiveness of the proposed network is demonstrated using electrotelluric data recorded in NW Greece in 1995. Following training, testing with unseen data verifies the reliable performance of the model.
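A hedged sketch of the forward pass of the kind of first-order Sugeno fuzzy system that ANFIS trains; the membership and consequent parameters below are illustrative, not values from the thesis:

```python
import numpy as np

def anfis_forward(x, centers, sigmas, consequents):
    """Forward pass of a one-input, first-order Sugeno fuzzy system:
    Gaussian memberships give rule firing strengths, which are normalised
    and used to weight the linear consequents p*x + q of each rule."""
    w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)  # rule firing strengths
    w = w / w.sum()                                   # normalisation layer
    rule_outputs = consequents[:, 0] * x + consequents[:, 1]
    return np.dot(w, rule_outputs)                    # weighted rule average

# Two hypothetical rules applied to a scaled electric-field sample x
centers = np.array([-1.0, 1.0])
sigmas = np.array([0.8, 0.8])
consequents = np.array([[0.2, 0.0],    # p, q for rule 1
                        [1.5, 0.3]])   # p, q for rule 2
print(anfis_forward(0.4, centers, sigmas, consequents))
```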
78 | Automatic image alignment for clinical evaluation of patient setup errors in radiotherapy. Su, QingLang (January 2004)
In radiotherapy, treatment is typically delivered by irradiating the patient with high-energy x-ray beams conformed to the shape of the tumour from multiple directions. Rather than administering the total dose in one session, the dose is often delivered in twenty to thirty sessions, and for each session several settings must be reproduced precisely (the treatment setup). These settings include the machine setup, such as the energy, direction, size and shape of the radiation beams, as well as the patient setup, such as the position and orientation of the patient relative to the beams. An inaccurate setup may result not only in recurrence of the tumour but also in medical complications. The aim of the project is to develop a novel image processing system to enable fast and accurate evaluation of patient setup errors in radiotherapy by automatic detection and alignment of anatomical features in images acquired during treatment simulation and treatment delivery. By combining various image processing and mathematical techniques, the thesis presents the successful development of an effective approach which includes detection and separation of collimation features to establish image correspondence, region-based image alignment using local mutual information, and application of the least-squares method for exhaustive validation, both to reject outliers and to estimate the globally optimal alignment. A complete software tool was developed, and clinical validation was performed using both phantom and real radiotherapy images. For the former, the alignment accuracy is shown to be within 0.06 cm for translation and 1.14 degrees for rotation; significantly, the translation error is within the ±0.1 cm machine setup tolerance, while the setup rotation can vary between ±1 degree. For the latter, the alignment was consistently found to be similar to or better than that obtained with manual methods. A good basis is therefore formed for consistent, fast and reliable evaluation of patient setup errors in radiotherapy.
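A hedged sketch of the local mutual information measure underlying the region-based alignment; this is a generic joint-histogram estimator, not the thesis's exact implementation, and the region size and bin count are illustrative:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two equally sized image regions from their
    joint grey-level histogram: MI = sum p(i,j) log(p(i,j) / (p(i) p(j))).
    Higher MI suggests the two regions are better aligned."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                   # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of region a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of region b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Sanity check: a hypothetical region compared with a noisy copy of itself
rng = np.random.default_rng(0)
region = rng.integers(0, 256, (64, 64)).astype(float)
print(mutual_information(region, region + rng.normal(0, 5, region.shape)))
```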
79 | Advancing acoustography by multidimensional signal processing techniques. Bach, Michael (January 2006)
The research presented in this thesis is an investigation into multidimensional signal processing for acoustography. Acoustography is a novel inspection technique similar in spirit to x-ray imaging; instead of using hazardous ionising radiation, however, it is based on sound. The inspection data are intensity images of the interior of the components under inspection, so acoustography serves as a screening technique for inspecting components without physically damaging them. Multidimensional signal processing here refers to the processing of the inspection data by signal and image processing techniques. The acoustographic imaging system is characterised with a focus on signal and image processing; this characterisation investigates various degradations and image-influencing properties. Signal and image processing techniques are then formally defined for the context of processing acoustographic data, and the applicability of denoising and segmentation techniques is demonstrated. Filter primitives are shown to be able to remove certain noise features. Particular focus rests on denoising and segmentation techniques based on a physical analogy with diffusion, in which the diffusivity is locally controlled by data gradient measures. With certain parameter settings, the diffusion technique denoises the inspection data without manipulating true image features; the same algorithm with different parameter settings is employed to perform segmentation, thus separating fault features from the background. Based on the response characteristics of the acoustographic system, a data fusion algorithm has been developed to merge multiple observations into one datum, thereby increasing the dynamic range. The two-stage algorithm consists of an iterative curve fitting step followed by a reverse calculation using the curve parameters to yield a single observation. The algorithm has been further improved for robustness to noise, and the fusion of denoised data is demonstrated. As a direct result of the work presented, future work is suggested to improve inference from observation data about the state of the component under investigation. Further work is suggested to improve the understanding of the imaging system, and inverse methods are proposed which take into account various particularities of the acoustographic system.
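A hedged sketch of gradient-controlled diffusion of the kind described above, in the standard Perona-Malik style; the thesis's exact diffusivity function and parameter values are not given here, so everything below is illustrative:

```python
import numpy as np

def diffuse(img, n_iter=20, kappa=30.0, dt=0.2):
    """Gradient-controlled diffusion: the diffusivity
    g = 1 / (1 + (|grad| / kappa)^2) falls where the local data gradient is
    large, so smoothing removes noise while edges (potential fault
    boundaries) survive. Smaller kappa favours gentle denoising; larger
    kappa flattens regions, which can aid segmentation."""
    u = img.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    for _ in range(n_iter):
        # differences to the four neighbours; np.roll gives periodic borders,
        # a simplification of the zero-flux boundaries a full scheme would use
        dN = np.roll(u, 1, 0) - u
        dS = np.roll(u, -1, 0) - u
        dW = np.roll(u, 1, 1) - u
        dE = np.roll(u, -1, 1) - u
        u += dt * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
    return u

# usage sketch: smoothed = diffuse(noisy_image, n_iter=30, kappa=15.0)
```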
80 | Exploring collaborative agreement in interactions. Nneamaka, Chigbo Onyinyechukwu (January 2017)
The benefits of play and collaboration in children's learning and development cannot be overemphasised. Through play, children learn many social skills and how to be creative, but children's play is not always harmonious, as it relies on power relations between groups. As children grow, they build peer groups in which they prefer to play with same-sex peers and may display gender-typed behaviours, which grow stronger as they move into adolescence. On the other hand, working in small groups enhances children's problem-solving skills and motivation, encourages the development of critical thinking and communication skills, and allows longer retention of concepts. To reap the benefits associated with collaboration, children need to develop and practise the skills required for effective collaboration. Collaborative games provide platforms for children to practise these skills; however, in some collaborative games where players are expected to collaborate and learn the skills associated with collaboration, competition still occurs. This can be detrimental, especially in classroom settings, as it can increase hostility between students and weaken the intrinsic motivation to learn due to a focus on winning. In this research, the concept of Enforced Collaborative Agreement (ECA) is introduced and explored. ECA is a type of interaction whereby collaborative agreement is required in order to play a digital game. It is believed that ECA games would make co-located children play together in an equitable and inclusive way, allowing them to contribute and participate equally when working together. The aim of the research is to understand the behaviours that participants aged 11-16, grouped in pairs within co-located spaces, exhibit in reaching agreement while playing an ECA-enabled game using a range of interaction methods. While several research works have explored collaboration in enforced situations, none has explored collaboration in the way described in this thesis, using a range of data-gathering approaches and focusing on how participants reach agreement. Additionally, this research explores the effects of ECA on the participants' enjoyment, one of the dimensions of gameplay experience, and highlights the importance of ECA in enabling collaborative interactions. A mixed-methods and user-centred approach was taken, using established methods such as observation of the participants' behaviours during interaction, surveys (the Fun Toolkit and a questionnaire), logging of participants' actions, and unstructured interviews. The key contribution of this research is the understanding of ECA as a concept and of methods to study it. Additional contributions are the understanding of how participants collaborate to reach agreement within one part of the larger space where ECA can be applied, and associated design guidelines for designers wishing to design games and applications that support ECA.