171. Explorations of knowledge management in a defence engineering environment
Scarrott, Connie Elizabeth, January 2003
This thesis originates from the researcher's first-hand experience of current processes and practices in operation at BAE SYSTEMS Ltd (hereafter `the Company'), and recognises the potential for improvement within the realm of knowledge management. The huge volume of internal and external information overwhelms most organisations, and knowledge management provides solutions that enable organisations to be effective, efficient and competitive. The software agent approach, combined with information retrieval techniques, shows great potential for managing information effectively. This research seeks to answer the questions of whether software agents can provide the Company with solutions to the knowledge management issues identified in this inquiry, and whether they can also be used elsewhere within the organisation to improve other aspects of the business. The research analysis shows that software agents offer wide applicability across the Company, can be created with relative ease, and can provide benefits by improving the effectiveness and efficiency of processes. The findings also provide valuable insight into human-computer interface design and the usability of software agent applications. The research addresses these questions using action research, in order to develop a collaborative change mechanism within the Company and a practical application of the research findings in situ. Using a pluralistic methodology, the findings combine subjective and objective views within the research cycles, giving the researcher a more holistic view of the research.
Little attention has been paid to integrating software agent technologies into knowledge management processes. This research proposes a software agent application that: (1) co-ordinates software agents for information retrieval, managing information gathering, filtering and dissemination; (2) promotes effective interpretation of information and more efficient processes; (3) builds accurate search profiles weighted on pre-defined criteria; (4) integrates and organises a Company resource management knowledge base; (5) ensures that the right information reaches the right personnel at the right time; and (6) enables the Company to assign the right experts to the right roles within the Company.
172. Application of object-orientation to HDL-based designs
Cabanis, David, January 2000
The increase in the scale of VLSI circuits over the last two decades has been of great importance to the development process. To cope with this ever-growing design complexity, new development techniques and methodologies have been researched and applied. The early 1990s witnessed the uptake of a new kind of design methodology based on Hardware Description Languages (HDLs). This methodology has helped to master the possibilities inherent in our ability to manufacture ever-larger designs. However, while HDL-based design methodology is sufficient to address today's standard ASIC sizes, it reaches its limits when considering tomorrow's design scales. Already, RISC processor chip descriptions can contain tens of thousands of HDL lines. Object-oriented design methodology has recently had a considerable impact in the software design community, as it is tightly coupled with the handling of complex systems. Object-orientation concentrates on data rather than functions since, throughout the design process, data are more stable than functions. Methodologies for both hardware and software have been introduced through the application of HDLs to hardware design. Common design constructs and principles that have proved successful in software language development should therefore be considered, in order to assess their suitability for HDL-based designs. A new methodology was created to emphasise encapsulation, abstraction and classification of designs, using standard VHDL constructs. This achieves higher levels of modelling, along with improved reusability through design inheritance. The development of extended semantics for integrating object-orientation into the VHDL language is described. Comparisons are made between the modelling abilities of the proposed extension and other competing proposals. A UNIX-based object-oriented-to-standard-VHDL pre-processor is described, along with translation techniques and their issues related to synthesis and simulation. This tool permitted validation of the new design methodology by application to existing design problems.
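As a loose illustration of the idea summarised above, a pre-processor can flatten object-oriented structure (encapsulation, inheritance) into plain VHDL text. The sketch below is invented for illustration only: the class names and the generated entity do not reflect the thesis's actual extended semantics or its translation rules.

```python
# Toy sketch: object-oriented description flattened into standard VHDL text.
class Component:
    def __init__(self, name, ports):
        self.name, self.ports = name, ports   # ports: list of (name, dir, type)

    def to_vhdl(self):
        """Emit a plain VHDL entity declaration for this component."""
        decls = [f"    {p} : {d} {t}" for p, d, t in self.ports]
        return "\n".join(
            [f"entity {self.name} is", "  port ("]
            + [";\n".join(decls)]
            + ["  );", f"end {self.name};"]
        )

class Register(Component):
    def __init__(self, name, width):
        # Design inheritance: a register is a component with a fixed port pattern.
        super().__init__(name, [
            ("clk", "in", "std_logic"),
            ("d", "in", f"std_logic_vector({width - 1} downto 0)"),
            ("q", "out", f"std_logic_vector({width - 1} downto 0)"),
        ])
```

A subclass such as `Register` reuses and specialises its parent, while the emitted text remains ordinary synthesisable-style VHDL, which is the essence of a pre-processor approach.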
173. Time-domain concatenative text-to-speech synthesis
Vine, Daniel Samuel Gordon, January 1998
A concatenation framework for time-domain concatenative speech synthesis (TDCSS) is presented and evaluated. In this framework, speech segments are extracted from CV, VC, CVC and CC waveforms, and abutted. Speech rhythm is controlled via a single duration parameter, which specifies the initial portion of each stored waveform to be output. An appropriate choice of segmental durations reduces spectral discontinuity problems at points of concatenation, thus reducing reliance upon smoothing procedures. For text-to-speech considerations, a segmental timing system is described, which predicts segmental durations at the word level, using a timing database and a pattern matching look-up algorithm. The timing database contains segmented words with associated duration values, and is specific to an actual inventory of concatenative units. Segmental duration prediction accuracy improves as the timing database size increases. The problem of incomplete timing data has been addressed by using `default duration' entries in the database, which are created by re-categorising existing timing data according to articulation manner. If segmental duration data are incomplete, a default duration procedure automatically categorises the missing speech segments according to segment class. The look-up algorithm then searches the timing database for duration data corresponding to these re-categorised segments. The timing database is constructed using an iterative synthesis/adjustment technique, in which a `judge' listens to synthetic speech and adjusts segmental durations to improve naturalness. This manual technique for constructing the timing database has been evaluated. Since the timing data is linked to an expert judge's perception, an investigation examined whether the expert judge's perception of speech naturalness is representative of people in general. 
Listening experiments revealed marked similarities between an expert judge's perception of naturalness and that of the experimental subjects. It was also found that the expert judge's perception remains stable over time. A synthesis/adjustment experiment found a positive linear correlation between segmental durations chosen by an experienced expert judge and duration values chosen by subjects acting as expert judges. A listening test confirmed that between 70% and 100% intelligibility can be achieved with words synthesised using TDCSS. In a further test, a TDCSS synthesiser was compared with five well-known text-to-speech synthesisers, and was ranked fifth most natural out of six. An alternative concatenation framework (TDCSS2) was also evaluated, in which duration parameters specify both the start point and the end point of the speech to be extracted from a stored waveform and concatenated. In a similar listening experiment, TDCSS2 stimuli were compared with five well-known text-to-speech synthesisers, and were ranked fifth most natural out of six.
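The `default duration' fallback described in this abstract can be sketched as a two-level lookup: an exact entry in the timing database if one exists, otherwise a default keyed by the segment's manner class. The segment labels, manner classes and millisecond values below are invented for illustration, not taken from the thesis's timing database.

```python
# Invented illustrative data: per-segment durations (ms) and manner classes.
DEFAULTS = {"plosive": 60, "fricative": 95, "nasal": 70, "vowel": 120}
TIMING_DB = {"k": 55, "ae": 130, "t": 50}          # exact measured entries
SEGMENT_CLASS = {"k": "plosive", "t": "plosive", "s": "fricative",
                 "ae": "vowel", "n": "nasal"}

def segment_duration(seg):
    """Exact timing-database entry if present, otherwise the `default
    duration' for the segment's manner-of-articulation class."""
    if seg in TIMING_DB:
        return TIMING_DB[seg]
    return DEFAULTS[SEGMENT_CLASS[seg]]
```

For example, `"s"` has no exact entry, so it falls back to the fricative default, mirroring the re-categorisation step the abstract describes.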
174. Analytical and simulation performance modelling of indoor infrared wireless data communications protocols
Barker, Peter Jay, January 2003
The infrared (IR) optical medium provides an alternative to radio frequencies (RF) for low-cost, low-power, short-range indoor wireless data communications. Low-cost optoelectronic components and an unregulated IR spectrum provide the potential for very high-speed wireless communication with good security. However, IR links have a limited range and are susceptible to high noise levels from ambient light sources. The Infrared Data Association (IrDA) has produced a set of communication protocol standards (IrDA 1.x) for directed point-to-point IR wireless links, using an HDLC (High-level Data Link Control) based data link layer, which have been widely adopted. To address the requirement for multi-point ad-hoc wireless connectivity, IrDA has produced a new standard (Advanced Infrared, AIr) to support multiple-device non-directed IR wireless local area networks (WLANs). AIr employs an enhanced physical layer and a CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) based MAC (Media Access Control) layer employing RTS/CTS (Request To Send / Clear To Send) media reservation. This thesis is concerned with the design of IrDA-based IR wireless links at the data link layer, media access sub-layer and physical layer, and presents protocol performance models with the aim of highlighting the critical factors affecting performance and providing recommendations to system designers on parameter settings and protocol enhancements to optimise performance. An analytical model of the IrDA 1.x data link layer (IrLAP, the Infrared Link Access Protocol), using Markov analysis of the transmission window width, provides saturation-condition throughput in relation to the link bit error rate (BER), data rate and protocol parameter settings. Results are presented for simultaneous optimisation of the data packet size and transmission window size. A simulation model of the IrDA 1.x protocol, developed with the OPNET Modeler, is used to validate the analytical results and to produce non-saturation throughput and delay performance results. An analytical model of the AIr MAC protocol provides saturation-condition utilisation and delay results in relation to the number of contending devices and the MAC protocol parameters. Results indicate contention window size values for optimum utilisation. The effectiveness of the AIr contention window linear back-off process is examined through Markov analysis. An OPNET simulation model of the AIr protocol is used to validate the analytical model results and provides non-saturation throughput and delay results. An analytical model of the IR link physical layer is presented, deriving expressions for signal-to-noise ratio (SNR) and BER in relation to link transmitter and receiver characteristics, link geometry, noise levels and line encoding schemes. The effect of third-user interference on BER, and the resulting link asymmetry, is also examined, indicating the minimum separation distance for adjacent links. The expressions for BER are linked to the data link layer analysis to provide optimum throughput results in relation to physical layer properties and link distance.
175. Performance modelling and enhancement of wireless communication protocols
Chatzimisios, Periklis, January 2004
In recent years, wireless local area networks (WLANs) have played a key role in the data communications and networking areas, witnessing significant research and development. WLANs are extremely popular, being found almost everywhere, including business, office and home deployments. To address modern wireless connectivity needs, the Institute of Electrical and Electronics Engineers (IEEE) has developed the 802.11 standard family, utilising mainly radio transmission techniques, whereas the Infrared Data Association (IrDA) addressed the requirement for multipoint connectivity with the development of the Advanced Infrared (AIr) protocol stack. This work studies the collision avoidance procedures of the IEEE 802.11 Distributed Coordination Function (DCF) protocol and suggests protocol enhancements aimed at maximising performance. A new, elegant and accurate analysis based on Markov chain modelling is developed, both for the idealistic assumption of unlimited packet retransmissions and for the case of finite packet retry limits. Simple equations are derived for the throughput efficiency, the average packet delay, the probability of a packet being discarded when it reaches the maximum retransmission limit, the average time to drop such a packet, and the packet inter-arrival time, for both the basic access and RTS/CTS medium access schemes. The accuracy of the mathematical model is validated by comparing analytical with OPNET simulation results. An extensive and detailed study is carried out of the influence on performance of the physical layer, data rate, packet payload size and several backoff parameters, for both medium access mechanisms. The mathematical model is then extended to take into account transmission errors, occurring either independently with a fixed bit error rate (BER) or in bursts. The dependency of protocol performance on BER and other factors related to independent and burst transmission errors is explored. Furthermore, a simple-to-implement tuning of the backoff algorithm for maximising IEEE 802.11 protocol performance is proposed, depending on the specific communication requirements. The effectiveness of the RTS/CTS scheme in reducing collision duration at high data rates is studied, and an all-purpose expression for the optimal use of the RTS/CTS reservation scheme is derived. Moreover, an easy-to-implement backoff algorithm that significantly enhances performance is introduced, and an alternative derivation is developed based on elementary conditional probability arguments rather than bi-dimensional Markov chains. Finally, an additional performance improvement scheme is proposed, employing packet bursting in order to reduce overhead costs such as contention time and RTS/CTS exchanges. Fairness is explored on short and long time scales for both the legacy DCF and the packet bursting cases. The AIr protocol employs the RTS/CTS medium reservation scheme to cope with hidden stations, and CSMA/CA techniques with linear contention window (CW) adjustment for medium access. A one-dimensional Markov chain model is constructed, instead of the bi-dimensional model, in order to obtain simple mathematical equations for the average packet delay. This new approach greatly simplifies previous analyses and can be applied to any CSMA/CA protocol. The derived mathematical model is validated by comparing analytical with simulation results, and an extensive AIr packet delay evaluation is carried out, taking into account all the factors and parameters that affect protocol performance. Finally, suitable values for both backoff and protocol parameters are proposed that reduce the average packet delay and thus maximise performance.
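The bi-dimensional Markov chain analysis of DCF described in this abstract is in the style of the well-known saturation models for 802.11. As a rough illustration only (not the thesis's exact equations or parameter values), the per-slot transmission probability τ and the conditional collision probability p can be obtained as the fixed point of two coupled equations:

```python
def dcf_saturation(n, W=32, m=5):
    """Bianchi-style saturation model of IEEE 802.11 DCF (illustrative).
    Solves p = 1 - (1 - tau)^(n-1) together with tau = tau(p, W, m)
    by bisection on tau.  n: contending stations, W: minimum contention
    window, m: number of backoff stages."""
    def tau_of(tau):
        p = 1.0 - (1.0 - tau) ** (n - 1)       # collision prob. seen by a station
        num = 2.0 * (1.0 - 2.0 * p)
        den = (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        return num / den, p

    lo, hi = 1e-9, 1.0 - 1e-9                  # bisection on f(tau) = tau - tau_of(tau)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        t, _ = tau_of(mid)
        if mid < t:
            lo = mid
        else:
            hi = mid
    tau = 0.5 * (lo + hi)
    return tau, 1.0 - (1.0 - tau) ** (n - 1)
```

As the number of contending stations grows, each station transmits less often per slot while the collision probability rises, which is the qualitative behaviour such saturation analyses capture.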
176. The colour concept generator: a computer tool to propose colour concepts for products
Eves, Bob, January 1997
This thesis documents research undertaken into the design and evaluation of a computer tool (the Colour Concept Generator) that produces colour schemes for products from verbal descriptors depicting a required aesthetic image or style. The system was designed to translate between descriptive words and colour combinations, and aims to provide a form of ideas stimulus for a product designer at the initial stages of the design process. The computer system uses elements of artificial intelligence (AI) to `learn' colour-descriptor semiotic relations from a product designer, either against proposed objective criteria or to reflect a designer's personal style. Colour concepts for products can then be generated from descriptors based upon these semiotic relations. The philosophy of the research is based upon the idea of computing colour aesthetics at the front end of the design process, and the design of an AI software mechanism to facilitate this. The problem was analysed with respect to the available literature on colour, and a set of detailed requirements for the system was presented. The system was then designed and coded to these requirements, and evaluated in terms of the overall philosophy, the system methodology and the application of computer media. The research is a contribution to the field of computer-aided design regarding colour aesthetics, and demonstrates the possibility of using an artificially intelligent machine to inspire and stimulate creative human thought. The AI software mechanism of the Colour Concept Generator is presented as an application of AI to aesthetic design.
177. Link layer protocol performance of indoor infrared wireless communications
Vitsas, Vasileios, January 2002
The increasing deployment of portable computers and mobile devices leads to an increasing demand for wireless connections. Infrared presents several advantages over radio for indoor wireless connectivity, but infrared link quality is affected by ambient infrared noise and by the low transmission power levels imposed by eye safety limitations. The Infrared Data Association (IrDA) has developed the widely used IrDA 1.x protocol standard for short-range, narrow-beam, point-to-point connections. IrDA addressed the requirement for indoor multipoint connectivity with the development of the Advanced Infrared (AIr) protocol stack. This work analyses infrared link layer design based on the IrDA proposals for addressing link layer topics, and suggests implementation issues and protocol modifications that improve the operation of short-range infrared connections. The performance of optical wireless links is measured by the utilization that can be drawn at the data link layer. A new mathematical model is developed that reaches a simple equation for IrDA 1.x utilization. The model is validated by comparing its outcome with simulation results obtained using the OPNET Modeler. The mathematical model is employed to study the effect of physical and link layer parameters on utilization. The simple equation gives insights into the optimum control of the infrared link for maximum utilization. By differentiating the utilization equation, simple formulas are derived for the optimum values of the window and frame size parameters. Analytical results indicate that a significant utilization increase is observed when the optimum values are implemented, especially for high error rate links. A protocol improvement that utilises special Supervisory frames (S-frames) to pass transmission control is proposed, to deal with delays introduced by F-timer expiration. Results indicate that employing the special S-frame greatly improves utilization when optimum window and frame size values are implemented. The practical utilization increase achieved by optimum parameter implementation is confirmed by means of simulation. The AIr protocol trades speed for range by employing Repetition Rate (RR) coding to achieve the increased transmission range required for wireless LAN connectivity. AIr employs the RTS/CTS medium reservation scheme to cope with hidden stations, and CSMA/CA techniques with linear contention window (CW) adjustment for medium access. A mathematical model is developed for the AIr collision avoidance (CA) procedures and validated by comparing analysis with simulation results. The model is employed to examine the effect of the CA parameters on utilization. By differentiating the utilization equation, the optimum CW size that maximises utilization is derived as a function of the number of transmitting stations. The proposed linear CW adjustment is very effective in keeping CW values close to optimum and thus minimising CA delays. AIr implements a Go-Back-N retransmission scheme, at high or low level, to cope with transmission errors. AIr optionally implements a Stop-and-Wait retransmission scheme to implement RR coding efficiently. Analytical models for the AIr retransmission schemes are developed and employed to compare protocol utilization for different link parameter values. Finally, the effect of the proposed RR coding on utilization for different retransmission schemes is explored.
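The trade-off behind the optimum frame size result above can be illustrated with a generic link model, not the thesis's IrLAP equation: a longer frame amortises per-frame overhead, but is more likely to contain a bit error and be retransmitted in full, so the optimum payload shrinks as the BER grows. The overhead and size values below are invented for illustration.

```python
def utilization(frame_bits, overhead_bits, ber):
    """Fraction of link capacity carrying payload, assuming a frame is
    retransmitted in full whenever any of its bits is in error."""
    p_ok = (1.0 - ber) ** (frame_bits + overhead_bits)   # frame success prob.
    return frame_bits / (frame_bits + overhead_bits) * p_ok

def best_frame_size(overhead_bits, ber, sizes=range(64, 65536, 64)):
    """Payload size (bits) maximising utilization for this simple model."""
    return max(sizes, key=lambda n: utilization(n, overhead_bits, ber))
```

Under this model the optimum payload is roughly the square root of overhead divided by BER, so a hundred-fold BER increase shrinks the best frame size by about a factor of ten, which is the qualitative behaviour the thesis's optimum frame size formulas quantify exactly for IrDA links.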
178. The baby project: processing character patterns in textual representations of language
Rogers, Paul Anton Peter, January 2000
This thesis describes an investigation into a proposed theory of AI. The theory postulates that a machine can be programmed to predict aspects of human behaviour by selecting and processing stored, concrete examples of previously experienced patterns of behaviour. Its validity is tested in the domain of natural language. Externalisations that model the resulting theory of NLP entail fuzzy components, and fuzzy formalisms may exhibit inaccuracy and/or over-productivity. A research strategy is developed to investigate this aspect of the theory. The strategy includes two experimental hypotheses, designed to test (1) whether the model can process simple language interaction, and (2) the effect of fuzzy processes on such language interaction. The experimental design requires three implementations, each with a progressively greater degree of fuzziness in its processes, named respectively NonfuzzBabe, CorrBabe and FuzzBabe. NonfuzzBabe is used to test the first hypothesis, and all three implementations are used to test the second. A system description is presented for NonfuzzBabe. Testing the first hypothesis provides results showing that NonfuzzBabe is able to process simple language interaction. A system description for CorrBabe and FuzzBabe is then presented. Testing the second hypothesis provides results showing a positive correlation between the degree of fuzziness of the processes and simple language performance. FuzzBabe's ability to process more complex language interaction is then investigated, and model-intrinsic limitations are found. Research to overcome this problem is designed to illustrate the potential of externalisations of the theory, and is conducted less rigorously than the previous part of the investigation. Augmenting FuzzBabe to include fuzzy evaluation of non-pattern elements of interaction is hypothesised as a possible solution, and the term FuzzyBaby was coined for the augmented implementation. Results of a pilot study designed to measure FuzzyBaby's reading comprehension are given. Little research has investigated NLP by the fuzzy processing of concrete patterns in language; consequently, it is proposed that this research contributes to the intellectual disciplines of NLP and AI in general.
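A crude flavour of matching input against stored concrete patterns with a fuzzy similarity measure can be given with Python's standard difflib. This is far simpler than the fuzzy formalism actually investigated in the thesis; the stored utterances and the threshold are invented for illustration.

```python
import difflib

# Invented stored "concrete examples" of previously experienced patterns.
STORED = ["how are you today", "what is your name", "tell me a story"]

def best_match(utterance, threshold=0.6):
    """Return the stored pattern most similar to the input, provided it
    clears the fuzzy-match threshold; otherwise None."""
    scored = [(difflib.SequenceMatcher(None, utterance, s).ratio(), s)
              for s in STORED]
    score, pattern = max(scored)
    return pattern if score >= threshold else None
```

An input that closely resembles a stored pattern is matched to it even without being identical, while unrelated input matches nothing, which is the basic behaviour a fuzzy pattern processor must balance against over-productivity.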
179. Optimisation of multiplier-less FIR filter design techniques
Cemes, Radovan, January 1996
This thesis is concerned with the design of multiplier-less (ML) finite impulse response (FIR) digital filters. The use of multiplier-less digital filters results in simplified filtering structures, better throughput rates and higher speed, characteristics that are very desirable in many DSP systems. The thesis concentrates on the design of digital filters with power-of-two coefficients, which result in simplified filtering structures. Two distinct classes of ML FIR filter design algorithms are developed and compared with traditional techniques. The first class is based on the sensitivity of filter coefficients to rounding to powers of two. Novel elements include extending the algorithm to multiple-band filters and introducing the mean square error as the sensitivity criterion, which improves the performance of the algorithm and reduces the complexity of the resulting filtering structures. The second class of filter design algorithms is based on evolutionary techniques, primarily genetic algorithms. Three different algorithms built on a genetic algorithm kernel are developed: a simple genetic algorithm, a knowledge-based genetic algorithm, and a hybrid of a genetic algorithm and simulated annealing. The inclusion of additional knowledge has been found very useful when re-designing filters or refining previous designs. Hybrid techniques are useful when exploring large, N-dimensional search spaces: the genetic algorithm is used to explore the search space rapidly, followed by a fine search using simulated annealing. This approach has been found beneficial for the design of high-order filters. Finally, a formula for estimating the filter length from its specification, complementing both classes of design algorithms, has been evolved using techniques of symbolic regression and genetic programming. Although the evolved formula is complex and not easily understandable, statistical analysis has shown that it produces more accurate results than the traditional Kaiser formula. In summary, several novel algorithms for the design of multiplier-less digital filters have been developed. They outperform traditional techniques used for the design of ML FIR filters and hence contribute to knowledge in the field of ML FIR filter design.
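The core quantisation step behind the first class of algorithms, rounding each coefficient to a signed power of two and scoring the result by mean square error, can be sketched as follows. The sensitivity ordering and re-optimisation stages of the actual algorithms are omitted, and the exponent range is an invented illustrative choice.

```python
import math

def round_to_pow2(c, min_exp=-10):
    """Nearest signed power of two to c, with exponents clamped at min_exp;
    coefficients far smaller than 2**min_exp are quantised to zero."""
    if c == 0.0:
        return 0.0
    if abs(c) < 2.0 ** min_exp / 2.0:          # too small to represent
        return 0.0
    e = max(math.floor(math.log2(abs(c))), min_exp)
    lo_p, hi_p = 2.0 ** e, 2.0 ** (e + 1)      # bracketing powers of two
    q = lo_p if abs(abs(c) - lo_p) <= abs(abs(c) - hi_p) else hi_p
    return math.copysign(q, c)

def quantise_filter(h):
    """Quantise all taps and report the mean square rounding error,
    the sensitivity criterion named in the abstract."""
    hq = [round_to_pow2(c) for c in h]
    mse = sum((a - b) ** 2 for a, b in zip(h, hq)) / len(h)
    return hq, mse
```

With power-of-two taps, each coefficient multiplication in the filtering structure collapses to a binary shift, which is the source of the simplified hardware the abstract describes.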
180. Optical waveguide analysis using transmission lines
Qian, Xin, January 2005
Optical fibres have been used as a key medium for telecommunications and networking for more than two decades because, in principle, they offer sufficient transmission capacity, reaching total rates as high as terabits per second per fibre. Critical fibre properties such as mode field profiles, single-mode propagation conditions and dispersion characteristics can all be related to the optical fibre refractive index profile. For this reason, it is of fundamental importance to be able to determine the optical fibre refractive index profile. In this thesis, a novel Transmission-Line technique has been studied and extended for both the forward and the inverse solution. In the forward solution, it is shown that the technique is capable not only of determining exactly the propagation constants in optical fibres with real refractive index profiles, but also of evaluating accurately the complex propagation constants in single-mode fibres with arbitrary complex refractive index profiles. To illustrate the effectiveness of the technique, it is applied to the evaluation and manipulation of the gain in a typical 980 nm pumped Erbium-doped fibre, as well as to the calculation of the attenuation of optical fibres when radial loss factors are present. Moreover, based on the Transmission-Line equivalent circuit model, exact analytical formulas are derived for a recursive algorithm which allows direct and efficient calculation of the dispersion of optical fibres with arbitrary refractive index profiles. The proposed algorithm computes dispersion directly from the propagation constants, without the need for curve fitting and subsequent numerical differentiation, resulting in savings in both storage memory and computation time. In the inverse solution, the synthesis of the optical fibre refractive index profile from a given mode electric field distribution is developed and demonstrated. The application of Transmission-Line principles to the study of optical fibre properties was first developed in the early 1980s. Until now, however, the potential of the Transmission-Line technique for the design of optical fibres from a given electric field pattern had not been examined. From Maxwell's equations, Transmission-Line equivalent circuits are derived for a homogeneous symmetric optical fibre. This work demonstrates how to use the Transmission-Line model to reconstruct the exact refractive index profile from electric field data. The accuracy of the reconstructed optical fibre refractive index profile is examined numerically.
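As a small numerical illustration of how a refractive index profile determines single-mode propagation, the textbook step-index V-number condition can be evaluated directly; this is standard fibre-optics background, not the thesis's Transmission-Line formulation, and the fibre parameters below are typical invented values.

```python
import math

def v_number(core_radius_um, wavelength_um, n_core, n_clad):
    """Normalised frequency V of a step-index fibre."""
    na = math.sqrt(n_core ** 2 - n_clad ** 2)        # numerical aperture
    return 2.0 * math.pi * core_radius_um * na / wavelength_um

def is_single_mode(v):
    """A step-index fibre is single-mode below the LP11 cutoff, V < 2.405
    (the first zero of the Bessel function J0)."""
    return v < 2.405
```

For a typical core radius of 4.1 µm and index step of about 0.006, the fibre is single-mode at 1550 nm but multimode at 850 nm, showing how the same index profile yields different propagation conditions with wavelength.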