121

The development of a sensitive and reliable molecular method for the detection of human pathogenic viruses in bivalve molluscs

Milne, Sarah Amelia January 2002 (has links)
The overall aim of this project was to develop a sensitive, specific and reliable molecular assay for the detection of human pathogenic viruses in shellfish. In initial studies, 'viral surrogates' were used to evaluate two different assay formats, the reverse transcriptase-polymerase chain reaction-enzyme-linked immunosorbent assay (RT-PCR-ELISA) and the enzyme-labelled deoxyribonucleic acid-enzyme-linked coagulation assay (EDNA-ELCA). Each format was tested for ease of use, reliability and sensitivity when compared with ethidium bromide gel detection. The RT-PCR-ELISA proved to be a successful alternative to ethidium bromide gel electrophoresis, and studies involving virus detection in contaminated environmental samples were performed. The newly developed ELISA method successfully detected enterovirus (EV) and Norwalk-like viruses (NLVs) in artificially and naturally contaminated shellfish. In shellfish studies the ELISA had detection sensitivities of 10-100 and 100-1000 TCID50 PV, respectively, when a traditional elution/precipitation procedure and an immunocapture procedure were employed. The assay could also successfully detect virus in trout kidney samples artificially contaminated with infectious pancreatic necrosis virus (IPNV), with detection sensitivities of 10^5 and 10^7 pfu reported when the elution/precipitation and immunocapture procedures were used. In naturally contaminated shellfish samples, positive NLV and EV detection was achieved using the ELISA method. The ability of the ELISA to positively detect virus in environmental samples was compared with the TaqMan quantitative PCR system, an alternative detection method. Both methods were used to screen various contaminated environmental samples, including faeces and shellfish. The ELISA performed well and, unlike the TaqMan system, required no optimisation for each sample matrix tested. Overall the ELISA was shown to be a very robust and sensitive method. The technique was easily established in a new laboratory and no specialised equipment was required to perform the assay. The method has a high sample throughput, capable of screening 96 samples per run. Each sample takes only approximately 50 s to screen, making the technique extremely time efficient. The ELISA is a safe, quick, reliable technique with great potential for use as a standard virus detection method in a standardly equipped laboratory.
122

The use of ESR spectroscopy for the detection of irradiated crustacea with particular reference to Nephrops norvegicus (Norway lobster)

Stewart, Eileen Mary January 1993 (has links)
No description available.
123

A CCD based camera for digital imaging of the nightglow

MacIntosh, Michael J. January 1986 (has links)
This thesis deals with the development of a microprocessor-controlled CCD-based camera for digital imaging of the nightglow. A brief description of the techniques used to image the nightglow is given and the reasons for choosing a CCD as the detector are discussed. The fundamentals of CCD operation are then described with particular emphasis on buried channel CCD image sensors, as the P8603 CCD used in the camera is of this type. A major part of the thesis is devoted to the detailed design of the camera electronics, which consists of three main sections: (i) a MC6802 based microprocessor controller with 4 K of ROM and 64 K of dynamic RAM; (ii) a display interface which allows an on-line display of the images to be produced on an oscilloscope for monitoring purposes while observing; and (iii) the CCD interface which consists of the drive pulse buffers for the image, store and readout sections of the CCD, the bias voltage generators for the CCD on-chip charge amplifier, and the signal processing electronics which has a choice of four software selectable gains and uses correlated double sampling to achieve low noise levels. The design of a digital cassette system for recording the image data is also described. The system, which is based on a low cost stereo cassette recorder, accepts and produces data in the same RS232 serial format used by the camera and is capable of operating at up to 9600 baud on two channels. A further section deals with the optical, structural and cryogenic design. This includes a description of the camera optical system which is based on a commercial F1.4 CCTV lens, theoretical calculations of the expected response of the camera to a range of nightglow emissions, the design of the liquid nitrogen cryostat which is used to cool the CCD, the design of the camera chassis, and calculations to determine (i) the CCD temperature required to reduce the dark current to an acceptable level; and (ii) the capacity of the liquid nitrogen reservoir which is necessary to allow a whole night's observing without refilling. The detailed operation of the camera control program, which is written in 6800 assembly language, is then described with the aid of flowcharts. Currently the control program is set up to give a one minute integration period using half-frame imaging and a 3 x 2 pixel amalgamation. The final section of the thesis deals with the testing and performance of the camera. Several experiments were carried out, including the measurement of the various possible amplifier gains, the noise performance of the system, the angular response of the camera optics, and the calibration of the camera using a standard light to allow the absolute intensity of nightglow emissions to be calculated. Theoretical calculations of the expected noise levels and the expected response of the camera to the standard light are also included. A suite of image processing programs, written in Pascal for an Apple II microcomputer, is then described. These programs allow various operations to be performed such as scanning the images stored on tape, and correcting for the defective columns on the CCD and the angular response of the camera optics. Lastly, the performance of the camera in the field is discussed and the results of observations made locally, which include photographs of images believed to show hydroxyl airglow structure, are presented.
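The correlated double sampling mentioned in the signal-processing electronics can be illustrated with a short numerical sketch: each pixel's output is sampled once at the reset level and once after the charge is dumped onto the output node, and the difference cancels the reset-level uncertainty. The noise figures and signal values below are illustrative assumptions, not parameters of the P8603 camera described in the abstract.

```python
import numpy as np

# Minimal sketch of correlated double sampling (CDS) for CCD readout.
rng = np.random.default_rng(0)

n_pixels = 1000
signal = rng.uniform(0, 500, n_pixels)                # photo-generated signal (arbitrary units)
reset_noise = rng.normal(0, 20, n_pixels)             # reset (kTC) level, common to both samples
fast_noise = lambda: rng.normal(0, 2, n_pixels)       # fast amplifier noise, independent per sample

reference_sample = reset_noise + fast_noise()             # sample of the reset level
video_sample = reset_noise + signal + fast_noise()        # sample after charge transfer

cds_output = video_sample - reference_sample               # reset-level uncertainty cancels
print("residual error std:", np.std(cds_output - signal))  # ~ sqrt(2) x fast-noise std, not 20
```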
124

Target Detection Using a Wavelet-Based Fractal Scheme

Stein, Gregory W. 22 May 2006 (has links)
In this thesis, a target detection technique using a rotational invariant wavelet-based scheme is presented. The technique is evaluated on Synthetic Aperture Radar (SAR) imagery and compared with a previously developed fractal-based technique, namely the extended fractal (EF) model. Both techniques attempt to exploit the textural characteristics of SAR imagery. Recently, a wavelet-based fractal feature set, similar to the proposed one, was compared with the EF feature for a general texture classification problem. The wavelet-based technique yielded a lower classification error than EF, which motivated the comparison between the two techniques presented in this paper. Experimental results show that the proposed technique's feature map provides a lower false alarm rate than the previously developed method.
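The abstract does not spell out the feature itself, but the general idea behind wavelet-based fractal features, relating wavelet detail energy across dyadic scales to a Hurst-like exponent, can be sketched as follows. The Haar decomposition and the slope-based feature here are illustrative assumptions, not Stein's exact scheme.

```python
import numpy as np

def wavelet_fractal_feature(patch):
    """Generic sketch: slope of log2(detail energy) versus decomposition level,
    using a separable Haar wavelet step implemented with plain numpy."""
    x = patch.astype(float)
    log_energies, levels = [], []
    level = 1
    while min(x.shape) >= 4:
        x = x[: x.shape[0] // 2 * 2, : x.shape[1] // 2 * 2]    # crop to even size
        lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)            # row-wise Haar lowpass
        hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)            # row-wise Haar highpass
        ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)          # approximation band
        details = np.concatenate([
            ((lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)).ravel(),  # LH band
            ((hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)).ravel(),  # HL band
            ((hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)).ravel(),  # HH band
        ])
        log_energies.append(np.log2(np.mean(details ** 2) + 1e-12))
        levels.append(level)
        x, level = ll, level + 1
    # the energy-versus-scale slope relates to a Hurst-like exponent; evaluating it
    # in a sliding window over a SAR scene would give a texture feature map
    return np.polyfit(levels, log_energies, 1)[0]

patch = np.random.default_rng(0).random((64, 64))   # stand-in for a SAR image chip
print(wavelet_fractal_feature(patch))
```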
125

Automatic novice program comprehension for semantic bug detection

Ade-Ibijola, Abejide Olu January 2016 (has links)
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfillment of the requirements for the Degree of Doctor of Philosophy in Computer Science, April 2016 / Automatically comprehending novice programs with the aim of giving useful feedback has been an Artificial Intelligence problem for over four decades. Solving this problem basically entails manipulating the underlying program plans; i.e. extracting the novice's plan, comparing it to the expert's plan, and inferring where the novice's bug lies. The bugs of interest in this domain are often semantic bugs, as all syntactic bugs are handled by the automatic debuggers built into most compilers. Hence, a program that debugs like the human expert should understand the problem and know the expected solution(s) in order to detect semantic bugs. This work proposes a new approach to comprehending novice programs using: regular expressions for the recognition of plans in program text, principles from formal language theory for defining the space of program plan variations, and automata-based algorithms for the detection of semantic bugs. The new approach was tested with a repository of novice programs with known semantic bugs, and the specific bugs were detected. As a proof of concept, the theories presented in this work are further implemented in software prototypes. If the new idea is implemented in a robust software tool, it will find applications in comprehending first-year students' programs, thereby supporting the human expert in teaching programming.
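A toy illustration of the regular-expression plan-recognition idea follows. The plan pattern, the accumulator bug, and the novice program text are invented for illustration and are not taken from the thesis' plan library or bug repository.

```python
import re

# Sketch: recognise a simple "accumulator" plan in novice Python source with a
# regular expression, then flag a semantic bug if the accumulator is re-initialised
# inside the loop body (syntactically valid code, semantically wrong).
ACCUMULATOR_PLAN = re.compile(
    r"(?P<acc>\w+)\s*=\s*0\s*\n"            # plan step 1: initialise accumulator
    r"for\s+(?P<var>\w+)\s+in\s+.+:\s*\n"    # plan step 2: iterate over a collection
    r"(?P<body>(?:[ \t]+.+\n?)+)"            # plan step 3: indented loop body
)

novice_program = """total = 0
for mark in marks:
    total = 0
    total = total + mark
"""

match = ACCUMULATOR_PLAN.search(novice_program)
if match:
    acc = match.group("acc")
    body = match.group("body")
    # semantic bug: the accumulator is reset to zero on every iteration
    if re.search(rf"^\s+{acc}\s*=\s*0\s*$", body, flags=re.MULTILINE):
        print(f"possible semantic bug: '{acc}' is re-initialised inside the loop")
```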
126

Detecção autônoma de intrusões utilizando aprendizado de máquina / Autonomous intrusion detection via machine learning

Ferreira, Eduardo Alves 05 May 2011 (has links)
A evolução da tecnologia da informação popularizou o uso de sistemas computacionais para a automação de tarefas operacionais. As tarefas de implantação e manutenção desses sistemas computacionais, por outro lado, não acompanharam essa tendência de forma ágil, tendo sido, por anos, efetuadas de forma manual, implicando alto custo, baixa produtividade e pouca qualidade de serviço. A fim de preencher essa lacuna foi proposta uma iniciativa denominada Computação Autônoma, a qual visa prover capacidade de autogerenciamento a sistemas computacionais. Dentre os aspectos necessários para a construção de um sistema autônomo está a detecção de intrusão, responsável por monitorar o funcionamento e fluxos de dados de sistemas em busca de indícios de operações maliciosas. Dado esse contexto, este trabalho apresenta um sistema autônomo de detecção de intrusões em aplicações Web, baseado em técnicas de aprendizado de máquina com complexidade computacional próxima de linear. Esse sistema utiliza técnicas de agrupamento de dados e de detecção de novidades para caracterizar o comportamento normal de uma aplicação, buscando posteriormente por anomalias no funcionamento das aplicações. Observou-se que a técnica é capaz de detectar ataques com maior autonomia e menor dependência sobre contextos específicos em relação a trabalhos anteriores. / The use of computers to automatically perform operational tasks is commonplace, thanks to the evolution of information technology. The maintenance of computer systems, on the other hand, is commonly performed manually, resulting in high costs, low productivity and low quality of service. The Autonomous Computing initiative aims to approach this limitation through self-management of computer systems. In order to assemble a fully autonomous system, an intrusion detection application is needed to monitor the behavior and data flows of applications. Considering this context, an autonomous Web intrusion detection system is proposed, based on machine-learning techniques with near-linear computational complexity. This system is based on clustering and novelty detection techniques, characterizing an application's behavior to later pinpoint anomalies in live applications. By conducting experiments, we observed that this new approach is capable of detecting anomalies with less dependency on specific contexts than previous solutions.
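A minimal sketch of the clustering-plus-novelty-detection idea described in the abstract, using scikit-learn's MiniBatchKMeans on synthetic request features: normal behaviour is summarised by cluster centroids, and a new request is flagged when its distance to the nearest centroid exceeds a threshold learned from training data. The feature set, cluster count, and threshold are illustrative assumptions rather than the thesis' actual pipeline or parameters.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(42)
# Placeholder feature vectors for "normal" Web requests (e.g. request length,
# parameter count, response size, ...); real features would come from HTTP logs.
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))

model = MiniBatchKMeans(n_clusters=20, random_state=0).fit(normal_traffic)
train_dist = np.min(model.transform(normal_traffic), axis=1)   # distance to nearest centroid
threshold = np.quantile(train_dist, 0.99)                      # tolerate 1% training outliers

def is_anomalous(request_features):
    """Distance-based novelty detector over the learned clusters."""
    dist = np.min(model.transform(request_features.reshape(1, -1)), axis=1)[0]
    return dist > threshold

print(is_anomalous(rng.normal(0, 1, 8)))   # typically False: looks like normal traffic
print(is_anomalous(np.full(8, 10.0)))      # True: far from every learned cluster
```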
127

Automatic Pitch Detection and Shifting of Musical Tones in Real Time

Kim, Jinho January 2013 (has links)
Thesis advisor: Sergio Alvarez / Musical notes are acoustic stimuli with specific properties that trigger a psychological perception of pitch. Pitch is directly associated with the fundamental frequency of a sound wave, which is typically the lowest frequency of a periodic waveform. Shifting the perceived pitch of a sound wave is most easily done by changing the playback speed, but this method warps some of the characteristics and changes the time scale. This thesis aims to accurately shift the pitch of musical notes while preserving their other characteristics, and it implements this in real time on an Android device. There are various methods of detecting and shifting pitch, but in the interests of simplicity, accuracy, and speed, a three-step process is used. First, the fundamental pitch of a stable periodic section of the signal is found using the Yin pitch detection algorithm. Secondly, pitch marks that represent the local peaks of energy are found, each spaced out by roughly one period (the inverse of the fundamental frequency). Lastly, these marks are used in the Pitch Synchronous Overlap and Add (PSOLA) algorithm to generate a new signal with the desired fundamental frequency and acoustical characteristics similar to those of the original signal. / Thesis (BS) — Boston College, 2013. / Submitted to: Boston College. College of Arts and Sciences. / Discipline: Computer Science Honors Program. / Discipline: Computer Science.
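The Yin stage of that pipeline can be sketched compactly: compute the difference function over candidate lags, normalise it with the cumulative mean, and take the first lag that dips below an absolute threshold. The frame size, threshold, and search range below are illustrative assumptions, not the thesis' settings, and the PSOLA resynthesis step is omitted.

```python
import numpy as np

def yin_pitch(frame, sample_rate, fmin=80.0, fmax=1000.0, threshold=0.15):
    """Bare-bones sketch of the Yin pitch detector (difference function plus
    cumulative mean normalised difference)."""
    max_lag = int(sample_rate / fmin)
    min_lag = int(sample_rate / fmax)
    frame = frame.astype(float)
    # step 1: difference function d(tau) over candidate lags
    d = np.array([np.sum((frame[:-lag] - frame[lag:]) ** 2) for lag in range(1, max_lag + 1)])
    # step 2: cumulative mean normalised difference d'(tau)
    cmnd = d * np.arange(1, max_lag + 1) / np.maximum(np.cumsum(d), 1e-12)
    # step 3: first lag below the absolute threshold (fall back to the global minimum)
    candidates = np.where(cmnd[min_lag:] < threshold)[0]
    lag = (candidates[0] + min_lag if candidates.size else np.argmin(cmnd[min_lag:]) + min_lag) + 1
    return sample_rate / lag   # fundamental frequency estimate in Hz

sr = 44100
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 220.0 * t) + 0.3 * np.sin(2 * np.pi * 440.0 * t)
print(round(yin_pitch(frame, sr), 1))   # expected to be close to 220.0 Hz
```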
128

Effect of noise on bathymetric side scan profiling sonar system resolution

Al-Naimy, Mahmood January 1983 (has links)
Rapid developments in the search for means to provide detailed seabed mapping have led to the introduction of the Bathymetric Side Scan Profiling Sonar (BSSPS) System, which uses a two-transducer interferometer to map seabed features. Error in the resolved relative phase of the BSSPS system has crippled its application for resolving very detailed seabed features. The present work is concerned with the study, analysis and evaluation of the sources contributing to the system relative-phase error. Most of the sources of noise contributing to the relative-phase error can be prevented or reduced by good instrumentation and careful design, except for glint and the newly introduced source of noise, sliding ladder (SL). These sources were found to be unavoidable and cannot be eliminated, being part of the backscattered signal. Glint is influenced only by the angle of incidence, y, the transducer separation, d, and the pulse duration, t. Sliding ladder noise is influenced by the grazing angle, e (the angle of reception relative to the boresight), the pulse duration, t, and the transducer separation, d. Reducing t has the effect of slightly reducing the relative-phase error due to glint, but greatly increasing it due to SL. Alternatively, reducing d has the effect of reducing the error due to both glint and SL, but it degrades the system resolving power. The choice of d and t is decided by the type of application and the required resolution. This work also develops the design and implementation of the inverse tan method used to separate the relative phase and envelope of the two received signals. The inverse tan method for resolving the relative phase (complex signal processor) is found to be simple, easy to implement, and accurate. In order to study the contribution and effect of the individual sources of noise on the relative-phase error, the BSSPS system was simulated. The designed computer model proved to be flexible, reliable and very useful. It was extensively used to test theoretical analysis as well as to achieve individual and collective glint and SL effects. The system was also employed to test the influence of some of the system parameters on the sources of noise. Using the simulated BSSPS system we were able to provide some valuable guidelines for sonar design and application concerning resolution, optimum mapped distance and an approach to reduce the relative-phase error (averaging). Applications of the present findings are not restricted to sonar systems, but would be just as useful to similar radar applications.
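The inverse tan (complex signal processor) step can be illustrated with a short numerical sketch: with quadrature (I/Q) samples from the two transducer channels, the relative phase is recovered with atan2 of the averaged conjugate product, and the arrival angle follows from the interferometer relation phi = (2*pi*d/lambda)*sin(theta). The carrier frequency, separation, and noise level below are invented for illustration and are not system parameters from the thesis.

```python
import numpy as np

c = 1500.0                 # sound speed in water, m/s
f = 100e3                  # carrier frequency, Hz (assumed)
wavelength = c / f
d = 0.5 * wavelength       # transducer separation, kept <= lambda/2 to avoid phase ambiguity

theta_true = np.deg2rad(20.0)                          # true arrival angle
phi = 2 * np.pi * d / wavelength * np.sin(theta_true)  # expected relative phase

rng = np.random.default_rng(1)
s1 = np.exp(1j * (rng.uniform(0, 2 * np.pi) + np.zeros(256)))   # channel 1 baseband samples
s2 = s1 * np.exp(-1j * phi)                                     # channel 2: phase-delayed copy
s1 = s1 + 0.05 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
s2 = s2 + 0.05 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))

cross = np.mean(s1 * np.conj(s2))                # averaged conjugate product
phase = np.arctan2(cross.imag, cross.real)       # the inverse tan step
theta_est = np.arcsin(phase * wavelength / (2 * np.pi * d))
print(np.rad2deg(theta_est))                      # close to 20 degrees
```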
129

Real-time surveillance system: video, audio, and crowd detection. / CUHK electronic theses & dissertations collection

January 2008 (has links)
A learning-based approach to detect abnormal audio information is presented, which can be applied to audio surveillance systems that work alone or as supplements to video surveillance systems. / An automatic surveillance system is also presented that can generate a density map with multi-resolution cells and calculate the density distribution of the image by using a texture analysis technique. Based on the estimated density distribution, the SVM method is used to solve the classification problem of detecting abnormal situations caused by changes in density distribution. / Anti-terrorism has become a global issue, and surveillance has become increasingly popular in public places such as elevators, banks, airports, and casinos. With traditional surveillance systems, human observers inspect the monitor arrays. However, with screen arrays becoming larger as the number of cameras increases, human observers may feel burdened, lose concentration, and make mistakes, which may be significant in such crucial positions as security posts. To solve this problem, I have developed an intelligent surveillance system that can understand human actions in real-time. / I have built a low-cost PC-based real-time video surveillance system that can model and analyze human real-time actions based on learning by demonstration. By teaching the system the difference between normal and abnormal human actions, the computational action models built inside the trained machines can automatically identify whether newly observed behavior requires security interference. The video surveillance system can detect the following abnormal behavior in a crowded environment using learning algorithms: (1) running people in a crowded environment; (2) falling down movements when most people are walking or standing; and (3) a person carrying an abnormally long bar in a square. Even a person running and waving a hand in a very crowded environment can be detected using an optical flow algorithm. / I have developed a real-time face detection and classification system in which the classification problem is to classify the front of a face as Asian or non-Asian. I combine the selected principal component analysis (PCA) and independent component analysis (ICA) features into a support vector machine (SVM) classifier to achieve a good classification rate. The system can also be used for other binary classifications of face images, such as gender and age classification, without much modification. / This thesis establishes a framework for video, audio, and crowd surveillance, and successfully implements it on a mobile surveillance robot. The work is of significance in understanding human behavior and the detection of abnormal events, and has potential applications in areas such as security monitoring in household and public spaces. / To test my algorithms, the video and audio surveillance technology is implemented on a mobile platform to develop a household surveillance robot. The robot can detect a moving target and track it across a large field of vision using a pan/tilt camera platform, and can detect abnormal behavior in a cluttered environment, such as a person suddenly running or falling down on the floor. When abnormal audio information is detected, a camera on the robot is triggered to further confirm the occurrence of the abnormal event. / Wu, Xinyu. / "May 2008." / Adviser: Yangsheng Xu. / Source: Dissertation Abstracts International, Volume: 70-03, Section: B, page: 1915.
/ Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (p. 101-109). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
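The PCA-plus-ICA feature combination feeding an SVM, as described for the face classifier in the abstract above, can be sketched with scikit-learn. The placeholder data, component counts, and kernel settings are assumptions for illustration; the thesis' feature-selection step and real face images are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 32 * 32))    # 400 flattened face crops (random placeholder data)
y = rng.integers(0, 2, size=400)       # binary label, e.g. Asian / non-Asian (placeholder)

pca = PCA(n_components=40).fit(X)
ica = FastICA(n_components=20, random_state=0).fit(X)
features = np.hstack([pca.transform(X), ica.transform(X)])   # concatenated PCA + ICA features

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(features, y)
print(clf.score(features, y))   # training accuracy on the placeholder data only
```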