1 |
Requirements for digitized aircraft spotting (Ouija) board for use on U.S. Navy aircraft carriers / Thate, Timothy J.; Michels, Adam S. January 2002
Thesis (M.S. in Information Systems Management)--Naval Postgraduate School, September 2002. / Thesis advisor(s): Alex Bordetsky, Glenn Cook. Includes bibliographical references. Also available online.
|
2 |
The application of classical information retrieval techniques to spoken documents / James, David Anthony. January 1995
No description available.
|
3 |
Efficient Temporal Action Localization in Videos / Alwassel, Humam. 17 April 2018
State-of-the-art temporal action detectors inefficiently search the entire video for specific actions. Despite the encouraging progress these methods achieve, it is crucial to design automated approaches that explore only the parts of the video most relevant to the actions being searched. To address this need, we propose the new problem of action spotting in videos, which we define as finding a specific action in a video while observing only a small portion of that video. Inspired by the observation that humans are extremely efficient and accurate in spotting and finding action instances in a video, we propose Action Search, a novel Recurrent Neural Network approach that mimics the way humans spot actions. Moreover, to address the absence of data recording the behavior of human annotators, we put forward the Human Searches dataset, which compiles the search sequences employed by human annotators spotting actions in the AVA and THUMOS14 datasets. We consider temporal action localization as an application of the action spotting problem. Experiments on the THUMOS14 dataset reveal that our model not only explores the video efficiently (observing on average 17.3% of the video) but also accurately finds human activities, achieving 30.8% mAP (0.5 tIoU) and outperforming state-of-the-art methods.
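The sketch below illustrates, in schematic Python/PyTorch form, the kind of search loop the abstract describes: a recurrent model looks at one frame at a time and predicts the next temporal location to inspect. The feature dimensions, the regression head, and the fixed step budget are illustrative assumptions, not the thesis' Action Search model.

```python
# Minimal sketch of a search-style spotting loop: an RNN observes one frame
# at a time and regresses the next (normalized) temporal position to visit.
# Feature dimension, stopping rule, and architecture are assumptions.
import torch
import torch.nn as nn


class SearchRNN(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        # LSTM state accumulates what has been observed so far.
        self.cell = nn.LSTMCell(feat_dim + 1, hidden_dim)
        # Head regresses the next normalized temporal position to inspect.
        self.next_loc = nn.Linear(hidden_dim, 1)

    def step(self, frame_feat, cur_loc, state):
        x = torch.cat([frame_feat, cur_loc], dim=-1)
        h, c = self.cell(x, state)
        return torch.sigmoid(self.next_loc(h)), (h, c)


def spot(model, video_feats, max_steps=10):
    """Iteratively sample frames until the step budget is spent.
    video_feats: (T, feat_dim) precomputed per-frame features."""
    T = video_feats.shape[0]
    loc = torch.tensor([[0.5]])        # start searching at the middle
    state = None
    visited = []
    for _ in range(max_steps):
        idx = int(loc.item() * (T - 1))
        visited.append(idx)
        feat = video_feats[idx].unsqueeze(0)
        loc, state = model.step(feat, loc, state)
    return visited                     # frames observed: a small fraction of T


if __name__ == "__main__":
    model = SearchRNN()
    dummy_video = torch.randn(300, 512)   # stand-in for 300 frames of CNN features
    print(spot(model, dummy_video))
```

The point of the structure is that the model observes only `max_steps` frames out of the whole video, which is what makes this formulation efficient compared with exhaustive sliding-window detection.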
|
4 |
The effects of common household cleaning agents and aging on the removal of quantitatively applied food stains from rayon, nylon, and olefin pile upholstery fabrics / Hofbauer, Brenda Hess. January 1982
The objectives of this research were to quantitatively apply food stains to a rayon, a nylon, and an olefin pile upholstery fabric, and to determine the effects of aging times and cleaning agents on their removal. Another objective was to correlate the instrumental color change measurements with ratings obtained from a consumer panel.
The specimens were soiled with mustard, vegetable oil, milk, and syrup. After aging for one day or two weeks, the specimens were treated for stain removal with a detergent-vinegar solution, perchloroethylene, isopropyl alcohol, or ammonia water while attached to a simulated chair arm.
Soil removal was evaluated by measuring light reflectance and color values on a Hunter Color-Difference Meter®. A consumer panel rated the specimens according to AATCC Stain Release Replicas, and stated whether or not each specimen was acceptable for use in their homes.
Statistical analyses indicated the following major conclusions: (1) the fabric and stain variables significantly affected the instrumental values of color change; (2) the variables exhibiting a significant effect on the consumer ratings were fabric, stain, and stain remover; (3) the rayon fabric responded the least favorably of the three fabrics to the treatments; (4) the milk and mustard stains tended to be the most easily removed, while the oil and syrup stains were more difficult; and (5) a correlation existed between the instrumental values and the consumer ratings of color change. / Master of Science
|
5 |
From Time series signal matching to word spotting in multilingual historical document images / De la mise en correspondance de séries temporelles au word spotting dans les images de documents historiques multilingues / Mondal, Tanmoy. 18 December 2015
Cette thèse traite de la mise en correspondance de séquences appliquée au word spotting (localisation de mots-clés dans des images de documents sans en interpréter le contenu). De nombreux algorithmes existent mais très peu d'entre eux ont été évalués dans ce contexte. Nous commençons donc par une étude comparative de ces méthodes sur plusieurs bases d'images de documents historiques. Nous proposons ensuite un nouvel algorithme réunissant la plupart des possibilités offertes séparément dans les autres algorithmes. Ainsi, le FSM (Flexible Sequence Matching) permet de réaliser des correspondances multiples sans considérer des éléments bruités dans la séquence cible, qu'ils se situent au début, à la fin ou bien au cœur de la correspondance. Nous étendons ensuite ces possibilités à la séquence requête en définissant un nouvel algorithme (ESC : Examplary Sequence Cardinality). Finalement, nous proposons une méthode d'appariement alternative utilisant une mise en correspondance inexacte de chaînes de codes (shape code) décrivant les mots. / This thesis deals with sequence matching techniques applied to word spotting (locating keywords in document images without interpreting their content). Several sequence matching techniques exist in the literature, but very few of them have been evaluated in the context of word spotting. This thesis begins with a comparative study of these methods for word spotting on several datasets of historical images. After analyzing these approaches, we propose a new algorithm, called Flexible Sequence Matching (FSM), which combines most of the advantages offered separately by several other previously explored sequence matching algorithms. Thus, FSM is able to skip outliers in the target sequence, whether they occur at the beginning, at the end, or in the middle of the sequence. Moreover, it can perform one-to-one, one-to-many, and many-to-one correspondences between the query and the target sequence without considering noisy elements in the target sequence. We then extend these characteristics to the query sequence by defining a new algorithm (ESC: Examplary Sequence Cardinality). Finally, we propose an alternative word matching technique that uses inexact matching of chain codes (shape codes) describing the words.
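As a rough illustration of the kind of flexible alignment described above, the following dynamic-programming sketch extends a DTW-style recursion with a transition that skips noisy target elements and with free start and end positions in the target. The transition set and the skip penalty are simplifying assumptions, not the exact FSM formulation from the thesis.

```python
# DTW-like alignment that tolerates outliers in the target sequence and
# allows one-to-many and many-to-one correspondences. The skip penalty
# and transition set are illustrative assumptions.
import numpy as np


def flexible_match(query, target, skip_penalty=0.5):
    """Return the minimal alignment cost of `query` against `target`,
    tolerating skipped (outlier) target elements."""
    n, m = len(query), len(target)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                      # free start anywhere in the target
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(query[i - 1] - target[j - 1])   # local distance
            D[i, j] = d + min(
                D[i - 1, j - 1],       # one-to-one match
                D[i - 1, j],           # many-to-one (query side expands)
                D[i, j - 1],           # one-to-many (target side expands)
            )
            # Alternatively, skip a noisy target element instead of matching it.
            D[i, j] = min(D[i, j], D[i, j - 1] + skip_penalty)
    return D[n].min()                  # free end anywhere in the target


if __name__ == "__main__":
    q = [1.0, 2.0, 3.0]
    t = [0.9, 9.0, 2.1, 3.2, 0.1]      # 9.0 and 0.1 act as outliers
    print(flexible_match(q, t))
```

In a word-spotting setting, `query` and `target` would be per-column feature profiles of a keyword image and of a candidate word image rather than scalars; the recursion stays the same.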
|
6 |
Concept Design and Testing of a GPS-less System for Autonomous Shovel-Truck Spotting / Owens, Brett. 29 January 2013
Haul truck drivers frequently have difficulty spotting beside shovels, typically because of a combination of reduced visibility and poor mining conditions. Based on first-hand data collected from the Goldstrike Open Pit, it was learned that, on average, 9% of all spotting actions required corrective movements to facilitate loading. This thesis investigates an automated solution to haul truck spotting that does not rely on the satellite global positioning system (GPS), since GPS can perform unreliably. This thesis proposes that if spotting were automated, a significant decrease in cycle times could result.
Using conventional algorithms and techniques from the field of mobile robotics, vehicle pose estimation and control algorithms were designed to enable autonomous shovel-truck spotting. The developed algorithms were verified through both simulation and field testing with real hardware. Tests were performed in analog conditions on an automation-ready Kubota RTV 900 utility truck. When initiated from a representative pose, the RTV successfully spotted to the desired location (within 1 m) in 95% of the conducted trials. The results demonstrate that the proposed approach is a strong candidate for an auto-spot system. / Thesis (Master, Mining Engineering) -- Queen's University, 2013-01-28 09:49:20.584
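For context, the snippet below sketches a textbook "go to pose" proportional controller of the kind used in mobile robotics for driving a vehicle to a spotting point. The gains, the unicycle kinematics, and the 1 m tolerance are illustrative assumptions and do not reproduce the controller developed in the thesis.

```python
# Illustrative "drive to a spot point" loop with a simple proportional
# controller over distance and heading error, integrated on a unicycle model.
import math


def go_to_pose(x, y, theta, gx, gy, k_rho=0.5, k_alpha=1.5, dt=0.1, tol=1.0):
    """Drive a unicycle-model vehicle from (x, y, theta) toward goal (gx, gy).
    Returns the trajectory of visited positions."""
    path = [(x, y)]
    for _ in range(1000):
        dx, dy = gx - x, gy - y
        rho = math.hypot(dx, dy)               # distance to the spot point
        if rho < tol:                          # within spotting tolerance
            break
        alpha = math.atan2(dy, dx) - theta     # heading error
        alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
        v = k_rho * rho                        # forward speed command
        w = k_alpha * alpha                    # turn-rate command
        # Integrate simple unicycle kinematics.
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
        path.append((x, y))
    return path


if __name__ == "__main__":
    traj = go_to_pose(0.0, 0.0, 0.0, gx=20.0, gy=10.0)
    print(f"reached after {len(traj)} steps, final position {traj[-1]}")
```

A GPS-less system of the kind investigated in the thesis would feed such a loop with pose estimates from onboard sensing rather than satellite positioning; the estimation side is not shown here.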
|
7 |
Geometric and Structural-based Symbol Spotting. Application to Focused Retrieval in Graphic Document Collections / Rusiñol Sanabra, Marçal. 18 June 2009
No description available.
|
8 |
The Minimal Word Hypothesis: A Speech Segmentation Strategy / Meador, Diane L. January 1996
Previous investigations have sought to determine how listeners might locate word boundaries in the speech signal for the purpose of lexical access. Cutler (1990) proposes the Metrical Segmentation Strategy (MSS), whereby only full vowels in stressed syllables and their preceding syllabic onsets are segmented from the speech stream. I report the results of several experiments which indicate that the listener segments the minimal word, a phonologically motivated prosodic constituent, during processing of the speech signal. These experiments were designed to contrast the MSS with two alternative prosodic hypotheses. The Syllable Hypothesis posits that listeners segment a linguistic syllable in its entirety as it is produced by the speaker. The Minimal Word Hypothesis proposes that a minimal word is segmented according to implicit knowledge the listener has concerning statistically probable characteristics of the lexicon. These competing hypotheses were tested using a word spotting method similar to that of Cutler and Norris (1988). The subjects' task was to detect real monosyllabic words embedded initially in bisyllabic nonce strings. Both open (CV) and closed (CVC) words were embedded in strings containing a single intervocalic consonant. The prosodic constituency of this consonant was varied by manipulating factors affecting prosodic structure: stress, the sonority of the consonant, and the quality of the vowel in the first syllable. The assumption behind the method is that word detection will be facilitated when the embedded-word and segmentation boundaries coincide. Results show that these factors are influential during segmentation. The degree of difficulty in word detection is a function of how well the speech signal corresponds to the minimal word. Findings run consistently counter to both the MSS and the Syllable Hypothesis. The Minimal Word Hypothesis takes advantage of statistical properties of the lexicon, ensuring a strategy that is successful more often than not. The minimal word specifies the smallest possible content word in a language in terms of prosodic structure while simultaneously affiliating the greatest amount of featural information within the structural limits. It therefore guarantees an efficient strategy with as few parses as possible.
|
9 |
Brake Judder - An Investigation of the Thermo-elastic and Thermo-plastic Effects during Braking / Bryant, David; Fieldhouse, John D.; Talbot, C.J. January 2011
This paper considers a study of the thermo-elastic behaviour of a disc brake during heavy braking. The work is concerned with developing design advice that provides uniform heating of the disc and, equally important, even dissipation of heat from the disc blade. The material presented emanates from a combination of modelling, on-vehicle testing and, mainly, laboratory observations and subsequent investigations. The experimental work makes use of a purpose-built high-speed brake dynamometer which incorporates the full vehicle suspension for controlled simulation of the brake and vehicle operating conditions. Advanced instrumentation allows dynamic measurement of brake pressure fluctuations, disc surface temperature and discrete vibrations. Disc run-out measurements using non-contacting displacement transducers show the disc taking up varying orders of deformation, ranging from first to third order, during high-speed testing. This surface interrogation during braking identifies disc deformation including disc warping, 'ripple' and the effects of 'hot spotting'. The mechanical measurements are complemented by thermal imaging of the brake, these images showing the vane and vent patterns on the surface of the disc. The results also include static surface scanning, or geometry analysis, of the disc, which is carried out at appropriate stages during testing. The work includes stress relieving of finished discs and subsequent dynamometer testing. This identifies in-service stress relieving, due to high heat input during braking, as a likely cause of disc 'warping'. It is also seen that an elastic wave is established during a braking event, the wave disappearing on release of the brake.
|
10 |
[en] A FEW-SHOT LEARNING APPROACH FOR VIDEO ANNOTATION / [pt] UMA ABORDAGEM FEW-SHOT LEARNING PARA ANOTAÇÃO DE VÍDEOS / Debora Stuck Delgado de Souza. 04 July 2024
[pt] Cada vez mais, os vídeos se tornam uma parte integrante de nossa vida cotidiana. Plataformas como YouTube, Facebook e Instagram recebem uma enorme quantidade de horas de vídeo todos os dias. Quando focamos na categoria de vídeos esportivos, é evidente o crescente interesse em obter dados estatísticos, especialmente no futebol. Isso é valioso tanto para melhorar a performance de atletas e equipes quanto para plataformas que utilizam essas informações, como as de apostas. Consequentemente, o interesse em resolver problemas relacionados à Visão Computacional tem aumentado. No caso do Aprendizado Supervisionado, a qualidade das anotações dos dados é mais um ponto importante para o sucesso das pesquisas. Existem várias ferramentas de anotação disponíveis no mercado, porém poucas com o foco nos quadros relevantes e com suporte a modelos de Inteligência Artificial. Neste sentido, este trabalho envolve a utilização da técnica de Transfer Learning com a extração de features em uma Rede Neural Convolucional (CNN); a investigação de um modelo de classificação baseado na abordagem Few-Shot Learning em conjunto com o algoritmo K-Nearest Neighbors (KNN); a avaliação dos resultados com abordagens diferentes para o balanceamento de classes; o estudo da geração do gráfico 2D com o t-Distributed Stochastic Neighbor Embedding (t-SNE) para análise das anotações; e a criação de uma ferramenta para anotação de frames importantes em vídeos, com o intuito de auxiliar as pesquisas e testes. / [en] More and more, videos are becoming an integral part of our daily life. Platforms like YouTube, Facebook and Instagram receive an enormous number of hours of video every day. When we focus on the sports video category, the growing interest in obtaining statistical data is evident, especially in soccer. This is valuable both for improving the performance of athletes and teams and for platforms that use this information, such as betting platforms. Consequently, interest in solving problems related to Computer Vision has increased. In the case of Supervised Learning, the quality of data annotations is another important point for the success of research. There are several annotation tools available on the market, but few focus on the relevant frames and support Artificial Intelligence models. In this sense, this work involves the use of the Transfer Learning technique for feature extraction with a Convolutional Neural Network (CNN); the investigation of a classification model based on the Few-Shot Learning approach together with the K-Nearest Neighbors (KNN) algorithm; the evaluation of results with different approaches to class balancing; the study of 2D plot generation with t-Distributed Stochastic Neighbor Embedding (t-SNE) for annotation analysis; and the creation of a tool for annotating important frames in videos, with the aim of assisting research and testing.
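A minimal sketch of the pipeline summarized above: frozen CNN features obtained by transfer learning, a K-Nearest Neighbors classifier over a handful of annotated frames, and a t-SNE projection of the features for inspecting annotations. The backbone choice (ResNet-18, torchvision >= 0.13), the value of k, and the random stand-in frames are assumptions made for illustration only.

```python
# Frozen CNN feature extractor + KNN few-shot classifier + t-SNE inspection.
import torch
import torchvision.models as models
from sklearn.neighbors import KNeighborsClassifier
from sklearn.manifold import TSNE

# 1. Frozen backbone as a feature extractor (transfer learning).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # drop the ImageNet classification head
backbone.eval()

def extract_features(frames):
    """frames: (N, 3, 224, 224) tensor of preprocessed video frames."""
    with torch.no_grad():
        return backbone(frames).numpy()    # (N, 512) feature vectors

# 2. Few-shot classification with KNN over the labeled support frames.
support_frames = torch.randn(10, 3, 224, 224)    # stand-in for annotated frames
support_labels = [0, 0, 1, 1, 1, 2, 2, 2, 0, 1]  # stand-in event labels
query_frames = torch.randn(5, 3, 224, 224)       # stand-in for unannotated frames

support_feats = extract_features(support_frames)
query_feats = extract_features(query_frames)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(support_feats, support_labels)
print("predicted labels:", knn.predict(query_feats))

# 3. 2D t-SNE projection of the support features for visual inspection.
embedding = TSNE(n_components=2, perplexity=3).fit_transform(support_feats)
print("t-SNE coordinates:\n", embedding)
```

In an annotation tool of the kind the abstract describes, the predicted labels would be offered to the annotator as suggestions for the remaining frames, and the t-SNE plot would help verify whether frames labeled with the same event cluster together.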
|