251 |
Multi-modal people detection from aerial video. Flynn, Helen, January 2015 (has links)
There has been great interest in the use of small robotic helicopter vehicles over the last few years. Although there are regulatory issues involved in flying these that are still to be solved, they have the potential to provide a practical mobile aerial platform for a small fraction of the cost of a conventional manned helicopter. One potential class of applications for these is in searching for people, and this thesis explores a new generation of cameras which are suitable for this purpose. We propose HeatTrack, a novel algorithm to detect and track people in aerial imagery taken from a combined infrared/visible camera rig. A Local Binary Patterns (LBP) detector finds silhouettes in the infrared image which are used to guide the search in the visible light image, and a Kalman filter combines information from both modalities in order to track a person more accurately than if only a single modality were available. We introduce a method for matching the thermal signature of a person to their corresponding patch in the visible modality, and show that this is more accurate than traditional homography-based matching. Furthermore, we propose a method for cancelling out camera motion which allows us to estimate a velocity for the person, and this helps in determining the location of a person in subsequent frames. HeatTrack demonstrates several advantages over tracking in the visible domain only, particularly in cases where the person shows up clearly in infrared. By narrowing down the search to the warmer parts of a scene, the detection of a person is faster than if the whole image were searched. The use of two imaging modalities instead of one makes the system more robust to occlusion; this, in combination with estimation of the velocity of a person, enables tracking even when information is lacking in either modality. To the best of our knowledge, this is the first published algorithm for tracking people in aerial imagery using a combined infrared/visible camera setup.
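The fusion step described above can be illustrated with a minimal sketch. The snippet below is not the published HeatTrack implementation; it is a hypothetical example, assuming a constant-velocity motion model and that each modality (infrared and visible) independently reports a 2-D position measurement, showing how a single Kalman filter can combine both measurements into one track.

```python
# Illustrative sketch only: a constant-velocity Kalman filter fusing 2-D position
# measurements from an infrared and a visible-light detector. Names and noise
# values are assumptions, not taken from the HeatTrack thesis.
import numpy as np

class FusionKalman:
    def __init__(self, dt=1.0):
        # State: [x, y, vx, vy]; constant-velocity motion model.
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0               # initial uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.Q = np.eye(4) * 0.1                 # process noise
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # position is observed, velocity is not

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z, meas_noise):
        # Standard Kalman update for one modality's position measurement z = (x, y).
        R = np.eye(2) * meas_noise
        innovation = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = FusionKalman()
kf.predict()
kf.update((120.0, 80.0), meas_noise=4.0)   # infrared detection (assumed more reliable)
kf.update((123.0, 78.0), meas_noise=9.0)   # visible-light detection
print(kf.x[:2])                            # fused position estimate
```

Because the velocity components are part of the state, the predict step gives a motion-compensated guess of where the person will appear in the next frame, which is the role velocity estimation plays in the abstract above.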
|
252 |
Impacts of cleanser, material type, methods for cleaning and training on canine decontamination. Powell, Ellie B, 01 May 2018 (has links)
Search-and-rescue (SAR) teams spend days and sometimes weeks in the field following a disaster. After completing their assigned mission, handlers and canines return to base, potentially bringing contaminated material with them. There were three objectives for this study: (1) to determine the effects of cleanser and equipment materials on the efficiency of decontamination protocols, (2) to determine the effects of improved treatments on the efficiency of decontamination protocols and (3) to evaluate the use of field kits and improved training on decontamination techniques in the field. In the first study, straps (n = 54) were cut from biothane, leather and nylon. Straps were washed with three kinds of cleansers: Dawn dishwashing detergent, Johnson and Johnson's Head-to-Toe baby wash and Simple Green. In addition, three different types of treatments were applied: a 5-minute soak (A), a double 5-minute soak (B) and a 3-minute soak with a 2-minute agitation (C). In the second study, straps (n = 40) of leather and nylon were utilized. Unlike the previous study, only Dawn dishwashing detergent and Johnson and Johnson's Head-to-Toe baby wash were selected as cleansers for decontamination. In addition, improved treatments (PW or SK) were created and utilized to further decontaminate the straps. The final part of the study utilized canine teams (n = 10), each composed of a canine and handler, which were randomly assigned to one of two groups. Groups were structured as follows: TRAINED (n = 5) received 30 minutes of interactive training (using the illustrated guide contained in the kit) on proper utilization of the equipment provided; UNTRAINED (n = 5) received the same field kit and an illustrated guide with no interactive training. An oil-based pseudocontaminant (GloGerm®) was topically applied to the straps in the first two studies and then to four anatomic sites on the canine participants in the last study: cranial neck, between the shoulder blades, left medial hindlimb and left hind paw. Pre- and post-images were taken of the straps and of the four anatomical locations prior to and following decontamination. Images were analyzed via two methods: 1) categorical scores; 2) measured fluorescent reduction. Categorical scores were assigned by two blinded reviewers (Venable et al., 2017) and were allotted as follows: 0 = <24% contaminant reduction; 1 = 25-50% contaminant reduction; 2 = 51-75% contaminant reduction; and 3 = >76% contaminant reduction (Lee et al., 2014). No score discrepancies >1 were observed between reviewers. Score data were analyzed using SAS version 9.4 (SAS Institute Inc., Cary, NC) as a chi-square with PROC FREQ, and measurement data were analyzed using PROC ANOVA. Results of the first study indicate that material (P = .2331), cleanser (P = .2156) and treatment (P = .9139) had no effect on contaminant reduction. However, when treatments were improved in the second study, the power wash was more effective at contaminant reduction (P = .0004). In addition, material was also determined to have an effect on decontamination (P = .0135), although the kind of cleanser used had no effect (P = .3564). Additionally, in the last study, TRAINED handlers were more effective at contaminant reduction (P = .0093) than their UNTRAINED counterparts. The initial results indicate that no combination of material, cleanser or treatment had any effect on reducing the oil-based contaminant. Nevertheless, with improved treatments there is a potential to more thoroughly decontaminate the collars and leashes.
In addition, study three indicates that handlers, when properly trained, can achieve reduction of oil-based contaminants with a basic field kit and a garden hose. These data have implications for management of canines in the field that may be exposed to unknown substances and require timely decontamination.
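As a small illustration of the scoring scheme described above, the sketch below maps a measured percentage reduction in fluorescence to the categorical score used in the study; the function name, the example measurements and the handling of the gaps in the published rubric (e.g. exactly 24% or 75.5%) are assumptions made for the example.

```python
# Illustrative sketch: assign the categorical contaminant-reduction score used in
# the study (0: <24%, 1: 25-50%, 2: 51-75%, 3: >76%). Boundary handling for the
# gaps in the published rubric is an assumption here, not taken from the thesis.
def reduction_score(percent_reduction: float) -> int:
    if percent_reduction > 76:
        return 3
    if percent_reduction > 50:
        return 2
    if percent_reduction >= 25:
        return 1
    return 0

# Example: pre- and post-wash fluorescence readings for one strap (made-up numbers).
pre, post = 1820.0, 410.0
reduction = 100.0 * (pre - post) / pre
print(round(reduction, 1), reduction_score(reduction))   # 77.5 -> score 3
```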
|
253 |
Development and application of crystallographic software with an access protocol to a distributed database. Utuni, Vegner Hizau dos Santos [UNESP], 13 April 2009 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Since the revolution brought about by the second generation of computers around 1960, which spread computing across many sectors of society, there has been a steady growth in chip processing power and, with it, an evolution in the concept of software. For scientific software in particular, this increase in data volume and processing speed makes it possible to apply ever more complex physico-chemical models. In 1969, the crystallographer Hugo Rietveld created a method that exploits this technological paradigm, today known as the Rietveld method. Developed specifically for the refinement of X-ray diffraction data from polycrystalline samples, it came to be used in every area of research into new materials. For good stability of the refinement process, the model must be supplied with an initial approximation of each phase present in the sample. This requirement is necessary to ensure the stability of the iterative process that fits the experimental data to the theoretical function, a characteristic that creates a dependence on specialised databases. The refinement process using the Rietveld method is complex and non-linear, which necessarily implies the use of software. This characteristic, combined with the dependence on crystallographic databases, justifies the use of the new technology of distributed databases, a desirable quality of great interest to the scientific community.
A distributed database allows the various programs that employ the Rietveld method to exchange among themselves the information needed to start a refinement. The database is managed automatically by the software itself, without human intervention. The viability of the hypothesis of using a P2P network with CIF files was demonstrated through the implementation of the Hera software. New algorithms to automate the creation... (complete abstract: click electronic access below)
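To make the dependence on starting structural models more concrete, the sketch below shows the core idea behind a Rietveld-style refinement: iteratively adjusting a small set of parameters so that a calculated diffraction pattern fits an observed one by least squares. It is a toy illustration with Gaussian peaks and assumed parameter names, not the Hera software or a full Rietveld implementation.

```python
# Toy illustration of the least-squares idea behind Rietveld refinement: fit a scale
# factor, peak width and zero-shift so that a calculated powder pattern (Gaussian
# peaks at known 2-theta positions) matches an "observed" pattern.
import numpy as np
from scipy.optimize import least_squares

two_theta = np.linspace(10, 60, 2000)
peak_positions = np.array([21.5, 28.3, 35.0, 44.7])     # assumed starting structural model

def calc_pattern(params, x):
    scale, width, zero_shift = params
    y = np.zeros_like(x)
    for p in peak_positions:
        y += scale * np.exp(-0.5 * ((x - p - zero_shift) / width) ** 2)
    return y

# Synthetic "observed" data generated from slightly different parameters plus noise.
rng = np.random.default_rng(0)
observed = calc_pattern([100.0, 0.12, 0.05], two_theta) + rng.normal(0, 1.0, two_theta.size)

result = least_squares(lambda p: calc_pattern(p, two_theta) - observed,
                       x0=[80.0, 0.2, 0.0])             # initial approximation of the phase
print(result.x)   # refined scale, peak width and zero-shift
```

If the initial approximation is far from the true structure, the iterative fit can diverge, which is why the abstract stresses the need for specialised (and, here, distributed) crystallographic databases to supply starting models.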
|
254 |
Search engine optimisation or paid placement systems: user preference. Neethling, Riaan, January 2007 (has links)
Thesis submitted in fulfilment of the requirements for the degree Magister Technologiae in Information Technology in the Faculty of Informatics and Design at the Cape Peninsula University of Technology, 2007. / The objective of this study was to investigate and report on user preference for Search Engine Optimisation (SEO) versus Pay Per Click (PPC) results. This will assist online advertisers in identifying the optimal Search Engine Marketing (SEM) strategy for their specific target market.
Research shows that online advertisers perceive PPC as a more effective SEM strategy than SEO. However, empirical evidence exists that PPC may not be the best strategy for online advertisers, creating confusion for advertisers considering an SEM campaign. Furthermore, not all advertisers have the funds to implement a dual strategy, and as a result advertisers need to choose between an SEO and a PPC campaign. For online advertisers to choose the most relevant SEM strategy, it is important to understand user perceptions of these strategies.
A quantitative research design was used to conduct the study, with the purpose of collecting and analysing data. A questionnaire was designed and hosted on a busy website to ensure maximum exposure. The questionnaire focused on how search engine users perceive SEM and on their click response towards SEO and PPC respectively. A qualitative research method was also used in the form of an interview, conducted with representatives of a leading South African search engine to verify the results and gain expert opinions.
The data was analysed and the results interpreted. Results indicated that the user-perceived relevancy split is 45% for PPC results and 55% for SEO results, regardless of demographic factors. Failing to invest in either one could cause a significant loss of website traffic, which indicates that advertisers should invest in both PPC and SEO. Advertisers can invest in a PPC campaign for immediate results and then implement an SEO campaign over a period of time. The results can further be used to adjust an SEM strategy according to the target market group profile of an advertiser, which will ensure maximum effectiveness.
|
255 |
The crossover point between keyword rich website text and spamdexing. Zuze, Herbert, January 2011 (has links)
Thesis submitted in fulfilment of the requirements for the degree Magister Technologiae in Business Information Systems in the Faculty of Business at the Cape Peninsula University of Technology, 2011. / With over a billion Internet users surfing the Web daily in search of information, buying, selling and accessing social networks, marketers focus intensively on developing websites that are appealing to both searchers and search engines. Millions of webpages are submitted each day for indexing to search engines. The success of a search engine lies in its ability to provide accurate search results. Search engines' algorithms constantly evaluate websites and webpages that could violate their respective policies, and for this reason some websites and webpages are subsequently blacklisted from their index.
Websites are increasingly being utilised as marketing tools, which results in major competition amongst websites. Website developers strive to develop websites of high quality, which are unique and content rich, as this assists them in obtaining a high ranking from search engines. By focusing on websites of a high standard, website developers utilise search engine optimisation (SEO) strategies to earn a high search engine ranking.
From time to time SEO practitioners abuse SEO techniques in order to trick the search engine algorithms, but the algorithms are programmed to identify and flag these techniques as spamdexing. Search engines do not clearly explain how they interpret keyword stuffing (one form of spamdexing) in a webpage. They regard spamdexing in many different ways and do not provide enough detail to clarify what crawlers take into consideration when interpreting the spamdexing status of a website. Furthermore, search engines differ in the way that they interpret spamdexing, but offer no clear quantitative evidence for the crossover point from keyword-dense website text to spamdexing. Scholars have expressed different views on spamdexing, characterised by different keyword density measurements in the body text of a webpage. This raised several fundamental questions that form the basis of this research.
This research was carried out using triangulation in order to determine how scholars, search engines and SEO practitioners interpret spamdexing. Five websites with varying keyword densities were designed and submitted to Google, Yahoo! and Bing. Two phases of the experiment were conducted and the results were recorded. During both phases almost all of the webpages, including the one with a 97.3% keyword density, were indexed. This enabled the research to conclusively disregard the keyword stuffing issue, blacklisting and any form of penalisation. Designers are urged to rather concentrate on usability and good values behind building a website.
The research explored the fundamental contribution of keywords to webpage indexing and visibility. Whether or not keywords are used at an optimum level of richness, they contribute to website ranking and indexing. However, the focus should be on how the end user would interpret the content displayed, rather than on how the search engine would react to the content. Furthermore, spamdexing is likely to scare away potential clients and end users instead of embracing them, which is why the time spent on spamdexing should rather be used to produce quality content.
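Keyword density itself is a simple ratio, and the sketch below shows one common way to compute it for the body text of a page; the exact counting rules used in the study (tokenisation, phrase matching, stop words) are not stated in the abstract, so this formulation is an assumption for illustration only.

```python
# Illustrative sketch: keyword density as (occurrences of the keyword) / (total words),
# expressed as a percentage. How the study tokenised text or handled phrases is assumed.
import re

def keyword_density(body_text: str, keyword: str) -> float:
    words = re.findall(r"[A-Za-z0-9']+", body_text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return 100.0 * hits / len(words)

text = "cheap flights cheap flights book cheap flights today cheap flights"
print(round(keyword_density(text, "cheap"), 1))   # 40.0 -> heavily repeated keyword
```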
|
256 |
How search queries are formulated. Clarinsson, Richard, January 2006 (has links)
Millions of people use search engines every day when trying to find information on the Internet. The purpose of this report is to find out how people formulate search queries. The results are based on an empirical study of a search log from the Swedish search engine Seek.se. One of the findings of the thesis is that nearly all search queries are keyword-based.
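A minimal sketch of this kind of log analysis is shown below; the heuristic used to separate keyword-style queries from question-style queries, and the sample log entries, are assumptions made for illustration and are not the method actually applied to the Seek.se log.

```python
# Illustrative sketch: classify logged queries as keyword-style or question-style using
# a crude heuristic (leading question words or a question mark). The rule and the sample
# log are assumptions for illustration only, not the thesis's analysis.
QUESTION_WORDS = {"how", "what", "why", "when", "where", "who", "vad", "hur", "varför"}

def is_keyword_query(query: str) -> bool:
    tokens = query.lower().split()
    return not (query.strip().endswith("?") or (tokens and tokens[0] in QUESTION_WORDS))

sample_log = ["cheap hotels stockholm", "hur fungerar en sökmotor?", "python tutorial"]
keyword_share = sum(is_keyword_query(q) for q in sample_log) / len(sample_log)
print(f"{keyword_share:.0%} of sampled queries are keyword-based")
```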
|
257 |
An Architecture for Mobile Local Information Search: Focusing on Wireless LAN and Cellular Integration. Sidduri, Sridher Rao, January 2008 (links)
This thesis work provides an architecture for a mobile local information search service using Wireless LAN and cellular integration. Search technology is popular and drives business with increasing e-commerce opportunities. Search has recently been brought to portable devices such as mobile phones and PDAs, extending the research scope. Mobile search revenues are expected to surpass Internet search revenues in the near future. Mobile local search, meanwhile, is becoming increasingly popular with the growing number of mobile subscribers. Mobile phones were chosen for providing mobile local search services because they are widely owned and portable. In this thesis work, the author proposes a generalized architecture for mobile local information search from a new perspective, involving the cellular service provider directly with minimal co-operation from consumers and retailers. When providing mobile local search services, the cellular operator has to maintain a replica of the databases of all existing retailers. Updating this replica at regular intervals leads to synchronization problems that produce out-of-date results for mobile users. The aspects that have driven the author towards proposing the architecture are solving these database synchronization problems and striving for effective search results. The existing architectures of web search, mobile search and mobile local search are analyzed to identify the domain-specific challenges and research gaps. The proposed architecture is designed and evaluated using the Architecture Tradeoff Analysis Method (ATAM). The architecture is evaluated against its quality attributes and the results are presented.
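The synchronization problem described above can be illustrated with a small sketch. The example below is not the architecture proposed in the thesis; it is a hypothetical illustration, assuming each retailer record carries a last-modified timestamp, of how an operator-side replica can pull only records changed since the previous sync instead of copying whole databases.

```python
# Hypothetical illustration of incremental replica synchronisation: the operator keeps a
# per-retailer high-water-mark timestamp and pulls only records modified after it.
# This is not the thesis's proposed architecture, just a sketch of the sync problem.
from datetime import datetime

retailer_db = [  # pretend retailer records with last-modified timestamps (made-up data)
    {"id": 1, "name": "Pizza House", "modified": datetime(2008, 5, 1, 9, 0)},
    {"id": 2, "name": "Book Corner", "modified": datetime(2008, 5, 3, 14, 30)},
]

operator_replica = {}
last_sync = datetime(2008, 5, 2)

def incremental_sync():
    global last_sync
    changed = [r for r in retailer_db if r["modified"] > last_sync]
    for record in changed:                     # copy only stale or changed records
        operator_replica[record["id"]] = record
    last_sync = max([last_sync] + [r["modified"] for r in changed])
    return len(changed)

print(incremental_sync(), "record(s) refreshed")   # -> 1 record(s) refreshed
```

The staleness window in such a scheme is the interval between syncs, which is the kind of out-of-date-results problem the abstract says the proposed architecture tries to avoid.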
|
258 |
Development of a Logistics Solution for Operations Scheduling at a Steel Company. Riquelme Niklitschek, Felipe Andrés, January 2009 (links)
The objective of this degree project was the design, development and evaluation of a tool to support decision-making on operations scheduling at each of the two plants currently operated by a steel company. The aim was to find the sequence in which the various monthly jobs should be executed, minimising production times as well as tardiness with respect to due dates.
For reasons of time, the research focused solely on the company's rolling process, in which steel products are given their final shape through thermomechanical deformation. This choice was not made at random: the rolling process remains the main bottleneck to this day and therefore offered the greatest opportunities for gains.
It was possible to show that the problem belongs to the NP-hard class, so no algorithms are known that can solve it in polynomial time. Consequently, and given that the instance size is relatively large, it became necessary to adopt heuristic approaches capable of obtaining sufficiently good results within a reasonable computation time.
A Tabu Search algorithm was therefore chosen. The choice was based mainly on the good results reported in the literature for other operations scheduling problems (Lin and Ying, 2006; Gupta and Smith, 2007; Valente and Alves, 2008).
Much of the performance of this type of heuristic depends on two elements: the initial solution and the neighbourhood generation method. The strategy followed was therefore to evaluate a wide range of the most commonly used techniques for these purposes and to select the combination that performed best.
The results obtained show that applying the proposed heuristic to real instances yields substantial reductions compared with the current situation: an average of 7% in production times and an average decrease of 35% in tardiness. In addition, the time needed to compute the schedule drops dramatically, by 82% on average.
Finally, it is worth noting that the research also suggests there is still room for future improvement, so it is recommended that the study be continued and, where possible, extended to other processes in the production chain.
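As a rough illustration of the approach, the sketch below implements a generic Tabu Search for sequencing jobs on a single machine to minimise total tardiness; the move type (adjacent swaps), the tabu tenure and the job data are assumptions for the example and do not reproduce the thesis's algorithm for the rolling mill.

```python
# Illustrative Tabu Search sketch for sequencing jobs to minimise total tardiness.
# Job data, swap neighbourhood and tabu tenure are assumptions for the example only.
import random

jobs = [  # (processing_time, due_date) -- made-up instance, not the thesis data
    (4, 10), (7, 9), (2, 6), (5, 20), (8, 17), (3, 12),
]

def total_tardiness(seq):
    t, tardiness = 0, 0
    for j in seq:
        t += jobs[j][0]
        tardiness += max(0, t - jobs[j][1])
    return tardiness

def tabu_search(iterations=200, tenure=5):
    current = list(range(len(jobs)))
    random.shuffle(current)
    best, best_cost = current[:], total_tardiness(current)
    tabu = {}                                    # move -> iteration until which it stays tabu
    for it in range(iterations):
        candidates = []
        for i in range(len(current) - 1):        # neighbourhood: adjacent swaps
            neighbour = current[:]
            neighbour[i], neighbour[i + 1] = neighbour[i + 1], neighbour[i]
            cost = total_tardiness(neighbour)
            move = (current[i], current[i + 1])
            if tabu.get(move, -1) < it or cost < best_cost:   # aspiration criterion
                candidates.append((cost, neighbour, move))
        if not candidates:                       # every move is tabu and none improves
            break
        cost, current, move = min(candidates)
        tabu[(move[1], move[0])] = it + tenure   # forbid undoing the chosen swap for a while
        if cost < best_cost:
            best, best_cost = current[:], cost
    return best, best_cost

print(tabu_search())
```

The two elements the abstract highlights, the initial solution and the neighbourhood, correspond here to the shuffled starting sequence and the adjacent-swap moves, and both would be tuned for a real rolling-mill instance.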
|
259 |
An adaptive fuzzy based recommender system for enterprise search. Alhabashneh, O. Y. A., January 2015 (links)
This thesis discusses relevance feedback, including implicit parameters, explicit parameters and the user query, and how these can be used to build a recommender system that enhances search performance in the enterprise. It presents an approach for the development of an adaptive fuzzy-logic-based recommender system for enterprise search. The system is designed to recommend documents and people based on the user query in a task-specific search environment. The proposed approach provides a new mechanism for constructing and integrating task, user and document profiles into a unified index through the use of relevance feedback and fuzzy rule based summarisation. The three profiles are fuzzy based and are created using the captured relevance feedback. In the task profile, each task was modelled as a sequence of weighted terms which were used by the users to complete the task. In the user profile, the user was modelled as a sequence of weighted terms which were used to search for the required information. In the document profile, the document was modelled as a group of weighted terms which were used by the users to retrieve the document. Fuzzy sets and rules were used to calculate the term weight based on the term frequency in the user queries. An empirical study was carried out to capture relevance feedback from 35 users on 20 predefined simulated enterprise search tasks and to investigate the correlation between implicit and explicit relevance feedback. Based on the results, an adaptive linear predictive model was developed to estimate document relevancy from the implicit feedback parameters. The predicted document relevancy was then used to train the fuzzy system which created and integrated the three profiles, as briefly described above. The captured data set was used to develop and train the fuzzy system. The proposed system achieved 89% accuracy in classifying relevant documents. With regard to the implementation, Apache Solr, Apache Tika, Oracle 11g and Java were used to develop a prototype system. The overall retrieval accuracy of the proposed system was tested by carrying out a comparative evaluation based on precision (P), recall (R) and ranking analysis. The P and R values of the proposed system were compared with those of two other systems: the standard inverted-index-based Solr system and the semantic-indexing-based Lucid system. The proposed system enhanced the value of P significantly, increasing the average P value from 0.00428 to 0.064 compared with standard Solr and from 0.0298 to 0.064 compared with Lucid. In other words, the proposed system decreased the number of irrelevant documents in the search result, enhancing the ability of the system to show relevant documents. The proposed system also enhanced the value of R. The average value of R increased significantly, nearly doubling from 0.436 to 0.828 compared with standard Solr, and rising from 0.76804 to 0.828 compared with Lucid. This means that the ability of the system to retrieve relevant documents was also enhanced. Furthermore, the ability of the system to rank the relevant documents higher was improved compared with the other two systems.
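The comparative evaluation above rests on the standard definitions of precision and recall. The sketch below computes both for a retrieved result list against a set of judged-relevant documents; the document identifiers are made up, and this is only a generic illustration of the metrics, not the thesis's evaluation code.

```python
# Generic illustration of the precision and recall metrics used in the evaluation.
# Document identifiers are made up; this is not the thesis's evaluation code.
def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

retrieved_docs = ["doc3", "doc7", "doc9", "doc12", "doc20"]   # what the system returned
relevant_docs = ["doc3", "doc9", "doc14", "doc20"]            # judged relevant for the task
p, r = precision_recall(retrieved_docs, relevant_docs)
print(f"P = {p:.2f}, R = {r:.2f}")   # P = 0.60, R = 0.75
```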
|
260 |
Sequential and parallel large neighborhood search algorithms for the periodic location routing problem. Hemmelmayr, Vera, 05 1900 (links) (PDF)
We propose a large neighborhood search (LNS) algorithm to solve the periodic location routing problem (PLRP). The PLRP combines location and routing decisions over a planning horizon in which customers require visits according to a given frequency and the specific visit days can be chosen. We use parallelization strategies that can exploit the availability of multiple processors. The computational results show that the algorithms obtain better results than previous solution methods on a set of standard benchmark instances from the literature.
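As a rough sketch of the destroy-and-repair principle behind large neighborhood search, the example below applies random removal and greedy cheapest insertion to a plain travelling-salesman-style tour; the PLRP-specific elements (location decisions, visit frequencies, planning horizon, parallelisation) are not modelled, and all names, data and parameters are assumptions for illustration.

```python
# Minimal destroy-and-repair LNS sketch on a simple tour (not the PLRP model itself):
# repeatedly remove a few random customers and greedily re-insert them at cheapest cost.
import math
import random

random.seed(1)
points = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

def dist(a, b):
    return math.hypot(points[a][0] - points[b][0], points[a][1] - points[b][1])

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def cheapest_insertion(tour, customer):
    best_pos = min(range(len(tour)),
                   key=lambda i: dist(tour[i - 1], customer) + dist(customer, tour[i])
                                 - dist(tour[i - 1], tour[i]))
    return tour[:best_pos] + [customer] + tour[best_pos:]

def lns(iterations=500, destroy_size=4):
    current = list(range(len(points)))
    best, best_cost = current[:], tour_length(current)
    for _ in range(iterations):
        removed = random.sample(current, destroy_size)          # destroy step
        partial = [c for c in current if c not in removed]
        for c in removed:                                       # repair step
            partial = cheapest_insertion(partial, c)
        cost = tour_length(partial)
        if cost < best_cost:                                    # accept improvements only
            best, best_cost = partial[:], cost
        current = best[:]
        # A full LNS would also sometimes accept worse solutions (e.g. via simulated
        # annealing) and could evaluate several destroy/repair operators in parallel.
    return best, best_cost

print(lns()[1])
```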
|