11

How to develop graphic design for games with low-pixel density

Kry, Tobias January 2013 (has links)
This thesis is written for the Bachelor of Arts degree in Game Design and Graphics at Gotland University in Sweden. A method used in graphic design for games with low pixel density was initially studied in the course Advanced Game Project and has been further developed in this work. Since regular size reduction of pictures often results in visually incomplete bitmaps, the goal of this thesis is to provide a better overview of the reduction process, in which the important features of the original picture are maintained throughout the reduction phase.
12

Recognizing human activities from low-resolution videos

Chen, Chia-Chih, 1979- 01 February 2012 (has links)
Human activity recognition is one of the most intensively studied areas in computer vision. Most existing work does not treat video resolution as a problem, given the general applications of interest. However, with continuing concerns about global security and emerging needs for intelligent video analysis tools, activity recognition from low-resolution and low-quality videos has become a crucial topic for further research. In this dissertation, we present a series of approaches developed specifically to address the related issues of low-level image preprocessing, single-person activity recognition, and human-vehicle interaction reasoning in low-resolution surveillance videos. Human cast shadows are one of the major issues that adversely affect the performance of an activity recognition system, because shadow direction varies with the time of day and the date of the year. To resolve this problem, we propose a shadow removal technique that effectively eliminates a human shadow cast by a light source of unknown direction. A multi-cue shadow descriptor is employed to characterize the distinctive properties of shadows; our approach detects, segments, and then removes them. We propose two different methods to recognize single-person actions and activities from low-resolution surveillance videos. The first adopts a joint feature histogram representation, the concatenation of subspace-projected gradient and optical flow features over time. However, the use of low-resolution, coarse, pixel-level features alone limits recognition accuracy. Therefore, in the second work, we contribute a novel mid-level descriptor that converts an activity sequence into simultaneous temporal signals at body parts. With this representation, activities are recognized through both the local video content and the short-time spectral properties of the body parts' movements. We draw analogies between activity and speech recognition and show that our speech-like representation and recognition scheme improves recognition performance on several low-resolution datasets. To complete the research on this subject, we also tackle the challenging problem of recognizing human-vehicle interactions from low-resolution aerial videos. We present a temporal-logic-based approach that does not require training from event examples. At the low level, we employ dynamic programming to perform fast model fitting between the tracked vehicle and rendered 3-D vehicle models. At the semantic level, given the localized event region of interest (ROI), we verify the time series of human-vehicle spatial relationships against pre-specified event definitions in a piecewise fashion. Our framework can be generalized to recognize any type of human-vehicle interaction from aerial videos.
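As a rough, illustrative sketch of the speech-like body-part representation described in this abstract (not the dissertation's actual code), the following Python snippet converts hypothetical per-frame body-part motion signals into concatenated short-time spectral features; the signal names, frame rate and window length are assumptions.

```python
import numpy as np
from scipy.signal import stft

def body_part_spectral_features(signals, fs=30.0, win=16):
    """signals: dict of body-part name -> 1-D array of per-frame motion
    magnitudes (e.g. vertical displacement of a tracked part).
    Returns one feature vector: concatenated short-time power spectra,
    analogous to the spectrogram features used in speech recognition."""
    feats = []
    for name, x in sorted(signals.items()):
        _, _, Z = stft(x, fs=fs, nperseg=win, noverlap=win // 2)
        power = np.abs(Z) ** 2            # time-frequency power per body part
        feats.append(power.mean(axis=1))  # average spectrum over the clip
    return np.concatenate(feats)

# Toy example: an "arm" oscillating faster than a "leg".
t = np.arange(90) / 30.0
example = {"arm": np.sin(2 * np.pi * 4 * t), "leg": np.sin(2 * np.pi * 1 * t)}
print(body_part_spectral_features(example).shape)
```

The resulting vector could then be fed to any standard classifier; the point is only that body-part movements are summarized by their short-time spectra rather than raw pixels.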
13

Application of a bioinformatic/biochemical hybrid approach to determine the structure of protein complexes and multi domain proteins.

Dmitri Mouradov Unknown Date (has links)
A recent shift towards proteomics has seen many structural genomics initiatives set up for high-throughput structure determination using the traditional methods of X-ray crystallography and NMR. The next step in the proteomic revolution focuses on the interplay of multi-protein complexes and transient protein-protein interactions, which are involved in many cellular functions. Greater understanding of protein-protein interactions will inevitably lead to better comprehension of the regulation of cellular processes, with implications for the biomedical sciences and biotechnology. Even though many high-resolution initiatives focus on proteins and protein complexes, their structure-determination success rates are still low. An emerging approach uses chemical cross-linking and mass spectrometry to derive a set of sparse distance constraints, which can be used to build models of proteins and to map out residues in protein interaction interfaces based on partial structural information. This technique allows low-resolution identification of protein structures and their interactions in cases where traditional structure determination techniques have not produced results. Chemical cross-linkers have been used successfully for many years to identify interacting proteins; however, recent advances in mass spectrometry have allowed the identification of the exact insertion points of low-abundance cross-links and have hence opened up a new perspective on the use of cross-linkers in combination with computational structure prediction. For protein interaction studies, the approach combines chemical cross-linking information with molecular docking, so that the cross-links are treated as explicit constraints in the calculations. This study focuses on a low-cost and rapid approach to structure prediction, in which partial structural information and distance constraints are used to obtain the relative orientation of interacting proteins and domains, specifically as a rescue strategy where traditional high-resolution structure determination methods were unsuccessful. This hybrid biochemical/bioinformatics approach was applied to the determination of the structure of the latexin:carboxypeptidase A complex, achieving 4 Å RMSD compared with the crystal structure determined subsequently (Mouradov et al., 2006). Application of the approach to multi-domain proteins was carried out on murine acyl-CoA thioesterase 7 (Acot7): X-ray crystallography provided structures of the two separate domains of Acot7, but the full-length protein did not crystallise. Combining chemical cross-linking, mass spectrometry, molecular docking and homology modeling, we were able to reconstruct how the two domains are arranged in the full-length protein (Forwood et al., 2007). Limitations of this technique caused by the enormous complexity of the cross-linking reaction mixtures were identified and emphasized by analysing a large (four-protein) complex of DNA polymerase III, in which only one inter-protein cross-link was identified. A rapid and cost-effective method for identifying cross-linked peptides using a commercially available cross-linker was developed as part of the overall aim of streamlining the hybrid biochemical/bioinformatics approach so that it can become a generally applicable technique for rapid protein structure characterisation (King et al., 2008). Finally, an in-house software package was developed for the assignment of cross-linked peptides based on m/z values.
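A minimal sketch of how cross-links can act as explicit distance constraints when ranking rigid-body docking poses, under assumed residue pairs and an assumed Cα-Cα cutoff; this is an illustration of the general idea, not the software described in the thesis.

```python
import numpy as np

# Hypothetical cross-links: (residue index in protein A, residue index in protein B).
CROSS_LINKS = [(34, 112), (58, 87)]
MAX_SPAN = 24.0  # assumed Calpha-Calpha cutoff in angstroms for a BS3-like linker

def satisfied_links(coords_a, coords_b, links=CROSS_LINKS, cutoff=MAX_SPAN):
    """coords_a, coords_b: (n_residues, 3) Calpha coordinates of one candidate
    docking pose. Counts how many cross-link distance constraints the pose
    satisfies; poses violating constraints would be discarded or down-weighted
    before refinement."""
    count = 0
    for ra, rb in links:
        d = np.linalg.norm(coords_a[ra] - coords_b[rb])
        count += d <= cutoff
    return count

# Rank a set of candidate poses (random placeholders here) by constraint satisfaction.
rng = np.random.default_rng(0)
poses = [(rng.normal(size=(200, 3)) * 20, rng.normal(size=(150, 3)) * 20) for _ in range(5)]
ranked = sorted(poses, key=lambda p: satisfied_links(*p), reverse=True)
```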
15

Análise da qualidade de carne bovina por ressonância magnética nuclear em baixa resolução / Analysis of beef quality by low resolution NMR

Cátia Crispilho Corrêa 31 August 2007 (has links)
Low-resolution nuclear magnetic resonance (NMR) spectroscopy has been shown to be a fast and reliable method for evaluating meat quality. Prediction of meat quality, especially in pork, has been performed using the transverse relaxation time (T2). The measurements are made with the CPMG (Carr-Purcell-Meiboom-Gill) pulse sequence, which correlates pork quality parameters such as water-holding capacity (WHC), pH and cooking loss (CL) with T2. In this work, beef quality was analysed using the CPMG pulse sequence, which depends on the transverse relaxation time (T2), and using the continuous wave free precession (CWFP) technique, which depends on both T2 and the longitudinal relaxation time (T1). Samples of the Longissimus lumborum muscle were collected from the region of the 12th and 6th ribs of animals from three genetic groups, adapted or not to tropical conditions, two of them 3/4 and one 9/16 European blood. The CPMG data were processed by discrete multiexponential analysis, continuous multiexponential analysis based on the inverse Laplace transform (ILT), and chemometric methods such as principal component analysis (PCA), hierarchical cluster analysis (HCA) and partial least squares regression (PLS). Because the CWFP data are more complex than the CPMG data, they were analysed mainly by chemometric methods. The CPMG data were not able to distinguish genetic group, sex or meat cut. The CWFP data analysed by chemometric methods showed differences between Canchin males and females and between Canchin and Angus males, while the results showed almost no difference between the samples collected at the 12th and 6th ribs; CWFP was therefore more effective than CPMG at separating the animals by sex and genetic group.
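As an illustration of the discrete multiexponential analysis mentioned above, the following sketch fits a two-component exponential model to a synthetic CPMG echo decay; the T2 values, amplitudes and noise level are assumptions, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, t2_1, a2, t2_2):
    """Discrete two-component transverse relaxation model: the CPMG echo
    train is modelled as a sum of exponentials, one per water population."""
    return a1 * np.exp(-t / t2_1) + a2 * np.exp(-t / t2_2)

# Synthetic CPMG decay with assumed T2 components (ms); real data would be
# the echo amplitudes recorded by the spectrometer.
t = np.linspace(1, 300, 150)
y = biexp(t, 0.7, 45.0, 0.3, 120.0) + np.random.normal(0, 0.005, t.size)

popt, _ = curve_fit(biexp, t, y, p0=(0.5, 30.0, 0.5, 100.0))
print("Fitted T2 components (ms):", popt[1], popt[3])
```

The fitted T2 components (and their relative amplitudes) are the quantities that would then be correlated with quality parameters such as WHC or cooking loss.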
16

Reconhecimento facial em imagens de baixa resolução / Facial recognition in low-resolution images

SILVA, José Ivson Soares da 24 February 2015 (has links)
The use of computational systems to recognize people from biometric data has grown in recent decades, and the methods used to perform recognition have evolved accordingly. The biometric trait used for recognition can be the face, voice, fingerprint or any other physical feature capable of distinguishing people. Changes caused by surgery, aging or scars do not necessarily cause significant changes in facial features, so a person can still be recognized after such intentional or unintentional changes in appearance; for automatic recognition systems, however, these changes become a challenge. Beyond physical changes, other factors in image acquisition influence face recognition, such as image resolution, the position of the face relative to the camera, ambient lighting, occlusion and facial expression. The distance at which a person appears in the scene changes the resolution of the face region: a person farther from the camera has a face image at a lower resolution than one who is closer. The objective of systems aimed at this context is to minimize the influence of resolution on recognition rates, since face recognition systems perform worse on low-resolution face images. One of the stages of a recognition system is feature extraction, which processes the input data and provides a more representative set of information about the images. In the feature extraction stage, the patterns in the training database are received at the same dimension, that is, images at the same resolution. If the images available for training have different resolutions, or the test images have a resolution different from the training images, the resolution must be handled in the preprocessing stage, either by increasing the resolution of the smaller images or by reducing the resolution of the larger ones. Increasing the resolution does not guarantee an information gain that improves system performance. In this work, two methods are developed in the feature extraction stage, performed with Eigenfaces, in which the feature vectors are resized to a smaller scale by interpolation, similar to image resizing. In the first method, after feature extraction, the feature vectors and the training images are resized, and the training and test images are then projected into the feature space using the reduced-dimension vectors. In the second method, only the feature vectors are resized and multiplied by a compensation factor; the training images are projected using the original vectors and the test images are projected using the reduced vectors into the same space. The proposed methods were tested on four face recognition databases exhibiting illumination variation, facial expression variation, glasses and varying face position.
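The sketch below illustrates the second method in spirit: eigenfaces are computed at the training resolution, then the projection vectors are interpolated down and scaled by a compensation factor so that lower-resolution probes can be projected into the same space. The image sizes, the compensation factor and the use of scikit-learn/SciPy are assumptions for illustration, not the author's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.ndimage import zoom

TRAIN_SHAPE = (64, 64)   # resolution of training faces (assumption)
TEST_SHAPE = (32, 32)    # lower resolution of test faces (assumption)

def fit_eigenfaces(train_images, n_components=20):
    X = np.stack([img.ravel() for img in train_images])
    return PCA(n_components=n_components).fit(X)

def resized_components(pca, factor, compensation):
    """Reshape each eigenface to the training image shape, interpolate it
    down by 'factor' (as in image resizing) and scale by a compensation
    factor, so low-resolution probes project into a comparable space."""
    comps = []
    for c in pca.components_:
        small = zoom(c.reshape(TRAIN_SHAPE), factor)
        comps.append(small.ravel() * compensation)
    return np.stack(comps)

rng = np.random.default_rng(1)
train = [rng.random(TRAIN_SHAPE) for _ in range(30)]   # placeholder faces
pca = fit_eigenfaces(train)
W_small = resized_components(pca, factor=0.5, compensation=4.0)

# Training images use the original vectors; a low-res probe uses the resized ones.
train_coeffs = pca.transform(np.stack([t.ravel() for t in train]))
mean_small = zoom(pca.mean_.reshape(TRAIN_SHAPE), 0.5).ravel()
test_lowres = rng.random(TEST_SHAPE)
test_coeffs = W_small @ (test_lowres.ravel() - mean_small)
```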
17

IR Image Machine Learning for Smart Homes

Nerborg, Amanda, Josse, Elias January 2020 (has links)
Sweden is expecting an aging population and a shortage of healthcare professionals in the near future, which raises both economic and practical problems in providing a safe and dignified life for the elderly. Technical solutions that contribute to safety, comfort and quick help when needed are essential for this future. Many current solutions include a camera, which is effective but intrusive on personal privacy. Griddy, a hardware solution built around a Panasonic Grid-EYE infrared thermopile array sensor, offers more privacy for the user. Griddy was developed by students in a previous project and was used for this project's data collection. With Griddy mounted over a bed and additional software to determine whether the user is in the bed, a system could offer monitoring with little human interaction. The purpose was to determine whether such a system could predict human presence with high accuracy and what limitations it might have. Two data sets, a main set and a variational set, were captured with Griddy. The main data set consisted of 240 images labelled "person" and 240 images labelled "no person". The machine learning algorithms used were Support Vector Machine (SVM), k-Nearest Neighbors (kNN) and Neural Network (NN). With 10-fold cross-validation, the highest accuracy was obtained by both SVM and kNN (0.99), which was confirmed by both algorithms' accuracy (1.0) on the test set. The results for the variational data set showed lower reliability when the system faced variations not present in training, such as an elevated room temperature or a duvet covering the person. The main data set therefore needs to be expanded with more variation so that the system can handle greater challenges.
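A minimal sketch of the kind of evaluation described above: SVM and kNN classifiers scored by 10-fold cross-validation on flattened 8x8 thermal frames. The data here are synthetic placeholders; real frames would come from the Griddy/Grid-EYE sensor.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in for the Grid-EYE data: each sample is a flattened 8x8 thermal frame.
rng = np.random.default_rng(0)
empty_bed = rng.normal(loc=21.0, scale=0.3, size=(240, 64))       # room temperature
person_in_bed = rng.normal(loc=21.0, scale=0.3, size=(240, 64))
person_in_bed[:, 20:36] += 6.0                                    # warm region from a body

X = np.vstack([empty_bed, person_in_bed])
y = np.array([0] * 240 + [1] * 240)          # 0 = "no person", 1 = "person"

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(name, scores.mean())
```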
18

Raman Spectroscopic and structural studies of indigo and its four 6,6'-Dihalogeno analogues

Bowen, Richard D., Edwards, Howell G.M., Jorge Villar, Susana E., Karapanayiotis, Thanassis January 2004 (has links)
The Raman and electron impact mass spectra of synthetic indigo and its four 6,6'-dihalogeno analogues are reported and discussed. The influence of varying the halogen on these Raman spectra is considered. Particular emphasis is laid on distinguishing indigo from 6,6'-dibromoindigo and on differentiating between the dihalogeno compounds, so as to develop protocols for determining whether artefacts are coloured with dyes of marine or terrestrial origin and whether such artefacts are dyed with genuine Tyrian Purple or with dihalogenoindigo substitutes that do not contain bromine. The value of even low-resolution electron impact mass spectrometry in a forensic context, as a means of identifying authentic 6,6'-dibromoindigo and distinguishing it from its dihalogeno analogues, is emphasised.
19

L'effet de la psychoneurothérapie sur l'activité électrique du cerveau d'individus souffrant du trouble dépressif majeur unipolaire / The effect of psychoneurotherapy on the electrical activity of the brain in individuals suffering from unipolar major depressive disorder

Paquette, Vincent January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
