21

Mätosäkerhet vid kalibrering av referensutrustning för blodtrycksmätning : En modell för framtagning av mätosäkerhet för referensmanometer WA 767 / Measurement uncertainty for calibration of reference equipment for blood pressure measurement : A model for obtaining measurement uncertainty of reference manometer WA 767

Patzauer, Rebecka, Wessel, Elin January 2016 (has links)
The Department of Medical Technology at Akademiska sjukhuset has updated its calibration protocol for the Welch Allyn 767, which serves as the reference manometer when blood pressure meters are calibrated. According to ISO 9001 and ISO 13485, the protocol must state the measurement uncertainty at every calibration point, but routines for this were not defined. A model for determining measurement uncertainty was designed from the standardized methods of the "Guide to the expression of uncertainty in measurement" (GUM) and adapted for use at the department of Medical Technology. A calibration method was defined, and with the model the measurement uncertainty of a reference manometer was calculated; it was smaller than the ± 3 mmHg specified by Welch Allyn. Propagation of uncertainty from the calibration to the blood pressure measurement was also investigated. The uncertainty increased at every step, so the department should introduce a protocol for how calibrations are performed and thereby improve traceability.
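The GUM-style workflow this model builds on combines the individual uncertainty contributions at a calibration point into a combined standard uncertainty and an expanded uncertainty. The sketch below illustrates that calculation; the readings, resolution and reference contributions are hypothetical values, not figures taken from the thesis.

```python
import numpy as np

# Repeated readings at one calibration point (mmHg) -- hypothetical values.
readings = np.array([120.1, 119.8, 120.3, 120.0, 119.9])

# Type A: standard uncertainty of the mean from repeated observations.
u_type_a = readings.std(ddof=1) / np.sqrt(len(readings))

# Type B: contributions from specifications, assumed rectangular
# distributions (half-width / sqrt(3)) -- hypothetical half-widths.
u_resolution = 0.5 / np.sqrt(3)   # display resolution of the manometer
u_reference = 1.0 / np.sqrt(3)    # uncertainty of the pressure reference

# Combined standard uncertainty: root sum of squares of all contributions.
u_combined = np.sqrt(u_type_a**2 + u_resolution**2 + u_reference**2)

# Expanded uncertainty with coverage factor k = 2 (~95 % coverage).
k = 2
U_expanded = k * u_combined

print(f"u_A = {u_type_a:.3f} mmHg, u_c = {u_combined:.3f} mmHg, "
      f"U (k=2) = {U_expanded:.2f} mmHg")
```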
22

A Multi-Target Graph-Constrained HMM Localisation Approach using Sparse Wi-Fi Sensor Data / Graf-baserad HMM Lokalisering med Wi-Fi Sensordata av Gångtrafikanter

Danielsson, Simon, Flygare, Jakob January 2018 (has links)
This thesis explored the possibilities of using a hidden Markov model approach for multi-target localisation in an urban environment, with observations generated from Wi-Fi sensors. The area is modelled as a network of nodes and arcs, where the arcs represent sidewalks and constitute the hidden states of the model. The output of the model is the expected number of people on each road segment throughout the day. In addition, two methods for analysing the impact of events in the area are proposed: the first is based on a time series analysis, and the second on the transition matrix re-estimated with the Baum-Welch algorithm. Both methods reveal which road segments are most heavily affected by a surge of traffic in the area, as well as potential bottlenecks where congestion is likely to have occurred.
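To make the graph-constrained HMM concrete, here is a minimal sketch in which the hidden states are the arcs of a small street graph, transitions are only allowed between arcs that share a node, and a Viterbi decoder recovers the most likely arc sequence from a sequence of sensor detections. The graph, emission probabilities and observations are hypothetical; the thesis's actual sensor model and network are not reproduced here.

```python
import numpy as np

# Hypothetical street graph: 4 arcs (hidden states). Transitions are allowed
# only between arcs that share a node (plus staying on the same arc).
adjacency = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
A = adjacency / adjacency.sum(axis=1, keepdims=True)  # row-stochastic transitions

# Emission model: probability that each of 3 Wi-Fi sensors detects a device
# located on a given arc (hypothetical values).
B = np.array([
    [0.7, 0.2, 0.1],
    [0.4, 0.5, 0.1],
    [0.1, 0.5, 0.4],
    [0.1, 0.2, 0.7],
])
pi = np.full(4, 0.25)            # uniform prior over arcs
obs = [0, 1, 1, 2, 2]            # sequence of sensor detections over time

def viterbi(pi, A, B, obs):
    """Most likely sequence of hidden arcs given the sensor observations."""
    with np.errstate(divide="ignore"):   # log(0) = -inf blocks forbidden moves
        log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
    T, n = len(obs), len(pi)
    delta = log_pi + log_B[:, obs[0]]
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A  # previous arc x next arc
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

print("Most likely arc sequence:", viterbi(pi, A, B, obs))
```

In the thesis's setting, decoded paths like this would be aggregated over all observed devices to give the expected number of people per road segment.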
23

Unsupervised hidden Markov model for automatic analysis of expressed sequence tags

Alexsson, Andrei January 2011 (has links)
This thesis provides an in-depth analysis of expressed sequence tags (ESTs), which represent pieces of eukaryotic mRNA, using an unsupervised hidden Markov model (HMM). ESTs are short nucleotide sequences that are used primarily for rapid identification of new genes with potential coding regions (CDS). ESTs are produced by sequencing double-stranded cDNA, and the synthesized ESTs are stored in digital form, usually in FASTA format. Since sequencing is often randomized and parts of mRNA contain non-coding regions, some ESTs will not represent a CDS. It is desirable to remove these unwanted ESTs if the purpose is to identify genes associated with a CDS. Applying a stochastic HMM allows identification of the region contents of an EST. Software such as ESTScan uses an HMM trained by supervised learning on annotated data. However, because annotated data are not always at hand, this thesis focuses on the ability to train an HMM with unsupervised learning on data containing ESTs both with and without a CDS; the training data are not annotated, i.e. the regions that an EST consists of are unknown. A new HMM is introduced in which the parameters are chosen to be reasonably consistent with biologically important regions of an mRNA, such as the Kozak sequence, poly(A) signals and poly(A) tails, so that training and decoding assign ESTs to the proper states of the HMM. The transition probabilities of the HMM have been adapted to represent the mean length and distribution of the different regions of mRNA. The HMM's specificity and sensitivity have been tested via BLAST, by blasting each EST and comparing the BLAST results with the HMM's predictions. A regression analysis shows that the length of the ESTs used when training the HMM is significantly important: the longer, the better. The final results show that it is possible to train an HMM with unsupervised machine learning, but to be comparable to a supervised approach such as ESTScan, further extension of the HMM is necessary, for example frame-shift correction of ESTs by improving the HMM's ability to choose correctly positioned start codons or nucleotides. The false positive results are usually due to incorrectly positioned start codons leading to CDS lengths that are too short; since no frame-shift correction is implemented, short predicted CDS lengths are not accepted and are hence not counted as coding regions during prediction. However, when supervised models are lacking, an unsupervised HMM is a potential replacement with stable performance that can be adapted for any eukaryotic organism.
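As a rough illustration of this kind of unsupervised training, the sketch below fits a categorical HMM to integer-encoded nucleotide sequences with the Baum-Welch algorithm (no annotation used) and then decodes the most likely state path. It assumes a recent version of hmmlearn that provides CategoricalHMM; the state count, encoding and toy sequences are hypothetical and far smaller than real EST data, and the states only loosely stand in for the mRNA regions discussed above.

```python
import numpy as np
from hmmlearn import hmm

# Integer-encode nucleotides: A=0, C=1, G=2, T=3 (hypothetical toy sequences).
encode = {"A": 0, "C": 1, "G": 2, "T": 3}
seqs = ["ATGGCCATTGTAATGGGCCGC", "GCCACCATGGCGTAAATTTTT"]
X = np.concatenate([[encode[c] for c in s] for s in seqs]).reshape(-1, 1)
lengths = [len(s) for s in seqs]  # boundaries of the individual sequences

# Unsupervised training: Baum-Welch estimates transition and emission
# probabilities from unannotated data; 4 states are an assumed stand-in for
# regions such as 5'UTR / CDS / 3'UTR / poly(A), not the thesis's design.
model = hmm.CategoricalHMM(n_components=4, n_iter=100, random_state=0)
model.fit(X, lengths)

# Decode the most likely state (region) path of the first sequence.
states = model.predict(X[: lengths[0]])
print("Predicted region labels:", states)
```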
24

Automated phoneme mapping for cross-language speech recognition

Sooful, Jayren Jugpal 11 January 2005 (has links)
This dissertation explores a unique automated approach to mapping one phoneme set to another, based on the acoustic distances between the individual phonemes. Although the focus of this investigation is on cross-language applications, the approach can be extended to same-language but different-database applications as well. The main goal is to use the data of a source language to train the initial acoustic models of a target language for which very little speech data may be available. To do this, an automatic technique for mapping the phonemes of the two data sets must be found. Using such a technique, it would be possible to accelerate the development of a speech recognition system for a new language. Current research in cross-language speech recognition has focused on manual methods to map phonemes. This investigation considers an English-to-Afrikaans phoneme mapping as well as an Afrikaans-to-English phoneme mapping; these language pairs have been studied before, but using manual phoneme mapping methods. To determine the best phoneme mapping, different acoustic distance measures are compared: the Kullback-Leibler measure, the Bhattacharyya distance metric, the Mahalanobis measure, the Euclidean measure, the L2 metric and the Jeffreys-Matusita distance. The distance measures are tested by comparing the cross-database recognition results obtained on phoneme models created from the TIMIT speech corpus and a locally compiled South African SUN Speech database. By selecting the most appropriate distance measure, an automated procedure to map phonemes from the source language to the target language can be carried out. The best distance measure for the mapping gives recognition rates comparable to a manual mapping process undertaken by a phonetic expert. This study also investigates the effect of the number of Gaussian mixture components on the mapping and on the speech recognition system's performance. The results indicate that the recogniser's performance increases up to a limit as the number of mixtures increases. In addition, the study explores the effect of excluding the Mel-frequency delta and acceleration cepstral coefficients; it is found that including these temporal features helps improve both the mapping and the recognition system's phoneme recognition rate. Experiments are also carried out to determine the impact of the number of HMM recogniser states, and it is found that single-state HMMs deliver the optimum cross-language phoneme recognition results. After the mapping, speaker adaptation strategies are applied to the recognisers to improve their target-language performance. The models of a fully trained speech recogniser in a source language are adapted to target-language models using Maximum Likelihood Linear Regression (MLLR) followed by Maximum A Posteriori (MAP) techniques. Embedded Baum-Welch re-estimation (EBWR) is used to further adapt the models to the target language. These techniques result in a considerable improvement in the phoneme recognition rate. Although a combination of MLLR and MAP techniques has been used previously in speech adaptation studies, the combination of MLLR, MAP and EBWR in cross-language speech recognition is a unique contribution of this study. Finally, a data pooling technique is applied to build a new recogniser using the automatically mapped phonemes from the target language as well as the source language phonemes.
This new recogniser demonstrates moderate bilingual phoneme recognition capabilities. The bilingual recogniser is then further adapted to the target language using MAP and embedded Baum-Welch re-estimation techniques. This combination of adaptation techniques together with the data pooling strategy is uniquely applied in the field of cross-language recognition. The results obtained using this technique outperform all other techniques tested in terms of phoneme recognition rates, although it requires a considerably more time-consuming training process. It displays only slightly poorer phoneme recognition than the recognisers trained and tested on the same-language database. / Dissertation (MEng (Computer Engineering))--University of Pretoria, 2006. / Electrical, Electronic and Computer Engineering / unrestricted
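To illustrate the distance-based mapping idea, the sketch below computes the Bhattacharyya distance between phoneme models and maps each source phoneme to the closest target phoneme. Real HMM phoneme models use Gaussian mixtures per state; the single-Gaussian, diagonal-covariance models and the phoneme labels used here are simplifying assumptions for illustration, not the dissertation's actual models.

```python
import numpy as np

def bhattacharyya(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two diagonal-covariance Gaussians."""
    var_avg = (var1 + var2) / 2.0
    term_mean = 0.125 * np.sum((mu1 - mu2) ** 2 / var_avg)
    term_cov = 0.5 * np.log(np.prod(var_avg) /
                            np.sqrt(np.prod(var1) * np.prod(var2)))
    return term_mean + term_cov

# Hypothetical 3-dimensional cepstral-space models for a few phonemes.
rng = np.random.default_rng(0)
source = {p: (rng.normal(size=3), rng.uniform(0.5, 1.5, size=3))
          for p in ["s_a", "s_e", "s_r"]}         # source-language phonemes
target = {p: (rng.normal(size=3), rng.uniform(0.5, 1.5, size=3))
          for p in ["t_a", "t_e", "t_r", "t_o"]}  # target-language phonemes

# Map each source phoneme to the acoustically closest target phoneme.
mapping = {}
for sp, (mu_s, var_s) in source.items():
    dists = {tp: bhattacharyya(mu_s, var_s, mu_t, var_t)
             for tp, (mu_t, var_t) in target.items()}
    mapping[sp] = min(dists, key=dists.get)

print(mapping)
```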
25

Setting fire to our bed: a look at narrative persuasion through investigating depictions of intimate partner violence

Masterson, Desirae Sarah 09 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / This thesis sought to attain a greater understanding of persuasion through narrative. First, a rhetorical analysis was conducted, identifying fantasy themes represented in two original music-video artifacts. These themes formed what the author calls the Symbolic Convergence Cycle of Intimate Partner Violence (IPV). Next, an experiment was conducted to provide further evidence that realistic narrative presentations have a greater ability to shape perceptions than more abstract presentations. Findings included that women were more likely than men to identify subtle abusive behaviors as abusive. However, after exposure to conditions containing the visual portion of the music video "Love the Way You Lie", both female and male participants were less likely to identify subtle abusive behavior as abusive. This revealed that even though two messages can contain the same themes about IPV, the way the messages were presented affected the way viewers interpreted them.
26

Storied voices in Native American texts : Harry Robinson, Thomas King, James Welch and Leslie Marmon Silko

Chester, Blanca Schorcht 05 1900 (has links)
"Storied Voices in Native American Texts: Harry Robinson, Thomas King, James Welch and Leslie Marmon Silko" approaches Native American literatures from within an interdisciplinary framework that complicates traditional notions o f literary "origins" and canon. It situates the discussion of Native literatures in a Native American context, suggesting that contemporary Native American writing has its roots in Native oral storytelling traditions. Each of these authors draws on specific stories and histories from his or her Native culture. They also draw on European elements and contexts because these are now part o f Native American experience. I suggest that Native oral tradition is already inherently novelistic, and the stories that lie behind contemporary Native American writing explicitly connect past and present as aspects o f current Native reality. Contemporary Native American writers are continuing an on-going and vital storytelling tradition through written forms. A comparison of the texts o f a traditional Native storyteller, Robinson, with the highly literate novels of King, Welch and Silko, shows how orally told stories connect with the process o f writing. Robinson's storytelling suggests how these stories "theorize" the world as he experiences it; the Native American novel continues to theorize Native experience in contemporary times. Native writers use culturally specific stories to express an on-going Native history. Their novels require readers to examine their assumptions about who is telling whose story, and the traditional distinctions made between fact and fiction, history and story. King's Green Grass. Running Water takes stories from Western European literary traditions and Judeao-Christian mythology and presents them as part of a Native creation story. Welch's novel Fools Crow re-writes a particular episode from history, the Marias River Massacre, from a Blackfeet perspective. Silko's Almanac of the Dead recreates the Mayan creation story o f the Popol Vuh in the context o f twentiethcentury American culture. Each of these authors maintains the dialogic fluidity of oral storytelling performance in written forms and suggests that stories not only reflect the world, but that they create it in the way that Robinson understands storytelling as a form of theory.
27

Pré-processamento, extração de características e classificação offline de sinais eletroencefalográficos para uso em sistemas BCI

Machado, Juliano Costa January 2012 (has links)
The use of Brain Computer Interface (BCI) systems for controlling devices has generated increasing research on EEG signal analysis, mainly because the technological development of data-processing systems offers new perspectives for developing equipment to assist people with motor disabilities. This study examines the behavior of Fisher's Linear Discriminant (LDA) and Naive Bayes classifiers in classifying right-hand versus left-hand movement from acquired electroencephalographic signals. As input features, the energy of signal segments band-pass filtered within the sensorimotor rhythm frequencies was used, together with spectral power components obtained from the Welch modified periodogram. As a preprocessing step, the Common Spatial Pattern (CSP) spatial filter was applied to increase the discriminative activity between the two movement classes. Hit rates of up to 70% were obtained on the database generated in this work and up to 88% on the BCI Competition II database, rates consistent with other work in the field.
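As a rough sketch of the feature-extraction and classification pipeline described above (Welch periodogram band power followed by an LDA classifier), the example below uses synthetic single-channel trials. The sampling rate, mu-band limits (8-12 Hz) and data are hypothetical, and the CSP spatial filtering step is omitted for brevity.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 250                      # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)

def band_power(trial, fs, fmin=8.0, fmax=12.0):
    """Mean Welch spectral power in a frequency band (mu rhythm assumed)."""
    f, pxx = welch(trial, fs=fs, nperseg=fs)
    mask = (f >= fmin) & (f <= fmax)
    return pxx[mask].mean()

def make_trial(strength):
    """Synthetic 2-second trial: a 10 Hz rhythm of given strength plus noise."""
    t = np.arange(2 * fs) / fs
    return strength * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)

# Two classes with different mu-band power (stand-ins for left/right hand).
trials = [make_trial(1.5) for _ in range(40)] + [make_trial(0.5) for _ in range(40)]
labels = np.array([0] * 40 + [1] * 40)
features = np.array([[band_power(tr, fs)] for tr in trials])

clf = LinearDiscriminantAnalysis().fit(features, labels)
print("Training accuracy:", clf.score(features, labels))
```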
