231 |
The prevalence of pathogenic E. coli strains identified from drinking water in selected rural areas of South Africa and Gabon using the compartmental bag test
Mbedzi, Rendani Livingstone 05 1900 (has links)
MSc (Microbiology) / See the attached abstract below
|
232 |
A tale of two applications: closed-loop quality control for 3D printing, and multiple imputation and the bootstrap for the analysis of big data with missingness
Wenbin Zhu (12226001) 20 April 2022 (has links)
<div><b>1. A Closed-Loop Machine Learning and Compensation Framework for Geometric Accuracy Control of 3D Printed Products</b></div><div><b><br></b></div>Additive manufacturing (AM) systems enable direct printing of three-dimensional (3D) physical products from computer-aided design (CAD) models. Despite the many advantages that AM systems have over traditional manufacturing, one of their significant limitations that impedes their wide adoption is geometric inaccuracies, or shape deviations between the printed product and the nominal CAD model. Machine learning for shape deviations can enable geometric accuracy control of 3D printed products via the generation of compensation plans, which are modifications of CAD models informed by the machine learning algorithm that reduce deviations in expectation. However, existing machine learning and compensation frameworks cannot accommodate deviations of fully 3D shapes with different geometries. The feasibility of existing frameworks for geometric accuracy control is further limited by resource constraints in AM systems that prevent the printing of multiple copies of new shapes.<div><br></div><div>We present a closed-loop machine learning and compensation framework that can improve geometric accuracy control of 3D shapes in AM systems. Our framework is based on a Bayesian extreme learning machine (BELM) architecture that leverages data and deviation models from previously printed products to transfer deviation models, and more accurately capture deviation patterns, for new 3D products. The closed-loop nature of compensation under our framework, in which past compensated products that do not adequately meet dimensional specifications are fed into the BELMs to re-learn the deviation model, enables the identification of effective compensation plans and satisfies resource constraints by printing only one new shape at a time. 
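The compensation idea described above can be illustrated with a toy sketch. This is not the thesis's Bayesian extreme learning machine: the one-dimensional deviation function, the ridge-regularised ELM fit, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50, lam=1e-2):
    """Fit an extreme learning machine: a random hidden layer plus a
    ridge (Gaussian-prior MAP) solve for the output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # random feature map
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# toy 'deviation' data: deviation as a function of nominal boundary angle
theta = np.linspace(0, 2 * np.pi, 200)[:, None]
deviation = 0.05 * np.sin(3 * theta[:, 0]) + 0.01 * rng.normal(size=200)

model = elm_fit(theta, deviation)

# compensation: shift the CAD boundary opposite to the predicted deviation,
# so that the printed (deviated) result lands near the nominal shape
nominal_radius = np.ones(200)
compensated = nominal_radius - elm_predict(model, theta)
```

In the closed-loop setting described above, the compensated shape would be printed, its measured deviations appended to the training data, and the model re-fit; a Bayesian treatment would place priors on the output weights rather than use a fixed ridge penalty.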
The power and cost-effectiveness of our framework are demonstrated with two validation experiments that involve different geometries for a Markforged Metal X AM machine printing 17-4 PH stainless steel products. As demonstrated in our case studies, our framework can reduce shape inaccuracies by 30% to 60% (depending on a shape's geometric complexity) in at most two iterations, with three training shapes and one or two test shapes for a specific geometry involved across the iterations. We also perform an additional validation experiment using a third geometry to establish the capabilities of our framework for prospective shape deviation prediction of 3D shapes that have never been printed before. This third experiment indicates that choosing one suitable class of past products for prospective prediction and model transfer, instead of including all past printed products with different geometries, could be sufficient for obtaining deviation models with good predictive performance. Ultimately, our closed-loop machine learning and compensation framework provides an important step towards accurate and cost-efficient deviation modeling and compensation for fully 3D printed products using a minimal number of printed training and test shapes, and thereby can advance AM as a high-quality manufacturing paradigm.<br></div><div><br></div><div><b>2. Multiple Imputation and the Bootstrap for the Analysis of Big Data with Missingness</b></div><div><br></div><div>Inference can be a challenging task for Big Data. Two significant issues are that Big Data frequently exhibit complicated missing data patterns, and that the complex statistical models and machine learning algorithms typically used to analyze Big Data do not have convenient quantification of uncertainties for estimators. These two difficulties have previously been addressed using multiple imputation and the bootstrap, respectively. 
However, it is not clear how multiple imputation and bootstrap procedures can be effectively combined to perform statistical inferences on Big Data with missing values. We investigate a practical framework for the combination of multiple imputation and bootstrap methods. Our framework is based on two principles: distribution of multiple imputation and bootstrap calculations across parallel computational cores, and the quantification of sources of variability involved in bootstrap procedures that use subsampling techniques via random effects or hierarchical models. This framework effectively extends the scope of existing methods for multiple imputation and the bootstrap to a broad range of Big Data settings. We perform simulation studies for linear and logistic regression across Big Data settings with different rates of missingness to characterize the frequentist properties and computational efficiencies of the combinations of multiple imputation and the bootstrap. We further illustrate how effective combinations of multiple imputation and the bootstrap for Big Data analyses can be identified in practice by means of both the simulation studies and a case study on COVID infection status data. Ultimately, our investigation demonstrates how the flexible combination of multiple imputation and the bootstrap under our framework can enable valid statistical inferences in an effective manner for Big Data with missingness.<br></div>
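How the two procedures compose can be shown with a deliberately crude sketch: the resampling-based imputation model, the number of imputations M, and the bootstrap size B are illustrative assumptions, not the framework investigated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: a linear model with values missing completely at random in x
n = 500
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
x_obs = x.copy()
x_obs[rng.random(n) < 0.2] = np.nan            # 20% missingness

def impute(x_obs):
    """Crude stochastic imputation: draw missing entries from the empirical
    distribution of observed ones (ignores y, so it attenuates the slope;
    a proper MI model would impute x conditional on y)."""
    x_imp = x_obs.copy()
    miss = np.isnan(x_imp)
    x_imp[miss] = rng.choice(x_imp[~miss], miss.sum())
    return x_imp

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

M, B = 10, 200
est, var = [], []
for _ in range(M):                             # multiple-imputation loop
    x_m = impute(x_obs)
    est.append(slope(x_m, y))
    boots = []
    for _ in range(B):                         # bootstrap within each imputation
        idx = rng.integers(0, n, n)
        boots.append(slope(x_m[idx], y[idx]))
    var.append(np.var(boots, ddof=1))

# Rubin's rules: pool within-imputation (bootstrap) and between-imputation variance
qbar = np.mean(est)
W = np.mean(var)
Bvar = np.var(est, ddof=1)
total_var = W + (1 + 1 / M) * Bvar
```

The thesis's framework additionally distributes these loops across parallel cores and models the extra variability introduced by subsampled bootstraps with random-effects models; this sketch only shows the basic composition and pooling.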
|
233 |
Electronic Flight Bag / Electronic Flight Bag
Kúšik, Lukáš January 2021 (has links)
The goal of this master's thesis is to create an Electronic Flight Bag (EFB) application for mobile phones running the Android operating system. To accomplish this task, current legislation concerning EFB applications was reviewed, together with the state-of-the-art EFB applications available on the app market. Based on this information, an EFB application aimed at general aviation pilots is designed and implemented. The resulting product includes features for flight planning, a custom aeronautical map, a pilot logbook, an airport catalogue with data from around the world, and more. Offline support guarantees functionality in real flight conditions. The final product also seeks to innovate over existing EFB applications by including functionality such as automatic checklists and an augmented-reality view.
|
234 |
Påverkar prehospitala luftvägshjälpmedel överlevnaden hos patienter som drabbats av hjärtstopp? : en litteraturstudie
Henriksson, Jonatan, Tedmar, Jens January 2020 (has links)
Bakgrund Vid ett prehospitalt hjärtstopp krävs utöver hjärt- och lungräddning med bröstkompressioner och defibrillering med hjärtstartare, även avancerad luftvägshantering för att skapa en fri luftväg vilket ambulanssjuksköterskan ansvarar för. Det finns en mängd olika luftvägshjälpmedel som ambulanssjuksköterskan kan använda sig av. För en del sjuksköterskor inom ambulanssjukvården kan en viss osäkerhet kring användningen av luftvägshjälpmedel finnas då de kan sakna rätt kompetens, utbildning eller ej fått tillräcklig träning i användandet för att utföra det på ett patientsäkert sätt. Syfte Syftet med denna studie var att jämföra prehospitala luftvägshjälpmedel vid hjärtstopp utanför sjukhus i förhållande till överlevnad. Metod Studien är en litteraturöversikt med kvantitativ ansats. Studien genomfördes genom en systematisk sökning av vetenskapliga artiklar vilka har jämfört olika luftvägshjälpmedel vid prehospitala hjärtstopp. Databaser som PubMed och CINAHL har främst använts. De utvalda artiklarna har kvalitetsgranskats. Resultat Två huvudfynd framkom där mask- och blåsa var korrelerad till högre prevalens av överlevnad och där endotracheal intubering var korrelerad till högre prevalens att uppnå återkomst av spontan cirkulation. Slutsats Av de inkluderade artiklarna visar resultatet på att mask- och blåsa är bästa alternativet för överlevnad och att endotracheal intubering är bästa alternativet för att uppnå återkomst av spontan cirkulation under ett prehospitalt hjärtstopp. Dock bör slutsatsen tas med försiktighet då resultaten kan skilja sig och bero på en mängd olika faktorer som skiljer sig åt i de olika studierna. / Background In addition to cardiopulmonary resuscitation (CPR) with chest compressions and defibrillation with a defibrillator, prehospital cardiac arrest also requires advanced airway management to create a clear airway, for which the ambulance nurse is responsible. There are a variety of airway devices that the ambulance nurse can use.
For some nurses in ambulance care, there may be some uncertainty about the use of airway devices, as they may lack the right skills or education, or may not have received sufficient training to use them in a patient-safe manner. Aim The purpose of this study was to compare prehospital airway devices used in out-of-hospital cardiac arrest in relation to survival. Method The study is a literature review with a quantitative approach. It was conducted through a systematic search of scientific articles comparing different airway devices in prehospital cardiac arrest. Databases such as PubMed and CINAHL were mainly used. The selected articles were quality-checked. Results Two main findings emerged: bag-valve-mask ventilation was correlated with a higher prevalence of survival, and endotracheal intubation was correlated with a higher likelihood of achieving return of spontaneous circulation. Conclusion Of the included articles, the results indicate that the bag-valve mask is the best option for survival and that endotracheal intubation is the best option for achieving return of spontaneous circulation during a prehospital cardiac arrest. However, this conclusion should be treated with caution, as the results may differ and depend on a variety of factors that vary between the studies.
|
235 |
Production of filamentous fungal biomass on waste-derived volatile fatty acids for ruminant feed supplementation and its in vitro digestion analysis
Bouzarjomehr, Mohammadali January 2022 (has links)
Single-cell proteins, such as edible filamentous fungal biomass, are considered a promising sustainable source of animal feed supplementation. Filamentous fungi can be cultivated on different organic substrates, including volatile fatty acids (VFAs) such as acetic, propionic, and butyric acids. These VFAs are generated as intermediate metabolites in the well-established waste-valorisation process of anaerobic digestion (AD). This project investigates a sustainable approach to producing animal feed supplementation by cultivating fungal biomass on waste-derived VFAs, together with an in vitro analysis of the digestibility of the fungal biomass as ruminant feed. To this end, the optimum conditions for producing Aspergillus oryzae biomass on different VFA effluents, derived from the anaerobic digestion of food waste plus chicken manure (FWCKM) and of potato protein liquor (PPL), were studied at different pH values, nitrogen sources, and feed mixtures. The analyses showed that PPL gave the highest biomass yield, 0.4 g biomass/g consumed VFAs based on volatile solids (VS), when the pH was adjusted to 6.2. Furthermore, the digestibility of the produced fungal biomass was analysed using three different in vitro digestion methods, the Tilley and Terry (TT) method, the Gas Production Method (GPM), and the Nylon Bag Method (NBM), and the results were compared with conventional feeds (silage and rapeseed meal). The results from the different digestibility methods show that the A. oryzae fungal biomass had an approximately 10-15% higher dry-matter digestibility fraction than silage and rapeseed meal (the reference feeds). Hence, these results reveal that A. oryzae can grow on VFA effluents to produce protein-rich fungal biomass with better digestibility than conventional feeds, confirming the initial hypothesis of the study.
|
236 |
Automatic Detection of Brain Functional Disorder Using Imaging Data
Dey, Soumyabrata 01 January 2014 (has links)
Attention Deficit Hyperactivity Disorder (ADHD) has recently been receiving a lot of attention, mainly for two reasons. First, it is one of the most commonly diagnosed childhood behavioral disorders: around 5-10% of children worldwide are diagnosed with ADHD. Second, the root cause of the disorder is still unknown, and therefore no biological measure exists to diagnose ADHD. Instead, doctors must diagnose it based on clinical symptoms, such as inattention, impulsivity and hyperactivity, which are all subjective. Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for understanding the functioning of the brain, such as identifying the brain regions responsible for different cognitive tasks or analyzing statistical differences in brain functioning between diseased and control subjects. ADHD is also being studied using fMRI data. In this dissertation we aim to solve the problem of automatic diagnosis of ADHD subjects using their resting-state fMRI (rs-fMRI) data. As a core step of our approach, we model the functions of a brain as a connectivity network, which is expected to capture information about how synchronous different brain regions are in terms of their functional activities. The network is constructed by representing different brain regions as nodes, where any two nodes are connected by an edge if the correlation of their activity patterns is higher than some threshold. The brain regions, represented as the nodes of the network, can be selected at different granularities, e.g. single voxels or clusters of functionally homogeneous voxels. The topological differences between the constructed networks of the ADHD and control groups of subjects are then exploited in the classification approach. We have developed a simple method employing the Bag-of-Words (BoW) framework for the classification of ADHD subjects.
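The thresholded-correlation network construction described above can be sketched as follows; synthetic time series stand in for rs-fMRI data, and the region count and the 0.5 threshold are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy data: time series for a handful of brain regions (regions x time)
n_regions, n_time = 20, 150
ts = rng.normal(size=(n_regions, n_time))
ts[1] = ts[0] + 0.1 * rng.normal(size=n_time)   # make regions 0 and 1 synchronous

corr = np.corrcoef(ts)                          # pairwise activity correlations
threshold = 0.5
adj = (np.abs(corr) > threshold).astype(int)    # edge iff correlation exceeds threshold
np.fill_diagonal(adj, 0)                        # no self-loops

degree = adj.sum(axis=1)                        # per-node connectivity (the BoW feature)
```

With real rs-fMRI data, each row of `ts` would be the time course of a voxel or of a cluster of functionally homogeneous voxels, matching the granularities mentioned above.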
We represent each node in the network by a 4-D feature vector: the node degree and the 3-D location. The 4-D vectors of all network nodes in the training data are then grouped into a number of clusters using K-means, where each cluster is termed a word. Finally, each subject is represented by a histogram (bag) of such words. A Support Vector Machine (SVM) classifier is used to detect ADHD subjects from their histogram representations. This method achieves 64% classification accuracy. The simple approach above has several shortcomings. First, there is a loss of spatial information in constructing the histogram, because it only counts the occurrences of words, ignoring their spatial positions. Second, features from the whole brain are used for classification, but some brain regions may not contain any useful information and may only increase the feature dimensionality and the noise of the system. Third, in this study we used only one network feature, the degree of a node, which measures its connectivity, while other, more complex network features may be useful for the proposed problem. To address these shortcomings, we hypothesize that only a subset of the nodes of the network possesses important information for the classification of ADHD subjects. To identify the important nodes we have developed a novel algorithm. The algorithm repeatedly generates random subsets of nodes, each time extracting features from the subset to compute a feature vector and perform classification. The subsets are then ranked by classification accuracy, and the occurrences of each node in the top-ranked subsets are counted. Our algorithm selects the most frequently occurring nodes for the final classification. Furthermore, along with the node degree, we employ three more node features: network cycles, the varying-distance degree and the edge weight sum.
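The subset-ranking idea can be sketched with synthetic node features; a nearest-centroid scorer stands in for the SVM, and the planted informative nodes, subset sizes and trial counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n_subjects, n_nodes = 60, 30
labels = np.repeat([0, 1], n_subjects // 2)
# per-subject node features (e.g. degrees); only nodes 0-4 differ between groups
X = rng.normal(size=(n_subjects, n_nodes))
X[labels == 1, :5] += 1.5

def accuracy(features, labels):
    """Simple nearest-centroid classification accuracy (stand-in for the SVM)."""
    c0 = features[labels == 0].mean(axis=0)
    c1 = features[labels == 1].mean(axis=0)
    d0 = ((features - c0) ** 2).sum(axis=1)
    d1 = ((features - c1) ** 2).sum(axis=1)
    return np.mean((d1 < d0).astype(int) == labels)

# score many random node subsets, then count node occurrences in the best ones
n_trials, subset_size, top_k = 300, 8, 30
scores, subsets = [], []
for _ in range(n_trials):
    s = rng.choice(n_nodes, subset_size, replace=False)
    subsets.append(s)
    scores.append(accuracy(X[:, s], labels))

top = np.argsort(scores)[-top_k:]              # indices of the top-ranked subsets
counts = np.zeros(n_nodes, int)
for i in top:
    counts[subsets[i]] += 1

selected = np.argsort(counts)[-5:]             # highest-occurring nodes
```

Under this planted-signal setup, the informative nodes should dominate the occurrence counts in the top-ranked subsets, mirroring how the algorithm surfaces discriminative brain regions.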
We concatenate the features of the selected nodes in a fixed order to preserve their relative spatial information. Experimental validation suggests that using features from the nodes selected by our algorithm indeed helps to improve the classification accuracy. Our finding is also in concordance with the existing literature, as the brain regions identified by our algorithm have been independently reported by many other studies of ADHD. We achieved a classification accuracy of 69.59% using this approach. However, this method represents each voxel as a node of the network, which makes the number of nodes several thousand; as a result, the network construction step becomes computationally very expensive. Another limitation of the approach is that the network features, computed for each node, capture only the local structure while ignoring the global structure of the network. Next, in order to capture the global structure of the networks, we use the Multi-Dimensional Scaling (MDS) technique to project all subjects from an unknown network-space to a low-dimensional space based on their inter-network distance measures. To compute the distance between two networks, we represent each node by a set of attributes: the node degree, the average power, the physical location, the neighbor node degrees, and the average powers of the neighbor nodes. The nodes of the two networks are then mapped so that, over all pairs of mapped nodes, the sum of the attribute distances, which defines the inter-network distance, is minimized. To reduce the network computation cost, we ensure that the maximum relevant information is preserved with minimum redundancy: the nodes of the network are constructed from clusters of highly active voxels, where the activity level of a voxel is measured by the average power of its corresponding fMRI time series.
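Classical MDS on a precomputed inter-network distance matrix can be sketched as below. The distances here come from random points in a hypothetical "network space"; in the dissertation they instead come from the attribute-matching between node pairs described above.

```python
import numpy as np

rng = np.random.default_rng(4)

# toy symmetric inter-network distance matrix for a few subjects
n = 8
pts = rng.normal(size=(n, 5))                    # hidden 'network space'
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

# classical MDS: double-center the squared distances, then eigendecompose
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J                      # Gram matrix of centered points
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]                   # largest eigenvalues first
k = 2
coords = vecs[:, order[:k]] * np.sqrt(np.maximum(vals[order[:k]], 0))
```

Each subject now has a `k`-dimensional coordinate, and a standard classifier can be trained in this embedded space; the embedding preserves the large-scale distance structure while discarding small eigen-directions.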
Our method shows promise, as we achieve impressive classification accuracies (73.55%) on the ADHD-200 data set. Our results also reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects. So far, we have used only the fMRI data for the ADHD diagnosis problem. Finally, we investigated the following questions. Do structural brain images contain useful information related to the ADHD diagnosis problem? Can the classification accuracy of the automatic diagnosis system be improved by combining information from the structural and functional brain data? Towards that end, we developed a new method to combine the information of structural and functional brain images in a late-fusion framework. For the structural data, we input the gray matter (GM) brain images to a Convolutional Neural Network (CNN). The output of the CNN is a feature vector per subject, which is used to train the SVM classifier. For the functional data, we compute the average power of each voxel based on its fMRI time series; the average power of a voxel's fMRI time series measures its activity level. We found significant differences in the voxel power distribution patterns of the ADHD and control groups of subjects. The Local Binary Pattern (LBP) texture feature is applied to the voxel power map to capture these differences. We achieved 74.23% accuracy using GM features, 77.30% using LBP features, and 79.14% using the combined information. In summary, this dissertation demonstrates that structural and functional brain imaging data are useful for the automatic detection of ADHD subjects, as we achieve impressive classification accuracies on the ADHD-200 data set. Our study also helps to identify the brain regions that are useful for ADHD subject classification. These findings can help in understanding the pathophysiology of the disorder.
Finally, we expect that our approaches will contribute towards the development of a biological measure for the diagnosis of ADHD.
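The late-fusion step can be sketched with stand-in probability scores. The score distributions below are synthetic assumptions; in the dissertation the two score streams come from the GM-image CNN+SVM pipeline and the LBP-on-power-map SVM pipeline respectively.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 40
labels = rng.integers(0, 2, n)

# stand-ins for per-subject class-probability scores from the two pipelines
p_struct = np.clip(0.5 + 0.3 * (labels - 0.5) * 2 + 0.2 * rng.normal(size=n), 0, 1)
p_func = np.clip(0.5 + 0.4 * (labels - 0.5) * 2 + 0.2 * rng.normal(size=n), 0, 1)

# late fusion: combine the two probability estimates per subject, then threshold
p_fused = 0.5 * (p_struct + p_func)
pred = (p_fused > 0.5).astype(int)
fused_accuracy = np.mean(pred == labels)
```

Averaging reduces the variance of the per-subject score, which is one simple reason a fused decision can beat either modality alone, consistent with the accuracy gains reported above.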
|
237 |
Conception et construction d'un nouveau type de détecteur PICO, ayant pour but de valider la pertinence d'avoir des parois en plastique souple
Monette, Valérie 04 1900 (has links)
La collaboration scientifique PICO a comme but de développer des détecteurs pour découvrir la matière sombre. Bien que le modèle standard de la physique des particules décrive bien la matière qui nous entoure, plusieurs phénomènes nous indiquent qu'il reste d'autres particules à ajouter à ce modèle. Les recherches montrent que la matière connue ne représente que 5% de toute la masse-énergie contenue dans l'Univers, ce qui laisse place à découvrir les 95% restants. Le groupe PICO se spécialise dans la détection de matière sombre en utilisant les chambres à bulles.
Les chambres à bulles de PICO ont été très performantes au cours des dernières décennies, mais elles ont récemment atteint leur apogée. Ce mémoire a pour but d'évaluer une autre approche quant au design des détecteurs pour voir s'il y a de nouvelles technologies permettant de poursuivre l'utilisation des chambres à bulles. Cependant, des contraintes mécaniques, de radiopureté, financières et techniques s'appliquent sur les matériaux utilisés pour la construction de la chambre. Nous nous intéressons donc à voir si l'utilisation d'un sac en nylon pour remplacer la jarre en verre qui contient le fréon apporterait une approche novatrice. Afin d'évaluer cette idée, la construction d'un modèle réduit de la chambre à bulles, nommé NBBC (Nylon Bag Bubble Chamber), a été mise sur pied. Si le concept fonctionne réellement, ce détecteur justifierait la création d'un modèle plus gros que tout ce qui a été réalisé jusqu'à présent en termes de détection par chambre à bulles.
Ce mémoire présente toutes les étapes nécessaires à la réalisation de la chambre test NBBC, en commençant par un bref historique des détecteurs déjà existants. Les principes du fonctionnement de la chambre sont ensuite exposés, suivis d'une description de tous les circuits électriques développés et de tous les codes écrits. Finalement, les tests effectués sont présentés et le dernier chapitre est dédié aux conclusions obtenues et aux conseils sur la poursuite du projet. / The PICO scientific collaboration seeks to discover dark matter. Although the standard model of particle physics describes well the matter we already know, several phenomena show that there are more particles to add to this model. This research shows that "normal" matter makes up only 5% of all the mass-energy in the Universe, leaving the other 95% still to be found. The PICO group specializes in the use of bubble chambers to find this dark matter.
PICO bubble chambers have been very efficient, but they have reached their peak in recent years. The purpose of this master's thesis is to evaluate other technologies that can be used to construct larger bubble chambers. Mechanical, radiopurity, financial and technical constraints apply to the materials used for the construction of the chamber. We are therefore interested in seeing whether the use of a nylon bag to replace the glass jar containing the freon could bring an innovative perspective to this design. To evaluate this idea, we set out to build a prototype chamber, called NBBC. If the concept actually works, this detector would justify the creation of a larger bubble chamber than anything that has been built so far.
This MSc thesis presents all the steps for the NBBC test chamber construction, starting with a brief history of the existing detectors. The working principles of the chamber will then be explained, followed by a description of all the new design elements. Finally, the tests carried out are presented and the last chapter is dedicated to the conclusions obtained and the advice on the continuation of the project.
|
238 |
An Exploration of the Word2vec Algorithm: Creating a Vector Representation of a Language Vocabulary that Encodes Meaning and Usage Patterns in the Vector Space Structure
Le, Thu Anh 05 1900 (has links)
This thesis is an exploration and exposition of a highly efficient shallow neural network algorithm called word2vec, which was developed by T. Mikolov et al. in order to create vector representations of a language vocabulary such that information about the meaning and usage of the vocabulary words is encoded in the vector space structure. Chapter 1 introduces natural language processing, vector representations of language vocabularies, and the word2vec algorithm. Chapter 2 reviews the basic mathematical theory of deterministic convex optimization. Chapter 3 provides background on some concepts from computer science that are used in the word2vec algorithm: Huffman trees, neural networks, and binary cross-entropy. Chapter 4 provides a detailed discussion of the word2vec algorithm itself and includes a discussion of continuous bag of words, skip-gram, hierarchical softmax, and negative sampling. Finally, Chapter 5 explores some applications of vector representations: word categorization, analogy completion, and language translation assistance.
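The skip-gram-with-negative-sampling training step discussed in Chapter 4 can be sketched in miniature. The toy corpus, untuned hyperparameters, and uniform negative sampling are simplifying assumptions; word2vec proper samples negatives from a smoothed unigram distribution and trains on vastly larger data.

```python
import numpy as np

rng = np.random.default_rng(6)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# toy corpus; vocabulary indices stand in for real tokens
corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
tokens = [idx[w] for w in corpus]

V, dim, window, k, lr = len(vocab), 16, 2, 3, 0.05
W_in = 0.01 * rng.normal(size=(V, dim))    # word ('input') vectors
W_out = 0.01 * rng.normal(size=(V, dim))   # context ('output') vectors

for _ in range(200):
    for pos, center in enumerate(tokens):
        for off in range(-window, window + 1):
            ctx_pos = pos + off
            if off == 0 or not 0 <= ctx_pos < len(tokens):
                continue
            # one observed (center, context) pair plus k uniform negative samples
            targets = [tokens[ctx_pos]] + list(rng.integers(0, V, k))
            labels = [1.0] + [0.0] * k
            for t, y in zip(targets, labels):
                score = sigmoid(W_in[center] @ W_out[t])
                g = score - y                  # gradient of the logistic loss
                d_in = g * W_out[t]            # use the pre-update output vector
                W_out[t] -= lr * g * W_in[center]
                W_in[center] -= lr * d_in
```

Hierarchical softmax replaces the k negative samples with a walk down a Huffman tree over the vocabulary, and CBOW swaps the roles, predicting the center word from averaged context vectors; both variants reuse exactly this logistic update.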
|
239 |
Subliminal priming : Manipulation till att välja en specifik kulör på plastpåse / Subliminal priming : Manipulation to choose a specific colour on a plastic bag
Nordberg, Rickard January 2014 (links)
Primad information är lättare tillgänglig i minnet och kan således lättare bli igenkänd. Förutsättningar för priming är bland annat subliminal perception, mål, tillförlitlighet, icke vaksamt och icke vanemässigt. Studiens syfte är att få bredare förståelse gällande subliminal primings påverkan. Frågeställningen var om kunder i en affär kan manipuleras, primas, till att ta en specifik kulör på plastpåse vid kassan samt om det finns någon könsskillnad vid effekten av priming. Deltagarna var 490 kunder, varav 333 män. Två olika skyltar med olika kulörer placerades vid kassan. Det noterades om kunderna valde den primade kulören på plastpåse eller inte. Kontrollgruppen bestod av 117 personer och dessa fick inte se någon skylt. Resultatet visade en signifikant skillnad: deltagarna valde samma kulör på plastpåse som skylten. Inga könsskillnader påträffades. Forskning visar att primingeffekter kan motstridas genom att individen gör sig medveten om potentiell omedveten påverkan. / Primed information is more accessible in memory and can thus be recognized more easily. Prerequisites for priming include subliminal perception, goals, reliability, a non-vigilant state and non-habitual responding. The study aims to gain a broader understanding of the influence of subliminal priming. The question was whether customers in a store could be manipulated, primed, to take a specific colour of plastic bag at the checkout, and whether there is any gender difference in the effect of priming. Participants were 490 customers, of whom 333 were men. Two different signs with different colours were placed at the checkout, and it was noted whether customers chose the primed colour of plastic bag or not. The control group consisted of 117 people who were not shown a sign. The results showed a significant difference: the participants chose the same colour of plastic bag as the sign. No gender differences were found. Research shows that priming effects can be counteracted if people make themselves aware of potential unconscious influences.
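The significance test is not specified in the abstract, but an association between sign colour and bag choice would typically be checked with a chi-squared test on a 2x2 table. The counts below are hypothetical, not the study's data.

```python
import numpy as np

# hypothetical counts (NOT the study's data):
# rows = sign colour shown, columns = bag colour taken
table = np.array([[150.0, 60.0],
                  [70.0, 140.0]])

row_tot = table.sum(axis=1, keepdims=True)
col_tot = table.sum(axis=0, keepdims=True)
expected = row_tot @ col_tot / table.sum()     # counts expected under independence
chi2 = ((table - expected) ** 2 / expected).sum()
significant = chi2 > 3.84                      # chi-squared critical value, df=1, alpha=0.05
```

A chi-squared statistic above the df=1 critical value would support the abstract's conclusion that bag-colour choice depends on the sign shown; a gender comparison would repeat the test within each gender subgroup.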
|
240 |
Charting habitus : Stephen King, the author protagonist and the field of literary production
Palko, Amy Joyce January 2009 (has links)
While most research in King studies focuses on Stephen King’s contribution to the horror genre, this thesis approaches King as a participant in American popular culture, specifically exploring the role the author-protagonist plays in his writing about writing. I have chosen Bourdieu’s theoretical construct of habitus through which to focus my analysis not only of King’s narratives, but also of his non-fiction and paratextual material: forewords, introductions, afterwords, interviews, reviews, articles, editorials and unpublished archival documents. This has facilitated my investigation into the literary field in which King participates, and which he represents in his fiction, in order to provide insight into his perception of the high/low cultural divide, the autonomous and heteronomous principles of production and the ways in which position-taking within that field might be effected. This approach has resulted in a study that combines the methods of literary analysis and book history; it investigates both the literary construct and the tangible page. King’s part-autobiography, part how-to guide, On Writing (2000), illustrates the rewards such an approach yields by indicating four main ways in which his perception of, and participation in, the literary field manifests: the art/money dialectic, the dangers inherent in producing genre fiction, the representation of art produced according to the heteronomous principle, and the relationship between popular culture and the Academy. The texts which form the focus of the case studies in this thesis, The Shining, Misery, The Dark Half, Bag of Bones and Lisey’s Story, demonstrate that there exists a dramatisation of King’s habitus at the level of the narrative which is centred on the figure of the author-protagonist.
I argue that the actions of the characters Jack Torrance, Paul Sheldon, Thad Beaumont, Mike Noonan and Scott Landon, and the situations they find themselves in, offer an expression of King’s perception of the literary field, an expression which benefits from being situated within the context of his paratextually articulated pronouncements of authorship, publication and cultural production.
|