About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Generalising from Case Studies

Wikfeldt, Emma January 2016
The generalisability of case study findings is heavily criticised in the scientific community. Through a literature review, this study attempts to determine to what extent generalisation from case studies is possible. Resources were collected by searching databases and reference lists. Arguments from both sides are presented, and the review finds that, if done correctly and carefully, with great concern for accuracy, generalisation is possible to almost the same extent as in quantitative research.
2

Gaussian Graphical Model Selection for Gene Regulatory Network Reverse Engineering and Function Prediction

Kontos, Kevin 02 July 2009
One of the most important and challenging "knowledge extraction" tasks in bioinformatics is the reverse engineering of gene regulatory networks (GRNs) from DNA microarray gene expression data. Indeed, as a result of the development of high-throughput data-collection techniques, biology is experiencing a data flood phenomenon that pushes biologists toward a new view of biology, systems biology, which aims at a system-level understanding of biological systems.

Unfortunately, even for small model organisms such as the yeast Saccharomyces cerevisiae, the number p of genes is much larger than the number n of expression data samples. The dimensionality issue induced by this "small n, large p" data setting renders standard statistical learning methods inadequate. Restricting the complexity of the models makes it possible to deal with this serious impediment: by introducing (a priori undesirable) bias in the model selection procedure, one reduces the variance of the selected model, thereby increasing its accuracy.

Gaussian graphical models (GGMs) have proven to be a very powerful formalism for inferring GRNs from expression data. Standard GGM selection techniques can unfortunately not be used in the "small n, large p" data setting. One way to overcome this issue is to resort to regularization. In particular, shrinkage estimators of the covariance matrix, which is required to infer GGMs, have proven to be very effective. Our first contribution is a new shrinkage estimator that improves upon existing ones through the use of a Monte Carlo (parametric bootstrap) procedure.

Another approach to GGM selection in the "small n, large p" data setting consists in reverse engineering limited-order partial correlation graphs (q-partial correlation graphs) to approximate GGMs. Our second contribution is an inference algorithm, the q-nested procedure, that builds a sequence of nested q-partial correlation graphs, exploiting the topology of the smaller-order graphs to infer higher-order graphs. This significantly speeds up the inference of such graphs and avoids problems related to multiple testing. Consequently, we are able to consider higher-order graphs, thereby increasing the accuracy of the inferred graphs.

Another important challenge in bioinformatics is the prediction of gene function. An example of such a prediction task is the identification of genes that are targets of the nitrogen catabolite repression (NCR) selection mechanism in the yeast Saccharomyces cerevisiae. The study of model organisms such as Saccharomyces cerevisiae is indispensable for the understanding of more complex organisms. Our third contribution extends the standard two-class classification approach by enriching the set of variables and comparing several feature selection techniques and classification algorithms.

Finally, our fourth contribution formulates the prediction of NCR target genes as a network inference task. We use GGM selection to infer multivariate dependencies between genes and, starting from a set of genes known to be sensitive to NCR, we classify the remaining genes. We hence avoid problems related to the choice of a negative training set and take advantage of the robustness of GGM selection techniques in the "small n, large p" data setting.
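To make the covariance-shrinkage step concrete, the sketch below shows a generic shrinkage estimator of the form lam * T + (1 - lam) * S with a diagonal target T, followed by the partial-correlation matrix from which GGM edges are read off. It is an illustration only, not the Monte Carlo (parametric bootstrap) estimator proposed in the thesis; the function names, the fixed shrinkage intensity lam=0.3, and the edge threshold 0.2 are hypothetical choices.

```python
import numpy as np

def shrinkage_covariance(X, lam):
    """Shrink the sample covariance S toward a diagonal target T.

    X   : (n, p) expression matrix with n samples and p genes
    lam : shrinkage intensity in [0, 1]; the thesis selects it via a
          Monte Carlo (parametric bootstrap) procedure, here it is fixed.
    """
    S = np.cov(X, rowvar=False)        # sample covariance (p x p), singular if p > n
    T = np.diag(np.diag(S))            # diagonal shrinkage target (positive definite)
    return lam * T + (1.0 - lam) * S   # convex combination: invertible for lam > 0

def partial_correlations(sigma):
    """Full-order partial correlations from a (regularized) covariance.

    In a GGM, genes i and j share an edge iff their partial correlation
    is nonzero; shrinkage makes sigma invertible even when p >> n.
    """
    omega = np.linalg.inv(sigma)       # precision matrix
    d = np.sqrt(np.diag(omega))
    pcor = -omega / np.outer(d, d)     # rho_ij = -omega_ij / sqrt(omega_ii * omega_jj)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Toy usage: 20 samples of 50 "genes" (the "small n, large p" setting,
# where the raw sample covariance could not be inverted).
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))
sigma = shrinkage_covariance(X, lam=0.3)
pcor = partial_correlations(sigma)
edges = np.abs(pcor) > 0.2             # hypothetical edge threshold
```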
3

Variable Selection and Parameter Estimation Using a Continuous and Differentiable Approximation to the L0 Penalty Function

VanDerwerken, Douglas Nielsen 10 March 2011
L0 penalized likelihood procedures like Mallows' Cp, AIC, and BIC directly penalize the number of variables included in a regression model. This is a straightforward approach to the problem of overfitting, and these methods are now part of every statistician's repertoire. However, these procedures have been shown to sometimes produce unstable parameter estimates as a result of the L0 penalty's discontinuity at zero. One proposed alternative, seamless-L0 (SELO), utilizes a continuous penalty function that mimics L0 and allows for stable estimates. Like other similar methods (e.g., LASSO and SCAD), SELO produces sparse solutions because the penalty function is non-differentiable at the origin. Because these penalized likelihoods are singular (non-differentiable) at zero, there is no closed-form solution for the extremum of the objective function. We propose a continuous and everywhere-differentiable penalty function that can have arbitrarily steep slope in a neighborhood near zero, thus mimicking the L0 penalty but allowing for a nearly closed-form solution for the beta-hat vector. Because our function is not singular at zero, beta-hat will have no zero-valued components, although some will have been shrunk arbitrarily close to zero. The BIC-selected tuning parameter used in the shrinkage step is therefore also employed to perform zero-thresholding. We call the resulting vector of coefficients the ShrinkSet estimator. It is comparable to SELO in terms of model performance (selecting the truly nonzero coefficients, overall MSE, etc.), but we believe it to be more intuitive and simpler to compute. We provide strong evidence that the estimator enjoys favorable asymptotic properties, including the oracle property.
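As an illustration of the idea (not the ShrinkSet estimator itself), the sketch below uses one common everywhere-differentiable surrogate for the L0 penalty, b^2 / (b^2 + eps), and solves the penalized least-squares problem with an iterated-ridge (majorize-minimize) scheme in which each step has a closed form. The function name, the choice of surrogate, and the values of lam, eps, and the final threshold are assumptions for demonstration; the thesis selects its threshold via BIC.

```python
import numpy as np

def smooth_l0_ridge(X, y, lam=1.0, eps=1e-3, tol=1e-8, max_iter=100):
    """Penalized least squares with a smooth surrogate for the L0 penalty.

    Minimizes ||y - X b||^2 + lam * sum_j b_j^2 / (b_j^2 + eps).
    The penalty is differentiable everywhere and approaches the L0
    "count of nonzeros" penalty as eps -> 0. Each iteration solves a
    weighted ridge problem in closed form, echoing the "nearly
    closed-form" solution described in the abstract.
    """
    b = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least-squares start
    for _ in range(max_iter):
        # Majorizer weights: the penalty's derivative divided by 2b,
        # i.e. eps / (b^2 + eps)^2, giving a quadratic upper bound.
        w = eps / (b**2 + eps)**2
        b_new = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
        if np.max(np.abs(b_new - b)) < tol:
            b = b_new
            break
        b = b_new
    return b

# Toy usage: sparse truth, smooth shrinkage, then hard-thresholding.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
beta_true = np.array([3.0, 0, 0, -2.0, 0, 0, 0, 1.5, 0, 0])
y = X @ beta_true + 0.5 * rng.standard_normal(100)

b_hat = smooth_l0_ridge(X, y, lam=5.0, eps=1e-4)
b_hat[np.abs(b_hat) < 0.1] = 0.0   # placeholder cutoff; the thesis
                                   # chooses the threshold with BIC
```

Because the surrogate is concave in b^2, each reweighted ridge step is a valid majorize-minimize update, so the objective decreases monotonically; the shrunken-but-nonzero coefficients are then zeroed in a separate thresholding pass, mirroring the two-stage shrink-then-threshold construction the abstract describes.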
4

Driving Performance Adaptation Through Practice With And Without Distracters In A Simulated Environment

Gentzler, Marc 01 January 2014
A preponderance of research points to the detrimental effects of distraction on driving performance. An interesting question is whether practice can improve distracted driving. The results from the few longitudinal simulator-based studies of driving distraction have been inconclusive, possibly because practice effects were confounded with participants adapting to driving in the simulator. Therefore, participants in the current studies were trained until performance reached a steady state prior to introducing the distracters.

In this dissertation, two single-subject design studies were used to investigate the effects of training on distracted driving. The first study included two participants who experienced several different types of distracters. In the second study, distracters were introduced before and after the training phase. The two distracters selected for Study 2 were conversing on a handheld phone and texting on a touchscreen phone continuously while driving in a city scenario. Previous research has not compared texting with phone conversation, has examined texting while driving relatively little on its own, and has primarily focused on hands-free phones and highway settings. Participants drove on a city route which they had previously memorized, to add realism to the driving task. Measures collected in both studies included speed maintenance, lane deviations/position errors, stop errors, and turn errors. In Study 2, subjective workload and reaction time were also collected.

Findings indicated that training improved performance substantially for all participants in both studies compared to the initial baseline. Participants who experienced six and even nine sessions of the initial baseline did not necessarily improve more than those who had only three sessions, and performance for some participants did not improve in the initial baseline. The lower error levels in training remained fairly stable in subsequent baselines, showing that actual learning did occur. Texting produced higher error levels than the phone conversation both pre- and post-training. No practice effects were observed for the distracters post-training for any of the participants; in fact, errors increased across sessions for phone and especially texting in Study 2. Training improved performance during the phone distraction more than during texting overall, although this varied across dependent measures. Although errors were reduced after training in the distracter phases, the data showed that the performance difference between the baselines and the distracters pre-training was smaller than the difference post-training. Based on these findings, it is recommended that researchers conducting driving simulation research systematically train their participants on the simulator before beginning data collection.
5

Statistical Inference for Change Points in High-Dimensional Offline and Online Data

Li, Lingjun 07 April 2020
No description available.
6

Adaptive Mixture Estimation and Subsampling PCA

Liu, Peng January 2009
No description available.
7

Köpprocessen via en mobilanpassad webbshop ur användarens perspektiv / The buying process on a mobile webshop from the user's perspective

Johansson, Camilla, Hultqvist, Tabita January 2017
Purpose – E-commerce is steadily increasing and more consumers make purchases on their mobile phones. A survey by Episerver found that 90 percent of consumers browse on their phone but only 19 percent complete a purchase on a website. The main reasons are technical problems, insufficient access to information, and registration flows that require too many steps. The risk is high that a consumer leaves a website they find difficult to navigate. General design principles for websites exist today, and Google Analytics is used to measure traffic and see how users navigate a site, but it does not reveal the underlying reasons for user behaviour. Even though the general design principles are well established and in use, surveys show that consumers still find websites difficult to navigate and leave them. To understand the user's experience of a website, it is therefore important to conduct a study that collects qualitative data. The purpose of this study was accordingly to find out how a user experiences usability and design during a buying process on a mobile fashion webshop.

Method – To achieve the purpose of the study, a small-N study was conducted in which three webshops were reviewed. A case study was also carried out, comprising an observation and an interview with six participants.

Findings – The study shows that the three reviewed webshops are responsively designed. Recognized icons, such as the hamburger menu, the cart, and the search function, are used, as are colour and typography contrasts against coloured backgrounds, hierarchy, and whitespace. All the webshops use progressive disclosure to hide information and compress content, and confirmation is given through emphasis. The case study revealed that users experience frustration and confusion during several parts of the buying process. The frustrating areas are: unexpected events and pop-up boxes that disturb the process; filtering that lacks an option to sort by pattern; and selected options that do not persist through the process. Buttons were difficult to press because of their size or placement. Users were also confused by the navigation when several categories had been merged and when different webshops used different words for the same thing. The study also shows that users find the search box quick and effective to use. Finally, users find that scrolling works well as long as the page does not become too long.

Implications – The results show the importance of mapping the user's experience of interaction and of the webshop's design even more clearly, in order to enable development and offer a positive experience of the webshop during the buying process. Webshops should therefore focus on optimizing interaction and design by consistently following User Interface Guidelines developed for applications and by conducting qualitative studies with their users to meet future needs as mobile usage increases.

Limitations – The results of the study are limited to webshops in Sweden; the design and perception of a mobile webshop may differ in other countries. If usability and mobile webshops develop quickly and with major changes, the lifetime of the study may be limited.
