101

The algorithm that helps with time planning: a support for people with ADHD

Månsson, Nikolaj January 2019 (has links)
This study examines the needs that people with ADHD have for digital aids, which aids are available on the market, and what advice is available for their development. Since the study aims to contribute the design of a digital tool that supports these individuals in time planning, a literature review is also conducted on how probability calculations can be used to plan activities and predict their outcomes. A survey is carried out in which people diagnosed with ADHD answer questions about the problems they experience when planning, their experience of planning with digital aids, and the features they would want in an application that supports planning. The method section also investigates whether previously collected datasets on a population's work on activities can be used, as well as a method for collecting new data. The results section presents a time-planning application for the target group of people with an ADHD diagnosis, based on the literature review and the survey results. The discussion section covers the needs the target group describes, a proposed design for a digital tool, the calculations relevant to helping users get as much as possible out of a planned work session, and how long a productive work session can be.
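
The abstract's idea of using probability calculations to plan and predict activity outcomes can be illustrated with a small sketch. The model below is my assumption, not the thesis's: it treats a person's past durations for similar tasks as roughly log-normal and picks the shortest time slot whose completion probability reaches a target.

```python
import statistics
from math import erf, log, sqrt

def completion_probability(past_minutes, planned_slot):
    """Probability that a task fits in `planned_slot` minutes,
    assuming past durations are roughly log-normal."""
    logs = [log(m) for m in past_minutes]
    mu = statistics.fmean(logs)
    sigma = statistics.stdev(logs)
    # log-normal CDF evaluated at the planned slot length
    z = (log(planned_slot) - mu) / (sigma * sqrt(2))
    return 0.5 * (1 + erf(z))

def suggest_slot(past_minutes, target=0.8, step=5, max_slot=240):
    """Smallest slot (in `step`-minute increments) whose
    completion probability reaches `target`."""
    slot = step
    while slot < max_slot:
        if completion_probability(past_minutes, slot) >= target:
            return slot
        slot += step
    return max_slot

history = [25, 40, 30, 55, 35, 45]  # minutes spent on similar past tasks
print(suggest_slot(history, target=0.8))  # → 50
```

A planner built on this idea could re-estimate `mu` and `sigma` as new sessions are logged, so the suggested slots adapt to the individual user.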
102

Automatic Document Classification Applied to Swedish News

Blein, Florent January 2005 (has links)
The first part of this paper briefly presents the ELIN[1] system, an electronic newspaper project. ELIN is a framework that stores news items and displays them to the end user. The news items are formatted in XML[2]. The project partner Corren[3] provided ELIN with XML articles, but in a different format; my first task was therefore to develop software that converts the news from one XML format (Corren) to the other (ELIN).

The second and main part addresses the problem of automatic document classification and tries to find a solution for a specific issue: automatically classifying news articles from a Swedish newspaper company (Corren) into the IPTC[4] news categories.

This work was carried out by implementing several classification algorithms, testing them, and comparing their accuracy with existing software. The training and test documents were 3 weeks of the Corren newspaper, which had to be classified into 2 categories.

The last tests were run with only one algorithm (Naïve Bayes) over a larger amount of data (7, then 10 weeks) and more categories (12) to simulate a more realistic environment.

The results show that the Naïve Bayes algorithm, although the oldest, was the most accurate in this particular case. An issue raised by the results is that feature selection improves speed but can reduce accuracy when too many features are removed.
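
As a rough illustration of the approach that won out in this comparison, here is a minimal multinomial Naïve Bayes classifier with Laplace smoothing. The thesis's own implementation, feature selection, and IPTC category set are not reproduced; the toy documents and labels below are invented.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesClassifier:
    """Minimal multinomial Naive Bayes text classifier
    with Laplace (add-one) smoothing."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.doc_counts = Counter()              # label -> number of documents
        self.vocab = set()

    def train(self, documents):
        for words, label in documents:
            self.doc_counts[label] += 1
            self.word_counts[label].update(words)
            self.vocab.update(words)

    def classify(self, words):
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.doc_counts:
            # log prior + smoothed log likelihood of each word
            score = math.log(self.doc_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

nb = NaiveBayesClassifier()
nb.train([
    (["match", "goal", "league"], "sport"),
    (["election", "parliament", "vote"], "politics"),
    (["goal", "cup", "team"], "sport"),
])
print(nb.classify(["goal", "match"]))  # → sport
```

In a real pipeline the word lists would come from tokenizing the XML article bodies, with feature selection applied before training — the step whose speed/accuracy trade-off the abstract comments on.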
104

One General Approach For Analysing Compositional Structure Of Terms In Biomedical Field

Chao, Yang, Zhang, Peng January 2013 (has links)
The root is the primary lexical unit of ontological terms; it carries the most significant part of the semantic content and cannot be reduced into smaller constituents. It is the key to the structure of ontological terms. Once the root has been identified, the meaning of a term can easily be obtained, and that meaning in turn helps identify the other parts of the term, such as its relations and definition. In this master thesis we have built a general classification model to identify the roots of terms. Four features are defined in our classification model: the Token, the POS, the Length and the Position. The model is implemented in Java and uses the Naïve Bayes algorithm. We implemented and evaluated the classification model using the Gene Ontology (GO). The evaluation results showed that our framework and model were effective.
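
The four features named in the abstract can be sketched as a per-token feature extractor. The exact encodings below (lowercasing, relative position, Penn-style POS tags) are illustrative guesses, not the thesis's implementation, and the example term is a GO-style phrase chosen for illustration.

```python
def term_features(tokens, pos_tags):
    """Build one feature dict per token of an ontology term,
    using the four features Token, POS, Length and Position."""
    n = len(tokens)
    rows = []
    for i, (tok, pos) in enumerate(zip(tokens, pos_tags)):
        rows.append({
            "Token": tok.lower(),
            "POS": pos,                                   # part-of-speech tag
            "Length": len(tok),                           # characters in token
            "Position": i / (n - 1) if n > 1 else 0.0,    # relative position
        })
    return rows

# GO-style term "regulation of cell proliferation"
features = term_features(
    ["regulation", "of", "cell", "proliferation"],
    ["NN", "IN", "NN", "NN"],
)
print(features[3])
```

Feature rows like these could then be fed to a Naïve Bayes classifier that labels each token as root or non-root.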
105

Option pricing and Bayesian learning

Jönsson, Ola. January 2007 (has links) (PDF)
Thesis (doctoral)--Lund University, 2007.
106

Self-designing optimal group sequential clinical trials

Thach, Chau Thuy. January 2000 (has links)
Thesis (Ph. D.)--University of Washington, 2000. Vita. Includes bibliographical references (leaves 107-111).
107

A Bayesian approach to estimating heterogeneous spatial covariances

Damian, Doris. January 2002 (has links)
Thesis (Ph. D.)--University of Washington, 2002. Vita. Includes bibliographical references (p. 126-131).
108

A Bayesian approach to parametric image analysis

Spilker, Mary Elizabeth. January 2002 (has links)
Thesis (Ph. D.)--University of Washington, 2002. Vita. Includes bibliographical references (leaves 102-108).
109

Empirical Bayes Nonparametric Density Estimation of Crop Yield Densities: Rating Crop Insurance Contracts

Ramadan, Anas 16 September 2011 (has links)
This thesis examines a newly proposed density estimator in order to evaluate its usefulness for government crop insurance programs confronted by the problem of adverse selection. While the Federal Crop Insurance Corporation (FCIC) offers multiple insurance programs, including the Group Risk Plan (GRP), a more accurate method of estimating actuarially fair premium rates is needed in order to eliminate adverse selection. The Empirical Bayes Nonparametric Kernel Density Estimator (EBNKDE) showed a substantial efficiency gain in estimating crop yield densities. The objective of this research was to apply EBNKDE empirically by means of a simulated game wherein I assumed the role of a private insurance company in order to test for the profit gains promised by EBNKDE's greater efficiency and accuracy. Employing EBNKDE as well as parametric and nonparametric methods, insurance premium rates for 97 Illinois counties for the years 1991 to 2010 were estimated using corn yield data from 1955 to 2010 taken from the National Agricultural Statistics Service (NASS). The results revealed a substantial efficiency gain from using EBNKDE as opposed to other estimators such as the Normal, the Weibull, and the standard Kernel Density Estimator (KDE). Still, further research using yield data for other crops and from other states will provide greater insight into EBNKDE and its performance in other situations.
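
EBNKDE itself is not reproduced here, but the pricing step it feeds can be sketched: given a density estimate of county yields (below, a plain Gaussian KDE sampled by Monte Carlo, not the empirical Bayes estimator), the actuarially fair premium rate is the expected shortfall below the yield guarantee, divided by the guarantee. The yield series and coverage levels are hypothetical.

```python
import math
import random

def gaussian_kde_sample(yields, n=100_000, seed=1):
    """Draw from a Gaussian KDE of historical yields using
    Silverman's rule-of-thumb bandwidth."""
    random.seed(seed)
    m = len(yields)
    mean = sum(yields) / m
    sd = math.sqrt(sum((y - mean) ** 2 for y in yields) / (m - 1))
    h = 1.06 * sd * m ** -0.2  # Silverman's rule of thumb
    # sampling from a KDE = pick a data point, add kernel noise
    return [random.choice(yields) + random.gauss(0, h) for _ in range(n)]

def fair_premium_rate(yields, coverage=0.9):
    """Actuarially fair rate: expected shortfall below the
    coverage-level guarantee, as a share of the guarantee."""
    guarantee = coverage * (sum(yields) / len(yields))
    draws = gaussian_kde_sample(yields)
    expected_loss = sum(max(0.0, guarantee - y) for y in draws) / len(draws)
    return expected_loss / guarantee

corn = [152, 160, 138, 171, 149, 165, 120, 158, 144, 167]  # bu/acre, hypothetical
print(round(fair_premium_rate(corn), 4))  # small positive rate
```

Swapping `gaussian_kde_sample` for a better density estimator — the role EBNKDE plays in the thesis — changes the tail mass below the guarantee, and hence the premium rate, which is where the adverse-selection stakes lie.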
110

Evaluation of fully Bayesian disease mapping models in correctly identifying high-risk areas with an application to multiple sclerosis

Charland, Katia. January 2007 (has links)
Disease maps are geographical maps that display local estimates of disease risk. When the disease is rare, crude risk estimates can be highly variable, leading to extreme estimates in areas with low population density. Bayesian hierarchical models are commonly used to stabilize the disease map, making it more easily interpretable. By exploiting assumptions about the correlation structure in space and time, the statistical model stabilizes the map by shrinking unstable, extreme risk estimates toward the risks in surrounding areas (local spatial smoothing) or toward the risks at contiguous time points (temporal smoothing). Extreme estimates that are based on smaller populations are subject to a greater degree of shrinkage, particularly when the risks in adjacent areas or at contiguous time points do not support the extreme value and are themselves more stable.

A common goal in disease mapping studies is to identify areas of elevated risk. The objective of this thesis is to compare the accuracy of several fully Bayesian hierarchical models in discriminating between high-risk and background-risk areas. These models differ in the spatial, temporal and space-time interaction terms they include, which can greatly affect the smoothing of the risk estimates. The comparison was accomplished with simulations based on the cervical cancer rate and at-risk person-years of the state of Kentucky's 120 counties from 1995 to 2002. High-risk areas were 'planted' in the generated maps, which otherwise had background relative risks of one. The various disease mapping models were applied and their accuracy in correctly identifying high-risk and background-risk areas was compared by means of Receiver Operating Characteristic (ROC) curve methodology. Using data on multiple sclerosis (MS) from the island of Sardinia, Italy, we apply the more successful models to identify areas of elevated MS risk.
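
The shrinkage behaviour described above can be made concrete with the simplest relative of these models: a Marshall-style method-of-moments empirical Bayes smoother that pulls each area's raw relative risk toward the overall rate, with less pull for areas with larger expected counts. This global smoother is my illustration, not one of the fully Bayesian spatial models the thesis evaluates, and the counts are invented.

```python
def eb_smooth(cases, expected):
    """Empirical Bayes smoothing of area relative risks toward the
    overall rate (method-of-moments, global shrinkage)."""
    raw = [c / e for c, e in zip(cases, expected)]
    m = sum(cases) / sum(expected)            # overall relative risk
    # method-of-moments estimate of between-area variance
    s2 = sum(e * (r - m) ** 2 for r, e in zip(raw, expected)) / sum(expected)
    ebar = sum(expected) / len(expected)
    A = max(s2 - m / ebar, 0.0)
    smoothed = []
    for r, e in zip(raw, expected):
        w = A / (A + m / e)                   # weight on the raw estimate:
        smoothed.append(w * r + (1 - w) * m)  # small areas shrink more
    return smoothed

cases    = [0, 8, 12, 50]                     # observed counts per area
expected = [0.8, 2.5, 10.0, 48.0]             # expected counts per area
print([round(x, 2) for x in eb_smooth(cases, expected)])
```

The extreme raw estimates from the two small areas (0/0.8 and 8/2.5) are pulled hard toward the overall rate, while the large area's estimate barely moves — exactly the size-dependent shrinkage the abstract describes, here without the spatial neighbourhood structure of the thesis's models.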
