71

Exploring Latent Semantic Vector Models Enriched With N-grams

Grönqvist, Leif January 2006 (has links)
This thesis deals with a kind of vector space model called the “Latent Semantic Vector Model”, or LSVM, calculated with the technique Latent Semantic Indexing. An LSVM can be used for many things, but I have mainly looked at one direct application: document retrieval. What an LSVM can add to document retrieval is the possibility of searching for content rather than specific keywords. Using an LSVM in a document retrieval system has been shown to improve the quality of the returned document lists, making it easier for the user to find the information he or she wants. The problem attacked in this thesis is that an LSVM normally contains just single words, while the terms one searches for are in many cases multi-word expressions. LSVMs have been trained with various parameter settings for training data, vocabulary, matrix size, context size, and, last but not least, different ways of including multi-word expressions directly in the models. The aim has been to determine how the performance of an LSVM changes when we go from a word-based model to a model containing both words and multi-word expressions. To be able to measure the changes, two evaluation methods have been used: synonym tests and document retrieval. Synonym testing has been performed for Swedish, and document retrieval for both Swedish and English. The results improve when multi-word expressions are added for the synonym test task, but change for the worse for document retrieval. For English document retrieval, the change is not significant. This work has also resulted in two new resources well suited for the evaluation of various models: the evaluation set SweHP560, containing 560 Swedish synonym test queries from “Högskoleprovet” (the Swedish Scholastic Aptitude Test), and the new metrics RankEff and WRS for document retrieval evaluation, which handle the problem of an incomplete gold standard better than existing metrics like MAP and bpref.
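The LSI machinery behind an LSVM can be sketched with a toy term-document matrix. Everything below — the vocabulary, the treatment of a multi-word expression as a single vocabulary item, and the particular fold-in formula — is an illustrative assumption, not taken from the thesis:

```python
import numpy as np

# Toy term-document matrix: rows are terms, columns are documents.
# "vector_space_model" stands in for a multi-word expression added
# to the vocabulary as a single item, as the thesis proposes.
terms = ["vector", "space", "model", "vector_space_model", "retrieval"]
A = np.array([
    [2.0, 0.0, 1.0],
    [1.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 2.0, 1.0],
])

# Latent Semantic Indexing: a rank-k truncated SVD, so that matching
# happens through latent dimensions rather than exact keyword overlap.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
doc_vectors = Vt[:k].T          # one row per document in latent space

def fold_in(term_indices):
    """Map a bag-of-terms query into the same latent space
    (one common fold-in convention; scaling variants exist)."""
    q = np.zeros(A.shape[0])
    q[term_indices] = 1.0
    return (q @ U[:, :k]) / s[:k]

q = fold_in([terms.index("vector_space_model")])
sims = doc_vectors @ q / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
print(np.argsort(-sims))        # documents ranked by latent similarity
```

With a real corpus the same pipeline applies; the interesting experimental question in the thesis is what happens to `terms` when multi-word expressions are added alongside single words.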
72

Visual Servoing for Manipulation : Robustness and Integration Issues

Kragic, Danica January 2001 (has links)
73

Conditional Inapproximability and Limited Independence

Austrin, Per January 2008 (has links)
Understanding the theoretical limitations of efficient computation is one of the most fundamental open problems of modern mathematics. This thesis studies the approximability of intractable optimization problems. In particular, we study so-called Max CSP problems. These are problems in which we are given a set of constraints, each constraint acting on some k variables, and are asked to find an assignment to the variables satisfying as many of the constraints as possible. A predicate P : [q]ᵏ → {0, 1} is said to be approximation resistant if it is intractable to approximate the corresponding CSP problem to within a factor which is better than what is expected from a completely random assignment to the variables. We prove that if the Unique Games Conjecture is true, then a sufficient condition for a predicate P : [q]ᵏ → {0, 1} to be approximation resistant is that there exists a pairwise independent distribution over [q]ᵏ which is supported on the set of satisfying assignments P⁻¹(1) of P. We also study predicates P : {0, 1}² → {0, 1} on two boolean variables. The corresponding CSP problems include fundamental computational problems such as Max Cut and Max 2-Sat. For any P, we give an algorithm and a Unique Games-based hardness result. Under a certain geometric conjecture, the ratios of these two results are shown to match exactly. In addition, this result explains why additional constraints beyond the standard “triangle inequalities” do not appear to help when solving these problems. Furthermore, we are able to use the generic hardness result to obtain improved hardness for the special cases of Max 2-Sat and Max 2-And. For Max 2-Sat, we obtain a hardness of α_LLZ + ε ≈ 0.94016, where α_LLZ is the approximation ratio of the algorithm due to Lewin, Livnat and Zwick. For Max 2-And, we obtain a hardness of 0.87435.
For both of these problems, our results surprisingly demonstrate that the special case of balanced instances (instances where every variable occurs positively and negatively equally often) is not the hardest. Furthermore, the result for Max 2-And also shows that Max Cut is not the hardest 2-CSP. Motivated by the result for k-CSP problems, and their fundamental importance in computer science in general, we then study t-wise independent distributions with random support. We prove that, with high probability, poly(q) · n² random points in [q]ⁿ can support a pairwise independent distribution. Then, again with high probability, we show that (poly(q) · n)ᵗ log(nᵗ) random points in [q]ⁿ can support a t-wise independent distribution. For constant t and q, we show that Ω(nᵗ) random points are necessary in order to be able to support a t-wise independent balanced distribution with non-negligible probability. Also, we show that every subset of [q]ⁿ with size at least qⁿ(1 − poly(q)⁻ᵗ) can support a t-wise independent distribution. Finally, we prove a certain noise correlation bound for low-degree functions with small Fourier coefficients. This type of result is generally useful in hardness of approximation, derandomization, and additive combinatorics.
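The sufficient condition above can be made concrete with a classic example (not from the thesis itself): the satisfying assignments of 3-XOR support a pairwise independent distribution, and Max 3-XOR is indeed known to be approximation resistant. A small checker for the special case of the uniform distribution over a support set:

```python
from itertools import product

def is_pairwise_independent(support, q=2):
    """Check that the uniform distribution over `support` (tuples in
    [q]^k) is pairwise independent with uniform marginals: every pair
    of coordinates must be jointly uniform over [q] x [q]."""
    n = len(support)
    k = len(support[0])
    for i in range(k):
        for j in range(i + 1, k):
            for a, b in product(range(q), repeat=2):
                count = sum(1 for x in support if x[i] == a and x[j] == b)
                if count * q * q != n:   # each pair value must have mass 1/q^2
                    return False
    return True

# Satisfying assignments of the 3-XOR predicate x1 + x2 + x3 = 1 (mod 2):
odd_parity = [x for x in product(range(2), repeat=3) if sum(x) % 2 == 1]
print(odd_parity)                        # [(0,0,1), (0,1,0), (1,0,0), (1,1,1)]
print(is_pairwise_independent(odd_parity))   # True
```

Note that the support has only 4 of the 8 points of {0,1}³, yet every pair of coordinates looks exactly like two independent fair coins — precisely the structure the theorem asks for inside P⁻¹(1).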
74

Neural Mechanisms Determining Visuospatial Working Memory Tasks : Biophysical Modeling, Functional MRI and EEG

Edin, Fredrik January 2007 (has links)
Visuospatial working memory (vsWM) is the ability to temporarily retain goal-relevant visuospatial information in memory. It is a key cognitive function related to general intelligence, and it improves throughout childhood and through WM training. Information is maintained in vsWM through persistent neuronal activity in a fronto-parietal network that consists of the intraparietal sulcus (IPS) and the frontal eye field (FEF). This network is regulated by the dorsolateral prefrontal cortex (dlPFC). The features of brain structure and activity that regulate the access to and storage capacity of vsWM are still unknown. The aim of my doctoral work has been to find such features by combining a biophysically based model of vsWM activity with functional MRI (fMRI) and EEG experiments. In study I, we combined modeling and fMRI and showed that stronger fronto-parietal synaptic connections result in developmental increases in brain activity and in improved vsWM during development. This causal relationship was established by ruling out other previously suggested mechanisms, such as myelination or synaptic pruning. In study II, we combined modeling and EEG to further explore the connectivity of the network. We showed that FEF→IPS connections are stronger than IPS→FEF connections, and that stimuli enter the network through IPS. This arrangement of connections prevents distracting stimuli from being stored. Study III was a theoretical study showing that errors in measurements of the amplitude of brain activity affect the estimation of effective connection strength. In study IV, we analyzed EEG data from WM training in children with epilepsy. Improvements on the trained task were accompanied by increased frontal and parietal signal power, but not by increased fronto-parietal coherence. This indicates that local changes in FEF and IPS could underlie improvements on the trained task. dlPFC is important for performance on a large variety of cognitive tasks.
In study V, we combined modeling with fMRI to test the hypothesis that dlPFC improves vsWM capacity by providing stabilizing excitatory inputs to IPS, and that dlPFC filters distracters by specifically lowering the capacity of neurons storing distracters. fMRI data confirmed the model hypothesis. We further showed that a dysfunctional dlPFC could explain the link between vsWM capacity and distractibility, as is found in ADHD. The model suggests that dlPFC carries out its multifaceted behavior not by performing advanced calculations itself, but by providing bias signals that control operations performed in the regions it connects to. A specific aim of this thesis has been to describe the mechanistic model in a way that is accessible to people without a modeling background.
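The central mechanism above — a memory maintained as persistent activity in a recurrent network — can be illustrated with a deliberately simplified single-population rate model. This is a generic textbook-style sketch, not the biophysical model used in the thesis, and all parameter values are made up:

```python
import math

def f(x):
    """Sigmoidal activation, a stand-in for neuronal gain (threshold 3.0
    is an arbitrary illustrative choice)."""
    return 1.0 / (1.0 + math.exp(-(x - 3.0)))

def simulate(w, pulse, steps=2000, dt=0.01, tau=0.1):
    """One recurrent population: tau * dr/dt = -r + f(w * r + I).
    A transient cue current is applied for the first 200 steps only."""
    r = 0.0
    for step in range(steps):
        I = pulse if step < 200 else 0.0
        r += dt / tau * (-r + f(w * r + I))
    return r

# Strong recurrent excitation keeps the rate elevated long after the
# cue is gone (a "memory" state); weak recurrence lets it decay.
print(simulate(w=6.0, pulse=5.0))   # stays high: persistent activity
print(simulate(w=1.0, pulse=5.0))   # relaxes back toward baseline
```

The bistability here (a low baseline state and a high persistent state coexisting for strong `w`) is the qualitative property that the studies above probe with far more realistic spiking networks and synaptic connectivity.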
75

Computational modeling of the lamprey CPG : from subcellular to network level

Huss, Mikael January 2007 (has links)
Due to the staggering complexity of the nervous system, computer modelling is becoming one of the standard tools in the neuroscientist's toolkit. In this thesis, I use computer models on different levels of abstraction to compare hypotheses and seek understanding about pattern-generating circuits (central pattern generators, or CPGs) in the lamprey spinal cord. The lamprey, an ancient and primitive animal, has long been used as a model system for understanding vertebrate locomotion. By examining the lamprey spinal locomotor network, which is a comparatively simple prototype of pattern-generating networks used in higher animals, it is possible to obtain insights about the design principles behind the spinal generation of locomotion. A detailed computational model of a generic spinal neuron within the lamprey locomotor CPG network is presented. This model is based, as far as possible, on published experimental data, and is used as a building block for simulations of the whole CPG network as well as subnetworks. The model construction process itself revealed a number of interesting questions and predictions which point toward new laboratory experiments. For example, a novel potential role for KNaF channels was proposed, and estimates of relative soma/dendritic conductance densities for KCaN and KNaS channels were given. Apparent inconsistencies in predicted spike widths for intact vs. dissociated neurons were also found. In this way, the new model can be of benefit by providing an easy way to check the current conceptual understanding of lamprey spinal neurons. Network simulations using this new neuron model were then used to address aspects of the overall coordination of pattern generation in the whole lamprey spinal cord CPG as well as rhythm-generation in smaller hemisegmental networks.
The large-scale simulations of the whole spinal CPG yielded several insights: (1) that the direction of swimming can be determined from only the very rostral part of the cord, (2) that reciprocal inhibition, in addition to its well-known role of producing alternating left-right activity, facilitates and stabilizes the dynamical control of the swimming pattern, and (3) that variability in single-neuron properties may be crucial for accurate motor coordination in local circuits. We used results from simulations of smaller excitatory networks to propose plausible mechanisms for obtaining self-sustaining bursting activity as observed in lamprey hemicord preparations. A more abstract hemisegmental network model, based on Izhikevich neurons, was used to study the sufficient conditions for obtaining bistability between a slower, graded activity state and a faster, non-graded activity state in a recurrent excitatory network. We concluded that the inclusion of synaptic dynamics was a sufficient condition for the appearance of such bistability. Questions about rhythmic activity intrinsic to single spinal neurons – NMDA-TTX oscillations – were addressed in a combined experimental and computational study. We showed that these oscillations have a frequency which grows with the concentration of bath-applied NMDA, and constructed a new simplified computational model that was able to reproduce this as well as other experimental results. A combined biochemical and electrophysiological model was constructed to examine the generation of IP3-mediated calcium oscillations in the cytosol of lamprey spinal neurons. Important aspects of these oscillations were captured by the combined model, which also makes it possible to probe the interplay between intracellular biochemical pathways and the electrical activity of neurons. 
To summarize, this thesis shows that computational modelling of neural circuits on different levels of abstraction can be used to identify fruitful areas for further experimental research, generate experimentally testable predictions, or give insights into possible design principles of systems that are currently hard to perform experiments on.
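The abstract hemisegmental model described above is built from Izhikevich neurons, a model simple enough to sketch in full. Below, a single regular-spiking cell under constant drive, integrated with forward Euler; the parameters a, b, c, d are from Izhikevich's published regular-spiking set, while the drive current and step size are illustrative choices:

```python
# Izhikevich neuron: dv/dt = 0.04*v^2 + 5*v + 140 - u + I,
#                    du/dt = a*(b*v - u), with reset at v >= 30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking (RS) parameters
v, u = c, b * c                       # start at rest
dt, I = 0.25, 10.0                    # ms per step, constant input current
spikes = []
for step in range(4000):              # 1000 ms of simulated time
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                     # spike: record time, then reset
        spikes.append(step * dt)
        v, u = c, u + d
print(len(spikes), "spikes in one second of simulated time")
```

In a network version, each cell gets a synaptic current added to `I`; the thesis's bistability question then becomes whether such a recurrent excitatory population has two stable activity regimes once synaptic dynamics are included.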
76

On practical machine learning and data analysis

Gillblad, Daniel January 2008 (has links)
This thesis discusses and addresses some of the difficulties associated with practical machine learning and data analysis. Introducing data-driven methods in e.g. industrial and business applications can lead to large gains in productivity and efficiency, but the cost and complexity are often overwhelming. Creating machine learning applications in practice often involves a large amount of manual labour, which often needs to be performed by an experienced analyst without significant experience with the application area. We will here discuss some of the hurdles faced in a typical analysis project and suggest measures and methods to simplify the process. One of the most important issues when applying machine learning methods to complex data, such as e.g. industrial applications, is that the processes generating the data are modelled in an appropriate way. Relevant aspects have to be formalised and represented in a way that allows us to perform our calculations in an efficient manner. We present a statistical modelling framework, Hierarchical Graph Mixtures, based on a combination of graphical models and mixture models. It allows us to create consistent, expressive statistical models that simplify the modelling of complex systems. Using a Bayesian approach, we allow for encoding of prior knowledge and make the models applicable in situations where relatively little data are available. Detecting structures in data, such as clusters and dependency structure, is very important both for understanding an application area and for specifying the structure of e.g. a hierarchical graph mixture. We will discuss how this structure can be extracted for sequential data. By using the inherent dependency structure of sequential data we construct an information theoretical measure of correlation that does not suffer from the problems most common correlation measures have with this type of data.
In many diagnosis situations it is desirable to perform a classification in an iterative and interactive manner. The matter is often complicated by very limited amounts of knowledge and examples when a new system to be diagnosed is initially brought into use. We describe how to create an incremental classification system based on a statistical model that is trained from empirical data, and show how the limited available background information can still be used initially for a functioning diagnosis system. To minimise the effort with which results are achieved within data analysis projects, we need to address not only the models used, but also the methodology and applications that can help simplify the process. We present a methodology for data preparation and a software library intended for rapid analysis, prototyping, and deployment. Finally, we study a few example applications, presenting tasks within classification, prediction and anomaly detection. The examples include demand prediction for supply chain management, approximating complex simulators for increased speed in parameter optimisation, and fraud detection and classification within a media-on-demand system.
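The idea of an information-theoretic correlation measure for sequential data can be illustrated with plain empirical mutual information between a sequence and a lagged copy of itself. This is a generic illustration of the principle, not the specific measure constructed in the thesis:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Empirical mutual information I(X;Y) in bits from (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def lagged_mi(seq, lag=1):
    """Dependence between a symbol sequence and its lag-shifted copy."""
    return mutual_information(list(zip(seq, seq[lag:])))

periodic = [0, 1] * 50           # each symbol determined by the previous one
constant_pairs = [(0, 0)] * 50   # degenerate sequence: no information shared
print(lagged_mi(periodic))       # close to 1 bit of lag-1 dependence
print(mutual_information(constant_pairs))  # exactly 0.0
```

Unlike linear correlation, this captures arbitrary (including non-monotonic) dependencies between successive symbols, which is the kind of robustness the abstract claims for its sequential-data measure.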
77

The Texture-Transform : An Operator for Texture Detection and Discrimination

Tavakoli Targhi, Alireza January 2009 (has links)
In this thesis we present contributions related to texture detection and discrimination for analyzing real-world images. Many computer vision applications can benefit from a fast and low-dimensional texture descriptor. Several texture descriptors have been introduced and used for texture image classification and texture segmentation on images with a single texture or a mixture of textures. For the evaluation of these descriptors, a number of texture image databases (e.g. CUReT, Photex, KTH-TIPS2, ALOT) have been introduced, containing images of different types of natural and virtual texture samples. Classification and segmentation experiments have often been performed on such databases. In real-world images we have a variety of textured and non-textured objects with different backgrounds. Many of the existing texture descriptors (e.g. filter banks, textons) by their nature fire on brightness edges. Therefore they are not always applicable for texture detection and discrimination in such real-world images, especially indoor images, which in general contain non-textured structures mixed with textured objects. In the thesis we introduce a texture descriptor, the Texture-transform, with the following properties that are desirable for bottom-up processing in real-world applications: (i) It captures small-scale structure in terms of roughness or smoothness of the image patch. (ii) It provides a low-dimensional output (usually just a single dimension) which is easy to store and perform calculations on. (iii) It generally does not fire on brightness edges. This is in contrast to, for instance, filters, which tend to identify a strip around a brightness edge as a separate region. (iv) It has few parameters which need tuning. The most significant parameter that unavoidably appears is scale. It is here simply provided by the size of the local image patch. (v) It can be computed fast, used in real-time systems, and easily be incorporated in multiple-cue vision systems.
Last but not least, it is extremely easy to implement, for example in just a few lines of Matlab. The Texture-transform is derived in a manner different from the other descriptors reviewed in this thesis, but is related to other frequency-based methods. The key idea is to investigate the variability of a window of an image by considering the singular values or eigenvalues of matrices formed directly from the grey values of local patches. We show that these properties satisfy the requirements of many applications through extensive experiments in two main tests, one of detection and another of discrimination, as in [Kruizinga and Petkov, 1999]. We also demonstrate that the Texture-transform allows us to identify and segment out natural textures in images, without yielding too many spurious regions from brightness edges. In these experiments we perform comparisons with other descriptors of a similar low-dimensional type. Due to the nature of our descriptor it of course lacks invariance. Hence, it cannot by itself be used for classification, since the results do not carry over from one image to another. However, as a proof of concept we show experimentally that the detected textured regions can be used in a subsequent classification task. Invariance is not needed in all tasks of detection and discrimination, at least with regard to orientation and contrast, as we discuss and demonstrate in the thesis. As examples of real-world applications, we show the function of the Texture-transform on the detection of street name plates, visual attention, and vegetation segmentation. Moreover, we study the application of texture features to animal detection, and also address learning the visual appearance of textured surfaces from very few training samples, using a photometric stereo technique to artificially generate new samples.
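The key idea — singular values of grey-value patch matrices — is easy to sketch. The roughness score below (fraction of singular-value mass outside the leading singular value, so that near-rank-1 smooth patches score near zero) is one plausible reading of that idea, not the thesis's exact definition; the patch size and the non-overlapping grid are simplifications:

```python
import numpy as np

def texture_transform(image, patch=8):
    """Per-patch roughness from the singular values of the grey-value
    matrix: smooth patches (including pure brightness steps aligned to
    the grid here) are near low-rank, textured patches are not."""
    h, w = image.shape
    out = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            s = np.linalg.svd(image[i:i+patch, j:j+patch],
                              compute_uv=False)
            out[i // patch, j // patch] = s[1:].sum() / (s.sum() + 1e-12)
    return out

rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = 1.0                      # a pure brightness step, no texture
img[:16, :16] += rng.random((16, 16))  # a rough, textured quadrant
tt = texture_transform(img)
print(tt.round(2))                     # high only in the textured quadrant
```

Constant and step regions come out near zero while the noisy quadrant scores high, which is the "does not fire on brightness edges" behaviour the thesis emphasises (a full implementation would of course use a dense sliding window rather than this coarse grid).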
78

Automatisering av tester som utförs genom ett grafiskt användargränssnitt

Farrington, Daniel, Martinsson, Herman January 1999 (has links)
<p>The goal of this thesis project has been to investigate how automated tests should be introduced into a certain company's development process. The tests that are candidates for automation are those performed through the graphical user interface.</p><p>The work has mainly focused on evaluating which tool is the most suitable to use. These tools rely on a special technique called Capture/Replay, but also offer the possibility of writing test cases by hand. The evaluation showed that two of the tested tools are roughly equivalent. Recommending only one of them was impossible, since that would have required an even deeper analysis of the needs of the company's testing operations.</p><p>The thesis also covers things to keep in mind when introducing automated tests into an organization. The work has resulted in an action plan that the company can follow when introducing automated tests.</p>
79

Frågehantering i en mobil databasmiljö

Jonsson, Lars-Göran January 2001 (has links)
<p>The study in this report compares a distributed and a mobile database environment with respect to query processing. The areas in focus within query processing are: data organization and distribution, location management, scalability, query costs, and disconnected operation. The report concludes that no fundamental differences exist between the environments with respect to query processing, which means that the formulated problem hypothesis is verified.</p>
80

Trådlösa nätverk

Karlsson, Fredrik January 2006 (has links)
<p>This report describes what wireless local area networks are and how they are used and configured. These WLANs, as they are also called, can be used both in new installations and in expansions of existing networks. The report also covers security in this area. Security is incredibly important in all networks, not least wireless ones, because an attacker does not have to be physically present in the environment to get at the information; it is enough to be within the coverage area, which can extend several hundred meters. The report also takes up several different standards for wireless LANs. These are more or less current, and while this paper is being written new standards are taking form, so the field is genuinely under development.</p><p>The work has been done in co-operation with Koneo Rosenlund, where most of it was performed. The products chosen to work with were Linksys products within the 802.11g standard.</p><p>The reason for this inquiry was that Koneo believed that wireless LANs are the future, and that they wanted an environment to show old and new customers and to use in their daily work.</p><p>I have found that wireless LANs are an excellent alternative to traditional LANs. They are cheap to install and easy to administer and expand. If someone wants to change office there is no need to draw a new cable or change cables in a switch; just place the computer on the new desk and everything will work as it used to. For most users WLANs are acceptable from a security point of view, but for those handling top-secret documents, or in a hospital where the radio waves can disturb other equipment, they are a worse alternative.</p><p>I think that I fulfilled most of the goals we set up before I started this work. Some parts have been fulfilled beyond our expectations, and others have been more difficult to solve.</p>
