About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Wireless Short Range Communication Technologies for Home Automation

Oyekunle, Abiola Taiwo January 2008
A modern home contains a variety of electronic equipment and systems: TV, hi-fi equipment, central heating systems, fire alarm systems, security alarm systems, lighting systems and so on. Enabling these devices to communicate is the first step towards the long-predicted smart home, but this requires communication standards to follow. It can be anticipated that the technology must be wireless for such a network to be feasible. A large set of standards exists for both wired and wireless communication between such devices, but today no standard communication interface is available.

The goal of this project is to survey the available standards for short-range wireless communication, and to evaluate and compare their capabilities to become a general standard for home automation. The evaluation must take aspects such as security, range, network architecture and the heterogeneous set of devices into consideration. Furthermore, this thesis proposes how to interconnect the home network with an external network for remote supervision and control.
2

Guidelines for communication between user-developers on an IT-based forum for user-developers

Petersson, Johan, Karlsson, Olle January 2010
No description available.
3

Text Clustering Exploration : Swedish Text Representation and Clustering Results Unraveled

Rosell, Magnus January 2009
Text clustering divides a set of texts into clusters (parts), so that texts within each cluster are similar in content. It may be used to uncover the structure and content of unknown text sets as well as to give new perspectives on familiar ones. The main contributions of this thesis are an investigation of text representation for Swedish and some extensions of the work on how to use text clustering as an exploration tool. We have also done some work on synonyms and evaluation of clustering results.

Text clustering, at least as it is treated here, is performed using the vector space model, which is commonly used in information retrieval. This model represents texts by the words that appear in them and considers texts similar in content if they share many words.

Languages differ in what is considered a word. We have investigated the impact of some of the characteristics of Swedish on text clustering. Swedish has more morphological variation than, for instance, English. We show that it is beneficial to use the lemma form of words rather than the word forms. Swedish has a rich production of solid compounds. Most of their constituents are used on their own as words and in several different compounds. In fact, Swedish solid compounds often correspond to phrases or open compounds in other languages. Our experiments show that it is beneficial to split solid compounds into their parts when building the representation.

The vector space model does not regard word order. We have tried to extend it with nominal phrases in different ways. We have also tried to differentiate between homographs, words that look alike but mean different things, by augmenting all words with a tag indicating their part of speech. None of our experiments using phrases or part of speech information have shown any improvement over the ordinary model.

Evaluation of text clustering results is very hard. What constitutes a good partition of a text set is inherently subjective. External quality measures compare a clustering with a (manual) categorization of the same text set. The theoretical best possible value for a measure is known, but it is not obvious what a good value is – text sets differ in difficulty to cluster, and categorizations are more or less adapted to a particular text set. We describe how evaluation can be improved for cases where a text set has more than one categorization. In such cases the result of a clustering can be compared with the result for one of the categorizations, which we assume is a good partition.

In some related work we have built a dictionary of synonyms. We use it to compare two different principles for automatic word relation extraction through clustering of words.

Text clustering can be used to explore the contents of a text set. We have developed a visualization method that aids such exploration, and implemented it in a tool called Infomat. It presents the representation matrix directly in two dimensions. When the order of texts and words is changed, by for instance clustering, distributional patterns that indicate similarities between texts and words appear.

We have used Infomat to explore a set of free text answers about occupation from a questionnaire given to over 40,000 Swedish twins. The questionnaire also contained a closed answer regarding smoking. We compared several clusterings of the text answers to the closed answer, regarded as a categorization, by means of clustering evaluation. A recurring text cluster of high quality led us to formulate the hypothesis that “farmers smoke less than the average”, which we later could verify by reading previous studies. This hypothesis generation method could be used on any set of texts that is coupled with data restricted to a limited number of possible values.
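As a concrete illustration of the vector space model described in the abstract above, the short Python sketch below builds bag-of-words vectors for a few toy documents and compares them with cosine similarity. It is a minimal sketch under our own assumptions: the toy documents, the naive `tokenize` helper and the raw count weighting are invented for illustration, and the thesis's actual pipeline (lemmatization, compound splitting for Swedish, the Infomat tool) is not reproduced here.

```python
import math
from collections import Counter

def tokenize(text):
    # Naive whitespace tokenizer; the thesis instead uses lemmas
    # and splits Swedish solid compounds into their parts.
    return text.lower().split()

def to_vector(text):
    # Bag-of-words vector: word -> raw count. Word order is ignored,
    # exactly as in the basic vector space model.
    return Counter(tokenize(text))

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

docs = [
    "farmers grow crops on the farm",       # invented toy documents
    "the farm produces crops every year",
    "smoking increases health risks",
]
vectors = [to_vector(d) for d in docs]
for i in range(len(docs)):
    for j in range(i + 1, len(docs)):
        print(i, j, round(cosine(vectors[i], vectors[j]), 3))
```

Texts that share many words (the first two toy documents) score high, while unrelated texts score near zero; a clustering algorithm then groups texts by such similarities.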
4

Understanding Supply Chain Integration : A Connectivity & Willingness Perspective

Ekholm, Christer January 2011
No description available.
5

Evaluation of Studentportalen : A questionnaire survey among students at Linköping University

Dahlström, Kristin January 2009
Studentportalen (the Student Portal) is intended to be an aid for students during their studies by providing the various services and functions they need. Here, students can among other things register for courses, sign up for exams, order certificates, and create schedules. I conducted a questionnaire survey among the university's students to investigate their opinions about Studentportalen, the e-mail system and, to some extent, Studentwebben. The survey covered, among other things, logging in, the link structure, how easy Studentportalen's services and functions are to find and to learn, the information available about them, and the help function.

The results showed that the students have many good ideas about how Studentportalen can be improved. In today's society, usability is becoming increasingly important, and the requested improvements would increase Studentportalen's usability. As things stand, students do not log in to Studentportalen as often as the system owners would like. The survey showed that students on average log in to Studentportalen less often than every other week, which may be because Studentportalen does not provide functions and services that need to be used at least once a week. One example is exam registration, which may need to be used a few times per term, but not as often as once a week. The system therefore needs attractive functions and services that draw students to Studentportalen.

During the autumn of 2007, the student e-mail system was replaced with a new system called e-GO. This student e-mail is based on Google's technology and gives students access to a large storage space as well as some other services. The system received positive feedback from the students thanks to its large storage space.

Since this thesis is intended to serve as a basis for the system administrators' further development of Studentportalen, I hope that the information that emerged from the survey will be helpful and that the system will be developed as the students wish.
6

Knowledge transfer : A theoretical reality?

Andersson, Johan, Serbner, Martin, Ståhl, Maria January 2008
The demands on today's companies have increased markedly in recent years. It is important for today's companies to constantly measure up to, and preferably stay one step ahead of, their competitors. An important part of this is having the right kind of competence in different areas and positions within the company. To achieve this, it is common today for companies to arrange trainee programmes.

This thesis describes the difficulties of transferring knowledge to a trainee and the problem areas surrounding that process. In order for a company to design a well-functioning trainee programme, it is very important to understand which goals the company wants to achieve by running the programme.

Our approach and choice of method was to interview people who had completed the case company's trainee programme. We formulated our problem as: how is knowledge transferred to trainees within our case company today, and can it be improved?

During the course of the work, we discovered shortcomings in the case company's trainee programme, but we also found many positive elements. The empirical section gave us much valuable knowledge that we drew on in the analysis and conclusion sections. In our conclusion, we present various suggestions and recommendations for possible measures regarding methods for knowledge transfer.
7

Textual information retrieval : An approach based on language modeling and neural networks

Georgakis, Apostolos A. January 2004
This thesis covers topics relevant to information organization and retrieval. The main objective of the work is to provide algorithms that can elevate the recall-precision performance of retrieval tasks in a wide range of applications, ranging from document organization and retrieval to web-document pre-fetching and, finally, clustering of documents based on novel encoding techniques.

The first part of the thesis deals with document organization and retrieval using unsupervised neural networks, namely the self-organizing map, and statistical encoding methods for representing the available documents as numerical vectors. The objective of this part is to introduce a set of novel variants of the self-organizing map algorithm that address certain shortcomings of the original algorithm.

In the second part of the thesis, the latencies perceived by users surfing the Internet are shortened with a novel transparent and speculative pre-fetching algorithm. The proposed algorithm relies on a model of the behaviour of the user browsing the Internet and predicts the user's future actions. In modeling the user's behaviour, the algorithm relies on the contextual statistics of the web pages visited by the user.

Finally, the last chapter of the thesis provides preliminary theoretical results along with a general framework for current and future scientific work. The chapter describes the use of the Zipf distribution for document organization and the use of the AdaBoost algorithm for improving the performance of pre-fetching algorithms.
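To make the pre-fetching idea concrete, here is a minimal, hypothetical sketch of a first-order transition model: it counts page-to-page transitions in a browsing history and suggests the most likely successor of the current page as a pre-fetch candidate. This is our own simplification for illustration; the thesis's algorithm models the user's behaviour via the contextual statistics of page contents, not merely raw transition counts.

```python
from collections import defaultdict, Counter

class TransitionModel:
    """First-order model of a user's page-to-page browsing behaviour."""

    def __init__(self):
        # transitions[a][b] = how often page b was visited right after page a
        self.transitions = defaultdict(Counter)

    def observe(self, history):
        # Record consecutive page visits from one browsing session.
        for current_page, next_page in zip(history, history[1:]):
            self.transitions[current_page][next_page] += 1

    def predict(self, page):
        # Most likely next page, or None if this page was never seen.
        followers = self.transitions[page]
        if not followers:
            return None
        return followers.most_common(1)[0][0]

model = TransitionModel()
model.observe(["/home", "/news", "/sports", "/home", "/news", "/weather"])
print(model.predict("/home"))  # "/news" -- candidate for speculative pre-fetch
```

A transparent pre-fetcher would fetch the predicted page into a cache in the background, so that a correct prediction turns a network round-trip into a local cache hit.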
8

Object Based Concurrency for Data Parallel Applications : Programmability and Effectiveness

Diaconescu, Roxana Elena January 2002
Increased programmability for concurrent applications in distributed systems requires automatic support for some aspects of concurrent computing: the decomposition of a program into parallel threads, the mapping of threads to processors, the communication between threads, and the synchronization among threads. Thus, a highly usable programming environment for data parallel applications strives to conceal data decomposition, data mapping, data communication, and data access synchronization.

This work investigates the problem of programmability and effectiveness for scientific, data parallel applications with irregular data layout. The complicating factor for such applications is the recursive or indirection-based data structure representation. That is, an efficient parallel execution requires a data distribution and mapping that ensure data locality, but recursive and indirect representations yield poor physical data locality. We examine techniques for efficient, load-balanced data partitioning and mapping for irregular data layouts. Moreover, in the presence of non-trivial parallelism and data dependences, a general data partitioning procedure complicates locating arbitrarily distributed data across address spaces. We formulate the general data partitioning and mapping problems and show how a general data layout can be used to access data across address spaces in a location-transparent manner.

Traditional data parallel models promote instruction-level or loop-level parallelism. Compiler transformations and optimizations for discovering and/or increasing parallelism in Fortran programs apply to regular applications. However, many data intensive applications are irregular (sparse matrix problems, applications that use general meshes, etc.). Discovering and exploiting fine-grain parallelism for applications that use indirection structures (e.g. indirection arrays, pointers) is very hard, or even impossible.

The work in this thesis explores a concurrent programming model that enables coarse-grain parallelism in a highly usable, efficient manner. Hence, it explores the issues of implicit parallelism in the context of objects as a means for encapsulating distributed data. The computation model results in a trivial SPMD (Single Program Multiple Data) style, where the non-trivial parallelism aspects are solved automatically.

This thesis makes the following contributions:

- It formulates the general data partitioning and mapping problems for data parallel applications. Based on these formulations, it describes an efficient distributed data consistency algorithm.

- It describes a data parallel object model suitable for regular and irregular data parallel applications. Moreover, it describes an original technique to map data to processors so as to preserve locality. It also presents an inter-object consistency scheme that tries to minimize communication.

- It brings evidence of the efficiency of the data partitioning and consistency schemes. It describes a prototype implementation of a system supporting implicit data parallelism through distributed objects. Finally, it presents results showing that the approach is scalable on various architectures (e.g. Linux clusters, SGI Origin 3800).
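As a rough, invented illustration of the locality problem for irregular layouts discussed above, the sketch below partitions a small irregular graph (say, an unstructured mesh) with a greedy breadth-first strategy and counts the edges that cross partitions, a simple proxy for communication between address spaces. The graph, the partitioner, and the cost measure are illustrative assumptions on our part, not the thesis's algorithms.

```python
from collections import deque

def bfs_partition(adjacency, num_parts):
    # Greedy BFS partitioning: grow each part from a seed vertex so that
    # neighbouring vertices tend to land in the same part (data locality).
    target = len(adjacency) // num_parts
    part_of = {}
    unassigned = set(adjacency)
    for part in range(num_parts):
        if not unassigned:
            break
        queue = deque([next(iter(unassigned))])
        size = 0
        while queue and (size < target or part == num_parts - 1):
            v = queue.popleft()
            if v not in unassigned:
                continue
            part_of[v] = part
            unassigned.discard(v)
            size += 1
            queue.extend(n for n in adjacency[v] if n in unassigned)
    for v in unassigned:          # disconnected leftovers, if any
        part_of[v] = num_parts - 1
    return part_of

def cut_edges(adjacency, part_of):
    # Edges whose endpoints live in different parts would require
    # communication across address spaces at run time.
    return sum(1 for v in adjacency for n in adjacency[v]
               if v < n and part_of[v] != part_of[n])

# A small irregular "mesh": vertex -> neighbours (invented example).
mesh = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 5],
        4: [2, 5], 5: [3, 4]}
parts = bfs_partition(mesh, 2)
print(parts, "cut edges:", cut_edges(mesh, parts))
```

A good mapping keeps the cut small while part sizes stay balanced; for recursive or pointer-based structures this is exactly what naive block distributions fail to achieve.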
9

Interactive Process Models

Jørgensen, Håvard D. January 2004
Contemporary business process systems are built to automate routine procedures. Automation demands well-understood domains, repetitive processes, clear organisational roles, an established terminology, and predefined plans. Knowledge work is not like that. Plans for knowledge intensive processes are elaborated and reinterpreted as the work progresses. Interactive process models are created and updated by the project participants to reflect evolving plans. The execution of such models is controlled by users and only partially automated. An interactive process system should:

- Enable modelling by end users,
- Integrate support for ad-hoc and routine work,
- Dynamically customise functionality and interfaces, and
- Integrate learning and knowledge management in everyday work.

This thesis reports on an engineering project, where an interactive process environment called WORKWARE was developed. WORKWARE combines workflow and groupware. Following an incremental development method, multiple versions of systems have been designed, implemented and used. In each iteration, usage experience, validation data, and the organisational science literature generated requirements for the next version.
10

Discernibility and Rough Sets in Medicine: Tools and Applications

Øhrn, Aleksander January 2000
This thesis examines how discernibility-based methods can be equipped to possess several qualities that are needed for analyzing tabular medical data, and how these models can be evaluated according to current standard measures used in the health sciences. To this end, tools have been developed that make this possible, and some novel medical applications have been devised in which the tools are put to use.

Rough set theory provides a framework in which discernibility-based methods can be formulated and interpreted, and also forms an appealing foundation for data mining and knowledge discovery. When the medical domain is targeted, several factors become important. This thesis examines some of these factors, and holds them up to the current state of the art in discernibility-based empirical modelling. Bringing together pertinent techniques, suitable adaptations of relevant theory for model construction and assessment are presented. Rough set classifiers are brought together with ROC analysis, and it is outlined how attribute costs and semantics can enter the modelling process.

ROSETTA, a comprehensive software system for conducting data analyses within the framework of rough set theory, has been developed. Under the hypothesis that the accessibility of such tools lowers the threshold for abstract ideas to migrate into concrete realization, this helps reduce the gap between theoreticians and practitioners, and enables existing problems to be more easily attacked. The ROSETTA system offers a set of flexible and powerful algorithms, and sets these in a user-friendly environment designed to support all phases of the discernibility-based modelling methodology. Researchers world-wide have already put the system to use in a wide variety of domains.

By and large, discernibility-based data analysis can be varied along two main axes: which objects in the universe of discourse we deem it necessary to discern between, and how we define that discernibility among these objects is allowed to take place. Using ROSETTA, this thesis has explored various facets of this in three novel and distinctly different medical applications:

- A method is proposed for identifying population subgroups for which expensive tests may be avoided, and experiments with a real-world database on a cardiological prognostic problem suggest that significant savings are possible.

- A method is proposed for anonymizing medical databases with sensitive contents via cell suppression, thus helping to preserve patient confidentiality.

- Very simple rule-based classifiers are employed to diagnose acute appendicitis, and their performance is compared to that of a team of experienced surgeons. The added value of certain biochemical tests is also demonstrated.
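To illustrate the two axes just mentioned, the following sketch computes a basic discernibility matrix for a tiny, invented decision table: for each pair of objects with different decisions, it records the set of condition attributes on which they differ. The toy data and attribute names are our own assumptions; ROSETTA supports far richer notions of discernibility than this textbook version.

```python
# Toy decision table: each row is (condition attributes, decision).
# Invented example data; not taken from the thesis.
attributes = ["fever", "cough", "age_group"]
table = [
    ({"fever": "yes", "cough": "yes", "age_group": "adult"}, "flu"),
    ({"fever": "yes", "cough": "no",  "age_group": "child"}, "flu"),
    ({"fever": "no",  "cough": "yes", "age_group": "adult"}, "cold"),
    ({"fever": "no",  "cough": "no",  "age_group": "adult"}, "healthy"),
]

def discernibility_matrix(table, attributes):
    # matrix[(i, j)] = attributes that discern objects i and j, computed
    # only for pairs we need to discern between: here, pairs whose
    # decision values differ (the first of the two axes above).
    matrix = {}
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            conds_i, dec_i = table[i]
            conds_j, dec_j = table[j]
            if dec_i == dec_j:
                continue  # no need to discern within the same class
            matrix[(i, j)] = {a for a in attributes
                              if conds_i[a] != conds_j[a]}
    return matrix

for pair, attrs in discernibility_matrix(table, attributes).items():
    print(pair, sorted(attrs))
```

From such a matrix one can derive reducts (minimal attribute sets that preserve discernibility) and, from those, rule-based classifiers of the kind evaluated in the applications above.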