  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Från sällan till ofta : En fallstudie inom professionell idrott om sporadiska besökares konsumtion / From seldom to frequent : A case study within professional sports regarding low frequent spectators' consumption

Lundgren, Fredrik, Järnkrok, Emma January 2016 (has links)
Problem definition: Today there is a negative trend in the number of spectators among a majority of the organizations within professional ice hockey in Sweden. This is a problematic development, since spectators play an important part in the game event, both financially and experientially. Previous research shows that the customer segment that visits one to three games per season is a favorable group to target when it comes to increasing consumption. However, previous research has foremost studied spectators within professional sports in general terms or has focused on devoted fans, and has primarily followed a quantitative research strategy. Aim and research questions: The aim of the study is to gain a deeper understanding of low frequent spectators and of how the brand can be used to influence them to attend more often. • Which factors influencing consumption are highlighted by low frequent spectators? • How can knowledge of these identified factors be used, from a brand perspective, to influence the low frequent spectator to visit more often? Methodology: With a qualitative research strategy and a deductive approach with inductive elements, this study conducted twelve semi-structured interviews. The respondents are spectators of the case organization's ice hockey games who visit one to three games per season. Results: This study has identified a number of factors that can influence low frequent spectators' consumption through the brand. Atmosphere was identified as a consumption motive among a group of low frequent spectators; the study also found different versions of the social motive, as well as preferences regarding the nature of game results. Furthermore, the study found a group of low frequent spectators with a high level of knowledge about ice hockey. Finally, the study also found deeper descriptions of escapism and entertainment, which likewise influence consumption among low frequent spectators within professional sports. Research contribution: This study provides detailed descriptions of low frequent spectators' consumer behavior and brand associations within professional sports. Furthermore, it provides additional indications of the factors that affect low frequent spectators' frequency of attending game events.
112

Distributed frequent subgraph mining in the cloud / Fouille de sous-graphes fréquents dans les nuages

Aridhi, Sabeur 29 November 2013 (has links)
Recently, graph mining approaches have become very popular, especially in domains such as bioinformatics, chemoinformatics and social networks. One of the most challenging tasks in this setting is frequent subgraph discovery, a task strongly motivated by the tremendously increasing size of existing graph databases. Due to this fact, there is an urgent need for efficient and scalable approaches to frequent subgraph discovery, especially given the high availability of cloud computing environments. This thesis deals with distributed frequent subgraph mining in the cloud. First, we provide the material required to understand the basic notions of our two research fields, namely graph mining and cloud computing. Then, we present the contributions of this thesis. In the first axis, we propose a novel approach for large-scale subgraph mining using the MapReduce framework. The proposed approach provides a data partitioning technique that takes data characteristics into account: it uses the densities of the graphs to partition the input data. Such a partitioning technique balances the computational load over the distributed collection of machines and replaces the default arbitrary partitioning technique of MapReduce. We experimentally show that our approach significantly decreases the execution time and scales the subgraph discovery process to large graph databases. In the second axis, we address the multi-criteria optimization problem of tuning the thresholds related to distributed frequent subgraph mining in cloud computing environments while optimizing the global monetary cost of storing and querying data in the cloud. We define cost models for managing and mining data with a large-scale subgraph mining framework over a cloud architecture, and present an experimental validation of the proposed cost models in the case of distributed subgraph mining in the cloud.
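The density-driven partitioning described in this abstract can be illustrated with a minimal sketch. This is not the thesis's implementation: the density formula and the greedy lightest-bin assignment are assumptions chosen for illustration, and all names (`graph_density`, `density_partition`) are hypothetical.

```python
def graph_density(graph):
    """Density of an undirected graph: 2|E| / (|V| * (|V| - 1))."""
    v, e = graph["nodes"], graph["edges"]
    return 0.0 if v < 2 else (2.0 * e) / (v * (v - 1))

def density_partition(graphs, num_partitions):
    """Greedy density-balanced partitioning: sort graphs by density,
    then always assign the next graph to the currently lightest partition,
    so each worker receives a comparable mix of dense and sparse graphs
    (instead of MapReduce's default arbitrary split)."""
    partitions = [[] for _ in range(num_partitions)]
    loads = [0.0] * num_partitions
    for g in sorted(graphs, key=graph_density, reverse=True):
        i = loads.index(min(loads))   # lightest partition so far
        partitions[i].append(g)
        loads[i] += graph_density(g)
    return partitions
```

Each mapper would then mine its partition locally, with a reduce phase aggregating the candidate subgraphs and their supports.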
113

Da doutrina e do método em medicina legal. Ensaio epistemológico sobre uma ciência bio-psico-social / Doctrine and Method in Forensic Medicine. An Epistemological Essay on a Bio-Psycho-Social Science

Freire, Jose Jozefran Berto 24 April 2009 (has links)
This essay begins with an overview of a bibliographic survey on Forensic Medicine, from its early onset with Ambroise Paré in 1532 up to 2008; this survey played an important role in the planning of this thesis. It is important to emphasize that the proper concept of Forensic Medicine appears only in 1621, with Paolo Zacchia (Quaestiones Medico-Legales...). Our purpose in this work involves several theoretical aspects of this science. The first is to show that Forensic Medicine can be considered a science of a class in the sense of Logic, and that it is therefore not, as in Clinical Medicine, "a science of the individual", as stated by Gilles Granger in his already classic work on epistemology. The second theoretical objective is to propose Forensic Medicine as a science of the "Aristotelian frequent" (hòs epì tò polú, a term coined as a philosophical concept by the Hellenist Porchat Pereira), through which such a science finds its place between the realm of the merely accidental and that of the necessary and universal proper to logico-mathematical reasoning. Thirdly, we address the fact that forensic reports are usually limited to empirical observations, and we demonstrate the equally indispensable consideration of the a priori conditions of the possibility of establishing the visum et repertum, that is, the role of the brain in framing the experience accessible to human beings, in contemporary terms. On the empirical side, we conducted an extensive study analyzing 996 Brazilian forensic reports, whose problems led us to the question of Method. On Method, we discuss the theories of Aristotle, Descartes, Kant, Popper, Piaget and Granger, leaving aside the great empiricists such as Francis Bacon, David Hume and Stuart Mill, since in their theories, owing to the beliefs embedded in Empiricism itself, there is no room for the brain as the first condition of any kind of sensory experience of the world. However, many biologists worldwide (Brazil included), since the work of Konrad Lorenz (Nobel Prize in Physiology or Medicine, 1973) began to circulate in the scientific community, have come to consider that the Kantian a priori can be interpreted today as the endogenous, organic aspect of the human ability to apprehend the world (the brain), necessary to any reading of lived experience, especially when there is a need to explain it and report it to others. In the case of Forensic Medicine, this means reporting the findings to the Judicial System, with very many psycho-social implications. Finally, we propose to Forensic Medicine a dialectical method, seeking to demonstrate its theoretical and practical advantages.
114

Paljon palveluja tarvitsevien asiakkaiden yksilöity sosiaali- ja terveyspalvelujen yhteen kokoaminen / Individualised integration of social and health services for frequent attenders

Ylitalo-Katajisto, K. (Kirsti) 19 November 2019 (has links)
Abstract: The purpose of this study was to describe and understand the individualised integration of social and health services for frequent attenders, by customer profile, from the perspective of knowledge-based management. The study was carried out using a multi-method approach. Sub-study I described what kinds of customer profiles could be identified among municipal residents based on diaries (n=15) at the planning stage of the social and health care centre. Sub-study II identified the customer profiles of frequent attenders based on service plans (n=56). Sub-study III described, as a register study based on four customer profiles, frequent attenders' (n=2,388) use of primary healthcare, emergency care and specialised healthcare services, and the social services granted to them. The data of the sub-studies were analysed by means of content analysis, systematic analysis and statistical methods. As a result of the study, customer profiles were generated both for municipal residents and for frequent attenders. The purpose of identifying customer profiles for municipal residents was to gain a preliminary understanding for defining frequent attenders' customer profiles. Among frequent attenders, physical, mental and social service needs are intertwined, and the use of social and health services was highly individualised according to the customers' current life situations. From the perspective of knowledge-based management, the study highlighted the need for individualised integration of social and health services for frequent attenders, along with the multi-disciplinary social and health information, and the flow of information between different service operators, that such integration requires. The results of the study can be utilised in building and managing the integration of social and health services for frequent attenders.
115

Novel frequent itemset hiding techniques and their evaluation / Σύγχρονες μέθοδοι τεχνικών απόκρυψης συχνών στοιχειοσυνόλων και αξιολόγησή τους

Καγκλής, Βασίλειος 20 May 2015 (has links)
Advances in data collection and data storage technologies have led to the establishment of transactional databases in companies and organizations, as they allow enormous volumes of data to be stored efficiently. Most of the time, these vast amounts of data cannot be used as they are: data processing must first take place to extract the useful knowledge. Once mined, this knowledge can be used in several ways depending on the nature of the data. Quite often, companies and organizations are willing to share data for the sake of mutual benefit. However, these benefits come with several risks, as privacy problems may arise as a result of the sharing. Sensitive data, along with sensitive knowledge inferred from these data, must be protected from unintentional exposure to unauthorized parties. One form of the inferred knowledge is frequent patterns, discovered during the process of mining frequent itemsets from transactional databases. The problem of protecting such patterns is known as the frequent itemset hiding problem. In this thesis, we review several techniques for protecting sensitive frequent patterns in the form of frequent itemsets. After presenting a wide variety of techniques in detail, we propose a novel approach to this problem: a method that combines heuristics with linear programming. We evaluate the proposed method on real datasets, using a number of performance metrics, and compare its results with those of other state-of-the-art approaches.
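The frequent itemset hiding problem described above can be sketched in its simplest heuristic form: lowering a sensitive itemset's support below the mining threshold by deleting items from supporting transactions. This is not the thesis's heuristic/linear-programming method, only a baseline illustration; the function names and the trivial victim-item policy are assumptions.

```python
def support(db, itemset):
    """Number of transactions containing every item of the itemset."""
    s = set(itemset)
    return sum(1 for t in db if s <= t)

def hide_itemset(db, sensitive, min_support):
    """Heuristic sanitization: while the sensitive itemset is still frequent,
    pick a supporting transaction and delete one of its sensitive items,
    so a miner using min_support no longer discovers the itemset."""
    db = [set(t) for t in db]      # work on a copy
    victim = sensitive[0]          # item chosen for removal (simplest policy)
    while support(db, sensitive) >= min_support:
        for t in db:
            if set(sensitive) <= t:
                t.discard(victim)  # this transaction no longer supports it
                break
    return db
```

Real hiding techniques differ mainly in how they choose the victim item and transaction so as to minimize side effects on non-sensitive frequent itemsets — the trade-off the linear-programming formulation targets.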
116

VISUAL SEMANTIC SEGMENTATION AND ITS APPLICATIONS

Gao, Jizhou 01 January 2013 (has links)
This dissertation addresses the difficulties of semantic segmentation when dealing with an extensive collection of images and 3D point clouds. Due to the ubiquity of digital cameras that help capture the world around us, as well as the advanced scanning techniques that are able to record 3D replicas of real cities, the sheer amount of visual data available presents many opportunities for both academic research and industrial applications. But the mere quantity of data also poses a tremendous challenge. In particular, the problem of distilling useful information from such a large repository of visual data has attracted ongoing interest in the fields of computer vision and data mining. Structural semantics are fundamental to understanding both natural and man-made objects. Buildings, for example, are like languages in that they are made up of repeated structures or patterns that can be captured in images. In order to find these recurring patterns in images, I present an unsupervised frequent visual pattern mining approach that goes beyond co-location to identify spatially coherent visual patterns, regardless of their shape, size, location and orientation. First, my approach categorizes visual items from scale-invariant image primitives with similar appearance, using a suite of polynomial-time algorithms designed to identify consistent structural associations among visual items, representing frequent visual patterns. After detecting repetitive image patterns, I use unsupervised and automatic segmentation of the identified patterns to generate more semantically meaningful representations. The underlying assumption is that pixels capturing the same portion of an image pattern are visually consistent, while pixels that come from different backdrops are usually inconsistent. I further extend this approach to perform automatic segmentation of foreground objects from an Internet photo collection of landmark locations.
New scanning technologies have successfully advanced the digital acquisition of large-scale urban landscapes. In addressing semantic segmentation and reconstruction of this data using LiDAR point clouds and geo-registered images of large-scale residential areas, I develop a complete system that simultaneously uses classification and segmentation methods to first identify different object categories and then apply category-specific reconstruction techniques to create visually pleasing and complete scene models.
117

Distributed frequent subgraph mining in the cloud

Aridhi, Sabeur 29 November 2013 (has links) (PDF)
Recently, graph mining approaches have become very popular, especially in domains such as bioinformatics, chemoinformatics and social networks. One of the most challenging tasks in this setting is frequent subgraph discovery, a task strongly motivated by the tremendously increasing size of existing graph databases. Due to this fact, there is an urgent need for efficient and scalable approaches to frequent subgraph discovery, especially given the high availability of cloud computing environments. This thesis deals with distributed frequent subgraph mining in the cloud. First, we provide the material required to understand the basic notions of our two research fields, namely graph mining and cloud computing. Then, we present the contributions of this thesis. In the first axis, we propose a novel approach for large-scale subgraph mining using the MapReduce framework. The proposed approach provides a data partitioning technique that takes data characteristics into account: it uses the densities of the graphs to partition the input data. Such a partitioning technique balances the computational load over the distributed collection of machines and replaces the default arbitrary partitioning technique of MapReduce. We experimentally show that our approach significantly decreases the execution time and scales the subgraph discovery process to large graph databases. In the second axis, we address the multi-criteria optimization problem of tuning the thresholds related to distributed frequent subgraph mining in cloud computing environments while optimizing the global monetary cost of storing and querying data in the cloud. We define cost models for managing and mining data with a large-scale subgraph mining framework over a cloud architecture, and present an experimental validation of the proposed cost models in the case of distributed subgraph mining in the cloud.
118

Evolutionary algorithms and frequent itemset mining for analyzing epileptic oscillations

Smart, Otis Lkuwamy 28 March 2007 (has links)
This research presents engineering tools that address an important area impacting many persons worldwide: epilepsy. Over 60 million people are affected by epilepsy, a neurological disorder characterized by recurrent seizures that occur suddenly. Surgery and anti-epileptic drugs (AEDs) are common therapies for epilepsy patients. However, only persons with seizures that originate in an unambiguous, focal portion of the brain are candidates for surgery, while AEDs can lead to very adverse side effects. Although medical devices based upon focal cooling, drug infusion or electrical stimulation are viable alternatives for therapy, a reliable method to automatically pinpoint dysfunctional brain tissue and direct these devices is needed. This research introduces a method to effectively localize epileptic networks, or connectivity between dysfunctional brain regions, to guide where to insert electrodes in the brain for therapeutic devices, surgery, or further investigation. The method uses an evolutionary algorithm (EA) and frequent itemset mining (FIM) to detect and cluster frequent concentrations of epileptic neuronal action potentials within human intracranial electroencephalogram (EEG) recordings. In an experiment applying the method to seven patients with neocortical epilepsy (a total of 35 seizures), the approach reliably identifies the seizure onset zone in six of the subjects (a total of 31 seizures). Hopefully, this research will lead to better control of seizures and an improved quality of life for the millions of persons affected by epilepsy.
119

Effective Characterization of Sequence Data through Frequent Episodes

Ibrahim, A January 2015 (has links) (PDF)
Pattern discovery is an important area of data mining, referring to a class of techniques designed for the extraction of interesting patterns from data. A pattern is some kind of local structure that captures correlations and dependencies present in the elements of the data. In general, pattern discovery is about finding all patterns of 'interest' in the data, and a popular measure of interestingness for a pattern is its frequency of occurrence. Thus the problem of frequent pattern discovery is to find all patterns in the data whose frequency of occurrence exceeds some user-defined threshold. However, frequency is not the only measure for finding patterns of interest; other measures and techniques exist as well. This thesis is concerned with the efficient discovery of inherent patterns from long sequence (temporally ordered) data. Mining of such sequentially ordered data is called temporal data mining, and the temporal patterns discovered from large sequential data are called episodes. More specifically, this thesis explores efficient methods for finding small and relevant subsets of episodes from sequence data that best characterize the data. The thesis also discusses methods for comparing datasets, based on comparing the sets of patterns representing them. The data in a frequent episode discovery framework is abstractly viewed as a single long sequence of events. Here, an event is a tuple (Ei, ti), where Ei is referred to as an event-type (taking values from a finite alphabet) and ti is the time of occurrence. The events are ordered in non-decreasing order of time of occurrence. The pattern of interest in such a sequence is called an episode, which is a collection of event-types with a partial order defined over it.
In this thesis, the focus is on a special type of episode called a serial episode, where a total order is defined among the collection of event-types representing the episode. An occurrence of an episode is essentially a subset of events from the data whose event-types match the set of event-types associated with the episode and whose order of occurrence conforms to the underlying partial order of the episode. The frequency of an episode is some measure of how often it occurs in the event stream; many different notions of frequency have been defined in the literature. Given a frequency definition, the goal of frequent episode discovery is to unearth all episodes whose frequency exceeds a user-defined threshold. The size of an episode is the number of event-types in it. An episode β is called a subepisode of another episode α if the collection of event-types of β is a subset of the corresponding collection of α, and the event-types of β satisfy the same partial order relationships present among the corresponding event-types of α. The set of all episodes can be arranged in a partial order lattice, where each level i contains episodes of size i and the partial order is the subepisode relationship. In general, there are two approaches for mining frequent episodes, based on the way one traverses this lattice. The first is to traverse the lattice in a breadth-first manner, and is called the Apriori approach. The other is the Pattern-growth approach, where the lattice is traversed in a depth-first manner. Different frequency notions exist for episodes, and many Apriori-based algorithms have been proposed for mining frequent episodes under them; however, Pattern-growth methods do not exist for many of these frequency notions.
The first part of the thesis proposes new Pattern-growth methods for discovering frequent serial episodes under two frequency notions, called the non-overlapped frequency and the total frequency. Special cases, where additional conditions called span and gap constraints are imposed on the occurrences of the episodes, are also considered. The proposed methods, in general, consist of two steps: a candidate generation step and a counting step. The candidate generation step finds potential frequent episodes by following the general Pattern-growth approach, i.e., a depth-first traversal of the lattice of all episodes. The second step counts the frequencies of the candidate episodes. The thesis presents efficient methods for counting the occurrences of serial episodes using occurrence windows of subepisodes, for both the non-overlapped and the total frequency. The relative advantages of Pattern-growth approaches over Apriori approaches are also discussed. Detailed simulations on a host of synthetic and real datasets show that the proposed methods are highly scalable and efficient in runtime compared to the existing Apriori approaches. One of the main issues in frequent pattern mining is the huge number of frequent patterns returned by discovery methods, irrespective of the approach taken. The second part of this thesis addresses this issue and discusses methods for selecting a small subset of relevant episodes from event sequences. A few approaches for finding a small subset of patterns have been discussed in the literature. One set of methods is information-theoretic, searching for patterns that provide maximum information. Another approach comprises summarization schemes based on the Minimum Description Length (MDL) principle. 
Here, the data is encoded using a subset of patterns (which forms the model for the data) and their occurrences. The subset of patterns with the maximum efficiency in encoding the data is the best representative model for the data. The MDL principle takes into account both the encoding efficiency of the model and the model complexity. A method called Constrained Serial episode Coding (CSC) is proposed based on the MDL principle; it returns a highly relevant, non-redundant and small subset of serial episodes. This includes an encoding scheme in which both the model representation and the encoding of the data are efficient. An interesting feature of this algorithm for isolating a small set of relevant episodes is that it does not need a user-specified frequency threshold. The effectiveness of this method is shown on two types of data. The first is data obtained from a detailed simulator of a reconfigurable coupled conveyor system. The conveyor system consists of different intersecting paths, and packages flow through such a network. Mining such data can unearth the main paths of package flows, which is useful in remote monitoring and visualization of the system. On this data, it is shown that the proposed method returns highly consistent subpaths, in the form of serial episodes, with greater encoding efficiency than other known sequence summarization schemes such as SQS and GoKrimp. The second type of data consists of a collection of multi-class sequence datasets. It is shown that the episodes selected by the proposed method form good features for classification, achieving better classification results than those of SQS and GoKrimp. The third and final part of the thesis discusses methods for comparing sets of patterns representing different datasets. 
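The two-part MDL trade-off can be illustrated with a toy cost function. This sketch uses contiguous occurrences and fixed-length codewords only; it illustrates the principle, not the CSC encoding scheme proposed in the thesis.

```python
import math

def mdl_cost(sequence, episodes):
    """Toy two-part MDL cost in bits: model cost (spelling out each chosen
    episode) plus data cost (the sequence with every contiguous occurrence
    of an episode replaced by a single codeword '#')."""
    codebook_size = len(set(sequence)) + len(episodes)
    bits = math.log2(codebook_size) if codebook_size > 1 else 1.0
    model_tokens = sum(len(ep) for ep in episodes)   # cost of the model
    text = sequence
    for ep in episodes:
        text = text.replace(ep, "#")                 # data given the model
    return (model_tokens + len(text)) * bits

data = "ABCABCABCXY"
print(mdl_cost(data, []))        # no model: pay for every raw symbol
print(mdl_cost(data, ["ABC"]))   # the episode 'ABC' pays for itself
```

A pattern earns its place in the model only when the data-cost savings from its occurrences exceed the cost of describing the pattern itself, which is what makes MDL-based selection favour small, non-redundant pattern sets.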
There are many instances in which one is interested in comparing datasets. For example, in streaming data, one wants to know whether the characteristics of the data have remained the same or changed significantly. In other cases, one may simply wish to compare two datasets and quantify the degree of similarity between them. Often, datasets are characterized by sets of patterns as described above, and comparing these sets gives information about the similarity or dissimilarity of the underlying datasets. However, not many measures exist for comparing sets of patterns. This thesis proposes a similarity measure for comparing sets of patterns, which in turn aids in the comparison of different datasets. First, a kernel for comparing two patterns, called the Pattern Kernel, is proposed for three types of patterns: serial episodes, sequential patterns and itemsets. Using this kernel, a Pattern Set Kernel is proposed for comparing sets of patterns. The effectiveness of this kernel is shown in classification and change detection. The thesis concludes with a summary of the main contributions and some suggestions for extending the work presented here.
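The idea of lifting a pattern-level kernel to sets of patterns can be sketched as follows. The base kernel used here (normalized longest-common-subsequence length) is an illustrative stand-in, not the Pattern Kernel defined in the thesis.

```python
from functools import lru_cache

def pattern_kernel(a, b):
    """Toy similarity between two serial episodes: length of their longest
    common subsequence, normalized by the longer episode's size."""
    @lru_cache(maxsize=None)
    def lcs(i, j):
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:
            return 1 + lcs(i + 1, j + 1)
        return max(lcs(i + 1, j), lcs(i, j + 1))
    return lcs(0, 0) / max(len(a), len(b))

def pattern_set_kernel(s1, s2):
    """Set-level kernel: mean of all pairwise pattern kernels, so similar
    pattern sets score near 1 and disjoint ones near 0."""
    return sum(pattern_kernel(p, q) for p in s1 for q in s2) / (len(s1) * len(s2))

k = pattern_set_kernel([("A", "B", "C"), ("A", "C")],
                       [("A", "B", "C"), ("B", "C")])
print(round(k, 2))  # 0.71
```

Given such a set-level similarity, two datasets can be compared by mining a representative pattern set from each and evaluating the kernel between the two sets, which is the route the thesis takes for classification and change detection.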
120

Da doutrina e do método em medicina legal. Ensaio epistemológico sobre uma ciência bio-psico-social / Doctrine and Method in Forensic Medicine. An Epistemological Essay on a Bio-Psycho-Social Science

Jose Jozefran Berto Freire 24 April 2009 (has links)
Este ensaio inicia-se com um sucinto painel a respeito de uma pesquisa bibliográfica sobre Medicina Legal que começa com Ambroise Paré em 1532 e chega ao ano de 2008; pesquisa esta que muito nos ajudou no planejamento desta Tese. É preciso que se diga que o conceito de Medicina Legal só aparece em 1621, com Paolo Zacchia (Quaestiones Medico Legales...). Nosso objetivo neste trabalho envolve diferentes aspectos teóricos dessa ciência. O primeiro é o de demonstrar que a Medicina Legal pode ser a ciência de uma classe no sentido da Lógica e que ela não seria, portanto, a exemplo da Medicina Clínica, uma ciência do indivíduo, como diz Gilles Granger em sua obra sobre Epistemologia já tornada célebre. O segundo aspecto teórico seria o de propor a Medicina Legal como uma ciência do freqüente aristotélico (hòs epì tò polú, termo cunhado pelo helenista Porchat Pereira enquanto conceito filosófico), graças ao qual situamos nossa ciência entre o acidental e o necessário e universal do pensamento lógico-matemático. Em terceiro lugar, discorreremos sobre o fato de que os laudos médico-legais estão normalmente restritos à constatação empírica e, então, iremos demonstrar a também indispensável consideração das condições a priori da possibilidade de se estabelecer o visum et repertum, ou seja, a consideração do papel do encéfalo na leitura da experiência possível ao ser humano, numa linguagem atual. No que diz respeito à prática, realizamos uma pesquisa na qual analisamos 996 laudos médico-legais, no Brasil, cujos problemas nos remeteram à questão do Método. Sobre o Método, consideramos as teorias de Aristóteles, Descartes, Kant, Popper, Piaget e Granger, deixando de lado os grandes empiristas como Francis Bacon, David Hume e Stuart Mill, na medida em que, em suas teorias, devido às crenças embutidas no próprio Empirismo, não há lugar para o cérebro como condição primeira de qualquer tipo de leitura da experiência no mundo sensível. 
Ora, muitos biólogos, inclusive no Brasil, a partir do Prêmio Nobel de Fisiologia ou Medicina Konrad Lorenz, consideram que o a priori kantiano pode ser interpretado, hoje, como o aspecto endógeno, orgânico, da possibilidade humana de conhecer o mundo (o encéfalo), necessário a toda e qualquer leitura da experiência vivida, sobretudo quando houver a necessidade de explicá-la e reportá-la a terceiros. No caso da Medicina Legal, reportá-la à Justiça, com muitíssimas implicações psico-sociais. Proporemos então, à Medicina Legal, um Método Dialético, procurando demonstrar suas vantagens teóricas e práticas. / This essay begins with an overview of a bibliographic survey of Forensic Medicine, from its early onset with Ambroise Paré, in 1532, up to the present. This survey played an important role in the planning of this thesis (it is important to emphasize, however, that the proper concept of Forensic Medicine appears only in 1621, with Paolo Zacchia's Quaestiones Medico-Legales...). Our purpose in this work includes several theoretical aspects of this science. The first is to show that Forensic Medicine should be considered a science of a class in the sense of Logic, rather than, as in Clinical Medicine, "a science of the individual", as stated by Gilles Granger in his already classic work on epistemology. The second theoretical objective is to propose Forensic Medicine as a science of the "Aristotelian frequent" (hòs epì tò polú, a term coined by the Hellenist Porchat Pereira as a philosophical concept), through which this science finds its place between the realm of the merely accidental and that of the necessary and universal proper to logico-mathematical reasoning. 
Thirdly, we address the problem that forensic reports are usually limited to empirical observations; we then demonstrate the need to also take into account the a priori conditions of possibility of the visum et repertum, that is, to consider the role of the brain in the framing of the experience accessible to human beings, in a contemporary idiom. On the empirical side, we conducted extensive research, analyzing 996 Brazilian forensic reports, whose problems led us to question the method generally used. On the question of method, we discuss the theories of Aristotle, Descartes, Kant, Popper, Piaget and Granger, leaving aside the great empiricists such as Francis Bacon, David Hume and Stuart Mill, since in their theories, owing to the beliefs embedded in Empiricism itself, there is no room for the brain as the first condition of any kind of sensory experience of the world. Many biologists worldwide (Brazil included) have, however, since the work of Konrad Lorenz (Nobel Prize in Physiology or Medicine, 1973) began to circulate in the scientific community, come to consider that the Kantian a priori can be interpreted as the endogenous (organic) aspect of the human ability to apprehend the world, necessary to any construction of actual experience, especially when there is an intent to explain and report it to others. In the case of Forensic Medicine, this means reporting the findings to the judicial system, with many psycho-social implications. Finally, we propose a dialectical method for Forensic Medicine, aiming to demonstrate its theoretical and practical advantages.
