
Meta-analysis applied to Multi-agent Software Engineering / Méta-analyse pour le génie logiciel des systèmes multi-agents

Razo Ruvalcaba, Luis Alfonso, 23 July 2012
From a general point of view, this thesis addresses the problem of automatically building a solution by choosing, from a set of available building blocks, a compatible subset that solves a given problem. The construction takes into account both the compatibility of each building block with the problem and the compatibility of the chosen building blocks with one another within the solution. In the particular perspective of this thesis, the building blocks are meta-models and the given problem is the description of a problem that can be solved with software using the multi-agent system paradigm. The core of the thesis is a process, itself based on a multi-agent system, that analyzes the given problem and the available meta-models, matches the two, and suggests one possible meta-model-based solution for the problem; if no solution is found, it indicates that the problem cannot be solved through this paradigm with the available meta-models. The process consists of the following main steps: (1) Through a characterization process, the problem description is analyzed to locate the solution domain, which is then used to choose a list of the most domain-compatible meta-models as candidates. (2) Meta-model characterizations are also required, evaluating each meta-model's performance within each considered solution domain. (3) The matching step is built on a multi-agent system in which each agent represents a candidate meta-model, and the agents interact with each other to find a group of meta-models suitable to represent a solution.
Each agent uses as criteria the compatibility of its candidate meta-model with the meta-models represented by the other agents. When a group is found, its overall compatibility with the given problem is evaluated. Finally, each agent holds a solution group; these groups are compared with one another to find the one most suitable to solve the problem and to decide the final group. This thesis focuses on providing a process and a prototype tool for this last step. The proposed approach draws on several concepts from meta-analysis, cooperative artificial intelligence, Bayesian cognition, uncertainty, probability and statistics.
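The matching step described above can be sketched as a group search over candidate meta-models scored by pairwise compatibility and fit to the problem. This is a hypothetical illustration only: the meta-model names, the compatibility values, and the greedy scoring rule are invented for the example and are not taken from the thesis.

```python
# Toy sketch of the matching step: each agent represents a candidate
# meta-model; groups are scored by average pairwise compatibility
# combined with average fit to the problem domain. All names and
# numbers below are illustrative assumptions.
from itertools import combinations

# Illustrative pairwise compatibility between candidate meta-models (0..1).
compat = {
    frozenset({"agents", "organizations"}): 0.9,
    frozenset({"agents", "interactions"}): 0.8,
    frozenset({"organizations", "interactions"}): 0.4,
}

# Illustrative compatibility of each meta-model with the given problem.
problem_fit = {"agents": 0.9, "organizations": 0.6, "interactions": 0.7}

def group_score(group):
    """Average pairwise compatibility blended with average problem fit."""
    pairs = list(combinations(group, 2))
    pair_score = sum(compat.get(frozenset(p), 0.0) for p in pairs) / len(pairs)
    fit_score = sum(problem_fit[m] for m in group) / len(group)
    return 0.5 * pair_score + 0.5 * fit_score

candidates = list(problem_fit)
groups = [g for r in range(2, len(candidates) + 1)
          for g in combinations(candidates, r)]
best = max(groups, key=group_score)
print(best, round(group_score(best), 3))
```

In the thesis the agents negotiate these groups among themselves rather than enumerating them centrally; the exhaustive search here only stands in for that interaction.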

Analýza burzovních dat metodami UI / Analysis of Stock Exchange Data Using AI Methods

Kutina, Michal, January 2008
The thesis "Analysis of stock-exchange data using AI methods" focuses on the use of neural networks for predicting exchange-rate movements on the stock exchange. The theoretical part is divided into three independent units. The first describes the stock exchange and its related terminology. The second analyzes the two basic approaches to stock-exchange data analysis: fundamental analysis and technical analysis. The third forms an independent unit describing artificial intelligence theory, with the topic of neural networks covered in particular detail. The practical part applies the chosen neural network, GAME, to the analysis of the chosen YMZ9 market, focusing on the prediction of exchange-rate movements using the "sliding window" method. The last chapter summarizes the results and shows that, under certain circumstances, neural networks can usefully be applied both to the prediction of stock-exchange movements and as one of the corner-stones of a profitable trading system.
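The "sliding window" scheme the abstract mentions can be sketched as follows: the last k prices form the input window and the next price is the prediction target. The GAME network itself is not reproduced here; a trivial moving-average predictor stands in for it, and the price series is invented for illustration.

```python
# Minimal sketch of sliding-window prediction: k past values -> next value.
# The predictor is a placeholder for the neural network used in the thesis.
def sliding_windows(series, k):
    """Yield (window, target) pairs from a price series."""
    for i in range(len(series) - k):
        yield series[i:i + k], series[i + k]

def predict_next(window):
    """Stand-in predictor: the mean of the window."""
    return sum(window) / len(window)

prices = [100.0, 101.0, 102.5, 101.5, 103.0, 104.0]
for window, target in sliding_windows(prices, k=3):
    print(window, "->", round(predict_next(window), 2), "actual:", target)
```

In the thesis the window would instead feed the GAME network, and the window length k becomes a tuning parameter of the trading system.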

Mediální reprezentace fenoménu deepfakes / Media representation of deepfake

Janjić, Saška, January 2022
This master thesis explores the media representation of deepfakes. The first part summarizes previous research, followed by a comprehensive review of deepfakes, including the technology enabling their emergence, current uses, and methods of regulation and detection. The second part connects the phenomenon with important theoretical concepts such as the social construction of reality and the crucial role of the media in this process. The empirical part consists of research combining two methods: quantitative content analysis and qualitative critical discourse analysis. The analysis focuses on media articles dealing with deepfakes in order to find out how the media represent this phenomenon. The results show that the current media discourse on deepfakes is strongly negative, as the media frame them as a security threat. This negative representation is highly speculative, since journalists, lacking current examples, often invent their own stories of future disastrous consequences of the technology for national security. The findings show an apparent hierarchy of the harms posed by deepfakes in media coverage, reflecting gender stereotypes and inequality in current society. Harm in the form of non-consensual fake pornography targeting women is neglected in the media...

Aide à la décision médicale et télémédecine dans le suivi de l’insuffisance cardiaque / Medical decision support and telemedicine in the monitoring of heart failure

Duarte, Kevin, 10 December 2018
This thesis is part of the "Handle your heart" project, aimed at developing a drug prescription assistance device for heart failure patients. In a first part, a study was conducted to highlight the prognostic value of an estimate of plasma volume, or of its variations, for predicting major short-term cardiovascular events. Two classification rules were used, logistic regression and linear discriminant analysis, each preceded by stepwise variable selection. Three indices measuring the improvement in discrimination ability obtained by adding the biomarker of interest were used. In a second part, in order to identify patients at short-term risk of dying or being hospitalized for progression of heart failure, an event risk score was constructed by an ensemble method using two classification rules (logistic regression and linear discriminant analysis of mixed data), bootstrap samples, and randomly selected predictors. We define an event risk measure by an odds ratio, and a measure of the importance of variables and groups of variables using standardized coefficients. We show a property of linear discriminant analysis of mixed data.
This methodology for constructing a risk score can be implemented in an online-learning setting, using stochastic gradient algorithms to update the predictors online. We address the problem of sequential multidimensional linear regression, particularly in the case of a data stream, using a stochastic approximation process. To avoid the phenomenon of numerical explosion that can be encountered, and to reduce computing time so that a maximum of arriving data can be taken into account, we propose to use a process with online standardized data instead of raw data, and to use several observations per step, or all observations up to the current step, without storing them. We define three processes and study their almost sure convergence: one with a variable step-size, an averaged process with a constant step-size, and a process with a constant or variable step-size that uses all observations up to the current step. These processes are compared with classical processes on 11 datasets. The third process, with a constant step-size, typically yields the best results.
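The idea of running a stochastic approximation process on online-standardized rather than raw data can be sketched as follows. This is a generic one-dimensional illustration, not the thesis's exact algorithm: the synthetic stream, the constant step-size, and the single-predictor setting are all simplifying assumptions.

```python
# Sketch: streaming linear regression by stochastic approximation, with
# running means/variances updated per observation so that each gradient
# step uses standardized data instead of raw values.
import random

random.seed(0)
mean_x = var_x = mean_y = 0.0
theta = 0.0          # slope on the standardized scale
step = 0.05          # constant step-size variant
n = 0

for _ in range(5000):
    x = random.gauss(0.0, 3.0)
    y = 2.0 * x + 1.0 + random.gauss(0.0, 0.1)   # synthetic data stream
    n += 1
    # Online (Welford-style) updates of mean and variance of x, mean of y.
    dx = x - mean_x
    mean_x += dx / n
    var_x += (dx * (x - mean_x) - var_x) / n
    mean_y += (y - mean_y) / n
    if var_x > 0:
        xs = (x - mean_x) / var_x ** 0.5          # standardized input
        err = theta * xs - (y - mean_y)
        theta -= step * err * xs                   # gradient step

# Recover the slope on the raw scale; it should approach 2.
slope = theta / var_x ** 0.5
print(round(slope, 2))
```

Standardizing online keeps the effective step-size well scaled even when the raw predictors have very different magnitudes, which is what prevents the numerical explosion the abstract mentions.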

Pulse Repetition Interval Modulation Classification using Machine Learning / Maskininlärning för klassificering av modulationstyp för pulsrepetitionsintervall

Norgren, Eric, January 2019
Radar signals are used for estimating the location, speed and direction of an object. Some radars emit pulses, while others emit a continuous wave. Both types of radar emit signals according to some pattern; a pulse radar, for example, emits pulses with a specific time interval between them. This interval may be stable, change linearly, or follow some other pattern. The interval between two emitted pulses is often referred to as the pulse repetition interval (PRI), and the pattern that defines the PRI is often referred to as the modulation. Classifying which PRI modulation a radar signal uses is a crucial component of identifying who is emitting the signal. Incorrectly classifying the modulation can lead to an incorrect guess of the identity of the emitting agent and can, as a consequence, be fatal. This work investigates how a long short-term memory (LSTM) neural network performs compared to a state-of-the-art feature-extraction neural network (FE-MLP) approach for the task of classifying PRI modulation. The results indicate that the proposed LSTM model performs consistently better than the FE-MLP approach across all tested noise levels. The downside of the proposed LSTM model is that it is significantly more complex than the FE-MLP approach. Future work could investigate whether the LSTM model is too complex to use in a real-world setting where computing power may be limited. Additionally, the LSTM model can be modified in a trivial manner to support more modulations than those tested in this work; hence, future work could also evaluate how the proposed model performs when support for more modulations is added.
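The input representation behind PRI modulation classification can be illustrated with synthetic interval sequences. The thesis classifies these with an LSTM, which would require a deep-learning framework to reproduce; this self-contained sketch instead separates three illustrative modulation types by their first and second differences. The modulation names and the tolerance are invented for the example.

```python
# Toy illustration of PRI sequences and a rule-based stand-in classifier:
# a constant PRI, a linearly changing ("sliding") PRI, and a jittered PRI
# are told apart by looking at first and second differences.
def classify_pri(pris, tol=1e-6):
    """Classify a PRI sequence as stable, sliding, or jittered."""
    diffs = [b - a for a, b in zip(pris, pris[1:])]
    if all(abs(d) < tol for d in diffs):
        return "stable"            # constant interval
    second = [b - a for a, b in zip(diffs, diffs[1:])]
    if all(abs(s) < tol for s in second):
        return "sliding"           # constant first difference
    return "jittered"              # irregular intervals

print(classify_pri([1.0, 1.0, 1.0, 1.0]))   # -> stable
print(classify_pri([1.0, 1.1, 1.2, 1.3]))   # -> sliding
print(classify_pri([1.0, 1.3, 0.9, 1.2]))   # -> jittered
```

Such hand-written rules break down under the noise levels the thesis studies (missing and spurious pulses), which is precisely why a learned sequence model like an LSTM is attractive there.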

The Shades of Styles: A human search for words communicating all aspects of styles.

Hellerslien, Erlend, January 2021
This research is an investigative inquiry into the development of the concept of style, aiming to bring out the diversity of our human history of viewing styles. Part one examines the problem of how styles develop and how they are characterized and represented in terms of their messages, realities, semiotics, and human collaboration, leading toward a search for a more commonly neutral view of style in support of a more meaningful dialogue. Part two presents the potential to build a Digital Style Dictionary and a Digital Visual Compass: a human-centric guide to seeing realities that can support identifying aspects of multiple realities (core reality, abstract reality, surreal reality and artificial reality). In part three, two cases of visual styles, Transpace and Swisch, are analyzed, discussed, reframed, and presented. Fundamentally, this paper seeks to provoke a discussion on what we humans want the point of seeing styles to be. The complexity is as grand as our diversity, but this research nonetheless highlights the hope of respectfully identifying the distinctive shades of styles for the sake of more significant human dialogue and inclusion. The research's grand ambition is knowingly bigger than what it can fully complete right now (2021); it proposes an idea for the near future: to shape a Digital Style Dictionary and a Digital Visual Compass that work for the common human aspect of seeing styles.
This research is a first attempt at shaping a fundamental frame for a spectrum of styles that we can respectfully continue to articulate, for the sake of better human communication about seeing distinctiveness, rather than placing styles in a value hierarchy between something "high" or "low." Instead, we can now start to collaborate in shaping and building these potential tools, a Digital Style Dictionary and a Digital Visual Compass, sharing a more human-centric spectrum of styles to push the human evolution of knowledge further.
