1

CRL2ALF: A Translator from PowerPC to ALF / CRL2ALF: En översättare från PowerPC till ALF

Björnhager, Jens January 2011
Real-time systems place hard requirements on the temporal behaviour of the software running on them: programs must behave deterministically and respond within set time limits. With these demands comes a greater need for verification tools. WCET (Worst-Case Execution Time) analysis aims to derive an upper bound on a program's execution time. SWEET (SWEdish Execution Time) is a tool for WCET analysis developed by a research group at Mälardalen University. PowerPC is a classic processor architecture, developed by Apple, Motorola and IBM and released in 1991; it has been used in older versions of Apple's Macintosh computers and in video game consoles such as Nintendo's GameCube, and remains a popular choice for embedded systems. Previously, SWEET could only analyse code generated from C source. The goal of this MSc thesis was to enable analysis of executables for which no source code is available. This was done by constructing a translator from PowerPC binaries to ALF, the program format SWEET uses for its static analyses, with the help of the third-party tool aiT from AbsInt GmbH. The result is a translator from PowerPC programs to ALF code that is complete except for floating-point instructions. Most of the generated program files tested in SWEET have given successful results.
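
To make the translation step concrete, the following minimal C++ sketch (our illustration, not the thesis code) decodes one PowerPC integer instruction, add rD,rA,rB, from its 32-bit encoding and emits a generic three-address statement. The bit fields follow the standard PowerPC encoding; the printed output is a placeholder for illustration only and is not actual ALF syntax.

    #include <cstdint>
    #include <cstdio>

    // Decode one PowerPC "add rD, rA, rB" instruction (primary opcode 31,
    // extended opcode 266) and print a generic three-address statement.
    void translate(uint32_t insn) {
        uint32_t opcd = insn >> 26;          // primary opcode (bits 0-5, big-endian numbering)
        uint32_t rD   = (insn >> 21) & 0x1F; // destination register
        uint32_t rA   = (insn >> 16) & 0x1F; // first source register
        uint32_t rB   = (insn >> 11) & 0x1F; // second source register
        uint32_t xo   = (insn >> 1) & 0x3FF; // extended opcode (bits 21-30)
        if (opcd == 31 && xo == 266) {
            std::printf("r%u := r%u + r%u\n",
                        static_cast<unsigned>(rD),
                        static_cast<unsigned>(rA),
                        static_cast<unsigned>(rB));
        } else {
            std::printf("; unhandled instruction 0x%08X\n",
                        static_cast<unsigned>(insn));
        }
    }

    int main() {
        translate(0x7C642A14); // add r3, r4, r5  ->  "r3 := r4 + r5"
        return 0;
    }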
2

OvulaRing® - A New Method of Cycle Diagnostics in Women: Continuous Core Body Temperature Measurement Presented as a Cyclofertilogram, and the Development of a Score for Cycle Classification / Ovula Ring® - Eine neue Methode der Zyklusdiagnostik bei der Frau. Die Darstellung der kontinuierlichen Körperkerntemperaturmessung in Form eines Cyclofertilogramms und die Erarbeitung eines Scores zur Zyklus-Klassifikation

Komár, Marta Alicja 10 March 2020
OvulaRing® - A New Method of Cycle Diagnostics in Women: Continuous Core Body Temperature Measurement Presented as a Cyclofertilogram, and the Development of a Score for Cycle Classification: 1 Introduction 1.1 A historical review of body temperature measurement 1.1.1 Carl August Wunderlich 1.1.2 Adapting thermometers for clinical practice 1.1.3 William Squire and Mary Putnam Jacobi 1.1.4 Ludwig Fraenkel 1.1.5 Theodoor Hendrik Van de Velde 1.1.6 Gerhard Döring 1.2 Core body temperature 1.3 Body temperature measurement in medicine 1.4 New temperature measurement methods in cycle diagnostics 1.5 The physiology of the female menstrual cycle 1.5.1 Ovarian cycle 1.5.2 Endometrial cycle 1.5.3 Biphasic cycle 1.5.4 The influence of female hormones on core body temperature 1.6 The influence of cycle disorders on core body temperature 1.6.1 Corpus luteum insufficiency 1.6.2 LUF syndrome 1.6.3 Oligo-ovulation and anovulation 1.7 Unfulfilled desire for children 2 Objectives 3 Materials and methods 3.1 Materials 3.1.1 Measuring instrument with technical description 3.1.2 Handling 3.1.3 Cycle diaries 3.1.4 Questionnaire on wearing comfort 3.1.5 Chronotype questionnaire 3.2 Methods 3.2.1 Study design and data collection 3.2.2 Planned study procedure 3.2.3 Procedure for desired events 3.2.4 Cycle evaluation with OvulaRing® 3.2.5 Cyclofertilogram score 3.2.6 Temperature data 3.2.7 Data protection 4 Results 4.1 Description of the study population 4.2 Age and body mass index 4.3 Study procedure as conducted 4.4 Reasons for study participation 4.5 Registered pregnancies 4.6 Results of cycle monitoring and determination of the CFG score 4.6.1 Distribution of cycle lengths 4.6.2 Normal cycles 4.6.3 Ovulation timing 4.6.4 Oligo-ovulatory cycles 4.6.5 Cycles with LUF syndrome 4.6.6 Hypothermic and hyperthermic cycle phases 4.6.7 Cyclofertilograms of registered pregnancies 4.7 Results of the temperature analysis 4.7.1 Nadir 4.7.2 Periovulatory temperature rise 4.8 Evaluation of the questionnaires 5 Discussion 5.1 Cycle length 5.2 Ovulation 5.3 Hyperthermic cycle phase 5.4 Cyclofertilogram score 5.5 Biorhythm and nadir 5.6 Wearing comfort of the OvulaRing® 5.7 OvulaRing® and other continuous temperature measurement methods in cycle diagnostics 6 Summary 7 Bibliography 8 List of figures 9 List of tables 10 Appendix (cycle diary, declaration of authorship, publications, curriculum vitae, acknowledgements)
3

The Efficacy of Forward-Edge Control-Flow Integrity in Mitigating Memory Corruption Vulnerabilities: The Case of the Android Stack

Olofsson, Viktor January 2023
Memory corruption is one of the oldest and most prominent problems in the field of computer security. To mitigate the vulnerabilities that arise from memory corruption, a technique called Control-Flow Integrity (CFI) was developed. The Android Open Source Project applies a specific implementation of the CFI policy, forward-edge CFI, when compiling the Android system. However, memory corruption vulnerabilities are still a problem for Android systems, which raises the question: how effective is forward-edge CFI at mitigating memory corruption vulnerabilities? This research analyzes the efficacy of forward-edge CFI in mitigating memory corruption vulnerabilities in Android systems, by examining nine Common Vulnerabilities and Exposures (CVEs) in terms of how they can be exploited and whether forward-edge CFI could mitigate them. Additionally, the Android binaries containing the vulnerabilities are analyzed in an attempt to detect the presence of CFI instrumentation. CFI was detected in only one of the nine vulnerable Android binaries, implying that there exist memory corruption vulnerabilities that forward-edge CFI cannot protect against. The CVE analysis showed that five of the nine CVEs could be mitigated by forward-edge CFI. These results indicate that forward-edge CFI can mitigate a portion of the memory corruption vulnerabilities plaguing Android systems, but that it should be combined with other mitigation techniques, such as shadow stacks, to protect against a greater share of them.
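
To illustrate the property being evaluated, here is a small hedged C++ example (our own construction, not from the thesis) of the check forward-edge CFI adds at indirect call sites: control may only transfer to a function whose type matches the call site. The build line in the comment shows Clang's CFI scheme, which is the one AOSP uses.

    // Hypothetical build line for Clang's CFI scheme:
    //   clang++ -flto -fvisibility=hidden -fsanitize=cfi cfi_demo.cpp
    #include <cstdio>

    void greet() { std::puts("legitimate target"); }
    int secret(int x) { return x * 42; } // type does not match the call site below

    int main() {
        void (*fp)() = greet;
        fp(); // allowed: greet matches the call site's function type

        // Simulate a memory-corruption hijack of the pointer. Without CFI
        // this jumps into secret(); with -fsanitize=cfi the type check at
        // the indirect call fails and the process aborts.
        fp = reinterpret_cast<void (*)()>(&secret);
        fp();
        return 0;
    }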
4

Deep neural semantic parsing: translating from natural language into SPARQL / Análise semântica neural profunda: traduzindo de linguagem natural para SPARQL

Luz, Fabiano Ferreira 07 February 2019
Semantic parsing is the process of mapping a natural-language sentence into a machine-readable, formal representation of its meaning. The LSTM encoder-decoder is a neural architecture able to map a source sequence into a target one. We are interested in the problem of mapping natural language into SPARQL queries, and we seek to contribute strategies that do not rely on handcrafted rules, high-quality lexicons, manually built templates or other handmade complex structures. In this context, we present two contributions to the semantic parsing problem, both building on the LSTM encoder-decoder. While natural language has well-defined vector representation methods that draw on very large volumes of text, formal languages such as SPARQL suffer from a lack of suitable methods for vector representation. In the first contribution we improve the representation of SPARQL vectors: we start by obtaining an alignment matrix between the two vocabularies, natural-language and SPARQL terms, which allows us to refine a vector representation of SPARQL items, and with this refinement we obtained better results in the subsequent training of the semantic parsing model. In the second contribution we propose a neural architecture, which we call Encoder CFG-Decoder, whose output conforms to a given context-free grammar. Unlike the traditional LSTM encoder-decoder, our model provides a grammatical guarantee for the mapping process, which is particularly important in practical settings where grammatical errors can cause critical failures in a compiler or interpreter. Results confirm that every output generated by our model obeys the given CFG, and we observe an improvement in translation accuracy compared with other results from the literature.
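
The following toy C++ sketch (our illustration, not the thesis implementation) shows the core idea behind such a grammar-constrained decoder: at every step, the best-scoring token is chosen only among the tokens the grammar allows next, so the emitted query is well-formed by construction. A small state machine stands in for the CFG, and random scores stand in for the LSTM decoder.

    #include <cstddef>
    #include <iostream>
    #include <random>
    #include <string>
    #include <utility>
    #include <vector>

    // One grammar state: the tokens permitted next and the state each leads to.
    using Choices = std::vector<std::pair<std::string, int>>;

    int main() {
        // Toy right-linear grammar for a SPARQL skeleton
        // "SELECT ?v WHERE { ?s <p> ?o }"; state -1 means "done".
        std::vector<Choices> grammar = {
            {{"SELECT", 1}},
            {{"?x", 2}, {"?y", 2}},
            {{"WHERE", 3}},
            {{"{", 4}},
            {{"?x", 5}, {"?y", 5}},
            {{"<p>", 6}, {"<q>", 6}},
            {{"?x", 7}, {"?y", 7}},
            {{"}", -1}},
        };

        std::mt19937 rng(42);
        std::uniform_real_distribution<double> score(0.0, 1.0); // stand-in for decoder scores

        int state = 0;
        while (state != -1) {
            const Choices& opts = grammar[state];
            std::size_t best = 0;
            double bestScore = -1.0;
            for (std::size_t i = 0; i < opts.size(); ++i) {
                double s = score(rng); // a real decoder scores all vocabulary tokens;
                if (s > bestScore) {   // we only ever compare the grammar-legal ones
                    bestScore = s;
                    best = i;
                }
            }
            std::cout << opts[best].first << ' ';
            state = opts[best].second;
        }
        std::cout << '\n'; // always a well-formed query, e.g. "SELECT ?x WHERE { ?y <q> ?x }"
        return 0;
    }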
5

Automatic Alignment of French-Japanese Parallel Texts / Alignement automatique de textes parallèles Français-Japonais

Nakamura-Delloye, Yayoi 17 December 2007
Automatic alignment consists in finding correspondences between units of parallel texts. We are particularly interested in building a system that performs alignment at the clause level, a unit useful in many applications. This thesis consists of two kinds of work: introductory work and the work forming the core of the thesis, the latter organized around the notion of the syntactic clause. The introductory work comprises a survey of alignment in general as well as work devoted to sentence alignment; it led to a sentence alignment system adapted to the processing of French and Japanese texts. The core of the thesis consists of two types of work: linguistic studies and software implementations. The linguistic studies divide into two topics: the clause in French and the clause in Japanese. The goal of our study of the French clause is to define a grammar for clause detection; to this end, we sought to define a typology of clauses based on purely formal criteria. In the study of Japanese, we first define the Japanese sentence on the basis of the theme-rheme opposition, and then attempt to elucidate the notion of the clause. The software implementations comprise three tasks that together make up clause alignment, embodied in three separate systems: two clause detectors (one for French and one for Japanese) and a clause alignment system.
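
For readers unfamiliar with what alignment computes, here is a hedged C++ sketch of a classic length-based dynamic program in the spirit of Gale and Church (1993); the thesis's French-Japanese system is considerably more elaborate, but the recurrence below captures the basic mechanism of choosing 1-1, 1-2 or 2-1 pairings of units that minimise a length-mismatch cost.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <limits>
    #include <vector>

    // Penalise pairings whose character lengths diverge.
    double cost(int a, int b) {
        return std::fabs(std::log((a + 1.0) / (b + 1.0)));
    }

    int main() {
        std::vector<int> src = {42, 10, 57}; // unit lengths, source text
        std::vector<int> tgt = {45, 30, 38}; // unit lengths, target text
        const double INF = std::numeric_limits<double>::infinity();
        std::size_t n = src.size(), m = tgt.size();

        // d[i][j]: minimal cost of aligning the first i source and j target units.
        std::vector<std::vector<double>> d(n + 1, std::vector<double>(m + 1, INF));
        d[0][0] = 0.0;
        for (std::size_t i = 0; i <= n; ++i)
            for (std::size_t j = 0; j <= m; ++j) {
                if (i >= 1 && j >= 1) // 1-1 pairing
                    d[i][j] = std::min(d[i][j], d[i-1][j-1] + cost(src[i-1], tgt[j-1]));
                if (i >= 1 && j >= 2) // 1-2: one source unit, two target units
                    d[i][j] = std::min(d[i][j], d[i-1][j-2] + cost(src[i-1], tgt[j-2] + tgt[j-1]));
                if (i >= 2 && j >= 1) // 2-1: two source units, one target unit
                    d[i][j] = std::min(d[i][j], d[i-2][j-1] + cost(src[i-2] + src[i-1], tgt[j-1]));
            }
        std::cout << "alignment cost: " << d[n][m] << '\n';
        return 0;
    }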
6

Generování modelů pro testy ze zdrojových kódů / Generating Test Models from Source Code

Kraut, Daniel January 2019
The aim of this master's thesis is to design and implement a tool for automatic generation of test paths from source code. First, a study was made of model-based testing and of possible designs for the desired automatic generator, based on coverage criteria defined on a CFG (control-flow graph) model. The core of the thesis is the design of the tool and a description of its implementation. The tool supports many coverage criteria, allowing its user to focus on a specific artefact of the system under test. Moreover, it accepts additional requirements on the size of the generated test suite, reflecting practical real-world usage. The generator was implemented in C++, with a web interface in Python that is also used to integrate the tool into the Testos platform.
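
As a hedged illustration of what such a generator produces (our sketch, not the thesis tool), the C++ program below enumerates every entry-to-exit path of a small acyclic CFG by depth-first search. Full path enumeration corresponds to the strongest criterion, path coverage; weaker criteria such as node or edge coverage would select a subset of these paths.

    #include <iostream>
    #include <vector>

    using CFG = std::vector<std::vector<int>>; // adjacency list: node -> successors

    // Depth-first enumeration of all paths from `node` to `exit`.
    void enumerate(const CFG& g, int node, int exit,
                   std::vector<int>& path, std::vector<std::vector<int>>& out) {
        path.push_back(node);
        if (node == exit) {
            out.push_back(path);
        } else {
            for (int succ : g[node]) enumerate(g, succ, exit, path, out);
        }
        path.pop_back();
    }

    int main() {
        // Toy CFG: an if-else (0 -> 1 or 2 -> 3) followed by an if (3 -> 4 or 5),
        // with node 5 as the exit.
        CFG g = {{1, 2}, {3}, {3}, {4, 5}, {5}, {}};
        std::vector<int> path;
        std::vector<std::vector<int>> paths;
        enumerate(g, 0, 5, path, paths);
        for (const auto& p : paths) { // prints the 4 entry-to-exit paths
            for (int v : p) std::cout << v << ' ';
            std::cout << '\n';
        }
        return 0;
    }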
7

Literacy Training in an Urban High School Professional Learning Community

Ross-Norris, Vicki Sandra 01 January 2017
The purpose of this study was to explore the essence of professional learning experiences shared by teachers who participated in a professional learning community (PLC) at a New York City high school in the South Bronx. Guided by Hord's PLC characteristics and Bruner's constructivism theories, this phenomenological study addressed the research questions of what PLC practices urban high school teachers employ to support the academic-literacy achievement of their students of low socioeconomic status (SES), the role of administration in the PLC process, and the roles of a shared mission, values, vision, norms, and collaborative knowledge in the functioning of the PLC. Data collected from the 6 PLC teachers included semi-structured individual interviews, observations of PLC meetings over a 2-month period, participating teachers' reflective journal entries, and a researcher's log. Manual data analysis consisted of reading the raw data multiple times to determine patterns, themes, and relationships; concept and descriptive coding approaches facilitated analysis of the data sources, and gerund words and short phrases generated labels and categories that resulted in symbolic representations. The results were that the urban high school teachers demonstrated Hord's PLC characteristics and Bruner's constructivism theories within their PLC's practices and principles, leading to decision-making and solutions to problems such as improving teachers' literacy practices, students' literacy skills and classroom behavior, and the school-wide Individualized Education Plan process. The findings of this study support the engagement of urban high school teachers in self-directed PLC activities that may promote social change by improving literacy instruction and literacy achievement among students of low SES.
8

On-the-Fly Dynamic Dead Variable Analysis

Self, Joel P. 22 March 2007
State explosion in model checking continues to be the primary obstacle to widespread use of software model checking. The large input ranges of variables used in software are the main cause of state explosion, and as software grows in size and complexity the problem only becomes worse. As such, model checking research into data abstraction as a way of mitigating state explosion has become increasingly important. Data abstractions aim to reduce the effect of large input ranges. This work focuses on a static program analysis technique called dead variable analysis. The goal of dead variable analysis is to discover variable assignments whose values are never used. When applied to model checking, this allows us to ignore the entire input range of dead variables and thus reduce the size of the explored state space. Prior research into dead variable analysis for model checking does not make full use of the dynamic run-time information that is present during model checking. We present an algorithm for intraprocedural dead variable analysis that uses dynamic run-time information to find more dead variables on-the-fly and further reduce the size of the explored state space. We introduce a definition for the maximal state space reduction possible through an on-the-fly dead variable analysis and then show that our algorithm produces a maximal reduction in the absence of non-determinism.
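
The C++ sketch below (our own, not from the thesis) shows the classic static liveness computation that dead variable analysis builds on: a single backward pass in which an assignment is reported dead when its target is not live afterwards. The thesis's contribution is to sharpen such results on-the-fly with the dynamic information available during model checking.

    #include <iostream>
    #include <set>
    #include <string>
    #include <vector>

    struct Stmt {
        std::set<std::string> def; // variables written by the statement
        std::set<std::string> use; // variables read by the statement
    };

    int main() {
        // Straight-line program: x = input(); y = x + 1; x = 2; return y;
        std::vector<Stmt> prog = {
            {{"x"}, {}},
            {{"y"}, {"x"}},
            {{"x"}, {}},  // this value of x is never read again
            {{}, {"y"}},
        };

        std::set<std::string> live; // live-out of the final statement: empty
        for (int i = static_cast<int>(prog.size()) - 1; i >= 0; --i) {
            // An assignment is dead when its target is not live afterwards.
            for (const auto& d : prog[i].def)
                if (!live.count(d))
                    std::cout << "dead assignment to " << d
                              << " at statement " << i << '\n';
            // Backward transfer: live-in = (live-out \ def) U use.
            for (const auto& d : prog[i].def) live.erase(d);
            for (const auto& u : prog[i].use) live.insert(u);
        }
        return 0; // prints: dead assignment to x at statement 2
    }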
