31

The Use of Nonword Repetition Tasks in the Assessment of Developmental Language Disorder in Bilingual Children

Kelly, Kirsten 17 June 2021 (has links)
To address the needs of the growing number of Spanish-English bilingual children in the United States, Nonword Repetition (NWR) tasks were created to reduce testing bias in the assessment and diagnosis of children with developmental language disorder (DLD). Several studies have shown promising results in the use of NWR tasks; however, fewer studies have addressed questions such as the choice of scoring method or the analysis of error patterns. This study was conducted to address these gaps in the research. An English and a Spanish NWR task were administered to 26 Spanish-English bilingual school-aged children (6;0-9;4). Two different scoring methods (percent phonemes correct and whole-word scoring) were compared for diagnostic accuracy, and the types and frequency of errors were analyzed. Both scoring methods showed statistically significant differences between groups (participants with DLD and those with typically developing language). Whole-word scoring in Spanish had the best diagnostic accuracy, according to sensitivity, specificity, and likelihood ratio measures. However, because few nonwords were repeated entirely correctly by any participant, this may not be a clinically practical scoring method. The Spanish NWR task was a better measure than the English NWR task in identifying children with DLD, suggesting that Spanish NWR could be used to assess DLD in bilingual children. Participants with DLD produced more consonant, vowel, substitution, and omission errors than those with typically developing language; there was no difference between groups for addition errors. Significantly more omission errors were made in Spanish, likely due to the longer nonwords, and these longer nonwords may be key in distinguishing between typically developing children and those with DLD. These results have the potential to inform future clinical practices in selecting, scoring, and analyzing NWR tasks.
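The diagnostic-accuracy measures named above (sensitivity, specificity, and the positive likelihood ratio) can be sketched in a few lines. The pass/fail data below are illustrative only, not the study's actual results.

```python
# Sketch of the diagnostic-accuracy measures used to compare NWR scoring
# methods. The classification data are hypothetical, purely for illustration.

def diagnostic_accuracy(results):
    """results: list of (has_dld, flagged_by_task) boolean pairs."""
    tp = sum(1 for dld, flag in results if dld and flag)
    fn = sum(1 for dld, flag in results if dld and not flag)
    fp = sum(1 for dld, flag in results if not dld and flag)
    tn = sum(1 for dld, flag in results if not dld and not flag)
    sensitivity = tp / (tp + fn)      # proportion of DLD cases correctly flagged
    specificity = tn / (tn + fp)      # proportion of typical cases correctly passed
    # Positive likelihood ratio: how much a below-cutoff score raises the odds of DLD.
    positive_lr = sensitivity / (1 - specificity) if specificity < 1 else float("inf")
    return sensitivity, specificity, positive_lr

# Hypothetical screening outcomes: (has DLD, scored below the cutoff).
sample = ([(True, True)] * 8 + [(True, False)] * 2
          + [(False, False)] * 14 + [(False, True)] * 2)
sens, spec, lr = diagnostic_accuracy(sample)
```

With these made-up counts the task flags 8 of 10 children with DLD (sensitivity 0.8) and passes 14 of 16 typical children (specificity 0.875), giving a positive likelihood ratio of 6.4.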
32

Combining the Power of Poetry, Repeated Readings, and Community Volunteers for Literacy Intervention: The Poetry Academy

Wilfong, Lori Georgianne 27 July 2006 (has links)
No description available.
33

Formal Methods for Intellectual Property Composition Across Synchronization Domains

Suhaib, Syed Mohammed 25 September 2007 (has links)
A significant part of the System-on-a-Chip (SoC) design problem is in the correct composition of intellectual property (IP) blocks. Ever increasing clock frequencies make it impossible for signals to reach from one end of the chip to the other end within a clock cycle; this invalidates the so-called synchrony assumption, where the timing of computation and communication are assumed to be negligible, and happen within a clock cycle. Missing the timing deadline causes this violation, and may have ramifications on the overall system reliability. Although latency insensitive protocols (LIPs) have been proposed as a solution to the problem of signal propagation over long interconnects, they have their own limitations. A more generic solution comes in the form of globally asynchronous locally synchronous (GALS) designs. However, composing synchronous IP blocks either over long multicycle delay interconnects or over asynchronous communication links for a GALS design is a challenging task, especially for ensuring the functional correctness of the overall design. In this thesis, we analyze various solutions for solving the synchronization problems related with IP composition. We present alternative LIPs, and provide a validation framework for ensuring their correctness. Our notion of correctness is that of latency equivalence between a latency insensitive design and its synchronous counterpart. We propose a trace-based framework for analyzing synchronous behaviors of different IPs, and provide a correct-by-construction protocol for their transformation to a GALS design. We also present a design framework for facilitating GALS designs. In the framework, Kahn process network specifications are refined into correct-by-construction GALS designs. We present formal definitions for the refinements towards different GALS architectures. 
For facilitating GALS in distributed embedded software, we analyze certain subclasses of synchronous designs using a Pomset-based semantic model that allows for desynchronization toward GALS.
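The notion of latency equivalence invoked above can be given a minimal operational reading: two signal traces are equivalent when they carry the same informative values in the same order, regardless of how many stall cycles separate them. The sketch below encodes stalls as `None`; this encoding is an illustrative assumption, not the thesis's formalism.

```python
# A minimal sketch of latency equivalence: a latency-insensitive trace is
# compared with its synchronous counterpart after discarding stall tokens
# (represented here as None). Equivalence holds when the informative values
# coincide in order, whatever the interleaved stalling.

def informative(trace):
    """Strip stall tokens, keeping only the informative values in order."""
    return [v for v in trace if v is not None]

def latency_equivalent(trace_a, trace_b):
    return informative(trace_a) == informative(trace_b)

synchronous = [1, 2, 3, 4]
latency_insensitive = [1, None, 2, None, None, 3, 4]  # same data, extra stalls
```

Under this reading, inserting or removing stalls never changes the equivalence class of a trace, which is exactly why a correct LIP wrapper preserves the synchronous design's behavior.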
34

Preuves d’algorithmes distribués par raffinement / Proofs of distributed algorithms by refinement

Tounsi, Mohamed 04 July 2012 (has links)
In this thesis, we have studied and developed a proof environment for distributed algorithms. We chose to combine the "correct-by-construction" approach based on the "Event-B" method with local computation models, which serve both to encode distributed algorithms and to prove them; these models define abstract computing processes for solving problems by distributed algorithms. We propose a pattern and an incremental approach that characterize a general proof strategy for several classes of distributed algorithms. The proposed solutions are validated and implemented in a proof tool called B2Visidia.
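As a rough intuition for the refinement relation underlying the "correct-by-construction" approach described above, refinement can be pictured as trace inclusion between transition systems: every behavior of the concrete model must already be a behavior of the abstract one. Event-B's actual proof obligations are far richer; the two tiny finite models below are purely illustrative.

```python
# An illustrative (not Event-B) reading of refinement as bounded trace
# inclusion: the concrete system refines the abstract one if every event
# trace it can produce is also a trace of the abstract system.

def traces(init, transitions, depth):
    """Enumerate all event traces up to the given length from a finite system."""
    result = {()}
    frontier = {(init, ())}
    for _ in range(depth):
        nxt = set()
        for state, trace in frontier:
            for src, event, dst in transitions:
                if src == state:
                    nxt.add((dst, trace + (event,)))
        result |= {t for _, t in nxt}
        frontier = nxt
    return result

# Abstract model: one state that may repeatedly perform "inc".
abstract = [("s", "inc", "s")]
# Concrete model: alternates between two internal states but emits the
# same "inc" events, so its observable traces are unchanged.
concrete = [("a", "inc", "b"), ("b", "inc", "a")]

abs_traces = traces("s", abstract, 4)
conc_traces = traces("a", concrete, 4)
refines = conc_traces <= abs_traces  # trace inclusion holds
```

A concrete model that introduced a new observable event would fail the inclusion, which is the intuition behind proving each refinement step rather than the final design in one go.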
35

Contribution to error analysis of algorithms in floating-point arithmetic / Contribution à l'analyse d'algorithmes en arithmétique à virgule flottante

Plet, Antoine 07 July 2017 (has links)
Floating-point arithmetic is an approximation of real arithmetic in which each operation may introduce a rounding error. The IEEE 754 standard requires elementary operations to be as accurate as possible, but over the course of a computation rounding errors accumulate and can lead to totally wrong results. This happens with an expression as simple as ab + cd, for which the naive algorithm sometimes returns a result with a relative error much larger than 1. It is therefore important to analyze algorithms in floating-point arithmetic in order to control the error they commit. In this thesis, we are interested in the analysis of small building blocks of numerical computing, for which we look for sharp bounds on the relative error. For sufficiently accurate algorithms, in radix β and precision p, one can often prove an error bound of the form α·u + o(u²), where α > 0 and u = (1/2)·β^(1−p) is the unit roundoff. To characterize the sharpness of such a bound, one can provide numerical examples for the standard precisions that come close to it, or an example parametrized by the precision that generates an error of the same form α·u + o(u²), thus proving the asymptotic optimality of the bound. Since checking such parametrized examples by hand is a tedious and error-prone task, we worked on the formalization of a symbolic floating-point arithmetic, over numbers parametrized by the precision, and implemented it as a library in the Maple computer algebra system. We also worked on the error analysis of the basic operations on complex numbers in floating-point arithmetic, and proved a very sharp error bound for an algorithm for the inversion of a complex number. This result suggests computing a complex division as x/y = (1/y)·x rather than with the more classical formula x/y = (x·ȳ)/|y|²: whatever algorithm is used for the complex multiplication, the error bound is smaller for the "inverse and multiply" approach. This is joint work with my PhD advisors, in collaboration with Claude-Pierre Jeannerod (CR Inria in AriC, at LIP).
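The total loss of accuracy described above for the naive evaluation of ab + cd is easy to reproduce. In the sketch below the inputs are deliberately chosen so that, in binary64 (u = 2⁻⁵³), the rounding of a·b followed by catastrophic cancellation wipes out every correct digit; here the relative error comes out exactly 1, and other inputs push it well beyond 1.

```python
from fractions import Fraction

# Naive evaluation of ab + cd in binary64. The exact product a*b equals
# 1 - 2**-54, which rounds (ties-to-even) to exactly 1.0, so the subsequent
# addition of c*d = -1.0 cancels to 0.0 while the exact result is -2**-54.
a, b = 1.0 + 2.0**-27, 1.0 - 2.0**-27
c, d = 1.0, -1.0

naive = a * b + c * d  # fl(fl(a*b) + fl(c*d)) = 0.0: no correct digit survives

# Exact rational value of ab + cd, for comparison.
exact = Fraction(a) * Fraction(b) + Fraction(c) * Fraction(d)  # -1/2**54
relative_error = abs(Fraction(naive) - exact) / abs(exact)
```

This is precisely the failure mode that motivates compensated algorithms for ab + cd (e.g. Kahan's FMA-based scheme) and the fine-grained error analyses pursued in the thesis.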
36

Rigorous Design Flow for Programming Manycore Platforms / Flot de conception rigoureux pour la programmation de plates-formes manycore.

Bourgos, Paraskevas 09 April 2013 (has links)
The advent of manycore platforms challenges our ability to design efficiently and predictably. To meet this challenge, designers need methods and tools for guaranteeing essential properties and for determining trade-offs between performance and efficient resource management. When designing a mixed software/hardware system, functional constraints and extra-functional specifications must both be taken into account as an essential part of embedded-system design, and the impact of design choices on the overall behavior of the system must be analyzed. This implies a deep understanding of the interaction between the application software and the underlying execution platform. We present a rigorous model-based design flow for building parallel applications running on top of manycore platforms. The flow is based on the BIP (Behavior, Interaction, Priority) component framework and its associated toolbox. From an application software and a mapping, the method generates a correct-by-construction mixed hardware/software system model for the target manycore platform, relying on source-to-source correct-by-construction transformations of BIP models. It provides full support for modeling application software and validating its functional correctness, for modeling and performance analysis of system-level models, and for code generation and deployment on target manycore platforms. The design flow is illustrated through the modeling and deployment of various software applications on two different hardware platforms: MPARM and P2012/STHORM. MPARM is a virtual ARM-based multi-cluster manycore platform, configured by the number of clusters, the number of ARM cores per cluster, and their interconnections; on MPARM, the applications considered are Cholesky factorization, MPEG-2 decoding, MJPEG decoding, the Fast Fourier Transform, and a demosaicing algorithm. Platform 2012 (P2012/STHORM) is a power-efficient, highly modular manycore computing fabric based on multiple clusters capable of aggressive fine-grained power management; as a case study on P2012/STHORM, we used the HMAX algorithm. Experimental results show the merits of the design flow, notably fast performance analysis as well as correct-by-construction system-level modeling, code generation, and efficient deployment.
37

Développement d'algorithmes répartis corrects par construction / Developing correct-by-construction distributed algorithms

Andriamiarina, Manamiary Bruno 20 October 2015 (has links)
The subject of this thesis is the formal development and verification of distributed algorithms, a topic we chose because proving that a distributed algorithm satisfies a given specification and its required properties is a difficult task. We use the Event B method (refinement, safety properties) and the temporal logic TLA (fairness, liveness properties) for modeling distributed algorithms. Among the existing approaches for formalizing distributed algorithms, we focus on the "correct-by-construction" paradigm, which is characterized by the use of model refinement, proof of properties (safety, liveness), and the reuse of formal models, proofs, properties, and developments (akin to design patterns). Our work introduces a paradigm in which a distributed algorithm is first characterized by the services it provides; these services are then expressed as liveness properties, which guide the construction of the Event B models of the algorithm. Inference rules from TLA then allow us to decompose the liveness properties, detailing the services and guiding the refinement process. This paradigm, called "service-as-event", is also characterized by assertion diagrams, which graphically represent liveness properties (taking fairness hypotheses into account) and clarify the mechanisms and functioning of the studied distributed algorithms. The "service-as-event" paradigm allowed us to develop and verify routing algorithms (Anycast RP from Cisco Systems and XY routing for networks-on-chip (NoC)), snapshot algorithms, and self-* algorithms.
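The decomposition of a service into liveness properties can be sketched with TLA's leads-to operator. The property names below are hypothetical, chosen only to illustrate how the transitivity rule splits a service-level obligation into finer ones that successive Event B refinements can discharge.

```latex
% Hypothetical "service-as-event" decomposition: the service "every request
% is eventually routed" is stated as a leads-to (liveness) property,
\[
  \mathit{Requested} \leadsto \mathit{Routed}
\]
% and TLA's transitivity rule splits it into two finer obligations,
\[
  \mathit{Requested} \leadsto \mathit{Forwarded},
  \qquad
  \mathit{Forwarded} \leadsto \mathit{Routed},
\]
% each of which guides one refinement step of the Event B development.
```

Each intermediate assertion then appears as a node of the assertion diagram, with fairness hypotheses attached to the transitions that realize the leads-to arrows.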
38

Hermenêutica jurídica heterorreflexiva: limites e possibilidades de uma filosofia no direito / Heteroreflexive legal hermeneutics: limits and possibilities of a philosophy within law

Carneiro, Wálber Araújo 07 December 2009 (has links)
This research follows Martin Heidegger's phenomenological-hermeneutic method and seeks to build a hermeneutic theory oriented toward the comprehension of law. It analyzes how knowledge was conceived in classical antiquity and the relation of philosophy to other forms of knowing. It shows how modern science and philosophy were sustained by the enframing of technique and how abstract rationality dominated the natural-law conceptions of the period. It also analyzes the anthropological tradition of modern law and the limits imposed on positive law by natural law. Taking into account the project of modernity and its distortion, it identifies the causes of the consolidation of primitive bourgeois positivism and the reduction of law to the text. The syllogistic method and the reduction of law to the text are the marks of the forgetting of the meaning of law in modernity. Aiming at the retrieval of being in law, after concluding that the critique of positivism developed in the…
39

A motivação das decisões cíveis como condição de possibilidade para resposta correta/adequada / The grounding of civil decisions as a condition of possibility for a correct/adequate answer

Motta, Cristina Reindolff da 17 September 2010 (has links)
The constitutional duty to give grounds for decisions makes it possible to reach a correct/adequate answer and is a condition of possibility for the decision's validity. It is through hermeneutics, with the analysis of the concrete case, that a correct/adequate answer to the case can be reached. The interpretation of the norm is not at the applier's discretion, which is why decisions require a hermeneutic reading in order to read and apply the law correctly, since the correct answer only arises in the concrete case. The correct decision must be based on law as integrity, apart from the discretion of the decision-maker, who could otherwise, through the creative power that discretion confers, decide according to his or her own subjectivity. This is the crucial point of the problem of grounding, and of the reasons why, within the democratic rule of law, it has become a fundamental right of the citizen and a fundamental duty of the judge and of the court. Democracy is therefore umbilically linked to decisional control. On the other hand, grounding does not mean "any grounding whatsoever", just as one cannot attribute "any meaning whatsoever to a given text". Seen from philosophical hermeneutics, the decision reveals a thoroughly anti-discretionary facet, leading to the correct answer in the concrete case. The decision must display the criteria that were used, both to demonstrate its correctness and to serve as guidance for future decisions. The lack of grounding leaves decisions without criteria and makes external control of decisions impossible. Hence, as a guarantee for the citizen and at the same time a constraint on the judge, the grounding of decisions is a fundamental guarantee.
40

Identifiability in Knowledge Space Theory: a survey of recent results

Doignon, Jean-Paul 28 May 2013 (has links) (PDF)
Knowledge Space Theory (KST) is linked in several ways to Formal Concept Analysis (FCA). Recently, the probabilistic and statistical aspects of KST have been further developed by several authors. We review some of these recent results and describe some of the open problems. Whether the outcomes can be useful in FCA remains to be investigated.
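The basic object of KST can be made concrete in a few lines: a knowledge structure is a family of subsets ("knowledge states") of an item set Q containing the empty set and Q itself, and it is a knowledge space when the family is closed under union. The five-state example below is illustrative, not taken from the survey.

```python
from itertools import combinations

# A minimal sketch of a KST knowledge space: a family of knowledge states
# over an item set Q that contains both the empty state and Q, and is
# closed under union. The example family is purely illustrative.

def is_knowledge_space(q, states):
    family = {frozenset(s) for s in states}
    if frozenset() not in family or frozenset(q) not in family:
        return False
    # Union closure: the union of any two states must again be a state.
    return all(a | b in family for a, b in combinations(family, 2))

Q = {"a", "b", "c"}
states = [set(), {"a"}, {"b"}, {"a", "b"}, {"a", "b", "c"}]
```

Dropping the state {"a", "b"} from the family would break union closure (since {"a"} ∪ {"b"} would no longer be a state), turning the knowledge space into a mere knowledge structure.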
