About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Zkoumání souvislostí mezi pokrytím poruch a testovatelností elektronických systémů / Investigating of Relations between Fault-Coverage and Testability of Electronic Systems

Rumplík, Michal January 2010 (has links)
This work deals with testability analysis of digital circuits and fault coverage. It contains a description of digital systems and their diagnosis, a description of tools for generating and applying tests, and sets of benchmark circuits. It describes the testing of circuits and experiments with the TASTE tool for testability analysis and with a commercial tool for generating and applying tests. The experiments focus on increasing the testability of circuits.
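
As a hedged illustration of the fault-coverage figure discussed above (not taken from the thesis), the following sketch computes coverage as the fraction of modeled faults that a test set detects; the fault names and counts are hypothetical.

```python
# Hypothetical illustration: fault coverage as the fraction of modeled
# stuck-at faults that a given test set detects.

def fault_coverage(modeled_faults, detected_faults):
    """Return the detected/modeled ratio, the usual fault-coverage metric."""
    modeled = set(modeled_faults)
    detected = set(detected_faults) & modeled   # ignore faults outside the model
    if not modeled:
        return 0.0
    return len(detected) / len(modeled)

# Example: 3 of 4 single stuck-at faults detected -> 75 % coverage.
faults = ["a/0", "a/1", "b/0", "b/1"]
caught = ["a/0", "b/0", "b/1"]
print(f"fault coverage: {fault_coverage(faults, caught):.0%}")
```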
72

Improving the Gameplay Experience and Guiding Bottom Players in an Interactive Mapping Game

Ambekar, Kiran 05 1900 (has links)
In game-based learning, motivating the players to learn by providing them a desirable gameplay experience is extremely important. However, this is not an easy task, considering the quality of today's commercial non-educational games. Throughout the gameplay, the player should neither be overwhelmed nor under-challenged. The best way to achieve this is to monitor the player's actions in the game, because these actions can reveal the reasons behind the player's performance as well as the competencies or knowledge the player lacks. Based on this information, in-game educational interventions in the form of hints can be provided to the player. The success of such games depends on their interactivity, motivational outlook and thus player retention. UNTANGLED is an online crowd-sourced mapping game developed by the Reconfigurable Computing Lab at UNT for the mapping problem of coarse-grained reconfigurable architectures (CGRAs); it is also an educational game for teaching the concepts of reconfigurable computing. This thesis performs a qualitative comparative analysis of the gameplay of low-performing players of UNTANGLED, and the implications of this analysis are used to provide recommendations for improving the gameplay experience of these players by guiding them. The recommendations include strategies for reaching a high score and a compact solution, hints in the form of preset patterns, and a clustering-based approach.
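
As a rough, hypothetical sketch of what a clustering-based grouping of players could look like (the thesis does not specify this implementation), the snippet below clusters invented gameplay features with k-means.

```python
# Hypothetical sketch: cluster players by simple gameplay features so that
# hints can be targeted at each group. Feature names and values are made up.
import numpy as np
from sklearn.cluster import KMeans

# one row per player: [moves per minute, undo count, final score]
gameplay = np.array([
    [12, 30, 150],
    [10, 28, 140],
    [45,  2, 900],
    [50,  1, 950],
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(gameplay)
for player, group in enumerate(labels):
    print(f"player {player} -> cluster {group}")
```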
73

Educação a Distância: estudo exploratório sobre a produção de materiais didáticos audiovisuais / Distance Education: an exploratory study on the production of audiovisual didactic materials

Caram, Nirave Reigota [UNESP] 27 June 2017 (has links)
The current scenario is characterized by significant changes due to, among other factors, the adoption of new technologies in the social, economic and cultural fields. Information has become a valuable product in contemporary society, which is mediated by Information and Communication Technologies (ICT). The educational field, following these changes, has adapted by adopting new technologies in teaching. Technological innovations facilitate the education of individuals, fostering discussion about free access and the democratization of education. The distance modality in higher education has expanded rapidly, making use of Audiovisual Didactic Materials (MDAs) in different formats to deliver content. Currently, Distance Education (EaD) is regulated in Brazil and has quality guidelines drawn up by the Ministry of Education (MEC) in official documents. In this context, the present research analyzes the production of MDAs in Higher Education Institutions (HEIs), evaluating the quality involved in their production process from the perspective of the multidisciplinary teams. A bibliographic review of the subject was carried out and, later, documentary research, with the purpose of supporting the study and directing the empirical phase, the subsequent stage of the investigation. Qualitative data collection was carried out at two HEIs that offer courses in the distance modality, with the objective of evaluating the perceived quality of the materials, based on interviews with the professionals of the Distance Education Centers (NEaDs): managers, content teachers and technical producers; and on the analysis of samples of MDAs using Bardin's Content Analysis method. It was possible to conclude, on the one hand, that the perception of quality in EaD is still not sufficiently defined by the HEIs, since the implementation of this modality of education is laborious and costly and therefore requires considerable planning and an innovative vision of the concept of education through virtuality. On the other hand, the expansion of this modality of education requires efforts by the Federal Government to update its quality benchmarks, the Quality Guidelines for EaD.
74

Mitteilungen des URZ 4/2001

Becher, Ehrig, Fritsche, Hübner, Meyer, Müller, Pester, Richter, Riedel, Ziegler 06 December 2001 (has links)
Contents: New tasks at the URZ - changes to the course programme; One year of CLiC - an overview; Scientific computing on CLiC; "camo - campus mobil" - the new campus wireless network; Mail filters - part 2; HBFG project: communication and application server infrastructure; MONARCH documents in non-local retrieval systems; SPARC III CPU server gulliver; software news; TeX-Stammtisch
75

Performance Evaluation of Data Intensive Computing In The Cloud

Kaza, Bhagavathi 01 January 2013 (has links)
Big data is a topic of active research in the cloud community. With increasing demand for data storage in the cloud, the study of data-intensive applications is becoming a primary focus. Data-intensive applications involve high CPU usage for processing large volumes of data on the scale of terabytes or petabytes. While some research exists on the performance of data-intensive applications in the cloud, none of it compares the Amazon Elastic Compute Cloud (Amazon EC2) and Google Compute Engine (GCE) clouds using multiple benchmarks. This study performs extensive research on the Amazon EC2 and GCE clouds using the TeraSort, MalStone and CreditStone benchmarks on the Hadoop and Sector data layers. Data collected on the Amazon EC2 and GCE clouds measure performance as the number of nodes is varied. This study shows that GCE is more efficient than Amazon EC2 for data-intensive applications.
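
A minimal, hypothetical sketch of the kind of node-scaling comparison described above; the runtimes are invented and serve only to show how speedup and parallel efficiency could be reported per cloud.

```python
# Hypothetical sketch: given measured benchmark runtimes at different cluster
# sizes, report speedup and parallel efficiency per cloud. Numbers are invented.
runtimes = {                    # nodes -> runtime in seconds
    "EC2": {2: 410, 4: 230, 8: 140},
    "GCE": {2: 380, 4: 200, 8: 115},
}

for cloud, times in runtimes.items():
    base_nodes = min(times)              # smallest cluster is the baseline
    base_time = times[base_nodes]
    for nodes, t in sorted(times.items()):
        speedup = base_time / t
        efficiency = speedup / (nodes / base_nodes)
        print(f"{cloud} {nodes} nodes: speedup {speedup:.2f}, efficiency {efficiency:.2f}")
```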
76

The Relationship Between Student Engagement and Student Retention of Adult Learners at Community Colleges

Spitzig, Janet 05 May 2021 (has links)
No description available.
77

Mitteilungen des URZ 4/2001

Becher, Ehrig, Fritsche, Hübner, Meyer, Müller, Pester, Richter, Riedel, Ziegler 06 December 2001 (has links)
Contents: New tasks at the URZ - changes to the course programme; One year of CLiC - an overview; Scientific computing on CLiC; 'camo - campus mobil' - the new campus wireless network; Mail filters - part 2; HBFG project: communication and application server infrastructure; MONARCH documents in non-local retrieval systems; SPARC III CPU server gulliver; software news; TeX-Stammtisch
78

Indikátory a benchmarky jako nástroj vzdělávací politiky EU / Indicators and benchmarks as a tool of the EU education policy

Hulík, Vladimír January 2011 (has links)
This diploma thesis focuses on the issue of indicators and benchmarks in European education policy. Theoretical concepts linked to these questions are introduced in the first part: the theory of Europeanization, the open method of coordination, the theory of benchmarking (with a particular focus on the public sector), and the politics of indicators as a critical perspective on the field. The second part describes the historical development of European education policy and the growth of its importance over the last 20 years, and identifies three initiatives of European education policy in which indicators and benchmarking methods are used. The last part studies the mutual relations among the indicators (for which the benchmarks are set) by means of a correlation matrix (Pearson's correlation).
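
A minimal sketch of the correlation-matrix step mentioned above, assuming hypothetical indicator series; it simply computes pairwise Pearson correlations and is not taken from the thesis.

```python
# Hypothetical illustration: pairwise Pearson correlations between education
# indicators measured across countries. Indicator names and values are made up.
import numpy as np

indicators = {
    "early_school_leavers": [10.2, 13.5,  7.8, 11.1,  9.4],
    "tertiary_attainment":  [40.1, 33.2, 47.5, 38.0, 42.3],
    "adult_participation":  [ 9.5,  6.1, 14.2,  8.7, 10.9],
}

names = list(indicators)
matrix = np.corrcoef([indicators[n] for n in names])  # rows = variables
for i, row in enumerate(matrix):
    print(names[i], np.round(row, 2))
```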
79

Finer grained evaluation methods for better understanding of deep neural network representations

Bordes, Florian 08 1900 (has links)
Carefully designing benchmarks to evaluate the safety of Artificial Intelligence (AI) agents is a much-needed step to precisely know the limits of their capabilities and thus prevent the potential damage they could cause if used beyond those limits. Researchers and engineers should be able to draw precise pictures of the failure modes of a given AI system and find ways to mitigate them. Drawing such portraits requires reliable tools and principles that are transparent, up to date, and easy for practitioners to use. Unfortunately, most of the benchmark tools used in research are outdated and quickly fall behind the fast pace at which the capabilities of deep neural networks improve. In this thesis by articles, I focus on establishing more fine-grained evaluation methods and principles to gain a better understanding of deep neural networks and their limitations. In the first article, I present the Representation Conditional Diffusion Model (RCDM), a state-of-the-art visualization method that can map any deep neural network representation to the image space. Using the latest advances in generative modeling, RCDM sheds light on what is learned by deep neural networks by allowing practitioners to visualize the richness of a given representation. In the second article, I (re)introduce Guillotine Regularization (GR), a trick that has long been used in transfer learning, from a novel understanding and viewpoint grounded in the self-supervised learning literature. We show that evaluating a model after removing its last layers is important to ensure better generalization across different downstream tasks. In the third article, I introduce the DejaVu score, which quantifies how much models memorize their training data. This score leverages partial information from a given image, such as a crop, and evaluates how much information can be retrieved about the entire image from this partial content alone. In the last article, I introduce the Photorealistic Unreal Graphics (PUG) datasets and benchmarks. In contrast to real data, for which obtaining annotations is often a costly and long process, synthetic data offers complete control over the elements in the scene and their labels. In this work, we leverage a powerful game engine that produces high-quality, photorealistic images to evaluate the robustness of pre-trained neural networks without additional finetuning.
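
A hedged sketch of the layer-removal idea behind Guillotine Regularization, using a toy PyTorch model as a stand-in for a real pretrained backbone; this is illustrative only and not the thesis code.

```python
# Hypothetical sketch: evaluate a pretrained network after cutting off its last
# layer(s) and using the truncated output as the representation for probing.
import torch
import torch.nn as nn

backbone = nn.Sequential(          # toy encoder + projector head
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),  # <- representation often probed here
    nn.Linear(64, 16),             # projector head removed at evaluation time
)

x = torch.randn(8, 32)
full_output = backbone(x)          # shape (8, 16), projector space
guillotined = backbone[:-1]        # drop the last layer ("cut the head")
representation = guillotined(x)    # shape (8, 64), used for downstream probes
print(full_output.shape, representation.shape)
```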
80

Performance Evaluation of Kotlin and Java on Android Runtime / Prestandautvärdering av Kotlin och Java för Android Runtime

Schwermer, Patrik January 2018 (has links)
This study evaluates the performance of Kotlin and Java on the Android Runtime using four benchmarks from the Computer Language Benchmarks Game suite, for which a total of 12 benchmark implementations are studied. The metrics used to evaluate performance include runtime, memory consumption, garbage collection, boxing of primitives, and bytecode n-grams. To benchmark the languages, a benchmark application intended to run on an Android phone has been developed. The results indicate that Kotlin is slower than Java for all studied benchmarks, by a varying factor. Furthermore, the use of idiomatic Kotlin features and constructs results in additional heap pressure and the need for boxed primitives. Other interesting results indicate an underlying garbage-collection overhead when reclaiming Kotlin objects compared to Java. Furthermore, Kotlin produces larger and more varied bytecode than Java for a majority of the benchmarks.
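
A small, hypothetical illustration of the bytecode n-gram metric mentioned above: count sliding windows of n consecutive opcode mnemonics in a method's bytecode. The opcode sequence is invented and not taken from the study.

```python
# Hypothetical sketch: frequency of opcode n-grams in a disassembled method.
from collections import Counter

def ngrams(opcodes, n=2):
    """Count n-grams over a sequence of opcode mnemonics."""
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

method_bytecode = ["iload_1", "iload_2", "iadd", "istore_3", "iload_3", "ireturn"]
for gram, count in ngrams(method_bytecode).most_common(3):
    print(" ".join(gram), count)
```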
