141

Data-Driven Database Education: A Quantitative Study of SQL Learning in an Introductory Database Course

Von Dollen, Andrew C 01 July 2019 (has links)
The Structured Query Language (SQL) is widely used and challenging to master. Within the context of lab exercises in an introductory database course, this thesis analyzes the student learning process and seeks to answer the question: "Which SQL concepts, or concept combinations, trouble students the most?" We provide comprehensive taxonomies of SQL concepts and errors, identify common areas of student misunderstanding, and investigate the student problem-solving process. We present an interactive web application used by students to complete SQL lab exercises. In addition, we analyze data collected by this application and we offer suggestions for improvement to database lab activities.
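As a brief, hypothetical illustration (not taken from the thesis or its lab materials), the Python/sqlite3 sketch below shows one concept combination of the kind such a taxonomy covers: aggregating with GROUP BY and filtering groups with HAVING, a pairing that introductory students often confuse with WHERE.

```python
import sqlite3

# Illustrative example only: a tiny table of student submissions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE submission (student TEXT, exercise TEXT, correct INTEGER);
    INSERT INTO submission VALUES
        ('ana', 'q1', 1), ('ana', 'q2', 0),
        ('bob', 'q1', 1), ('bob', 'q2', 1);
""")

# Students with more than one correct submission: the row-level filter
# (correct = 1) belongs in WHERE, while the filter on the aggregate
# COUNT(*) must go in HAVING.
rows = conn.execute("""
    SELECT student, COUNT(*) AS n_correct
    FROM submission
    WHERE correct = 1
    GROUP BY student
    HAVING COUNT(*) > 1
""").fetchall()
print(rows)  # [('bob', 2)]
```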
142

Možnosti přípravy bíle emitujícího elektroluminiscenčního panelu / Preparation of white-electroluminescent panel

Guricová, Patrícia January 2019 (has links)
The aim of this work is to prepare a white-emitting electroluminescent device using printing techniques. Preparation options are discussed with the goal of minimising reabsorption in the phosphor layer and thus increasing the overall radiation intensity. Model devices were prepared with the active phosphor layer printed in patterns of stripes and circles, and the impact of the applied voltage and frequency was studied on these devices. It was shown that, in terms of white emission, the printed patterns perform better than the phosphor mixture: the emission intensities of the two phosphors are more balanced and therefore closer to white light. The output of this work is a model designed to determine the frequency range needed to obtain white emission from an ACEL device.
143

Effective and annotation efficient deep learning for image understanding / Méthodes d'apprentissage profond pour l'analyse efficace d'images en limitant l'annotation humaine

Gidaris, Spyridon 11 December 2018 (has links)
Recent developments in deep learning have achieved impressive results on image understanding tasks. However, designing deep learning architectures that effectively solve the image understanding tasks of interest is far from trivial. Moreover, the success of deep learning approaches relies heavily on the availability of large amounts of manually labeled (by humans) data, which is both costly and impractical at scale. In this context, the objective of this dissertation is to explore deep learning based approaches for core image understanding tasks that increase the effectiveness with which these tasks are performed and make the learning process more annotation efficient, i.e., less dependent on the availability of large amounts of manually labeled training data. We first focus on improving the state of the art in object detection. More specifically, we attempt to boost the ability of object detection systems to recognize (even difficult) object instances by proposing a multi-region and semantic segmentation-aware ConvNet-based representation that is able to capture a diverse set of discriminative appearance factors. We also aim to improve the localization accuracy of object detection systems by proposing iterative detection schemes and a novel localization model for estimating the bounding box of an object. We demonstrate that the proposed technical novelties lead to significant improvements in object detection performance on the PASCAL and MS COCO benchmarks. Regarding the pixel-wise image labeling problem, we explore a family of deep neural network architectures that perform structured prediction by learning to (iteratively) improve some initial estimate of the output labels. The goal is to identify the optimal architecture for implementing such deep structured prediction models. In this context, we propose to decompose the label improvement task into three steps: 1) detecting the initial label estimates that are incorrect, 2) replacing the incorrect labels with new ones, and finally 3) refining the renewed labels by predicting residual corrections with respect to them. We evaluate the explored architectures on the disparity estimation task and demonstrate that the proposed architecture achieves state-of-the-art results on the KITTI 2015 benchmark. To reduce the dependence on human annotation effort, we propose a self-supervised learning approach that learns ConvNet-based image representations by training the network to recognize the 2d rotation applied to the image it receives as input. We empirically demonstrate that this apparently simple task provides a very powerful supervisory signal for semantic feature learning: the image features learned from this rotation prediction task give very good results when transferred to object detection and semantic segmentation, surpassing prior unsupervised learning approaches and thus narrowing the gap with the supervised case. Finally, also in the direction of annotation efficient learning, we propose a novel few-shot object recognition system that, after training, can dynamically learn novel categories from only a few examples (typically one or five) without forgetting the categories on which it was trained. To implement the proposed recognition system we introduce two technical novelties: an attention-based few-shot classification weight generator, and a ConvNet-based recognition model whose classifier is implemented as a cosine similarity function between feature representations and classification vectors. We demonstrate that the proposed approach achieves state-of-the-art results on relevant few-shot benchmarks.
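To make the second technical novelty concrete, the PyTorch-style sketch below shows a generic cosine-similarity classifier head: class scores are scaled cosine similarities between L2-normalized feature vectors and per-class weight vectors. It is a minimal sketch under assumed dimensions and scaling, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Generic cosine-similarity classifier head (illustrative sketch)."""
    def __init__(self, feature_dim: int, num_classes: int, scale: float = 10.0):
        super().__init__()
        # One classification weight vector per class.
        self.weights = nn.Parameter(torch.randn(num_classes, feature_dim) * 0.01)
        self.scale = scale  # temperature-like factor applied to the cosine scores

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        features = F.normalize(features, dim=1)    # unit-norm feature vectors
        weights = F.normalize(self.weights, dim=1) # unit-norm class vectors
        return self.scale * features @ weights.t() # cosine-similarity logits

# Usage example with assumed sizes.
head = CosineClassifier(feature_dim=128, num_classes=5)
logits = head(torch.randn(4, 128))  # shape (4, 5)
```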
144

Stochastic functional descent for learning Support Vector Machines

He, Kun 22 January 2016 (has links)
We present a novel method for learning Support Vector Machines (SVMs) in the online setting. Our method is generally applicable in that it handles the online learning of the binary, multiclass, and structural SVMs in a unified view. The SVM learning problem consists of optimizing a convex objective function that is composed of two parts: the hinge loss and quadratic regularization. To date, the predominant family of approaches for online SVM learning has been gradient-based methods, such as Stochastic Gradient Descent (SGD). Unfortunately, we note that there are two drawbacks in such approaches: first, gradient-based methods are based on a local linear approximation to the function being optimized, but since the hinge loss is piecewise-linear and nonsmooth, this approximation can be ill-behaved. Second, existing online SVM learning approaches share the same problem formulation with batch SVM learning methods, and they all need to tune a fixed global regularization parameter by cross validation. On the one hand, global regularization is ineffective in handling local irregularities encountered in the online setting; on the other hand, even though the learning problem for a particular global regularization parameter value may be efficiently solved, repeatedly solving for a wide range of values can be costly. We intend to tackle these two problems with our approach. To address the first problem, we propose to perform implicit online update steps to optimize the hinge loss, as opposed to explicit (or gradient-based) updates that utilize subgradients to perform local linearization. Regarding the second problem, we propose to enforce local regularization that is applied to individual classifier update steps, rather than having a fixed global regularization term. Our theoretical analysis suggests that our classifier update steps progressively optimize the structured hinge loss, with the rate controlled by a sequence of regularization parameters; setting these parameters is analogous to setting the stepsizes in gradient-based methods. In addition, we give sufficient conditions for the algorithm's convergence. Experimentally, our online algorithm can match optimal classification performances given by other state-of-the-art online SVM learning methods, as well as batch learning methods, after only one or two passes over the training data. More importantly, our algorithm can attain these results without doing cross validation, while all other methods must perform time-consuming cross validation to determine the optimal choice of the global regularization parameter.
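For context, the NumPy sketch below illustrates the gradient-based baseline the abstract contrasts against: Pegasos-style stochastic subgradient descent on the regularized hinge loss for a binary linear SVM. It is a generic illustration of that baseline, not the implicit, locally regularized method proposed in the thesis.

```python
import numpy as np

def sgd_linear_svm(X, y, lam=0.01, epochs=2, seed=0):
    """Stochastic subgradient descent for a binary linear SVM:
    minimize lam/2 * ||w||^2 + mean_i max(0, 1 - y_i <w, x_i>),
    with labels y in {-1, +1}. Illustrative baseline only."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)        # Pegasos-style decreasing step size
            margin = y[i] * X[i].dot(w)
            grad = lam * w               # subgradient of the regularizer
            if margin < 1:               # hinge loss is active at this point
                grad -= y[i] * X[i]
            w -= eta * grad
    return w
```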
145

Meta-Techniques for Faculty Development: A Continuous Improvement Model for Building Capacity to Facilitate in a Large Interprofessional Program

Williams, S. A., Johnson, Amy D., Cross, L. B. 01 January 2021 (has links)
Literature regarding faculty development in uniprofessional healthcare programs is prolific; however, little has been written about instructional programs designed for faculty delivering interprofessional education (IPE). In this paper, we describe the genesis, content, and improvement of a faculty development workshop which exemplifies a meta teaching model and was designed to serve faculty facilitators in a rapidly growing IPE program. Evaluations following initial delivery of the workshops in fall 2018 returned high faculty satisfaction ratings and feedback suggesting a need for even more pedagogical training with a stronger emphasis on meta techniques and less on a review of student content. In response, program developers incorporated additional teaching techniques in the spring 2019 training. Faculty evaluations in spring 2019 reflected even greater satisfaction with the increased focus on “meta skills”. The faculty development program described in this paper supports the need for a structured training process for faculty facilitating in IPE programs.
146

DESIGNING A NEOTERIC ARCHITECTURE & COMMUNICATION PROTOCOLS FOR CHINESE REMAINDER THEOREM BASED STRUCTURED PEER-TO-PEER NETWORKS WITH COMMON INTERESTS

Maddali Vigneswara, Iswarya 01 December 2021 (has links)
The core motive of this research is to construct a new hierarchical, non-DHT-based architecture for Peer-to-Peer (P2P) networks that facilitates clustering by common interests. DHT-based networks carry high maintenance overhead, and churn management in them is a complex task; the goals of efficient data-querying performance and minimal churn-management effort motivated us to pursue a non-DHT approach to P2P networking. At each level of the proposed hierarchy, the constituent networks are structured and each has a diameter of one overlay hop. Such low diameters are of immense importance in designing very efficient data lookup algorithms. We use a mathematical model based on the Chinese Remainder Theorem (CRT), generally used in cryptography, to define the neighborhood relations among peers that yield the above-mentioned diameters. To the best of our knowledge, the use of CRT in P2P network design is a completely new idea; it does not exist in the literature so far. The most important advantage of the architecture from the viewpoint of communication speed is its overall diameter of only three overlay hops. The protocol is not restricted to a single data source, and it accommodates peer heterogeneity as well.
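For readers unfamiliar with the underlying tool, the short Python sketch below illustrates the Chinese Remainder Theorem itself: recovering the unique solution of a system of congruences with pairwise coprime moduli. It is a generic illustration of CRT, not the thesis's overlay construction or its neighborhood relation.

```python
from math import prod

def crt(residues, moduli):
    """Return the unique x (mod prod(moduli)) with x ≡ r_i (mod m_i)
    for pairwise coprime moduli m_i. Illustrative only."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m): modular inverse of Mi mod m
    return x % M

# Example: x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7)  ->  x = 23
print(crt([2, 3, 2], [3, 5, 7]))
```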
147

Practitioner Resistance to Structured Interviews: A Comparison of Two Models

Nesnidol, Samantha A. 07 August 2019 (has links)
No description available.
148

The Psychometric Properties of the Diagnostic Interview Schedule for Children: Disruptive Behaviors in Preschool-Age Children

Rolon Arroyo, Benjamin 01 January 2012 (has links) (PDF)
The present study examined the psychometric properties of the Diagnostic Interview Schedule for Children (DISC-IV), specifically the disruptive behavior module for preschool-age children. The participants were 128 children (M = 4.43 years, SD = .54; 63 girls) of African American (n = 37), European American (n = 41), Latino American (n = 38), and Mixed Ethnic (n = 12) background from Western Massachusetts. The overall internal consistency, concurrent validity, and predictive validity of the ADHD and ODD subsections were examined, with gender and ethnicity examined as potential moderators of these properties. The DISC-IV and a behavior rating scale for teachers were administered at the beginning of the school year, and the rating scale was administered again at the end of the school year. The DISC-IV ADHD and ODD subsections exhibited acceptable overall internal consistency. Concurrent validity was found for the ADHD subsection but not for the ODD subsection. Most importantly, both DISC-IV subsections exhibited overall predictive validity above and beyond initial teacher ratings. Partially supporting our hypotheses, ethnicity moderated the concurrent validity of the DISC-IV ADHD subsection, with the DISC-IV scores of African American children showing a stronger association with teachers' ratings; boys also showed a stronger association than girls, although this difference did not reach significance. Also approaching significance, the DISC-IV ADHD subsection appeared to predict year-end teacher ratings better for African American children than for European American and Latino American children. Overall, the DISC-IV was found to be a psychometrically reliable and valid diagnostic instrument for preschool-age children.
149

Advanced Structured Query Language Instruction for Engineers of the Office of Information Technology at Brigham Young University

Rackliffe, Vincent Brian 01 December 2005 (has links) (PDF)
This report describes the purpose, design, development, and analysis of SQLTips, an online instructional delivery framework and set of instructional modules covering advanced features and performance tuning of Oracle's Structured Query Language (SQL). SQLTips was developed using Wiki, server-side software that allows users to edit web pages with almost any browser. The report includes a literature review of existing SQL instructional materials and a review of instructional theory, as well as a description of the formative evaluation process and results. These results show that SQLTips is easy and enjoyable to use: on a scale of 1 to 7, with 7 being the most positive, the 10 modules comprising SQLTips averaged 6.1 for ease of use and 6.2 for enjoyability. Posttest results also showed an average score increase of 46% upon completion of the instruction. The report also contains a critique of the project.
150

Status of Accountability in Online News Media: A Case Study of Nepal

Acharya, Bhanu Bhakta January 2014 (has links)
Scholars contend that media accountability to the public and professional stakeholders has been improving in recent years because of the increased use of digital platforms. Since most studies of online news media accountability have focused on developed countries, this study examines the state of accountability in online news media in Nepal, where access to online media is very limited and audiences are barely aware of the media's journalistic responsibilities. Employing a case study research method with three data sources, the study assesses the state of online media accountability in Nepal, the key challenges to ensuring accountability in journalism produced on digital platforms, and the role of audiences in holding online news media accountable. The study finds that Internet accessibility, media literacy, and the availability of resources are the primary challenges to making media accountable in Nepal. It concludes by offering recommendations for future research and practical applications.
